aiconfig - npm Package Versions

1.1.15

(2024-01-23) Python Version 1.1.15, NPM Version 1.1.7

Last PR included in this release: https://github.com/lastmile-ai/aiconfig/pull/995

Features

  • sdk: Updated input attachments with AttachmentDataWithStringValue type to distinguish the data representation ‘kind’ (file_uri or base64) (#929). Please note that this can break existing SDK calls for model parsers that use non-text inputs
  • editor: Added telemetry data to log editor usage. Users can opt-out of telemetry by setting allow_usage_data_sharing: False in the .aiconfigrc runtime configuration file (#869, #899, #946)
  • editor: Added CLI rage command so users can submit bug reports (#870)
  • editor: Changed streaming format to be output chunks for the running prompt instead of entire AIConfig (#896)
  • editor: Disabled run button on other prompts if a prompt is currently running (#907)
  • editor: Made callback handler props optional and no-op if not included (#941)
  • editor: Added mode prop to customize UI themes on client, as well as match user dark/light mode system preferences (#950, #966)
  • editor: Added read-only mode where editing of AIConfig is disabled (#916, #935, #936, #939, #967, #961, #962)
  • eval: Generalized params to take in arbitrary dict instead of list of arguments (#951)
  • eval: Created @metric decorator to make defining metrics and adding tests easier by only needing to define the evaluation metric implementation inside the inner function (#988)
  • python-sdk: Refactored delete_output to set outputs attribute of Prompt to None rather than an empty list (#811)
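
The @metric decorator described above can be pictured roughly as follows. This is a hypothetical sketch of the decorator idea only (the brevity metric, field names, and wrapper details are assumptions, not the eval module's actual code):

```python
from typing import Any, Callable

def metric(make_evaluator: Callable[..., Callable[[Any], float]]) -> Callable[..., Callable[[Any], float]]:
    # Sketch of the @metric pattern: the decorated outer function only
    # configures the metric; it returns the inner evaluation function.
    def configured(*args: Any, **kwargs: Any) -> Callable[[Any], float]:
        return make_evaluator(*args, **kwargs)
    return configured

@metric
def brevity(max_len: int = 100) -> Callable[[str], float]:
    # Only the evaluation logic needs to be defined inside the inner function.
    def evaluate(output: str) -> float:
        return 1.0 if len(output) <= max_len else max_len / len(output)
    return evaluate

short_score = brevity(max_len=10)("hello")   # within the limit
long_score = brevity(max_len=10)("a" * 20)   # over the limit, scaled down
```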

Bug Fixes / Tasks

  • editor: Refactored run prompt server implementation to use stop_streaming, output_chunk, aiconfig_chunk, and aiconfig so server can more explicitly pass data to client (#914, #911)
  • editor: Split RUN_PROMPT event on client into RUN_PROMPT_START, RUN_PROMPT_CANCEL, RUN_PROMPT_SUCCESS, and RUN_PROMPT_ERROR (#925, #922, #924)
  • editor: Rearranged default model ordering to be more user-friendly (#994)
  • editor: Centered the Add Prompt button and fixed styling (#912, #953)
  • editor: Fixed an issue where changing the model for a prompt resulted in the model settings being cleared; now they will persist (#964)
  • editor: Cleared outputs when first clicking the run button in order to make it clearer that new outputs are created (#969)
  • editor: Fixed bug to display array objects in model input settings properly (#902)
  • python-sdk: Fixed issue where we were referencing PIL.Image as a type instead of a module in the HuggingFace image_2_text.py model parser (#970)
  • editor: Connected HuggingFace model parser task names to schema input renderers (#900)
  • editor: Fixed the float model settings schema renderer to use the number type (#989)

Documentation

  • [new] Added docs page for AIConfig Editor (#876, #947)
  • [updated] Renamed “variables” to “parameters” to make it less confusing (#968)
  • [updated] Updated Getting Started page with quickstart section, and more detailed instructions for adding API keys (#956, #895)
rossdan published 1.1.14

(2024-03-11) Python Version 1.1.31, NPM Version 1.1.14

Last PR included in this release: https://github.com/lastmile-ai/aiconfig/pull/1426

Features

  • python-sdk: Added OpenAIVisionParser to core model parsers, allowing integrations with OpenAI chat/vision models and adding gpt-4-vision-preview as a core model parser (https://github.com/lastmile-ai/aiconfig/pull/1416, https://github.com/lastmile-ai/aiconfig/pull/1417)
  • editor: Added model schema and prompt input formatting for GPT-4 vision (https://github.com/lastmile-ai/aiconfig/pull/1397)
  • extension: Created extension for Groq inference (https://github.com/lastmile-ai/aiconfig/pull/1402)

Bug Fixes / Tasks

  • python-sdk: Unpinned openai dependency and updated to 1.13.3 (https://github.com/lastmile-ai/aiconfig/pull/1415)
  • vscode: Removed the check requiring the .env file path to be a parent of the user’s VS Code workspace, allowing users to specify a .env file defined anywhere (https://github.com/lastmile-ai/aiconfig/pull/1398)

Documentation

  • [new] Created README and cookbook to show how to use the Groq inference extension (https://github.com/lastmile-ai/aiconfig/pull/1405, https://github.com/lastmile-ai/aiconfig/pull/1402)
  • [update] Removed warning text from Gradio Notebooks docs saying that Gradio SDK needs to be <= v4.16.0 because that issue is now resolved and we can now use the latest Gradio SDK versions (https://github.com/lastmile-ai/aiconfig/pull/1421)
rossdan published 1.1.13

(2024-03-05) Python Version 1.1.29, NPM Version 1.1.13

Last PR included in this release: https://github.com/lastmile-ai/aiconfig/pull/1401

Features

  • vscode: Enabled find widget (CMD/CTRL + F) in AIConfig editor webviews (https://github.com/lastmile-ai/aiconfig/pull/1369)
  • editor: Added input model schema for Hugging Face Visual Question Answering tasks (https://github.com/lastmile-ai/aiconfig/pull/1396)
  • editor: Environment variables can now be set in a .env file that is saved into the VS Code configuration settings and refreshed during the current session (https://github.com/lastmile-ai/aiconfig/pull/1390)
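
A minimal sketch of the .env idea referenced above — parsing KEY=VALUE lines and exporting them into the current process environment. This is a hypothetical helper, not the extension's actual code:

```python
import os

def load_env_file(path: str) -> dict:
    """Parse KEY=VALUE lines from a .env file and export them into
    the current process environment (hypothetical helper)."""
    loaded = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments.
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip().strip('"')
    os.environ.update(loaded)
    return loaded
```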

Bug Fixes / Tasks

  • vscode: Fixed issue where autosaving was causing outputs to disappear and prompt inputs to lose focus when typing (https://github.com/lastmile-ai/aiconfig/pull/1380)
  • vscode: Updated new/untitled AIConfig file flow to follow regular new/untitled file flow in VS Code, prompting for file name on first save (https://github.com/lastmile-ai/aiconfig/pull/1351)
  • vscode: Used untitled name instead of first line contents for untitled file tab name (https://github.com/lastmile-ai/aiconfig/pull/1354)
  • vscode: Stopped surfacing the ‘Error updating aiconfig server’ message when closing untitled AIConfig files (https://github.com/lastmile-ai/aiconfig/pull/1352)
  • editor: Fixed an issue where readonly rendering of prompt settings was causing the page rendering to fail (https://github.com/lastmile-ai/aiconfig/pull/1358)
  • editor: Fixed default cell styles when no mode or themeOverride is specified (https://github.com/lastmile-ai/aiconfig/pull/1388)
  • vscode: Restarted the extension server to re-read environment variables after they’ve been updated (https://github.com/lastmile-ai/aiconfig/pull/1376)
rossdan published 1.1.12

(2024-01-11) Python Version 1.1.12, NPM Version 1.1.5

We built AIConfig Editor, which is like VS Code + Jupyter notebooks for AIConfig files! You can edit the config prompts, parameters, and settings, and, most importantly, run them to generate outputs. Source-control your AIConfig files by clearing outputs and saving. It’s the most convenient way to work with generative AI models through a local user interface. See the README to learn how to use it!

Editor Capabilities (see linked PRs for screenshots and videos)

  • Add and delete prompts (#682, #665)
  • Select prompt model and model settings with easy-to-read descriptions (#707, #760)
  • Modify local and global parameters (#673)
  • Run prompts with streaming or non-streaming outputs (#806)
  • Cancel inference runs mid-execution (#789)
  • Modify name and description of AIConfig (#682)
  • Render input and outputs as text, image, or audio format (#744, #834)
  • View prompt input, output, and model settings in either the regular UI or raw JSON format (#686, #656, #757)
  • Copy and clear prompt output results (#656, #791)
  • Autosave every 15s, or press CTRL/CMD + S or click the Save button to save manually (#734, #735)
  • Edit an existing AIConfig file, or create a new one if none is specified (#697)
  • Run multiple editor instances simultaneously (#624)
  • Error handling for malformed input and settings data, unexpected outputs, and a heartbeat status when the server has disconnected (#799, #803, #762)
  • Specify explicit model names to use for generic HuggingFace model parser tasks (#850)

Features

  • sdk: Schematized prompt OutputData format to be of type string, OutputDataWithStringValue, or OutputDataWithToolCallsValue (#636). Please note that this can break existing SDK calls
  • extensions: Created 5 new HuggingFace local transformers: text-to-speech, image-to-text, automatic speech recognition, text summarization, and text translation (#793, #821, #780, #740, #753)
  • sdk: Created Anyscale model parser and cookbook to demonstrate how to use it (#730, #746)
  • python-sdk: Explicitly set model in completion params for several model parsers (#783)
  • extensions: Refactored HuggingFace model parsers to use default model for pipeline transformer if model is not provided (#863, #879)
  • python-sdk: Made get_api_key_from_environment non-required and able to return a nullable value, wrapping the result in Result-Ok (#772, #787)
  • python-sdk: Created get_parameters method (#668)
  • python-sdk: Added exception handling for add_output method (#687)
  • sdk: Changed run output type to be list[Output] instead of Output (#617, #618)
  • extensions: Refactored HuggingFace text-to-image model parser response data into a single object (#805)
  • extensions: Renamed python-aiconfig-llama to aiconfig-extension-llama (#607)
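
The schematized OutputData union in the first feature above can be pictured roughly like this. Only the type names come from the release notes; the field names and the ToolCallData helper are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class OutputDataWithStringValue:
    kind: str    # e.g. "file_uri" or "base64" (assumed values)
    value: str

@dataclass
class ToolCallData:  # hypothetical helper type
    name: str
    arguments: str

@dataclass
class OutputDataWithToolCallsValue:
    kind: str
    value: List[ToolCallData]

# A prompt output's data is one of these three shapes.
OutputData = Union[str, OutputDataWithStringValue, OutputDataWithToolCallsValue]

sample: OutputData = OutputDataWithStringValue(kind="base64", value="aGVsbG8=")
```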

Bug Fixes / Tasks

  • python-sdk: Fixed get_prompt_template() issue for non-text prompt inputs (#866)
  • python-sdk: Fixed core HuggingFace library issue where response type was not a string (#769)
  • python-sdk: Fixed bug by adding kwargs to ParameterizedModelParser (#882)
  • python-sdk: Added automated tests for add_output() method (#687)
  • python-sdk: Updated set_parameters() to work if parameters haven’t been defined already (#670)
  • python-sdk: Removed callback_manager argument from run method (#886)
  • extensions: Removed extra python dir from aiconfig-extension-llama-guard (#653)
  • python-sdk: Removed unused model-ids from OpenAI model parser (#729)

Documentation

  • [new] AIConfig Editor README: https://github.com/lastmile-ai/aiconfig/tree/main/python/src/aiconfig/editor#readme
  • [new] Anyscale cookbook: https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Anyscale
  • [new] Gradio cookbook for HuggingFace extension model parsers: https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Gradio
  • [updated] AIConfig README: https://github.com/lastmile-ai/aiconfig/blob/main/README.md
rossdan published 1.1.11

rossdan published 1.1.10

(2024-02-20) Python Version 1.1.26, NPM Version 1.1.10

Last PR included in this release: https://github.com/lastmile-ai/aiconfig/pull/1264

Features

  • editor: Added support for modifying general prompt metadata, such as remember_chat_context, below model settings (#1205)
  • editor: Added logging events for share and download button clicks as well as any actions that edit the config (#1217, #1220)
  • extensions: Created a conversational model parser in the Hugging Face remote inference extension and added an input model schema to the editor client (#1229, #1230)

Bug Fixes / Tasks

  • editor: Updated the ‘model’ value in model settings to clear when the model for a prompt is updated (relevant for general model groups, such as Hugging Face Tasks, which require the model field to specify a specific model name) (#1245, #1257)
  • extensions: Set default model names for the Hugging Face remote inference model parsers for Summarization, Translation, Automatic Speech Recognition and Text-to-Speech tasks (#1246, #1221)
  • gradio-notebook: Fixed styles for checkboxes, markdown links, loading spinners and output lists, as well as general cleanup to buttons and input sizing (#1248, #1249, #1250, #1251, #1252, #1231)
  • python-sdk: Fixed dependency issue to no longer pin pydantic to 2.4.2 so that aiconfig-editor can be compatible with other libraries (#1225)

Documentation

  • [updated] Added new content to the Gradio Notebooks documentation, including a 5-minute tutorial video, local model support, a more streamlined content format, and warnings for discovered issues with the Gradio SDK version (#1247, #1234, #1243, #1238)
rossdan published 1.1.8

(2023-12-26) Python Version 1.1.8, NPM Version 1.1.2

Features

  • Added support for the YAML file format, in addition to JSON, for improved readability of AIConfigs (#583)
  • python-sdk: Added optional param in add_prompt() method to specify index where to add prompt (#599)
  • eval: Added generalized metric builder for creating your own metric evaluation class (#513)
  • python-sdk: Supported using default model if no prompt model is provided (#600)
  • python-sdk: Refactored update_model() method to take in model name and settings as separate arguments (#507)
  • python-sdk: Supported additional types in the Gemini model parser; now includes a list of strings, a Content string, and a Content struct (#532)
  • extensions: Added callback handlers to HuggingFace extensions (#597)
  • python-sdk: Pinned google-generativeai to version 0.3.1 on Gemini model parser (#534)
  • Added explicit output types to the ExecuteResult.data schema; freeform data is also still supported (#589)
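
The optional index parameter for add_prompt() described above behaves like list insertion. A minimal sketch of that semantics, using hypothetical stand-in classes rather than the actual SDK:

```python
from typing import List, Optional

class Prompt:
    """Hypothetical stand-in for an AIConfig prompt."""
    def __init__(self, name: str):
        self.name = name

class MiniConfig:
    """Hypothetical stand-in for the SDK's prompt container."""
    def __init__(self):
        self.prompts: List[Prompt] = []

    def add_prompt(self, prompt: Prompt, index: Optional[int] = None) -> None:
        # With no index, append to the end; otherwise insert at the
        # given position, shifting later prompts down.
        if index is None:
            self.prompts.append(prompt)
        else:
            self.prompts.insert(index, prompt)

cfg = MiniConfig()
cfg.add_prompt(Prompt("first"))
cfg.add_prompt(Prompt("third"))
cfg.add_prompt(Prompt("second"), index=1)  # lands between the other two
```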

Bug Fixes / Tasks

  • Checked for null in system prompt (#541)
  • Converted protobuf to dict to fix pydantic BaseModel errors on Gemini (#558)
  • Fixed issue where we were overwriting a single prompt output instead of creating a new one in batch execution (#566)
  • Unpinned the requests==2.30.0 dependency and switched from http to https in the load_from_workbook() method (#582)
  • typescript-sdk: Created automated test for typescript save() API (#198)

Documentation

  • OpenAI Prompt Engineering Guide: https://openai-prompt-guide.streamlit.app/
  • Chain-of-Verification Demo: https://chain-of-verification.streamlit.app/
rossdan published 1.1.9

(2024-02-12) Python Version 1.1.22, NPM Version 1.1.9

Features

  • vscode: Now utilizes the user's Python interpreter in the VS Code environment when installing dependencies for the AIConfig Editor extension. PR #1151
  • vscode: Added a command for opening an AIConfig file directly. PR #1164
  • vscode: Added a VS Code command for displaying a Welcome Page on how to use the extension effectively. PR #1194

Bug Fixes / Tasks

  • Python SDK:
    • AIConfig Format Support: Added support for chats starting with an assistant (AI) message by making the initial prompt input empty. PR #1158
    • Dependency Management: Set the google-generativeai module version to >=0.3.1 in requirements.txt files. PR #1171
    • Python Version Requirement: Defined all pyproject.toml files to require Python version >= 3.10. PR #1146
  • VS Code:
    • Extension Dependencies: Removed the Hugging Face extension from VS Code extension dependencies. PR #1167
    • Editor Component Theming: Fixed color scheming in the AIConfig editor component to match VS Code settings. PR #1168, PR #1176
    • Share Command Fix: Fixed an issue where the Share command was not working for unsigned AWS S3 credentials. PR #1213
    • Notification Issue: Fixed an issue where a notification, “Failed to start aiconfig server,” would show when closing a config with unsaved changes. PR #1201

Documentation

  • Tutorials and Guides:
    • Created a getting-started tutorial for Gradio Notebooks (see documentation).
    • Created a cookbook for RAG with model-graded evaluation. PR #1169, PR #1200
rossdan published 1.1.7

(2023-12-18) Python Version 1.1.7, NPM Version 1.1.1

Features

  • python-sdk: Created model parser extension for Google’s Gemini (#478, cookbook)
  • Added attachment field to the PromptInput schema to support non-text input data (e.g., image, audio) (#473)
  • python-sdk: Created batch execution interface using config.run_batch() (#469)
  • Added model parser for HuggingFace text2Image tasks (#460)
  • Updated evaluation metric values to be any arbitrary type, not just floats, & renamed fields for easier understanding (#484, #437)
  • Merged aiconfig-extension-hugging-face-transformers into aiconfig-extension-hugging-face where all Hugging Face tasks will now be supported (#498, README)
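
The config.run_batch() interface named above runs the same prompt once per parameter set. A minimal sketch of that batch-execution idea using asyncio — the run_prompt stand-in is a dummy, not the SDK's implementation:

```python
import asyncio
from typing import Any, Dict, List

async def run_prompt(params: Dict[str, Any]) -> str:
    # Stand-in for a single model call; a real model parser
    # would call an inference API here.
    await asyncio.sleep(0)
    return f"greeting for {params['name']}"

async def run_batch(param_sets: List[Dict[str, Any]]) -> List[str]:
    # Run the same prompt once per parameter set, concurrently,
    # preserving the input order in the results.
    return await asyncio.gather(*(run_prompt(p) for p in param_sets))

results = asyncio.run(run_batch([{"name": "Ada"}, {"name": "Grace"}]))
```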

Bug Fixes

  • Fixed caching issue where re-running the same prompt caused nondeterministic behavior (#491)
  • typescript-sdk: Pinned OpenAI dependency to 4.11.1 to have a stable API surface (#524)
  • typescript-sdk: Removed redundant PromptWithOutputs type (#508)

Documentation

  • Refactored and shortened README (#493)
  • Created table of supported models (#501)
  • Updated cookbooks with explicit instructions on how to set API keys (#441)
rossdan published 1.1.6
