portkey-ai

Node client library for the Portkey API

Build reliable, secure, and production-ready AI apps easily.

💡 Features

🚪 AI Gateway:

  • Unified API Signature: If you've used OpenAI, you already know how to use Portkey with any other provider.
  • Interoperability: Write once, run with any provider. Switch between any model from any provider seamlessly.
  • Automated Fallbacks & Retries: Ensure your application remains functional even if a primary service fails.
  • Load Balancing: Efficiently distribute incoming requests among multiple models.
  • Semantic Caching: Reduce costs and latency by intelligently caching results.

🔬 Observability:

  • Logging: Keep track of all requests for monitoring and debugging.
  • Request Tracing: Understand the journey of each request for optimization.
  • Custom Tags: Segment and categorize requests for better insights.

🚀 Quick Start

4 Steps to Integrate the SDK

  1. Get your Portkey API key and your virtual key for AI providers.
  2. Construct your LLM, add Portkey features, provider features, and prompt.
  3. Construct the Portkey client and set your usage mode.
  4. Now call Portkey regularly like you would call your OpenAI constructor.

Let's dive in! If you are an advanced user and want to directly jump to various full-fledged examples, click here.


Step 1: Get your Portkey API Key and your Virtual Keys for AI providers

Portkey API Key: Log in to Portkey here, then click the profile icon at the top left and select “Copy API Key”.

export PORTKEY_API_KEY="PORTKEY_API_KEY"

Virtual Keys: Navigate to the "Virtual Keys" page on Portkey and hit the "Add Key" button. Choose your AI provider and assign a unique name to your key. Your virtual key is ready!

Step 2: Construct your LLM, add Portkey features, provider features, and prompt

Portkey Features: You can find a comprehensive list of Portkey features here. This includes settings for caching, retries, metadata, and more.

Provider Features: Portkey is designed to be flexible. All the features you're familiar with from your LLM provider, like top_p, top_k, and temperature, can be used seamlessly. Check out the complete list of provider features here.

Setting the Prompt Input: This param lets you override any prompt that is passed during the completion call; set a model-specific prompt here to optimise model performance. You can set the input in two ways: for completion models like Claude and GPT3, use prompt (string), and for chat models like GPT3.5 & GPT4, use messages (array).
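
For illustration, here is a minimal sketch of both input styles (the model names and prompt text below are placeholders; prompt and messages are the LLMOptions fields described above):

// Completion-style models (e.g. Claude, GPT3): prompt as a string.
// Model names and prompt text are illustrative placeholders.
const claudeOptions = {
    provider: "anthropic",
    model: "claude-2",
    prompt: "Human: Summarize the following text.\n\nAssistant:"
};

// Chat-style models (e.g. GPT3.5, GPT4): messages as an array.
const gptOptions = {
    provider: "openai",
    model: "gpt-4",
    messages: [{ role: "user", content: "Summarize the following text." }]
};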

Here's how you can combine everything:

import { Portkey } from "portkey-ai";

// Portkey config: single mode with one OpenAI LLM
const portkey = new Portkey({
    mode: "single",
    llms: [{
        provider: "openai",
        virtual_key: "<>",
        model: "gpt-3.5-turbo",
        max_tokens: 2000,
        temperature: 0,
        // ...more params can be added here
    }]
});

Step 3: Construct the Portkey Client

The Portkey client's config takes 3 params: api_key, mode, and llms.

  • api_key: You can set your Portkey API key here or via the environment variable as shown above.
  • mode: There are 3 modes - Single, Fallback, Loadbalance (a Fallback config sketch follows this list).
    • Single - This is the standard mode. Use it if you do not want Fallback OR Loadbalance features.
    • Fallback - Set this mode if you want to enable the Fallback feature.
    • Loadbalance - Set this mode if you want to enable the Loadbalance feature.
  • llms: This is an array where we pass our LLMs constructed using the LLMOptions interface.
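
As a sketch, a Fallback-mode config might look like this (the virtual keys and model names are illustrative; Loadbalance is configured the same way with mode set to "loadbalance"):

import { Portkey } from "portkey-ai";

// Sketch: Fallback mode - if the first LLM fails, the next one is tried.
// api_key can be omitted if PORTKEY_API_KEY is set in the environment.
// Virtual keys and model names are illustrative placeholders.
const portkey = new Portkey({
    api_key: "PORTKEY_API_KEY",
    mode: "fallback",
    llms: [
        { provider: "openai", virtual_key: "openai-key-xxx", model: "gpt-3.5-turbo" },
        { provider: "anthropic", virtual_key: "anthropic-key-xxx", model: "claude-2", max_tokens: 2000 }
    ]
});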

Step 4: Let's Call the Portkey Client!

The Portkey client can do ChatCompletions and Completions.

Since our LLM here is a chat model (gpt-3.5-turbo), we will use ChatCompletions:

async function main() {
    const response = await portkey.chatCompletions.create({
        messages: [{
            "role": "user",
            "content": "Who are you ?"
        }]
    })
    console.log(response.choices[0].message)
}

main().catch((err) => {
    console.error(err);
    process.exit(1);
});
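
For completion models, the analogous Completions call is sketched below; this assumes a completions.create method mirroring chatCompletions.create, with the client configured for a prompt-style LLM, and the response field names are illustrative:

// Sketch: plain Completions call, assuming a completions.create method
// symmetric to chatCompletions.create above. Field names are illustrative.
async function completionMain() {
    const response = await portkey.completions.create({
        prompt: "Say this is a test"
    });
    console.log(response.choices[0].text);
}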

You have integrated Portkey's Node SDK in just 4 steps!


📔 Full List of Portkey Config

| Feature | Config Key | Value (Type) | Required |
|---|---|---|---|
| Provider Name | provider | string | ✅ Required |
| Model Name | model | string | ✅ Required |
| Virtual Key OR API Key | virtual_key or api_key | string | ✅ Required (can be set externally) |
| Cache Type | cache_status | simple, semantic | ❔ Optional |
| Force Cache Refresh | cache_force_refresh | True, False (Boolean) | ❔ Optional |
| Cache Age | cache_age | integer (in seconds) | ❔ Optional |
| Trace ID | trace_id | string | ❔ Optional |
| Retries | retry | integer [0,5] | ❔ Optional |
| Metadata | metadata | json object (More info) | ❔ Optional |
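
For instance, a sketch of an LLM config combining several optional keys from the table (all values are illustrative placeholders):

// Sketch: LLMOptions using optional config keys from the table above.
// All values here are illustrative placeholders.
const llmOptions = {
    provider: "openai",
    virtual_key: "<>",
    model: "gpt-3.5-turbo",
    cache_status: "semantic",      // semantic caching
    cache_age: 3600,               // cache TTL in seconds
    retry: 3,                      // retries, allowed range [0,5]
    trace_id: "checkout-flow-1",   // tag requests for tracing
    metadata: { team: "search" }   // custom metadata for segmentation
};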

🤝 Supported Providers

| Provider | Support Status | Supported Endpoints |
|---|---|---|
| OpenAI | ✅ Supported | /completion, /embed |
| Azure OpenAI | ✅ Supported | /completion, /embed |
| Anthropic | ✅ Supported | /complete |
| Cohere | 🚧 Coming Soon | generate, embed |

📝 Full Documentation | 🛠️ Integration Requests | Follow on Twitter | Join the Discord
