# Ship Ambitious Gen AI Apps with Portkey's full-stack LLMOps Platform

```sh
npm install portkey-ai
```
## 💡 Features

**🚪 AI Gateway:**
- **Unified API Signature**: If you've used OpenAI, you already know how to use Portkey with any other provider.
- **Interoperability**: Write once, run with any provider. Switch between models from any provider seamlessly.
- **Automated Fallbacks & Retries**: Keep your application functional even when a primary service fails.
- **Load Balancing & A/B Testing**: Efficiently distribute incoming requests across multiple models and run A/B tests at scale.
- **Semantic Caching**: Reduce costs and latency by intelligently caching results.
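To sketch how these gateway features compose, here is a hypothetical config object. The field names below (`strategy`, `targets`, `retry`, `cache`) are modeled on Portkey's Gateway Config schema but should be treated as illustrative assumptions, not a verified reference:

```javascript
// Hypothetical gateway config sketch — field names are assumptions
// modeled on Portkey's Config schema, not a verified reference.
const gatewayConfig = {
  // Try the first target; if it fails, fall back to the second.
  strategy: { mode: "fallback" },
  targets: [
    { virtual_key: "openai-virtual-key", weight: 1 },
    { virtual_key: "anthropic-virtual-key", weight: 1 },
  ],
  // Retry each target up to 3 times before moving on.
  retry: { attempts: 3 },
  // Serve semantically similar prompts from cache for 60 seconds.
  cache: { mode: "semantic", max_age: 60 },
};

console.log(gatewayConfig.strategy.mode); // prints: fallback
```

A config like this would typically be passed once at client construction, so fallbacks, retries, and caching apply to every request without changing call sites.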
**🔬 Observability:**
- **Logging**: Keep track of all requests for monitoring and debugging.
- **Request Tracing**: Understand the journey of each request for optimization.
- **Custom Tags**: Segment and categorize requests for better insights.
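As a sketch of how tracing and tagging attach to a request, here is a hypothetical per-request options object. The key names (`traceID`, `metadata`) and the `_user` tag are assumptions based on the feature list above, not a verified API:

```javascript
// Illustrative sketch of per-request observability fields — the key
// names here are assumptions, not a verified Portkey API surface.
const requestOptions = {
  traceID: "checkout-flow-1234", // ties related requests into one trace
  metadata: {                    // custom tags for segmenting requests
    _user: "user-42",
    environment: "staging",
  },
};

console.log(Object.keys(requestOptions.metadata).length); // prints: 2
```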
## 🚀 Quick Start

First, install the SDK and export your Portkey API key. Get your Portkey API key here.

```sh
$ npm install portkey-ai
$ export PORTKEY_API_KEY="PORTKEY_API_KEY"
```
Now, let's make a chat completion request:

```js
import Portkey from 'portkey-ai';

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  virtualKey: "VIRTUAL_KEY"
});

async function main() {
  const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
  });
  console.log(chatCompletion.choices);
}

main();
```
Portkey fully adheres to the OpenAI SDK signature. This means that you can instantly switch to Portkey and start using Portkey's advanced production features right out of the box.
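Because the signatures match, switching usually means changing only the client construction; the request body stays identical. A sketch (the constructor lines are shown as comments since they require the respective SDKs):

```javascript
// Sketch of the drop-in switch. Only the client constructor changes:
//
//   const client = new OpenAI({ apiKey: "OPENAI_API_KEY" });    // before
//   const client = new Portkey({ apiKey: "PORTKEY_API_KEY",     // after
//                                virtualKey: "VIRTUAL_KEY" });
//
// The request body passed to chat.completions.create() is the same
// object in both cases:
const sharedRequest = {
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'gpt-3.5-turbo',
};

console.log(sharedRequest.model); // prints: gpt-3.5-turbo
```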
## 📔 List of Portkey Features

You can set all of these features while constructing your LLMOptions object.

| Feature | Config Key | Value (Type) | Required |
|---|---|---|---|
| API Key OR Virtual Key | `api_key` OR `virtual_key` | string | ✅ Required |
| Provider Name | `provider` | `openai`, `cohere`, `anthropic`, `azure-openai` | ✅ Required |
| Model Name | `model` | The relevant model name from the provider, e.g. `gpt-3.5-turbo` OR `claude-2` | ❔ Optional |
| Weight (for load balancing) | `weight` | integer | ❔ Optional |
| Cache Type | `cache_status` | `simple`, `semantic` | ❔ Optional |
| Force Cache Refresh | `cache_force_refresh` | boolean (`True`, `False`) | ❔ Optional |
| Cache Age | `cache_age` | integer (in seconds) | ❔ Optional |
| Trace ID | `trace_id` | string | ❔ Optional |
| Retries | `retry` | integer [0,5] | ❔ Optional |
| Metadata | `metadata` | JSON object. More info | ❔ Optional |
| All Model Params | As per the model/provider | Params like `top_p`, `temperature`, etc. | ❔ Optional |
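Putting several of these keys together, a hypothetical LLMOptions object might look like the sketch below. The individual field names come from the table above, but treat the combined shape as an illustrative assumption:

```javascript
// Sketch of an LLMOptions-style object assembled from the config keys
// in the table above; the particular combination is illustrative.
const llmOptions = {
  virtual_key: "VIRTUAL_KEY",
  provider: "openai",
  model: "gpt-3.5-turbo",
  weight: 1,                    // used when load balancing across options
  cache_status: "semantic",
  cache_age: 120,               // seconds
  trace_id: "my-trace-id",
  retry: 2,                     // 0–5 attempts
  metadata: { environment: "production" },
  temperature: 0.7,             // model params pass through as-is
};

console.log(llmOptions.retry); // prints: 2
```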
## 🤝 Supported Providers

| Provider | Support Status | Supported Endpoints |
|---|---|---|
| OpenAI | ✅ Supported | `/completion`, `/chatcompletion` |
| Azure OpenAI | ✅ Supported | `/completion`, `/chatcompletion` |
| Anthropic | ✅ Supported | `/complete` |
| Cohere | ✅ Supported | `generate` |

## 🛠️ Contributing

Get started by checking out the GitHub issues. Feel free to open an issue, or reach out if you would like to add to the project!