@sap-ai-sdk/foundation-models
SAP Cloud SDK for AI is the official Software Development Kit (SDK) for **SAP AI Core**, **SAP Generative AI Hub**, and **Orchestration Service**.
This package incorporates generative AI foundation models into your AI activities in SAP AI Core and SAP AI Launchpad.
$ npm install @sap-ai-sdk/foundation-models
Use `DeploymentApi` from `@sap-ai-sdk/ai-api` to deploy a model. Alternatively, you can also create deployments using the SAP AI Launchpad. A deployment can be set up for each model and model version, as well as for each resource group.
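As a reference, a minimal deployment sketch with `DeploymentApi` could look like the following; the configuration ID and resource group values are placeholders, and the exact request shape should be verified against the `@sap-ai-sdk/ai-api` documentation:

```ts
import { DeploymentApi } from '@sap-ai-sdk/ai-api';

// Minimal sketch: create a deployment from an existing configuration.
// 'my-configuration-id' and 'default' are placeholder values.
const deploymentCreationResponse = await DeploymentApi.deploymentCreate(
  { configurationId: 'my-configuration-id' },
  { 'AI-Resource-Group': 'default' }
).execute();

console.log(deploymentCreationResponse.id); // ID of the new deployment
```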
Accessing the AI Core Service via the SDK
The SDK automatically retrieves the AI Core service credentials and resolves the access token needed for authentication.
- In Cloud Foundry, it is accessed from the `VCAP_SERVICES` environment variable.
- In Kubernetes / Kyma environments, you have to mount the service binding as a secret instead; for more information, refer to this documentation.
SAP AI Core manages access to generative AI models through the global AI scenario `foundation-models`. Creating a deployment for a model requires access to this scenario.
Each model, model version, and resource group allows for a one-time deployment. After deployment completion, the response includes a `deploymentUrl` and an `id`, which is the deployment ID. For more information, see here.
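For illustration, a deployment lookup that reads the `id` and `deploymentUrl` might look as follows; the resource group value is a placeholder, and the exact response shape should be checked against the `@sap-ai-sdk/ai-api` documentation:

```ts
import { DeploymentApi } from '@sap-ai-sdk/ai-api';

// Sketch: list running deployments of the 'foundation-models' scenario
// in the (placeholder) 'default' resource group.
const deployments = await DeploymentApi.deploymentQuery(
  { scenarioId: 'foundation-models', status: 'RUNNING' },
  { 'AI-Resource-Group': 'default' }
).execute();

for (const deployment of deployments.resources) {
  console.log(deployment.id, deployment.deploymentUrl);
}
```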
Resource groups represent a virtual collection of related resources within the scope of one SAP AI Core tenant.
Consequently, each deployment ID and resource group uniquely maps to a combination of model and model version within the `foundation-models` scenario.
You can pass the model name as a parameter to a client; the SDK will implicitly fetch the deployment ID for the model from the AI Core service and use it in the request.
By default, the SDK caches the deployment information, including the deployment ID, model name, and version, for 5 minutes to avoid performance issues from fetching this data with each request.
import {
AzureOpenAiChatClient,
AzureOpenAiEmbeddingClient
} from '@sap-ai-sdk/foundation-models';
// For a chat client
const chatClient = new AzureOpenAiChatClient({ modelName: 'gpt-4o' });
// For an embedding client
const embeddingClient = new AzureOpenAiEmbeddingClient({ modelName: 'text-embedding-ada-002' });
The deployment ID and resource group can be used as an alternative to the model name for obtaining a model.
const chatClient = new AzureOpenAiChatClient({
deploymentId: 'd1234',
resourceGroup: 'rg1234'
});
Use the `AzureOpenAiChatClient` to send chat completion requests to an OpenAI model deployed in SAP generative AI hub. The client sends requests with Azure OpenAI API version `2024-06-01`.
import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models';
const chatClient = new AzureOpenAiChatClient('gpt-4o');
const response = await chatClient.run({
messages: [
{
role: 'user',
content: 'Where is the deepest place on earth located'
}
]
});
const responseContent = response.getContent();
Multiple messages can be sent in a single request, enabling the model to reference the conversation history.
Include parameters like `max_tokens` and `temperature` in the request to control the completion behavior:
const response = await chatClient.run({
messages: [
{
role: 'system',
content: 'You are a friendly chatbot.'
},
{
role: 'user',
content: 'Hi, my name is Isa'
},
{
role: 'assistant',
content:
'Hi Isa! It is nice to meet you. Is there anything I can help you with today?'
},
{
role: 'user',
content: 'Can you remind me, What is my name?'
}
],
max_tokens: 100,
temperature: 0.0
});
const responseContent = response.getContent();
const tokenUsage = response.getTokenUsage();
console.log(
`Total tokens consumed by the request: ${tokenUsage.total_tokens}\n` +
`Input prompt tokens consumed: ${tokenUsage.prompt_tokens}\n` +
`Output text completion tokens consumed: ${tokenUsage.completion_tokens}\n`
);
Refer to the `AzureOpenAiChatCompletionParameters` interface for other parameters that can be passed to the chat completion request.
The `AzureOpenAiChatClient` supports streaming responses for chat completion requests, based on the Server-sent events standard. Use the `stream()` method to receive a stream of chunk responses from the model. After consuming the stream, call the helper methods to get the finish reason and token usage information, respectively.
const chatClient = new AzureOpenAiChatClient('gpt-4o');
const response = await chatClient.stream({
messages: [
{
role: 'user',
content: 'Give me a very long introduction of SAP Cloud SDK.'
}
]
});
for await (const chunk of response.stream) {
console.log(JSON.stringify(chunk));
}
const finishReason = response.getFinishReason();
const tokenUsage = response.getTokenUsage();
console.log(`Finish reason: ${finishReason}\n`);
console.log(`Token usage: ${JSON.stringify(tokenUsage)}\n`);
The client also provides a helper method to extract the delta content and stream it as strings directly.
for await (const chunk of response.stream.toContentStream()) {
console.log(chunk); // will log the delta content
}
Each chunk will be a defined string containing the delta content. Set the `choiceIndex` parameter of the `toContentStream()` method to stream a specific choice.
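For example, a sketch that streams only the choice at index 1; passing the index positionally like this is an assumption to verify against the client's type definitions:

```ts
// Assumed usage: stream only the delta content of the choice at index 1.
for await (const chunk of response.stream.toContentStream(1)) {
  console.log(chunk);
}
```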
Streaming requests can be aborted using the `AbortController` API. In case of an error, the SAP Cloud SDK for AI will automatically close the stream. Additionally, the stream can be aborted manually by calling the `stream()` method with an `AbortController` object.
const chatClient = new AzureOpenAiChatClient('gpt-4o');
const controller = new AbortController();
const response = await chatClient.stream(
{
messages: [
{
role: 'user',
content: 'Give me a very long introduction of SAP Cloud SDK.'
}
]
},
controller
);
// Abort the streaming request after one second
setTimeout(() => {
controller.abort();
}, 1000);
for await (const chunk of response.stream) {
console.log(JSON.stringify(chunk));
}
In this example, the streaming request will be aborted after one second. An abort controller can be useful, for example, when the end user wants to stop the stream or refreshes the page.
Use the `AzureOpenAiEmbeddingClient` to send embedding requests to an OpenAI model deployed in SAP generative AI hub.
import { AzureOpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models';
const embeddingClient = new AzureOpenAiEmbeddingClient(
'text-embedding-ada-002'
);
const response = await embeddingClient.run({
input: 'AI is fascinating'
});
const embedding = response.getEmbedding();
Set a custom request configuration in the `requestConfig` parameter when calling the `run()` method of a chat or embedding client.
const response = await client.run(
{
...
},
{
headers: {
'x-custom-header': 'custom-value'
// Add more headers here
},
params: {
// Add more parameters here
}
// Add more request configuration here
}
);
When initializing the `AzureOpenAiChatClient` and `AzureOpenAiEmbeddingClient` clients, it is possible to provide a custom destination. For example, when targeting a destination with the name `my-destination`, the following code can be used:
const client = new AzureOpenAiChatClient('gpt-35-turbo', {
destinationName: 'my-destination'
});
By default, the fetched destination is cached. To disable caching, set the `useCache` parameter to `false` together with the `destinationName` parameter.
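A minimal sketch of what this could look like, assuming `useCache` is passed alongside `destinationName` in the same options object:

```ts
// Assumption: useCache sits next to destinationName in the destination options.
const client = new AzureOpenAiChatClient('gpt-35-turbo', {
  destinationName: 'my-destination',
  useCache: false
});
```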
For local testing instructions, refer to this section.
This project is open to feature requests, bug reports and questions via GitHub issues.
Contribution and feedback are encouraged and always welcome. For more information about how to contribute, the project structure, as well as additional contribution information, see our Contribution Guidelines.
The SAP Cloud SDK for AI is released under the Apache License Version 2.0.