@sap-ai-sdk/langchain
SAP Cloud SDK for AI is the official Software Development Kit (SDK) for SAP AI Core, SAP Generative AI Hub, and Orchestration Service.
This package provides LangChain model clients built on top of the foundation model clients of the SAP Cloud SDK for AI.
```
$ npm install @sap-ai-sdk/langchain
```
When installing additional LangChain packages such as `@langchain/core`, use the same version as the `@sap-ai-sdk/langchain` package. To see which LangChain version this package is currently using, check our `package.json`.

Use the `DeploymentApi` from `@sap-ai-sdk/ai-api` to deploy a model. Alternatively, you can also create deployments using the SAP AI Launchpad. Once the deployment is complete, the model can be accessed via the `deploymentUrl`.
Accessing the AI Core Service via the SDK
The SDK automatically retrieves the AI Core service credentials and resolves the access token needed for authentication.

- In Cloud Foundry, the credentials are read from the `VCAP_SERVICES` environment variable.
- In Kubernetes / Kyma environments, you have to mount the service binding as a secret instead; for more information, refer to this documentation.
SAP AI Core manages access to generative AI models through the global AI scenario `foundation-models`.
Creating a deployment for a model requires access to this scenario.
Each model, model version, and resource group allows for a one-time deployment.
After deployment completion, the response includes a `deploymentUrl` and an `id`, which is the deployment ID.
For more information, see here.
Resource groups represent a virtual collection of related resources within the scope of one SAP AI Core tenant.
Consequently, each deployment ID and resource group uniquely map to a combination of model and model version within the `foundation-models` scenario.
This package offers both chat and embedding clients, currently supporting Azure OpenAI. All clients comply with LangChain's interface.
To initialize a client, provide the model name:
```ts
import {
  AzureOpenAiChatClient,
  AzureOpenAiEmbeddingClient
} from '@sap-ai-sdk/langchain';

// For a chat client
const chatClient = new AzureOpenAiChatClient({ modelName: 'gpt-4o' });

// For an embedding client
const embeddingClient = new AzureOpenAiEmbeddingClient({ modelName: 'gpt-4o' });
```
In addition to the default parameters of the model vendor (e.g., OpenAI) and LangChain, you can pass additional parameters to narrow down the search for the desired model:
```ts
const chatClient = new AzureOpenAiChatClient({
  modelName: 'gpt-4o',
  modelVersion: '24-07-2021',
  resourceGroup: 'my-resource-group'
});
```
Do not pass a deployment ID to initialize the client. The LangChain model clients are initialized with the model name, model version, and resource group.
Note that LangChain clients attempt up to 6 retries with exponential backoff by default in case of a failure. Especially in testing environments, you might want to reduce this number to speed things up:
```ts
const embeddingClient = new AzureOpenAiEmbeddingClient({
  modelName: 'gpt-4o',
  maxRetries: 0
});
```
When initializing the `AzureOpenAiChatClient` and `AzureOpenAiEmbeddingClient`, it is possible to provide a custom destination. For example, when targeting a destination with the name `my-destination`, the following code can be used:
```ts
const chatClient = new AzureOpenAiChatClient(
  {
    modelName: 'gpt-4o',
    modelVersion: '24-07-2021',
    resourceGroup: 'my-resource-group'
  },
  {
    destinationName: 'my-destination'
  }
);
```
By default, the fetched destination is cached. To disable caching, set the `useCache` parameter to `false` together with the `destinationName` parameter.
The chat client allows you to interact with Azure OpenAI chat models, accessible via the generative AI hub of SAP AI Core. To invoke the client, pass a prompt:
```ts
const response = await chatClient.invoke("What's the capital of France?");
```
```ts
import { AzureOpenAiChatClient } from '@sap-ai-sdk/langchain';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { ChatPromptTemplate } from '@langchain/core/prompts';

// Initialize the client
const client = new AzureOpenAiChatClient({ modelName: 'gpt-35-turbo' });

// Create a prompt template
const promptTemplate = ChatPromptTemplate.fromMessages([
  ['system', 'Answer the following in {language}:'],
  ['user', '{text}']
]);

// Create an output parser
const parser = new StringOutputParser();

// Chain together template, client, and parser
const llmChain = promptTemplate.pipe(client).pipe(parser);

// Invoke the chain
const response = await llmChain.invoke({
  language: 'german',
  text: 'What is the capital of France?'
});
```
Embedding clients allow embedding either text or document chunks (represented as arrays of strings). While you can use them standalone, they are usually combined with other LangChain utilities, such as a text splitter for preprocessing and a vector store for storage and retrieval of the relevant embeddings. For a complete example of how to implement RAG with our LangChain clients, take a look at our sample code.
```ts
// Embed a single query string
const embeddedText = await embeddingClient.embedQuery(
  'Paris is the capital of France.'
);

// Embed an array of document chunks
const embeddedDocuments = await embeddingClient.embedDocuments([
  'Page 1: Paris is the capital of France.',
  'Page 2: It is a beautiful city.'
]);
```
```ts
import { AzureOpenAiEmbeddingClient } from '@sap-ai-sdk/langchain';
import { RecursiveCharacterTextSplitter } from '@langchain/textsplitters';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';

// Create a text splitter and split the document
const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 2000,
  chunkOverlap: 200
});
const splits = await textSplitter.splitDocuments(docs);

// Initialize the embedding client
const embeddingClient = new AzureOpenAiEmbeddingClient({
  modelName: 'text-embedding-ada-002'
});

// Create a vector store from the document
const vectorStore = await MemoryVectorStore.fromDocuments(
  splits,
  embeddingClient
);

// Create a retriever for the vector store
const retriever = vectorStore.asRetriever();
```
For local testing instructions, refer to this section.
This project is open to feature requests, bug reports and questions via GitHub issues.
Contribution and feedback are encouraged and always welcome. For more information about how to contribute, the project structure, as well as additional contribution information, see our Contribution Guidelines.
The SAP Cloud SDK for AI is released under the Apache License, Version 2.0.