@sap-ai-sdk/foundation-models

This package incorporates generative AI foundation models into your AI activities in SAP AI Core and SAP AI Launchpad.

Installation

$ npm install @sap-ai-sdk/foundation-models

Azure OpenAI Client

To make a generative AI model available for use, you need to create a deployment. You can create a deployment for each model and model version, as well as for each resource group that you want to use with generative AI hub.

After the deployment is complete, you have a deploymentUrl, which can be used to access the model.

The Azure OpenAI client allows you to send chat completion or embedding requests to OpenAI models deployed in SAP generative AI hub.

Prerequisites

  • Enable the AI Core service in BTP.
  • Project configured with Node.js v20 or higher and native ESM support enabled.
  • A deployed OpenAI model in SAP generative AI hub.
    • You can use the DeploymentApi from @sap-ai-sdk/ai-api to deploy a model to SAP generative AI hub. For more information, see here.
  • The @sap-ai-sdk/foundation-models package installed in your project.
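Since the prerequisites require Node.js v20 or higher, a quick runtime check can fail fast before any requests are made. This is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper: verify that the running Node.js version satisfies
// the v20+ prerequisite before using the SDK.
function satisfiesNodeVersion(
  versionString: string,
  requiredMajor: number
): boolean {
  const major = Number(versionString.split('.')[0]);
  return Number.isInteger(major) && major >= requiredMajor;
}

if (!satisfiesNodeVersion(process.versions.node, 20)) {
  console.warn(
    `Node.js v20 or higher is required, found v${process.versions.node}`
  );
}
```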

Usage of Azure OpenAI Chat Client

Use the AzureOpenAiChatClient to send chat completion requests to an OpenAI model deployed in SAP generative AI hub. You can pass the model name as a parameter to the client; the SDK then implicitly fetches the deployment ID for the model from the AI Core service and uses it to send the request.

By default, the SDK caches the deployment information, which includes the deployment ID and properties such as the model name and model version, for 5 minutes to prevent performance impact from fetching the deployment information for every request.

import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models';

const client = new AzureOpenAiChatClient('gpt-35-turbo');
const response = await client.run({
  messages: [
    {
      role: 'user',
      content: 'Where is the deepest place on Earth located?'
    }
  ]
});

const responseContent = response.getContent();
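The deployment-information caching described above can be illustrated with a minimal time-to-live cache. This is a hypothetical simplification for illustration, not the SDK's actual cache implementation:

```typescript
// Minimal sketch of a TTL cache, similar in spirit to how the SDK avoids
// re-fetching deployment information for every request.
// Hypothetical code, not the SDK's actual implementation.
interface DeploymentInfo {
  deploymentId: string;
  model: string;
}

class TtlCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string, now: number = Date.now()): T | undefined {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= now) {
      // Expired or missing: drop the entry so it is fetched again.
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T, now: number = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }
}

// 5-minute TTL, matching the default described above.
const deploymentCache = new TtlCache<DeploymentInfo>(5 * 60 * 1000);
deploymentCache.set('gpt-35-turbo', {
  deploymentId: 'd1234',
  model: 'gpt-35-turbo'
});
```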

Use the following snippet to send a chat completion request with system messages:

import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models';
import { createLogger } from '@sap-cloud-sdk/util';

const logger = createLogger('foundation-models-usage');

const client = new AzureOpenAiChatClient('gpt-35-turbo');
const response = await client.run({
  messages: [
    {
      role: 'system',
      content: 'You are a friendly chatbot.'
    },
    {
      role: 'user',
      content: 'Hi, my name is Isa'
    },
    {
      role: 'assistant',
      content: 'Hi Isa! It is nice to meet you. Is there anything I can help you with today?'
    },
    {
      role: 'user',
      content: 'Can you remind me, what is my name?'
    }
  ],
  max_tokens: 100,
  temperature: 0.0
});

const responseContent = response.getContent();
const tokenUsage = response.getTokenUsage();

logger.info(
  `Total tokens consumed by the request: ${tokenUsage.total_tokens}\n` +
  `Input prompt tokens consumed: ${tokenUsage.prompt_tokens}\n` +
  `Output text completion tokens consumed: ${tokenUsage.completion_tokens}\n`
);
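Token usage is additive: total_tokens is the sum of prompt and completion tokens. The sketch below mirrors the logging shown above with a hypothetical helper and mocked usage values:

```typescript
// Shape of the object returned by response.getTokenUsage().
interface TokenUsage {
  total_tokens: number;
  prompt_tokens: number;
  completion_tokens: number;
}

// Hypothetical helper that formats the usage message logged above.
function formatTokenUsage(usage: TokenUsage): string {
  return (
    `Total tokens consumed by the request: ${usage.total_tokens}\n` +
    `Input prompt tokens consumed: ${usage.prompt_tokens}\n` +
    `Output text completion tokens consumed: ${usage.completion_tokens}`
  );
}

// Mocked values for illustration; total_tokens = prompt + completion.
const usage: TokenUsage = {
  prompt_tokens: 52,
  completion_tokens: 9,
  total_tokens: 61
};
console.log(formatTokenUsage(usage));
```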

It is possible to send multiple messages in a single request. This feature is useful for providing a history of the conversation to the model.

Pass parameters such as max_tokens and temperature to the request to control the completion behavior. Refer to the AzureOpenAiChatCompletionParameters interface for the full list of parameters that can be passed to the chat completion request.

Obtaining a Client using Resource Groups

Resource groups represent a virtual collection of related resources within the scope of one SAP AI Core tenant.

As an alternative to passing the model name, you can obtain a client by specifying the deployment ID and resource group.

import { AzureOpenAiChatClient } from '@sap-ai-sdk/foundation-models';

const response = await new AzureOpenAiChatClient({ deploymentId: 'd1234', resourceGroup: 'rg1234' }).run({
  messages: [
    {
      role: 'user',
      content: 'What is the capital of France?'
    }
  ]
});

Usage of Azure OpenAI Embedding Client

Use the AzureOpenAiEmbeddingClient to send embedding requests to an OpenAI model deployed in SAP generative AI hub. You can pass the model name as a parameter to the client; the SDK then implicitly fetches the deployment ID for the model from the AI Core service and uses it to send the request.

As with the chat client, the SDK caches the deployment information for 5 minutes by default.

import { AzureOpenAiEmbeddingClient } from '@sap-ai-sdk/foundation-models';

const client = new AzureOpenAiEmbeddingClient('text-embedding-ada-002');
const response = await client.run({
  input: 'AI is fascinating'
});
const embedding = response.getEmbedding();
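Embedding vectors like the one returned above are typically compared with cosine similarity. A small self-contained sketch, independent of the SDK:

```typescript
// Cosine similarity between two embedding vectors: values near 1 mean the
// inputs point in the same direction, values near 0 mean they are unrelated.
// Self-contained illustration, independent of the SDK.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error('Embeddings must have the same dimension');
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical vectors score ~1; orthogonal vectors score ~0.
console.log(cosineSimilarity([1, 2, 3], [1, 2, 3]));
console.log(cosineSimilarity([1, 0], [0, 1]));
```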

As with the Azure OpenAI chat client, you can also pass the deployment ID and resource group to the client instead of the model name.

const client = new AzureOpenAiEmbeddingClient({ deploymentId: 'd1234', resourceGroup: 'rg1234' });

Support, Feedback, Contribution

This project is open to feature requests, suggestions, bug reports, etc. via GitHub issues.

Contribution and feedback are encouraged and always welcome. For more information about how to contribute, the project structure, as well as additional contribution information, see our Contribution Guidelines.

License

The SAP Cloud SDK for AI is released under the Apache License Version 2.0.


Package last updated on 19 Sep 2024
