# langchain-gradientai

This package contains the LangChain integration with DigitalOcean GradientAI.
## Installation

```bash
pip install -U langchain-gradientai
```
Configure your credentials by setting the `DIGITALOCEAN_INFERENCE_KEY` environment variable:

1. Log in to the DigitalOcean Cloud console.
2. Go to the GradientAI Platform and navigate to Serverless Inference.
3. Click **Create model access key**, enter a name, and create the key.
4. Use the generated key as your `DIGITALOCEAN_INFERENCE_KEY`.

Create a `.env` file with your access key:

```bash
DIGITALOCEAN_INFERENCE_KEY=your_access_key_here
```
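The examples below load this file with `python-dotenv`. If you prefer not to add that dependency, a few lines of standard-library Python suffice (a minimal sketch; `load_env_file` is a hypothetical helper, not part of `langchain-gradientai`, and it skips python-dotenv features such as quoting and variable expansion):

```python
import os

def load_env_file(path=".env"):
    """Load KEY=value pairs from a .env file into os.environ.

    A minimal stand-in for python-dotenv's load_dotenv(); it does not
    handle quoted or multi-line values.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and lines without '='
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Do not overwrite variables already set in the environment
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env_file()
```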
## Chat Models

The `ChatGradientAI` class exposes chat models from `langchain-gradientai`.
### Invoke

```python
import os

from dotenv import load_dotenv
from langchain_gradientai import ChatGradientAI

load_dotenv()

llm = ChatGradientAI(
    model="llama3.3-70b-instruct",
    api_key=os.getenv("DIGITALOCEAN_INFERENCE_KEY"),
)

result = llm.invoke("What is the capital of France?")
print(result.content)
```
### Stream

```python
import os

from dotenv import load_dotenv
from langchain_gradientai import ChatGradientAI

load_dotenv()

llm = ChatGradientAI(
    model="llama3.3-70b-instruct",
    api_key=os.getenv("DIGITALOCEAN_INFERENCE_KEY"),
)

for chunk in llm.stream("Tell me what happened to the dinosaurs."):
    print(chunk.content, end="", flush=True)
```
More features coming soon.