
heroku-langchain
Integrate LangChainJS with Heroku's Managed Inference and Agents (Mia) services.
This SDK provides a convenient way to interact with Heroku's AI services, specifically for chat completions, agent functionalities, and text embeddings.
Node.js 20+ is required.
pnpm install heroku-langchain
This SDK includes three main classes:
- ChatHeroku: Chat completions with support for function calling, structured outputs, and streaming
- HerokuAgent: Autonomous agents with access to Heroku tools and MCP (Model Context Protocol) tools
- HerokuEmbeddings: Text embeddings for similarity search, RAG applications, and semantic understanding

Here's a simple example of how to use the ChatHeroku class for chat completions:
import { ChatHeroku } from "heroku-langchain";
import { HumanMessage } from "@langchain/core/messages";

async function main() {
  // Ensure INFERENCE_MODEL_ID and INFERENCE_KEY are set in your environment
  // or pass them directly to the constructor:
  // const chat = new ChatHeroku({ model: "your-model-id", apiKey: "your-api-key" });
  const chat = new ChatHeroku({ model: "gpt-oss-120b" });
  const messages = [new HumanMessage("Hello, how are you doing today?")];

  try {
    const response = await chat.invoke(messages);
    console.log("AI Response:", response.content);

    // Example of streaming
    // const stream = await chat.stream(messages);
    // for await (const chunk of stream) {
    //   console.log(chunk.content);
    // }
  } catch (error) {
    console.error("Error:", error);
  }
}

main();
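ChatHeroku also supports structured output (see examples/chat-structured-output.ts for the complete version). The sketch below is illustrative and assumes ChatHeroku exposes LangChain's standard withStructuredOutput method together with a Zod schema; the schema and prompt here are made up for the example:

import { ChatHeroku } from "heroku-langchain";
import { z } from "zod";

// Illustrative schema; any Zod object schema should work the same way
const ReviewSchema = z.object({
  sentiment: z.enum(["positive", "neutral", "negative"]).describe("Overall sentiment"),
  summary: z.string().describe("One-sentence summary of the review"),
});

async function structuredExample() {
  const chat = new ChatHeroku({ model: "gpt-oss-120b" });
  // Assumption: ChatHeroku inherits withStructuredOutput from LangChain's chat model base class
  const structuredChat = chat.withStructuredOutput(ReviewSchema);

  // The response is parsed into a plain object matching the schema
  const review = await structuredChat.invoke(
    "Review: The deploy was painless and the dyno restarted instantly.",
  );
  console.log(review.sentiment, "-", review.summary);
}

structuredExample().catch(console.error);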
Here's how to use the HerokuEmbeddings class for generating text embeddings:
import { HerokuEmbeddings } from "heroku-langchain";

async function main() {
  const embeddings = new HerokuEmbeddings({
    model: "cohere-embed-multilingual",
    // Optional: set API credentials explicitly
    // apiKey: process.env.EMBEDDING_KEY,
    // apiUrl: process.env.EMBEDDING_URL
  });

  try {
    // Generate embedding for a single query
    const queryEmbedding = await embeddings.embedQuery("What is Heroku?", {
      input_type: "search_query",
    });
    console.log(`Query embedding dimensions: ${queryEmbedding.length}`);

    // Generate embeddings for multiple documents
    const documents = [
      "Heroku is a cloud platform as a service (PaaS)",
      "It supports multiple programming languages",
      "Heroku makes it easy to deploy and scale applications",
    ];
    const docEmbeddings = await embeddings.embedDocuments(documents, {
      input_type: "search_document",
    });
    console.log(`Generated ${docEmbeddings.length} document embeddings`);
  } catch (error) {
    console.error("Error:", error);
  }
}

main();
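The returned vectors can be used directly for semantic search. The following is a minimal, dependency-free sketch that ranks the documents from the example above against a query by cosine similarity; it relies only on the embedQuery and embedDocuments calls already shown:

import { HerokuEmbeddings } from "heroku-langchain";

// Plain cosine similarity between two vectors of equal length
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function searchExample() {
  const embeddings = new HerokuEmbeddings({ model: "cohere-embed-multilingual" });

  const documents = [
    "Heroku is a cloud platform as a service (PaaS)",
    "It supports multiple programming languages",
    "Heroku makes it easy to deploy and scale applications",
  ];
  const docEmbeddings = await embeddings.embedDocuments(documents, {
    input_type: "search_document",
  });
  const queryEmbedding = await embeddings.embedQuery("How do I deploy an app?", {
    input_type: "search_query",
  });

  // Rank documents by similarity to the query
  const ranked = documents
    .map((text, i) => ({ text, score: cosineSimilarity(queryEmbedding, docEmbeddings[i]) }))
    .sort((a, b) => b.score - a.score);
  console.log("Best match:", ranked[0].text);
}

searchExample().catch(console.error);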
The SDK can utilize the following environment variables:
- INFERENCE_MODEL_ID: The ID of the inference model to use. This is required if not provided in the constructor.
- INFERENCE_KEY: Your Heroku Managed Inference and Agents API key. This is required if not provided in the constructor.
- INFERENCE_URL: The base URL for the Heroku Managed Inference and Agents API.
- EMBEDDING_MODEL_ID: The ID of the embedding model to use (e.g., "cohere-embed-multilingual").
- EMBEDDING_KEY: Your Heroku Embedding API key.
- EMBEDDING_URL: The base URL for the Heroku Embedding API.

You can build an agent with LangChain's createAgent helper and attach tools declared with the tool helper for richer interactions:
import { ChatHeroku } from "heroku-langchain";
import { HumanMessage } from "@langchain/core/messages";
import { createAgent, tool } from "langchain";
import { z } from "zod";

const getWeather = tool(
  async ({ location }) => {
    // In a real scenario, you would call a weather API here
    if (location.toLowerCase().includes("san francisco")) {
      return JSON.stringify({ weather: "sunny", temperature: "70F" });
    }
    return JSON.stringify({ weather: "unknown", temperature: "unknown" });
  },
  {
    name: "get_weather",
    description: "Gets the current weather for a given location.",
    schema: z.object({
      location: z
        .string()
        .describe("The city and state, e.g., San Francisco, CA"),
    }),
  },
);

async function main() {
  const model = new ChatHeroku({ model: "your-model-id" });

  const agent = createAgent({
    model,
    tools: [getWeather],
    systemPrompt:
      "You are a weather assistant. Call the get_weather tool when needed.",
  });

  const response = await agent.invoke({
    messages: [new HumanMessage("What's the weather like in San Francisco?")],
  });
  console.log("AI Response:", response.messages.at(-1)?.content);
}

main();
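Agent runs can also be streamed step by step (see examples/create-agent-updates-stream.ts for the full version). The sketch below assumes the agent returned by createAgent supports LangGraph-style streaming with the "updates" stream mode, which surfaces each model and tool step as it completes; treat the option names as assumptions rather than the definitive API:

import { ChatHeroku } from "heroku-langchain";
import { HumanMessage } from "@langchain/core/messages";
import { createAgent } from "langchain";

async function streamingExample() {
  const model = new ChatHeroku({ model: "your-model-id" });
  const agent = createAgent({
    model,
    tools: [], // attach tools such as the get_weather tool from the example above
    systemPrompt: "You are a helpful assistant.",
  });

  // Assumption: createAgent returns a LangGraph graph whose stream() accepts streamMode "updates"
  const stream = await agent.stream(
    { messages: [new HumanMessage("Give me one short fact about Heroku.")] },
    { streamMode: "updates" },
  );

  // Each update describes the step (model call or tool call) that just completed
  for await (const chunk of stream) {
    console.log(JSON.stringify(chunk, null, 2));
  }
}

streamingExample().catch(console.error);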
The HerokuAgent class allows for more autonomous interactions with access to Heroku tools and MCP (Model Context Protocol) tools. Here's an example demonstrating agent usage:
import { createAgent } from "langchain";
import { HumanMessage } from "@langchain/core/messages";
import { HerokuAgent } from "heroku-langchain";
import { HerokuTool } from "heroku-langchain/types";

async function agentExample() {
  console.log("Running Heroku createAgent Example...");

  const appName = process.env.HEROKU_APP_NAME || "mia-inference-demo";
  const tools: HerokuTool[] = [
    {
      type: "heroku_tool",
      name: "dyno_run_command",
      runtime_params: {
        target_app_name: appName,
        tool_params: {
          cmd: "date",
          description: "Gets the current date and time on the server.",
          parameters: { type: "object", properties: {} },
        },
      },
    },
  ];

  console.log(`📱 Using app: ${appName}`);
  console.log("💡 Note: Make sure this app exists and you have access to it!");
  console.log(
    " Set HEROKU_APP_NAME environment variable to use a different app.",
  );

  const model = new HerokuAgent();
  const agent = createAgent({
    model,
    tools,
    systemPrompt:
      "You are a Heroku operator. Prefer dyno_run_command to inspect the target app.",
  });

  const response = await agent.invoke({
    messages: [
      new HumanMessage(
        "What time is it on the app server? Please use the available tools to check.",
      ),
    ],
  });

  const finalMessage = response.messages.at(-1);
  console.log(finalMessage?.content);
}

agentExample().catch(console.error);
You can also use MCP (Model Context Protocol) tools with the agent:
import { createAgent } from "langchain";
import { HumanMessage } from "@langchain/core/messages";
import { HerokuAgent } from "heroku-langchain";
import { HerokuTool } from "heroku-langchain/types";

async function mcpExample() {
  const tools: HerokuTool[] = [
    {
      type: "mcp",
      name: "mcp-brave/brave_web_search", // MCP tool name registered on Heroku MCP Toolkit
    },
  ];

  const model = new HerokuAgent();
  const agent = createAgent({
    model,
    tools,
    systemPrompt: "Answer with help from brave_web_search when needed.",
  });

  const response = await agent.invoke({
    messages: [new HumanMessage("What is new in the world of AI?")],
  });
  console.log(response.messages.at(-1)?.content);
}

mcpExample().catch(console.error);
Complete working examples are available in the examples/ folder, organized by functionality:
ChatHeroku:
- examples/chat-basic.ts — Basic chat completion
- examples/chat-structured-output.ts — Structured output with Zod schemas
- examples/chat-structured-output-advanced.ts — Structured output with complex Zod schemas
- examples/chat-lcel-prompt.ts — LCEL with prompt templates
- examples/chat-runnable-sequence.ts — Chaining with RunnableSequence
- examples/create-agent-wikipedia-tool.ts — Tool integration with Wikipedia search
- examples/create-agent-custom-tool.ts — Custom weather tool with function calling
- examples/create-agent-updates-stream.ts — Streaming tool execution with createAgent and updates stream mode

HerokuAgent:
- examples/create-heroku-agent-basic.ts — Minimal createAgent wiring for Heroku tools
- examples/create-heroku-agent-streaming.ts — Streaming tool execution with createAgent
- examples/create-heroku-agent-mcp.ts — Using MCP tools with createAgent
- examples/create-heroku-agent-structured-output.ts — Structured output with createAgent

HerokuEmbeddings:
- examples/embeddings-basic.ts — Basic embeddings usage for queries and documents

LangGraph:
- examples/langraph.ts — Multi-agent workflow with LangGraph
- examples/langraph-mcp.ts — LangGraph with MCP tools for database interactions
- examples/langgraph-human-in-the-loop.ts — LangGraph with human-in-the-loop interruption
- examples/create-heroku-agent-langgraph.ts — Agent workflow with LangGraph and Heroku tools

To run the examples:
# Set required environment variables for chat/agents
export INFERENCE_MODEL_ID="gpt-oss-120b"
export INFERENCE_KEY="your-heroku-api-key"
export HEROKU_APP_NAME="your-app-name" # Optional, defaults to "mia-inference-demo"
# Set required environment variables for embeddings
export EMBEDDING_MODEL_ID="cohere-embed-multilingual"
export EMBEDDING_KEY="your-embedding-api-key"
export EMBEDDING_URL="your-embedding-api-url"
# Run a chat example
npx tsx examples/chat-basic.ts
# Run a structured output example
npx tsx examples/chat-structured-output.ts
# Run an agent example
npx tsx examples/create-heroku-agent-basic.ts
# Run the embeddings example
npx tsx examples/embeddings-basic.ts
For more detailed information on the available classes, methods, and types, please refer to the source code and the TypeDoc-generated documentation (if available).

- ChatHeroku: For chat completions with function calling and structured output support.
- HerokuAgent: For agent-based interactions with Heroku and MCP tools.
- HerokuEmbeddings: For generating text embeddings and semantic search.
- types.ts: Contains all relevant TypeScript type definitions.

This project uses Node.js's native test runner with TypeScript support. To run the test suite:
# Run all tests once
pnpm test
# Run tests in watch mode (re-runs on file changes)
pnpm test:watch
The test files are organized as follows:
- test/common.test.ts - Tests for utility functions and error handling
- test/types.test.ts - Type definition validation tests
- test/chat-heroku.test.ts - ChatHeroku class tests
- test/heroku-agent.test.ts - HerokuAgent class tests
- test/embeddings.test.ts - HerokuEmbeddings class tests
- test/integration/** - End-to-end integration tests

All tests except the integration tests use environment variable mocking to avoid requiring actual API keys during testing.
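As an illustration of that mocking approach, a hypothetical unit test might look like the sketch below; it uses Node's built-in node:test runner and injects fake credentials through environment variables instead of real API keys. The files under test/ remain the authoritative reference:

import { test, before, after } from "node:test";
import assert from "node:assert/strict";
import { ChatHeroku } from "heroku-langchain";

// Hypothetical sketch: fake credentials are injected via environment variables
// so that no real Heroku API key is required during the test run.
before(() => {
  process.env.INFERENCE_MODEL_ID = "test-model";
  process.env.INFERENCE_KEY = "test-key";
  process.env.INFERENCE_URL = "http://localhost:9999";
});

after(() => {
  delete process.env.INFERENCE_MODEL_ID;
  delete process.env.INFERENCE_KEY;
  delete process.env.INFERENCE_URL;
});

test("ChatHeroku picks up configuration from the environment", () => {
  // Constructing the client should succeed without hitting the network
  const chat = new ChatHeroku({});
  assert.ok(chat instanceof ChatHeroku);
});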
License: Apache 2.0