
@letta-ai/agentic-learning
Drop-in SDK to add an AI memory layer to any application. Works with OpenAI, Anthropic, Gemini, Claude, Vercel AI SDK.
Add continual learning and long-term memory to any LLM agent with one line of code. This SDK enables agents to learn from every conversation and recall context across sessions—making any agent across any platform stateful.
```typescript
import OpenAI from 'openai';
import { learning } from '@letta-ai/agentic-learning';

const client = new OpenAI();

await learning({ agent: 'my_agent' }, async () => {
  // LLM is now stateful!
  const response = await client.chat.completions.create(...);
});
```
```shell
npm install @letta-ai/agentic-learning

# Set your API keys
export OPENAI_API_KEY="your-openai-key"
export LETTA_API_KEY="your-letta-key"
```
```typescript
import { learning } from '@letta-ai/agentic-learning';
import OpenAI from 'openai';

const client = new OpenAI();

// Add continual learning with one line
await learning({ agent: "my_assistant" }, async () => {
  // All LLM calls inside this block have learning enabled
  const response = await client.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "My name is Alice" }]
  });

  // Agent remembers prior context
  const response2 = await client.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "What's my name?" }]
  });
  // Returns: "Your name is Alice"
});
```
That's it. The SDK automatically captures every conversation and retrieves relevant context for later calls.

Supported providers:
| Provider | Package | Status | Example |
|---|---|---|---|
| OpenAI Chat | openai>=4.0.0 | ✅ Stable | openai_example.ts |
| OpenAI Responses | openai>=4.0.0 | ✅ Stable | openai_responses_example.ts |
| Anthropic | @anthropic-ai/sdk>=0.30.0 | ✅ Stable | anthropic_example.ts |
| Claude Agent SDK | @anthropic-ai/claude-agent-sdk>=0.1.0 | ✅ Stable | claude_example.ts |
| Gemini | @google/generative-ai>=0.21.0 | ✅ Stable | gemini_example.ts |
| Vercel AI SDK | ai>=3.0.0 | ✅ Stable | vercel_example.ts |
Create an issue to request support for another provider, or contribute a PR.
This SDK adds stateful memory to your existing LLM code with zero architectural changes.

Architecture:
1. 🎯 Wrap your code in learning
2. 📝 Capture conversations automatically
3. 🔍 Retrieve relevant memories
4. 🤖 Respond with full context

```
┌─────────────┐
│  Your Code  │
│  learning() │
└──────┬──────┘
       │
       ▼
┌─────────────┐     ┌──────────────┐
│ Interceptor │───▶│ Letta Server │  (Stores conversations,
│  (Inject)   │◀───│   (Memory)   │   retrieves context)
└──────┬──────┘     └──────────────┘
       │
       ▼
┌─────────────┐
│   LLM API   │  (Sees enriched prompts)
│  OpenAI/etc │
└─────────────┘
```
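The capture/retrieve/inject loop can be sketched with an in-memory store and a stub model. Everything below (`memoryStore`, `fakeLLM`, `statefulCall`) is an illustrative stand-in for the real LLM API and the Letta server, not part of the SDK:

```typescript
// Illustrative sketch of the capture → retrieve → inject loop.
type Message = { role: "user" | "assistant" | "system"; content: string };

// Stand-in for server-side memory.
const memoryStore: Message[] = [];

// Stand-in for the real model: reports how many messages it was sent.
function fakeLLM(messages: Message[]): string {
  return `saw ${messages.length} message(s)`;
}

function statefulCall(userText: string): string {
  // 1. Retrieve: prepend stored messages as context.
  const request: Message[] = [...memoryStore, { role: "user", content: userText }];
  // 2. Respond: the model sees the enriched prompt.
  const reply = fakeLLM(request);
  // 3. Capture: store both turns for the next call.
  memoryStore.push({ role: "user", content: userText });
  memoryStore.push({ role: "assistant", content: reply });
  return reply;
}

console.log(statefulCall("My name is Alice")); // → "saw 1 message(s)"
console.log(statefulCall("What's my name?"));  // → "saw 3 message(s)"
```

The second call sees three messages because the first exchange was captured and injected, which is the same mechanism the diagram describes, minus persistence and retrieval ranking.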
```typescript
// First session
await learning({ agent: "sales_bot" }, async () => {
  const response = await client.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "I'm interested in Product X" }]
  });
});

// Later session - agent remembers automatically
await learning({ agent: "sales_bot" }, async () => {
  const response = await client.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "Tell me more about that product" }]
  });
  // Agent knows you're asking about Product X
});
```
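Cross-session recall like this amounts to a per-agent transcript keyed by agent name. The map below is a purely illustrative stand-in for the Letta server's storage, not the SDK's actual implementation:

```typescript
// Illustrative per-agent transcript store keyed by agent name.
const transcripts = new Map<string, string[]>();

function record(agent: string, line: string): void {
  const log = transcripts.get(agent) ?? [];
  log.push(line);
  transcripts.set(agent, log);
}

function contextFor(agent: string): string[] {
  // Each new session starts from everything previously recorded.
  return transcripts.get(agent) ?? [];
}

// First session
record("sales_bot", "user: I'm interested in Product X");

// Later session: the earlier turn is available as context.
console.log(contextFor("sales_bot"));
// → ["user: I'm interested in Product X"]
```

Because sessions share a key rather than an object reference, any process that can reach the store can resume the agent, which is why the SDK only needs the `agent` name to restore context.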
```typescript
import { AgenticLearning } from '@letta-ai/agentic-learning';

const learningClient = new AgenticLearning();

// Search past conversations
const messages = await learningClient.memory.search({
  agent: "my_agent",
  query: "What are my project requirements?"
});
```
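Conceptually, memory search scores stored messages against the query and returns the relevant ones. A naive keyword-overlap version (purely illustrative; the SDK's actual retrieval is server-side and more sophisticated) looks like:

```typescript
// Naive keyword-overlap search over stored messages; illustrative only.
type Stored = { agent: string; content: string };

function searchMemory(store: Stored[], agent: string, query: string): Stored[] {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  return store
    .filter((m) => m.agent === agent) // memory is scoped per agent
    .map((m) => ({
      m,
      score: terms.filter((t) => m.content.toLowerCase().includes(t)).length,
    }))
    .filter((x) => x.score > 0)
    .sort((a, b) => b.score - a.score) // best match first
    .map((x) => x.m);
}

const store: Stored[] = [
  { agent: "my_agent", content: "Project requirements: TypeScript, Node 18" },
  { agent: "my_agent", content: "Lunch order: burrito" },
  { agent: "other", content: "Project requirements: Go" },
];

console.log(searchMemory(store, "my_agent", "project requirements"));
// matches only the requirements entry for my_agent
```

The per-agent filter mirrors why `memory.search` takes an `agent` parameter: results never leak across agents.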
```typescript
// Store conversations without injecting memory (useful for logging)
await learning({ agent: "my_agent", captureOnly: true }, async () => {
  const response = await client.chat.completions.create(...);
});

// Configure which memory blocks to use
await learning({ agent: "sales_bot", memory: ["customer", "product_preferences"] }, async () => {
  const response = await client.chat.completions.create(...);
});
```
```typescript
import { AgenticLearning, learning } from '@letta-ai/agentic-learning';

// Connect to local server
const learningClient = new AgenticLearning({
  baseUrl: "http://localhost:8283"
});

await learning({ agent: 'my_agent', client: learningClient }, async () => {
  const response = await client.chat.completions.create(...);
});
```
Run Letta locally with Docker:
```shell
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_key" \
  letta/letta:latest
```
See the self-hosting guide for more options.
```shell
# Clone repository
git clone https://github.com/letta-ai/agentic-learning-sdk.git
cd agentic-learning-sdk/typescript

# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

# Run Claude tests (separate runner)
npm run test:claude

# Watch mode
npm run dev
```
See the examples/ directory for complete working examples:
```shell
cd ../examples
npm install
npx tsx openai_example.ts
```
Apache 2.0 - See LICENSE for details.
Built with Letta - the leading platform for building stateful AI agents with long-term memory.