
engram-sdk
Universal memory layer for AI agents
Engram gives AI agents knowledge graphs, consolidation, and spreading activation. Not storage. Understanding.
```bash
npm install -g engram-sdk
engram init
```
That's it. 10 memory tools available via MCP.
```bash
npm install -g engram-sdk
export GEMINI_API_KEY=your-key-here
npx engram-serve
```
Server starts on http://127.0.0.1:3800.
| | Traditional memory | Engram |
|---|---|---|
| Storage | Flat vectors or files | Knowledge graph with typed edges |
| Maintenance | Manual curation | Sleep-cycle consolidation (LLM-powered) |
| Retrieval | You ask, it answers | Spreading activation surfaces context you didn't ask for |
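Spreading activation can be pictured as energy flowing outward from a seed memory along graph edges, decaying at each hop. The toy sketch below illustrates the idea only; it is not Engram's actual algorithm, and the `decay`/`maxHops` parameters merely echo the `spreadDecay`/`spreadHops` query options documented later.

```typescript
// Toy illustration of spreading activation: activation starts at a seed
// memory and decays multiplicatively per hop across graph edges, surfacing
// neighbors the original query never mentioned.
type Graph = Map<string, string[]>;

function spread(
  graph: Graph,
  seed: string,
  decay = 0.5,   // multiplier applied at each hop (cf. spreadDecay)
  maxHops = 2,   // how far activation travels (cf. spreadHops)
): Map<string, number> {
  const activation = new Map<string, number>([[seed, 1]]);
  let frontier = [seed];
  for (let hop = 1; hop <= maxHops; hop++) {
    const next: string[] = [];
    for (const node of frontier) {
      const energy = (activation.get(node) ?? 0) * decay;
      for (const neighbor of graph.get(node) ?? []) {
        if (!activation.has(neighbor)) {
          activation.set(neighbor, energy);
          next.push(neighbor);
        }
      }
    }
    frontier = next;
  }
  return activation;
}
```

A memory two hops from the seed ends up with a quarter of its activation at the default decay of 0.5, which is why loosely related context still surfaces, just ranked lower.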
| Tool | Description |
|---|---|
| engram_remember | Store a memory. Auto-extracts entities and topics. |
| engram_recall | Recall relevant memories via semantic search. |
| engram_briefing | Structured session briefing — key facts, pending commitments, recent activity. |
| engram_consolidate | Run consolidation — distills episodes into semantic knowledge, discovers entities, finds contradictions. |
| engram_surface | Proactive memory surfacing — pushes relevant memories based on current context. |
| engram_connect | Create a relationship between two memories in the knowledge graph. |
| engram_forget | Forget a memory (soft or hard delete). |
| engram_entities | List all tracked entities with memory counts. |
| engram_stats | Vault statistics — memory counts by type, entity count, etc. |
| engram_ingest | Auto-ingest conversation transcripts or raw text into structured memories. |
All endpoints return JSON. Base URL: http://127.0.0.1:3800
POST /v1/memories — Store a memory

```bash
curl -X POST http://localhost:3800/v1/memories \
  -H "Content-Type: application/json" \
  -d '{"content": "User prefers TypeScript over JavaScript", "type": "semantic"}'
```

Response:

```json
{
  "id": "m_abc123",
  "content": "User prefers TypeScript over JavaScript",
  "type": "semantic",
  "entities": ["TypeScript", "JavaScript"],
  "topics": ["programming", "preferences"],
  "salience": 0.7,
  "createdAt": "2025-01-15T10:30:00.000Z"
}
```
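The response shape above maps naturally onto a TypeScript interface. The interface and type guard below are an illustration derived from the sample response, not official SDK types.

```typescript
// Shape of a memory object as shown in the sample response above
// (field names taken from the documented JSON; illustration only).
interface Memory {
  id: string;
  content: string;
  type: "episodic" | "semantic" | "procedural";
  entities: string[];
  topics: string[];
  salience: number;   // importance score between 0 and 1
  createdAt: string;  // ISO-8601 timestamp
}

// Minimal runtime check before trusting a parsed API response.
function isMemory(x: unknown): x is Memory {
  const m = x as Memory;
  return (
    typeof m === "object" && m !== null &&
    typeof m.id === "string" &&
    typeof m.content === "string" &&
    typeof m.salience === "number" &&
    Array.isArray(m.entities)
  );
}
```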
GET /v1/memories/recall — Recall memories

```bash
curl "http://localhost:3800/v1/memories/recall?context=language+preferences&limit=5"
```

Query parameters: context (required), entities, topics, types, limit, spread, spreadHops, spreadDecay, spreadEntityHops

Response:

```json
{
  "memories": [
    {
      "id": "m_abc123",
      "content": "User prefers TypeScript over JavaScript",
      "type": "semantic",
      "salience": 0.7
    }
  ],
  "count": 1
}
```
POST /v1/memories/recall — Recall (complex query)

```bash
curl -X POST http://localhost:3800/v1/memories/recall \
  -H "Content-Type: application/json" \
  -d '{"context": "project setup", "entities": ["React"], "limit": 10, "spread": true}'
```

Response: same shape as GET recall.
DELETE /v1/memories/:id — Forget a memory

```bash
curl -X DELETE "http://localhost:3800/v1/memories/m_abc123?hard=true"
```

Response:

```json
{ "deleted": "m_abc123", "hard": true }
```
GET /v1/memories/:id/neighbors — Graph neighbors

```bash
curl "http://localhost:3800/v1/memories/m_abc123/neighbors?depth=2"
```

Response:

```json
{
  "memories": [ ... ],
  "count": 3
}
```
POST /v1/consolidate — Run consolidation

```bash
curl -X POST http://localhost:3800/v1/consolidate
```

Response:

```json
{
  "consolidated": 5,
  "entitiesDiscovered": 3,
  "contradictions": 1,
  "connectionsFormed": 7
}
```
GET /v1/briefing — Session briefing

```bash
curl "http://localhost:3800/v1/briefing?context=morning+standup&limit=10"
```

Response:

```json
{
  "summary": "...",
  "keyFacts": [{ "content": "...", "salience": 0.9 }],
  "activeCommitments": [{ "content": "...", "status": "pending" }],
  "recentActivity": [{ "content": "..." }]
}
```

Also available as POST /v1/briefing with a JSON body.
GET /v1/stats — Vault statistics

```bash
curl http://localhost:3800/v1/stats
```

Response:

```json
{
  "total": 142,
  "byType": { "episodic": 89, "semantic": 41, "procedural": 12 },
  "entities": 27,
  "edges": 63
}
```
GET /v1/entities — List entities

```bash
curl http://localhost:3800/v1/entities
```

Response:

```json
{
  "entities": [
    { "name": "TypeScript", "count": 12 },
    { "name": "React", "count": 8 }
  ],
  "count": 27
}
```
GET /health — Health check

```bash
curl http://localhost:3800/health
```

Response:

```json
{ "status": "ok", "version": "0.1.0", "timestamp": "2025-01-15T10:30:00.000Z" }
```
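For REST access from Node, the endpoints above can be wrapped in a small client. This is a hypothetical sketch, not part of the official SDK: the class name and methods are invented here, and it assumes Node 18+ for the global `fetch`. Query encoding follows the GET recall example (spaces become `+`).

```typescript
// Hypothetical minimal client for the HTTP API documented above.
interface RecallOptions {
  limit?: number;
  spread?: boolean;
}

class EngramClient {
  constructor(
    private baseUrl = "http://127.0.0.1:3800",
    private token?: string, // optional ENGRAM_AUTH_TOKEN value
  ) {}

  // Bearer auth header is only attached when a token is configured.
  headers(): Record<string, string> {
    const h: Record<string, string> = { "Content-Type": "application/json" };
    if (this.token) h["Authorization"] = `Bearer ${this.token}`;
    return h;
  }

  // Builds the GET recall URL from the documented query parameters.
  recallUrl(context: string, opts: RecallOptions = {}): string {
    const params = new URLSearchParams({ context });
    if (opts.limit !== undefined) params.set("limit", String(opts.limit));
    if (opts.spread) params.set("spread", "true");
    return `${this.baseUrl}/v1/memories/recall?${params}`;
  }

  async remember(content: string, type = "episodic"): Promise<unknown> {
    const res = await fetch(`${this.baseUrl}/v1/memories`, {
      method: "POST",
      headers: this.headers(),
      body: JSON.stringify({ content, type }),
    });
    return res.json();
  }

  async recall(context: string, opts: RecallOptions = {}): Promise<unknown> {
    const res = await fetch(this.recallUrl(context, opts), {
      headers: this.headers(),
    });
    return res.json();
  }
}
```

Usage: `new EngramClient().recall("language preferences", { limit: 5 })` issues the same request as the curl example in the GET recall section.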
```typescript
import { Vault } from 'engram-sdk';

const vault = new Vault({ owner: 'my-agent' });

await vault.remember('User prefers TypeScript');
const memories = await vault.recall('language preferences');
await vault.consolidate();
```
```
engram init                  Set up Engram for Claude Code / Cursor / MCP clients
engram mcp                   Start the MCP server (stdio transport)
engram remember <text>       Store a memory
engram recall <context>      Retrieve relevant memories
engram consolidate           Run memory consolidation
engram stats                 Show vault statistics
engram entities              List known entities
engram forget <id> [--hard]  Forget a memory (soft or hard delete)
engram search <query>        Full-text search
engram export                Export entire vault as JSON
engram eval                  Health report & value assessment
engram repl                  Interactive REPL mode
engram shadow start          Start shadow mode (server + watcher, background)
engram shadow stop           Stop shadow mode
engram shadow status         Check shadow mode status
engram shadow results        Compare Engram vs your CLAUDE.md
```

Options:

```
--db <path>     Database file path (default: ~/.engram/default.db)
--owner <name>  Owner identifier (default: "default")
--agent <id>    Agent ID for source tracking
--json          Output as JSON
--help          Show help
```
Required for embeddings, consolidation, and LLM-powered extraction:

```bash
export GEMINI_API_KEY=your-key-here
```
Engram stores data in ~/.engram/ by default. Override with:

```bash
export ENGRAM_DB_PATH=/path/to/engram.db
```
| Variable | Description | Default |
|---|---|---|
| GEMINI_API_KEY | Gemini API key for embeddings & consolidation | — |
| ENGRAM_LLM_PROVIDER | LLM provider: gemini, openai, anthropic | gemini |
| ENGRAM_LLM_API_KEY | LLM API key (falls back to GEMINI_API_KEY for gemini) | — |
| ENGRAM_LLM_MODEL | LLM model name | provider default |
| ENGRAM_LLM_BASE_URL | Custom API base URL (Groq, Cerebras, Ollama, etc.) | provider default |
| ENGRAM_DB_PATH | SQLite database path | ~/.engram/default.db |
| ENGRAM_OWNER | Vault owner name | default |
| ENGRAM_HOST | Server bind address | 127.0.0.1 |
| PORT | Server port | 3800 |
| ENGRAM_AUTH_TOKEN | Bearer token for API auth | — |
| ENGRAM_CORS_ORIGIN | CORS allowed origin | localhost only |
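The table documents that ENGRAM_LLM_API_KEY falls back to GEMINI_API_KEY for the gemini provider. A sketch of that resolution order, written here for illustration (this is not Engram's source code, and `resolveApiKey` is an invented name):

```typescript
// Illustration of the documented key fallback: ENGRAM_LLM_API_KEY wins if set;
// otherwise GEMINI_API_KEY is used, but only when the provider is gemini.
function resolveApiKey(
  env: Record<string, string | undefined>,
): string | undefined {
  const provider = env.ENGRAM_LLM_PROVIDER ?? "gemini"; // documented default
  if (env.ENGRAM_LLM_API_KEY) return env.ENGRAM_LLM_API_KEY;
  return provider === "gemini" ? env.GEMINI_API_KEY : undefined;
}
```

In practice this means a gemini-only setup needs just GEMINI_API_KEY, while switching ENGRAM_LLM_PROVIDER to openai or anthropic requires an explicit ENGRAM_LLM_API_KEY.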
Benchmarks (LOCOMO):

| System | LOCOMO Score | Tokens/Query |
|---|---|---|
| Engram | 80.0% | 776 |
| Mem0 | 66.9% | — |
| Manual files | 74.5% | 1,373 |
| Full Context | 86.2% | 22,976 |
Full context (dumping entire conversation history) scores highest but uses 30x more tokens and can't scale past context window limits. Engram closes most of the gap while using 96.6% fewer tokens. For comparison, Mem0 (the most popular agent memory system) scores 66.9% on the same benchmark.
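The "30x" and "96.6%" figures follow directly from the table's token counts (776 versus 22,976 per query):

```typescript
// Token-efficiency arithmetic from the benchmark table.
const engramTokens = 776;
const fullContextTokens = 22_976;

const ratio = fullContextTokens / engramTokens;       // roughly 30x more tokens
const savings = 1 - engramTokens / fullContextTokens; // roughly 96.6% fewer
```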
Engram works with Gemini's free API tier, but be aware of its limits: gemini-2.5-flash allows roughly 1,500 requests per day. Engram has built-in retry logic: if you hit a rate limit, it automatically waits and retries up to 3 times, logging a message like:

```
[engram] Gemini embedContent rate limited. Retrying in 33s (attempt 1/3)...
```

If you're making heavy use of Engram (frequent remembers and recalls in quick succession), consider upgrading to a paid Gemini API key for higher limits.
Using Engram in your project? Add the badge to your README:

```markdown
[](https://github.com/tstockham96/engram)
```
You can use, modify, and self-host Engram freely. The one restriction: you cannot offer Engram as a hosted or managed service to third parties. After 4 years, each version converts to Apache 2.0.
For commercial licensing, contact tstockham96@gmail.com.