@transia/engram

Neuroscience-native temporal memory for AI agents. Implements TCM, hippocampal indexing, and middle-out retrieval.

Version: 0.1.0 (latest, npm) · Maintainers: 1

Engram

Neuroscience-native temporal memory for AI agents. Implements Temporal Context Model (TCM), hippocampal indexing, middle-out retrieval, and semantic memory extraction — so agents remember when things happened, what they mean, and how confident they should be.

Why

Traditional vector-similarity memory treats every memory as an isolated point. Engram models memories as a temporal chain with asymmetric contiguity — recalling "C" retrieves "D" (forward) more strongly than "B" (backward), just like human episodic memory.

Repeated patterns get consolidated into semantic memory — context-free knowledge with intelligence-grade confidence scoring (ICD 203 dual-axis). "The user prefers dark mode" is semantic; "The user asked for dark mode on March 5th" is episodic.

Architecture

ENCODING                              CONSOLIDATION
┌──────────────┐                      ┌──────────────────────────┐
│ TCM encode   │──→ MemoryRecord ──→  │ Phase 1: Edge Decay/Prune│
│ HippoIndex   │    TemporalEdge      │ Phase 2: Semantic Extract│
│ EdgeCreation │                      └──────────────────────────┘
│ SchemaGate   │                                   │
└──────────────┘                                   ▼
                                      ┌───────────────────┐
RETRIEVAL                             │ SemanticStore     │
┌──────────────────────┐              │  SemanticNode[]   │
│ MiddleOut (episodic) │◀──────────── │  SemanticEdge[]   │
│ + SemanticLookup     │              └───────────────────┘
└──────────────────────┘

TCM — Maintains a drifting context vector per user. Each new memory blends into context, creating temporal signatures that encode when something was experienced relative to everything else.
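The drift rule above can be sketched in a few lines. This is an illustrative reimplementation, not the library's internals; the blend rate mirrors the `betaEncoding` config parameter, and the vectors are toy 4-dimensional examples:

```typescript
// Toy TCM context drift: each new item vector is blended into the per-user
// context and renormalized, so the context vector carries a signature of
// what was experienced recently.

function normalize(v: number[]): number[] {
  const n = Math.hypot(...v) || 1
  return v.map((x) => x / n)
}

// ctx_new = normalize((1 - beta) * ctx + beta * item)
function driftContext(ctx: number[], item: number[], beta = 0.6): number[] {
  return normalize(ctx.map((c, i) => (1 - beta) * c + beta * item[i]))
}

let ctx = normalize([1, 0, 0, 0])
ctx = driftContext(ctx, [0, 1, 0, 0]) // context now leans toward the new item
```

Because each blend keeps a (1 − β) fraction of the old context, items encoded close together in time end up with similar context signatures, which is what the temporal edges exploit.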

Hippocampal Index — Fast pattern-completion lookup: given a query, find the closest temporal context signature and jump straight to that memory.
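A minimal sketch of that pattern-completion lookup, assuming signatures are compared by cosine similarity (the `IndexEntry` shape and function names are illustrative, not the library's API):

```typescript
// Pattern completion: find the stored temporal signature closest to the
// query vector and jump straight to its memory.

type IndexEntry = { memoryId: string; signature: number[] }

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0)
  return dot / ((Math.hypot(...a) || 1) * (Math.hypot(...b) || 1))
}

function patternComplete(index: IndexEntry[], query: number[]): IndexEntry | undefined {
  let best: IndexEntry | undefined
  let bestSim = -Infinity
  for (const entry of index) {
    const sim = cosine(entry.signature, query)
    if (sim > bestSim) {
      bestSim = sim
      best = entry
    }
  }
  return best
}
```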

Middle-Out Retrieval — From the origin memory, walk forward and backward along temporal edges, scoring by weight × directional bias. Forward edges are stronger (asymmetric contiguity).
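The walk can be sketched as a bounded breadth-first traversal. This is a simplified illustration (the direction label reflects only the last edge taken, and multi-hop scores multiply along the path); `forwardBias` mirrors the recall option of the same name:

```typescript
// Middle-out walk: from the origin memory, follow temporal edges in both
// directions, scoring each neighbor by accumulated score x edge weight x
// directional bias. Forward edges score higher (asymmetric contiguity).

type Edge = { from: string; to: string; weight: number }
type ChainEntry = { memoryId: string; direction: 'forward' | 'backward'; score: number }

function middleOut(edges: Edge[], originId: string, maxHops: number, forwardBias = 2.0): ChainEntry[] {
  const out: ChainEntry[] = []
  const seen = new Set([originId])
  let frontier = [{ id: originId, score: 1 }]
  for (let hop = 0; hop < maxHops; hop++) {
    const next: { id: string; score: number }[] = []
    for (const node of frontier) {
      for (const e of edges) {
        let neighbor: string | null = null
        let direction: 'forward' | 'backward' = 'forward'
        if (e.from === node.id) {
          neighbor = e.to // following the edge forward in time
        } else if (e.to === node.id) {
          neighbor = e.from
          direction = 'backward'
        }
        if (neighbor === null || seen.has(neighbor)) continue
        seen.add(neighbor)
        const score = node.score * e.weight * (direction === 'forward' ? forwardBias : 1)
        out.push({ memoryId: neighbor, direction, score })
        next.push({ id: neighbor, score })
      }
    }
    frontier = next
  }
  return out.sort((a, b) => b.score - a.score)
}
```

With a chain A→B→C→D and an origin of C, the forward neighbor D outscores the backward neighbor B by exactly the bias factor.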

Consolidation — Two-phase background process:

  • Phase 1: Decay edge weights, prune weak edges, boost recently-traversed edges (temporal edge maintenance)
  • Phase 2: Extract semantic knowledge from episodic clusters, promote/reinforce semantic nodes, anti-anchoring decay, progressive compression
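The Phase 1 steps can be sketched as a single pass over the edge list. `decayRate` and `pruneThreshold` mirror the consolidate() options shown later; the `traversalBoost` factor and the `TemporalEdge` shape are illustrative assumptions:

```typescript
// Phase 1 edge maintenance: decay every weight, boost edges traversed
// since the last cycle, then drop edges below the prune threshold.

type TemporalEdge = { from: string; to: string; weight: number; traversedRecently: boolean }

function maintainEdges(
  edges: TemporalEdge[],
  opts = { decayRate: 0.95, pruneThreshold: 0.05, traversalBoost: 1.1 },
): TemporalEdge[] {
  return edges
    .map((e) => ({
      ...e,
      weight: e.weight * opts.decayRate * (e.traversedRecently ? opts.traversalBoost : 1),
      traversedRecently: false, // reset for the next cycle
    }))
    .filter((e) => e.weight >= opts.pruneThreshold)
}
```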

Semantic Memory — Context-free knowledge extracted from repeated episodic patterns via CLS theory (dual learning rates) and BCPNN (Bayesian decontextualization). Confidence scored using ICD 203 dual-axis model with promotion tiers, single-source ceiling, dissent preservation, and anti-anchoring.
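The dual-learning-rate idea from CLS theory can be illustrated with two exponential-moving-average estimators; the specific rates here are arbitrary, chosen only to show the contrast between the fast episodic trace and the slow semantic estimate:

```typescript
// CLS-style dual learning rates: the episodic trace tracks new evidence
// quickly, while the semantic estimate moves slowly, so one-off events
// don't overwrite consolidated knowledge.

const FAST = 0.5  // hippocampal / episodic rate (illustrative)
const SLOW = 0.05 // neocortical / semantic rate (illustrative)

function update(estimate: number, observation: number, rate: number): number {
  return estimate + rate * (observation - estimate)
}

let episodic = 0
let semantic = 0
for (let i = 0; i < 10; i++) {
  episodic = update(episodic, 1, FAST)
  semantic = update(semantic, 1, SLOW)
}
// After 10 identical observations, episodic has nearly converged while
// semantic has moved only partway — repeated evidence is what promotes it.
```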

Install

npm install @transia/engram

Requires Node >= 22.

Usage

Library

import { EngramService } from '@transia/engram'

const engram = EngramService.createInMemory()

// Store memories — order matters
await engram.remember('user-1', 'Started the project')
await engram.remember('user-1', 'Defined the requirements')
await engram.remember('user-1', 'Built the prototype')

// Recall — middle-out retrieval from best match
// Returns both episodic memories and semantic nodes
const result = await engram.recall('user-1', 'requirements', {
  maxResults: 5,
  maxHops: 3,
  forwardBias: 2.0, // forward neighbors scored 2x higher
})

for (const entry of result.chain) {
  const memory = result.memories.find((m) => m.id === entry.memoryId)
  console.log(`[${entry.direction}] ${memory?.content} (score: ${entry.score})`)
}

// Semantic nodes (if any have been consolidated)
if (result.semanticNodes?.length) {
  for (const node of result.semanticNodes) {
    console.log(`[semantic] ${node.content} (tier: ${node.tier}, confidence: ${node.confidence.confidence})`)
  }
}

// Forget
await engram.forget('user-1', result.memories[0].id)

// Consolidate — decay edges + extract semantic knowledge
await engram.consolidate('user-1', {
  decayRate: 0.95,
  pruneThreshold: 0.05,
  semantic: {
    minSourcesForCorroborated: 2,
    contextVariabilityThreshold: 0.3,
  },
})

Custom Storage

Implement IMemoryStore, IEdgeStore, and optionally ISemanticStore to back Engram with MongoDB, Postgres, etc.:

const engram = new EngramService(myMongoMemoryStore, myMongoEdgeStore, {
  contextDimension: 64,
  defaultForwardBias: 2.0,
}, myMongoSemanticStore)

Logging

Engram accepts an injectable logger. By default it is silent, as a library should be. Wire in your own for production observability:

import { EngramService, CONSOLE_LOGGER } from '@transia/engram'
import type { EngramLogger } from '@transia/engram'

// Quick debugging
const debugEngram = EngramService.createInMemory({ logger: CONSOLE_LOGGER })

// Production: wire your own logger
const productionLogger: EngramLogger = {
  warn: (tag, msg, data) => myLogger.warn(`[engram:${tag}] ${msg}`, data),
  error: (tag, msg, data) => myLogger.error(`[engram:${tag}] ${msg}`, data),
  debug: (tag, msg, data) => myLogger.debug(`[engram:${tag}] ${msg}`, data),
}
const engram = EngramService.createInMemory({ logger: productionLogger })

Log tags follow UPPER_SNAKE_CASE convention: ENCODING, ENCODING_SCHEMA_GATE, RETRIEVAL, RECALL, CONSOLIDATION, SEMANTIC_CONSOLIDATE, SEMANTIC_ANTI_ANCHOR, SEMANTIC_CASCADE, SEMANTIC_TIER_PROMOTION, SEMANTIC_BLEND_CENTROIDS.

CLI

# Store memories
engram remember "Started the project" --user alice --importance 8
engram remember "Defined requirements" --user alice

# Recall
engram recall "project" --user alice --max-results 5

# Forget
engram forget <memory-id> --user alice

# Consolidate
engram consolidate --user alice --decay-rate 0.95

# Run the asymmetric contiguity demo
engram demo

Semantic Memory

Semantic memory extracts context-free knowledge from repeated episodic patterns. When the same information appears across multiple conversations/projects, it gets promoted from episodic to semantic.

Promotion Tiers (ICD 203)

| Tier         | Criteria                                                      | Decay Rate |
|--------------|---------------------------------------------------------------|------------|
| raw          | Single observation                                            | Weeks      |
| corroborated | 2+ independent sources                                        | Months     |
| assessed     | 4+ sources, 2+ consolidation cycles, high context variability | Quarters   |
| baseline     | 8+ sources, confidence ≥ 0.8                                  | Years      |
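The tier criteria can be read as a cascade of threshold checks. This sketch uses the SemanticConsolidationConfig defaults and omits the context-variability requirement for the assessed tier, so it is a simplification rather than the library's actual promotion logic:

```typescript
// Tier assignment as a top-down cascade: check the strictest tier first,
// fall through to raw.

type Tier = 'raw' | 'corroborated' | 'assessed' | 'baseline'

function promotionTier(sources: number, cycles: number, confidence: number): Tier {
  if (sources >= 8 && confidence >= 0.8) return 'baseline'
  if (sources >= 4 && cycles >= 2) return 'assessed'
  if (sources >= 2) return 'corroborated'
  return 'raw'
}
```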

Dual-Axis Confidence

Every semantic node has two independent confidence axes:

  • probability: How likely is this true? (0.0–1.0)
  • confidence: How strong is the evidence? (0.0–1.0)

Single-source ceiling: confidence capped at 0.6 when only 1 source exists.
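The ceiling is a simple clamp; this illustrative sketch applies it on the confidence axis only, with the default `singleSourceCeiling` of 0.6:

```typescript
// Single-source ceiling: with one source, evidence strength is capped
// regardless of how probable the claim looks.

function applyCeiling(confidence: number, sourceCount: number, ceiling = 0.6): number {
  return sourceCount <= 1 ? Math.min(confidence, ceiling) : confidence
}
```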

Schema-Gated Encoding

When a new memory matches existing semantic knowledge (cosine similarity ≥ 0.75), it gets fast-path integration — the semantic node is reinforced immediately. Novel memories take the slow episodic path for later consolidation.
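A sketch of that routing decision, assuming memories and semantic nodes are compared as embedding vectors (function names are illustrative; the 0.75 default mirrors `schemaCongruencyThreshold`):

```typescript
// Schema gate: memories close enough to an existing semantic centroid take
// the fast path (immediate reinforcement); novel ones go episodic.

function cosineSim(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0)
  return dot / ((Math.hypot(...a) || 1) * (Math.hypot(...b) || 1))
}

function routeMemory(
  embedding: number[],
  semanticCentroids: number[][],
  threshold = 0.75,
): 'fast-path' | 'episodic' {
  return semanticCentroids.some((c) => cosineSim(embedding, c) >= threshold)
    ? 'fast-path'
    : 'episodic'
}
```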

Anti-Anchoring

Semantic nodes not reinforced by fresh evidence within 30 days automatically decay in confidence. This prevents stale assessments from persisting at artificially high confidence.
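A sketch of the staleness check, using the `staleThresholdDays` and `staleDecayRate` defaults (applied once per consolidation cycle; the function shape is illustrative):

```typescript
// Anti-anchoring: if a node hasn't been reinforced within the stale
// threshold, decay its confidence this cycle; otherwise leave it alone.

const DAY_MS = 86_400_000

function antiAnchor(
  confidence: number,
  lastReinforcedAt: number, // epoch ms
  now: number,              // epoch ms
  staleThresholdDays = 30,
  staleDecayRate = 0.95,
): number {
  const staleDays = (now - lastReinforcedAt) / DAY_MS
  return staleDays > staleThresholdDays ? confidence * staleDecayRate : confidence
}
```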

Dissent Preservation

Contradictory evidence is recorded as dissent on the semantic node, never silently discarded.

Configuration

EngramConfig

| Parameter          | Default     | Description                                 |
|--------------------|-------------|---------------------------------------------|
| contextDimension   | 64          | TCM context vector size                     |
| signatureDimension | 32          | Compressed signature size for index lookup  |
| betaEncoding       | 0.6         | Blend rate when encoding new memories (0–1) |
| betaRetrieval      | 0.4         | Blend rate when recalling memories (0–1)    |
| defaultForwardBias | 2.0         | Forward edge score multiplier               |
| timeScaleMs        | 86400000    | Time normalization scale (1 day)            |
| neighborK          | 3           | Edges created per new memory                |
| logger             | NOOP_LOGGER | Injectable logger (see Logging section)     |

SemanticConsolidationConfig

| Parameter                   | Default | Description                                      |
|-----------------------------|---------|--------------------------------------------------|
| clusterSimilarityThreshold  | 0.7     | Cosine similarity for clustering                 |
| contextVariabilityThreshold | 0.5     | Min distinct contexts for promotion              |
| minSourcesForCorroborated   | 2       | Sources needed for corroborated tier             |
| minSourcesForAssessed       | 4       | Sources needed for assessed tier                 |
| minSourcesForBaseline       | 8       | Sources needed for baseline tier                 |
| minCyclesForAssessed        | 2       | Consolidation cycles for assessed tier           |
| minConfidenceForBaseline    | 0.8     | Confidence threshold for baseline tier           |
| singleSourceCeiling         | 0.6     | Max confidence with single source                |
| staleDecayRate              | 0.95    | Confidence decay per cycle without reinforcement |
| staleThresholdDays          | 30      | Days before anti-anchoring kicks in              |
| compressionThreshold        | 0.85    | Similarity for marking episodic as compressible  |
| schemaCongruencyThreshold   | 0.75    | Similarity for schema-gated fast-path            |

Development

npm test              # run tests
npm run lint          # eslint
npm run lint:fix      # eslint --fix
npm run format        # prettier --write
npm run format:check  # prettier --check
npm run typecheck     # tsc --noEmit
npm run build         # tsc

License

MIT

Keywords

memory

Package last updated on 20 Mar 2026
