LoroRepo is the collection-sync layer that sits above Flock. It keeps document metadata, CRDT bodies, and binary assets coordinated so apps can:
- `repo.listDoc()` and `repo.watch()` expose LWW metadata, so UIs can render collections before bodies arrive.
- `openPersistedDoc()` hands back a repo-managed LoroDoc that persists locally and can sync once or join live rooms; `openDetachedDoc()` is a read-only snapshot.
- `linkAsset()`/`fetchAsset()` dedupe SHA-256 addressed blobs across docs, while `gcAssets()` sweeps unreferenced payloads.
- Bring your own `TransportAdapter`, `StorageAdapter`, and `AssetTransportAdapter` (or use the built-ins below) to target servers, CF Durable Objects, or local-first meshes.
- Events carry provenance (`by: "local" | "sync" | "live"`), so you can react differently to local edits, explicit sync pulls, or realtime merges.

```ts
import { LoroRepo } from "loro-repo";
import { BroadcastChannelTransportAdapter } from "loro-repo/transport/broadcast-channel";
import { IndexedDBStorageAdaptor } from "loro-repo/storage/indexeddb";
type DocMeta = { title?: string; tags?: string[] };
const repo = await LoroRepo.create<DocMeta>({
transportAdapter: new BroadcastChannelTransportAdapter({ namespace: "notes" }),
storageAdapter: new IndexedDBStorageAdaptor({ dbName: "notes-db" }),
});
await repo.sync({ scope: "meta" }); // metadata-first
await repo.upsertDocMeta("note:welcome", { title: "Welcome" });
const handle = await repo.openPersistedDoc("note:welcome");
await handle.syncOnce(); // optional: fetch body once
const room = await handle.joinRoom(); // optional: live updates
handle.doc.getText("content").insert(0, "Hello from LoroRepo");
handle.doc.commit();
room.unsubscribe();
await repo.unloadDoc("note:welcome");
```

- `await LoroRepo.create<Meta>({ transportAdapter?, storageAdapter?, assetTransportAdapter?, docFrontierDebounceMs? })` creates a repo; metadata is hydrated automatically.
- Adapters are optional and swappable via `await repo.setTransportAdapter(adapter)` (useful when booting offline, then enabling realtime once connectivity/auth is ready; see the sketch below).
- Check `repo.hasTransport()` / `repo.hasStorage()` before calling `joinMetaRoom` / `joinDocRoom`.
- The repo is generic over `Meta`. All metadata helpers (`upsertDocMeta`, `getDocMeta`, `listDoc`, `watch`) stay type-safe.
- `repo.sync({ scope: "meta" | "doc" | "full", docIds?: string[] })` pulls remote changes on demand.
- Use `openPersistedDoc(docId)` for repo-managed docs (persisted snapshots + frontier tracking) and `openDetachedDoc(docId)` for isolated snapshots; call `joinDocRoom`/`handle.joinRoom` for live sync, or `unloadDoc`/`flush` to persist and drop cached docs.
- Realtime rooms come from `joinMetaRoom()` / `joinDocRoom(docId)`; the behaviour depends entirely on the transport adapter you injected.
- Asset helpers: `linkAsset`, `uploadAsset`, `fetchAsset` (alias `ensureAsset`), `listAssets`, and `gcAssets({ minKeepMs })`.
- Observe changes with `repo.watch(listener, { docIds, kinds, metadataFields, by })`.
- Call `await repo.destroy()` to flush snapshots and dispose adapters.
- Adapters are shipped as subpath exports so the default `loro-repo` entry stays host-agnostic. Import them directly from their paths, e.g. `loro-repo/transport/websocket` or `loro-repo/storage/indexeddb`.
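For example, a client might boot offline with storage only, then attach the transport once auth is ready; here is a minimal sketch (the `fetchAuthToken` helper is hypothetical):

```ts
import { LoroRepo } from "loro-repo";
import { IndexedDBStorageAdaptor } from "loro-repo/storage/indexeddb";
import { WebSocketTransportAdapter } from "loro-repo/transport/websocket";

declare function fetchAuthToken(): Promise<string>; // hypothetical auth helper

// Boot with storage only: metadata hydrates from the local snapshot.
const repo = await LoroRepo.create({
  storageAdapter: new IndexedDBStorageAdaptor({ dbName: "notes-db" }),
});

// Once connectivity/auth is ready, attach the transport and start syncing.
const token = await fetchAuthToken();
await repo.setTransportAdapter(
  new WebSocketTransportAdapter({
    url: "wss://sync.example.com/repo",
    metadataRoomId: "workspace:meta",
    docAuth: () => token,
  }),
);
if (repo.hasTransport()) {
  await repo.sync({ scope: "meta" }); // metadata-first
  await repo.joinMetaRoom(); // live metadata updates
}
```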
BroadcastChannelTransportAdapter (src/transport/broadcast-channel.ts)
Same-origin peer-to-peer transport that lets browser tabs exchange metadata/doc deltas through the BroadcastChannel API. Perfect for demos, offline PWAs, or local-first UIs; used in the quick-start snippet and the P2P Journal example. Import via loro-repo/transport/broadcast-channel.
WebSocketTransportAdapter (src/transport/websocket.ts)
loro-websocket powered transport for centralized servers or Durable Objects. Provide url, metadataRoomId, and optional auth callbacks and it handles join/sync lifecycles for you:
```ts
import { WebSocketTransportAdapter } from "loro-repo/transport/websocket";
const transport = new WebSocketTransportAdapter({
url: "wss://sync.example.com/repo",
metadataRoomId: "workspace:meta",
docAuth: (docId) => authFor(docId),
onStatusChange: (status) => setConnectionBadge(status),
onRoomStatusChange: ({ roomId, status }) =>
console.info(`room ${roomId} -> ${status}`),
});
// Force an immediate reconnect (resets backoff) when the UI exposes a retry button.
await transport.reconnect({ resetBackoff: true });
// Per-room status callbacks surface reconnect cycles: connecting | joined | reconnecting | disconnected | error.
const live = await repo.joinDocRoom("doc:123", {
onStatusChange: (status) => setRoomState(status),
});
// Subscriptions also expose status + onStatusChange hooks:
live.onStatusChange((status) => setRoomBadge(status));
console.log(live.status); // e.g. "joined"
```
Auto-reconnect (via loro-websocket@0.5.0) is enabled by default with exponential backoff (0.5s → 15s + jitter), pings to detect half-open sockets, offline pause/resume, and automatic rejoin of previously joined rooms. Observe the lifecycle through onStatusChange (adapter-level), onRoomStatusChange (all rooms), or per-join onStatusChange callbacks; feed ping RTTs to telemetry with onLatency. Manual controls: connect({ resetBackoff }) to restart after a fatal close and reconnect() / reconnect({ resetBackoff: true }) to trigger a retry immediately. Frames now use the loro-protocol v1 format (introduced in loro-protocol@0.3.x), so your server endpoint must speak the v1 WebSocket dialect.
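As a sketch, the latency hook and manual reconnect might be wired into an app like this (`recordMetric` and `showRetryButton` are hypothetical app helpers, and the exact `onLatency` callback shape is an assumption):

```ts
import { WebSocketTransportAdapter } from "loro-repo/transport/websocket";

declare function recordMetric(name: string, value: number): void; // hypothetical telemetry helper
declare function showRetryButton(onRetry: () => void): void; // hypothetical UI helper

const transport = new WebSocketTransportAdapter({
  url: "wss://sync.example.com/repo",
  metadataRoomId: "workspace:meta",
  // Assumed shape: ping round-trip time in milliseconds.
  onLatency: (rttMs: number) => recordMetric("sync.rtt_ms", rttMs),
  onStatusChange: (status) => {
    // After a fatal close, surface a retry that resets the backoff schedule.
    if (status === "disconnected") {
      showRetryButton(() => void transport.reconnect({ resetBackoff: true }));
    }
  },
});
```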
IndexedDBStorageAdaptor (src/storage/indexeddb.ts)
Browser storage for metadata snapshots, doc snapshots/updates, and cached assets. Swap it out for SQLite/LevelDB/file-system adaptors when running on desktop or server environments. Import via loro-repo/storage/indexeddb.
FileSystemStorageAdaptor (src/storage/filesystem.ts)
Node-friendly persistence layer that writes metadata snapshots, doc snapshots/updates, and assets to the local file system. Point it at a writable directory (defaults to .loro-repo in your current working folder) when building Electron apps, desktop sync daemons, or tests that need durable state without IndexedDB. Import via loro-repo/storage/filesystem.
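For instance, a Node or Electron process might be configured like this (the `baseDir` option name is an assumption; check the adaptor's constructor options for the exact field):

```ts
import { LoroRepo } from "loro-repo";
import { FileSystemStorageAdaptor } from "loro-repo/storage/filesystem";

// Persist metadata snapshots, doc updates, and assets under a writable directory.
// Omitting the option falls back to .loro-repo in the current working folder.
const repo = await LoroRepo.create({
  storageAdapter: new FileSystemStorageAdaptor({ baseDir: "./var/loro-repo" }), // "baseDir" is assumed
});
```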
Asset transports
Bring your own AssetTransportAdapter (HTTP uploads, peer meshes, S3, etc.). LoroRepo dedupes via SHA-256 assetIds while your adaptor decides how to encrypt/store the bytes.
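As a rough sketch, an HTTP-backed adapter could look like this; the method names and signatures below are illustrative assumptions, so consult the exported `AssetTransportAdapter` type for the real interface:

```ts
// Hypothetical shape: the real AssetTransportAdapter interface may differ.
class HttpAssetTransport {
  constructor(private baseUrl: string) {}

  // Upload bytes under their SHA-256 assetId; the server can no-op on duplicates.
  async uploadAsset(assetId: string, content: Uint8Array): Promise<void> {
    await fetch(`${this.baseUrl}/assets/${assetId}`, { method: "PUT", body: content });
  }

  // Fetch bytes back by assetId; LoroRepo prefers locally cached blobs first.
  async fetchAsset(assetId: string): Promise<Uint8Array> {
    const res = await fetch(`${this.baseUrl}/assets/${assetId}`);
    if (!res.ok) throw new Error(`asset ${assetId} not found`);
    return new Uint8Array(await res.arrayBuffer());
  }
}
```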
Lifecycle
- `await LoroRepo.create<Meta>({ transportAdapter?, storageAdapter?, assetTransportAdapter?, docFrontierDebounceMs? })` – hydrate metadata and initialise adapters.
- `await repo.sync({ scope: "meta" | "doc" | "full", docIds?: string[] })` – pull remote updates on demand.
- `await repo.destroy()` – persist pending work and dispose adapters.

Metadata
- `await repo.upsertDocMeta(docId, patch)` – LWW merge with your `Meta` type.
- `await repo.getDocMeta(docId)` – resolves to `{ meta, deletedAtMs? }` for the stored doc (or `undefined` when it doesn't exist).
- `await repo.listDoc(query?)` – list docs by prefix/range/limit (`RepoDocMeta<Meta>[]`).
- `repo.getMeta()` – access raw Flock if you need advanced scans.

Documents
- `await repo.openPersistedDoc(docId)` – returns `{ doc, syncOnce, joinRoom }`; mutations persist locally and frontiers are written to metadata.
- `await repo.openDetachedDoc(docId)` – isolated snapshot handle (no persistence, no live sync), ideal for read-only tasks.
- `await repo.joinDocRoom(docId, params?)` or `await handle.joinRoom(auth?)` – spawn a realtime session through your transport; use `subscription.unsubscribe()` when done.
- `await repo.unloadDoc(docId)` – flush pending work for a doc and evict it from memory.
- `await repo.flush()` – persist all loaded docs and flush pending frontier updates.

Deletion & retention
- `await repo.deleteDoc(docId, { deletedAt?, force? })` – soft-delete by writing a tombstone (`ts/*`); repeats are no-ops unless `force: true` overwrites the timestamp.
- `await repo.restoreDoc(docId)` – remove the tombstone so the doc can be opened again (idempotent when not deleted).
- `await repo.purgeDoc(docId)` – hard-delete immediately: removes doc bodies, metadata/frontiers, tombstone, and doc→asset links; emits unlink + metadata clear events.
- `await repo.gcDeletedDocs({ minKeepMs?, now? })` – sweep tombstoned docs whose retention window expired; returns the count purged.

Assets
- `await repo.linkAsset(docId, { content, mime?, tag?, policy?, assetId?, createdAt? })` – upload + link, returning the SHA-256 `assetId`.
- `await repo.uploadAsset(options)` – upload without linking to a doc (pre-warm caches).
- `await repo.fetchAsset(assetId)` / `ensureAsset(assetId)` – fetch metadata + lazy `content()` stream (prefers cached blobs).
- `await repo.listAssets(docId)` – view linked assets (`RepoAssetMetadata[]`).
- `await repo.unlinkAsset(docId, assetId)` – drop a link; GC picks up orphans.
- `await repo.gcAssets({ minKeepMs, batchSize })` – sweep stale unlinked blobs via the storage adapter.

Events
- `const handle = repo.watch(listener, { docIds, kinds, metadataFields, by })` – subscribe to `RepoEvent` unions (metadata/frontiers/asset lifecycle) with provenance; see the sketch below.
- `handle.unsubscribe()` – stop receiving events.

Realtime metadata
- `await repo.joinMetaRoom(params?)` – opt into live metadata sync via the transport adapter; call `subscription.unsubscribe()` to leave.
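A sketch of a typical subscription; the event's `by` field matches the provenance values above, while the array-valued `by` filter and the event shape are assumptions, so check the exported `RepoEvent` types:

```ts
// Watch metadata/frontier events for two docs, reacting by provenance.
const watcher = repo.watch(
  (event) => {
    if (event.by === "live") {
      console.log("realtime update", event); // e.g. refresh presence UI
    } else {
      console.log("sync pull", event); // e.g. reconcile list views
    }
  },
  { docIds: ["note:welcome", "note:todo"], by: ["sync", "live"] },
);

// Stop receiving events when the view unmounts.
watcher.unsubscribe();
```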
The repo separates logical deletion from storage reclamation so UIs stay responsive while bytes are cleaned up safely (see the sketch after this list):

- Soft delete (safe delete) – `deleteDoc` writes a tombstone at `ts/<docId>` but keeps metadata (`m/*`), frontiers (`f/*`), doc snapshots/updates, and asset links intact. Tombstoned docs remain readable and joinable; the tombstone is a visibility/retention hint, not an access ban. UIs can still surface the doc (often with a "deleted" badge) and choose whether to join the room. Watchers see doc-soft-deleted events with provenance (`by: "local" | "sync" | "live"`).
- Retention window – a tombstoned doc waits for `deletedDocKeepMs` (default 30 days; configurable when creating the repo). Run `gcDeletedDocs()` periodically (or pass `{ minKeepMs, now }`) to purge anything past its keep window; each purge internally calls `purgeDoc`.
- Hard delete (`purgeDoc` / `gcDeletedDocs`) – removes all local state and triggers storage deletion when supported: it clears the metadata and frontier keys (`m/*`, `f/*`, `ld/*`) and the tombstone from the Flock; `DocManager.dropDoc` evicts cached docs and calls `storage.deleteDoc` (FileSystem/IndexedDB adaptors delete the snapshot plus pending updates); `gcAssets` later removes the binary via `storage.deleteAsset` once its own keep window elapses.
- Remote purge propagation – if a sync/live event shows a peer cleared doc metadata (an empty doc-metadata patch) or removed the tombstone while no metadata remains, `handlePurgeSignals` invokes `dropDoc` locally. This keeps your storage aligned with the authoritative replica even if you never called `purgeDoc` yourself.
- What does not delete storage – soft delete alone never removes bytes. `unloadDoc` only flushes snapshots; it does not delete them. Storage reclaim happens only through `purgeDoc`, `gcDeletedDocs`, or the remote-purge path described above.
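Putting the pieces together, a periodic retention sweep might look like the following sketch; the 30-day constant simply mirrors the default mentioned above:

```ts
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

// 1. Soft delete: writes the ts/<docId> tombstone; bytes stay on disk.
await repo.deleteDoc("note:welcome");

// (Optional) bring the doc back while the retention window is still open:
// await repo.restoreDoc("note:welcome");

// 2. Periodic sweep: hard-deletes tombstoned docs older than the keep window.
const purged = await repo.gcDeletedDocs({ minKeepMs: THIRTY_DAYS_MS });
console.log(`purged ${purged} docs`);

// 3. Reclaim orphaned binaries once their own keep window has elapsed.
await repo.gcAssets({ minKeepMs: THIRTY_DAYS_MS });
```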
| Command | Purpose |
|---|---|
| `pnpm --filter loro-repo typecheck` | Runs `tsc` with `noEmit`. |
| `pnpm --filter loro-repo test` | Executes the Vitest suites. |
| `pnpm --filter loro-repo check` | Runs typecheck + tests. |

Set `LORO_WEBSOCKET_E2E=1` when you want to run the websocket end-to-end spec.
Examples

- P2P Journal (`examples/p2p-journal/`) – Vite + React demo that pairs `BroadcastChannelTransportAdapter` with `IndexedDBStorageAdaptor` for tab-to-tab sync.
- Sync walkthrough (`examples/sync-example.ts`) – Node-based script that sets up two repos, a memory transport hub, and an in-memory filesystem to illustrate metadata-first fetch, selective doc sync, and asset flows.

Contributing

Follow Conventional Commits, run `pnpm --filter loro-repo check` before opening a PR, and reference the "LoroRepo Product Requirements" doc when explaining behavioural changes (metadata-first fetch, pluggable adapters, progressive encryption/GC). Keep generated artifacts in sync and avoid committing build outputs such as `target/`. If you add a new workflow or feature, link the relevant `prd/` entry so the intent stays discoverable.