LoroRepo TypeScript bindings

LoroRepo is the collection-sync layer that sits above Flock. It keeps document metadata, CRDT bodies, and binary assets coordinated so apps can:

  • fetch metadata first, then stream document bodies on demand,
  • reuse the same API across centralized servers, Durable Objects, or peer-to-peer transports,
  • progressively add asset sync, encryption, and garbage collection without changing app code.

What you get

  • Metadata-first coordination – repo.listDoc() and repo.watch() expose LWW metadata so UIs can render collections before bodies arrive.
  • On-demand documents – openPersistedDoc() hands back a repo-managed LoroDoc that persists locally and can sync once or join live rooms; openDetachedDoc() is a read-only snapshot.
  • Binary asset orchestration – linkAsset()/fetchAsset() dedupe SHA-256 addressed blobs across docs, while gcAssets() sweeps unreferenced payloads.
  • Pluggable adapters – supply your own TransportAdapter, StorageAdapter, and AssetTransportAdapter (or use the built-ins below) to target servers, CF Durable Objects, or local-first meshes.
  • Consistent events – every event includes by: "local" | "sync" | "live" so you can react differently to local edits, explicit sync pulls, or realtime merges.

Quick start

import { LoroRepo } from "loro-repo";
import { BroadcastChannelTransportAdapter } from "loro-repo/transport/broadcast-channel";
import { IndexedDBStorageAdaptor } from "loro-repo/storage/indexeddb";

type DocMeta = { title?: string; tags?: string[] };

const repo = await LoroRepo.create<DocMeta>({
  transportAdapter: new BroadcastChannelTransportAdapter({ namespace: "notes" }),
  storageAdapter: new IndexedDBStorageAdaptor({ dbName: "notes-db" }),
});

await repo.sync({ scope: "meta" }); // metadata-first

await repo.upsertDocMeta("note:welcome", { title: "Welcome" });

const handle = await repo.openPersistedDoc("note:welcome");
await handle.syncOnce(); // optional: fetch body once
const room = await handle.joinRoom(); // optional: live updates
handle.doc.getText("content").insert(0, "Hello from LoroRepo");
handle.doc.commit();
room.unsubscribe();
await repo.unloadDoc("note:welcome");

Using the API

  • Create a repo with await LoroRepo.create<Meta>({ transportAdapter?, storageAdapter?, assetTransportAdapter?, docFrontierDebounceMs? }); metadata is hydrated automatically.
  • Swap or attach a transport later by calling await repo.setTransportAdapter(adapter) (useful when booting offline, then enabling realtime once connectivity/auth is ready).
  • Check adapter availability with repo.hasTransport() / repo.hasStorage() before calling joinMetaRoom / joinDocRoom.
  • Define your metadata contract once via the generic Meta. All metadata helpers (upsertDocMeta, getDocMeta, listDoc, watch) stay type-safe.
  • Choose sync lanes with repo.sync({ scope: "meta" | "doc" | "full", docIds?: string[] }) to pull remote changes on demand.
  • Work with documents using openPersistedDoc(docId) for repo-managed docs (persisted snapshots + frontier tracking) and openDetachedDoc(docId) for isolated snapshots; call joinDocRoom/handle.joinRoom for live sync, or unloadDoc/flush to persist and drop cached docs.
  • Join realtime rooms by calling joinMetaRoom() / joinDocRoom(docId); the behaviour depends entirely on the transport adapter you injected.
  • Manage assets through linkAsset, uploadAsset, fetchAsset (alias ensureAsset), listAssets, and gcAssets({ minKeepMs }).
  • React to changes by subscribing with repo.watch(listener, { docIds, kinds, metadataFields, by }).
  • Shut down cleanly via await repo.destroy() to flush snapshots and dispose adapters.
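
Putting several of those pieces together: a client can boot offline with only a storage adapter, attach a transport once connectivity and auth are ready, and then pull changes lane by lane. The endpoint URL and room id below are placeholders.

import { LoroRepo } from "loro-repo";
import { IndexedDBStorageAdaptor } from "loro-repo/storage/indexeddb";
import { WebSocketTransportAdapter } from "loro-repo/transport/websocket";

type DocMeta = { title?: string; tags?: string[] };

// Boot offline: storage only, no transport yet.
const repo = await LoroRepo.create<DocMeta>({
  storageAdapter: new IndexedDBStorageAdaptor({ dbName: "notes-db" }),
});

// Attach the transport later, once connectivity/auth is ready.
if (!repo.hasTransport()) {
  await repo.setTransportAdapter(
    new WebSocketTransportAdapter({
      url: "wss://sync.example.com/repo", // placeholder endpoint
      metadataRoomId: "workspace:meta",
    })
  );
}

// Pull metadata first, then only the doc bodies the UI needs.
await repo.sync({ scope: "meta" });
await repo.sync({ scope: "doc", docIds: ["note:welcome"] });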

Built-in adapters

Adapters are shipped as subpath exports so the default loro-repo entry stays host-agnostic. Import them directly from their paths, e.g. loro-repo/transport/websocket or loro-repo/storage/indexeddb.

  • BroadcastChannelTransportAdapter (src/transport/broadcast-channel.ts)
    Same-origin peer-to-peer transport that lets browser tabs exchange metadata/doc deltas through the BroadcastChannel API. Perfect for demos, offline PWAs, or local-first UIs; used in the quick-start snippet and the P2P Journal example. Import via loro-repo/transport/broadcast-channel.

  • WebSocketTransportAdapter (src/transport/websocket.ts)
    loro-websocket-powered transport for centralized servers or Durable Objects. Provide url, metadataRoomId, and optional auth callbacks, and it handles join/sync lifecycles for you:

    import { WebSocketTransportAdapter } from "loro-repo/transport/websocket";
    
    const transport = new WebSocketTransportAdapter({
      url: "wss://sync.example.com/repo",
      metadataRoomId: "workspace:meta",
      docAuth: (docId) => authFor(docId),
      onStatusChange: (status) => setConnectionBadge(status),
      onRoomStatusChange: ({ roomId, status }) =>
        console.info(`room ${roomId} -> ${status}`),
    });
    
    // Force an immediate reconnect (resets backoff) when the UI exposes a retry button.
    await transport.reconnect({ resetBackoff: true });
    
    // Per-room status callbacks surface reconnect cycles: connecting | joined | reconnecting | disconnected | error.
    const live = await repo.joinDocRoom("doc:123", {
      onStatusChange: (status) => setRoomState(status),
    });
    // Subscriptions also expose status + onStatusChange hooks:
    live.onStatusChange((status) => setRoomBadge(status));
    console.log(live.status); // e.g. "joined"
    

    Auto-reconnect (via loro-websocket@0.5.0) is enabled by default with exponential backoff (0.5s → 15s + jitter), pings to detect half-open sockets, offline pause/resume, and automatic rejoin of previously joined rooms. Observe the lifecycle through onStatusChange (adapter-level), onRoomStatusChange (all rooms), or per-join onStatusChange callbacks; feed ping RTTs to telemetry with onLatency. Manual controls: connect({ resetBackoff }) to restart after a fatal close and reconnect() / reconnect({ resetBackoff: true }) to trigger a retry immediately. Frames now use the loro-protocol v1 format (introduced in loro-protocol@0.3.x), so your server endpoint must speak the v1 WebSocket dialect.
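
    A hedged sketch of those manual controls and telemetry hooks; the onLatency payload is an assumption (a round-trip time in milliseconds), and reportLatency stands in for your own telemetry function:

    const transport = new WebSocketTransportAdapter({
      url: "wss://sync.example.com/repo",
      metadataRoomId: "workspace:meta",
      // Assumption: onLatency receives the measured ping RTT in milliseconds.
      onLatency: (rttMs) => reportLatency(rttMs),
      onStatusChange: (status) => setConnectionBadge(status),
    });

    // After a fatal close, restart the connection and reset the backoff timer.
    await transport.connect({ resetBackoff: true });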

  • IndexedDBStorageAdaptor (src/storage/indexeddb.ts)
    Browser storage for metadata snapshots, doc snapshots/updates, and cached assets. Swap it out for SQLite/LevelDB/file-system adaptors when running on desktop or server environments. Import via loro-repo/storage/indexeddb.

  • FileSystemStorageAdaptor (src/storage/filesystem.ts)
    Node-friendly persistence layer that writes metadata snapshots, doc snapshots/updates, and assets to the local file system. Point it at a writable directory (defaults to .loro-repo in your current working folder) when building Electron apps, desktop sync daemons, or tests that need durable state without IndexedDB. Import via loro-repo/storage/filesystem.
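
    A minimal Node sketch; the directory option name below (baseDir) is an assumption, so check the adaptor's constructor options (omitting it falls back to .loro-repo):

    import { LoroRepo } from "loro-repo";
    import { FileSystemStorageAdaptor } from "loro-repo/storage/filesystem";

    const repo = await LoroRepo.create({
      // Assumption: baseDir names the writable directory; the real option may differ.
      storageAdapter: new FileSystemStorageAdaptor({ baseDir: "./data/loro-repo" }),
    });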

  • Asset transports
    Bring your own AssetTransportAdapter (HTTP uploads, peer meshes, S3, etc.). LoroRepo dedupes via SHA-256 assetIds while your adaptor decides how to encrypt/store the bytes.

Core API surface

Lifecycle

  • await LoroRepo.create<Meta>({ transportAdapter?, storageAdapter?, assetTransportAdapter?, docFrontierDebounceMs? }) – hydrate metadata and initialise adapters.
  • await repo.sync({ scope: "meta" | "doc" | "full", docIds?: string[] }) – pull remote updates on demand.
  • await repo.destroy() – persist pending work and dispose adapters.

Metadata

  • await repo.upsertDocMeta(docId, patch) – LWW merge with your Meta type.
  • await repo.getDocMeta(docId) – resolve to { meta, deletedAtMs? } for the stored doc (or undefined when it doesn’t exist).
  • await repo.listDoc(query?) – list docs by prefix/range/limit (RepoDocMeta<Meta>[]).
  • repo.getMeta() – access raw Flock if you need advanced scans.
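
A short sketch of the metadata helpers, continuing from the quick-start repo; the listDoc query keys below (prefix, limit) are inferred from the description above and may differ from the actual type.

await repo.upsertDocMeta("note:welcome", { title: "Welcome", tags: ["intro"] });

const entry = await repo.getDocMeta("note:welcome");
if (entry && !entry.deletedAtMs) {
  console.log(entry.meta.title); // "Welcome"
}

// Assumption: query keys mirror the "prefix/range/limit" description.
const notes = await repo.listDoc({ prefix: "note:", limit: 20 }); // RepoDocMeta<DocMeta>[]
console.log(`${notes.length} docs in the collection`);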

Documents

  • await repo.openPersistedDoc(docId) – returns { doc, syncOnce, joinRoom }; mutations persist locally and frontiers are written to metadata.
  • await repo.openDetachedDoc(docId) – isolated snapshot handle (no persistence, no live sync) ideal for read-only tasks.
  • await repo.joinDocRoom(docId, params?) or await handle.joinRoom(auth?) – spawn a realtime session through your transport; use subscription.unsubscribe() when done.
  • await repo.unloadDoc(docId) – flush pending work for a doc and evict it from memory.
  • await repo.flush() – persist all loaded docs and flush pending frontier updates.
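
For instance, an export path can read from a detached snapshot while edits go through the persisted handle (continuing from the quick-start repo; assuming the detached handle exposes a doc like the persisted one):

// Persisted handle: edits persist locally and frontiers land in metadata.
const persisted = await repo.openPersistedDoc("note:welcome");
persisted.doc.getText("content").insert(0, "draft: ");
persisted.doc.commit();

// Detached handle: isolated snapshot, no persistence or live sync.
const snapshot = await repo.openDetachedDoc("note:welcome");
console.log(snapshot.doc.getText("content").toString()); // assumption: .doc on detached handles

await repo.flush();                   // persist all loaded docs + pending frontiers
await repo.unloadDoc("note:welcome"); // flush this doc and evict it from memory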

Deletion & retention

  • Tombstoning is explicit and enforced: tombstoned docs stay visible, readable, and joinable; a configurable retention window (default 30 days) governs when they can be purged.
  • await repo.deleteDoc(docId, { deletedAt?, force? }) – soft-delete by writing a tombstone (ts/*); repeats are no-ops unless force: true overwrites the timestamp.
  • await repo.restoreDoc(docId) – remove the tombstone so the doc can be opened again (idempotent when not deleted).
  • await repo.purgeDoc(docId) – hard-delete immediately: removes doc bodies, metadata/frontiers, tombstone, and doc→asset links; emits unlink + metadata clear events.
  • await repo.gcDeletedDocs({ minKeepMs?, now? }) – sweep tombstoned docs whose retention window expired; returns the count purged.
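
For example, continuing from the quick-start repo:

// Soft delete: writes a tombstone; the doc stays readable and joinable.
await repo.deleteDoc("note:welcome");

// Undo: remove the tombstone again (idempotent when not deleted).
await repo.restoreDoc("note:welcome");

// Hard delete right away: drops bodies, metadata/frontiers, and asset links.
await repo.purgeDoc("note:scratchpad");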

Assets

  • await repo.linkAsset(docId, { content, mime?, tag?, policy?, assetId?, createdAt? }) – upload + link, returning the SHA-256 assetId.
  • await repo.uploadAsset(options) – upload without linking to a doc (pre-warm caches).
  • await repo.fetchAsset(assetId) / ensureAsset(assetId) – fetch metadata + lazy content() stream (prefers cached blobs).
  • await repo.listAssets(docId) – view linked assets (RepoAssetMetadata[]).
  • await repo.unlinkAsset(docId, assetId) – drop a link; GC picks up orphans.
  • await repo.gcAssets({ minKeepMs, batchSize }) – sweep stale unlinked blobs via the storage adapter.
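
For example, continuing from the quick-start repo; the fetched handle is assumed to expose the lazy content() described above, and its exact return type (bytes vs. stream) depends on your adapters.

// Upload + link a blob; the returned id is its SHA-256 address.
const assetId = await repo.linkAsset("note:welcome", {
  content: new Uint8Array([104, 105]),
  mime: "application/octet-stream",
});

// Fetch prefers cached blobs; content() is resolved lazily.
const asset = await repo.fetchAsset(assetId);
const payload = await asset.content();

// Inspect links, drop one, and let GC reclaim orphaned blobs later.
const linked = await repo.listAssets("note:welcome"); // RepoAssetMetadata[]
console.log(linked.length, payload);
await repo.unlinkAsset("note:welcome", assetId);
await repo.gcAssets({ minKeepMs: 24 * 60 * 60 * 1000, batchSize: 100 });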

Events

  • const handle = repo.watch(listener, { docIds, kinds, metadataFields, by }) – subscribe to RepoEvent unions (metadata/frontiers/asset lifecycle) with provenance.
  • handle.unsubscribe() – stop receiving events.
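
Because every event carries provenance, a listener can separate remote changes from local ones without extra options; a minimal sketch, continuing from the quick-start repo:

const subscription = repo.watch(
  (event) => {
    if (event.by !== "local") {
      // Arrived via an explicit sync pull or a live room.
      console.log("remote change", event);
    }
  },
  { docIds: ["note:welcome"] }
);

// Later, stop receiving events.
subscription.unsubscribe();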

Realtime metadata

  • await repo.joinMetaRoom(params?) – opt into live metadata sync via the transport adapter; call subscription.unsubscribe() to leave.
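
A minimal sketch; whether the room needs params depends entirely on the transport adapter you injected:

const metaRoom = await repo.joinMetaRoom();
// Live metadata changes now surface through repo.watch() with by: "live".
metaRoom.unsubscribe();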

Doc deletion lifecycle (soft vs. hard)

The repo separates logical deletion from storage reclamation so UIs stay responsive while bytes are cleaned up safely:

  • Soft delete (safe delete) — deleteDoc writes a tombstone at ts/<docId> but keeps metadata (m/*), frontiers (f/*), doc snapshots/updates, and asset links intact. Tombstoned docs remain readable and joinable; the tombstone is a visibility/retention hint, not an access ban. UIs can still surface the doc (often with a “deleted” badge) and choose whether to join the room. Watchers see doc-soft-deleted events with provenance (by: "local" | "sync" | "live").

  • Retention window — A tombstoned doc waits for deletedDocKeepMs (default 30 days; configurable when creating the repo). Run gcDeletedDocs() periodically (or pass { minKeepMs, now }) to purge anything past its keep window; each purge internally calls purgeDoc.

  • Hard delete (purgeDoc / gcDeletedDocs) — Removes all local state and triggers storage deletion when supported:

    • Clears metadata/frontiers/link rows (m/*, f/*, ld/*) and the tombstone from the Flock.
    • DocManager.dropDoc evicts cached docs and calls storage.deleteDoc (FileSystem/IndexedDB adaptors delete the snapshot plus pending updates).
    • Asset links are removed; assets that become orphaned are marked, and gcAssets later removes the binary via storage.deleteAsset once its own keep window elapses.

  • Remote purge propagation — If a sync/live event shows a peer cleared doc metadata (empty doc-metadata patch) or removed the tombstone while no metadata remains, handlePurgeSignals invokes dropDoc locally. This keeps your storage aligned with the authoritative replica even if you never called purgeDoc yourself.

  • What does not delete storage — Soft delete alone never removes bytes. unloadDoc only flushes snapshots; it does not delete them. Storage reclaim happens only through purgeDoc, gcDeletedDocs, or the remote-purge path described above.
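
One way to operationalize this is a periodic maintenance sweep that runs both collectors; the interval and asset keep window below are illustrative, not package defaults.

const DAY_MS = 24 * 60 * 60 * 1000;

setInterval(async () => {
  // Purge tombstoned docs whose retention window (deletedDocKeepMs) has expired.
  const purged = await repo.gcDeletedDocs();
  // Then sweep orphaned blobs that have been unlinked for at least a week.
  await repo.gcAssets({ minKeepMs: 7 * DAY_MS, batchSize: 50 });
  console.log(`maintenance sweep purged ${purged} docs`);
}, DAY_MS);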

Commands

  • pnpm --filter loro-repo typecheck – runs tsc with noEmit.
  • pnpm --filter loro-repo test – executes the Vitest suites.
  • pnpm --filter loro-repo check – runs typecheck + tests.

Set LORO_WEBSOCKET_E2E=1 when you want to run the websocket end-to-end spec.

Examples

  • P2P Journal (examples/p2p-journal/) – Vite + React demo that pairs BroadcastChannelTransportAdapter with IndexedDBStorageAdaptor for tab-to-tab sync.
  • Sync script (examples/sync-example.ts) – Node-based walkthrough that sets up two repos, a memory transport hub, and an in-memory filesystem to illustrate metadata-first fetch, selective doc sync, and asset flows.

Contributing

Follow Conventional Commits, run pnpm --filter loro-repo check before opening a PR, and reference the “LoroRepo Product Requirements” doc when explaining behavioural changes (metadata-first fetch, pluggable adapters, progressive encryption/GC). Keep generated artifacts in sync and avoid committing build outputs such as target/. If you add a new workflow or feature, link the relevant prd/ entry so the intent stays discoverable.
