
EventDBX is a local-first, single-tenant, write-side event database for developers who want an auditable system of record without standing up a full event platform.
It stores every accepted change as an immutable event, materializes current aggregate state from that history, and keeps the write path explicit and schema-checked. In practice, that makes it a good fit for applications where state transitions matter more than ad hoc document updates: order flows, workflow engines, internal tools, operational systems, and domain models that need replayable history.
The core model is intentionally narrow:

- a single logical domain pinned to `default`
- workspace configuration and runtime data under `.dbx/config.toml`
- schemas authored in `schema.dbx`

What the supported core provides:

- an immutable event log with materialized aggregate state
- schema-checked writes compiled from `schema.dbx`
- automatic snapshots, token-based auth, and a pull-based outbox

What this lean build does not try to be:

- a multi-tenant or multi-domain platform
- a managed peer-replication or read-side query system
```
dbx init
```
This creates ./.dbx and stores both configuration and runtime data there. Runtime commands discover the nearest .dbx/config.toml by walking up from the current directory, similar to Git repository discovery.
dbx init also emits a CLI bootstrap token and persists it under .dbx/cli.token; the default init-issued token expires after 86,400 seconds.
Use --ttl to override that bootstrap token lifetime with either raw seconds or human suffixes like 10m, 24h, 10d, or 2w. Use --ttl 0 to issue a non-expiring bootstrap token.
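The `--ttl` shorthand can be pictured as a small parser. This is an illustrative sketch, not the CLI's actual implementation:

```python
# Illustrative sketch of the --ttl shorthand described above: raw seconds,
# or a number with an m/h/d/w suffix. Not EventDBX's actual code.

def parse_ttl(value: str) -> int:
    """Return a TTL in seconds; 0 means a non-expiring token."""
    multipliers = {"m": 60, "h": 3600, "d": 86400, "w": 604800}
    value = value.strip().lower()
    if value and value[-1] in multipliers:
        return int(value[:-1]) * multipliers[value[-1]]
    return int(value)  # raw seconds

print(parse_ttl("24h"))  # the default init-issued lifetime: 86400 seconds
print(parse_ttl("2w"))   # 1209600 seconds
print(parse_ttl("0"))    # non-expiring
```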
```
dbx serve start --foreground
```
Use dbx serve status to inspect the daemon, dbx serve restart to restart it, and dbx serve stop when you want to shut it down.
The published Docker image sets EVENTDBX_AUTO_INIT=1, so a first run against an empty /var/lib/eventdbx volume will create /var/lib/eventdbx/.dbx/config.toml automatically before the server starts.
schema.dbx:

```
aggregate order {
  snapshot_threshold = 100

  field status {
    type = "text"
  }

  event order_created {
    fields = ["status"]
  }

  event order_paid {
    fields = ["status"]
  }
}
```
```
dbx schema validate
dbx schema show
dbx schema show order
```
Schema commands discover the nearest schema.dbx in the current directory tree. Pass --file <PATH> to dbx schema validate, dbx schema show, or dbx schema apply to override discovery.
dbx schema show is a deterministic preview of the compiled runtime shape. It intentionally normalizes volatile timestamps so unchanged source renders the same output across runs. Pass an aggregate name to render only that aggregate's compiled schema.
```
dbx schema apply
```

```
dbx token bootstrap --stdout
dbx token generate --group ops --user alice --action aggregate.read --action aggregate.append --json
```

```
dbx aggregate create order order-42 --event order_created --field status=open --json
dbx aggregate apply order order-42 order_paid --field status=paid
dbx aggregate get order order-42 --include-events
dbx aggregate list order --json
dbx events order order-42 --json
dbx aggregate verify order order-42 --json
```
Automatic snapshots are still part of the core write path. Set snapshot_threshold in schema.dbx; EventDBX will create snapshots internally after qualifying writes. The standalone snapshot command surface has been removed.
schema.dbx is the authoring format for runtime schema compilation. The compiler parses a small DSL whose assigned values are JSON.
Top-level grammar:
```
# comments with '#' or '//'
aggregate order {
  snapshot_threshold = 100

  field status {
    type = "text"
    rules = {"required": true}
  }

  event order_created {
    fields = ["status"]
  }
}
```
Rules of the format:
- Top-level declarations are `aggregate <name> { ... }` blocks.
- `# ...` and `// ...` comments are supported.
- Use snake_case names in practice. Quoted names are accepted by the parser, but snake_case names are the safest choice for CLI usage and reference compatibility.
Each aggregate block supports:

| Property | Type | Meaning |
|---|---|---|
| `snapshot_threshold` | integer or null | Enables automatic snapshots every N qualifying writes. |
| `locked` | boolean | Rejects writes for the aggregate when true. |
| `hidden` | boolean | Marks the aggregate hidden in schema metadata. |
| `hidden_fields` | array of strings | Marks named fields as hidden in schema metadata. |
| `field_locks` | array of strings | Rejects writes that attempt to mutate those fields. |
| `field <name> { ... }` | block | Declares field types and validation rules. |
| `event <name> { ... }` | block | Declares allowed event names and event-level field constraints. |
Each field block supports:
| Property | Type | Meaning |
|---|---|---|
| `type` | string | Required. Declares the field column type or type alias. |
| `rules` | JSON object | Optional validation rules. Must be a JSON object when present. |
| `hidden` | boolean | Adds the field to the aggregate's hidden field list. |
| `locked` | boolean | Adds the field to the aggregate's locked field list. |
Supported field types and aliases:
| Type | Accepted values |
|---|---|
| integer | integer, int |
| float | float, double |
| decimal | decimal(p,s), numeric(p,s) |
| boolean | boolean, bool |
| text | text, string |
| timestamp | timestamp |
| date | date |
| json | json |
| binary | binary, bytes |
| object | object |
| reference alias | ref, reference, aggregate_ref, aggregate-reference |
Reference aliases compile to a text column with format = "reference" semantics.
Each event block supports:
| Property | Type | Meaning |
|---|---|---|
| `fields` | array of strings | Optional event field allowlist. When non-empty, these fields become both the required set and the permit list for that event payload. |
| `note` | string or null | Optional default note for the event. Maximum length is 128 characters. |
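The allowlist semantics above (the declared fields are both the required set and the permit list) can be sketched like this. It is illustrative only, not EventDBX's implementation:

```python
# Sketch of the event field allowlist: when an event declares a non-empty
# fields list, the payload must contain exactly those fields. An empty or
# absent list (e.g. `event person_updated {}`) imposes no constraint.

def check_event_payload(allowed: list[str], payload: dict) -> list[str]:
    errors: list[str] = []
    if allowed:  # non-empty allowlist: required set AND permit list
        missing = set(allowed) - payload.keys()
        extra = payload.keys() - set(allowed)
        errors += [f"missing field: {f}" for f in sorted(missing)]
        errors += [f"field not permitted: {f}" for f in sorted(extra)]
    return errors

print(check_event_payload(["status"], {"status": "open"}))  # []
print(check_event_payload(["status"], {"note": "x"}))       # two errors
```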
`rules` must be a JSON object. Supported keys:

| Rule | Type | Meaning |
|---|---|---|
| `required` | boolean | Field must be present. |
| `contains` | array of strings | Text value must contain every listed substring. |
| `does_not_contain` | array of strings | Text value must not contain any listed substring. |
| `regex` | array of strings | Text value must match every listed regular expression. |
| `length` | object | Length limits for text or binary values. Supports `min` and `max`. |
| `range` | object | Range limits for integer, float, decimal, timestamp, or date values. Supports `min` and `max`. |
| `format` | string | Built-in semantic validator. |
| `reference` | object | Additional constraints for reference-valued text fields. |
| `properties` | object | Nested property definitions for structured object fields. |
Supported format values:
`email`, `url`, `credit_card`, `country_code`, `iso8601`, `wgs_84`, `camel_case`, `snake_case`, `kebab_case`, `pascal_case`, `upper_case_snake_case`, `reference`

Important validation constraints:

- `length` applies only to text and binary values
- `range` applies only to integer, float, decimal, timestamp, and date values
- `format = "reference"` requires a text field
- `reference` rules without `format = "reference"` are invalid
- `properties` is only accepted on `object` or `json` field types
- use `object` for recursive nested validation and nested reference normalization

Reference-valued fields use a text column with `format = "reference"` and optional `reference` rules:
```
field manager {
  type = "reference"
  rules = {
    "format": "reference",
    "reference": {
      "aggregate_type": "person",
      "integrity": "strong",
      "cascade": "restrict"
    }
  }
}
```
Supported reference keys:
| Key | Type | Meaning |
|---|---|---|
| `integrity` | string | `strong` or `weak`. `strong` rejects missing/forbidden targets; `weak` allows unresolved targets. |
| `aggregate_type` | string | Restricts the reference to a specific aggregate type. |
| `cascade` | string | `none`, `restrict`, or `nullify`. Stored in schema metadata for downstream reference handling. |
Accepted reference string shapes:
- `aggregate#id`
- `#id`

Legacy `domain#aggregate#id` values are still accepted during reads, but reference values are canonicalized to `aggregate#id` during normalization. Invalid shapes, wrong target aggregate, or unresolved strong references fail validation.
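The canonicalization described above can be sketched as follows; this is an illustrative helper, not the engine's code:

```python
# Sketch of reference-string canonicalization: a legacy three-part
# domain#aggregate#id collapses to aggregate#id, two-part values pass
# through, and anything else is rejected. Illustrative only.

def canonicalize_reference(value: str) -> str:
    parts = value.split("#")
    if len(parts) == 3:  # legacy domain#aggregate#id
        return f"{parts[1]}#{parts[2]}"
    if len(parts) == 2:  # already canonical
        return value
    raise ValueError(f"invalid reference shape: {value!r}")

print(canonicalize_reference("default#person#p-1"))  # person#p-1
print(canonicalize_reference("person#p-1"))          # person#p-1
```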
Use properties to validate nested fields inside structured values:
```
field profile {
  type = "object"
  rules = {
    "properties": {
      "country": {
        "type": "text",
        "format": "country_code"
      },
      "nickname": {
        "type": "text",
        "length": {"max": 32}
      }
    }
  }
}
```
Inside properties, each entry uses JSON schema settings:
"postal_code": "text""type" plus flattened rule keys such as "country": {"type": "text", "format": "country_code"}aggregate person {
snapshot_threshold = 100
locked = false
hidden = false
hidden_fields = ["internal_notes"]
field_locks = ["id"]
field email {
type = "text"
rules = {"required": true, "format": "email"}
}
field manager {
type = "reference"
hidden = true
locked = true
rules = {
"format": "reference",
"reference": {
"tenant": "default",
"aggregate_type": "person",
"integrity": "strong",
"cascade": "restrict"
}
}
}
field profile {
type = "object"
rules = {
"properties": {
"country": {
"type": "text",
"format": "country_code"
},
"nickname": {
"type": "text",
"length": {"max": 32}
}
}
}
}
event person_created {
fields = ["email", "manager"]
note = "Initial import"
}
event person_updated {}
}
Common rules examples:
```
field email {
  type = "text"
  rules = {"required": true, "format": "email"}
}

field sku {
  type = "text"
  rules = {
    "format": "upper_case_snake_case",
    "regex": ["^[A-Z0-9_]+$"],
    "length": {"min": 3, "max": 32}
  }
}

field amount {
  type = "decimal(12,2)"
  rules = {"range": {"min": "0.00", "max": "9999999999.99"}}
}

field effective_at {
  type = "timestamp"
  rules = {"format": "iso8601", "range": {"min": "2026-01-01T00:00:00Z"}}
}

field attachment {
  type = "binary"
  rules = {"length": {"max": 1048576}}
}
```
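Enforcement of text rules like the `sku` example can be pictured like this. It is an illustrative sketch assuming the semantics in the rules table, not EventDBX's validator:

```python
import re

# Illustrative check mirroring the sku rules: every regex entry must
# match, and length bounds apply to the text value. Not EventDBX's code.

def validate_text(value: str, rules: dict) -> list[str]:
    errors: list[str] = []
    for pattern in rules.get("regex", []):
        if not re.search(pattern, value):
            errors.append(f"regex failed: {pattern}")
    length = rules.get("length", {})
    if "min" in length and len(value) < length["min"]:
        errors.append("too short")
    if "max" in length and len(value) > length["max"]:
        errors.append("too long")
    return errors

sku_rules = {"regex": ["^[A-Z0-9_]+$"], "length": {"min": 3, "max": 32}}
print(validate_text("ACME_SKU_01", sku_rules))  # []
print(validate_text("ab", sku_rules))           # regex + length errors
```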
`dbx schema validate` checks the source file, and `dbx schema show` prints the normalized compiled runtime schema rather than echoing raw source text.
EventDBX exposes a pull-based control-socket outbox for downstream replication.
Contract:
- `readOutbox(afterEventId?, take?) -> { eventsJson, nextAfterEventId }`
- batches are ordered and paginated by `event_id`
- `afterEventId` is exclusive
- `nextAfterEventId` is the checkpoint a replicator should persist

Recommended replication shape:

- poll `readOutbox` from the last persisted checkpoint
- apply the returned events downstream
- persist `nextAfterEventId` only after a successful apply, then repeat
This replaces built-in peer replication. The core daemon does not manage destination config, retries, scheduling, or bidirectional reconciliation.
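A minimal pull loop against this contract might look like the following. `read_outbox` here is a stand-in for however your client invokes `readOutbox` over the control socket; the stub below only exists to make the sketch self-contained:

```python
import json

# Minimal replicator sketch for the pull-based outbox contract above.
# read_outbox stands in for a real control-socket call; the checkpoint
# discipline (persist nextAfterEventId only after a successful apply)
# is the part that matters.

def replicate(read_outbox, apply_batch, load_checkpoint, save_checkpoint, take=100):
    after = load_checkpoint()                  # last durable checkpoint
    while True:
        result = read_outbox(after, take)      # afterEventId is exclusive
        events = json.loads(result["eventsJson"])
        if not events:
            break                              # caught up
        apply_batch(events)                    # deliver downstream first
        after = result["nextAfterEventId"]
        save_checkpoint(after)                 # then persist the checkpoint

# In-memory stub of the contract, for illustration only:
log = [{"event_id": i} for i in range(1, 6)]

def read_outbox(after, take):
    batch = [e for e in log if e["event_id"] > (after or 0)][:take]
    nxt = batch[-1]["event_id"] if batch else after
    return {"eventsJson": json.dumps(batch), "nextAfterEventId": nxt}

state = {"ckpt": 0, "applied": []}
replicate(read_outbox, state["applied"].extend,
          lambda: state["ckpt"], lambda c: state.__setitem__("ckpt", c), take=2)
print(state["ckpt"], len(state["applied"]))  # 5 5
```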
Supported commands:
- `dbx init`
- `dbx serve start|stop|status|restart|destroy`
- `dbx config`
- `dbx token generate|list|revoke|refresh|bootstrap`
- `dbx schema validate|show|apply`
- `dbx aggregate create|apply|list|get|verify`
- `dbx events`

Operational notes:

- Daemon lifecycle is managed under `dbx serve`. The old top-level commands `dbx start`, `dbx stop`, `dbx status`, `dbx restart`, and `dbx destroy` are no longer supported.
- `dbx serve start --restrict [off|default|strict]` controls schema enforcement mode for the running daemon.
- `dbx serve destroy` removes the active `.dbx` workspace directory and prompts for confirmation unless `--yes` is passed.
- `dbx serve destroy` does not imply that an external `data_dir` outside the workspace will be removed.
- `dbx schema validate|show|apply` discover the nearest `schema.dbx` by default and accept `--file <PATH>` to override.
- `dbx schema show [AGGREGATE]` renders either the full compiled schema set or a single aggregate preview.

Lifecycle examples:
```
dbx serve start
dbx serve status
dbx serve restart --foreground
dbx serve stop
dbx serve destroy --yes
```
Run dbx config with no update flags to print the current lean runtime config as TOML.
Writable configuration flags are:
- `--port`
- `--data-dir`
- `--cache-threshold`
- `--data-encryption-key`
- `--list-page-size`
- `--page-limit`
- `--bind-addr`
- `--snowflake-worker-id`
- `--noise` / `--no-noise`

The default config location is the nearest `.dbx/config.toml` in the current directory tree.
Workspace initialization generates auth keys automatically; the printed config includes the auth block even though key material is not configured through dedicated dbx config flags.
Schema enforcement mode is a runtime/startup concern controlled via dbx serve start --restrict, not dbx config.
EventDBX now runs as a single-tenant core pinned to the logical domain default.
This build refuses to start against legacy multi-domain or multi-tenant layouts.
Examples of rejected state:
- a `config.toml` selecting a non-`default` domain
- a `[tenants]` section with `multi_tenant = true`

The error is intentional. Export/import the data or run a dedicated migration before upgrading to the lean single-tenant build.