
# logs-gateway
A standardized logging gateway for Node.js applications. Flexible multi-transport logging with console, file, and unified-logger outputs; ENV-first configuration; PII/credentials sanitization; dual correlation trails (operation & thread); OpenTelemetry context; YAML/JSON/text formats; per-run "Shadow Logging" for test/debug capture; scoping with text filters; story output powered by scopeRecord; and troubleshooting integration with nx-troubleshooting.
This component supports zero-config initialization via environment variables and is compliant with the Env-Ready Component Standard (ERC 2.0).
## .env contract (libraries)

For per-package log thresholds, logs-gateway follows `{PREFIX}_LOGS_LEVEL` (canonical). `{PREFIX}_LOG_LEVEL` remains supported when `_LOGS_LEVEL` is not set in the environment. Omitting both keys defaults to `warn` (warn and error only); set `{PREFIX}_LOGS_LEVEL=off` to silence. Cross-cutting options (console, file, format) stay host-level. Full detail: docs/package-usage.md (published on npm). Helpers: `resolvePackageLogsLevel`, `parsePackageLogsLevelString`, `packageLogsLevelEnvKey`.
ERC artifacts:

- `erc-manifest.json` with all requirements
- `.env.example` with all transitive requirements

## Quick start

```bash
# 1. Install the package
npm install logs-gateway

# 2. Set environment variables (replace MY_APP with your prefix)
export MY_APP_LOG_TO_CONSOLE=true
export MY_APP_LOGS_LEVEL=warn
export MY_APP_LOG_FORMAT=json
```

```typescript
// 3. Use with zero config!
import { createLogger } from 'logs-gateway';

const logger = createLogger(
  { packageName: 'MY_APP', envPrefix: 'MY_APP' }
); // Auto-discovers from process.env
```
Or pass explicit config:

```typescript
import { createLogger } from 'logs-gateway';

const logger = createLogger(
  { packageName: 'MY_APP', envPrefix: 'MY_APP' },
  {
    logToFile: true,
    logFilePath: '/var/log/myapp.log',
    logLevel: 'info'
  }
);
```
Note: This component uses dynamic environment variable prefixes based on packageConfig.envPrefix. Replace {PREFIX} with your actual prefix (e.g., MY_APP, API_SERVICE).
See .env.example for the complete list of required and optional variables with descriptions. Generate it by running:
```bash
npm run generate-erc
```
## Features

- Multi-transport output: console, file, and unified-logger (`@x-developer/unified-logger`)
- Levels: `verbose`, `debug`, `info`, `warn`, `error`
- Formats: `text`, `json`, `yaml` (console & file; unified stays JSON)
- Correlation fields: `runId`, `jobId`, `correlationId`, `sessionId`
- OpenTelemetry context (`traceId`, `spanId`) when available
- Optional `nx-troubleshooting` for intelligent error-to-solution matching

## Installation

```bash
npm install logs-gateway
```

For troubleshooting integration:

```bash
npm install logs-gateway nx-troubleshooting
```
```typescript
import { createLogger } from 'logs-gateway';

const logger = createLogger(
  { packageName: 'MY_APP', envPrefix: 'MY_APP' },
  {
    logToFile: true,
    logFilePath: '/var/log/myapp.log',
    logLevel: 'info',  // verbose|debug|info|warn|error
    logFormat: 'json', // text|json|yaml (yaml: console/file only)
    enableUnifiedLogger: true,
    unifiedLogger: {
      transports: { papertrail: true },
      service: 'my-app',
      env: 'production'
    },
    // Optional: per-run Shadow Logging defaults (can also be enabled at runtime)
    shadow: { enabled: false, ttlMs: 86_400_000 }, // 1 day
    // Optional: Scoping & Troubleshooting
    scoping: {
      enabled: true,
      errorScoping: { enabled: true, windowMsBefore: 60_000, windowMsAfter: 30_000 },
      buffer: { maxEntries: 5000, preferShadow: true }
    },
    troubleshooting: {
      enabled: true,
      narrativesPath: './metadata/troubleshooting.json',
      output: { formats: ['markdown'], emitAsLogEntry: true }
    }
  }
);

// Standard usage
logger.verbose('Very detailed info', { step: 'init' });
logger.debug('Debug info', { data: 'x' });
logger.info('Application initialized', { version: '1.0.0' });
logger.warn('Deprecated feature used');
logger.error('Error occurred', { error: new Error('boom') });
```
logs-gateway automatically detects and includes your application's package name and version in every log entry. This is done by reading the consuming project's package.json file (the project using logs-gateway, not logs-gateway itself).
When you create a logger instance, logs-gateway automatically:
- Searches upward from `process.cwd()` to find the nearest `package.json`
- Reads the `name` and `version` fields from that file
- Includes them in every log entry as `appName` and `appVersion`

This happens automatically - no configuration needed! The detection is cached after the first call for performance.
If your project's package.json contains:
```json
{
  "name": "my-awesome-app",
  "version": "2.1.0"
}
```
Then all log entries will automatically include:
```json
{
  "timestamp": "2025-01-15T10:30:45.123Z",
  "package": "MY_APP",
  "level": "INFO",
  "message": "Application initialized",
  "appName": "my-awesome-app",
  "appVersion": "2.1.0",
  "data": { ... }
}
```
Detection rules:

- The search starts at `process.cwd()` (the working directory where your app runs)
- It uses the nearest `package.json` that is not in `node_modules`
- If no `package.json` is found, `appName` and `appVersion` are simply omitted (no error)

This extension adds four major capabilities:
1. Runtime Filtering (`logger-debug.json`) - Filter logs at the source before they're written. Place `logger-debug.json` at your project root to filter logs by identity or application name. This reduces noise at runtime and complements post-processing scoping.
2. Scoping - Given a verbose log stream, derive a focused subset of logs relevant to a problem or question. Scopes can be error-based (a time window around an error entry) or key-based (`runId`, `correlationId`, etc.). Scoping always uses verbose logs if available, regardless of the current log level.
3. Story vs Full Data Output - A scope can be returned as a raw `ScopedLogView` with all entries, or as a human-readable "story" built with the `scopeRecord` helper.
4. Troubleshooting Integration - Wire in `nx-troubleshooting` so that errors/scopes are matched to known solutions (reports can be attached to the scope or emitted as log entries tagged `troubleshooting`).

The same `scopeRecord` helper is also exported as a generic tool for scoping arbitrary JSON records (not just logs).
```typescript
const logger = createLogger(
  { packageName: 'MY_PACKAGE', envPrefix: 'MY_PACKAGE', debugNamespace: 'my-pkg' },
  {
    // Outputs
    logToConsole: true,              // default: true
    logToFile: false,                // default: false
    logFilePath: '/var/log/app.log', // required if logToFile
    enableUnifiedLogger: false,      // default: false
    unifiedLogger: { /* ... */ },

    // Behavior
    logLevel: 'info',             // verbose|debug|info|warn|error (default: info)
    logFormat: 'text',            // text|json|yaml (default: text)
    defaultSource: 'application', // fallback source tag

    // Sanitization (opt-in)
    sanitization: {
      enabled: false, // default: false
      maskWith: '[REDACTED]',
      keysDenylist: ['authorization','password','secret','api_key'],
      fieldsHashInsteadOfMask: ['userId'],
      detectJWTs: true
      // + other detectors & guardrails
    },

    // Trails & tracing (optional; safe no-ops if not used)
    trails: {
      enableDepthTrail: true,  // operation trail
      enableThreadTrail: true, // causal/thread trail
      injectHeaders: true,     // for HTTP/queues adapters
      extractHeaders: true
    },
    tracing: { enableOtelContext: true },

    // Shadow Logging (per-run capture; can also be toggled at runtime)
    shadow: {
      enabled: false,             // default: false
      format: 'json',             // json|yaml (default: json)
      directory: './logs/shadow', // default path
      ttlMs: 86_400_000,          // 1 day
      forceVerbose: true,         // capture all levels for the runId
      respectRoutingBlocks: true, // honor _routing.blockOutputs: ['shadow'|'file']
      includeRaw: false,          // also write unsanitized (dangerous; tests only)
      rollingBuffer: { maxEntries: 0, maxAgeMs: 0 } // optional retro-capture
    },

    // Scoping (opt-in)
    scoping: {
      enabled: false, // default: false
      errorScoping: {
        enabled: true,          // default: true if scoping.enabled
        levels: ['error'],      // default: ['error']
        windowMsBefore: 30_000, // default: 30_000
        windowMsAfter: 30_000   // default: 30_000
      },
      runScoping: {
        enabled: true, // default: true if scoping.enabled
        defaultWindowMsBeforeFirstError: 60_000,
        defaultWindowMsAfterLastEntry: 30_000
      },
      narrativeScoping: {
        enabled: false, // default: false
        narrativesPath: './metadata/log-scopes.json',
        envPrefix: 'NX_SCOPE'
      },
      buffer: {
        maxEntries: 0, // default: 0 => disabled if Shadow is enough
        maxAgeMs: 0,
        includeLevels: ['verbose','debug','info','warn','error'],
        preferShadow: true // default: true
      }
    },

    // Troubleshooting (opt-in, requires nx-troubleshooting)
    troubleshooting: {
      enabled: false, // default: false
      narrativesPath: './metadata/troubleshooting.json',
      envPrefix: 'NX_TROUBLE',
      loggingConfig: { /* optional */ },
      engine: undefined, // optional DI
      output: {
        formats: ['markdown'], // default: ['markdown']
        writeToFileDir: undefined,
        attachToShadow: false,
        emitAsLogEntry: false,
        callback: undefined
      }
    }
  }
);
```
Note: Environment variable names use a dynamic prefix based on packageConfig.envPrefix. Replace {PREFIX} in the examples below with your actual prefix (e.g., MY_APP, API_SERVICE).
```bash
# Console & file
{PREFIX}_LOG_TO_CONSOLE=true|false
{PREFIX}_LOG_TO_FILE=true|false
{PREFIX}_LOG_FILE=/path/to/log

# Unified-logger
{PREFIX}_LOG_TO_UNIFIED=true|false

# Level & format (per-package threshold: canonical _LOGS_LEVEL; legacy _LOG_LEVEL if _LOGS_LEVEL unset)
{PREFIX}_LOGS_LEVEL=off|none|silent|error|warn|info|debug|verbose
{PREFIX}_LOG_LEVEL=verbose|debug|info|warn|error
{PREFIX}_LOG_FORMAT=text|json|yaml|table

# Console output options
{PREFIX}_SHOW_FULL_TIMESTAMP=true|false            # Show full ISO timestamp (default: false)
{PREFIX}_CONSOLE_PACKAGES_SHOW=package1,package2   # Only show these packages in console (default: show all)
{PREFIX}_CONSOLE_PACKAGES_HIDE=package1,package2   # Hide these packages in console (default: show all)

# Debug namespace - enables verbose+debug for that namespace
DEBUG=my-pkg,other-*

# Sanitization (subset shown)
{PREFIX}_SANITIZE_ENABLED=true|false
{PREFIX}_SANITIZE_KEYS_DENYLIST=authorization,token,secret,api_key,password

# Trails/tracing
{PREFIX}_TRACE_OTEL=true|false
{PREFIX}_TRAILS_DEPTH=true|false
{PREFIX}_TRAILS_THREAD=true|false
{PREFIX}_TRAILS_INJECT=true|false
{PREFIX}_TRAILS_EXTRACT=true|false

# Shadow Logging
{PREFIX}_SHADOW_ENABLED=true|false
{PREFIX}_SHADOW_FORMAT=json|yaml
{PREFIX}_SHADOW_DIR=/var/log/myapp/shadow
{PREFIX}_SHADOW_TTL_MS=86400000
{PREFIX}_SHADOW_FORCE_VERBOSE=true|false
{PREFIX}_SHADOW_RESPECT_ROUTING=true|false
{PREFIX}_SHADOW_INCLUDE_RAW=false
{PREFIX}_SHADOW_BUFFER_ENTRIES=0
{PREFIX}_SHADOW_BUFFER_AGE_MS=0

# Scoping
{PREFIX}_SCOPING_ENABLED=true|false
{PREFIX}_SCOPING_ERROR_ENABLED=true|false
{PREFIX}_SCOPING_ERROR_WINDOW_MS_BEFORE=30000
{PREFIX}_SCOPING_ERROR_WINDOW_MS_AFTER=30000
{PREFIX}_SCOPING_BUFFER_ENTRIES=5000
{PREFIX}_SCOPING_BUFFER_AGE_MS=300000
{PREFIX}_SCOPING_BUFFER_PREFER_SHADOW=true|false

# Troubleshooting
{PREFIX}_TROUBLESHOOTING_ENABLED=true|false
{PREFIX}_TROUBLESHOOTING_NARRATIVES_PATH=./metadata/troubleshooting.json
{PREFIX}_TROUBLESHOOTING_OUTPUT_FORMATS=markdown,json
{PREFIX}_TROUBLESHOOTING_OUTPUT_EMIT_AS_LOG_ENTRY=true|false

# Unified-logger dependencies (non-ERC, manually documented)
# Required when unified-logger papertrail transport is enabled:
PAPERTRAIL_HOST=logs.papertrailapp.com
PAPERTRAIL_PORT=12345
# Required when unified-logger udpRelay transport is enabled:
UDP_RELAY_HOST=127.0.0.1
UDP_RELAY_PORT=514
```
Default min level: `info`. `DEBUG=` enables both `verbose` and `debug` for matching namespaces.

ERC 2.0 Note: Generate a complete `.env.example` file with all variables by running `npm run generate-erc`.
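The level-resolution precedence described in the .env contract can be sketched as follows. This is an illustrative helper, not the library's `resolvePackageLogsLevel` implementation: `{PREFIX}_LOGS_LEVEL` wins, `{PREFIX}_LOG_LEVEL` is the legacy fallback, and with neither set the threshold defaults to `warn`.

```typescript
type Level = "off" | "error" | "warn" | "info" | "debug" | "verbose";

// Resolve a package's log threshold from an env map (hypothetical sketch).
function resolveLevel(env: Record<string, string | undefined>, prefix: string): Level {
  const canonical = env[`${prefix}_LOGS_LEVEL`]; // canonical key wins
  const legacy = env[`${prefix}_LOG_LEVEL`];     // legacy key, used only if canonical unset
  const raw = (canonical ?? legacy ?? "warn").toLowerCase();
  // 'none' and 'silent' are documented aliases for 'off'
  if (raw === "none" || raw === "silent") return "off";
  const known: Level[] = ["off", "error", "warn", "info", "debug", "verbose"];
  return known.includes(raw as Level) ? (raw as Level) : "warn"; // unknown values fall back to warn
}
```

The fallback-to-`warn` branch for unrecognized values is an assumption of this sketch; consult docs/package-usage.md for the library's exact parsing rules.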
Internal normalized log shape:
```typescript
export interface LogEntry {
  timestamp: string; // ISO-8601
  level: 'verbose' | 'debug' | 'info' | 'warn' | 'error';
  package: string;
  message: string;
  source?: string;            // e.g. 'application', 'auth-service'
  data?: Record<string, any>; // user metadata, error, ids, etc.

  // Automatic application identification (from consuming project's package.json)
  appName?: string;    // Auto-detected from package.json "name" field
  appVersion?: string; // Auto-detected from package.json "version" field

  // Correlation / trails / tracing
  runId?: string;
  jobId?: string;
  correlationId?: string;
  sessionId?: string;
  operationId?: string;
  parentOperationId?: string;
  operationName?: string;
  threadId?: string;
  traceId?: string;
  spanId?: string;

  // Routing
  _routing?: RoutingMeta;

  // Optional scope metadata (for future/advanced use)
  scope?: ScopedMetadata;
}
```
Place a logger-debug.json file at your project root to filter logs at runtime:
```json
{
  "scoping": {
    "status": "enabled",
    "filterIdentities": ["src/auth.ts:login", "src/payment.ts:processPayment"],
    "filteredApplications": ["my-app", "other-app"],
    "between": [
      {
        "action": "include",
        "exactMatch": false,
        "searchLog": false,
        "startIdentities": ["src/api.ts:handleRequest"],
        "endIdentities": ["src/api.ts:handleRequestEnd"]
      }
    ]
  }
}
```
Behavior:

- When `status` is `"enabled"`, logs are filtered before being written to any output (console, file, unified-logger, shadow)
- A log passes if its identity matches any entry in `filterIdentities` OR its `appName` matches any entry in `filteredApplications` OR it falls within an active "between" range (OR logic)
- The file is looked up starting at `process.cwd()` (searches up directory tree like `package.json`)

Examples:
```jsonc
// Filter by identity only - only show logs from specific code locations
{
  "scoping": {
    "status": "enabled",
    "filterIdentities": ["src/auth.ts:login", "src/payment.ts:processPayment"]
  }
}

// Filter by application only - only show logs from specific apps
{
  "scoping": {
    "status": "enabled",
    "filteredApplications": ["my-app"]
  }
}

// Filter by both (OR logic - matches if identity OR appName matches)
{
  "scoping": {
    "status": "enabled",
    "filterIdentities": ["src/auth.ts:login"],
    "filteredApplications": ["my-app"]
  }
}

// Between rules - stateful range-based filtering
{
  "scoping": {
    "status": "enabled",
    "between": [
      {
        "action": "include",
        "exactMatch": false,
        "searchLog": false,
        "startIdentities": ["src/api.ts:handleRequest"],
        "endIdentities": ["src/api.ts:handleRequestEnd"]
      },
      {
        "action": "exclude",
        "exactMatch": true,
        "searchLog": false,
        "startIdentities": ["src/db.ts:query"],
        "endIdentities": ["src/db.ts:queryEnd"]
      },
      {
        "action": "include",
        "exactMatch": false,
        "searchLog": true,
        "startIdentities": [],
        "endIdentities": ["src/init.ts:complete"]
      },
      {
        "action": "include",
        "exactMatch": true,
        "searchLog": true,
        "startIdentities": ["Payment started"],
        "endIdentities": ["Payment completed"]
      }
    ]
  }
}
```
Between Rules:

- `action`: `"include"` to show logs within the range, `"exclude"` to hide logs within the range
- `exactMatch`:
  - `true`: Exact string match (case sensitive)
  - `false`: Partial substring match (case insensitive, default)
- `searchLog`:
  - `true`: Search entire log (message + identity + all meta fields stringified)
  - `false`: Search only identity field (default)
- `startIdentities`: Array of patterns that activate the range. An empty array means the range starts from the beginning, i.e. rules without `startIdentities` are active from the first log
- `endIdentities`: Array of patterns that deactivate the range. An empty array means the range never ends, i.e. rules without `endIdentities` never close once activated

Integration with Existing Scoping:

- `logger-debug.json` - Runtime filtering (reduces noise at source, filters before writing)
- `scopeLogs()` - Post-processing scoping (analyzes already-written logs)

Scoping criteria defines which logs belong to a scope. It supports:
```typescript
export interface ScopeCriteria {
  // Correlation / keys
  runId?: string;
  correlationId?: string;
  sessionId?: string;
  threadId?: string;
  traceId?: string;

  // Time window bounds
  fromTimestamp?: string; // ISO-8601
  toTimestamp?: string;   // ISO-8601

  // Window relative to an anchor (error or first/last entry)
  windowMsBefore?: number; // relative to anchor timestamp
  windowMsAfter?: number;

  // Levels
  levelAtLeast?: 'verbose' | 'debug' | 'info' | 'warn' | 'error';
  includeLevels?: ('verbose'|'debug'|'info'|'warn'|'error')[];

  // Source filters
  sources?: string[]; // e.g. ['api-gateway','payments-service']

  // TEXT FILTERS
  /**
   * Match logs whose message OR data (stringified) contains ANY of the given strings (case-insensitive).
   * - string: single substring
   * - string[]: log must contain at least one of them
   */
  textIncludesAny?: string | string[];
  /**
   * Match logs whose message OR data (stringified) contains ALL of the given substrings (case-insensitive).
   */
  textIncludesAll?: string[];
  /**
   * Optional RegExp filter over the combined text (message + JSON-stringified data).
   * If provided as string, it is treated as a new RegExp(text, 'i').
   */
  textMatches?: RegExp | string;

  // Scope metadata filters (if used)
  scopeTags?: string[]; // must include all provided tags

  // Custom predicate for in-process advanced filtering
  predicate?: (entry: LogEntry) => boolean;
}
```
Text filtering behavior:

- The filter joins `entry.message` and a JSON string of `entry.data` into one string (e.g. `"Payment failed {...}"`).
- `textIncludesAny` - inclusive OR.
- `textIncludesAll` - inclusive AND.
- `textMatches` - regex test.

```typescript
export interface ScopedLogView {
  id: string;          // e.g. 'scope:runId:checkout-42'
  criteria: ScopeCriteria;
  entries: LogEntry[]; // sorted by timestamp ascending
  summary: {
    firstTimestamp?: string;
    lastTimestamp?: string;
    totalCount: number;
    errorCount: number;
    warnCount: number;
    infoCount: number;
    debugCount: number;
    verboseCount: number;
    uniqueSources: string[];
  };
}
```
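The text-filter semantics described above can be sketched as a standalone matcher. This is an illustrative helper (the function `matchesText` is invented here, not the library's internal matcher): message and stringified data are combined, then each criterion is applied case-insensitively.

```typescript
interface TextCriteria {
  textIncludesAny?: string | string[];
  textIncludesAll?: string[];
  textMatches?: RegExp | string;
}

// Sketch of combining message + data and applying the three text filters.
function matchesText(message: string, data: unknown, c: TextCriteria): boolean {
  const haystack =
    `${message} ${data === undefined ? "" : JSON.stringify(data)}`.toLowerCase();

  if (c.textIncludesAny !== undefined) {
    const needles = Array.isArray(c.textIncludesAny) ? c.textIncludesAny : [c.textIncludesAny];
    if (!needles.some((n) => haystack.includes(n.toLowerCase()))) return false; // inclusive OR
  }
  if (c.textIncludesAll && !c.textIncludesAll.every((n) => haystack.includes(n.toLowerCase()))) {
    return false; // inclusive AND
  }
  if (c.textMatches !== undefined) {
    // A string is promoted to a case-insensitive RegExp, per the ScopeCriteria docs.
    const re = typeof c.textMatches === "string" ? new RegExp(c.textMatches, "i") : c.textMatches;
    if (!re.test(haystack)) return false;
  }
  return true;
}
```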
The `scopeRecord` helper is generic (not log-specific): given any JSON record, it produces a human-readable text rendering and a structured representation. This function is pure, side-effect free, and works equally on `LogEntry`s / aggregated records.

```typescript
import { scopeRecord } from 'logs-gateway';

// Example usage
const record = {
  userId: '123',
  action: 'payment',
  amount: 100.50,
  timestamp: '2025-01-01T10:00:00Z'
};

const result = scopeRecord(record, {
  label: 'Payment Event',
  maxFieldStringLength: 200,
  excludeKeys: ['password', 'token']
});

console.log(result.text);
// Output: "Payment Event\n  User ID: 123\n  Action: payment\n  Amount: 100.5\n  Timestamp: 2025-01-01T10:00:00Z"

console.log(result.structured);
// Output: { label: 'Payment Event', fields: [...], ... }
```
Types:

```typescript
export interface AutoScopeRecordOptions {
  label?: string;
  formatting?: ScopingFormattingOptions;
  maxFields?: number;
  maxFieldStringLength?: number;
  includeKeys?: string[];
  excludeKeys?: string[];
  skipNullish?: boolean;
  header?: string;
  footer?: string;
}

export interface ScopeRecordResult {
  text: string;                        // Human-readable text output
  structured: StructuredScopedPayload; // Structured representation
}
```

To let a scope answer with full data or story format:

```typescript
export type ScopeOutputMode = 'raw' | 'story' | 'both';

export interface ScopeStoryOptions {
  recordOptions?: AutoScopeRecordOptions;
  maxEntries?: number;
  includeEntryHeader?: boolean;
}

export interface ScopeLogsResult {
  view: ScopedLogView;    // always present
  story?: ScopedLogStory; // present if mode = 'story' or 'both'
}
```
## API

`createLogger(packageConfig, userConfig?)` - returns a `LogsGateway`.

LogsGateway methods:

- `verbose(message, data?)`
- `debug(message, data?)`
- `info(message, data?)`
- `warn(message, data?)`
- `error(message, data?)`
- `isLevelEnabled(level)` - threshold check (namespace DEBUG forces verbose+debug)
- `getConfig()` - effective resolved config
- `scopeLogs(criteria, options?)` - Scope logs by criteria, return full data and/or story

```typescript
const result = await logger.scopeLogs({
  runId: 'checkout-42',
  textIncludesAny: 'timeout',
  levelAtLeast: 'debug'
}, {
  mode: 'both',
  storyOptions: {
    maxEntries: 50,
    includeEntryHeader: true,
    recordOptions: {
      label: 'Log Entry',
      maxFieldStringLength: 200
    }
  }
});

console.log(result.view.summary);
console.log(result.story?.text);
```
- `troubleshootError(error, context?, options?)` - Error-centric troubleshooting

```typescript
const { scope, reports } = await logger.troubleshootError(
  new Error('Missing connections configuration'),
  {
    config: { /* app config */ },
    query: { requestId: req.id },
    operation: 'checkout'
  },
  {
    formats: ['markdown'],
    generateScopeStory: true,
    storyOptions: { /* ... */ }
  }
);
```
- `troubleshootScope(scope, options?)` - Scope-centric troubleshooting

```typescript
const { scope: scopeResult, reports } = await logger.troubleshootScope(
  scopeView, // or ScopeCriteria or scope id string
  {
    formats: ['markdown', 'json'],
    generateScopeStory: true
  }
);
```

- `scopeByNarratives(options?)` - Narrative-based scoping (optional)

Shadow Logging methods:

- `logger.shadow.enable(runId, opts?)`
- `logger.shadow.disable(runId)`
- `logger.shadow.isEnabled(runId)`
- `logger.shadow.listActive()`
- `logger.shadow.export(runId, outPath?, compress?)` - `Promise<string>`
- `logger.shadow.readIndex(runId)` - `Promise<ShadowIndex>`
- `logger.shadow.cleanupExpired(now?)` - `Promise<number>`

(Shadow writes sidecar files; primary transports unaffected.)
```typescript
import { createLogger, scopeRecord } from 'logs-gateway';

const logger = createLogger(
  { packageName: 'PAYMENTS', envPrefix: 'PAY' },
  {
    logToConsole: true,
    logFormat: 'json',
    shadow: {
      enabled: true,
      format: 'json',
      directory: './logs/shadow',
      ttlMs: 86400000,
      forceVerbose: true
    },
    scoping: {
      enabled: true,
      errorScoping: {
        enabled: true,
        windowMsBefore: 60_000,
        windowMsAfter: 30_000
      },
      buffer: {
        maxEntries: 5000,
        maxAgeMs: 300_000,
        includeLevels: ['verbose','debug','info','warn','error'],
        preferShadow: true
      }
    },
    troubleshooting: {
      enabled: true,
      narrativesPath: './metadata/troubleshooting.json',
      output: {
        formats: ['markdown'],
        emitAsLogEntry: true,
        writeToFileDir: './logs/troubleshooting'
      }
    }
  }
);

async function handleCheckout(req: any) {
  const runId = `checkout-${Date.now()}`;
  logger.info('Checkout started', { runId });
  logger.verbose('Preparing payment context', { runId });

  try {
    // ...
    throw new Error('Missing connections configuration');
  } catch (error) {
    // Direct troubleshooting call
    const { scope, reports } = await logger.troubleshootError(error, {
      config: { /* app config */ },
      query: { requestId: req.id },
      operation: 'checkout'
    }, {
      formats: ['markdown'],
      generateScopeStory: true,
      storyOptions: {
        maxEntries: 50,
        includeEntryHeader: true,
        recordOptions: {
          label: 'Log Entry',
          maxFieldStringLength: 200,
          excludeKeys: ['password', 'token']
        }
      }
    });

    // Scope includes full data + story
    console.log(scope?.view.summary);
    console.log(scope?.story?.text);

    // Reports contain troubleshooting text
    return {
      ok: false,
      troubleshooting: reports.map(r => r.rendered)
    };
  }
}
```
```typescript
// Scope all logs in the last 5 minutes that mention "timeout" anywhere:
const timeoutScope = await logger.scopeLogs({
  levelAtLeast: 'debug',
  fromTimestamp: new Date(Date.now() - 5 * 60 * 1000).toISOString(),
  textIncludesAny: 'timeout' // message or data, case-insensitive
}, {
  mode: 'both',
  storyOptions: {
    includeEntryHeader: true,
    recordOptions: { label: 'Timeout Log' }
  }
});

console.log(timeoutScope.view.summary);
console.log(timeoutScope.story?.text);
```
```typescript
// src/logger.ts
import { createLogger, LoggingConfig, LogsGateway } from 'logs-gateway';

export function createAppLogger(config?: LoggingConfig): LogsGateway {
  return createLogger(
    { packageName: 'WEB_APP', envPrefix: 'WEB_APP', debugNamespace: 'web-app' },
    config
  );
}

// src/index.ts
const logger = createAppLogger({ logFormat: 'json' });
logger.info('Web application initialized', { version: '1.0.0' });

async function handleRequest(request: any) {
  logger.debug('Handling request', { requestId: request.id });
  // ...
  logger.info('Request processed', { responseTime: 42, runId: request.runId });
}
```
Capture everything for a specific runId to a side file (forced-verbose), then fetch it - perfect for tests.
```typescript
const logger = createLogger(
  { packageName: 'API', envPrefix: 'API' },
  { shadow: { enabled: true, format: 'yaml', ttlMs: 86_400_000 } }
);

const runId = `test-${Date.now()}`;

// Turn on capture for this run (can be mid-execution)
logger.shadow.enable(runId);

logger.info('test start', { runId });
logger.verbose('deep details', { runId, step: 1 });
logger.debug('more details', { runId, step: 2 });
// ... run test ...

// Export & stop capturing
await logger.shadow.export(runId, './artifacts'); // returns exported path
logger.shadow.disable(runId);
```
Shadow files are stored per-run (JSONL or YAML multi-doc), rotated by size/age, and removed by TTL (default 1 day). If `forceVerbose=true`, all levels for that `runId` are captured even when the global level is higher.
Attach correlation fields freely on each call:

- `runId`, `jobId`, `correlationId`, `sessionId`
- `operationId`, `parentOperationId`, `operationName`, `operationPath`, `operationStep`
- `threadId`, `eventId`, `causationId`, `sequenceNo`, `partitionKey`, `attempt`, `shardId`, `workerId`
- `traceId`, `spanId` (if OpenTelemetry context is active)

Every log entry automatically merges the current trail context. The library provides optional adapters for HTTP and queue systems to inject/extract headers across process boundaries.
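Conceptually, merging the ambient trail context into an entry works like an object spread where per-call fields win. This is an illustrative sketch (the `mergeTrailContext` helper is invented here; the real library tracks context asynchronously rather than taking it as a parameter):

```typescript
type TrailContext = Record<string, string | number | undefined>;

// Sketch: ambient context fields are merged into each entry,
// but values passed on the individual call take precedence.
function mergeTrailContext(
  entry: Record<string, unknown>,
  context: TrailContext
): Record<string, unknown> {
  return { ...context, ...entry };
}
```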
```typescript
interface RoutingMeta {
  allowedOutputs?: string[]; // e.g. ['unified-logger','console']
  blockOutputs?: string[];   // e.g. ['unified-logger','file','shadow','troubleshooting']
  reason?: string;
  tags?: string[];
}
```
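A minimal sketch of per-entry output routing, assuming the `RoutingMeta` shape above (the `outputAllowed` function is illustrative, not logs-gateway's actual dispatcher): an explicit block wins, an allowlist restricts, and no routing meta means every configured output receives the entry.

```typescript
interface RoutingMeta {
  allowedOutputs?: string[];
  blockOutputs?: string[];
}

function outputAllowed(output: string, routing?: RoutingMeta): boolean {
  if (!routing) return true;                                     // no meta: all outputs
  if (routing.blockOutputs?.includes(output)) return false;      // explicit block wins
  if (routing.allowedOutputs) return routing.allowedOutputs.includes(output); // allowlist mode
  return true;
}
```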
Routing rules:

- Internal entries (`source: 'logs-gateway-internal'`) never reach unified-logger.
- `_routing` lets you allow/block specific outputs per entry.
- Shadow Logging honors `_routing.blockOutputs` by default (configurable).
- Troubleshooting output can be suppressed per entry via `_routing.blockOutputs: ['troubleshooting']`.

## Output formats

Text:
```text
[2025-01-15T10:30:45.123Z] [MY_APP] [INFO] Application initialized {"version":"1.0.0"}
```
JSON:

```json
{
  "timestamp": "2025-01-15T10:30:45.123Z",
  "package": "MY_APP",
  "level": "INFO",
  "message": "Application initialized",
  "source": "application",
  "data": {"version": "1.0.0"}
}
```
YAML (console/file; unified stays JSON):

```yaml
---
timestamp: 2025-01-15T10:30:45.123Z
package: MY_APP
level: INFO
message: Application initialized
source: application
appName: my-awesome-app
appVersion: "2.1.0"
data:
  version: "1.0.0"
```
Note: YAML is human-friendly but slower; prefer JSON for ingestion.
Enable to auto-detect and mask common sensitive data (JWTs, API keys, passwords, emails, credit cards, cloud keys, etc.). Supports key-based rules (denylist/allowlist), hashing specific fields, depth/size/time guardrails, and a truncation flag.
```typescript
const logger = createLogger(
  { packageName: 'MY_APP', envPrefix: 'MY_APP' },
  {
    sanitization: {
      enabled: true,
      maskWith: '[REDACTED]',
      keysDenylist: ['authorization','password','secret','api_key'],
      fieldsHashInsteadOfMask: ['userId'],
      detectJWTs: true
    }
  }
);
```
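The key-based denylist part of this can be sketched as a recursive walk. This is a hedged illustration only (the `maskByKeys` helper is invented here; the real sanitizer additionally applies value detectors, field hashing, and depth/size/time guardrails):

```typescript
// Sketch: recursively mask any value whose key is on the denylist
// (denylist entries are assumed lowercase in this sketch).
function maskByKeys(value: unknown, denylist: string[], maskWith = "[REDACTED]"): unknown {
  if (Array.isArray(value)) return value.map((v) => maskByKeys(v, denylist, maskWith));
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      out[k] = denylist.includes(k.toLowerCase())
        ? maskWith                              // denylisted key: mask the whole value
        : maskByKeys(v, denylist, maskWith);    // otherwise recurse
    }
    return out;
  }
  return value; // primitives pass through
}
```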
Shadow Logging allows you to capture all logs for a specific runId to a separate file in JSON or YAML format, with raw (unsanitized) data, regardless of the global log level. This is ideal for debugging tests, CI runs, or specific production workflows.
```typescript
interface ShadowConfig {
  enabled?: boolean;              // default: false
  format?: 'json' | 'yaml';       // default: 'json'
  directory?: string;             // default: './logs/shadow'
  ttlMs?: number;                 // default: 86400000 (1 day)
  respectRoutingBlocks?: boolean; // default: true
  rollingBuffer?: {
    maxEntries?: number; // default: 0 (disabled)
    maxAgeMs?: number;   // default: 0 (disabled)
  };
}
```
`logger.shadow.enable(runId, opts?)` - Enable shadow capture for a specific `runId`. Optionally override config for this run.

```typescript
logger.shadow.enable('test-run-123');

// Or with custom options
logger.shadow.enable('test-run-yaml', {
  format: 'yaml',
  ttlMs: 3600000 // 1 hour
});
```

`logger.shadow.disable(runId)` - Stop capturing logs for a `runId` and finalize the shadow file.

`logger.shadow.isEnabled(runId)` - Check if shadow capture is active for a `runId`.

`logger.shadow.listActive()` - List all currently active shadow captures.

`logger.shadow.export(runId, outPath?)` - Copy the shadow file to a destination path.

`logger.shadow.cleanupExpired(now?)` - Delete expired shadow files based on TTL. Returns the number of deleted runs.
⚠️ Security: Shadow captures raw, unsanitized data. This means passwords, API keys, and PII will be preserved in shadow files. Only use shadow logging in secure environments (development, CI, isolated test systems).

💾 Storage: Shadow files can grow quickly when capturing verbose logs. Configure appropriate `ttlMs` values and regularly run `cleanupExpired()`.

🚫 Default OFF: Shadow logging is disabled by default and must be explicitly enabled via config or environment variables.
The troubleshooting integration uses nx-troubleshooting to match errors to solutions. See the nx-troubleshooting documentation for details on creating troubleshooting narratives.
Narrative solution steps support `{{variable}}` syntax. Example narrative:

```json
{
  "id": "missing-connections-config",
  "title": "Missing Connections Configuration",
  "description": "The application config is missing the required 'connections' object...",
  "symptoms": [
    {
      "probe": "config-check",
      "params": { "field": "connections" },
      "condition": "result.exists == false"
    }
  ],
  "solution": [
    {
      "type": "code",
      "message": "Add a 'connections' object to your config:",
      "code": "{\n  \"connections\": { ... }\n}"
    }
  ]
}
```
Graceful degradation:

- `scoping.enabled` and `troubleshooting.enabled` default to `false`.
- If `nx-troubleshooting` is not installed, `troubleshooting.enabled` must remain `false` or initialization fails clearly.
- `scopeLogs` can still filter currently available logs if the in-memory buffer is enabled; otherwise, scopes may be empty.
- `scopeRecord` is pure and can be used independently anywhere.

## Environment variable reference

| Key | Description | Default |
|---|---|---|
| `{P}_LOG_TO_CONSOLE` | Enable console output | `true` |
| `{P}_LOG_TO_FILE` | Enable file output | `false` |
| `{P}_LOG_FILE` | Log file path | - |
| `{P}_LOG_TO_UNIFIED` | Enable unified-logger | `false` |
| `{P}_LOGS_LEVEL` | Per-package threshold (canonical); omitting both this and `{P}_LOG_LEVEL` defaults to `warn` | `warn` |
| `{P}_LOG_LEVEL` | Legacy level; used only if `{P}_LOGS_LEVEL` is not set in the environment | - |
| `{P}_LOG_FORMAT` | `text`\|`json`\|`yaml`\|`table` | `table` |
| `{P}_SHOW_FULL_TIMESTAMP` | Show full ISO timestamp in console | `false` |
| `{P}_CONSOLE_PACKAGES_SHOW` | Comma-separated packages to show (console only) | (show all) |
| `{P}_CONSOLE_PACKAGES_HIDE` | Comma-separated packages to hide (console only) | (show all) |
| `DEBUG` | Namespace(s) enabling verbose+debug | - |
| `{P}_SANITIZE_ENABLED` | Turn on sanitization | `false` |
| `{P}_TRACE_OTEL` | Attach traceId/spanId if available | `true` |
| `{P}_TRAILS_*` | Toggle trails/header adapters | see above |
| `{P}_SHADOW_*` | Shadow Logging controls | see above |
| `{P}_SCOPING_*` | Scoping controls | see above |
| `{P}_TROUBLESHOOTING_*` | Troubleshooting controls | see above |
```bash
npm install
npm run build
npm run dev
npm run clean
```
## License

MIT
Tip for tests: set a `runId` at the beginning of each test (e.g., `runId: 'ci-<suite>-<timestamp>'`), enable Shadow Logging, run, then `export()` the captured logs for artifacts/triage.