@rawnodes/logger
Flexible Winston-based logger with AsyncLocalStorage context, level rules, and multiple output formats.
Features
- Multiple Formats - JSON, plain (colored), logfmt, simple
- Context Propagation - Automatic context via AsyncLocalStorage
- Level Rules - Configure log levels per module/context in config
- Lazy Meta - Defer metadata creation until log level check passes
- Timing Utilities - Built-in performance measurement
- Request ID - Generate and extract request IDs
- Secret Masking - Automatic masking of sensitive data
- TypeScript First - Full generic type support
Installation
pnpm add @rawnodes/logger
npm install @rawnodes/logger
Quick Start
import { Logger } from '@rawnodes/logger';
const logger = Logger.create({
level: 'info',
console: { format: 'plain' },
});
logger.info('Hello world');
logger.info('User logged in', { userId: 123 });
Configuration
Basic Config
import { Logger } from '@rawnodes/logger';
const logger = Logger.create({
level: 'info',
console: { format: 'plain' },
file: {
format: 'json',
dirname: 'logs',
filename: 'app-%DATE%.log',
datePattern: 'YYYY-MM-DD',
maxFiles: '14d',
maxSize: '20m',
},
});
Level Rules
Configure different log levels for specific modules or contexts:
const logger = Logger.create({
level: {
default: 'info',
rules: [
{ match: { context: 'auth' }, level: 'debug' },
{ match: { context: 'database' }, level: 'warn' },
{ match: { userId: 123 }, level: 'debug' },
{ match: { context: 'api', userId: 456 }, level: 'silly' },
],
},
console: { format: 'plain' },
});
const authLogger = logger.for('auth');
authLogger.debug('This will be logged');
const dbLogger = logger.for('database');
dbLogger.info('This will NOT be logged');
Rules defined in config are readonly and cannot be removed via the API.
Output Formats
| Format | Description | Example |
|--------|-------------|---------|
| json | Structured JSON | {"level":"info","message":"hello","timestamp":"..."} |
| plain | Colored, human-readable | [2025-01-01T12:00:00] info [APP] hello |
| logfmt | Key=value pairs | level=info msg=hello context=APP ts=2025-01-01T12:00:00 |
| simple | Minimal | [2025-01-01T12:00:00] info: hello |
const logger = Logger.create({
level: 'info',
console: { format: 'plain' },
file: {
format: 'json',
dirname: 'logs',
filename: 'app-%DATE%.log',
},
});
Per-Transport Level
Each transport can have its own log level:
const logger = Logger.create({
level: 'silly',
console: {
format: 'plain',
level: 'debug',
},
file: {
format: 'json',
level: 'warn',
dirname: 'logs',
filename: 'app-%DATE%.log',
},
});
| Scenario | Console level | Alert level |
|----------|---------------|-------------|
| Development | debug | off |
| Production | info | warn |
| Troubleshooting | debug | info |
| Critical alerts | info | error |

This is useful for sending only critical errors to alerting systems while keeping verbose logs in the console.
Level precedence
Each transport decides on its own whether to emit a given log entry. The precedence depends on whether the transport respects runtime overrides (see the next section):

When respecting overrides (observability transports: console/file/cloudwatch/relay by default):
1. Runtime override from setLevelOverride(...) or the top-level level.rules array
2. Per-transport rule (a matching entry in console.rules / file.rules / etc.)
3. Per-transport level (console.level, file.level, ...)
4. Global default level (level or level.default)

When ignoring overrides (alert transports: discord/telegram/zohoCliq by default):
1. Per-transport rule
2. Per-transport level
3. Global default level
Practically: a matching setLevelOverride({ userId: 123 }, 'debug') will surface debug logs to observability transports (console/cloudwatch/file), even if their own level is set to info. The same override will not leak through notification channels (discord/telegram/zohoCliq) unless you explicitly opt them in via respectRuntimeOverrides: true.
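The precedence chains above can be sketched as a small standalone function. This is an illustrative model only, not the package's implementation; the Rule and TransportConfig shapes here are assumptions:

```typescript
type LogLevel = 'error' | 'warn' | 'info' | 'http' | 'verbose' | 'debug' | 'silly' | 'off';

// Hypothetical shapes for illustration -- not the package's real types.
interface Rule { match: Record<string, unknown>; level: LogLevel; }
interface TransportConfig {
  level?: LogLevel;
  rules?: Rule[];
  respectRuntimeOverrides: boolean;
}

const matches = (rule: Rule, ctx: Record<string, unknown>): boolean =>
  Object.entries(rule.match).every(([k, v]) => ctx[k] === v);

// Walk the precedence chain top to bottom; the first hit wins.
function effectiveLevel(
  transport: TransportConfig,
  runtimeOverrides: Rule[],
  ctx: Record<string, unknown>,
  globalDefault: LogLevel,
): LogLevel {
  if (transport.respectRuntimeOverrides) {
    const override = runtimeOverrides.find((r) => matches(r, ctx));
    if (override) return override.level;       // 1. runtime override
  }
  const rule = transport.rules?.find((r) => matches(r, ctx));
  if (rule) return rule.level;                 // 2. per-transport rule
  if (transport.level) return transport.level; // 3. per-transport level
  return globalDefault;                        // 4. global default
}

const overrides: Rule[] = [{ match: { userId: 123 }, level: 'debug' }];

// The override surfaces debug on a transport that respects overrides...
console.log(effectiveLevel(
  { level: 'info', respectRuntimeOverrides: true }, overrides, { userId: 123 }, 'info',
)); // 'debug'

// ...but not on an alert transport that ignores them.
console.log(effectiveLevel(
  { level: 'warn', respectRuntimeOverrides: false }, overrides, { userId: 123 }, 'info',
)); // 'warn'
```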
Transport roles (respectRuntimeOverrides)
Every transport config accepts an optional respectRuntimeOverrides?: boolean flag. Defaults:
| Transport | respectRuntimeOverrides | Role |
|-----------|-------------------------|------|
| console | true | observability |
| file | true | observability |
| cloudwatch | true | observability |
| relay | true | observability |
| discord | false | alert |
| telegram | false | alert |
| zohoCliq | false | alert |
The defaults reflect typical usage: console/file/cloudwatch are where operators look for logs during troubleshooting, while discord/telegram/zohoCliq are user-facing notification channels that should not be flooded by ad-hoc debug overrides. Override the default per transport when needed:
const logger = Logger.create({
level: 'info',
console: { format: 'plain' },
telegram: {
botToken: '...',
chatId: '...',
level: 'warn',
respectRuntimeOverrides: true,
},
cloudwatch: {
logGroupName: '...',
level: 'info',
respectRuntimeOverrides: false,
region: '...', accessKeyId: '...', secretAccessKey: '...',
},
});
Per-Transport Rules
Each transport can have its own filtering rules. Use level: 'off' with rules to create a whitelist pattern:
const logger = Logger.create({
level: 'info',
console: { format: 'plain' },
file: {
format: 'json',
level: 'off',
rules: [
{ match: { context: 'payments' }, level: 'error' },
{ match: { context: 'auth' }, level: 'warn' },
],
dirname: 'logs',
filename: 'critical-%DATE%.log',
},
});
logger.for('payments').error('Payment failed'); // written to file (matches the payments rule at 'error')
logger.for('payments').info('Processing'); // filtered out (below 'error')
logger.for('other').error('Generic error'); // filtered out (no matching rule, level is 'off')
Rules can also match store context (AsyncLocalStorage):
const logger = Logger.create({
level: 'info',
console: { format: 'plain' },
file: {
format: 'json',
level: 'off',
rules: [
{ match: { userId: 123 }, level: 'debug' },
{ match: { context: 'api', premium: true }, level: 'debug' },
],
dirname: 'logs',
filename: 'debug-%DATE%.log',
},
});
logger.getStore().run({ userId: 123 }, () => {
logger.debug('User action');
});
Use level: 'off' in rules to suppress specific contexts:
const logger = Logger.create({
level: 'debug',
console: {
format: 'plain',
level: 'debug',
rules: [
{ match: { context: 'noisy-module' }, level: 'off' },
],
},
});
logger.for('noisy-module').debug('Spam'); // suppressed by the 'off' rule
logger.for('other').debug('Useful info'); // logged
External Transports
Discord
Send logs to Discord via webhooks with rich embeds:
const logger = Logger.create({
level: 'info',
console: { format: 'plain' },
discord: {
webhookUrl: 'https://discord.com/api/webhooks/xxx/yyy',
level: 'error',
username: 'My App Logger',
avatarUrl: 'https://...',
embedColors: {
error: 0xFF0000,
warn: 0xFFAA00,
},
batchSize: 10,
flushInterval: 2000,
},
});
Telegram
Send logs to Telegram chats/channels:
const logger = Logger.create({
level: 'info',
console: { format: 'plain' },
telegram: {
botToken: process.env.TG_BOT_TOKEN!,
chatId: process.env.TG_CHAT_ID!,
level: 'warn',
parseMode: 'Markdown',
disableNotification: false,
threadId: 123,
replyToMessageId: 456,
batchSize: 20,
flushInterval: 1000,
},
});
Use level: 'off' with rules for selective logging:
telegram: {
botToken: '...',
chatId: '...',
level: 'off',
rules: [
{ match: { context: 'payments' }, level: 'error' },
],
}
CloudWatch
Send logs to AWS CloudWatch Logs:
const logger = Logger.create({
level: 'info',
console: { format: 'plain' },
cloudwatch: {
logGroupName: '/app/my-service',
region: 'us-east-1',
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
createLogGroup: true,
createLogStream: true,
batchSize: 100,
flushInterval: 1000,
},
});
Log Stream Name
The logStreamName field is optional and supports multiple configuration formats:
// 1. Static string
cloudwatch: {
logGroupName: '/app/my-service',
logStreamName: 'my-static-stream',
}
// 2. Predefined pattern
cloudwatch: {
logGroupName: '/app/my-service',
logStreamName: { pattern: 'hostname-date' },
}
// 3. Custom template
cloudwatch: {
logGroupName: '/app/my-service',
logStreamName: { template: '{hostname}/{env}/{date}' },
}
// 4. Omitted: a default stream name is generated
cloudwatch: {
logGroupName: '/app/my-service',
}
Available patterns:
| Pattern | Example stream name |
|---------|---------------------|
| hostname | my-server |
| hostname-date | my-server/2025-01-15 |
| hostname-uuid | my-server-a1b2c3d4 |
| date | 2025-01-15 |
| uuid | a1b2c3d4-e5f6-7890-abcd |
Template variables:
| Variable | Value |
|----------|-------|
| {hostname} | Server hostname (config.hostname → HOSTNAME env → os.hostname()) |
| {date} | Current date (YYYY-MM-DD) |
| {datetime} | Current datetime (YYYY-MM-DD-HH-mm-ss) |
| {uuid} | 8-char UUID (generated once at startup) |
| {pid} | Process ID |
| {env} | NODE_ENV (default: "development") |
Hostname resolution priority:
1. config.hostname (from logger config)
2. process.env.HOSTNAME (useful in Docker/Kubernetes)
3. os.hostname()
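The resolution order above amounts to a simple fallback chain. A minimal sketch (illustrative only, not the package's actual code):

```typescript
import os from 'node:os';

// Fall through the three sources in priority order.
function resolveHostname(configHostname?: string): string {
  return configHostname           // 1. explicit logger config
    ?? process.env.HOSTNAME       // 2. env var (Docker/Kubernetes)
    ?? os.hostname();             // 3. OS fallback
}

console.log(resolveHostname('web-1')); // 'web-1'
console.log(resolveHostname());        // HOSTNAME env value, or os.hostname()
```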
Examples:
cloudwatch: {
logGroupName: '/app/my-service/production',
logStreamName: { pattern: 'hostname' },
}
cloudwatch: {
logGroupName: '/app/my-service',
logStreamName: { pattern: 'hostname-date' },
}
cloudwatch: {
logGroupName: '/app/my-service',
logStreamName: { template: '{env}/{hostname}/{date}' },
}
Relay (remote live-tail)
The relay transport runs in a worker thread and polls a config endpoint you control; when that endpoint returns enabled: true, it opens a WebSocket and streams logs to it in real time. This lets you flip on live-tail debugging for a running production service from a dashboard, without redeploying.
Logger.create({
level: 'info',
console: { format: 'json' },
relay: {
apiUrl: 'https://relay.internal.company.com',
token: process.env.RELAY_TOKEN!,
maskSecrets: true,
allowedWsHosts: ['ws.internal.company.com'],
},
});
Server contract (GET /api/relay/config with Authorization: Bearer <token>):
{
"enabled": true, // false = stop streaming, drop logs cheaply
"wsUrl": "wss://...", // WebSocket gateway to stream to
"rules": [{ "level": "debug", "context": "auth" }] // optional filters
}
Security defaults to know about:
- wsUrl is pinned to apiUrl's origin by default, so a compromised config endpoint cannot redirect your logs to an arbitrary host. Use allowedWsHosts to authorize alternate hosts explicitly.
- maskSecrets does NOT inherit from the parent logger: the relay runs in a worker thread and receives only the options you pass in its config. Pass it again here if you want forwarded logs masked. Off by default.
- The relay writes one stderr line on the first successful WebSocket open ([RelayTransport] streaming logs to ...) so the fact of streaming is auditable in your normal log output.
Transport Options
All external transports support:
| Option | Default | Description |
|--------|---------|-------------|
| level | (none) | Log level filter |
| rules | (none) | Per-transport filtering rules |
| batchSize | varies | Max messages per batch |
| flushInterval | varies | Flush interval in ms |
| maxRetries | 3 | Retry count on failure |
| retryDelay | 1000 | Base retry delay in ms |
Multiple Transports
You can configure multiple instances of the same transport type using arrays:
const logger = Logger.create({
level: 'info',
console: { format: 'plain' },
discord: [
{
webhookUrl: 'https://discord.com/api/webhooks/payments/xxx',
level: 'off',
rules: [{ match: { context: 'payments' }, level: 'error' }],
},
{
webhookUrl: 'https://discord.com/api/webhooks/auth/yyy',
level: 'off',
rules: [{ match: { context: 'auth' }, level: 'error' }],
},
],
telegram: [
{ botToken: 'token1', chatId: 'errors-chat', level: 'error' },
{ botToken: 'token2', chatId: 'alerts-chat', level: 'warn' },
],
});
logger.for('payments').error('Payment failed'); // delivered to the payments webhook
logger.for('auth').error('Login failed'); // delivered to the auth webhook
Graceful Shutdown
External transports (Discord, Telegram, CloudWatch) buffer messages before sending. To ensure no logs are lost on process exit, use graceful shutdown.
Automatic (Recommended)
Enable autoShutdown in config to automatically handle SIGTERM/SIGINT:
const logger = Logger.create({
level: 'info',
console: { format: 'plain' },
cloudwatch: { },
autoShutdown: true,
});
const logger = Logger.create({
level: 'info',
console: { format: 'plain' },
cloudwatch: { },
autoShutdown: {
timeout: 10000,
signals: ['SIGTERM'],
},
});
Manual
For more control, use registerShutdown or call shutdown() directly:
import { Logger, registerShutdown } from '@rawnodes/logger';
const logger = Logger.create(config);
registerShutdown(logger, {
timeout: 10000,
onShutdown: async () => {
console.log('Flushing logs...');
},
});
process.on('SIGTERM', async () => {
await logger.shutdown();
process.exit(0);
});
In Tests
For tests, call shutdown() in afterAll to flush pending logs:
afterAll(async () => {
await logger.shutdown();
});
Singleton Pattern
For app-wide logging:
import { createSingletonLogger, type LoggerContext } from '@rawnodes/logger';
export interface AppContext extends LoggerContext {
userId?: number;
requestId?: string;
}
export const AppLogger = createSingletonLogger<AppContext>();
import { AppLogger } from './logger.js';
AppLogger.init({
level: {
default: 'info',
rules: [{ match: { context: 'debug-module' }, level: 'debug' }],
},
console: { format: 'plain' },
});
const logger = AppLogger.for('UserService');
logger.info('User created', { userId: 123 });
Context Propagation
Automatically include context in all logs within an async scope:
app.use((req, res, next) => {
const context = {
userId: req.user?.id,
requestId: req.headers['x-request-id'] || generateRequestId(),
};
AppLogger.getStore().run(context, () => next());
});
logger.info('Processing request'); // automatically includes userId and requestId from the store
Dynamic Level Overrides
Add/remove level overrides at runtime:
logger.setLevelOverride({ userId: 123 }, 'debug');
logger.setLevelOverride({ context: 'payments' }, 'debug');
logger.removeLevelOverride({ userId: 123 });
logger.clearLevelOverrides();
const overrides = logger.getLevelOverrides();
Lazy Meta
Defer expensive object creation:
logger.debug('Data processed', { result: expensiveSerialize(data) }); // serializes even when debug is disabled
logger.debug('Data processed', () => ({ result: expensiveSerialize(data) })); // runs only if the level check passes
Child Loggers
Create scoped loggers:
const logger = AppLogger.for('PaymentService');
logger.info('Processing payment');
const stripeLogger = logger.for('Stripe');
stripeLogger.info('Charging card');
Utilities
Timing
import { measureAsync, measureSync } from '@rawnodes/logger';
const { result, timing } = await measureAsync('fetch-users', async () => {
return await userService.findAll();
});
console.log(timing);
const { result, timing } = measureSync('compute', () => {
return heavyComputation();
});
Request ID
import { generateRequestId, extractRequestId, getOrGenerateRequestId } from '@rawnodes/logger';
generateRequestId();
generateRequestId({ short: true });
generateRequestId({ prefix: 'req' });
extractRequestId(req.headers);
getOrGenerateRequestId(req.headers);
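A rough model of what these helpers do, based on the options shown above. This is an illustrative sketch, not the package's implementation; the exact id format is an assumption, and the header name is taken from the Express example later in this document:

```typescript
import { randomUUID } from 'node:crypto';

// Illustrative sketches only -- not the package's real code.
function generateRequestIdSketch(opts: { short?: boolean; prefix?: string } = {}): string {
  const id = opts.short ? randomUUID().slice(0, 8) : randomUUID();
  return opts.prefix ? `${opts.prefix}-${id}` : id;
}

function extractRequestIdSketch(
  headers: Record<string, string | string[] | undefined>,
): string | undefined {
  const value = headers['x-request-id']; // header name assumed from the Express example
  return Array.isArray(value) ? value[0] : value;
}

function getOrGenerateRequestIdSketch(
  headers: Record<string, string | string[] | undefined>,
): string {
  return extractRequestIdSketch(headers) ?? generateRequestIdSketch();
}

console.log(generateRequestIdSketch({ short: true, prefix: 'req' })); // e.g. 'req-a1b2c3d4'
```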
Secret Masking
Two ways to use it: a global flag that masks every log automatically, or manual utilities when you want fine-grained control.
Global flag (recommended)
Set maskSecrets: true in Logger.create to mask all output across console / file / CloudWatch / Discord / Telegram / Zoho Cliq. This is backwards compatible: masking is off by default.
const logger = Logger.create({
level: 'info',
console: { format: 'json' },
file: { format: 'json', dirname: './logs', filename: 'app.log' },
maskSecrets: true,
});
logger.info('login', { user: 'alice', password: 'hunter2' }); // password value is masked
logger.info('connect', { url: 'https://u:p@db.example.com/foo' }); // credentials in the URL are masked
Custom patterns / mask string:
Logger.create({
...,
maskSecrets: {
patterns: ['ssn', 'creditCard'],
mask: '[REDACTED]',
},
});
Default masked patterns: password, secret, token, apikey, api_key, api-key, auth, credential, private (case-insensitive substring match on key names).
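The key-name matching just described can be sketched as follows (an illustrative model of the rule, not the package's code):

```typescript
const DEFAULT_PATTERNS = [
  'password', 'secret', 'token', 'apikey', 'api_key',
  'api-key', 'auth', 'credential', 'private',
];

// Case-insensitive substring match on the key name.
function isSensitiveKey(key: string, patterns: string[] = DEFAULT_PATTERNS): boolean {
  const lower = key.toLowerCase();
  return patterns.some((p) => lower.includes(p));
}

console.log(isSensitiveKey('userPassword')); // true  ('password' is a substring)
console.log(isSensitiveKey('API_KEY'));      // true  (case-insensitive)
console.log(isSensitiveKey('username'));     // false (no pattern matches)
```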
Performance: roughly +2.5 µs per log call on a typical payload. Implemented via a JSON.stringify replacer for json/CloudWatch (zero clone) and a one-shot clone for HTTP transports that build markdown/embed payloads (Discord/Telegram/Zoho). For most services (< 10k logs/sec) the cost is invisible.
Manual utilities
If you only want to mask specific calls, or need a JSON.stringify replacer for your own serialization, three helpers are exported:
import { maskSecrets, createMasker, maskReplacer } from '@rawnodes/logger';
maskSecrets({ user: 'admin', password: 'secret' });
const masker = createMasker({ patterns: ['ssn'], mask: '[REDACTED]' });
masker({ ssn: '123-45-6789' });
JSON.stringify(payload, maskReplacer());
JSON.stringify(payload, maskReplacer({ patterns: ['ssn'] }), 2);
Error logging
error() accepts an error value as the first argument. This keeps call sites short in the common case: you forward whatever the catch clause produced without pre-normalizing it.
try {
await doWork();
} catch (err) {
logger.error(err, 'doWork failed', { userId });
}
Supported shapes:
logger.error('something broke');
logger.error('something broke', { retries: 3 });
logger.error(err);
logger.error(err, 'doWork failed');
logger.error(err, { userId });
logger.error(err, 'doWork failed', { userId });
The first argument is interpreted as an error whenever it is not a string. That includes anything TypeScript's catch clause can produce (Error, plain objects, primitives, null, undefined). Non-Error values are normalized via serializeError:
| Input | Result |
|-------|--------|
| new Error('boom') | errorMessage, stack, errorName (if not Error) |
| 'string value' | errorMessage: 'string value' |
| axios error | errorMessage, stack, http: { status, url, … } |
| fetch-style { response, config } | Same, extracted heuristically |
| { message: 'x', code: 'ECONN' } | errorMessage, errorCode |
| null / undefined | errorMessage: 'Unknown error' |
String-as-first-arg is always a message, never an error. If you want to log a string as an error, wrap it: logger.error(new Error(str)).
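The table above can be approximated with a small normalizer. This sketch is illustrative only; the real serializeError also handles axios/fetch-style errors heuristically, which is omitted here:

```typescript
interface SerializedError {
  errorMessage: string;
  errorName?: string;
  errorCode?: string;
  stack?: string;
}

// Rough approximation of the normalization table -- not the package's code.
function serializeErrorSketch(err: unknown): SerializedError {
  if (err instanceof Error) {
    const out: SerializedError = { errorMessage: err.message, stack: err.stack };
    if (err.name !== 'Error') out.errorName = err.name; // only when subclassed
    return out;
  }
  if (typeof err === 'string') return { errorMessage: err };
  if (err && typeof err === 'object') {
    const o = err as { message?: unknown; code?: unknown };
    const out: SerializedError = {
      errorMessage: typeof o.message === 'string' ? o.message : 'Unknown error',
    };
    if (typeof o.code === 'string') out.errorCode = o.code;
    return out;
  }
  return { errorMessage: 'Unknown error' }; // null / undefined / other primitives
}

console.log(serializeErrorSketch(null));                            // { errorMessage: 'Unknown error' }
console.log(serializeErrorSketch({ message: 'x', code: 'ECONN' })); // { errorMessage: 'x', errorCode: 'ECONN' }
```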
Logfmt Utilities
Helper functions for logfmt format:
import { flattenObject, formatLogfmt, formatLogfmtValue } from '@rawnodes/logger';
flattenObject({ user: { id: 123, name: 'John' } });
formatLogfmt({ level: 'info', msg: 'hello', userId: 123 });
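A sketch of what flattening typically produces (illustrative; the package's exact output may differ, e.g. in array handling):

```typescript
// Flatten nested objects into dot-separated keys, the usual shape
// logfmt formatters expect.
function flatten(
  obj: Record<string, unknown>,
  prefix = '',
  out: Record<string, unknown> = {},
): Record<string, unknown> {
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      flatten(value as Record<string, unknown>, path, out); // recurse into nested objects
    } else {
      out[path] = value; // primitives and arrays stay as leaf values
    }
  }
  return out;
}

console.log(flatten({ user: { id: 123, name: 'John' } }));
// { 'user.id': 123, 'user.name': 'John' }
```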
API Reference
Logger
class Logger<TContext> {
static create(config: LoggerConfig, store?: LoggerStore): Logger;
for(context: string): Logger;
getStore(): LoggerStore<TContext>;
error(errorOrMessage: Error | string | unknown, messageOrMeta?: string | Meta, meta?: Meta): void;
warn(message: string, meta?: Meta): void;
info(message: string, meta?: Meta): void;
http(message: string, meta?: Meta): void;
verbose(message: string, meta?: Meta): void;
debug(message: string, meta?: Meta): void;
silly(message: string, meta?: Meta): void;
setLevelOverride(match: LevelOverrideMatch, level: LogLevel): void;
removeLevelOverride(match: LevelOverrideMatch): boolean;
clearLevelOverrides(): void;
getLevelOverrides(): LevelOverride[];
profile(id: string, meta?: object): void;
}
Types
type LogLevel = 'error' | 'warn' | 'info' | 'http' | 'verbose' | 'debug' | 'silly' | 'off';
type LogFormat = 'json' | 'plain' | 'logfmt' | 'simple';
type Meta = object | (() => object);
interface LoggerConfig {
level: LogLevel | {
default: LogLevel;
rules?: LevelRule[];
};
console: {
format: LogFormat;
level?: LogLevel;
rules?: LevelRule[];
};
file?: {
format: LogFormat;
level?: LogLevel;
rules?: LevelRule[];
dirname: string;
filename: string;
datePattern?: string;
maxFiles?: string;
maxSize?: string;
zippedArchive?: boolean;
};
}
interface LevelRule {
match: Record<string, unknown> & { context?: string };
level: LogLevel;
}
Integration Examples
Express
import express from 'express';
import { AppLogger, generateRequestId } from './logger';
const app = express();
app.use((req, res, next) => {
const context = {
requestId: req.headers['x-request-id'] || generateRequestId({ short: true }),
userId: req.user?.id,
};
AppLogger.getStore().run(context, () => next());
});
NestJS
import { Injectable, NestMiddleware } from '@nestjs/common';
import { AppLogger, generateRequestId } from './logger';
@Injectable()
export class LoggerMiddleware implements NestMiddleware {
use(req: any, res: any, next: () => void) {
const context = {
requestId: req.headers['x-request-id'] || generateRequestId({ short: true }),
};
AppLogger.getStore().run(context, () => next());
}
}
Telegraf
import { Telegraf } from 'telegraf';
import { AppLogger } from './logger';
bot.use((ctx, next) => {
const context = {
telegramUserId: ctx.from?.id,
chatId: ctx.chat?.id,
};
return AppLogger.getStore().run(context, () => next());
});
License
MIT