@rawnodes/logger

Flexible Winston-based logger with AsyncLocalStorage context, level rules, and multiple output formats.

Features

  • Multiple Formats - JSON, plain (colored), logfmt, simple
  • Context Propagation - Automatic context via AsyncLocalStorage
  • Level Rules - Configure log levels per module/context in config
  • Lazy Meta - Defer metadata creation until log level check passes
  • Timing Utilities - Built-in performance measurement
  • Request ID - Generate and extract request IDs
  • Secret Masking - Automatic masking of sensitive data
  • TypeScript First - Full generic type support

Installation

pnpm add @rawnodes/logger
# or
npm install @rawnodes/logger

Quick Start

import { Logger } from '@rawnodes/logger';

const logger = Logger.create({
  level: 'info',
  console: { format: 'plain' },
});

logger.info('Hello world');
logger.info('User logged in', { userId: 123 });

Configuration

Basic Config

import { Logger } from '@rawnodes/logger';

const logger = Logger.create({
  level: 'info',                    // default log level
  console: { format: 'plain' },     // console output format
  file: {                           // optional file output
    format: 'json',
    dirname: 'logs',
    filename: 'app-%DATE%.log',
    datePattern: 'YYYY-MM-DD',
    maxFiles: '14d',
    maxSize: '20m',
  },
});

Level Rules

Configure different log levels for specific modules or contexts:

const logger = Logger.create({
  level: {
    default: 'info',
    rules: [
      { match: { context: 'auth' }, level: 'debug' },        // debug for auth module
      { match: { context: 'database' }, level: 'warn' },     // warn for database
      { match: { userId: 123 }, level: 'debug' },            // debug for user 123
      { match: { context: 'api', userId: 456 }, level: 'silly' }, // combined match
    ],
  },
  console: { format: 'plain' },
});

const authLogger = logger.for('auth');
authLogger.debug('This will be logged');  // matches rule

const dbLogger = logger.for('database');
dbLogger.info('This will NOT be logged'); // level is warn

Rules from config are read-only and cannot be removed via the API.
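Conceptually, a rule fires when every key in its match object equals the corresponding field of the log call's context/store data. A minimal illustrative sketch of that subset-match (not the library's code):

```typescript
// A rule matches when every key/value pair in `match` is present
// and equal in the entry's fields (context name plus store data).
function ruleMatches(
  match: Record<string, unknown>,
  fields: Record<string, unknown>,
): boolean {
  return Object.entries(match).every(([key, value]) => fields[key] === value);
}

ruleMatches({ context: 'auth' }, { context: 'auth', userId: 1 });  // true
ruleMatches({ context: 'api', userId: 456 }, { context: 'api' });  // false (userId missing)
```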

Output Formats

Format | Description             | Example
json   | Structured JSON         | {"level":"info","message":"hello","timestamp":"..."}
plain  | Colored, human-readable | [2025-01-01T12:00:00] info [APP] hello
logfmt | Key=value pairs         | level=info msg=hello context=APP ts=2025-01-01T12:00:00
simple | Minimal                 | [2025-01-01T12:00:00] info: hello

// Different formats for console and file
const logger = Logger.create({
  level: 'info',
  console: { format: 'plain' },   // colored for development
  file: {
    format: 'json',               // structured for log aggregation
    dirname: 'logs',
    filename: 'app-%DATE%.log',
  },
});

Per-Transport Level

Each transport can have its own log level:

const logger = Logger.create({
  level: 'silly',                        // accept all at logger level
  console: {
    format: 'plain',
    level: 'debug',                      // console: debug and above
  },
  file: {
    format: 'json',
    level: 'warn',                       // file: only warn and error
    dirname: 'logs',
    filename: 'app-%DATE%.log',
  },
});

Scenario        | Console | File
Development     | debug   | (none)
Production      | info    | warn
Troubleshooting | debug   | info
Critical alerts | info    | error

This is useful for sending only critical errors to alerting systems while keeping verbose logs in the console.

Level precedence

Each transport decides on its own whether to emit a given log entry. The precedence (highest first) depends on whether the transport respects runtime overrides (see the next section):

When respecting overrides (observability transports: console/file/cloudwatch/relay by default):

  1. Runtime override from setLevelOverride(...) or the top-level level.rules array
  2. Per-transport rule (a matching entry in console.rules / file.rules / etc.)
  3. Per-transport level (console.level, file.level, ...)
  4. Global default level (level or level.default)

When ignoring overrides (alert transports: discord/telegram/zohoCliq by default):

  1. Per-transport rule
  2. Per-transport level
  3. Global default level

Practically: a matching setLevelOverride({ userId: 123 }, 'debug') will surface debug logs to observability transports (console/file/cloudwatch), even if their own level is set to info. But the same override will not leak through notification channels (discord/telegram/zohoCliq) unless you explicitly opt them in via respectRuntimeOverrides: true.
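The precedence can be sketched as a tiny resolver. This is an illustration of the documented order only, not the library's implementation (the real decision happens inside each transport):

```typescript
type LogLevel = 'error' | 'warn' | 'info' | 'http' | 'verbose' | 'debug' | 'silly' | 'off';

// Illustrative: pick the effective level for one transport, highest-priority source first.
function effectiveLevel(opts: {
  runtimeOverride?: LogLevel;        // matching setLevelOverride / top-level level.rules entry
  transportRule?: LogLevel;          // matching console.rules / file.rules entry
  transportLevel?: LogLevel;         // console.level, file.level, ...
  defaultLevel: LogLevel;            // level or level.default
  respectRuntimeOverrides: boolean;
}): LogLevel {
  if (opts.respectRuntimeOverrides && opts.runtimeOverride) return opts.runtimeOverride;
  return opts.transportRule ?? opts.transportLevel ?? opts.defaultLevel;
}

// Console (respects overrides): a runtime override beats its own 'info' level
effectiveLevel({
  runtimeOverride: 'debug', transportLevel: 'info',
  defaultLevel: 'info', respectRuntimeOverrides: true,
}); // -> 'debug'

// Telegram (ignores overrides): its own level still gates
effectiveLevel({
  runtimeOverride: 'debug', transportLevel: 'warn',
  defaultLevel: 'info', respectRuntimeOverrides: false,
}); // -> 'warn'
```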

Transport roles (respectRuntimeOverrides)

Every transport config accepts an optional respectRuntimeOverrides?: boolean flag. Defaults:

Transport  | Default | Role
console    | true    | observability
file       | true    | observability
cloudwatch | true    | observability
relay      | true    | observability
discord    | false   | alert
telegram   | false   | alert
zohoCliq   | false   | alert

The defaults reflect typical usage: console/file/cloudwatch are where operators look for logs during troubleshooting, while discord/telegram/zohoCliq are user-facing notification channels that should not get flooded by ad-hoc debug overrides. Override the default per transport when needed:

const logger = Logger.create({
  level: 'info',
  console: { format: 'plain' },
  telegram: {
    botToken: '...',
    chatId: '...',
    level: 'warn',
    respectRuntimeOverrides: true,   // let overrides through (rare)
  },
  cloudwatch: {
    logGroupName: '...',
    level: 'info',
    respectRuntimeOverrides: false,  // strict gate even under troubleshooting (also rare)
    region: '...', accessKeyId: '...', secretAccessKey: '...',
  },
});

Per-Transport Rules

Each transport can have its own filtering rules. Use level: 'off' with rules to create a whitelist pattern:

const logger = Logger.create({
  level: 'info',
  console: { format: 'plain' },
  file: {
    format: 'json',
    level: 'off',                              // off by default
    rules: [
      { match: { context: 'payments' }, level: 'error' },  // only errors from payments
      { match: { context: 'auth' }, level: 'warn' },       // warn+ from auth
    ],
    dirname: 'logs',
    filename: 'critical-%DATE%.log',
  },
});

// Only error logs from 'payments' context go to file
logger.for('payments').error('Payment failed');  // → file
logger.for('payments').info('Processing');       // ✗ not logged to file
logger.for('other').error('Generic error');      // ✗ not logged to file (no matching rule)

Rules can also match store context (AsyncLocalStorage):

const logger = Logger.create({
  level: 'info',
  console: { format: 'plain' },
  file: {
    format: 'json',
    level: 'off',
    rules: [
      { match: { userId: 123 }, level: 'debug' },           // debug for specific user
      { match: { context: 'api', premium: true }, level: 'debug' }, // debug for premium API users
    ],
    dirname: 'logs',
    filename: 'debug-%DATE%.log',
  },
});

// Logs for user 123 go to file
store.run({ userId: 123 }, () => {
  logger.debug('User action');  // → file
});

Use level: 'off' in rules to suppress specific contexts:

const logger = Logger.create({
  level: 'debug',
  console: {
    format: 'plain',
    level: 'debug',
    rules: [
      { match: { context: 'noisy-module' }, level: 'off' },  // suppress noisy logs
    ],
  },
});

logger.for('noisy-module').debug('Spam');  // ✗ not logged
logger.for('other').debug('Useful info');  // → logged

External Transports

Discord

Send logs to Discord via webhooks with rich embeds:

const logger = Logger.create({
  level: 'info',
  console: { format: 'plain' },
  discord: {
    webhookUrl: 'https://discord.com/api/webhooks/xxx/yyy',
    level: 'error',                    // only errors to Discord
    username: 'My App Logger',         // optional bot name
    avatarUrl: 'https://...',          // optional avatar
    embedColors: {                     // optional custom colors
      error: 0xFF0000,
      warn: 0xFFAA00,
    },
    batchSize: 10,                     // messages per batch (default: 10)
    flushInterval: 2000,               // flush interval ms (default: 2000)
  },
});

Telegram

Send logs to Telegram chats/channels:

const logger = Logger.create({
  level: 'info',
  console: { format: 'plain' },
  telegram: {
    botToken: process.env.TG_BOT_TOKEN!,
    chatId: process.env.TG_CHAT_ID!,
    level: 'warn',                     // warn and above
    parseMode: 'Markdown',             // 'Markdown' | 'MarkdownV2' | 'HTML'
    disableNotification: false,        // set true for silent delivery (no sound)
    threadId: 123,                     // optional forum topic ID
    replyToMessageId: 456,             // optional reply to message
    batchSize: 20,                     // default: 20
    flushInterval: 1000,               // default: 1000
  },
});

Use level: 'off' with rules for selective logging:

telegram: {
  botToken: '...',
  chatId: '...',
  level: 'off',                        // off by default
  rules: [
    { match: { context: 'payments' }, level: 'error' },  // only payment errors
  ],
}

CloudWatch

Send logs to AWS CloudWatch Logs:

const logger = Logger.create({
  level: 'info',
  console: { format: 'plain' },
  cloudwatch: {
    logGroupName: '/app/my-service',
    region: 'us-east-1',
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
    createLogGroup: true,              // auto-create group (default: false)
    createLogStream: true,             // auto-create stream (default: true)
    batchSize: 100,                    // default: 100
    flushInterval: 1000,               // default: 1000
  },
});

Log Stream Name

The logStreamName field is optional and supports multiple configuration formats:

// Option 1: Static string
cloudwatch: {
  logGroupName: '/app/my-service',
  logStreamName: 'my-static-stream',
  // ...
}

// Option 2: Predefined pattern
cloudwatch: {
  logGroupName: '/app/my-service',
  logStreamName: { pattern: 'hostname-date' },
  // ...
}

// Option 3: Custom template
cloudwatch: {
  logGroupName: '/app/my-service',
  logStreamName: { template: '{hostname}/{env}/{date}' },
  // ...
}

// Option 4: Omit for default (hostname)
cloudwatch: {
  logGroupName: '/app/my-service',
  // logStreamName defaults to hostname
  // ...
}

Available patterns:

Pattern       | Example Output
hostname      | my-server
hostname-date | my-server/2025-01-15
hostname-uuid | my-server-a1b2c3d4
date          | 2025-01-15
uuid          | a1b2c3d4-e5f6-7890-abcd

Template variables:

Variable   | Description
{hostname} | Server hostname (config.hostname → HOSTNAME env → os.hostname())
{date}     | Current date (YYYY-MM-DD)
{datetime} | Current datetime (YYYY-MM-DD-HH-mm-ss)
{uuid}     | 8-char UUID (generated once at startup)
{pid}      | Process ID
{env}      | NODE_ENV (default: "development")

Hostname resolution priority:

  • config.hostname (from logger config)
  • process.env.HOSTNAME (useful in Docker/Kubernetes)
  • os.hostname()
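The resolution order above amounts to a simple fallback chain; an illustrative sketch (not the library's code):

```typescript
import * as os from 'os';

// Illustrative: config.hostname wins, then the HOSTNAME env var, then os.hostname()
function resolveHostname(configHostname?: string): string {
  return configHostname ?? process.env.HOSTNAME ?? os.hostname();
}

resolveHostname('my-app-pod-abc123'); // -> 'my-app-pod-abc123' (config wins)
resolveHostname();                    // -> HOSTNAME env var if set, else os.hostname()
```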

Examples:

// Kubernetes-friendly: pod name as stream
cloudwatch: {
  logGroupName: '/app/my-service/production',
  logStreamName: { pattern: 'hostname' },  // -> "my-app-pod-abc123"
  // ...
}

// Daily rotation with hostname
cloudwatch: {
  logGroupName: '/app/my-service',
  logStreamName: { pattern: 'hostname-date' },  // -> "my-server/2025-01-15"
  // ...
}

// Custom format with environment
cloudwatch: {
  logGroupName: '/app/my-service',
  logStreamName: { template: '{env}/{hostname}/{date}' },  // -> "production/my-server/2025-01-15"
  // ...
}

Relay (remote live-tail)

The relay transport runs in a worker thread, polls a config endpoint you control, and, when that endpoint returns enabled: true, opens a WebSocket and streams logs to it in real time. This lets you flip on live-tail debugging for a running prod service from a dashboard, without redeploying.

Logger.create({
  level: 'info',
  console: { format: 'json' },
  relay: {
    apiUrl: 'https://relay.internal.company.com',
    token: process.env.RELAY_TOKEN!,
    maskSecrets: true,                      // mask before forwarding (recommended)
    allowedWsHosts: ['ws.internal.company.com'], // only needed if the WS host differs from the apiUrl host
  },
});

Server contract (GET /api/relay/config with Authorization: Bearer <token>):

{
  "enabled": true,                 // false = stop streaming, drop logs cheaply
  "wsUrl":   "wss://...",          // WebSocket gateway to stream to
  "rules":   [{ "level": "debug", "context": "auth" }]  // optional filters
}

Security defaults to know about:

  • wsUrl is pinned to apiUrl's origin by default, so a compromised config endpoint cannot redirect your logs to an arbitrary host. Use allowedWsHosts to authorize alternates explicitly.
  • maskSecrets does NOT inherit from the parent logger: the relay runs in a worker thread and gets only the options you pass to its config. Pass it again here if you want forwarded logs masked. Off by default.
  • The relay writes one stderr line on first successful WS open ([RelayTransport] streaming logs to ...) so the fact of streaming is auditable in your normal log output.
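The pinning rule can be illustrated with a small check (a sketch of the documented behavior, not the library's code):

```typescript
// Illustrative: accept a wsUrl only if its host matches the apiUrl's host
// or appears in the explicit allow-list.
function wsUrlAllowed(wsUrl: string, apiUrl: string, allowedWsHosts: string[] = []): boolean {
  const wsHost = new URL(wsUrl).host;
  return wsHost === new URL(apiUrl).host || allowedWsHosts.includes(wsHost);
}

wsUrlAllowed('wss://relay.internal.company.com/s', 'https://relay.internal.company.com');
// true (same host)
wsUrlAllowed('wss://evil.example.com/s', 'https://relay.internal.company.com');
// false (rejected even if the config endpoint returned it)
wsUrlAllowed('wss://ws.internal.company.com/s', 'https://relay.internal.company.com',
             ['ws.internal.company.com']);
// true (explicitly allow-listed)
```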

Transport Options

All external transports support:

Option        | Default | Description
level         | (none)  | Log level filter
rules         | (none)  | Per-transport filtering rules
batchSize     | varies  | Max messages per batch
flushInterval | varies  | Flush interval in ms
maxRetries    | 3       | Retry count on failure
retryDelay    | 1000    | Base retry delay in ms
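For example, batching and retry behavior can be tuned per transport (the values below are illustrative choices, not defaults):

```typescript
import { Logger } from '@rawnodes/logger';

const logger = Logger.create({
  level: 'info',
  console: { format: 'plain' },
  discord: {
    webhookUrl: 'https://discord.com/api/webhooks/xxx/yyy',
    level: 'error',
    batchSize: 5,        // smaller batches, more frequent sends
    flushInterval: 500,  // flush every 500 ms
    maxRetries: 5,       // retry failed sends up to 5 times
    retryDelay: 2000,    // base delay between retries, in ms
  },
});
```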

Multiple Transports

You can configure multiple instances of the same transport type using arrays:

const logger = Logger.create({
  level: 'info',
  console: { format: 'plain' },
  discord: [
    {
      webhookUrl: 'https://discord.com/api/webhooks/payments/xxx',
      level: 'off',
      rules: [{ match: { context: 'payments' }, level: 'error' }],
    },
    {
      webhookUrl: 'https://discord.com/api/webhooks/auth/yyy',
      level: 'off',
      rules: [{ match: { context: 'auth' }, level: 'error' }],
    },
  ],
  telegram: [
    { botToken: 'token1', chatId: 'errors-chat', level: 'error' },
    { botToken: 'token2', chatId: 'alerts-chat', level: 'warn' },
  ],
});

// Payment errors go to payments Discord channel
logger.for('payments').error('Payment failed');

// Auth errors go to auth Discord channel
logger.for('auth').error('Login failed');

Graceful Shutdown

External transports (Discord, Telegram, CloudWatch) buffer messages before sending. To ensure no logs are lost on process exit, use graceful shutdown.

Enable autoShutdown in config to automatically handle SIGTERM/SIGINT:

const logger = Logger.create({
  level: 'info',
  console: { format: 'plain' },
  cloudwatch: { /* ... */ },
  autoShutdown: true,  // Auto-register shutdown handlers
});

// Or with options:
const logger = Logger.create({
  level: 'info',
  console: { format: 'plain' },
  cloudwatch: { /* ... */ },
  autoShutdown: {
    timeout: 10000,              // Max wait time (default: 5000ms)
    signals: ['SIGTERM'],        // Signals to handle (default: ['SIGTERM', 'SIGINT'])
  },
});

Manual

For more control, use registerShutdown or call shutdown() directly:

import { Logger, registerShutdown } from '@rawnodes/logger';

const logger = Logger.create(config);

// Option 1: Register handlers manually
registerShutdown(logger, {
  timeout: 10000,
  onShutdown: async () => {
    console.log('Flushing logs...');
  },
});

// Option 2: Call shutdown directly (e.g., in your own signal handler)
process.on('SIGTERM', async () => {
  await logger.shutdown();  // Flush all buffered messages
  process.exit(0);
});

In Tests

For tests, call shutdown() in afterAll to flush pending logs:

afterAll(async () => {
  await logger.shutdown();
});

Singleton Pattern

For app-wide logging:

// src/logger.ts
import { createSingletonLogger, type LoggerContext } from '@rawnodes/logger';

export interface AppContext extends LoggerContext {
  userId?: number;
  requestId?: string;
}

export const AppLogger = createSingletonLogger<AppContext>();

// src/main.ts
import { AppLogger } from './logger.js';

AppLogger.init({
  level: {
    default: 'info',
    rules: [{ match: { context: 'debug-module' }, level: 'debug' }],
  },
  console: { format: 'plain' },
});

// Anywhere in your app
const logger = AppLogger.for('UserService');
logger.info('User created', { userId: 123 });

Context Propagation

Automatically include context in all logs within an async scope:

// Express middleware
app.use((req, res, next) => {
  const context = {
    userId: req.user?.id,
    requestId: req.headers['x-request-id'] || generateRequestId(),
  };
  AppLogger.getStore().run(context, () => next());
});

// All logs within this request will include userId and requestId
logger.info('Processing request');
// Output: [2025-01-01T12:00:00] info [APP] Processing request
//   userId: 123
//   requestId: abc-123
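Under the hood this relies on Node's AsyncLocalStorage. A self-contained sketch of the mechanism (not the library's implementation):

```typescript
import { AsyncLocalStorage } from 'async_hooks';

// Anything "logged" inside run() sees the context without it being passed around.
const store = new AsyncLocalStorage<{ requestId?: string; userId?: number }>();

function log(message: string): Record<string, unknown> {
  const ctx = store.getStore() ?? {};  // empty outside any run() scope
  return { message, ...ctx };
}

store.run({ requestId: 'abc-123', userId: 123 }, () => {
  log('Processing request'); // includes requestId and userId automatically
});
log('Outside scope');        // no context attached
```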

Dynamic Level Overrides

Add/remove level overrides at runtime:

// Enable debug for specific user (e.g., for troubleshooting)
logger.setLevelOverride({ userId: 123 }, 'debug');

// Enable debug for specific module
logger.setLevelOverride({ context: 'payments' }, 'debug');

// Remove override
logger.removeLevelOverride({ userId: 123 });

// Clear all dynamic overrides (keeps config rules)
logger.clearLevelOverrides();

// Get all overrides
const overrides = logger.getLevelOverrides();
// [{ match: { context: 'payments' }, level: 'debug', readonly: false }]

Lazy Meta

Defer expensive object creation:

// BAD: Object created even if debug is disabled
logger.debug('Data processed', { result: expensiveSerialize(data) });

// GOOD: Function only called when debug is enabled
logger.debug('Data processed', () => ({ result: expensiveSerialize(data) }));

Child Loggers

Create scoped loggers:

const logger = AppLogger.for('PaymentService');
logger.info('Processing payment');
// Output: [timestamp] info [PaymentService] Processing payment

const stripeLogger = logger.for('Stripe');
stripeLogger.info('Charging card');
// Output: [timestamp] info [Stripe] Charging card

Utilities

Timing

import { measureAsync, measureSync } from '@rawnodes/logger';

// Async
const { result, timing } = await measureAsync('fetch-users', async () => {
  return await userService.findAll();
});
console.log(timing); // { label: 'fetch-users', durationMs: 45.23, durationFormatted: '45.23ms' }

// Sync (bindings renamed to avoid redeclaring `result`/`timing` from above)
const { result: value, timing: syncTiming } = measureSync('compute', () => {
  return heavyComputation();
});

Request ID

import { generateRequestId, extractRequestId, getOrGenerateRequestId } from '@rawnodes/logger';

// Generate new
generateRequestId();                          // "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
generateRequestId({ short: true });           // "a1b2c3d4"
generateRequestId({ prefix: 'req' });         // "req-a1b2c3d4-e5f6-..."

// Extract from headers (checks x-request-id, x-correlation-id, x-trace-id)
extractRequestId(req.headers);                // string | undefined

// Extract or generate
getOrGenerateRequestId(req.headers);          // always returns string

Secret Masking

Two ways to use it: a global flag that masks every log automatically, or manual utilities when you want fine-grained control.

Set maskSecrets: true in Logger.create to mask all output across console / file / CloudWatch / Discord / Telegram / Zoho Cliq. Backwards compatible: masking is off by default.

const logger = Logger.create({
  level: 'info',
  console: { format: 'json' },
  file: { format: 'json', dirname: './logs', filename: 'app.log' },
  maskSecrets: true,
});

logger.info('login', { user: 'alice', password: 'hunter2' });
// {"level":"info","message":"login","user":"alice","password":"***",...}

logger.info('connect', { url: 'https://u:p@db.example.com/foo' });
// URL credentials masked: "https://***:***@db.example.com/foo"

Custom patterns / mask string:

Logger.create({
  ...,
  maskSecrets: {
    patterns: ['ssn', 'creditCard'],   // replaces the defaults (does NOT add to them)
    mask: '[REDACTED]',
  },
});

Default masked patterns: password, secret, token, apikey, api_key, api-key, auth, credential, private (case-insensitive substring match on key names).
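Because matching is by substring, broader keys can be caught too. A self-contained sketch of the documented matching rule (not the library's code):

```typescript
// The documented defaults, matched case-insensitively as substrings of key names.
const DEFAULT_PATTERNS = [
  'password', 'secret', 'token', 'apikey', 'api_key',
  'api-key', 'auth', 'credential', 'private',
];

const isSensitiveKey = (key: string): boolean =>
  DEFAULT_PATTERNS.some((p) => key.toLowerCase().includes(p));

isSensitiveKey('userPassword');  // true ('password' is a substring)
isSensitiveKey('authorization'); // true ('auth' matches)
isSensitiveKey('author');        // also true: 'auth' matches as a substring,
                                 // so pick custom patterns if this over-masks
```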

Performance: roughly 2.5 µs added per log call on a typical payload. Implemented via a JSON.stringify replacer for json/CloudWatch (zero clone) and a one-shot clone for HTTP transports that build markdown/embed payloads (Discord/Telegram/Zoho). For most services (< 10k logs/sec) the cost is negligible.

Manual utilities

If you only want to mask specific calls, or need a JSON.stringify replacer for your own serialization, three helpers are exported:

import { maskSecrets, createMasker, maskReplacer } from '@rawnodes/logger';

// One-shot: returns a deep-cloned masked object
maskSecrets({ user: 'admin', password: 'secret' });
// { user: 'admin', password: '***' }

// Reusable: pre-resolves options, slightly faster for repeated calls
const masker = createMasker({ patterns: ['ssn'], mask: '[REDACTED]' });
masker({ ssn: '123-45-6789' }); // { ssn: '[REDACTED]' }

// JSON.stringify replacer: zero allocations, masks during serialization
JSON.stringify(payload, maskReplacer());
JSON.stringify(payload, maskReplacer({ patterns: ['ssn'] }), 2);

Error logging

error() accepts an error value as the first argument. This keeps call sites short in the common case: you forward whatever catch produced without pre-normalizing it.

try {
  await doWork();
} catch (err) {
  // err is unknown here; it passes straight through, no cast needed
  logger.error(err, 'doWork failed', { userId });
}

Supported shapes:

logger.error('something broke');                       // message only
logger.error('something broke', { retries: 3 });       // message + meta
logger.error(err);                                     // error only (uses err.message)
logger.error(err, 'doWork failed');                    // error + message
logger.error(err, { userId });                         // error + meta
logger.error(err, 'doWork failed', { userId });        // error + message + meta

The first argument is interpreted as an error whenever it is not a string. That includes anything TypeScript's catch clause can produce (Error, plain objects, primitives, null, undefined). Non-Error values are normalized via serializeError:

Input                            | Log fields extracted
new Error('boom')                | errorMessage, stack, errorName (if not Error)
'string value'                   | errorMessage: 'string value'
axios error                      | errorMessage, stack, http: { status, url, ... }
fetch-style { response, config } | same, extracted heuristically
{ message: 'x', code: 'ECONN' }  | errorMessage, errorCode
null / undefined                 | errorMessage: 'Unknown error'

String-as-first-arg is always a message, never an error. If you want to log a string as an error, wrap it: logger.error(new Error(str)).
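The normalization rules above can be sketched as follows (illustrative only; the real serializeError also extracts HTTP payloads and error codes):

```typescript
// Sketch of the documented fallbacks for non-Error values.
function normalizeError(err: unknown): { errorMessage: string; stack?: string } {
  if (err instanceof Error) {
    return { errorMessage: err.message, stack: err.stack };
  }
  if (err !== null && typeof err === 'object' && 'message' in err) {
    return { errorMessage: String((err as { message: unknown }).message) };
  }
  if (err === null || err === undefined) {
    return { errorMessage: 'Unknown error' };
  }
  return { errorMessage: String(err) };  // primitives, including strings
}

normalizeError(new Error('boom')); // { errorMessage: 'boom', stack: '...' }
normalizeError({ message: 'x' });  // { errorMessage: 'x' }
normalizeError(null);              // { errorMessage: 'Unknown error' }
```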

Logfmt Utilities

Helper functions for logfmt format:

import { flattenObject, formatLogfmt, formatLogfmtValue } from '@rawnodes/logger';

flattenObject({ user: { id: 123, name: 'John' } });
// { 'user.id': 123, 'user.name': 'John' }

formatLogfmt({ level: 'info', msg: 'hello', userId: 123 });
// "level=info msg=hello userId=123"

API Reference

Logger

class Logger<TContext> {
  static create(config: LoggerConfig, store?: LoggerStore): Logger;

  for(context: string): Logger;
  getStore(): LoggerStore<TContext>;

  // Logging
  // error() accepts an error value as the first argument, including `unknown`
  // (what `catch (err)` produces under TypeScript's strict mode). Non-Error
  // values are normalized via `serializeError` so logs still get a sensible
  // `errorMessage` / `stack` / HTTP payload. See "Error logging" above.
  error(errorOrMessage: Error | string | unknown, messageOrMeta?: string | Meta, meta?: Meta): void;
  warn(message: string, meta?: Meta): void;
  info(message: string, meta?: Meta): void;
  http(message: string, meta?: Meta): void;
  verbose(message: string, meta?: Meta): void;
  debug(message: string, meta?: Meta): void;
  silly(message: string, meta?: Meta): void;

  // Level overrides
  setLevelOverride(match: LevelOverrideMatch, level: LogLevel): void;
  removeLevelOverride(match: LevelOverrideMatch): boolean;
  clearLevelOverrides(): void;
  getLevelOverrides(): LevelOverride[];

  // Winston profiling
  profile(id: string, meta?: object): void;
}

Types

type LogLevel = 'error' | 'warn' | 'info' | 'http' | 'verbose' | 'debug' | 'silly' | 'off';
type LogFormat = 'json' | 'plain' | 'logfmt' | 'simple';
type Meta = object | (() => object);

interface LoggerConfig {
  level: LogLevel | {
    default: LogLevel;
    rules?: LevelRule[];
  };
  console: {
    format: LogFormat;
    level?: LogLevel;      // optional, overrides global level
    rules?: LevelRule[];   // optional, per-transport filtering
  };
  file?: {
    format: LogFormat;
    level?: LogLevel;      // optional, overrides global level
    rules?: LevelRule[];   // optional, per-transport filtering
    dirname: string;
    filename: string;
    datePattern?: string;
    maxFiles?: string;
    maxSize?: string;
    zippedArchive?: boolean;
  };
}

interface LevelRule {
  match: Record<string, unknown> & { context?: string };
  level: LogLevel;
}

Integration Examples

Express

import express from 'express';
import { AppLogger, generateRequestId } from './logger';

const app = express();

app.use((req, res, next) => {
  const context = {
    requestId: req.headers['x-request-id'] || generateRequestId({ short: true }),
    userId: req.user?.id,
  };
  AppLogger.getStore().run(context, () => next());
});

NestJS

import { Injectable, NestMiddleware } from '@nestjs/common';
import { AppLogger, generateRequestId } from './logger';

@Injectable()
export class LoggerMiddleware implements NestMiddleware {
  use(req: any, res: any, next: () => void) {
    const context = {
      requestId: req.headers['x-request-id'] || generateRequestId({ short: true }),
    };
    AppLogger.getStore().run(context, () => next());
  }
}

Telegraf

import { Telegraf } from 'telegraf';
import { AppLogger } from './logger';

bot.use((ctx, next) => {
  const context = {
    telegramUserId: ctx.from?.id,
    chatId: ctx.chat?.id,
  };
  return AppLogger.getStore().run(context, () => next());
});

License

MIT

Keywords

logger

Package last updated on 29 Apr 2026