@botler/bim

Forever Canada operational utilities and scripts

latest: 3.9.65 · npm · 58 published versions · 1 maintainer

TypeScript Node.js Starter Template

A modern, production-ready TypeScript Node.js starter template with best practices, proper tooling, and essential configurations.

BIM Database Expansion Overview

The BIM Intelligence program now ships with a fully documented persistence layer covering BIM file cataloguing, workflow lifecycle tracking, reporter output storage, and incremental usage persistence. Phases 1–7 of the implementation plan are complete, providing:

  • Schema, repository, and service coverage for bim_files, project_bim_files, workflows, workflow_tasks, and workflow_results tables.
  • Application wiring that optionally persists planner, manager, and reporter executions via WorkflowPersistenceCoordinator.
  • Operational documentation and rollout guidance for staging and production deployments.

Key references:

  • src/database/README.md — repository and service usage.
  • src/bim-intelligence/usage-guide.md — workflow integration and persistence configuration.
  • src/bim-intelligence/docs/workflow-persistence-rollout.md — deployment runbook.
  • docs/workflows/usage-persistence/overview.md — incremental usage architecture, metrics, and operator workflow references.

Unified tool contract (for agent tools)

All agent-exposed tools in this repo return a standardized shape to simplify calling code and eliminate per-tool error branching:

  • ok: boolean — overall success flag
  • error?: { code: string; message: string } — present when ok is false
  • results: any[] — the tool’s primary payload

Conventions:

  • Not found isn’t an error. You’ll get ok: true with results: [] so callers can handle “no matches” uniformly.
  • Tools may include extra fields alongside the contract (for convenience or compatibility), but ok/error/results are the source of truth.
  • Common error codes include: VALIDATION_ERROR, CONFIG_MISSING, DEP_UNAVAILABLE, TIMEOUT, RATE_LIMITED, UNEXPECTED.

Example:

{ "ok": true, "results": [] }
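For illustration, the contract above can be expressed in TypeScript; the `ToolResult` interface and helper names below are our own sketch, not exports of this package:

```typescript
// Hypothetical typings for the contract described above. The field names
// (ok, error, results) come from this README; the type/helper names do not.
interface ToolError {
  code: string;
  message: string;
}

interface ToolResult<T = unknown> {
  ok: boolean;
  error?: ToolError;
  results: T[];
  // Tools may attach extra fields; ok/error/results remain the source of truth.
  [extra: string]: unknown;
}

// Success, including the "no matches" case (ok: true with empty results).
function toolOk<T>(results: T[]): ToolResult<T> {
  return { ok: true, results };
}

// Failure with one of the documented error codes (e.g. TIMEOUT, VALIDATION_ERROR).
function toolFail(code: string, message: string): ToolResult<never> {
  return { ok: false, error: { code, message }, results: [] };
}
```

Callers can then branch once on `ok` rather than writing per-tool error handling.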

See the specific tool READMEs for details and examples (e.g., Edmonton geocoding/overlays and vector search).

Quick links:

  • Vector search (query_by_text)
  • Edmonton geocoding
  • Edmonton overlay by point
  • Edmonton overlays by address
  • Drawing compliance planning & types: Project overview, Canonical interfaces, Tool stubs
  • BIM workflow persistence rollout: Runbook

🚀 Features

  • TypeScript: Fully configured with strict type checking
  • Modern tooling: ESLint, Prettier, Jest for development workflow
  • Environment management: Secure environment variable handling with validation
  • Logging: Structured logging with different levels
  • Error handling: Comprehensive error handling utilities
  • Testing: Jest testing framework with coverage reports
  • Build system: Clean build process with TypeScript compilation
  • Development: Hot reload with ts-node-dev
  • BIM workflow persistence: Database services and application helpers to capture BIM files, workflow executions, tasks, reporter results, and incremental usage metrics.
  • Usage reporting tooling: scripts/report-workflow-usage.ts provides a no-SQL-required way to inspect token totals per agent/model.

📦 What's Included

  • Pre-configured TypeScript with strict settings
  • ESLint + Prettier for code quality
  • Jest for testing with coverage
  • Environment variable validation with Zod
  • Structured logging utility
  • Error handling classes
  • Basic project structure
  • Essential npm scripts
  • Git hooks ready setup

🛠️ Quick Start

1. Clone and Setup

# Clone this template
git clone git@github.com:botler-ai/nodejs-typescript-starter.git my-new-project
cd my-new-project

# Install dependencies
npm install

2. Configure Environment

# Copy environment template
cp .env.example .env

# Edit .env file with your values
nano .env

3. Start Development

# Start development server with hot reload
npm run dev

# Or build and run production
npm run build
npm start

📚 Available Scripts

  • npm run dev — Start development server with hot reload
  • npm run build — Build the project for production
  • npm start — Run the built application
  • npm test — Run tests
  • npm run test:watch — Run tests in watch mode
  • npm run test:coverage — Run tests with coverage report
  • npm run lint — Lint the code
  • npm run lint:fix — Lint and fix issues automatically
  • npm run format — Format code with Prettier
  • npm run clean — Clean build directory

🗂️ Project Structure

├── src/
│   ├── config/          # Configuration files
│   │   └── env.ts       # Environment validation
│   ├── utils/           # Utility functions
│   │   ├── logger.ts    # Logging utility
│   │   └── errors.ts    # Error handling
│   ├── __tests__/       # Test files
│   └── index.ts         # Main entry point
├── dist/                # Build output
├── .env.example         # Environment template
├── .gitignore          # Git ignore rules
├── .eslintrc.js        # ESLint configuration
├── .prettierrc.js      # Prettier configuration
├── jest.config.js      # Jest configuration
├── tsconfig.json       # TypeScript configuration
└── package.json        # Project configuration

🔧 Configuration

Environment Variables

Copy .env.example to .env and configure:

  • NODE_ENV: Application environment (development/production/test)
  • LOG_LEVEL: Logging level (debug/info/warn/error)
  • Add your service-specific variables as needed

TypeScript

The TypeScript configuration includes:

  • Strict type checking
  • Modern ES2022 target
  • Source maps for debugging
  • Declaration files generation

ESLint & Prettier

Pre-configured for:

  • TypeScript support
  • Prettier integration
  • Common best practices
  • Consistent code style

🧪 Testing

# Run all tests
npm test

# Watch mode for development
npm run test:watch

# Generate coverage report
npm run test:coverage

Tests are located in src/__tests__/ and use Jest with TypeScript support.

📝 Development Guidelines

Adding New Features

  • Create your modules in appropriate src/ subdirectories
  • Add tests in src/__tests__/
  • Update environment variables in .env.example if needed
  • Update this README if necessary

Code Style

  • Use TypeScript strict mode
  • Follow ESLint rules
  • Format with Prettier
  • Write tests for new functionality
  • Use structured logging

Error Handling

Use the provided error classes:

import { AppError, ValidationError, NotFoundError } from "./utils/errors";

throw new ValidationError("Invalid input data");
throw new NotFoundError("User not found");
throw new AppError("Custom error", 500);
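The error classes themselves are not shown in this README; a plausible minimal implementation consistent with the usage above (the status codes are our assumption):

```typescript
// Minimal sketch matching the import example above; internals are assumed.
class AppError extends Error {
  constructor(
    message: string,
    public readonly statusCode: number = 500,
  ) {
    super(message);
    this.name = new.target.name; // e.g. "ValidationError" for subclasses
  }
}

class ValidationError extends AppError {
  constructor(message: string) {
    super(message, 400); // client sent malformed input
  }
}

class NotFoundError extends AppError {
  constructor(message: string) {
    super(message, 404); // requested resource does not exist
  }
}
```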

Logging

Use the structured logger:

import { logger } from "./utils/logger";

logger.info("Operation completed successfully");
logger.error("Operation failed", { error: err.message });
logger.debug("Debug information", { data });
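The logger internals are likewise not part of this README; one way such a structured logger could be sketched (JSON-lines output is our assumption):

```typescript
// Illustrative structured logger along the lines of src/utils/logger.ts;
// the real implementation may differ. Each call emits one JSON line.
type Level = "debug" | "info" | "warn" | "error";

function formatEntry(level: Level, message: string, meta?: object): string {
  return JSON.stringify({
    level,
    message,
    timestamp: new Date().toISOString(),
    ...meta, // structured context, e.g. { error: err.message }
  });
}

const logger = {
  debug: (msg: string, meta?: object) => console.debug(formatEntry("debug", msg, meta)),
  info: (msg: string, meta?: object) => console.info(formatEntry("info", msg, meta)),
  warn: (msg: string, meta?: object) => console.warn(formatEntry("warn", msg, meta)),
  error: (msg: string, meta?: object) => console.error(formatEntry("error", msg, meta)),
};
```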

📊 Workflow Usage Reporting

  • Review docs/workflows/usage-persistence/overview.md for the incremental usage architecture, observability metrics, and operator workflow.
  • Run npx ts-node scripts/report-workflow-usage.ts --workflow-id=<uuid> --include-tasks to inspect planner/manager/reporter/e2b-reader totals without writing SQL. Optional flags (--role=manager, --model=gpt-4o, --json) help slice per agent or model.
  • Persisted data lives in the normalized workflow_usage_totals and workflow_task_usage tables, accessible through WorkflowUsageService for custom dashboards or exports.
  • Metrics emitted under bim_intelligence.workflow.usage_persistence_* expose queue depth, drop rates, and success/failure counters with role/model labels for alerting.

🔒 Security

  • Environment variables are validated and type-safe
  • No sensitive data in version control
  • Secure defaults for production
  • Error messages don't leak sensitive information

📦 Production Deployment

  • Set NODE_ENV=production
  • Apply the BIM database expansion migration (20251110T120000_bim_database_expansion.sql).
  • Configure production environment variables (database credentials, OpenAI keys).
  • Build the application: npm run build
  • Start with: npm start
  • Follow the rollout runbook (src/bim-intelligence/docs/workflow-persistence-rollout.md) for verification and monitoring steps.

🤝 Contributing

  • Fork the repository
  • Create a feature branch
  • Make your changes
  • Add tests
  • Run linting and tests
  • Submit a pull request

📄 License

This project is marked UNLICENSED: it is private and not licensed for public use or redistribution.

Happy coding! 🎉

Using this package from other repos (subpath imports)

Import only what you need using subpath exports:

// Database module
import {
   DatabaseService,
   DatabaseConnection,
   getDatabaseConfig,
} from "@botler/bim/database";

const cfg = getDatabaseConfig();
const conn = DatabaseConnection.getInstance(cfg);
const db = new DatabaseService(conn);
await db.initialize();

const registrations = await db
   .getRegistrationService()
   .searchRegistrations({ volunteer: true }, { limit: 20 });

// Batch service
import { BatchService } from "@botler/bim/batch-service";
// Email utilities
import { EmailService } from "@botler/bim/email";
// PDF filler
import { PDFFormFiller } from "@botler/bim/pdf-filler";

Available subpaths:

  • @botler/bim/database
  • @botler/bim/batch-service
  • @botler/bim/email
  • @botler/bim/pdf-filler

Compatibility note (Aug 2025): Repositories and services now accept an optional DatabaseConnection in their constructors to support multi-connection use cases. If omitted, they default to the shared singleton connection, preserving backward compatibility for existing consumers and scripts.

Signing Locations module

See src/database/README.signing-locations.md for the schema, migration steps, seeding, and the full SigningLocationService API. Quickstart:

import { DatabaseService } from "@botler/bim/database";

const db = new DatabaseService();
await db.initialize();

const svc = db.getSigningLocationService();
const { location } = await svc.createLocation({ name: "Supermarket" }, [
   { start_at: new Date(), end_at: new Date(Date.now() + 3600_000) },
]);

Note: For the database module, call DatabaseConnection.getInstance(config) once with a valid config before any repository/service is used.

🕵️ Fraud detection (Chat API with address-aware checks)

This repo includes a fraud detection workflow that evaluates whether a canvasser sign-up appears legitimate. It uses the Chat Completions API (not the Responses API) and enforces a strict JSON output schema: { "legitimate": boolean }.

Key points — the check evaluates these sign-up fields:

  • first_name, last_name, email
  • physicalAddress, physicalMunicipality, physicalPostalCode
  • mailingAddress, mailingMunicipality, mailingPostalCode

Files involved

Environment

Usage (optional)

# Prepare a batch JSONL with address-aware inputs
npm run script:fraud-detection-batch

# Create a small DB-backed sample (e.g., 5 records)
npm run script:fraud-detection-sample -- --limit=5

# One-step (end-to-end): generate sample, submit, wait (poll), save, and print a preview
npm run script:fraud-detection-sample:submit -- --limit=5

# End-to-end flow helpers (prepare, submit, monitor, process)
npm run script:fraud-detection-e2e

Output schema

{ "legitimate": true }

Note: We intentionally remain on the Chat Completions API for batch bodies and single-call scripts.
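Because the output schema is strict, downstream code can validate each model response before acting on it. A hedged sketch of such a check (the `parseFraudVerdict` helper is hypothetical, not part of this repo):

```typescript
// Hypothetical helper: validate a model response against the documented
// { "legitimate": boolean } output schema before trusting it.
interface FraudVerdict {
  legitimate: boolean;
}

function parseFraudVerdict(raw: string): FraudVerdict {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error("Model output is not valid JSON");
  }
  if (
    typeof parsed !== "object" ||
    parsed === null ||
    typeof (parsed as Record<string, unknown>).legitimate !== "boolean"
  ) {
    throw new Error("Model output does not match { legitimate: boolean }");
  }
  return parsed as FraudVerdict;
}
```

Rejecting non-boolean values (e.g. `"legitimate": "yes"`) keeps batch post-processing deterministic.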

How records are selected from the database

Both the full batch and the sample flows select canvassers with these conditions:

Run a 5-record end-to-end sample (with polling)

Requirements:

Command:

# Loads env, generates a 5-record sample, submits to OpenAI, polls until complete, saves output, and prints a short preview
npm run script:fraud-detection-sample:submit -- --limit=5

Notes:

  • Results and local artifacts are kept under temp/01-batches/<local-id>/; for samples, the JSONL input also saves under temp/01-fraud-detection-batches/.


Package last updated on 05 Feb 2026
