Router Masterbot Tools

MockService

As MockService is a new addition to an older codebase, it requires some setup. The service is written in TypeScript and, to avoid breaking the existing codebase, it is kept as it is. Before publishing the package, the service needs to be compiled to JavaScript (npm run build).

This module provides Wingbot Processor plugins extending the Wingbot core functionality. In a project, such a plugin is initialized in the following way:

const { Processor } = require('wingbot');
const { PLUGIN_CLASS } = require('router-masterbot-tools');

const processor = new Processor(...);
processor.plugin(new PLUGIN_CLASS());

Deployer

Deployer is a utility that automates application deployment to the MS Azure cloud using the Azure CLI. The CLI is invoked via exec, as it is not provided as an API or a Node.js library.

The basic usage is:

az login
npm run deploy -- DEPLOYMENT_TARGET

NOTE: -- is not needed in this particular case; however, it is required whenever additional options are passed.

The first step logs in to the Azure account interactively. The deployment command then performs the following steps:

  1. Exports the HEAD branch of the Git repository to the directory ./tmp/deployment.TARGET (the ./tmp folder is created if it does not exist).

  2. Runs npm install and builds the application.

  3. Executes a sequence of Azure CLI commands needed to deploy it to Azure using the ZIP deploy method. The deployment is sent to the inactive slot and a slot swap is triggered after the deploy (so auto slot-swap is expected to be disabled on the Azure side).

To get the list of deployment targets (defined in the configuration file DEPLOY_SCRIPT_PARENT_DIR/deployments.json), run:

npm run deploy -- -l

There are more options available; see the output of

npm run deploy -- -h

for more details.

A detailed log is created in tmp/deployment.ENV.log.

Logger

The library provides a logger with the following features:

  1. Logging to Azure AppInsights.
  2. Logging to console (with some format standardization).
  3. Logging to process memory: the configured amount of logs is stored in process memory and exposed as an API.

Usage

A common usage in a project is:

const { logger } = require('@cs-chatbots/router-masterbot-tools');
const log = logger(APPLICATION_CONFIG);
log.info(...) // console.* like logging

Configuration

The configuration is expected to contain the property logger, where the following properties are recognized:

| Parameter | Type | Mandatory | Description |
|---|---|---|---|
| logLevel | string | N | The lowest log level that is not ignored. Supported values: log, info, warn, error. The default value is info. |
| stringifyObjects | boolean | N | Affects only console logging. If set to true, complex objects are logged in a stringified form. Default is false. |
| api | object | N | Presence of this configuration option switches on logging to the process memory. |
| api.memAllowedMB | number | Y | Maximum amount of memory consumed by the in-memory logs (more exactly: the storage used is counted in a JSON-stringified form and the garbage collector is triggered synchronously AFTER adding a record, so the configured value may be exceeded by the size of one log record). |
| api.user | string | Y | User for the log API. |
| api.password | string | Y | Password for the log API. |
| api.debugAuth | boolean | N | Whether an incorrect authentication attempt should be logged along with sensitive details (default: false). |
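A configuration object matching the options above might look like this; all values are illustrative examples, not library defaults:

```javascript
// Illustrative application config for the logger described above.
const APPLICATION_CONFIG = {
    logger: {
        logLevel: 'warn',         // ignore 'log' and 'info' records
        stringifyObjects: true,   // stringify complex objects on the console
        api: {                    // presence of `api` enables in-memory logging
            memAllowedMB: 10,     // in-memory log budget (JSON-stringified size)
            user: 'log-user',         // placeholder credential
            password: 'log-password', // placeholder credential
            debugAuth: false
        }
    }
};

module.exports = APPLICATION_CONFIG;
```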

Log API

To log to process memory along with the other methods, the logger configuration must contain the api object. To expose the in-memory log as an API, include the following code:

const express = require('express');
const { logger } = require('@cs-chatbots/router-masterbot-tools');
const app = express();
const log = logger(APPLICATION_CONFIG);
log.bindRouter(app);

The API is exposed on the route /logs. To call the API, provide an HTTP header Authorization in the form Basic SECRET, where SECRET is the base64-encoded string API_USER:API_PASSWORD. The API supports the GET method with the following query parameters:

| Parameter | Description |
|---|---|
| level | A comma-separated list of log levels to be returned (log, info, warn, error). |
| pattern | A regex applied to the log message. |
| arg | A comma-separated list of conditions applied to complex arguments (all conditions must be true). Each condition has the form PATH:VALUE, e.g. /root_prop/child_prop:VALUE. If an expression somewhere in the path is an array, the condition is considered truthy if it passes for at least one array element. |
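The Authorization header described above can be built with Node's Buffer; the credentials and host below are placeholders:

```javascript
// Build the Basic auth header from the configured log API credentials
// (placeholder values, matching the api.user / api.password config).
const user = 'log-user';
const password = 'log-password';
const secret = Buffer.from(`${user}:${password}`).toString('base64');
const headers = { Authorization: `Basic ${secret}` };

// Example request for warnings and errors whose message mentions "timeout":
// GET /logs?level=warn,error&pattern=timeout
// fetch('https://example.invalid/logs?level=warn,error&pattern=timeout', { headers })
//     .then((res) => res.json());
```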

Management API

The Management API provides basic information about application health.

Usage

const { managementApi } = require('@cs-chatbots/router-masterbot-tools');
const express = require('express');
const app = express();
managementApi(app, { CONFIGURATION });

This code exposes GET endpoints:

/management/health
/management/health/liveness
/management/health/readiness
/management/info

In case we need to attach the API to a child router that is already mapped to the path /management (or another prefix), we may use the CONFIGURATION

{
    routePrefix: ''
}

The Readiness Check

For /management/health/readiness, which is the most complex check, we typically need to provide handlers for the DB check, Redis check and Watson check (for the last one the library provides a factory). The initialization of the management API with these handlers looks like:

managementApi(
    router, {
        readinessHandler: managementApi.readinessHandlerFactory({
            dbCheck: async () => {
                ...
                return Promise.reject(...);
            },
            redisCheck: async () => {
                ...
                return Promise.reject(...);
            },
            watsonCheck: managementApi.watsonCheckFactory({ config: CONFIG }),
            probeUris: READINESS_PROBE_URIS,
            log: LOGGER
        }),
        infoHandler: (req, res) => {
            res.send(BUILD_INFO_JSON);
        }
    }
);

In the code fragment above

  • CONFIG means a usual bot application configuration.
  • READINESS_PROBE_URIS is a semicolon-separated list of URIs to check (a GET request is sent to each of them and a response with an HTTP code < 300 is expected). Applications are expected to configure this list in process.env.READINESS_PROBE_URIS. An undefined or empty value means that this check will not be applied.
  • LOGGER is any console compatible logger.
  • BUILD_INFO_JSON is an optional JSON with the build info. The library uses its own /management/info implementation if the infoHandler is not provided.
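The probe-URI check described above could be sketched as follows; `parseProbeUris` and `checkProbes` are illustrative helper names, not part of the library:

```javascript
// Split the semicolon-separated READINESS_PROBE_URIS value into URIs;
// an undefined or empty value yields an empty list (check skipped).
function parseProbeUris(value) {
    return (value || '').split(';').map((s) => s.trim()).filter(Boolean);
}

// Send a GET request to each URI and require an HTTP status below 300
// (uses the global fetch available in Node 18+).
async function checkProbes(value) {
    const uris = parseProbeUris(value);
    await Promise.all(uris.map(async (uri) => {
        const res = await fetch(uri);
        if (res.status >= 300) {
            throw new Error(`Readiness probe failed for ${uri}: ${res.status}`);
        }
    }));
}
```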

Requirements

The deployer should work on Windows, Linux and macOS provided the following packages are installed:

  1. Azure CLI
  2. Git
  3. GNU zip/unzip

Adding The Deployer to The Project

To make use of the deployer in a project we need to:

  1. Create the configuration file deployments.json.

  2. Create a simple deployment script which looks like

const { Deployer } = require('@cs-chatbots/router-masterbot-tools');
const deployer = new Deployer({ cfgFile: `${__dirname}/deployments.json` });
deployer.execute(process.argv.slice(2));
  3. Add the deploy script to package.json:
    "deploy": "node ./deployment/local/deploy.js"

The Deployment Configuration File

The JSON configuration file specifies the deployment targets and other details related to the deployment. It has the following format:

{
    "targets": { ... },
    "buildCmd": "...",
    "filesToRemove": ["FILE_1", "FILE_2", ...]
}

buildCmd and filesToRemove are optional. targets must specify at least one deployment target in the following format:

"DEPLOYMENT_TARGET_ID": {
    "subscription": "SUBSCRIPTION_NAME",
    "appName": "APPLICATION_NAME",
    "resourceGroup": "RESOURCE_GROUP",
    "deploymentSlot": "DEPLOYMENT_SLOT",
    "env": {
        "VARIABLE_NAME": "VARIABLE_VALUE"
    }
}

DEPLOYMENT_TARGET_ID may contain only alphanumeric characters, dash and underscore. env is optional.
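Putting the pieces together, a minimal deployments.json might look like this; all names and values below are illustrative placeholders:

```json
{
    "buildCmd": "npm run build",
    "filesToRemove": [".env"],
    "targets": {
        "test": {
            "subscription": "my-subscription",
            "appName": "my-bot-app",
            "resourceGroup": "my-resource-group",
            "deploymentSlot": "staging",
            "env": {
                "NODE_ENV": "test"
            }
        }
    }
}
```

With this file in place, `npm run deploy -- test` would deploy to the `test` target.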

SharedContextPlugin

The purpose of this plugin is to simplify handling of a shared context in a multi-bot environment. The plugin ensures the following functionality:

  1. The shared context sent by the router in the pass_thread_control event is available in req.sharedContext and it is also stored in the conversation state.

  2. Adds the method setSharedContext(data: Object) to the Responder object. This method informs the router about shared context updates and also updates the local copy of the shared context in the conversation state. The shared context is not overwritten but merged.

  3. Overrides Responder.trackAsSkill() to store the skill not only in the conversation state but also in the shared context. Additionally, it saves appId in the shared context every time a skill is stored.

Log Plugin

How to setup in a bot

Make sure this package is installed:

npm i @cs-chatbots/router-masterbot-tools

Then add the plugin to the processor in bot/processor.js (non-production only):

const { LogPlugin, SharedContextPlugin } = require('@cs-chatbots/router-masterbot-tools');
if (!config.isProduction) {
    processor.plugin(new SharedContextPlugin(stateStorage));
    processor.plugin(new LogPlugin());
}

The last step is to add the lines below to bot/bot.js (at the top of the file):

bot.use(/sudo-log-(.+)-(.+)/, (req, res) => {

    const text = req.text();

    if (text) {
        const command = text.split(' ');
        const [, , severity, control] = command;
        res.setLog(severity, control);
    }

});

Usage

With the log plugin registered, we may send debug information to the chat using the call

res.log.LOG_METHOD(message, ...params)

where res is the Wingbot responder object and LOG_METHOD stands for error, warn, info or debug. Although we can send any object in params, it is recommended to follow this structure for the params:

{
    meta: {
        event: EVENT_NAME,
        eventLabel: `EVENT_LABEL`
    },
    params: OBJECT_RELATED_TO_THE_EVENT
}

This format is understood by Webchat and enables displaying the debug info in a way specific to the event.
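For instance, a debug message following the recommended structure might be built like this; the event name and payload are made up for illustration:

```javascript
// Structured params for res.log.* calls; Webchat uses `meta.event`
// to decide how to render the entry (values below are illustrative).
const logParams = {
    meta: {
        event: 'ORDER_CREATED',          // hypothetical event name
        eventLabel: 'Order created'
    },
    params: { orderId: 42, total: 199.9 } // object related to the event
};

// With the plugin registered, the call would then be:
// res.log.info('Order processed', logParams);
```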

KnexLogger

KnexLogger is a class which adds an SQL performance logging capability to the Knex object. The usage is:

const connection = knex(...);
new KnexLogger({
    pool: connection,
    options: ...
}).start();

The main configuration options are:

  • logDurationsAboveMs - queries lasting longer than this value are logged
  • cutDurationsOverMs - queries lasting longer than this value are logged without waiting for their completion (minDurationMs is logged along with the query instead of the real query duration)

It is recommended to map the above properties to the environment variables KNEX_LOG_DURATIONS_ABOVE_MS and KNEX_CUT_DURATIONS_ABOVE_MS.
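The environment-variable mapping suggested above could be implemented like this; the helper name and fallback defaults are illustrative, not library behavior:

```javascript
// Map the recommended environment variables to KnexLogger options;
// the fallback values are illustrative, not library defaults.
function knexLoggerOptions(env = process.env) {
    return {
        logDurationsAboveMs: Number(env.KNEX_LOG_DURATIONS_ABOVE_MS || 200),
        cutDurationsOverMs: Number(env.KNEX_CUT_DURATIONS_ABOVE_MS || 5000)
    };
}

// Usage sketch:
// new KnexLogger({ pool: connection, options: knexLoggerOptions() }).start();
```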

Migrations/Seeding

You can find a few helper functions that are made to be reused across all our repositories:

  1. getMigrationDirs(): takes the path to the base Knex migration folder and adds other subdirectories as needed, based on DB_SEED_ENV.
  2. update(): returns the up and down functions used by Knex to perform the seeding.

The problem solved by the update function: We were using Knex migrations to insert seed data needed for the apps to work. The main problems were that many of the columns held JSON data, which is difficult to update, and the data were inserted across tens of seed files, making it harder to figure out the final state.

The solution: Simply put, one function gets the data for all the tables to be populated with seed data, and you add that function call to a migration file, which still has to be named after our date convention to achieve the correct migration order. The call truncates the entire table and inserts the data based on a declarative approach, which you can see inside the cz-azure/seed folder. The function is optimized so that the actual truncation and seeding are executed only once.

How to use:

  1. Use the folder system driven by the environment variable DB_SEED_ENV (more info in the webchat README).

  2. Under your country-cloudProvider folder, create a folder "seed" (if it does not already exist).

  3. Inside the seed folder, create a folder "tables" (if it does not already exist).

  4. Inside the tables folder, create JS files named exactly after the table to be seeded, e.g. apps.js.

  5. Such a file can either export an array of objects (properties are columns, values are data) OR an object of the form {data: [], base: {}}, where data is the same as in the first case and base is an object of properties shared by all the data. You can also export overrideFunction (knex: Knex, trx: Transaction, defaultFunction: truncatesAndInsertsData) to allow for further flexibility, such as dropping and re-creating a foreign key.

  6. Files inside the tables directory that start with _ are not treated as described above and are therefore useful as util files.

  7. All that is left is to create a migration file following the date convention inside the seed folder that includes these two lines: import { migrations } from '@cs-chatbots/router-masterbot-tools'; export default migrations.update(__filename);
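A seed table file as described in step 5 might look like this; the table name and columns are invented for illustration:

```javascript
// seed/tables/apps.js - seed data for the hypothetical "apps" table.
// `base` holds column values shared by every row, `data` the per-row values.
const appsSeed = {
    base: {
        enabled: true               // applied to every seeded row
    },
    data: [
        { id: 1, name: 'router' },
        { id: 2, name: 'webchat' }
    ]
};

module.exports = appsSeed;
```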

Monitoring Library

Usage

To use this library, some middlewares have to be applied and some endpoints have to be created. You can use the client profile as a reference.

1. Add a config for the Prometheus monitoring library

The config should look like this:

{
  version: process.env.IMAGE_TAG || pkg.version || '',
  namespace: process.env.KUBERNETES_POD_NAMESPACE || '',
  name: process.env.ELASTIC_APM_SERVICE_NAME || process.env.APP_NAME || process.env.APPLICATION || '',
  ipAddress: process.env.KUBERNETES_POD_IP || process.env.WEBCHAT_{APP}_LATEST_SERVICE_HOST || '',
  pod: process.env.KUBERNETES_POD_NAME || process.env.HOSTNAME || '',
  buildBranch: process.env.BRANCH_NAME,
  buildVersion: process.env.WEBCHAT_VERSION,
  distPath: join(__dirname, isProduction ? '../../' : '../../dist')
}

2. Initialize prometheus

To initialize Prometheus, create a file that creates a new instance of Prometheus with the required config and exports it.

3. Add monitoring and routes to Express

In your routes and middleware initialization, add the Express middleware and attach the routes. Note that the monitoring variable in the example is the Prometheus instance. That can look like this:

app.use(monitoring.getExpressMetricsMiddleware());
monitoring.attachToExpress(app);

4. Register monitoring to Knex

Monitoring is able to gather Knex stats as well. To achieve this, attach it to the Knex instance like this:

monitoring.registerKnex(knex, `${dbClient}:${process.env.APPLICATION}`);

5. Add build script and call it during the build

A build script has to be added as well. It gathers information about the build and writes it into a JSON file. It should be called when the service build is executed.

Make sure that git is included in the Dockerfile of the service you are building, as some info is gathered using the git command!

The build file might look like this and should be called at the end of the app build:

import monitoring from '../lib/monitoring/index.js';

monitoring.buildInfoFile();

Notable ENV variables

By build.js process:

  • WEBCHAT_VERSION - TeamCity's or ECP's ENV parameter from the deploy script
  • BRANCH_NAME - TeamCity's ENV parameter from the deploy script, or the actual git branch from git info
  • buildTime - from git info
  • branch - from git info
  • commit - from git info
  • tags - from git info

By POD's ENV parameters:

  • IMAGE_TAG - POD's image tag
  • KUBERNETES_POD_NAMESPACE - POD's namespace
  • ELASTIC_APM_SERVICE_NAME or APP_NAME - POD's name
  • KUBERNETES_POD_IP or WEBCHAT_{APP}_LATEST_SERVICE_HOST - POD's IP address
  • KUBERNETES_POD_NAME or HOSTNAME - POD's instance name

{APP} is the name of the app.

Package last updated on 14 Oct 2024
