@cs-chatbots/router-masterbot-tools
As MockService is a new addition to the old codebase, it requires some setup. The service is written in TypeScript and, in order not to break the old codebase, it is kept this way. Before publishing the package, the service needs to be compiled to JavaScript (`npm run build`).
This module provides Wingbot `Processor` plugins extending the Wingbot core functionality. In a project, such a plugin is initialized in the following way:
```
const { Processor } = require('wingbot');
const { PLUGIN_CLASS } = require('@cs-chatbots/router-masterbot-tools');

const processor = new Processor(...);
processor.plugin(new PLUGIN_CLASS());
```
Deployer is a utility for automating application deployment to the MS Azure cloud using the Azure CLI. The CLI is called via `exec`, as it is not available as an API or a Node.js library.
The basic usage is:

```
az login
npm run deploy -- DEPLOYMENT_TARGET
```

NOTE: the `--` is not needed in this particular case; however, it is needed whenever we add some options.
In the first step we interactively log in to the Azure account. The deployment command performs the following steps:

1. Exports the HEAD branch of the Git repository to the directory `./tmp/deployment.TARGET` (the `./tmp` folder is created if it does not exist).
2. Runs `npm install` and builds the application.
3. Executes a sequence of Azure CLI commands needed to deploy it to Azure using the ZIP deploy method. The deployment is sent to the inactive slot, and a slot swap is triggered after the deploy (so it is expected that auto slot-swap is disabled on the Azure side).
To get the list of deployment targets (defined in the configuration file `DEPLOY_SCRIPT_PARENT_DIR/deployments.json`) we may run:

```
npm run deploy -- -l
```

There are more options available; see the output of `npm run deploy -- -h` for more details. A detailed log is created in `tmp/deployment.ENV.log`.
The library provides a logger with the following features:
A common usage in a project is:

```
const { logger } = require('@cs-chatbots/router-masterbot-tools');

const log = logger(APPLICATION_CONFIG);
log.info(...) // console.* like logging
```
The configuration is expected to contain the property `logger`, where the following properties are recognized:
| Parameter | Type | Mandatory | Description |
|---|---|---|---|
| logLevel | string | N | The lowest log level that is not ignored. Supported values: `log`, `info`, `warn`, `error`. The default value is `info`. |
| stringifyObjects | boolean | N | Affects only console logging. If set to `true`, complex objects are logged in a stringified form. Default is `false`. |
| api | object | N | The presence of this configuration option switches on logging to the process memory. |
| api.memAllowedMB | number | Y | Maximum amount of memory consumed by the in-memory logs (more exactly: the storage used is counted in a JSON-stringified form and the garbage collector is triggered synchronously AFTER adding a record, so the configured value may be exceeded by the size of one log record). |
| api.user | string | Y | User for the log API. |
| api.password | string | Y | Password for the log API. |
| api.debugAuth | boolean | N | Whether incorrect authentication attempts should be logged along with sensitive details (default: `false`). |
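A configuration enabling both console and in-memory logging might look like the following sketch (the concrete values are illustrative, not defaults of the library):

```javascript
// Illustrative APPLICATION_CONFIG fragment; property names follow the table above
const APPLICATION_CONFIG = {
    logger: {
        logLevel: 'info',
        stringifyObjects: false,
        // the presence of `api` switches on in-memory logging
        api: {
            memAllowedMB: 16,
            user: 'API_USER',
            password: 'API_PASSWORD',
            debugAuth: false
        }
    }
};
```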
When we want to log in-memory along with the other methods, we have to provide a log configuration containing an object `api`. In order to expose this log as an API, we have to include the code:
```
const express = require('express');
const { logger } = require('@cs-chatbots/router-masterbot-tools');

const app = express();
const log = logger(APPLICATION_CONFIG);
log.bindRouter(app);
```
The API is exposed on the route `/logs`. In order to call the API we have to provide an HTTP header `Authorization` in the form `Basic SECRET`, where `SECRET` is the base64-encoded string `API_USER:API_PASSWORD`. It supports a GET method with the following query parameters:
| Parameter | Description |
|---|---|
| level | A comma-separated list of log levels to be returned (`log`, `info`, `warn`, `error`). |
| pattern | A regex that is applied to the log message. |
| arg | A comma-separated list of conditions applied to complex arguments (all conditions must be true). Each condition has the form `PATH:VALUE`, e.g. `/root_prop/child_prop:VALUE`. If an expression somewhere in the path is an array, the condition is considered truthy if it passes for at least one array element. |
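Assuming Node.js 18+ (built-in `fetch`), a call retrieving only warnings and errors whose message mentions "timeout" could be sketched as follows; the host and credentials are placeholders:

```javascript
// Placeholder credentials; use the api.user / api.password from your config
const user = 'API_USER';
const password = 'API_PASSWORD';

// Basic auth secret: base64 of "user:password"
const secret = Buffer.from(`${user}:${password}`).toString('base64');

// Return only warn/error records whose message matches the regex "timeout"
const query = new URLSearchParams({ level: 'warn,error', pattern: 'timeout' });

// fetch is global in Node.js >= 18; HOST is a placeholder
// const res = await fetch(`https://HOST/logs?${query}`, {
//     headers: { Authorization: `Basic ${secret}` }
// });
```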
Management API provides basic information about application health.
```
const { managementApi } = require('@cs-chatbots/router-masterbot-tools');
const express = require('express');

const app = express();
managementApi(app, { CONFIGURATION });
```
This code exposes the following GET endpoints:

- `/management/health`
- `/management/health/liveness`
- `/management/health/readiness`
- `/management/info`
In case we need to attach the API to a child router that is already mapped to the path `/management` (or another prefix), we may use the CONFIGURATION:

```
{
    routePrefix: ''
}
```
For `/management/health/readiness`, which is the most complex check, we typically need to provide handlers for the DB check, Redis check and Watson check (for the last one the library provides a factory). The initialization of the management API with these handlers looks like:
```
managementApi(
    router, {
        readinessHandler: managementApi.readinessHandlerFactory({
            dbCheck: async () => {
                ...
                return Promise.reject(...);
            },
            redisCheck: async () => {
                ...
                return Promise.reject(...);
            },
            watsonCheck: managementApi.watsonCheckFactory({ config: CONFIG }),
            probeUris: READINESS_PROBE_URIS,
            log: LOGGER
        }),
        infoHandler: (req, res) => {
            res.send(BUILD_INFO_JSON);
        }
    }
);
```
In the code fragment above:

- `CONFIG` means a usual bot application configuration.
- `READINESS_PROBE_URIS` is a semicolon-separated list of URIs to check (a GET request is sent to each of them and a response with an HTTP code < 300 is expected). Applications are expected to configure this list in `process.env.READINESS_PROBE_URIS`. An undefined or empty value means that this check will not be applied.
- `LOGGER` is any `console`-compatible logger.
- `BUILD_INFO_JSON` is an optional JSON with the build info. The library uses its own `/management/info` implementation if we don't provide the `infoHandler`.

The deployer should work on Windows, Linux and Mac provided the following packages are installed:
To make use of the deployer in a project we need to:

1. Create the configuration file `deployments.json`.
2. Create a simple deployment script which looks like:

```
const { Deployer } = require('@cs-chatbots/router-masterbot-tools');

const deployer = new Deployer({ cfgFile: `${__dirname}/deployments.json` });
deployer.execute(process.argv.slice(2));
```

3. Add a deploy script to `package.json`: `"deploy": "node ./deployment/local/deploy.js"`
The JSON configuration file specifies the deployment targets and other details related to the deployment. It has the following format:

```
{
    "targets": { ... },
    "buildCmd": "...",
    "filesToRemove": ["FILE_1", "FILE_2", ...]
}
```

`buildCmd` and `filesToRemove` are optional. `targets` must specify at least one deployment target in the following format:
```
"DEPLOYMENT_TARGET_ID": {
    "subscription": "SUBSCRIPTION_NAME",
    "appName": "APPLICATION_NAME",
    "resourceGroup": "RESOURCE_GROUP",
    "deploymentSlot": "DEPLOYMENT_SLOT",
    "env": {
        "VARIABLE_NAME": "VARIABLE_VALUE"
    }
}
```

`DEPLOYMENT_TARGET_ID` may contain only alphanumeric characters, dashes and underscores. `env` is optional.
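Putting it together, a minimal `deployments.json` with one hypothetical target might look like this (all names and values are placeholders, not real Azure resources):

```json
{
    "targets": {
        "test": {
            "subscription": "MY_SUBSCRIPTION",
            "appName": "my-bot-app",
            "resourceGroup": "my-resource-group",
            "deploymentSlot": "staging",
            "env": {
                "NODE_ENV": "test"
            }
        }
    },
    "buildCmd": "npm run build"
}
```

With this file in place, `npm run deploy -- test` would deploy to the hypothetical `test` target.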
The purpose of this plugin is to simplify handling of a shared context in a multi-bot environment. The plugin ensures the following functionality:

- The shared context sent by the router in the `pass_thread_control` event is available in `req.sharedContext` and is also stored in the conversation state.
- Adds the method `setSharedContext(data: Object)` to the `Responder` object. This method makes it possible to inform the router about a shared context update. The method also updates the local copy of the shared context in the conversation state. The shared context is not overwritten but merged.
- Overrides `Responder.trackAsSkill()` to store the skill not only in the conversation state but also in the shared context. Additionally, it saves `appId` in the shared context every time a skill is stored.
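The merge behavior of `setSharedContext` can be illustrated with a plain-object sketch; this only models the documented "merged, not overwritten" semantics with example values, it is not the library's implementation:

```javascript
// Current shared context stored in the conversation state (example values)
const current = { customerId: '123', lastSkill: 'faq' };

// Update passed to res.setSharedContext(...)
const update = { lastSkill: 'handover', appId: 'bot-a' };

// Merged, not overwritten: untouched keys survive, updated keys win
const merged = { ...current, ...update };
// merged: { customerId: '123', lastSkill: 'handover', appId: 'bot-a' }
```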
Make sure that you have installed this package:

```
npm i @cs-chatbots/router-masterbot-tools
```

Then you have to add the plugin to the processor in `bot/processor.js` (only for non-production):
```
const { LogPlugin, SharedContextPlugin } = require('@cs-chatbots/router-masterbot-tools');

if (!config.isProduction) {
    processor.plugin(new SharedContextPlugin(stateStorage));
    processor.plugin(new LogPlugin());
}
```
And the last step is to add the lines below to `bot/bot.js` (at the top of the file):

```
bot.use(/sudo-log-(.+)-(.+)/, (req, res) => {
    const text = req.text();
    if (text) {
        const command = text.split('-');
        const [, , severity, control] = command;
        res.setLog(severity, control);
    }
});
```
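For a message such as `sudo-log-error-on` (the message text here is just an example matching the regex above), splitting on dashes yields the severity and the on/off control:

```javascript
// Example command text matching /sudo-log-(.+)-(.+)/
const command = 'sudo-log-error-on'.split('-');

// Skip the "sudo" and "log" parts, keep severity and control
const [, , severity, control] = command;
// severity === 'error', control === 'on'
```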
With the log plugin registered we may send debug information to the chat using the call `res.log.LOG_METHOD(message, ...params)`, where `res` is the Wingbot responder object and `LOG_METHOD` stands for `error`, `warn`, `info` or `debug`. Although we can send any object in params, it is recommended to follow this structure for the params:
```
{
    meta: {
        event: EVENT_NAME,
        eventLabel: `EVENT_LABEL`
    },
    params: OBJECT_RELATED_TO_THE_EVENT
}
```
This format is understood by Webchat and makes it possible to display the debug info in a specific way depending on the event.
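A call following this structure might look like the sketch below; the event name and params are made up for illustration:

```javascript
// Hypothetical event; the payload structure follows the recommendation above
const payload = {
    meta: {
        event: 'order-lookup',
        eventLabel: 'Order lookup finished'
    },
    params: { orderId: 42, durationMs: 130 }
};

// res.log.info('Order lookup finished', payload);
```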
KnexLogger is a class which adds an SQL performance logging capability to the Knex object. The usage is:
```
const connection = knex(...);

new KnexLogger({
    pool: connection,
    options: ...
}).start();
```
The main configuration options are:

- `logDurationsAboveMs` - queries lasting longer than this value are logged.
- `cutDurationsOverMs` - queries lasting longer than `cutDurationsOverMs` are logged without waiting for their completion (`minDurationMs` is logged along with the query instead of the real query duration).

It is recommended to map the above properties to the env. variables `KNEX_LOG_DURATIONS_ABOVE_MS` and `KNEX_CUT_DURATIONS_ABOVE_MS`.
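The recommended env mapping could be sketched like this; the fallback defaults are illustrative, not values prescribed by the library:

```javascript
// Illustrative defaults; env variable names follow the recommendation above
const knexLoggerOptions = {
    logDurationsAboveMs: Number(process.env.KNEX_LOG_DURATIONS_ABOVE_MS || 200),
    cutDurationsOverMs: Number(process.env.KNEX_CUT_DURATIONS_ABOVE_MS || 5000)
};
```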
You can find a few helper functions that are made to be reused across all our repos...
Problem being solved by the update function: we were using Knex migrations to insert seed data needed for the apps to work. The main problems were that many of the columns contained JSON data, which is difficult to update, and the data were inserted across tens of seed files, making it harder to figure out the final state.

The solution: simply said, the solution is to have one function that gets the data for all the tables to be populated with seed data; you then add that function call in the migration file, which still has to be named after our date convention to achieve the correct migration order. This function call will truncate the entire table and insert the data based on a declarative approach, which you can see inside the `cz-azure/seed` folder. The function is optimized so that the actual truncate and seeding gets called only once.

How to use:

1. You need to use the folder system using the environment variable called `DB_SEED_ENV` (more info in the webchat README).
2. Under your country-cloudProvider folder, create a folder `seed` (if it doesn't already exist).
3. Inside the `seed` folder, create a folder `tables` (if it doesn't already exist).
4. Inside the `tables` folder, create JS files named exactly like the table that is going to be seeded, e.g. `apps.js`.
5. This file can either export an array of objects (properties are columns and values are data) OR an object such as `{ data: [], base: {} }`, where `data` is the same as in the first case and `base` is an object of shared properties for all the data. You can also export `overrideFunction(knex: Knex, trx: Transaction, defaultFunction: truncatesAndInsertsData)` to allow for further flexibility, such as dropping and creating a foreign key.
6. Files inside the `tables` directory that start with `_` are not going to be used as described above and are therefore useful as util files.
7. Now all that is left to do is to create a migration file following the date convention inside the seed folder that includes these two lines:

```
import { migrations } from '@cs-chatbots/router-masterbot-tools';
export default migrations.update(__filename);
```
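A hypothetical `seed/tables/apps.js` following the `{ data, base }` form might look like this (the table and column names are made up for illustration):

```javascript
// Hypothetical seed file: seed/tables/apps.js
const seed = {
    // shared column values applied to every row
    base: { enabled: true },
    // per-row column values; properties are columns
    data: [
        { id: 1, name: 'webchat' },
        { id: 2, name: 'masterbot' }
    ]
};

module.exports = seed;
```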
To use this library, some middlewares have to be applied and some endpoints have to be created. You can use client profile as a reference.
The config should look like this:
```
{
    version: process.env.IMAGE_TAG || pkg.version || '',
    namespace: process.env.KUBERNETES_POD_NAMESPACE || '',
    name: process.env.ELASTIC_APM_SERVICE_NAME || process.env.APP_NAME || process.env.APPLICATION || '',
    ipAddress: process.env.KUBERNETES_POD_IP || process.env.WEBCHAT_{APP}_LATEST_SERVICE_HOST || '',
    pod: process.env.KUBERNETES_POD_NAME || process.env.HOSTNAME || '',
    buildBranch: process.env.BRANCH_NAME,
    buildVersion: process.env.WEBCHAT_VERSION,
    distPath: join(__dirname, isProduction ? '../../' : '../../dist')
}
```
To initialize Prometheus, create a file where you create a new instance of Prometheus with the required config and export it.
In your routes and middleware initialization, add the express middleware and attach the routes. Please note that the `monitoring` variable in the example is the Prometheus instance. That can look like this:
app.use(monitoring.getExpressMetricsMiddleware());
monitoring.attachToExpress(app);
Monitoring is able to gather Knex stats as well. To achieve this we have to attach it to the Knex instance like this:
monitoring.registerKnex(knex, `${dbClient}:${process.env.APPLICATION}`);
A build script has to be added as well. This build script is used to gather info about the build and print it into a JSON file. It should be called when the build of the service is executed.
Make sure that git is included in the Dockerfile of the service you are building, as some info is gathered using the git command!
The build file might look like this and should be called at the end of the app build:
```
import monitoring from '../lib/monitoring/index.js';

monitoring.buildInfoFile();
```
The build info is gathered from two sources.

By the build.js process:

- `WEBCHAT_VERSION` - TeamCity's or ECP's ENV parameter from the deploy script
- `BRANCH_NAME` - TeamCity's ENV parameter from the deploy script, or the actual git branch from git info
- `buildTime` - from git info
- `branch` - from git info
- `commit` - from git info
- `tags` - from git info

By the POD's ENV parameters:

- `IMAGE_TAG` - POD's image tag
- `KUBERNETES_POD_NAMESPACE` - POD's namespace
- `ELASTIC_APM_SERVICE_NAME` or `APP_NAME` - POD's name
- `KUBERNETES_POD_IP` or `WEBCHAT_{APP}_LATEST_SERVICE_HOST` - POD's IP address
- `KUBERNETES_POD_NAME` or `HOSTNAME` - POD's instance name

`{APP}` is the name of the app.