Local Testing Framework
Local Testing Framework (LTF) is designed for developing Corva Lambda applications and generating workloads on local environments and remote machines.
Features
- Support for the following app types:
- Real-Time (Stream, based on WITS data)
- Polling (Scheduled)
- Languages: JavaScript (Node.js), Python, .NET, Java, etc. (full list of supported languages: https://aws.amazon.com/lambda/faqs/)
Requirements
Software
Utility Docker Images
- corva-api-lite - simplified and fast Corva API
- redis - for apps that need to save state between runs
- mongo - for apps that need to save state between runs
Usage
Install
Use npm to install the application globally:
npm i -g @corva/local-testing-framework
Update
If there are no breaking changes:
npm update -g @corva/local-testing-framework
Otherwise, you need to reinstall the LTF.
Configuring data sources
Currently, the app supports external MongoDB, local CSV and JSON files, and MongoDB JS scripts.
External MongoDB
To use an external MongoDB, first disable the local MongoDB setup by setting infra.mongo.url:
{
infra: {
mongo: {
url: 'mongodb://external-mongo.com',
},
}
}
Local MongoDB
To use a local MongoDB, enable its setup by setting infra.mongo.image and infra.mongo.port:
{
infra: {
mongo: {
image: 'mongo:latest',
port: 27018
},
}
}
Then specify the directory containing the data you want to import.
CSV / JSON
For CSV (TSV) and JSON data, specify the directory where the data you want to import is located. It should follow this structure:
import-dir/
├── database_1/
│ ├── collection_1.csv
│ ├── collection_2.json
│ ├── collection_3.tsv
│ └── collection_4.json
└── database_2/
├── collection_5.json
└── collection_6.tsv
NOTE: if you're exporting JSON data from MongoDB, use the --jsonArray flag to produce valid JSON.
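For instance, a collection can be dumped into the expected layout with mongoexport (the database name, collection name, and connection URI below are placeholders):

```shell
# Dump one collection as a valid JSON array into the import directory
# (names and URI are illustrative placeholders)
mongoexport \
  --uri="mongodb://localhost:27017/database_1" \
  --collection=collection_1 \
  --jsonArray \
  --out=import-dir/database_1/collection_1.json
```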
JS
This app allows importing data into the local MongoDB via MongoDB scripts. Compared to CSV/JSON import, this enables more advanced configuration of your data.
Here's an example of such script:
/**
* @var {import('mongodb').Db} db
*/
var error = true;
var mydb = db.getSiblingDB('test-database');
var myCollection = mydb.testcollection;
var res = [
myCollection.drop(),
myCollection.createIndex({ myfield: 1 }, { unique: true }),
myCollection.createIndex({ thatfield: 1 }),
myCollection.createIndex({ thatfield: 1 }),
myCollection.insert({ myfield: 'hello', thatfield: 'testing' }),
myCollection.insert({ myfield: 'hello2', thatfield: 'testing' }),
myCollection.insert({ myfield: 'hello3', thatfield: 'testing' }),
myCollection.insert({ myfield: 'hello3', thatfield: 'testing' }),
];
printjson(res);
if (error) {
print('Error, exiting');
quit(1);
}
To use this kind of import, provide the path to a directory containing such JS files.
Preparing your Lambda Application
For a Node.js-based example, see the project in the examples directory.
Dockerizing
To run a Lambda App via LTF, you'll need to create a Docker image. Use the following example:
FROM lambci/lambda:nodejs12.x
COPY . /var/task
If you need languages other than Node.js, you can find the corresponding base images on Docker Hub.
NOTE: the corva init command can generate a Dockerfile for you.
Building a Docker Image
If you want to use a prebuilt image, specify the lambda.image option.
If no lambda.image option is provided, or no such image is found, the LTF will try to build the image from sources on each run.
To build your image, run the following command:
docker build -t <lambda_name> .
Replace <lambda_name> with your actual Lambda App name, so it will be easier to recognize your images.
Prepare Local Testing Framework
- Install local-testing-framework:
npm i -g @corva/local-testing-framework
- (Optional) View the CLI help via corva --help
Help command
$ corva --help
corva <command>
Commands:
corva.js init [cwd] Generate .corvarc file
corva.js local Run lambda function locally
corva.js completion generate completion script
Options:
--skip-version-upgrade Display warning on version update [boolean] [default: false]
--help Show help [boolean]
--version Show version number
Init command
$ corva init --help
corva init [cwd]
Generate .corvarc file
Positionals:
cwd Directory where to generate a .corvarc file [default: "."]
Options:
--skip-version-upgrade Display warning on version update [boolean] [default: false]
--help Show help [boolean]
--version Show version number [boolean]
Local command
$ corva local --help
corva.js local
Run lambda function locally
Lambda options
--lambda.image Lambda docker image [string]
--lambda.type Lambda type [string] [choices: "scheduler", "stream"] [default: "scheduler"]
--lambda.handler Lambda handler function name [string] [default: "index.handler"]
--lambda.env Additional env KEY=VALUE pairs that should be passed to lambda [array] [default: ["LOG_LEVEL=DEBUG"]]
Infrastructure options
--infra.stopContainers Decide to stop the containers after the test run [boolean] [default: false]
--infra.redis.url External Redis URL for lambda cache [string]
--infra.redis.image Docker image for Redis [string] [default: "redis:latest"]
--infra.redis.port Docker port for redis [string] [default: 6380]
--infra.mongo.url External MongoDB URL for lambda cache [string]
--infra.mongo.image Docker image for MongoDB [string] [default: "mongo"]
--infra.mongo.port Docker port for MongoDB [string] [default: 27018]
Source options
--source.import Should the test data be imported or not [boolean] [default: true]
--source.dir Directory where the test-sources are located (relative to the project root) [string] [default: "./test-sources"]
--source.database Source database for events [string] [default: "corva"]
--source.collection Events source collection [string] [default: "runner#wits"]
Event options
--event.json Pass single event to lambda from provided file [string]
--event.assetId Event asset id [number] [default: 1234]
--event.companyId Event company id [number] [default: 1]
--event.appKey App key [string] [default: "my-company.my-drilling-app"]
--event.sourceType Event source type [string] [choices: "drilling", "drillout", "wireline", "frac"] [default: "drilling"]
--event.config.batchSize Max amount of records in event [number] [default: 10]
--event.config.interval Event invocation interval in seconds (min 60): [number] [default: 60]
--event.config.logType [choices: "time", "depth"] [default: "time"]
Export options
--export.enabled Enables data exporting [boolean] [default: false]
--export.collections Collections to export [array] [default: []]
--export.format Format for export [string] [choices: "json", "csv"] [default: "json"]
--export.fields Fields to be exported in csv [string] [default: {}]
Options:
--skip-version-upgrade Display warning on version update [boolean] [default: false]
--help Show help [boolean]
Running your App
Before running your app, you should decide what type of events it consumes. Currently, the app re-runner supports Real-Time (Stream) and Polling (Scheduled) apps.
Here are some examples of such events:
Stream Event:
[
{
"metadata": {
"apps": {
"corva.wits-depth-summary": {
"app_connection_id": 123
}
},
"app_stream_id": 456
},
"records": [
{
"asset_id": 1,
"timestamp": 1546300800,
"company_id": 24,
"version": 1,
"data": {
"hole_depth": 99.4,
"weight_on_bit": 1,
"state": "Some unnecessary drilling that's excluded"
}
},
{
"asset_id": 1,
"timestamp": 1546300800,
"company_id": 24,
"version": 1,
"data": {
"hole_depth": 99.4,
"weight_on_bit": 1,
"state": "Rotary Drilling"
}
},
{
"asset_id": 1,
"timestamp": 1546300900,
"company_id": 24,
"version": 1,
"data": {
"hole_depth": 99.5,
"weight_on_bit": 1,
"state": "Rotary Drilling"
}
},
{
"asset_id": 1,
"timestamp": 1546301000,
"company_id": 24,
"version": 1,
"data": {
"hole_depth": 99.9,
"weight_on_bit": 1,
"state": "Rotary Drilling"
}
},
{
"asset_id": 1,
"timestamp": 1546301100,
"company_id": 24,
"version": 1,
"data": {
"hole_depth": 100.3,
"weight_on_bit": 1,
"state": "Rotary Drilling"
}
},
{
"asset_id": 1,
"timestamp": 1546301200,
"company_id": 24,
"version": 1,
"data": {
"hole_depth": 100.5,
"weight_on_bit": 1,
"state": "Rotary Drilling"
}
},
{
"asset_id": 1,
"timestamp": 1546301300,
"company_id": 24,
"version": 1,
"data": {
"hole_depth": 100.6,
"weight_on_bit": 1,
"state": "Rotary Drilling"
}
}
]
}
]
Scheduled Event:
{
"collection": "operations",
"source_type": "drilling",
"environment": "qa",
"interval": 300,
"schedule_start": 1578420000000,
"schedule_end": 1578420300000,
"asset_id": 16280
}
Based on the app type, LTF generates one of these event types. By default, for stream apps, it queries the WITS collection and creates events from it, from the very beginning of the well until drilling is completed. For scheduled apps, it takes the interval between the first and last WITS records and generates 300-second events within that interval.
It's also possible to launch the app for a single event; see the LTF help for details.
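The scheduled-event windowing described above can be sketched as follows; this illustrates the idea and is not LTF's actual implementation:

```javascript
// Split the [firstTs, lastTs] range of WITS timestamps (in seconds) into
// fixed-size scheduled events; 300-second windows by default.
function makeScheduledEvents(firstTs, lastTs, assetId, intervalSec = 300) {
  const events = [];
  for (let start = firstTs; start < lastTs; start += intervalSec) {
    events.push({
      asset_id: assetId,
      interval: intervalSec,
      // scheduled events carry millisecond timestamps
      schedule_start: start * 1000,
      schedule_end: Math.min(start + intervalSec, lastTs) * 1000,
    });
  }
  return events;
}

const events = makeScheduledEvents(1546300800, 1546301300, 1234);
console.log(events.length); // timestamps span 500 s -> two 300 s windows
```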
Config File
To create a .corvarc configuration file, run corva init in your $CWD.
Example:
{
"version": 4,
"lambda": {
"type": "scheduler",
"handler": "index.handler",
"env": [
"LOG_LEVEL=DEBUG"
],
"port": 9002
},
"infra": {
"stopContainers": false,
"restartContainers": false,
"redis": {
"image": "redis:latest",
"port": 6380
},
"mongo": {
"image": "mongo",
"port": 27018
},
"apiLite": {
"image": "corva/corva-api-lite:latest",
"mode": "data",
"port": 3008
}
},
"source": {
"import": true,
"dir": "./test-sources",
"database": "corva",
"collection": "runner#wits",
"limits": {}
},
"event": {
"assetId": 1234,
"companyId": 1,
"sourceType": "drilling",
"config": {
"batchSize": 10,
"interval": 60
},
"appKey": "my-company.my-drilling-app",
"options": {}
},
"export": {
"enabled": true,
"format": "json",
"collections": [
"corva#destination-collection"
]
},
"debugOutput": "file"
}
Running
Here's an example of how to launch LTF:
corva local --lambda.env MY_ENV_VARIABLE="<some_value>" --lambda.image=test-lambda
- --lambda.image=test-lambda - the name/URI of the Docker image of your application
- --lambda.env MY_ENV_VARIABLE="<some_value>" - environment variables that will be passed to the lambda Docker container
You may override values from the config file with CLI options.
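For example, config overrides can be combined with a single-event run using the flags from the help above (the event file path is a placeholder):

```shell
# Run one stream event from a file against a prebuilt image,
# overriding the .corvarc values from the CLI
# (./stream-event.json is a placeholder path)
corva local \
  --lambda.image=test-lambda \
  --lambda.type=stream \
  --event.json=./stream-event.json
```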
Useful variables:
- LOG_LEVEL controls which log messages are printed (info by default), e.g. LOG_LEVEL=debug will allow debug, info, warn, and error messages, while LOG_LEVEL=error will permit error messages only
- LOG_THRESHOLD_MESSAGE_SIZE controls how many symbols are printed per logger invocation (1000 by default), e.g. LOG_THRESHOLD_MESSAGE_SIZE=10000. This will not affect the deployed app.
- LOG_THRESHOLD_MESSAGE_COUNT controls how many log messages are allowed to print per lambda invocation (15 by default), e.g. LOG_THRESHOLD_MESSAGE_COUNT=100. This will not affect the deployed app.
NOTES:
Priority of the environment variables:
1. CLI lambda.env options
2. .corvarc file lambda.env options
3. settings.env section from the manifest.json file

- the CLI lambda.env options will replace the .corvarc-provided options
- CLI/.corvarc options will extend the manifest.json-provided options
- only the settings.env section from manifest.json will be applied on app deployment
Workflow
In general, LTF follows these steps:
1. Set up Docker: download/check for all needed images (redis, mongo, corva-api-lite, your app)
2. Launch Mongo
3. Launch Redis
4. Launch corva-api-lite
5. Prepare env variables for the Lambda application
6. Get WITS data and determine bounds for events
7. Start loop:
   1. Create an event
   2. Create a container for the Lambda App
   3. Pass env variables to the container
   4. Run the Lambda app container
   5. Shut down the container
   6. Back to 7.1
8. (OPTIONAL) Export data to JSON/CSV
9. Shut down all containers and exit
Exploring the outputs
- Lambda run output is piped to the console stdout by default. You can redirect output to a file by setting the debugOutput config option to file.
- Infrastructure containers (mongo, redis, etc.) will not be removed after the test run by default. To remove the containers, override the infra.stopContainers option and set it to true.
- Infrastructure containers will not be removed before the test run by default. To keep the containers running, override the infra.restartContainers option and set it to true.
Exporting run data
It is possible to export the content of the MongoDB collections in JSON or CSV format.
The export results will be available in the output directory in your $CWD. The file name format is ${LAMBDA_NAME}_${ASSET_ID}_${TIMESTAMP}_${COLLECTION}.${FORMAT}, e.g. dev-center-gamma-depth_1234_1615369834679_corva#data.drillstring.csv
To export in JSON, set the export property to:
{
  enabled: true,
  format: 'json',
  collections: ['corva#example1', 'corva#example2'] // list of collections you want to export
}
To export in CSV, set the export property to:
{
  enabled: true,
  format: 'csv',
  collections: ['corva#example1', 'corva#example2'],
  fields: {
    'corva#example1': ['field1', 'field2', 'deep.structure.field'],
    'corva#example2': ['some-other-field', 'field_with_dashes']
  }
}
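Dot-path selectors such as 'deep.structure.field' pick nested values out of a document for CSV columns. A minimal illustration of the idea (not LTF's actual exporter):

```javascript
// Resolve a dot-path selector like 'deep.structure.field' against a document.
function pick(doc, path) {
  return path
    .split('.')
    .reduce((value, key) => (value == null ? value : value[key]), doc);
}

const doc = { field1: 'a', field2: 'b', deep: { structure: { field: 42 } } };
const fields = ['field1', 'deep.structure.field'];
console.log(fields.map((f) => pick(doc, f)).join(',')); // → a,42
```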