bespoken-batch-tester
This project tests utterances in batch - supporting a variety of data sources and reporting formats
Installation | Execution | GitLab | DataDog
This project enables batch testing of utterances for voice experiences. It leverages Bespoken's Virtual Devices to run large sets of utterances through Alexa, Google Assistant, and other voice platforms.
This package requires Node.js 10 or greater.
To install the Bespoken Batch Tester, just run:
npm install bespoken-batch-tester --save
We recommend creating a new project to store artifacts related to the tester, such as the testing configuration file, CI configuration, and custom source code.
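For example, a minimal package.json for such a project might look like the sketch below. The "utterances" script name is only an assumption, chosen to match the npm run utterances command used in the GitLab example later in this document, and the dependency version is left open:

{
  "name": "my-utterance-tests",
  "private": true,
  "scripts": {
    "utterances": "bbt process batch-test.json"
  },
  "dependencies": {
    "bespoken-batch-tester": "*"
  }
}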
We use dotenv when running locally, which takes environment variables from a local .env file.
To set this up, just make a copy of example.env and name it .env. Replace the values inside there with the correct values for your configuration.
For running with continuous integration (such as Jenkins, CircleCI, or GitLab), these values should instead come from actual environment variables.
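As a rough illustration, a .env file might contain entries like the ones below. The variable names here are placeholders only; the authoritative names are the ones found in example.env:

# Placeholders only - copy example.env for the real variable names
DATADOG_API_KEY=<your-datadog-api-key>
AWS_ACCESS_KEY_ID=<your-aws-access-key>
AWS_SECRET_ACCESS_KEY=<your-aws-secret-key>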
If you want to use multiple tokens, potentially for different purposes, leverage tags:
{
  "virtualDevices": {
    "myToken": ["USAccount"],
    "myOtherToken": ["UKAccount"]
  }
}
The tags can then be assigned to a record with record.addDeviceTag:
record.addDeviceTag('USAccount')
Only tokens that have that tag (or tags) will be used to process it.
For more information on the best practices for virtual device management, read our guide here.
Here is a bare minimum configuration file:
{
  "job": "utterance-tester",
  "sequence": ["open my audio player"],
  "source": "csv-source",
  "sourceFile": "path/to/my/file.csv",
  "virtualDevices": {
    "myVirtualDevice": ["my-optional-tags"]
  }
}
To get started, cut and paste those settings into a new file, such as batch-test.json.
More information on configuring the batch test is below.
Once the configuration file is created, just enter:
bbt process batch-test.json
And it will be off and running. In practice, we recommend this not be run locally but in a CI environment.
The tester will create a results.csv file, as well as publish metrics to the configured metrics provider.
The environment variables store sensitive credentials.
Our configuration file stores information particular to how the tests should run, but of a non-sensitive nature.
An example file:
{
  "fields": {
    "imageURL": "$.raw.messageBody.directives[1].payload.content.art.sources[0].url"
  },
  "interceptor": "./src/my-interceptor",
  "job": "utterance-tester",
  "metrics": "datadog-metrics",
  "sequence": ["open my audio player"],
  "source": "csv-source",
  "sourceFile": "path/to/my/file.csv",
  "limit": 5
}
Each of the properties is explained below:
fields
Each field represents a column in the CSV file.
By default, we take these columns and treat them as expected fields in the response output from the Virtual Device.
However, in some cases, these fields are rather complicated. In that case, we can have a field with a simple name, like imageURL, but then we specify a JSON path expression which is used to resolve that expression on the response payload.
This way we can perform complex verification on our utterances with a nice, clean CSV file.
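To illustrate, assuming the CSV has one column per field, a row might pair an utterance with an expected imageURL like this (the column name and values are made up for this example):

utterance,imageURL
open my audio player,https://example.com/art/cover.png

With the fields setting shown above, the imageURL value from the CSV is compared against whatever the JSON path expression resolves to in the Virtual Device response.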
interceptor
The interceptor allows for the core behavior of the batch runner to be modified.
There are two main methods currently:
Using interceptRecord, changes can be made to the utterance or the metadata of a record before it is used in a test.
Using interceptResult, changes can be made to the result of processing. This can involve, for example, setting the success flag based on custom validation logic.
You can read all about the Interceptor class here: https://bespoken.gitlab.io/batch-tester/Interceptor.html
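As a rough sketch only - the import path, method signatures, and the result/response property names below are assumptions, so check the Interceptor documentation linked above for the real API - a custom interceptor might look like this:

// src/my-interceptor.js - a sketch; verify names against the Interceptor docs
const { Interceptor } = require('bespoken-batch-tester')

class MyInterceptor extends Interceptor {
  // Runs before a record is tested - adjust the utterance, metadata, or device tags here
  async interceptRecord(record) {
    // Hypothetical example: route this record to devices tagged USAccount
    record.addDeviceTag('USAccount')
  }

  // Runs after a record is processed - apply custom validation logic here
  async interceptResult(record, result) {
    // Hypothetical example: fail the test if the transcript does not mention "audio player"
    const transcript = result.lastResponse && result.lastResponse.transcript
    if (transcript && !transcript.includes('audio player')) {
      result.success = false
    }
  }
}

module.exports = MyInterceptor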
limit
The number of records to test during test execution. Very useful when you want to try just a small subset of utterances.
metrics
We have two built-in classes for metrics: datadog-metrics and cloudwatch-metrics.
This dictates where metrics on the results of the tests are sent.
Additionally, new metric providers can be used by implementing this base class:
https://bespoken.gitlab.io/batch-tester/Metrics.html
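For example, to send metrics to CloudWatch instead of DataDog, the metrics entry in the configuration file would simply change to:

{
  "metrics": "cloudwatch-metrics"
}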
sequence
For tests in which there are multiple steps required before we do the "official" utterance that is being tested, we can specify them here.
Typically, this would involve launching a skill before saying the specific utterance we want to test, but more complex sequences are possible.
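For example, a two-step sequence might first open the skill and then navigate to a section before each test utterance is spoken - the second step below is illustrative only:

{
  "sequence": ["open my audio player", "go to my playlists"]
}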
source
The source for records. Defaults to csv-source. An additional built-in option is s3-source.
For the csv-source, the source file defaults to input/records.csv. This can be overridden by setting the sourceFile property:
{
  "sourceFile": "path/to/my/file.csv"
}
For the s3-source, a sourceBucket must be set. Additionally, AWS credentials that can access this bucket must be set in the environment.
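An s3-source configuration might look like the following - the bucket name is a placeholder:

{
  "source": "s3-source",
  "sourceBucket": "my-utterance-records"
}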
To resume a job that did not complete, due to errors or timeout, simply set the RUN_KEY environment variable.
The run key can be found in the logs for any run - it will appear like this:
BATCH SAVE completed key: 7f6113df3e2af093f095d2d3b2505770d9af1c057b93d0dff378d83c0434ec61
The environment variable can be set locally with:
export RUN_KEY=<RUN_KEY>
It can also be set in GitLab on the Run Pipeline screen.
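Putting it together, resuming a run locally would look something like this - re-invoking bbt process with the same configuration file is an assumption based on the behavior described above:

export RUN_KEY=7f6113df3e2af093f095d2d3b2505770d9af1c057b93d0dff378d83c0434ec61
bbt process batch-test.json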
CSV reports can be reprinted at any time by running:
bbt reprint <RUN_KEY>
The run key can be found in the logs for any run - it will appear like this:
BATCH SAVE completed key: 7f6113df3e2af093f095d2d3b2505770d9af1c057b93d0dff378d83c0434ec61
When running under CI, credentials come from environment variables rather than the .env file. The GitLab configuration is defined by the file .gitlab-ci.yml. The file looks like this:
image: node:10

cache:
  paths:
    - node_modules/

stages:
  - test

test:
  stage: test
  script:
    - npm install
    - npm run utterances
  artifacts:
    paths:
      - utterance-results.csv
    expire_in: 1 week
This build script runs the utterances and saves off the resulting CSV file as an artifact.
We have set up this project to make use of a few different types of reporting to show off what is possible.
The reporting comes in these forms: a CSV file and DataDog metrics.
Each is discussed in more detail below.
The CSV File contains the following output:
Column | Description |
---|---|
name | The name of the receipt to ask for |
transcript | The actual response back from Alexa |
success | Whether or not the test was successful |
expectedResponses | The possible expected response back from the utterance |
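For example, a row in the results file might look like this (all values are illustrative):

name,transcript,success,expectedResponses
my audio player,playing your audio player,true,playing your audio player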
DataDog captures metrics related to how all the tests have performed. Each time we run the tests, and when datadog has been set as the metrics mechanism to use in the config.json file, we push the result of each test to DataDog.
In general, we use the following metrics:
utterance.success
utterance.failure
The metrics can be easily reported on through a DataDog Dashboard. They can also be used to set up notifications when certain conditions are triggered.
Read more about configuring DataDog in our walkthrough.