# bespoken-batch-tester
This project enables batch testing of utterances for voice experiences, supporting a variety of data sources and reporting formats.
## Environment Variables

We use dotenv when running locally, which takes environment variables from a local `.env` file. To set this up, just make a copy of `example.env` and name it `.env`, then replace the values inside with the correct values for your configuration.

For continuous integration (such as Jenkins, CircleCI, or GitLab), these values should instead come from actual environment variables. The environment variables store sensitive credentials.
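As a sketch, a local `.env` might look like the following. The variable names here are hypothetical placeholders; the real keys to fill in come from `example.env`:

```
# Hypothetical variable names for illustration only - copy example.env
# to get the actual keys this project expects
VIRTUAL_DEVICE_TOKEN=<your-virtual-device-token>
DATADOG_API_KEY=<your-datadog-api-key>
```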
## Configuration

Our `config.json` file stores information particular to how the tests should run, but of a non-sensitive nature. An example file:
```json
{
  "fields": {
    "imageURL": "$.raw.messageBody.directives[1].payload.content.art.sources[0].url"
  },
  "job": "utterance-tester",
  "metrics": "datadog-metrics",
  "sequence": ["open my audio player"]
}
```
Each of the pieces is explained below:
`fields`: Each field represents a column in the CSV file. By default, we take these columns and treat them as expected fields in the response output from the Virtual Device. However, in some cases these fields are rather complicated. In those cases, we can give the field a simple name, like `imageURL`, and then specify a JSON path expression that resolves the value from the response payload. This way we can perform complex verification on our utterances with a nice, clean CSV file.
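To make the `fields` mechanism concrete, here is a minimal sketch of how such an expression resolves against a payload, using the `jsonpath` npm package. The package choice and the payload shape are assumptions for illustration; the batch tester may resolve expressions differently internally.

```javascript
// Sketch: resolving the imageURL JSON path expression from config.json
// against a trimmed-down, hypothetical Virtual Device response payload.
// The jsonpath package is an assumption, not necessarily what the
// project uses internally.
const jp = require('jsonpath');

const response = {
  raw: {
    messageBody: {
      directives: [
        { type: 'SpeechSynthesizer.Speak' },
        {
          payload: {
            content: {
              art: { sources: [{ url: 'https://example.com/art.png' }] }
            }
          }
        }
      ]
    }
  }
};

const path = '$.raw.messageBody.directives[1].payload.content.art.sources[0].url';
// query() returns an array of matches; the first one is our value
console.log(jp.query(response, path)[0]); // https://example.com/art.png
```

The resolved value can then be compared against the `imageURL` column in the CSV file.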
`metrics`: Valid values are `datadog`, `cloudwatch`, or `none`. This dictates where metrics on the results of the tests are sent.
`sequence`: For tests in which there are multiple steps required before we say the "official" utterance being tested, we can specify them here. Typically, this would involve launching a skill before saying the specific utterance we want to test, but more complex sequences are possible.
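As a sketch, a hypothetical multi-step `sequence` (these utterances are illustrative, not from the project) could look like:

```json
{
  "sequence": ["open my audio player", "play my favorites"]
}
```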
## Running Tests

These tests check whether or not the utterances are being understood correctly by Alexa.
To run the CSV-driven tests, enter this command:

```
npm run utterances
```
This will test each utterance defined in the `utterances.csv` file. The CSV file contains the following fields:
| Column | Description |
| --- | --- |
| utterance | The utterance to be said to Alexa |
| expectedResponses | One-to-many expected responses; each one is separated by a comma |
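As a sketch, a couple of rows might look like the following (the second row is hypothetical, and the comma-separated expected responses are quoted so they survive CSV parsing):

```csv
utterance,expectedResponses
get the recipe for giada chicken piccata,quick chicken piccata
open my audio player,"audio player,welcome"
```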
For the initial entries, we are typically just looking for the name of the recipe in the response. When the tests are run, here is what will happen:
Bespoken says: "get the recipe for giada chicken piccata"

Alexa replies: "okay for giada chicken piccata I recommend quick chicken piccata 25 minutes to make what would you like start recipe send it to your phone or your next recipe"
This test will pass because the actual response contains the expected response from our CSV file.
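The pass criterion described here appears to be simple substring containment. A minimal sketch, assuming case-insensitive matching against any of the expected responses (the project's actual comparison logic may differ):

```javascript
// Sketch of the pass criterion: the test passes when the actual reply
// contains any one of the expected responses from the CSV file.
// Case-insensitive substring matching is an assumption here.
function passes(actualResponse, expectedResponses) {
  const actual = actualResponse.toLowerCase();
  return expectedResponses.some((e) => actual.includes(e.trim().toLowerCase()));
}

console.log(passes(
  'okay for giada chicken piccata I recommend quick chicken piccata 25 minutes to make',
  ['quick chicken piccata']
)); // true
```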
## GitLab Configuration

The GitLab configuration is defined by the file `.gitlab-ci.yml`. The file looks like this:
```yaml
image: node:10

cache:
  paths:
    - node_modules/

stages:
  - test

test:
  stage: test
  script:
    - npm install
    - npm run utterances
  artifacts:
    paths:
      - utterance-results.csv
    expire_in: 1 week
```
This build script runs the utterances and saves the resulting CSV file as a build artifact.
## Reporting

We have set up this project to make use of a few different types of reporting to show off what is possible. The reporting comes in these forms:

- A CSV file summarizing the results of the test run
- DataDog metrics, for dashboards and alerting

Each is discussed in more detail below.
The CSV file contains the following output:

| Column | Description |
| --- | --- |
| name | The name of the recipe to ask for |
| actualResponse | The actual response back from Alexa |
| success | Whether or not the test was successful |
| expectedResponses | The possible expected responses back from the utterance |
DataDog captures metrics on how all the tests have performed. The metrics can be easily reported on, and they can also be used to set up notifications when certain conditions are triggered. DataDog makes it easy to create dashboards and to set up alarms.
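The `"metrics": "datadog-metrics"` config value suggests the `datadog-metrics` npm package. As a sketch of what pushing results there might look like (the metric names and tags here are illustrative assumptions, not the project's actual names):

```javascript
// Sketch: reporting batch-test results to DataDog via the datadog-metrics
// package. The library reads the API key from the DATADOG_API_KEY
// environment variable; metric names below are hypothetical.
const metrics = require('datadog-metrics');

metrics.init({ prefix: 'utterance-tester.' });

function reportResult(utterance, success) {
  // Counters like these make it easy to graph pass/fail rates on a
  // dashboard and to alarm when failures cross a threshold
  metrics.increment(success ? 'utterance.success' : 'utterance.failure', 1, [
    `utterance:${utterance}`,
  ]);
}

reportResult('get the recipe for giada chicken piccata', true);
metrics.flush(); // send buffered metrics to DataDog
```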