ibm-watson
Node.js client library to use the Watson APIs.
The SDK will no longer be tested with Node versions 6 and 8. Support will be officially dropped in v5.
This package has been moved under the name ibm-watson. The package is still available at watson-developer-cloud, but it will no longer receive updates. Use ibm-watson to stay up to date.
npm install ibm-watson
The examples folder has basic and advanced examples. The examples within each service assume that you already have service credentials.
Credentials are checked for in the following order:
1. Hard-coded or programmatic credentials passed to the service constructor
2. Environment variables:
   - SERVICE_NAME_USERNAME and SERVICE_NAME_PASSWORD environment properties
   - SERVICE_NAME_IAM_APIKEY and optionally SERVICE_NAME_IAM_URL, or SERVICE_NAME_IAM_ACCESS_TOKEN
   - SERVICE_NAME_URL
3. IBM-Cloud-supplied credentials (via the VCAP_SERVICES JSON-encoded environment property)

If you run your app in IBM Cloud, the SDK gets credentials from the VCAP_SERVICES environment variable.
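As an illustration of the first two options, here is a minimal sketch for a Discovery instance. The environment-variable prefix (DISCOVERY) is an assumption based on the SERVICE_NAME pattern above.

// Option 1: hard-coded / programmatic credentials in the constructor
const DiscoveryV1 = require('ibm-watson/discovery/v1');
const discovery = new DiscoveryV1({
  version: '2019-02-01',
  iam_apikey: '<apikey>',
  url: '<service_url>',
});

// Option 2: no credentials in code -- the SDK falls back to environment
// variables (e.g. DISCOVERY_IAM_APIKEY and DISCOVERY_URL, assuming the
// SERVICE_NAME prefix for Discovery is DISCOVERY), and to VCAP_SERVICES
// when running on IBM Cloud
const discoveryFromEnv = new DiscoveryV1({ version: '2019-02-01' });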
See the examples/ folder for Browserify and Webpack client-side SDK examples (with server-side generation of auth tokens).
Note: not all services currently support CORS, and therefore not all services can be used client-side. Of those that do, most require an auth token to be generated server-side via the Authorization Service.
Watson services are migrating to token-based Identity and Access Management (IAM) authentication.
To specify the type of authentication to use, there is an optional parameter called authentication_type. Possible values are iam, basic, and icp4d.
To find out which authentication type to use, view the service credentials. You find the service credentials the same way for all Watson services: open the Manage page of your service instance, where you should be able to see the credentials for accessing it.
In your code, you can use these values in the service constructor or with a method call after instantiating your service.
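For example, here is a minimal sketch of selecting an authentication type explicitly in the constructor, using the iam_apikey and url parameters shown in the examples later in this README:

const AssistantV1 = require('ibm-watson/assistant/v1');

// explicitly select IAM authentication; 'basic' and 'icp4d' are the other
// possible values for authentication_type
const assistant = new AssistantV1({
  version: '2019-02-28',
  authentication_type: 'iam',
  iam_apikey: '<apikey>',
  url: '<service_url>',
});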
There are two ways to supply the credentials you found above to the SDK for authentication:
With a credentials file, you just need to put the file in the right place and the SDK will do the work of parsing it and authenticating. You can get this file by clicking the Download button for the credentials in the Manage tab of your service instance.
The file downloaded will be called ibm-credentials.env. This is the name the SDK will search for, and it must be preserved unless you want to configure the file path (more on that later). The SDK will look for your ibm-credentials.env file in the following places (in order):
- the location specified by the IBM_CREDENTIALS_FILE environment variable
As long as you set that up correctly, you don't have to worry about setting any authentication options in your code. So, for example, if you created and downloaded the credential file for your Discovery instance, you just need to do the following:
const DiscoveryV1 = require('ibm-watson/discovery/v1');
const discovery = new DiscoveryV1({ version: '2019-02-01' });
And that's it!
If you're using more than one service at a time in your code and get two different ibm-credentials.env files, just put the contents together in one ibm-credentials.env file and the SDK will handle assigning credentials to their appropriate services.
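For example, a merged file might look like the following sketch. The exact variable names come from the downloaded credential files, so the ones shown here (for Discovery and Assistant) are only illustrative:

# illustrative contents of a merged ibm-credentials.env file
DISCOVERY_IAM_APIKEY=<discovery apikey>
DISCOVERY_URL=<discovery service url>
ASSISTANT_IAM_APIKEY=<assistant apikey>
ASSISTANT_URL=<assistant service url>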
If you would like to configure the location/name of your credential file, you can set an environment variable called IBM_CREDENTIALS_FILE. This will take precedence over the locations specified above. Here's how you can do that:
export IBM_CREDENTIALS_FILE="<path>"
where <path> is something like /home/user/Downloads/<file_name>.env. If you just provide a path to a directory, the SDK will look for a file called ibm-credentials.env in that directory.
The SDK also supports setting credentials manually in your code. You will either use IAM credentials or Basic Authentication (username/password) credentials.
Some services use token-based Identity and Access Management (IAM) authentication. IAM authentication uses a service API key to get an access token that is passed with the call. Access tokens are valid for approximately one hour and must be regenerated.
You supply either an IAM service API key or an access token; examples of both options appear further below.

Like IAM, with ICP4D you can pass in credentials to let the SDK manage an access token for you, or directly supply an access token to do it yourself. If you choose to let the SDK manage the token, authentication_type must be set to icp4d.
const AssistantV1 = require('ibm-watson/assistant/v1');
// letting the SDK manage the token
const assistant = new AssistantV1({
url: '<Service ICP URL>',
icp4d_url: '<ICP token exchange base URL>',
username: '<username>',
password: '<password>',
authentication_type: 'icp4d',
disable_ssl_verification: true,
});
const AssistantV1 = require('ibm-watson/assistant/v1');
// assuming control of managing the access token
const assistant = new AssistantV1({
url: '<Service ICP URL>',
icp4d_access_token: '<User-managed access token>',
disable_ssl_verification: true,
});
Be sure to both disable SSL verification when authenticating and set the endpoint explicitly to the URL given in ICP.

For IAM authentication, the examples below show the options described above: letting the SDK manage the token, or managing the access token yourself.
// in the constructor, letting the SDK manage the IAM token
const discovery = new DiscoveryV1({
url: '<service_url>',
version: '<version-date>',
iam_apikey: '<apikey>',
iam_url: '<iam_url>', // optional - the default value is https://cloud.ibm.com/identity/token
});
// in the constructor, assuming control of managing IAM token
const discovery = new DiscoveryV1({
url: '<service_url>',
version: '<version-date>',
iam_access_token: '<access-token>'
});
// after instantiation, assuming control of managing IAM token
const discovery = new DiscoveryV1({
url: '<service_url>',
version: '<version-date>'
});
discovery.setAccessToken('<access-token>')
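
// using basic authentication (username and password)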
var DiscoveryV1 = require('ibm-watson/discovery/v1');
var discovery = new DiscoveryV1({
version: '{version}',
username: '{username}',
password: '{password}'
});
All SDK methods are asynchronous, as they are making network requests to Watson services. To handle receiving the data from these requests, the SDK offers support for both Promises and Callback functions. A Promise will be returned by default unless a Callback function is provided.
const discovery = new watson.DiscoveryV1({
/* iam_apikey, version, url, etc... */
});
// using Promises
discovery.listEnvironments()
.then(body => {
console.log(JSON.stringify(body, null, 2));
})
.catch(err => {
console.log(err);
});
// using Promises provides the ability to use async / await
async function callDiscovery() { // note that callDiscovery also returns a Promise
const body = await discovery.listEnvironments();
console.log(JSON.stringify(body, null, 2));
}
// using a Callback function
discovery.listEnvironments((err, res) => {
if (err) {
console.log(err);
} else {
console.log(JSON.stringify(res, null, 2));
}
});
Custom headers can be passed with any request. Each method has an optional parameter headers which can be used to pass in these custom headers; they can override the headers the SDK would otherwise send based on other parameters.

For example, this is how you can pass custom headers to the Watson Assistant service. In this example, the 'custom' value for 'Accept-Language' overrides the default 'Accept-Language' header, while 'Custom-Header' does not override any default header and is simply sent along with the request.
var assistant = new watson.AssistantV1({
/* username, password, version, url, etc... */
});
assistant.message({
workspace_id: 'something',
input: {'text': 'Hello'},
headers: {
'Custom-Header': 'custom',
'Accept-Language': 'custom'
}
})
.then(result => {
console.log(JSON.stringify(result, null, 2));
})
.catch(err => {
console.log('error:', err);
});
To retrieve the HTTP response, all methods can be called with a callback function that takes three parameters, the third being the response. For example, users may retrieve the response headers with this usage pattern.

If using Promises, the parameter return_response must be added and set to true. Then the result returned will be equivalent to the third argument in the callback function: the entire response.
Here is an example of how to access the response headers for Watson Assistant:
var assistant = new watson.AssistantV1({
/* username, password, version, url, etc... */
});
assistant.message(params, function(err, result, response) {
if (err)
console.log('error:', err);
else
console.log(response.headers);
});
// using Promises
params.return_response = true;
assistant.message(params)
.then(response => {
console.log(response.headers);
})
.catch(err => {
console.log('error:', err);
});
By default, all requests are logged. This can be disabled by setting the X-Watson-Learning-Opt-Out header when creating the service instance:
var myInstance = new watson.WhateverServiceV1({
/* username, password, version, url, etc... */
headers: {
"X-Watson-Learning-Opt-Out": true
}
});
To use the SDK (which makes HTTPS requests) behind an HTTP proxy, a special tunneling agent must be used. Use the package tunnel for this. Configure this agent with your proxy information, and pass it in as the HTTPS agent in the service constructor. Additionally, you must set proxy to false in the service constructor. See this example configuration:
const tunnel = require('tunnel');
const AssistantV1 = require('ibm-watson/assistant/v1');
const assistant = new AssistantV1({
iam_apikey: 'fakekey1234',
version: '2019-02-28',
httpsAgent: tunnel.httpsOverHttp({
proxy: {
host: 'some.host.org',
port: 1234,
},
}),
proxy: false,
});
The HTTP client can be configured to disable SSL verification. Note that this has serious security implications - only do this if you really mean to! ⚠️
To do this, set disable_ssl_verification to true in the service constructor, like below:
const discovery = new DiscoveryV1({
url: '<service_url>',
version: '<version-date>',
iam_apikey: '<apikey>',
disable_ssl_verification: true, // this will disable SSL verification for any request made with this object
});
You can find links to the documentation at https://cloud.ibm.com/developer/watson/documentation. Find the service that you're interested in, click API reference, and then select the Node tab.
There are also auto-generated JSDocs available at http://watson-developer-cloud.github.io/node-sdk/master/
If you are having difficulties using the APIs or have a question about the Watson services, please ask a question at dW Answers or Stack Overflow.
The Authorization service can generate auth tokens for situations where providing the service username/password is undesirable.
Tokens are valid for 1 hour and may be sent using the X-Watson-Authorization-Token header or the watson-token query param.

Note that the token is supplied URL-encoded, and will not be accepted if it is double-encoded in a querystring.
NOTE: Authenticating with the X-Watson-Authorization-Token header or the watson-token query param is now deprecated. The token continues to work with Cloud Foundry services, but is not supported for services that use Identity and Access Management (IAM) authentication. For details, see Authenticating with IAM tokens or the README in the IBM Watson SDK you use. The Authorization SDK now supports returning IAM Access Tokens when instantiated with an IAM API key.
var watson = require('ibm-watson');
// to get an IAM Access Token
var authorization = new watson.AuthorizationV1({
iam_apikey: '<Service API key>',
iam_url: '<IAM endpoint URL - OPTIONAL>',
});
authorization.getToken(function (err, token) {
if (!token) {
console.log('error:', err);
} else {
// Use your token here
}
});
// to get a Watson Token - NOW DEPRECATED
var authorization = new watson.AuthorizationV1({
username: '<Text to Speech username>',
password: '<Text to Speech password>',
url: 'https://stream.watsonplatform.net/authorization/api', // Speech tokens
});
authorization.getToken({
url: 'https://stream.watsonplatform.net/text-to-speech/api'
},
function (err, token) {
if (!token) {
console.log('error:', err);
} else {
// Use your token here
}
});
Use the Assistant service to determine the intent of a message.
Note: You must first create a workspace via IBM Cloud. See the documentation for details.
var AssistantV2 = require('ibm-watson/assistant/v2');
var assistant = new AssistantV2({
iam_apikey: '<apikey>',
url: 'https://gateway.watsonplatform.net/assistant/api/',
version: '2018-09-19'
});
assistant.message(
{
input: { text: "What's the weather?" },
assistant_id: '<assistant id>',
session_id: '<session id>',
})
.then(result => {
console.log(JSON.stringify(result, null, 2));
})
.catch(err => {
console.log(err);
});
Use the Assistant service to determine the intent of a message.
Note: You must first create a workspace via IBM Cloud. See the documentation for details.
var AssistantV1 = require('ibm-watson/assistant/v1');
var assistant = new AssistantV1({
iam_apikey: '<apikey>',
url: 'https://gateway.watsonplatform.net/assistant/api/',
version: '2018-02-16'
});
assistant.message(
{
input: { text: "What's the weather?" },
workspace_id: '<workspace id>'
})
.then(result => {
console.log(JSON.stringify(result, null, 2));
})
.catch(err => {
console.log(err);
});
Use the Compare Comply service to compare and classify documents.
const fs = require('fs');
const CompareComplyV1 = require('ibm-watson/compare-comply/v1');
const compareComply = new CompareComplyV1({
iam_apikey: '<apikey>',
url: 'https://gateway.watsonplatform.net/compare-comply/api',
version: '2018-12-06'
});
compareComply.compareDocuments(
{
file_1: fs.createReadStream('<path-to-file-1>'),
file_1_filename: '<filename-1>',
file_1_label: 'file-1',
file_2: fs.createReadStream('<path-to-file-2>'),
file_2_filename: '<filename-2>',
file_2_label: 'file-2',
})
.then(result => {
console.log(JSON.stringify(result, null, 2));
})
.catch(err => {
console.log(err);
});
Use the Discovery Service to search and analyze structured and unstructured data.
var DiscoveryV1 = require('ibm-watson/discovery/v1');
var discovery = new DiscoveryV1({
iam_apikey: '<apikey>',
url: 'https://gateway.watsonplatform.net/discovery/api/',
version: '2017-09-01'
});
discovery.query(
{
environment_id: '<environment_id>',
collection_id: '<collection_id>',
query: 'my_query'
})
.then(result => {
console.log(JSON.stringify(result, null, 2));
})
.catch(err => {
console.log(err);
});
Translate text from one language to another or identify a language using the Language Translator service.
const LanguageTranslatorV3 = require('ibm-watson/language-translator/v3');
const languageTranslator = new LanguageTranslatorV3({
iam_apikey: '<apikey>',
url: 'https://gateway.watsonplatform.net/language-translator/api/',
version: 'YYYY-MM-DD',
});
languageTranslator.translate(
{
text: 'A sentence must have a verb',
source: 'en',
target: 'es'
})
.then(translation => {
console.log(JSON.stringify(translation, null, 2));
})
.catch(err => {
console.log('error:', err);
});
languageTranslator.identify(
{
text:
'The language translator service takes text input and identifies the language used.'
})
.then(language => {
console.log(JSON.stringify(language, null, 2));
})
.catch(err => {
console.log('error:', err);
});
Use the Natural Language Classifier service to create a classifier instance by providing a set of representative strings and one or more correct classes for each as training data. Then use the trained classifier to classify new questions for best-matching answers or to retrieve next actions for your application.
var NaturalLanguageClassifierV1 = require('ibm-watson/natural-language-classifier/v1');
var classifier = new NaturalLanguageClassifierV1({
iam_apikey: '<apikey>',
url: 'https://gateway.watsonplatform.net/natural-language-classifier/api/'
});
classifier.classify(
{
text: 'Is it sunny?',
classifier_id: '<classifier-id>'
})
.then(result => {
console.log(JSON.stringify(result, null, 2));
})
.catch(err => {
console.log('error:', err);
});
See this example to learn how to create a classifier.
Natural Language Understanding is a collection of natural language processing APIs that help you understand sentiment, keywords, entities, high-level concepts, and more.
var fs = require('fs');
var NaturalLanguageUnderstandingV1 = require('ibm-watson/natural-language-understanding/v1.js');
var nlu = new NaturalLanguageUnderstandingV1({
iam_apikey: '<apikey>',
version: '2018-04-05',
url: 'https://gateway.watsonplatform.net/natural-language-understanding/api/'
});
// read in the document to analyze (the path shown here is illustrative)
var file_data = fs.readFileSync('./resources/example.html');

nlu.analyze(
{
html: file_data, // Buffer or String
features: {
concepts: {},
keywords: {}
}
})
.then(result => {
console.log(JSON.stringify(result, null, 2));
})
.catch(err => {
console.log('error:', err);
});
Analyze text in English and get a personality profile by using the Personality Insights service.
var PersonalityInsightsV3 = require('ibm-watson/personality-insights/v3');
var personalityInsights = new PersonalityInsightsV3({
iam_apikey: '<apikey>',
version: '2016-10-19',
url: 'https://gateway.watsonplatform.net/personality-insights/api/'
});
personalityInsights.profile(
{
content: 'Enter more than 100 unique words here...',
content_type: 'text/plain',
consumption_preferences: true
})
.then(result => {
console.log(JSON.stringify(result, null, 2));
})
.catch(err => {
console.log('error:', err);
});
Use the Speech to Text service to recognize the text from a .wav
file.
var SpeechToTextV1 = require('ibm-watson/speech-to-text/v1');
var fs = require('fs');
var speechToText = new SpeechToTextV1({
iam_apikey: '<apikey>',
url: 'https://stream.watsonplatform.net/speech-to-text/api/'
});
var params = {
// From file
audio: fs.createReadStream('./resources/speech.wav'),
content_type: 'audio/l16; rate=44100'
};
speechToText.recognize(params)
.then(result => {
console.log(JSON.stringify(result, null, 2));
})
.catch(err => {
console.log(err);
});
// or streaming
fs.createReadStream('./resources/speech.wav')
.pipe(speechToText.recognizeUsingWebSocket({ content_type: 'audio/l16; rate=44100' }))
.pipe(fs.createWriteStream('./transcription.txt'));
Use the Text to Speech service to synthesize text into an audio file.
var TextToSpeechV1 = require('ibm-watson/text-to-speech/v1');
var fs = require('fs');
var textToSpeech = new TextToSpeechV1({
iam_apikey: '<apikey>',
url: 'https://stream.watsonplatform.net/text-to-speech/api/'
});
var params = {
text: 'Hello from IBM Watson',
voice: 'en-US_AllisonVoice', // Optional voice
accept: 'audio/wav'
};
// Synthesize speech, correct the wav header, then save to disk
// (wav header requires a file length, but this is unknown until after the header is already generated and sent)
textToSpeech
.synthesize(params)
.then(audio => {
textToSpeech.repairWavHeader(audio);
fs.writeFileSync('audio.wav', audio);
console.log('audio.wav written with a corrected wav header');
})
.catch(err => {
console.log(err);
});
// or, using WebSockets
const synthStream = textToSpeech.synthesizeUsingWebSocket(params);
synthStream.pipe(fs.createWriteStream('./audio.ogg'));
// see more information in examples/text_to_speech_websocket.js
Use the Tone Analyzer service to analyze the emotion, writing and social tones of a text.
var ToneAnalyzerV3 = require('ibm-watson/tone-analyzer/v3');
var toneAnalyzer = new ToneAnalyzerV3({
iam_apikey: '<apikey>',
version: '2016-05-19',
url: 'https://gateway.watsonplatform.net/tone-analyzer/api/'
});
toneAnalyzer.tone(
{
tone_input: 'Greetings from Watson Developer Cloud!',
content_type: 'text/plain'
})
.then(result => {
console.log(JSON.stringify(result, null, 2));
})
.catch(err => {
console.log(err);
});
Use the Visual Recognition service to recognize the following picture.
var VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');
var fs = require('fs');
var visualRecognition = new VisualRecognitionV3({
url: '<service_url>',
version: '2018-03-19',
iam_apikey: '<apikey>',
});
var params = {
images_file: fs.createReadStream('./resources/car.png')
};
visualRecognition.classify(params)
.then(result => {
console.log(JSON.stringify(result, null, 2));
})
.catch(err => {
console.log(err);
});
Sample code for integrating Tone Analyzer and Assistant is provided in the examples directory.
By default, the library tries to authenticate and will ask for iam_apikey, iam_access_token, or username and password to send an Authorization header. You can avoid this by using use_unauthenticated.
var watson = require('ibm-watson');
var assistant = new watson.AssistantV1({
use_unauthenticated: true
});
This library relies on the axios npm module to call the Watson Services. To debug your app, add 'axios' to the NODE_DEBUG environment variable:
$ NODE_DEBUG='axios' node app.js
where app.js is your Node.js file.
Running all the tests:
$ npm test
Running a specific test:
$ jest '<path to test>'
Find more open source projects on the IBM Github Page.
See CONTRIBUTING.
We love to highlight cool open-source projects that use this SDK! If you'd like to get your project added to the list, feel free to make an issue linking us to it.
This library is licensed under Apache 2.0. Full license text is available in COPYING.