
nlpcloud - npm Package Compare versions

Comparing version 1.0.21 to 1.0.22


index.js

@@ -13,2 +13,6 @@ const axios = require('axios')

  if (lang == 'en') {
    lang = ''
  }
  if (gpu && lang != '') {

@@ -15,0 +19,0 @@ this.rootURL = BASE_URL + '/' + API_VERSION + '/gpu/' + lang + '/' + model
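Taken together, these two hunks suggest that 1.0.22 starts folding an optional language code into the GPU endpoint URL. Below is a minimal sketch of how the constructor might assemble `rootURL` under that reading; only the `lang == 'en'` reset, the `gpu && lang != ''` branch, and the GPU URL line are visible in this diff, so the remaining branches, the `BASE_URL`/`API_VERSION` values, and the header shape are assumptions.

```js
// Hypothetical reconstruction of the 1.0.22 constructor URL logic.
// Only the lines shown in the hunks above are confirmed; everything
// else (constants, extra branches, headers) is an assumption.
const BASE_URL = 'https://api.nlpcloud.io' // assumed
const API_VERSION = 'v1'                   // assumed

class NLPCloudClient {
  constructor (model, token, gpu = false, lang = '') {
    // English is treated as the default and dropped from the URL path.
    if (lang == 'en') {
      lang = ''
    }

    if (gpu && lang != '') {
      // GPU plan with an explicit language: /v1/gpu/<lang>/<model>
      this.rootURL = BASE_URL + '/' + API_VERSION + '/gpu/' + lang + '/' + model
    } else if (gpu) {
      // GPU plan, English or language-agnostic model (assumed branch)
      this.rootURL = BASE_URL + '/' + API_VERSION + '/gpu/' + model
    } else if (lang != '') {
      // CPU plan with an explicit language (assumed branch)
      this.rootURL = BASE_URL + '/' + API_VERSION + '/' + lang + '/' + model
    } else {
      // CPU plan, default language (assumed branch)
      this.rootURL = BASE_URL + '/' + API_VERSION + '/' + model
    }

    this.headers = { Authorization: 'Token ' + token } // assumed header shape
  }
}
```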


package.json
{
"name": "nlpcloud",
"version": "1.0.21",
"version": "1.0.22",
"description": "NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, text generation, question answering, machine translation, language detection, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.\n\nThis is the Node.js client for the NLP Cloud API.\n\nMore details here: https://nlpcloud.io\n\nDocumentation: https://docs.nlpcloud.io",

@@ -5,0 +5,0 @@ "main": "index.js",

README.md

@@ -5,3 +5,3 @@ # Node.js Client For NLP Cloud

NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, intent classification, product description and ad generation, chatbot, grammar and spelling correction, keywords and keyphrases extraction, text generation, question answering, machine translation, language detection, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. production, served through a REST API.
NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, intent classification, product description and ad generation, chatbot, grammar and spelling correction, keywords and keyphrases extraction, text generation, question answering, machine translation, language detection, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.

@@ -28,3 +28,3 @@ You can either use the NLP Cloud pre-trained models, fine-tune your own models, or deploy your own models.

Here is a full example that performs Named Entity Recognition (NER) using spaCy's `en_core_web_lg` model, with a fake token:
Here is a full example that summarizes a text using Facebook's Bart Large CNN model, with a fake token:

@@ -34,5 +34,16 @@ ```js

const client = new NLPCloudClient('en_core_web_lg','4eC39HqLyjWDarjtT1zdp7dc')
const client = new NLPCloudClient('bart-large-cnn','4eC39HqLyjWDarjtT1zdp7dc')
client.entities("John Doe is a Go Developer at Google")
client.summarization(`One month after the United States began what has become a
troubled rollout of a national COVID vaccination campaign, the effort is finally
gathering real steam. Close to a million doses -- over 951,000, to be more exact --
made their way into the arms of Americans in the past 24 hours, the U.S. Centers
for Disease Control and Prevention reported Wednesday. That's the largest number
of shots given in one day since the rollout began and a big jump from the
previous day, when just under 340,000 doses were given, CBS News reported.
That number is likely to jump quickly after the federal government on Tuesday
gave states the OK to vaccinate anyone over 65 and said it would release all
the doses of vaccine it has available for distribution. Meanwhile, a number
of states have now opened mass vaccination sites in an effort to get larger
numbers of people inoculated, CBS News reported.`)
.then(function (response) {

@@ -47,3 +58,3 @@ console.log(response.data);

And a full example that uses your own custom model `7894`:
Here is a full example that does the same thing, but on a GPU:

@@ -53,5 +64,16 @@ ```js

const client = new NLPCloudClient('custom_model/7894','4eC39HqLyjWDarjtT1zdp7dc')
const client = new NLPCloudClient('bart-large-cnn','4eC39HqLyjWDarjtT1zdp7dc', true)
client.entities("John Doe is a Go Developer at Google")
client.summarization(`One month after the United States began what has become a
troubled rollout of a national COVID vaccination campaign, the effort is finally
gathering real steam. Close to a million doses -- over 951,000, to be more exact --
made their way into the arms of Americans in the past 24 hours, the U.S. Centers
for Disease Control and Prevention reported Wednesday. That's the largest number
of shots given in one day since the rollout began and a big jump from the
previous day, when just under 340,000 doses were given, CBS News reported.
That number is likely to jump quickly after the federal government on Tuesday
gave states the OK to vaccinate anyone over 65 and said it would release all
the doses of vaccine it has available for distribution. Meanwhile, a number
of states have now opened mass vaccination sites in an effort to get larger
numbers of people inoculated, CBS News reported.`)
.then(function (response) {

@@ -66,25 +88,37 @@ console.log(response.data);

A json object is returned. Here is what it could look like:
Here is a full example that does the same thing, but on a French text:
```js
const NLPCloudClient = require('nlpcloud');
const client = new NLPCloudClient('bart-large-cnn','4eC39HqLyjWDarjtT1zdp7dc', true, 'fr')
client.summarization(`Sur des images aériennes, prises la veille par un vol de surveillance
de la Nouvelle-Zélande, la côte d’une île est bordée d’arbres passés du vert
au gris sous l’effet des retombées volcaniques. On y voit aussi des immeubles
endommagés côtoyer des bâtiments intacts. « D’après le peu d’informations
dont nous disposons, l’échelle de la dévastation pourrait être immense,
spécialement pour les îles les plus isolées », avait déclaré plus tôt
Katie Greenwood, de la Fédération internationale des sociétés de la Croix-Rouge.
Selon l’Organisation mondiale de la santé (OMS), une centaine de maisons ont
été endommagées, dont cinquante ont été détruites sur l’île principale de
Tonga, Tongatapu. La police locale, citée par les autorités néo-zélandaises,
a également fait état de deux morts, dont une Britannique âgée de 50 ans,
Angela Glover, emportée par le tsunami après avoir essayé de sauver les chiens
de son refuge, selon sa famille.`)
  .then(function (response) {
    console.log(response.data);
  })
  .catch(function (err) {
    console.error(err.response.status);
    console.error(err.response.data.detail);
  });
```
A JSON object is returned:
```json
[
  {
    "end": 8,
    "start": 0,
    "text": "John Doe",
    "type": "PERSON"
  },
  {
    "end": 25,
    "start": 13,
    "text": "Go Developer",
    "type": "POSITION"
  },
  {
    "end": 35,
    "start": 30,
    "text": "Google",
    "type": "ORG"
  }
]
{
  "summary_text": "Over 951,000 doses were given in the past 24 hours. That's the largest number of shots given in one day since the rollout began. That number is likely to jump quickly after the federal government gave states the OK to vaccinate anyone over 65. A number of states have now opened mass vaccination sites."
}
```
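As a small aside, here is a hedged sketch of how that new response shape might be consumed; the `summary_text` field comes from the JSON above, while the input string is a placeholder and everything else mirrors the earlier examples.

```js
// Illustration only: reads the `summary_text` field documented above.
// The input text is a placeholder.
client.summarization('One month after the United States began...')
  .then(function (response) {
    console.log(response.data.summary_text);
  })
  .catch(function (err) {
    console.error(err.response.status);
    console.error(err.response.data.detail);
  });
```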

@@ -185,20 +219,20 @@

1. The block of text that starts the generated text, as a string. 1200 tokens maximum.
1. (Optional) `minLength`: The minimum number of tokens that the generated text should contain, as an integer. The size of the generated text should not exceed 256 tokens on a CPU plan and 1024 tokens on a GPU plan. If `lengthNoInput` is false, the size of the generated text is the difference between `minLength` and the length of your input text. If `lengthNoInput` is true, the size of the generated text is simply `minLength`. Defaults to 10.
1. (Optional) `maxLength`: The maximum number of tokens that the generated text should contain, as an integer. The size of the generated text should not exceed 256 tokens on a CPU plan and 1024 tokens on a GPU plan. If `lengthNoInput` is false, the size of the generated text is the difference between `maxLength` and the length of your input text. If `lengthNoInput` is true, the size of the generated text is simply `maxLength`. Defaults to 50.
1. (Optional) `lengthNoInput`: Whether `minLength` and `maxLength` should not include the length of the input text, as a boolean. If false, `minLength` and `maxLength` include the length of the input text. If true, `minLength` and `maxLength` don't include the length of the input text. Defaults to false.
1. (Optional) `endSequence`: A specific token that should be the end of the generated sequence, as a string. For example it could be `.` or `\n` or `###` or anything else below 10 characters.
1. (Optional) `removeEndSequence`: Whether you want to remove the end sequence from the result, as a boolean. Defaults to false.
1. (Optional) `removeInput`: Whether you want to remove the input text from the result, as a boolean. Defaults to false.
1. (Optional) `doSample`: Whether or not to use sampling; use greedy decoding otherwise, as a boolean. Defaults to true.
1. (Optional) `numBeams`: Number of beams for beam search. 1 means no beam search. This is an integer. Defaults to 1.
1. (Optional) `earlyStopping`: Whether to stop the beam search when at least num_beams sentences are finished per batch or not, as a boolean. Defaults to false.
1. (Optional) `noRepeatNgramSize`: If set to int > 0, all ngrams of that size can only occur once. This is an integer. Defaults to 0.
1. (Optional) `numReturnSequences`: The number of independently computed returned sequences for each element in the batch, as an integer. Defaults to 1.
1. (Optional) `topK`: The number of highest probability vocabulary tokens to keep for top-k-filtering, as an integer. Maximum 1000 tokens. Defaults to 0.
1. (Optional) `topP`: If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation. This is a float. Should be between 0 and 1. Defaults to 0.7.
1. The block of text that starts the generated text. 256 tokens maximum for GPT-J on CPU, 1024 tokens maximum for GPT-J and GPT-NeoX 20B on GPU, and 2048 tokens maximum for Fast GPT-J and Finetuned GPT-NeoX 20B on GPU.
1. (Optional) `min_length`: The minimum number of tokens that the generated text should contain. 256 tokens maximum for GPT-J on CPU, 1024 tokens maximum for GPT-J and GPT-NeoX 20B on GPU, and 2048 tokens maximum for Fast GPT-J and Finetuned GPT-NeoX 20B on GPU. If `length_no_input` is false, the size of the generated text is the difference between `min_length` and the length of your input text. If `length_no_input` is true, the size of the generated text is simply `min_length`. Defaults to 10.
1. (Optional) `max_length`: The maximum number of tokens that the generated text should contain. 256 tokens maximum for GPT-J on CPU, 1024 tokens maximum for GPT-J and GPT-NeoX 20B on GPU, and 2048 tokens maximum for Fast GPT-J and Finetuned GPT-NeoX 20B on GPU. If `length_no_input` is false, the size of the generated text is the difference between `max_length` and the length of your input text. If `length_no_input` is true, the size of the generated text is simply `max_length`. Defaults to 50.
1. (Optional) `length_no_input`: Whether `min_length` and `max_length` should not include the length of the input text, as a boolean. If false, `min_length` and `max_length` include the length of the input text. If true, `min_length` and `max_length` don't include the length of the input text. Defaults to false.
1. (Optional) `end_sequence`: A specific token that should be the end of the generated sequence, as a string. For example it could be `.` or `\n` or `###` or anything else below 10 characters.
1. (Optional) `remove_end_sequence`: Whether you want to remove the `end_sequence` string from the result, as a boolean. Defaults to false.
1. (Optional) `remove_input`: Whether you want to remove the input text from the result, as a boolean. Defaults to false.
1. (Optional) `do_sample`: Whether or not to use sampling; use greedy decoding otherwise, as a boolean. Defaults to true.
1. (Optional) `num_beams`: Number of beams for beam search. 1 means no beam search. This is an integer. Defaults to 1.
1. (Optional) `early_stopping`: Whether to stop the beam search when at least num_beams sentences are finished per batch or not, as a boolean. Defaults to false.
1. (Optional) `no_repeat_ngram_size`: If set to int > 0, all ngrams of that size can only occur once. This is an integer. Defaults to 0.
1. (Optional) `num_return_sequences`: The number of independently computed returned sequences for each element in the batch, as an integer. Defaults to 1.
1. (Optional) `top_k`: The number of highest probability vocabulary tokens to keep for top-k-filtering, as an integer. Maximum 1000 tokens. Defaults to 0.
1. (Optional) `top_p`: If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation. This is a float. Should be between 0 and 1. Defaults to 0.7.
1. (Optional) `temperature`: The value used to modulate the next token probabilities, as a float. Should be between 0 and 1. Defaults to 1.
1. (Optional) `repetitionPenalty`: The parameter for repetition penalty, as a float. 1.0 means no penalty. Defaults to 1.0.
1. (Optional) `lengthPenalty`: Exponential penalty to the length, as a float. 1.0 means no penalty. Set to values < 1.0 in order to encourage the model to generate shorter sequences, or to a value > 1.0 in order to encourage the model to produce longer sequences. Defaults to 1.0.
1. (Optional) `badWords`: List of tokens that are not allowed to be generated, as a list of strings. Defaults to null.
1. (Optional) `repetition_penalty`: The parameter for repetition penalty, as a float. 1.0 means no penalty. Defaults to 1.0.
1. (Optional) `length_penalty`: Exponential penalty to the length, as a float. 1.0 means no penalty. Set to values < 1.0 in order to encourage the model to generate shorter sequences, or to a value > 1.0 in order to encourage the model to produce longer sequences. Defaults to 1.0.
1. (Optional) `bad_words`: List of tokens that are not allowed to be generated, as a list of strings. Defaults to null. (See the hedged usage sketch after this list.)
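To make the list above concrete, here is a hedged sketch of a text-generation call that sets a few of these parameters. The diff does not show the actual `generation()` signature, so passing the text followed by a single options object with the snake_case names above is an assumption, as are the `gpt-j` model name and the fake token reused from the earlier examples.

```js
// Hypothetical usage sketch: the options-object call shape is assumed,
// since the generation() signature is not part of this diff.
const NLPCloudClient = require('nlpcloud');

const client = new NLPCloudClient('gpt-j', '4eC39HqLyjWDarjtT1zdp7dc', true)

client.generation('GPT-J is a powerful NLP model that', {
  min_length: 10,        // at least 10 generated tokens
  max_length: 50,        // at most 50 generated tokens
  length_no_input: true, // bounds apply to the generated text only
  end_sequence: '.',     // stop once a period is generated
  remove_input: true,    // strip the prompt from the returned text
  top_p: 0.9,
  temperature: 0.8
})
  .then(function (response) {
    console.log(response.data);
  })
  .catch(function (err) {
    console.error(err.response.status);
    console.error(err.response.data.detail);
  });
```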

@@ -205,0 +239,0 @@ ```js
