Comparing version 1.0.23 to 1.0.24
@@ -17,2 +17,9 @@ export = Client;
}>;
articleGeneration(title: string): Promise<{
    status: number;
    statusText: string;
    data: {
        generated_article: string;
    };
}>;
chatbot(input: string, history: { input: string, response: string }[]): Promise<{
@@ -34,2 +41,9 @@ status: number;
}>;
codeGeneration(instruction: string): Promise<{
    status: number;
    statusText: string;
    data: {
        generated_code: string;
    };
}>;
dependencies(text: string): Promise<{
@@ -36,0 +50,0 @@ status: number;
index.js
@@ -40,2 +40,10 @@ const axios = require('axios')
articleGeneration(title) {
    const payload = {
        'title': title
    };
    return axios.post(this.rootURL + '/' + 'article-generation', payload, { headers: this.headers })
}
chatbot(input, history = null) {
@@ -60,2 +68,10 @@ const payload = {
codeGeneration(instruction) {
    const payload = {
        'instruction': instruction
    };
    return axios.post(this.rootURL + '/' + 'code-generation', payload, { headers: this.headers })
}
dependencies(text) {
@@ -62,0 +78,0 @@ const payload = {
{
"name": "nlpcloud",
"version": "1.0.23",
"description": "NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, text generation, question answering, machine translation, language detection, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.\n\nThis is the Node.js client for the NLP Cloud API.\n\nMore details here: https://nlpcloud.io\n\nDocumentation: https://docs.nlpcloud.io",
"version": "1.0.24",
"description": "NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, text generation, blog post generation, code generation, question answering, machine translation, language detection, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.\n\nThis is the Node.js client for the NLP Cloud API.\n\nMore details here: https://nlpcloud.io\n\nDocumentation: https://docs.nlpcloud.io",
"main": "index.js",
@@ -6,0 +6,0 @@ "scripts": {
@@ -5,3 +5,3 @@ # Node.js Client For NLP Cloud
NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, intent classification, product description and ad generation, chatbot, grammar and spelling correction, keywords and keyphrases extraction, text generation, question answering, machine translation, language detection, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.
NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, dialogue summarization, paraphrasing, intent classification, product description and ad generation, chatbot, grammar and spelling correction, keywords and keyphrases extraction, text generation, blog post generation, question answering, machine translation, language detection, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.
@@ -161,4 +161,12 @@ You can either use the NLP Cloud pre-trained models, fine-tune your own models, or deploy your own models.
### Chatbot/Conversational AI Endpoint
### Blog Post Generation Endpoint
Call the `articleGeneration()` method and pass the title of the article you want to generate.
```js
client.articleGeneration("<Your title>")
```
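Per the TypeScript declaration added in this release, the returned promise resolves with an object exposing `status`, `statusText`, and `data.generated_article`. A minimal sketch of consuming that response (the mock object below is illustrative, not real API output):

```javascript
// Sketch: consuming the articleGeneration() response. The shape follows
// the index.d.ts declaration added in 1.0.24; the mock response below is
// a stand-in for a real API call.
function extractArticle(response) {
  // Guard on the HTTP status before reading the payload
  if (response.status !== 200) {
    throw new Error('Article generation failed: ' + response.statusText);
  }
  return response.data.generated_article;
}

// Mock response shaped like the declared return type:
const mockResponse = {
  status: 200,
  statusText: 'OK',
  data: { generated_article: 'An article about your title...' },
};

console.log(extractArticle(mockResponse));
// → An article about your title...
```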
### Chatbot Endpoint
Call the `chatbot()` method and pass the following arguments:
@@ -179,3 +187,3 @@
1. The candidate labels for your text, as an array of strings
1. (Optional) `multi_class` Whether the classification should be multi-class or not, as a boolean
1. (Optional) `multiClass` Whether the classification should be multi-class or not, as a boolean
@@ -186,2 +194,10 @@ ```js
### Code Generation Endpoint
Call the `codeGeneration()` method and pass the instruction for the code you want to generate.
```js
client.codeGeneration("<Your instruction>")
```
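Like `articleGeneration()`, this method returns a promise whose resolved value (per the declaration above) carries `data.generated_code`. A sketch of promise-based handling, using a pre-resolved promise in place of a real API call:

```javascript
// Sketch: handling the codeGeneration() promise. The resolved shape
// follows the index.d.ts declaration added in 1.0.24; the pre-resolved
// promise below stands in for an actual network request.
function runGeneration(promise) {
  return promise
    .then((response) => response.data.generated_code)
    .catch((err) => {
      // Surface the failure without rethrowing
      console.error('Code generation failed:', err.message);
      return null;
    });
}

// Demo with a promise shaped like the declared return type:
const fakeCall = Promise.resolve({
  status: 200,
  statusText: 'OK',
  data: { generated_code: 'console.log("hello")' },
});

runGeneration(fakeCall).then((code) => console.log(code));
```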
### Dependencies Endpoint
@@ -218,19 +234,19 @@
1. The block of text that starts the generated text. 256 tokens maximum for GPT-J on CPU, 1024 tokens maximum for GPT-J and GPT-NeoX 20B on GPU, and 2048 tokens maximum for Fast GPT-J and Finetuned GPT-NeoX 20B on GPU.
1. (Optional) `min_length`: The minimum number of tokens that the generated text should contain. 256 tokens maximum for GPT-J on CPU, 1024 tokens maximum for GPT-J and GPT-NeoX 20B on GPU, and 2048 tokens maximum for Fast GPT-J and Finetuned GPT-NeoX 20B on GPU. If `length_no_input` is false, the size of the generated text is the difference between `min_length` and the length of your input text. If `length_no_input` is true, the size of the generated text simply is `min_length`. Defaults to 10.
1. (Optional) `max_length`: The maximum number of tokens that the generated text should contain. 256 tokens maximum for GPT-J on CPU, 1024 tokens maximum for GPT-J and GPT-NeoX 20B on GPU, and 2048 tokens maximum for Fast GPT-J and Finetuned GPT-NeoX 20B on GPU. If `length_no_input` is false, the size of the generated text is the difference between `max_length` and the length of your input text. If `length_no_input` is true, the size of the generated text simply is `max_length`. Defaults to 50.
1. (Optional) `length_no_input`: Whether `min_length` and `max_length` should not include the length of the input text, as a boolean. If false, `min_length` and `max_length` include the length of the input text. If true, `min_length` and `max_length` don't include the length of the input text. Defaults to false.
1. (Optional) `end_sequence`: A specific token that should be the end of the generated sequence, as a string. For example it could be `.` or `\n` or `###` or anything else below 10 characters.
1. (Optional) `remove_end_sequence`: Whether you want to remove the `end_sequence` string from the result. Defaults to false.
1. (Optional) `remove_input`: Whether you want to remove the input text from the result, as a boolean. Defaults to false.
1. (Optional) `do_sample`: Whether or not to use sampling; uses greedy decoding otherwise, as a boolean. Defaults to true.
1. (Optional) `num_beams`: Number of beams for beam search. 1 means no beam search. This is an integer. Defaults to 1.
1. (Optional) `early_stopping`: Whether to stop the beam search when at least `num_beams` sentences are finished per batch or not, as a boolean. Defaults to false.
1. (Optional) `no_repeat_ngram_size`: If set to int > 0, all ngrams of that size can only occur once. This is an integer. Defaults to 0.
1. (Optional) `num_return_sequences`: The number of independently computed returned sequences for each element in the batch, as an integer. Defaults to 1.
1. (Optional) `top_k`: The number of highest probability vocabulary tokens to keep for top-k filtering, as an integer. Maximum 1000 tokens. Defaults to 0.
1. (Optional) `top_p`: If set to float < 1, only the most probable tokens with probabilities that add up to `top_p` or higher are kept for generation. This is a float. Should be between 0 and 1. Defaults to 0.7.
1. (Optional) `minLength`: The minimum number of tokens that the generated text should contain. 256 tokens maximum for GPT-J on CPU, 1024 tokens maximum for GPT-J and GPT-NeoX 20B on GPU, and 2048 tokens maximum for Fast GPT-J and Finetuned GPT-NeoX 20B on GPU. If `lengthNoInput` is false, the size of the generated text is the difference between `minLength` and the length of your input text. If `lengthNoInput` is true, the size of the generated text simply is `minLength`. Defaults to 10.
1. (Optional) `maxLength`: The maximum number of tokens that the generated text should contain. 256 tokens maximum for GPT-J on CPU, 1024 tokens maximum for GPT-J and GPT-NeoX 20B on GPU, and 2048 tokens maximum for Fast GPT-J and Finetuned GPT-NeoX 20B on GPU. If `lengthNoInput` is false, the size of the generated text is the difference between `maxLength` and the length of your input text. If `lengthNoInput` is true, the size of the generated text simply is `maxLength`. Defaults to 50.
1. (Optional) `lengthNoInput`: Whether `minLength` and `maxLength` should not include the length of the input text, as a boolean. If false, `minLength` and `maxLength` include the length of the input text. If true, `minLength` and `maxLength` don't include the length of the input text. Defaults to false.
1. (Optional) `endSequence`: A specific token that should be the end of the generated sequence, as a string. For example it could be `.` or `\n` or `###` or anything else below 10 characters.
1. (Optional) `removeEndSequence`: Whether you want to remove the `endSequence` string from the result. Defaults to false.
1. (Optional) `removeInput`: Whether you want to remove the input text from the result, as a boolean. Defaults to false.
1. (Optional) `doSample`: Whether or not to use sampling; uses greedy decoding otherwise, as a boolean. Defaults to true.
1. (Optional) `numBeams`: Number of beams for beam search. 1 means no beam search. This is an integer. Defaults to 1.
1. (Optional) `earlyStopping`: Whether to stop the beam search when at least `numBeams` sentences are finished per batch or not, as a boolean. Defaults to false.
1. (Optional) `noRepeatNgramSize`: If set to int > 0, all ngrams of that size can only occur once. This is an integer. Defaults to 0.
1. (Optional) `numReturnSequences`: The number of independently computed returned sequences for each element in the batch, as an integer. Defaults to 1.
1. (Optional) `topK`: The number of highest probability vocabulary tokens to keep for top-k filtering, as an integer. Maximum 1000 tokens. Defaults to 0.
1. (Optional) `topP`: If set to float < 1, only the most probable tokens with probabilities that add up to `topP` or higher are kept for generation. This is a float. Should be between 0 and 1. Defaults to 0.7.
1. (Optional) `temperature`: The value used to module the next token probabilities, as a float. Should be between 0 and 1. Defaults to 1.
1. (Optional) `repetition_penalty`: The parameter for repetition penalty, as a float. 1.0 means no penalty. Defaults to 1.0.
1. (Optional) `length_penalty`: Exponential penalty to the length, as a float. 1.0 means no penalty. Set to values < 1.0 in order to encourage the model to generate shorter sequences, or to a value > 1.0 in order to encourage the model to produce longer sequences. Defaults to 1.0.
1. (Optional) `bad_words`: List of tokens that are not allowed to be generated, as a list of strings. Defaults to null.
1. (Optional) `repetitionPenalty`: The parameter for repetition penalty, as a float. 1.0 means no penalty. Defaults to 1.0.
1. (Optional) `lengthPenalty`: Exponential penalty to the length, as a float. 1.0 means no penalty. Set to values < 1.0 in order to encourage the model to generate shorter sequences, or to a value > 1.0 in order to encourage the model to produce longer sequences. Defaults to 1.0.
1. (Optional) `badWords`: List of tokens that are not allowed to be generated, as a list of strings. Defaults to null.
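The diff above renames every snake_case generation parameter to camelCase. A sketch of an options object using the new names (passing them as a single object is an assumption for illustration; this diff does not show the `generation()` signature itself):

```javascript
// Sketch of generation options using the camelCase names introduced in
// 1.0.24. The grouped-options form is assumed for illustration; the old
// snake_case name is noted next to each renamed parameter.
const generationOptions = {
  minLength: 10,          // was min_length
  maxLength: 50,          // was max_length
  lengthNoInput: false,   // was length_no_input
  endSequence: '###',     // was end_sequence
  removeEndSequence: false, // was remove_end_sequence
  removeInput: false,     // was remove_input
  doSample: true,         // was do_sample
  numBeams: 1,            // was num_beams
  earlyStopping: false,   // was early_stopping
  noRepeatNgramSize: 0,   // was no_repeat_ngram_size
  numReturnSequences: 1,  // was num_return_sequences
  topK: 0,                // was top_k
  topP: 0.7,              // was top_p
  temperature: 1,         // unchanged
  repetitionPenalty: 1.0, // was repetition_penalty
  lengthPenalty: 1.0,     // was length_penalty
  badWords: null,         // was bad_words
};

// Every key is now camelCase, with no snake_case leftovers:
console.log(Object.keys(generationOptions).every((k) => !k.includes('_')));
// → true
```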
@@ -304,3 +320,3 @@ ```js
Call the `sentenceDependencies()` method and pass a block of text made up of several sentencies you want to perform POS + arcs on.
Call the `sentenceDependencies()` method and pass a block of text made up of several sentences you want to perform POS + arcs on.
@@ -307,0 +323,0 @@ ```js
License Policy Violation
License: This package is not allowed per your license policy. Review the package's license to ensure compliance.
Found 1 instance in 1 package