@xenova/transformers
Comparing version 2.5.2 to 2.5.3
 {
   "name": "@xenova/transformers",
-  "version": "2.5.2",
+  "version": "2.5.3",
   "description": "State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!",
@@ -5,0 +5,0 @@ "main": "./src/transformers.js",
@@ -101,3 +101,3 @@
 <script type="module">
-import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.2';
+import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.3';
 </script>
@@ -129,3 +129,3 @@ ```
-By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.2/dist/), which should work out-of-the-box. You can customize this as follows:
+By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.3/dist/), which should work out-of-the-box. You can customize this as follows:
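The customization hook referenced here is the library's exported `env` object; the snippet below is a minimal sketch assuming self-hosted models and WASM binaries (the paths are illustrative, not defaults):

```javascript
import { env } from '@xenova/transformers';

// Serve models from your own server instead of the Hugging Face Hub.
// (Paths below are illustrative.)
env.allowRemoteModels = false;
env.localModelPath = '/models/';

// Load the ONNX Runtime WASM binaries from a self-hosted directory
// rather than the jsDelivr CDN.
env.backends.onnx.wasm.wasmPaths = '/wasm/';
```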
@@ -193,13 +193,13 @@
 | [Conversational](https://huggingface.co/tasks/conversational) | `conversational` | Generating conversational text that is relevant, coherent and knowledgable given a prompt. | ❌ |
-| [Fill-Mask](https://huggingface.co/tasks/fill-mask) | `fill-mask` | Masking some of the words in a sentence and predicting which words should replace those masks. | ✅ |
-| [Question Answering](https://huggingface.co/tasks/question-answering) | `question-answering` | Retrieve the answer to a question from a given text. | ✅ |
-| [Sentence Similarity](https://huggingface.co/tasks/sentence-similarity) | `sentence-similarity` | Determining how similar two texts are. | ✅ |
-| [Summarization](https://huggingface.co/tasks/summarization) | `summarization` | Producing a shorter version of a document while preserving its important information. | ✅ |
+| [Fill-Mask](https://huggingface.co/tasks/fill-mask) | `fill-mask` | Masking some of the words in a sentence and predicting which words should replace those masks. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FillMaskPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=fill-mask&library=transformers.js) |
+| [Question Answering](https://huggingface.co/tasks/question-answering) | `question-answering` | Retrieve the answer to a question from a given text. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.QuestionAnsweringPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=question-answering&library=transformers.js) |
+| [Sentence Similarity](https://huggingface.co/tasks/sentence-similarity) | `sentence-similarity` | Determining how similar two texts are. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FeatureExtractionPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=feature-extraction&library=transformers.js) |
+| [Summarization](https://huggingface.co/tasks/summarization) | `summarization` | Producing a shorter version of a document while preserving its important information. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.SummarizationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=summarization&library=transformers.js) |
 | [Table Question Answering](https://huggingface.co/tasks/table-question-answering) | `table-question-answering` | Answering a question about information from a given table. | ❌ |
-| [Text Classification](https://huggingface.co/tasks/text-classification) | `text-classification` or `sentiment-analysis` | Assigning a label or class to a given text. | ✅ |
-| [Text Generation](https://huggingface.co/tasks/text-generation#completion-generation-models) | `text-generation` | Producing new text by predicting the next word in a sequence. | ✅ |
-| [Text-to-text Generation](https://huggingface.co/tasks/text-generation#text-to-text-generation-models) | `text2text-generation` | Converting one text sequence into another text sequence. | ✅ |
-| [Token Classification](https://huggingface.co/tasks/token-classification) | `token-classification` or `ner` | Assigning a label to each token in a text. | ✅ |
-| [Translation](https://huggingface.co/tasks/translation) | `translation` | Converting text from one language to another. | ✅ |
-| [Zero-Shot Classification](https://huggingface.co/tasks/zero-shot-classification) | `zero-shot-classification` | Classifying text into classes that are unseen during training. | ✅ |
+| [Text Classification](https://huggingface.co/tasks/text-classification) | `text-classification` or `sentiment-analysis` | Assigning a label or class to a given text. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TextClassificationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=text-classification&library=transformers.js) |
+| [Text Generation](https://huggingface.co/tasks/text-generation#completion-generation-models) | `text-generation` | Producing new text by predicting the next word in a sequence. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TextGenerationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js) |
+| [Text-to-text Generation](https://huggingface.co/tasks/text-generation#text-to-text-generation-models) | `text2text-generation` | Converting one text sequence into another text sequence. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.Text2TextGenerationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=text2text-generation&library=transformers.js) |
+| [Token Classification](https://huggingface.co/tasks/token-classification) | `token-classification` or `ner` | Assigning a label to each token in a text. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TokenClassificationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=token-classification&library=transformers.js) |
+| [Translation](https://huggingface.co/tasks/translation) | `translation` | Converting text from one language to another. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TranslationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=translation&library=transformers.js) |
+| [Zero-Shot Classification](https://huggingface.co/tasks/zero-shot-classification) | `zero-shot-classification` | Classifying text into classes that are unseen during training. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ZeroShotClassificationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=zero-shot-classification&library=transformers.js) |
@@ -211,7 +211,7 @@ #### Vision
 | [Depth Estimation](https://huggingface.co/tasks/depth-estimation) | `depth-estimation` | Predicting the depth of objects present in an image. | ❌ |
-| [Image Classification](https://huggingface.co/tasks/image-classification) | `image-classification` | Assigning a label or class to an entire image. | ✅ |
-| [Image Segmentation](https://huggingface.co/tasks/image-segmentation) | `image-segmentation` | Divides an image into segments where each pixel is mapped to an object. This task has multiple variants such as instance segmentation, panoptic segmentation and semantic segmentation. | ✅ |
+| [Image Classification](https://huggingface.co/tasks/image-classification) | `image-classification` | Assigning a label or class to an entire image. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ImageClassificationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers.js) |
+| [Image Segmentation](https://huggingface.co/tasks/image-segmentation) | `image-segmentation` | Divides an image into segments where each pixel is mapped to an object. This task has multiple variants such as instance segmentation, panoptic segmentation and semantic segmentation. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ImageSegmentationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=image-segmentation&library=transformers.js) |
 | [Image-to-Image](https://huggingface.co/tasks/image-to-image) | `image-to-image` | Transforming a source image to match the characteristics of a target image or a target image domain. | ❌ |
 | [Mask Generation](https://huggingface.co/tasks/mask-generation) | `mask-generation` | Generate masks for the objects in an image. | ❌ |
-| [Object Detection](https://huggingface.co/tasks/object-detection) | `object-detection` | Identify objects of certain defined classes within an image. | ✅ |
+| [Object Detection](https://huggingface.co/tasks/object-detection) | `object-detection` | Identify objects of certain defined classes within an image. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ObjectDetectionPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=object-detection&library=transformers.js) |
 | [Video Classification](https://huggingface.co/tasks/video-classification) | n/a | Assigning a label or class to an entire video. | ❌ |
@@ -224,5 +224,5 @@ | [Unconditional Image Generation](https://huggingface.co/tasks/unconditional-image-generation) | n/a | Generating images with no condition in any context (like a prompt text or another image). | ❌ |
 |--------------------------|----|-------------|------------|
-| [Audio Classification](https://huggingface.co/tasks/audio-classification) | `audio-classification` | Assigning a label or class to a given audio. | ✅ |
+| [Audio Classification](https://huggingface.co/tasks/audio-classification) | `audio-classification` | Assigning a label or class to a given audio. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.AudioClassificationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=audio-classification&library=transformers.js) |
 | [Audio-to-Audio](https://huggingface.co/tasks/audio-to-audio) | n/a | Generating audio from an input audio source. | ❌ |
-| [Automatic Speech Recognition](https://huggingface.co/tasks/automatic-speech-recognition) | `automatic-speech-recognition` | Transcribing a given audio into text. | ✅ |
+| [Automatic Speech Recognition](https://huggingface.co/tasks/automatic-speech-recognition) | `automatic-speech-recognition` | Transcribing a given audio into text. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.AutomaticSpeechRecognitionPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&library=transformers.js) |
 | [Text-to-Speech](https://huggingface.co/tasks/text-to-speech) | n/a | Generating natural-sounding speech given text input. | ❌ |
@@ -244,7 +244,7 @@
 | [Document Question Answering](https://huggingface.co/tasks/document-question-answering) | `document-question-answering` | Answering questions on document images. | ❌ |
-| [Feature Extraction](https://huggingface.co/tasks/feature-extraction) | `feature-extraction` | Transforming raw data into numerical features that can be processed while preserving the information in the original dataset. | ✅ |
-| [Image-to-Text](https://huggingface.co/tasks/image-to-text) | `image-to-text` | Output text from a given image. | ✅ |
+| [Feature Extraction](https://huggingface.co/tasks/feature-extraction) | `feature-extraction` | Transforming raw data into numerical features that can be processed while preserving the information in the original dataset. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FeatureExtractionPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=feature-extraction&library=transformers.js) |
+| [Image-to-Text](https://huggingface.co/tasks/image-to-text) | `image-to-text` | Output text from a given image. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ImageToTextPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=image-to-text&library=transformers.js) |
 | [Text-to-Image](https://huggingface.co/tasks/text-to-image) | `text-to-image` | Generates images from input text. | ❌ |
 | [Visual Question Answering](https://huggingface.co/tasks/visual-question-answering) | `visual-question-answering` | Answering open-ended questions based on an image. | ❌ |
-| [Zero-Shot Image Classification](https://huggingface.co/tasks/zero-shot-image-classification) | `zero-shot-image-classification` | Classifying images into classes that are unseen during training. | ✅ |
+| [Zero-Shot Image Classification](https://huggingface.co/tasks/zero-shot-image-classification) | `zero-shot-image-classification` | Classifying images into classes that are unseen during training. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ZeroShotImageClassificationPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&library=transformers.js) |
@@ -251,0 +251,0 @@
@@ -32,3 +32,3 @@ /**
-const VERSION = '2.5.2';
+const VERSION = '2.5.3';
@@ -35,0 +35,0 @@ // Check if various APIs are available (depends on environment)
@@ -413,4 +413,4 @@
-/** @type {Response|undefined} */
-let responseToCache;
+// Whether to cache the final response in the end.
+let toCacheResponse = false;
@@ -479,10 +479,10 @@ /** @type {Response|FileResponse|undefined} */
-if (cache && response instanceof Response && response.status === 200) {
-    // only clone if cache available, and response is valid
-    responseToCache = response.clone();
-}
+// Only cache the response if:
+toCacheResponse =
+    cache // 1. A caching system is available
+    && typeof Response !== 'undefined' // 2. `Response` is defined (i.e., we are in a browser-like environment)
+    && response instanceof Response // 3. result is a `Response` object (i.e., not a `FileResponse`)
+    && response.status === 200 // 4. request was successful (status code 200)
 }
 // Start downloading
@@ -504,7 +504,6 @@ dispatchCallback(options.progress_callback, {
 if (
     // Only cache web responses
    // i.e., do not cache FileResponses (prevents duplication)
-    responseToCache && cacheKey
+    toCacheResponse && cacheKey
     &&
@@ -514,3 +513,6 @@ // Check again whether request is in cache. If not, we add the response to the cache
 ) {
-    await cache.put(cacheKey, responseToCache)
+    // NOTE: We use `new Response(buffer, ...)` instead of `response.clone()` to handle LFS files
+    await cache.put(cacheKey, new Response(buffer, {
+        headers: response.headers
+    }))
     .catch(err => {
@@ -517,0 +519,0 @@ // Do not crash if unable to add to cache (e.g., QuotaExceededError).
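The hunk above replaces the old `response.clone()` approach with a boolean flag plus a `Response` rebuilt from the downloaded buffer. The decision logic can be sketched in isolation; the function names below are illustrative, not the library's:

```javascript
// Sketch of the caching rule introduced above (names are illustrative).
// Cache only when: a cache exists, `Response` is defined in this
// environment, the result is a web `Response`, and the status is 200.
function shouldCacheResponse(cache, response) {
    return Boolean(
        cache
        && typeof Response !== 'undefined'
        && response instanceof Response
        && response.status === 200
    );
}

// Rebuild the cached entry from the downloaded buffer instead of cloning
// the original response, mirroring the `new Response(buffer, ...)` call.
async function cacheBuffer(cache, key, buffer, headers) {
    try {
        await cache.put(key, new Response(buffer, { headers }));
    } catch (err) {
        // Do not crash if the cache rejects the entry (e.g., QuotaExceededError).
        console.warn('Unable to cache response:', err);
    }
}
```

Rebuilding from the buffer also covers the case where the body has already been consumed by the download step, which a late `clone()` cannot do.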
@@ -22,3 +22,3 @@ export namespace env {
 declare const __dirname: any;
-declare const VERSION: "2.5.2";
+declare const VERSION: "2.5.3";
 declare const localModelPath: any;
@@ -25,0 +25,0 @@ declare const FS_AVAILABLE: boolean;
@@ -74,3 +74,36 @@ /**
  * Text classification pipeline using any `ModelForSequenceClassification`.
  * @extends Pipeline
+ *
+ * **Example:** Sentiment-analysis w/ `Xenova/distilbert-base-uncased-finetuned-sst-2-english`.
+ * ```javascript
+ * let classifier = await pipeline('sentiment-analysis', 'Xenova/distilbert-base-uncased-finetuned-sst-2-english');
+ * let output = await classifier('I love transformers!');
+ * // [{ label: 'POSITIVE', score: 0.999788761138916 }]
+ * ```
+ *
+ * **Example:** Multilingual sentiment-analysis w/ `Xenova/bert-base-multilingual-uncased-sentiment` (and return top 5 classes).
+ * ```javascript
+ * let classifier = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment');
+ * let output = await classifier('Le meilleur film de tous les temps.', { topk: 5 });
+ * // [
+ * //   { label: '5 stars', score: 0.9610759615898132 },
+ * //   { label: '4 stars', score: 0.03323351591825485 },
+ * //   { label: '3 stars', score: 0.0036155181005597115 },
+ * //   { label: '1 star', score: 0.0011325967498123646 },
+ * //   { label: '2 stars', score: 0.0009423971059732139 }
+ * // ]
+ * ```
+ *
+ * **Example:** Toxic comment classification w/ `Xenova/toxic-bert` (and return all classes).
+ * ```javascript
+ * let classifier = await pipeline('text-classification', 'Xenova/toxic-bert');
+ * let output = await classifier('I hate you!', { topk: null });
+ * // [
+ * //   { label: 'toxic', score: 0.9593140482902527 },
+ * //   { label: 'insult', score: 0.16187334060668945 },
+ * //   { label: 'obscene', score: 0.03452680632472038 },
+ * //   { label: 'identity_hate', score: 0.0223250575363636 },
+ * //   { label: 'threat', score: 0.019197041168808937 },
+ * //   { label: 'severe_toxic', score: 0.005651099607348442 }
+ * // ]
+ * ```
  */
@@ -91,3 +124,28 @@ export class TextClassificationPipeline extends Pipeline {
  * Named Entity Recognition pipeline using any `ModelForTokenClassification`.
  * @extends Pipeline
+ *
+ * **Example:** Perform named entity recognition with `Xenova/bert-base-NER`.
+ * ```javascript
+ * let classifier = await pipeline('token-classification', 'Xenova/bert-base-NER');
+ * let output = await classifier('My name is Sarah and I live in London');
+ * // [
+ * //   { entity: 'B-PER', score: 0.9980202913284302, index: 4, word: 'Sarah' },
+ * //   { entity: 'B-LOC', score: 0.9994474053382874, index: 9, word: 'London' }
+ * // ]
+ * ```
+ *
+ * **Example:** Perform named entity recognition with `Xenova/bert-base-NER` (and return all labels).
+ * ```javascript
+ * let classifier = await pipeline('token-classification', 'Xenova/bert-base-NER');
+ * let output = await classifier('Sarah lives in the United States of America', { ignore_labels: [] });
+ * // [
+ * //   { entity: 'B-PER', score: 0.9966587424278259, index: 1, word: 'Sarah' },
+ * //   { entity: 'O', score: 0.9987385869026184, index: 2, word: 'lives' },
+ * //   { entity: 'O', score: 0.9990072846412659, index: 3, word: 'in' },
+ * //   { entity: 'O', score: 0.9988298416137695, index: 4, word: 'the' },
+ * //   { entity: 'B-LOC', score: 0.9995510578155518, index: 5, word: 'United' },
+ * //   { entity: 'I-LOC', score: 0.9990395307540894, index: 6, word: 'States' },
+ * //   { entity: 'I-LOC', score: 0.9986724853515625, index: 7, word: 'of' },
+ * //   { entity: 'I-LOC', score: 0.9975294470787048, index: 8, word: 'America' }
+ * // ]
+ * ```
  */
@@ -106,3 +164,3 @@ export class TokenClassificationPipeline extends Pipeline {
  *
- * **Example:** Run question answering with `distilbert-base-uncased-distilled-squad`.
+ * **Example:** Run question answering with `Xenova/distilbert-base-uncased-distilled-squad`.
  * ```javascript
@@ -113,10 +171,8 @@ * let question = 'Who was Jim Henson?';
  * let answerer = await pipeline('question-answering', 'Xenova/distilbert-base-uncased-distilled-squad');
- * let outputs = await answerer(question, context);
- * console.log(outputs);
+ * let output = await answerer(question, context);
  * // {
- * //  "answer": "a nice puppet",
- * //  "score": 0.5768911502526741
+ * //   "answer": "a nice puppet",
+ * //   "score": 0.5768911502526741
  * // }
  * ```
- * @extends Pipeline
  */
@@ -138,3 +194,22 @@ export class QuestionAnsweringPipeline extends Pipeline {
  * Masked language modeling prediction pipeline using any `ModelWithLMHead`.
  * @extends Pipeline
+ *
+ * **Example:** Perform masked language modelling (a.k.a. "fill-mask") with `Xenova/bert-base-uncased`.
+ * ```javascript
+ * let unmasker = await pipeline('fill-mask', 'Xenova/bert-base-cased');
+ * let output = await unmasker('The goal of life is [MASK].');
+ * // [
+ * //   { token_str: 'survival', score: 0.06137419492006302, token: 8115, sequence: 'The goal of life is survival.' },
+ * //   { token_str: 'love', score: 0.03902450203895569, token: 1567, sequence: 'The goal of life is love.' },
+ * //   { token_str: 'happiness', score: 0.03253183513879776, token: 9266, sequence: 'The goal of life is happiness.' },
+ * //   { token_str: 'freedom', score: 0.018736306577920914, token: 4438, sequence: 'The goal of life is freedom.' },
+ * //   { token_str: 'life', score: 0.01859794743359089, token: 1297, sequence: 'The goal of life is life.' }
+ * // ]
+ * ```
+ *
+ * **Example:** Perform masked language modelling (a.k.a. "fill-mask") with `Xenova/bert-base-cased` (and return top result).
+ * ```javascript
+ * let unmasker = await pipeline('fill-mask', 'Xenova/bert-base-cased');
+ * let output = await unmasker('The Milky Way is a [MASK] galaxy.', { topk: 1 });
+ * // [{ token_str: 'spiral', score: 0.6299987435340881, token: 14061, sequence: 'The Milky Way is a spiral galaxy.' }]
+ * ```
  */
@@ -155,3 +230,11 @@ export class FillMaskPipeline extends Pipeline {
  * Text2TextGenerationPipeline class for generating text using a model that performs text-to-text generation tasks.
  * @extends Pipeline
+ *
+ * **Example:** Text-to-text generation w/ `Xenova/LaMini-Flan-T5-783M`.
+ * ```javascript
+ * let generator = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M');
+ * let output = await generator('how can I become more healthy?', {
+ *   max_new_tokens: 100,
+ * });
+ * // [ 'To become more healthy, you can: 1. Eat a balanced diet with plenty of fruits, vegetables, whole grains, lean proteins, and healthy fats. 2. Stay hydrated by drinking plenty of water. 3. Get enough sleep and manage stress levels. 4. Avoid smoking and excessive alcohol consumption. 5. Regularly exercise and maintain a healthy weight. 6. Practice good hygiene and sanitation. 7. Seek medical attention if you experience any health issues.' ]
+ * ```
  */
@@ -174,3 +257,20 @@ export class Text2TextGenerationPipeline extends Pipeline {
  * A pipeline for summarization tasks, inheriting from Text2TextGenerationPipeline.
  * @extends Text2TextGenerationPipeline
+ *
+ * **Example:** Summarization w/ `Xenova/distilbart-cnn-6-6`.
+ * ```javascript
+ * let text = 'The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, ' +
+ *   'and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. ' +
+ *   'During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest ' +
+ *   'man-made structure in the world, a title it held for 41 years until the Chrysler Building in New ' +
+ *   'York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to ' +
+ *   'the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the ' +
+ *   'Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second ' +
+ *   'tallest free-standing structure in France after the Millau Viaduct.';
+ *
+ * let generator = await pipeline('summarization', 'Xenova/distilbart-cnn-6-6');
+ * let output = await generator(text, {
+ *   max_new_tokens: 100,
+ * });
+ * // [{ summary_text: ' The Eiffel Tower is about the same height as an 81-storey building and the tallest structure in Paris. It is the second tallest free-standing structure in France after the Millau Viaduct.' }]
+ * ```
  */
@@ -191,6 +291,6 @@ export class SummarizationPipeline extends Text2TextGenerationPipeline {
  * let output = await translator('जीवन एक चॉकलेट बॉक्स की तरह है।', {
- *  src_lang: 'hin_Deva', // Hindi
- *  tgt_lang: 'fra_Latn', // French
+ *   src_lang: 'hin_Deva', // Hindi
+ *   tgt_lang: 'fra_Latn', // French
  * });
- * // [ { translation_text: 'La vie est comme une boîte à chocolat.' } ]
+ * // [{ translation_text: 'La vie est comme une boîte à chocolat.' }]
  * ```
@@ -206,6 +306,6 @@ *
  * let output = await translator('生活就像一盒巧克力。', {
- *  src_lang: 'zh', // Chinese
- *  tgt_lang: 'en', // English
+ *   src_lang: 'zh', // Chinese
+ *   tgt_lang: 'en', // English
  * });
- * // [ { translation_text: 'Life is like a box of chocolate.' } ]
+ * // [{ translation_text: 'Life is like a box of chocolate.' }]
  * ```
@@ -227,3 +327,2 @@ *
  * let output = await classifier(text);
- * console.log(output);
  * // [{ generated_text: "I enjoy walking with my cute dog, and I love to play with the other dogs." }]
@@ -237,10 +336,9 @@ * ```
  * let output = await classifier(text, {
- *  temperature: 2,
- *  max_new_tokens: 10,
- *  repetition_penalty: 1.5,
- *  no_repeat_ngram_size: 2,
- *  num_beams: 2,
- *  num_return_sequences: 2,
+ *   temperature: 2,
+ *   max_new_tokens: 10,
+ *   repetition_penalty: 1.5,
+ *   no_repeat_ngram_size: 2,
+ *   num_beams: 2,
+ *   num_return_sequences: 2,
  * });
- * console.log(output);
  * // [{
@@ -258,14 +356,14 @@ * // "generated_text": "Once upon a time, there was an abundance of information about the history and activities that"
  * let output = await classifier(text, {
- *   max_new_tokens: 40,
+ *   max_new_tokens: 44,
  * });
- * console.log(output[0].generated_text);
- * // def fib(n):
- * //   if n == 0:
- * //     return 0
- * //   if n == 1:
- * //     return 1
- * //   return fib(n-1) + fib(n-2)
+ * // [{
+ * //   generated_text: 'def fib(n):\n' +
+ * //     '    if n == 0:\n' +
+ * //     '        return 0\n' +
+ * //     '    elif n == 1:\n' +
+ * //     '        return 1\n' +
+ * //     '    else:\n' +
+ * //     '        return fib(n-1) + fib(n-2)\n'
+ * // }]
  * ```
- *
- * @extends Pipeline
  */
@@ -293,8 +391,7 @@ export class TextGenerationPipeline extends Pipeline {
  * let output = await classifier(text, labels);
- * console.log(output);
- * // {
- * //  sequence: 'Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.',
- * //  labels: [ 'mobile', 'website', 'billing', 'account access' ],
- * //  scores: [ 0.5562091040482018, 0.1843621307860853, 0.13942646639336376, 0.12000229877234923 ]
- * // }
+ * // {
+ * //   sequence: 'Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.',
+ * //   labels: [ 'mobile', 'website', 'billing', 'account access' ],
+ * //   scores: [ 0.5562091040482018, 0.1843621307860853, 0.13942646639336376, 0.12000229877234923 ]
+ * // }
  * ```
@@ -308,3 +405,2 @@ *
  * let output = await classifier(text, labels, { multi_label: true });
- * console.log(output);
  * // {
@@ -316,4 +412,2 @@ * // sequence: 'I have a problem with my iphone that needs to be resolved asap!',
  * ```
- *
- * @extends Pipeline
  */
@@ -362,8 +456,7 @@ export class ZeroShotClassificationPipeline extends Pipeline {
  * let extractor = await pipeline('feature-extraction', 'Xenova/bert-base-uncased', { revision: 'default' });
- * let result = await extractor('This is a simple test.');
- * console.log(result);
+ * let output = await extractor('This is a simple test.');
  * // Tensor {
- * //  type: 'float32',
- * //  data: Float32Array [0.05939924716949463, 0.021655935794115067, ...],
- * //  dims: [1, 8, 768]
+ * //   type: 'float32',
+ * //   data: Float32Array [0.05939924716949463, 0.021655935794115067, ...],
+ * //   dims: [1, 8, 768]
  * // }
@@ -375,8 +468,7 @@ * ```
  * let extractor = await pipeline('feature-extraction', 'Xenova/bert-base-uncased', { revision: 'default' });
- * let result = await extractor('This is a simple test.', { pooling: 'mean', normalize: true });
- * console.log(result);
+ * let output = await extractor('This is a simple test.', { pooling: 'mean', normalize: true });
  * // Tensor {
- * //  type: 'float32',
- * //  data: Float32Array [0.03373778983950615, -0.010106077417731285, ...],
- * //  dims: [1, 768]
+ * //   type: 'float32',
+ * //   data: Float32Array [0.03373778983950615, -0.010106077417731285, ...],
+ * //   dims: [1, 768]
  * // }
@@ -388,11 +480,9 @@ * ```
  * let extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
- * let result = await extractor('This is a simple test.', { pooling: 'mean', normalize: true });
- * console.log(result);
+ * let output = await extractor('This is a simple test.', { pooling: 'mean', normalize: true });
  * // Tensor {
- * //  type: 'float32',
- * //  data: Float32Array [0.09094982594251633, -0.014774246141314507, ...],
- * //  dims: [1, 384]
+ * //   type: 'float32',
+ * //   data: Float32Array [0.09094982594251633, -0.014774246141314507, ...],
+ * //   dims: [1, 384]
  * // }
  * ```
- * @extends Pipeline
  */
@@ -530,3 +620,2 @@ export class FeatureExtractionPipeline extends Pipeline {
  * ```
- * @extends Pipeline
  */
@@ -587,3 +676,10 @@ export class AutomaticSpeechRecognitionPipeline extends Pipeline {
  * Image To Text pipeline using a `AutoModelForVision2Seq`. This pipeline predicts a caption for a given image.
  * @extends Pipeline
+ *
+ * **Example:** Generate a caption for an image w/ `Xenova/vit-gpt2-image-captioning`.
+ * ```javascript
+ * let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cats.jpg';
+ * let captioner = await pipeline('image-to-text', 'Xenova/vit-gpt2-image-captioning');
+ * let output = await captioner(url);
+ * // [{ generated_text: 'a cat laying on a couch with another cat' }]
+ * ```
  */
@@ -607,4 +703,4 @@ export class ImageToTextPipeline extends Pipeline {
  * let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg';
- * let outputs = await classifier(url);
- * // Array(1) [
+ * let output = await classifier(url);
+ * // [
  * //   {label: 'tiger, Panthera tigris', score: 0.632695734500885},
@@ -618,7 +714,7 @@ * // ]
  * let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg';
- * let outputs = await classifier(url, { topk: 3 });
- * // Array(3) [
- * //   {label: 'tiger, Panthera tigris', score: 0.632695734500885},
- * //   {label: 'tiger cat', score: 0.3634825646877289},
- * //   {label: 'lion, king of beasts, Panthera leo', score: 0.00045060308184474707},
+ * let output = await classifier(url, { topk: 3 });
+ * // [
+ * //   { label: 'tiger, Panthera tigris', score: 0.632695734500885 },
+ * //   { label: 'tiger cat', score: 0.3634825646877289 },
+ * //   { label: 'lion, king of beasts, Panthera leo', score: 0.00045060308184474707 },
  * // ]
@@ -631,4 +727,4 @@ * ```
  * let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg';
- * let outputs = await classifier(url, { topk: 0 });
- * // Array(1000) [
+ * let output = await classifier(url, { topk: 0 });
+ * // [
  * //   {label: 'tiger, Panthera tigris', score: 0.632695734500885},
@@ -641,3 +737,2 @@ * // {label: 'tiger cat', score: 0.3634825646877289},
  * ```
- * @extends Pipeline
  */
@@ -671,3 +766,13 @@ export class ImageClassificationPipeline extends Pipeline {
  * This pipeline predicts masks of objects and their classes.
  * @extends Pipeline
+ *
+ * **Example:** Perform image segmentation with `Xenova/detr-resnet-50-panoptic`.
+ * ```javascript
+ * let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cats.jpg';
+ * let segmenter = await pipeline('image-segmentation', 'Xenova/detr-resnet-50-panoptic');
+ * let output = await segmenter(url);
+ * // [
+ * //   { label: 'remote', score: 0.9984649419784546, mask: RawImage { ... } },
+ * //   { label: 'cat', score: 0.9994316101074219, mask: RawImage { ... } }
+ * // ]
+ * ```
  */
@@ -716,3 +821,14 @@ export class ImageSegmentationPipeline extends Pipeline {
  * an image when you provide an image and a set of `candidate_labels`.
  * @extends Pipeline
+ *
+ * **Example:** Zero shot image classification w/ `Xenova/clip-vit-base-patch32`.
+ * ```javascript
+ * let classifier = await pipeline('zero-shot-image-classification', 'Xenova/clip-vit-base-patch32');
+ * let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg';
+ * let output = await classifier(url, ['tiger', 'horse', 'dog']);
+ * // [
+ * //   { score: 0.9993917942047119, label: 'tiger' },
+ * //   { score: 0.0003519294841680676, label: 'horse' },
+ * //   { score: 0.0002562698791734874, label: 'dog' }
+ * // ]
+ * ```
  */
@@ -723,3 +839,3 @@ export class ZeroShotImageClassificationPipeline extends Pipeline {
  * @param {Array} images The input images.
- * @param {Array} candidate_labels The candidate labels.
+ * @param {string[]} candidate_labels The candidate labels.
  * @param {Object} options The options for the classification.
@@ -729,3 +845,3 @@ * @param {string} [options.hypothesis_template] The hypothesis template to use for zero-shot classification. Default: "This is a photo of {}".
  */
-_call(images: any[], candidate_labels: any[], { hypothesis_template }?: {
+_call(images: any[], candidate_labels: string[], { hypothesis_template }?: {
     hypothesis_template?: string;
@@ -756,4 +872,2 @@ }): Promise<any>;
  * ```
- *
- * @extends Pipeline
  */
@@ -760,0 +874,0 @@ export class ObjectDetectionPipeline extends Pipeline {
Sorry, the diff of this file is too big to display
Sorry, the diff of this file is not supported yet
License Policy Violation
LicenseThis package is not allowed per your license policy. Review the package's license to ensure compliance.
Found 1 instance in 1 package