@xenova/transformers
Comparing version 2.6.1 to 2.6.2
{ | ||
"name": "@xenova/transformers", | ||
"version": "2.6.1", | ||
"version": "2.6.2", | ||
"description": "State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!", | ||
@@ -5,0 +5,0 @@ "main": "./src/transformers.js", |
@@ -101,3 +101,3 @@ | ||
<script type="module"> | ||
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.1'; | ||
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.2'; | ||
</script> | ||
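For context, the snippet above only imports the library; a minimal end-to-end usage of the ES-module build (task and output shown here are illustrative, following the documented `pipeline` API) might look like:

```javascript
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.2';

// Create a pipeline for a task; the model is downloaded and cached on first use.
const classifier = await pipeline('sentiment-analysis');

// Run inference directly in the browser — no server required.
const result = await classifier('I love transformers!');
console.log(result); // e.g. [{ label: 'POSITIVE', score: 0.99... }]
```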
@@ -116,3 +116,4 @@ ``` | ||
| Code Playground | In-browser code completion website | [link](./examples/code-completion/) | | ||
| Semantic Image Search | Search for images with text (Next.js + Supabase) | [link](./examples/semantic-image-search/) | | ||
| Semantic Image Search (client-side) | Search for images with text | [link](./examples/semantic-image-search-client/) | | ||
| Semantic Image Search (server-side) | Search for images with text (Supabase) | [link](./examples/semantic-image-search/) | | ||
| Vanilla JavaScript | In-browser object detection | [link](./examples/vanilla-js/) | | ||
@@ -131,3 +132,3 @@ | React | Multilingual translation website | [link](./examples/react-translator/) | | ||
By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.1/dist/), which should work out-of-the-box. You can customize this as follows: | ||
By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.2/dist/), which should work out-of-the-box. You can customize this as follows: | ||
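The customization itself falls outside this hunk; as a sketch, the settings live on the exported `env` object (property names follow the library's docs and may vary by version):

```javascript
import { env } from '@xenova/transformers';

// Serve models from your own host instead of the Hugging Face Hub.
env.allowRemoteModels = false;
env.localModelPath = '/models/';

// Point ONNX Runtime at self-hosted WASM binaries instead of the jsDelivr CDN.
env.backends.onnx.wasm.wasmPaths = '/dist/';
```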
@@ -242,3 +243,3 @@ | ||
|--------------------------|----|-------------|------------| | ||
| [Document Question Answering](https://huggingface.co/tasks/document-question-answering) | `document-question-answering` | Answering questions on document images. | ❌ | | ||
| [Document Question Answering](https://huggingface.co/tasks/document-question-answering) | `document-question-answering` | Answering questions on document images. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.DocumentQuestionAnsweringPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=document-question-answering&library=transformers.js) | | ||
| [Feature Extraction](https://huggingface.co/tasks/feature-extraction) | `feature-extraction` | Transforming raw data into numerical features that can be processed while preserving the information in the original dataset. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FeatureExtractionPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=feature-extraction&library=transformers.js) | | ||
@@ -265,2 +266,4 @@ | [Image-to-Text](https://huggingface.co/tasks/image-to-text) | `image-to-text` | Output text from a given image. | ✅ [(docs)](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ImageToTextPipeline)<br>[(models)](https://huggingface.co/models?pipeline_tag=image-to-text&library=transformers.js) | | ||
1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. | ||
1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. | ||
1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. | ||
1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). | ||
@@ -276,2 +279,3 @@ 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. | ||
1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. | ||
1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. | ||
1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. | ||
@@ -284,2 +288,5 @@ 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. | ||
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik. | ||
1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. | ||
1. **[LLaMA](https://huggingface.co/docs/transformers/model_doc/llama)** (from The FAIR team of Meta AI) released with the paper [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. | ||
1. **[Llama2](https://huggingface.co/docs/transformers/model_doc/llama2)** (from The FAIR team of Meta AI) released with the paper [Llama2: Open Foundation and Fine-Tuned Chat Models](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/XXX) by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom. | ||
1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. | ||
@@ -286,0 +293,0 @@ 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. |
@@ -32,3 +32,3 @@ /** | ||
const VERSION = '2.6.1'; | ||
const VERSION = '2.6.2'; | ||
@@ -35,0 +35,0 @@ // Check if various APIs are available (depends on environment) |
@@ -34,3 +34,3 @@ | ||
softmax, | ||
FFT | ||
FFT, | ||
} from './utils/maths.js'; | ||
@@ -125,2 +125,8 @@ | ||
/** | ||
* Named tuple to indicate the order we are using is (height x width), even though | ||
* the graphics industry standard is (width x height). | ||
* @typedef {[height: number, width: number]} HeightWidth | ||
*/ | ||
/** | ||
* Base class for feature extractors. | ||
@@ -143,2 +149,9 @@ * | ||
/** | ||
* @typedef {object} ImageFeatureExtractorResult | ||
* @property {Tensor} pixel_values The pixel values of the batched preprocessed images. | ||
* @property {HeightWidth[]} original_sizes Array of two-dimensional tuples like [[480, 640]]. | ||
* @property {HeightWidth[]} reshaped_input_sizes Array of two-dimensional tuples like [[1000, 1330]]. | ||
*/ | ||
/** | ||
* Feature extractor for image models. | ||
@@ -175,2 +188,3 @@ * | ||
this.do_resize = this.config.do_resize; | ||
this.do_thumbnail = this.config.do_thumbnail; | ||
this.size = this.config.size; | ||
@@ -183,10 +197,53 @@ | ||
this.pad_size = this.config.pad_size; | ||
this.do_pad = (this.config.do_pad ?? false) && this.pad_size; | ||
this.do_pad = this.config.do_pad; | ||
if (this.do_pad && !this.pad_size && this.size.width !== undefined && this.size.height !== undefined) { | ||
// Should pad, but no pad size specified | ||
// We infer the pad size from the resize size | ||
this.pad_size = this.size | ||
} | ||
} | ||
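To illustrate the new fallback (a standalone sketch, not library code): when padding is enabled but no `pad_size` is configured, the resize `size` now doubles as the pad size.

```javascript
// Standalone re-statement of the pad-size fallback introduced above.
function resolvePadSize(config) {
  if (config.do_pad && !config.pad_size
    && config.size?.width !== undefined && config.size?.height !== undefined) {
    // Should pad, but no pad size specified: infer it from the resize size.
    return config.size;
  }
  return config.pad_size;
}

console.log(resolvePadSize({ do_pad: true, size: { width: 224, height: 224 } }));
// => { width: 224, height: 224 }
```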
/** | ||
* Resize the image to make a thumbnail. The image is resized so that no dimension is larger than any | ||
* corresponding dimension of the specified size. | ||
* @param {RawImage} image The image to be resized. | ||
* @param {{height:number, width:number}} size The size `{"height": h, "width": w}` to resize the image to. | ||
* @param {string | 0 | 1 | 2 | 3 | 4 | 5} [resample=2] The resampling filter to use. | ||
* @returns {Promise<RawImage>} The resized image. | ||
*/ | ||
async thumbnail(image, size, resample = 2) { | ||
const input_height = image.height; | ||
const input_width = image.width; | ||
const output_height = size.height; | ||
const output_width = size.width; | ||
// We always resize to the smallest of either the input or output size. | ||
let height = Math.min(input_height, output_height) | ||
let width = Math.min(input_width, output_width) | ||
if (height === input_height && width === input_width) { | ||
return image; | ||
} | ||
if (input_height > input_width) { | ||
width = Math.floor(input_width * height / input_height); | ||
} else if (input_width > input_height) { | ||
height = Math.floor(input_height * width / input_width); | ||
} | ||
return await image.resize(width, height, { resample }); | ||
} | ||
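The size arithmetic above caps each dimension while preserving aspect ratio; as a standalone sketch of just the math (no image I/O):

```javascript
// Mirrors the dimension computation in `thumbnail` above.
function thumbnailSize(inputWidth, inputHeight, outputWidth, outputHeight) {
  let height = Math.min(inputHeight, outputHeight);
  let width = Math.min(inputWidth, outputWidth);
  if (height === inputHeight && width === inputWidth) return [width, height];
  if (inputHeight > inputWidth) {
    width = Math.floor(inputWidth * height / inputHeight);
  } else if (inputWidth > inputHeight) {
    height = Math.floor(inputHeight * width / inputWidth);
  }
  return [width, height];
}

console.log(thumbnailSize(1920, 1080, 500, 500)); // [500, 281] — aspect ratio kept
```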
/** | ||
* @typedef {object} PreprocessedImage | ||
* @property {HeightWidth} original_size The original size of the image. | ||
* @property {HeightWidth} reshaped_input_size The reshaped input size of the image. | ||
* @property {Tensor} pixel_values The pixel values of the preprocessed image. | ||
*/ | ||
/** | ||
* Preprocesses the given image. | ||
* | ||
* @param {RawImage} image The image to preprocess. | ||
* @returns {Promise<any>} The preprocessed image as a Tensor. | ||
* @returns {Promise<PreprocessedImage>} The preprocessed image. | ||
*/ | ||
@@ -214,4 +271,9 @@ async preprocess(image) { | ||
if (this.do_thumbnail) { | ||
// NOTE: custom logic for `Donut` models | ||
const { height, width } = this.size; | ||
shortest_edge = Math.min(height, width) | ||
} | ||
// Support both formats for backwards compatibility | ||
if (Number.isInteger(this.size)) { | ||
else if (Number.isInteger(this.size)) { | ||
shortest_edge = this.size; | ||
@@ -244,5 +306,5 @@ longest_edge = this.config.max_size ?? shortest_edge; | ||
// To avoid certain floating point precision issues, we round to 3 decimal places | ||
const finalWidth = Math.floor(Number((newWidth * longResizeFactor).toPrecision(3))); | ||
const finalHeight = Math.floor(Number((newHeight * longResizeFactor).toPrecision(3))); | ||
// To avoid certain floating point precision issues, we round to 2 decimal places | ||
const finalWidth = Math.floor(Number((newWidth * longResizeFactor).toFixed(2))); | ||
const finalHeight = Math.floor(Number((newHeight * longResizeFactor).toFixed(2))); | ||
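The switch from `toPrecision(3)` to `toFixed(2)` matters for large dimensions: `toPrecision` counts significant digits, so values of 1000 or more lose integer precision, while `toFixed` counts decimal places. A quick illustration:

```javascript
const scaled = 1333.9999999999998; // typical float artifact of width * resizeFactor
Number(scaled.toPrecision(3)); // 1330 — 3 significant digits truncate the integer part
Number(scaled.toFixed(2));     // 1334 — 2 decimal places keep it intact
Math.floor(Number(scaled.toFixed(2))); // 1334, the intended final dimension
```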
@@ -264,2 +326,7 @@ // Perform resize | ||
// Resize the image using thumbnail method. | ||
if (this.do_thumbnail) { | ||
image = await this.thumbnail(image, this.size, this.resample); | ||
} | ||
if (this.do_center_crop) { | ||
@@ -280,6 +347,7 @@ | ||
/** @type {HeightWidth} */ | ||
let reshaped_input_size = [image.height, image.width]; | ||
// TODO is it okay to pad before rescaling/normalizing? | ||
if (this.do_pad) { | ||
if (this.do_pad && this.pad_size) { | ||
let left = 0; | ||
@@ -339,12 +407,13 @@ let right = this.pad_size.width - image.width; | ||
* features into a single Tensor. | ||
* @param {any} images The URL(s) of the image(s) to extract features from. | ||
* @returns {Promise<Object>} An object containing the concatenated pixel values (and other metadata) of the preprocessed images. | ||
* @param {any[]} images The URL(s) of the image(s) to extract features from. | ||
* @param {...any} args Additional arguments. | ||
* @returns {Promise<ImageFeatureExtractorResult>} An object containing the concatenated pixel values (and other metadata) of the preprocessed images. | ||
*/ | ||
async _call(images) { | ||
async _call(images, ...args) { | ||
if (!Array.isArray(images)) { | ||
images = [images]; | ||
} | ||
/** @type {PreprocessedImage[]} */ | ||
const imageData = await Promise.all(images.map(x => this.preprocess(x))); | ||
let imageData = await Promise.all(images.map(x => this.preprocess(x))); | ||
// TODO: | ||
@@ -355,3 +424,3 @@ | ||
imageData.forEach(x => x.pixel_values.dims = [1, ...x.pixel_values.dims]); | ||
let pixel_values = cat(imageData.map(x => x.pixel_values)); | ||
const pixel_values = cat(imageData.map(x => x.pixel_values)); | ||
@@ -376,4 +445,11 @@ return { | ||
export class BeitFeatureExtractor extends ImageFeatureExtractor { } | ||
export class DonutFeatureExtractor extends ImageFeatureExtractor { } | ||
/** | ||
* @typedef {object} DetrFeatureExtractorResultProps | ||
* @property {Tensor} pixel_mask | ||
* @typedef {ImageFeatureExtractorResult & DetrFeatureExtractorResultProps} DetrFeatureExtractorResult | ||
*/ | ||
/** | ||
* Detr Feature Extractor. | ||
@@ -387,7 +463,7 @@ * | ||
* each image, and concatenates the resulting features into a single Tensor. | ||
* @param {any} urls The URL(s) of the image(s) to extract features from. | ||
* @returns {Promise<Object>} An object containing the concatenated pixel values of the preprocessed images. | ||
* @param {any[]} urls The URL(s) of the image(s) to extract features from. | ||
* @returns {Promise<DetrFeatureExtractorResult>} An object containing the concatenated pixel values of the preprocessed images. | ||
*/ | ||
async _call(urls) { | ||
let result = await super._call(urls); | ||
const result = await super._call(urls); | ||
@@ -397,6 +473,5 @@ // TODO support differently-sized images, for now assume all images are the same size. | ||
// Currently, just fill pixel mask with 1s | ||
let maskSize = [result.pixel_values.dims[0], 64, 64]; | ||
result.pixel_mask = new Tensor( | ||
const maskSize = [result.pixel_values.dims[0], 64, 64]; | ||
const pixel_mask = new Tensor( | ||
'int64', | ||
// TODO: fix error below | ||
new BigInt64Array(maskSize.reduce((a, b) => a * b)).fill(1n), | ||
@@ -406,3 +481,3 @@ maskSize | ||
return result; | ||
return { ...result, pixel_mask }; | ||
} | ||
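Since all images in the batch are assumed to share one size, the mask is simply an all-ones tensor; a sketch of the allocation (assuming the `Tensor` class from './utils/tensor.js'):

```javascript
// For a batch of 2 images, build the 2x64x64 all-ones attention mask.
const maskSize = [2, 64, 64];
const numElements = maskSize.reduce((a, b) => a * b); // 8192
const data = new BigInt64Array(numElements).fill(1n); // int64 ones — nothing is masked out
// const pixel_mask = new Tensor('int64', data, maskSize);
```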
@@ -707,3 +782,18 @@ | ||
/** | ||
* @typedef {object} SamImageProcessorResult | ||
* @property {Tensor} pixel_values | ||
* @property {HeightWidth[]} original_sizes | ||
* @property {HeightWidth[]} reshaped_input_sizes | ||
* @property {Tensor} input_points | ||
*/ | ||
export class SamImageProcessor extends ImageFeatureExtractor { | ||
/** | ||
* @param {any[]} images The URL(s) of the image(s) to extract features from. | ||
* @param {*} input_points A 3D or 4D array, representing the input points provided by the user. | ||
* - 3D: `[point_batch_size, nb_points_per_image, 2]`. In this case, `batch_size` is assumed to be 1. | ||
* - 4D: `[batch_size, point_batch_size, nb_points_per_image, 2]`. | ||
* @returns {Promise<SamImageProcessorResult>} | ||
*/ | ||
async _call(images, input_points) { | ||
@@ -718,2 +808,3 @@ let { | ||
// TODO: add support for 2D input_points | ||
if (shape.length === 3) { | ||
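To make the accepted point shapes concrete (coordinate values here are illustrative, not from the source):

```javascript
// 3D: one point batch of two (x, y) points; batch_size is implicitly 1.
const points3d = [[[450, 600], [125, 130]]];   // [point_batch_size, nb_points_per_image, 2]

// 4D: the same points with an explicit batch dimension.
const points4d = [[[[450, 600], [125, 130]]]]; // [batch_size, point_batch_size, nb_points_per_image, 2]
```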
@@ -1256,5 +1347,6 @@ // Correct user's input | ||
* @param {any} input The input to extract features from. | ||
* @param {...any} args Additional arguments. | ||
* @returns {Promise<any>} A Promise that resolves with the extracted features. | ||
*/ | ||
async _call(input) { | ||
async _call(input, ...args) { | ||
return await this.feature_extractor(input); | ||
@@ -1265,3 +1357,7 @@ } | ||
export class SamProcessor extends Processor { | ||
/** | ||
* @param {*} images | ||
* @param {*} input_points | ||
* @returns {Promise<any>} | ||
*/ | ||
async _call(images, input_points) { | ||
@@ -1309,5 +1405,2 @@ return await this.feature_extractor(images, input_points); | ||
/** | ||
* @typedef {import('./utils/hub.js').PretrainedOptions} PretrainedOptions | ||
*/ | ||
/** | ||
* Helper class which is used to instantiate pretrained processors with the `from_pretrained` function. | ||
@@ -1352,2 +1445,3 @@ * The chosen processor class is determined by the type specified in the processor config. | ||
YolosFeatureExtractor, | ||
DonutFeatureExtractor, | ||
@@ -1375,3 +1469,3 @@ SamImageProcessor, | ||
* - A path to a *directory* containing processor files, e.g., `./my_model_directory/`. | ||
* @param {PretrainedOptions} options Additional options for loading the processor. | ||
* @param {import('./utils/hub.js').PretrainedOptions} options Additional options for loading the processor. | ||
* | ||
@@ -1404,6 +1498,6 @@ * @returns {Promise<Processor>} A new instance of the Processor class. | ||
// Assume ImageFeatureExtractor | ||
console.warn('Feature extractor type not specified, assuming ImageFeatureExtractor due to size parameter in config.'); | ||
console.warn(`Feature extractor type "${key}" not found, assuming ImageFeatureExtractor due to size parameter in config.`); | ||
feature_extractor_class = ImageFeatureExtractor; | ||
} else { | ||
throw new Error(`Unknown Feature Extractor type: ${preprocessorConfig.feature_extractor_type}`); | ||
throw new Error(`Unknown Feature Extractor type: ${key}`); | ||
} | ||
@@ -1410,0 +1504,0 @@ } |
@@ -1,3 +0,1 @@ | ||
// @ts-nocheck | ||
/** | ||
@@ -4,0 +2,0 @@ * @file Entry point for the Transformers.js library. Only the exports from this file |
@@ -22,13 +22,13 @@ | ||
* @typedef {Object} PretrainedOptions Options for loading a pretrained model. | ||
* @property {boolean?} [options.quantized=true] Whether to load the 8-bit quantized version of the model (only applicable when loading model files). | ||
* @property {function} [options.progress_callback=null] If specified, this function will be called during model construction, to provide the user with progress updates. | ||
* @property {Object} [options.config=null] Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: | ||
* @property {boolean?} [quantized=true] Whether to load the 8-bit quantized version of the model (only applicable when loading model files). | ||
* @property {function} [progress_callback=null] If specified, this function will be called during model construction, to provide the user with progress updates. | ||
* @property {Object} [config=null] Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: | ||
* - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). | ||
* - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory. | ||
* @property {string} [options.cache_dir=null] Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. | ||
* @property {boolean} [options.local_files_only=false] Whether or not to only look at local files (e.g., not try downloading the model). | ||
* @property {string} [options.revision='main'] The specific model version to use. It can be a branch name, a tag name, or a commit id, | ||
* @property {string} [cache_dir=null] Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. | ||
* @property {boolean} [local_files_only=false] Whether or not to only look at local files (e.g., not try downloading the model). | ||
* @property {string} [revision='main'] The specific model version to use. It can be a branch name, a tag name, or a commit id, | ||
* since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git. | ||
* NOTE: This setting is ignored for local requests. | ||
* @property {string} [options.model_file_name=null] If specified, load the model with this name (excluding the .onnx suffix). Currently only valid for encoder- or decoder-only models. | ||
* @property {string} [model_file_name=null] If specified, load the model with this name (excluding the .onnx suffix). Currently only valid for encoder- or decoder-only models. | ||
*/ | ||
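These options flow into the `from_pretrained`-style loaders; a hedged usage sketch (the model id is illustrative):

```javascript
import { pipeline } from '@xenova/transformers';

const classifier = await pipeline(
  'text-classification',
  'Xenova/distilbert-base-uncased-finetuned-sst-2-english',
  {
    quantized: true,                                 // default: load the 8-bit quantized weights
    progress_callback: (p) => console.log(p.status), // download/initialization progress updates
    revision: 'main',
  },
);
```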
@@ -35,0 +35,0 @@ |
@@ -87,3 +87,6 @@ | ||
constructor(data, width, height, channels) { | ||
this._update(data, width, height, channels); | ||
this.data = data; | ||
this.width = width; | ||
this.height = height; | ||
this.channels = channels; | ||
} | ||
@@ -512,3 +515,4 @@ | ||
* @param {number} height The new height of the image. | ||
* @param {1|2|3|4} channels The new number of channels of the image. | ||
* @param {1|2|3|4|null} [channels] The new number of channels of the image. | ||
* @private | ||
*/ | ||
@@ -515,0 +519,0 @@ _update(data, width, height, channels = null) { |
@@ -19,3 +19,3 @@ /** | ||
/** | ||
* @typedef {import('./maths.js').AnyTypedArray} AnyTypedArray | ||
* @typedef {import('./maths.js').AnyTypedArray | any[]} DataArray | ||
*/ | ||
@@ -29,3 +29,3 @@ | ||
* Create a new Tensor or copy an existing Tensor. | ||
* @param {[string, Array|AnyTypedArray, number[]]|[ONNXTensor]} args | ||
* @param {[string, DataArray, number[]]|[ONNXTensor]} args | ||
*/ | ||
@@ -32,0 +32,0 @@ constructor(...args) { |
@@ -22,3 +22,3 @@ export namespace env { | ||
declare const __dirname: any; | ||
declare const VERSION: "2.6.1"; | ||
declare const VERSION: "2.6.2"; | ||
declare const localModelPath: any; | ||
@@ -25,0 +25,0 @@ declare const FS_AVAILABLE: boolean; |
/** | ||
* @typedef {import('./utils/hub.js').PretrainedOptions} PretrainedOptions | ||
*/ | ||
/** | ||
* Utility factory method to build a [`Pipeline`] object. | ||
@@ -10,2 +7,3 @@ * | ||
* - `"automatic-speech-recognition"`: will return a `AutomaticSpeechRecognitionPipeline`. | ||
* - `"document-question-answering"`: will return a `DocumentQuestionAnsweringPipeline`. | ||
* - `"feature-extraction"`: will return a `FeatureExtractionPipeline`. | ||
@@ -28,7 +26,7 @@ * - `"fill-mask"`: will return a `FillMaskPipeline`. | ||
* @param {string} [model=null] The name of the pre-trained model to use. If not specified, the default model for the task will be used. | ||
* @param {PretrainedOptions} [options] Optional parameters for the pipeline. | ||
* @param {import('./utils/hub.js').PretrainedOptions} [options] Optional parameters for the pipeline. | ||
* @returns {Promise<Pipeline>} A Pipeline object for the specified task. | ||
* @throws {Error} If an unsupported pipeline is requested. | ||
*/ | ||
export function pipeline(task: string, model?: string, { quantized, progress_callback, config, cache_dir, local_files_only, revision, }?: PretrainedOptions): Promise<Pipeline>; | ||
export function pipeline(task: string, model?: string, { quantized, progress_callback, config, cache_dir, local_files_only, revision, }?: import('./utils/hub.js').PretrainedOptions): Promise<Pipeline>; | ||
declare const Pipeline_base: new () => { | ||
@@ -903,3 +901,27 @@ (...args: any[]): any; | ||
} | ||
export type PretrainedOptions = import('./utils/hub.js').PretrainedOptions; | ||
/** | ||
* Document Question Answering pipeline using any `AutoModelForDocumentQuestionAnswering`. | ||
* The inputs/outputs are similar to the (extractive) question answering pipeline; however, | ||
* the pipeline takes an image (and optional OCR'd words/boxes) as input instead of text context. | ||
* | ||
* **Example:** Answer questions about a document with `Xenova/donut-base-finetuned-docvqa`. | ||
* ```javascript | ||
* let image = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/invoice.png'; | ||
* let question = 'What is the invoice number?'; | ||
* | ||
* let qa_pipeline = await pipeline('document-question-answering', 'Xenova/donut-base-finetuned-docvqa'); | ||
* let output = await qa_pipeline(image, question); | ||
* // [{ answer: 'us-001' }] | ||
* ``` | ||
*/ | ||
export class DocumentQuestionAnsweringPipeline extends Pipeline { | ||
/** | ||
* Answer the question given as input by using the document. | ||
* @param {any} image The image of the document to use. | ||
* @param {string} question A question to ask of the document. | ||
* @param {Object} [generate_kwargs={}] Optional generation arguments. | ||
* @returns {Promise<Object|Object[]>} A Promise that resolves to an object (or array of objects) containing the generated text(s). | ||
*/ | ||
_call(image: any, question: string, generate_kwargs?: any): Promise<any | any[]>; | ||
} | ||
export type QuestionAnsweringResult = { | ||
@@ -906,0 +928,0 @@ /** |
@@ -6,2 +6,7 @@ declare const FeatureExtractor_base: new () => { | ||
/** | ||
* Named tuple to indicate the order we are using is (height x width), even though | ||
* the graphics industry standard is (width x height). | ||
* @typedef {[height: number, width: number]} HeightWidth | ||
*/ | ||
/** | ||
* Base class for feature extractors. | ||
@@ -21,2 +26,8 @@ * | ||
/** | ||
* @typedef {object} ImageFeatureExtractorResult | ||
* @property {Tensor} pixel_values The pixel values of the batched preprocessed images. | ||
* @property {HeightWidth[]} original_sizes Array of two-dimensional tuples like [[480, 640]]. | ||
* @property {HeightWidth[]} reshaped_input_sizes Array of two-dimensional tuples like [[1000, 1330]]. | ||
*/ | ||
/** | ||
* Feature extractor for image models. | ||
@@ -57,2 +68,3 @@ * | ||
do_resize: any; | ||
do_thumbnail: any; | ||
size: any; | ||
@@ -65,8 +77,39 @@ do_center_crop: any; | ||
/** | ||
* Resize the image to make a thumbnail. The image is resized so that no dimension is larger than any | ||
* corresponding dimension of the specified size. | ||
* @param {RawImage} image The image to be resized. | ||
* @param {{height:number, width:number}} size The size `{"height": h, "width": w}` to resize the image to. | ||
* @param {string | 0 | 1 | 2 | 3 | 4 | 5} [resample=2] The resampling filter to use. | ||
* @returns {Promise<RawImage>} The resized image. | ||
*/ | ||
thumbnail(image: RawImage, size: { | ||
height: number; | ||
width: number; | ||
}, resample?: string | 0 | 1 | 2 | 3 | 4 | 5): Promise<RawImage>; | ||
/** | ||
* @typedef {object} PreprocessedImage | ||
* @property {HeightWidth} original_size The original size of the image. | ||
* @property {HeightWidth} reshaped_input_size The reshaped input size of the image. | ||
* @property {Tensor} pixel_values The pixel values of the preprocessed image. | ||
*/ | ||
/** | ||
* Preprocesses the given image. | ||
* | ||
* @param {RawImage} image The image to preprocess. | ||
* @returns {Promise<any>} The preprocessed image as a Tensor. | ||
* @returns {Promise<PreprocessedImage>} The preprocessed image. | ||
*/ | ||
preprocess(image: RawImage): Promise<any>; | ||
preprocess(image: RawImage): Promise<{ | ||
/** | ||
* The original size of the image. | ||
*/ | ||
original_size: HeightWidth; | ||
/** | ||
* The reshaped input size of the image. | ||
*/ | ||
reshaped_input_size: HeightWidth; | ||
/** | ||
* The pixel values of the preprocessed image. | ||
*/ | ||
pixel_values: Tensor; | ||
}>; | ||
/** | ||
@@ -76,6 +119,7 @@ * Calls the feature extraction process on an array of image | ||
* features into a single Tensor. | ||
* @param {any} images The URL(s) of the image(s) to extract features from. | ||
* @returns {Promise<Object>} An object containing the concatenated pixel values (and other metadata) of the preprocessed images. | ||
* @param {any[]} images The URL(s) of the image(s) to extract features from. | ||
* @param {...any} args Additional arguments. | ||
* @returns {Promise<ImageFeatureExtractorResult>} An object containing the concatenated pixel values (and other metadata) of the preprocessed images. | ||
*/ | ||
_call(images: any): Promise<any>; | ||
_call(images: any[], ...args: any[]): Promise<ImageFeatureExtractorResult>; | ||
} | ||
@@ -92,3 +136,10 @@ export class ConvNextFeatureExtractor extends ImageFeatureExtractor { | ||
} | ||
export class DonutFeatureExtractor extends ImageFeatureExtractor { | ||
} | ||
/** | ||
* @typedef {object} DetrFeatureExtractorResultProps | ||
* @property {Tensor} pixel_mask | ||
* @typedef {ImageFeatureExtractorResult & DetrFeatureExtractorResultProps} DetrFeatureExtractorResult | ||
*/ | ||
/** | ||
* Detr Feature Extractor. | ||
@@ -100,2 +151,9 @@ * | ||
/** | ||
* Calls the feature extraction process on an array of image URLs, preprocesses | ||
* each image, and concatenates the resulting features into a single Tensor. | ||
* @param {any[]} urls The URL(s) of the image(s) to extract features from. | ||
* @returns {Promise<DetrFeatureExtractorResult>} An object containing the concatenated pixel values of the preprocessed images. | ||
*/ | ||
_call(urls: any[]): Promise<DetrFeatureExtractorResult>; | ||
/** | ||
* Post-processes the outputs of the model (for object detection). | ||
@@ -179,10 +237,19 @@ * @param {Object} outputs The outputs of the model that must be post-processed | ||
} | ||
/** | ||
* @typedef {object} SamImageProcessorResult | ||
* @property {Tensor} pixel_values | ||
* @property {HeightWidth[]} original_sizes | ||
* @property {HeightWidth[]} reshaped_input_sizes | ||
* @property {Tensor} input_points | ||
*/ | ||
export class SamImageProcessor extends ImageFeatureExtractor { | ||
_call(images: any, input_points: any): Promise<{ | ||
pixel_values: any; | ||
original_sizes: any; | ||
reshaped_input_sizes: any; | ||
input_points: Tensor; | ||
}>; | ||
/** | ||
* @param {any[]} images The URL(s) of the image(s) to extract features from. | ||
* @param {*} input_points A 3D or 4D array, representing the input points provided by the user. | ||
* - 3D: `[point_batch_size, nb_points_per_image, 2]`. In this case, `batch_size` is assumed to be 1. | ||
* - 4D: `[batch_size, point_batch_size, nb_points_per_image, 2]`. | ||
* @returns {Promise<SamImageProcessorResult>} | ||
*/ | ||
_call(images: any[], input_points: any): Promise<SamImageProcessorResult>; | ||
/** | ||
* Remove padding and upscale masks to the original image size. | ||
@@ -303,7 +370,13 @@ * @param {Tensor} masks Batched masks from the mask_decoder in (batch_size, num_channels, height, width) format. | ||
* @param {any} input The input to extract features from. | ||
* @param {...any} args Additional arguments. | ||
* @returns {Promise<any>} A Promise that resolves with the extracted features. | ||
*/ | ||
_call(input: any): Promise<any>; | ||
_call(input: any, ...args: any[]): Promise<any>; | ||
} | ||
export class SamProcessor extends Processor { | ||
/** | ||
* @param {*} images | ||
* @param {*} input_points | ||
* @returns {Promise<any>} | ||
*/ | ||
_call(images: any, input_points: any): Promise<any>; | ||
@@ -320,9 +393,18 @@ /** | ||
export class WhisperProcessor extends Processor { | ||
/** | ||
* Calls the feature_extractor function with the given audio input. | ||
* @param {any} audio The audio input to extract features from. | ||
* @returns {Promise<any>} A Promise that resolves with the extracted features. | ||
*/ | ||
_call(audio: any): Promise<any>; | ||
} | ||
export class Wav2Vec2ProcessorWithLM extends Processor { | ||
/** | ||
* Calls the feature_extractor function with the given audio input. | ||
* @param {any} audio The audio input to extract features from. | ||
* @returns {Promise<any>} A Promise that resolves with the extracted features. | ||
*/ | ||
_call(audio: any): Promise<any>; | ||
} | ||
/** | ||
* @typedef {import('./utils/hub.js').PretrainedOptions} PretrainedOptions | ||
*/ | ||
/** | ||
* Helper class which is used to instantiate pretrained processors with the `from_pretrained` function. | ||
@@ -367,2 +449,3 @@ * The chosen processor class is determined by the type specified in the processor config. | ||
YolosFeatureExtractor: typeof YolosFeatureExtractor; | ||
DonutFeatureExtractor: typeof DonutFeatureExtractor; | ||
SamImageProcessor: typeof SamImageProcessor; | ||
@@ -387,9 +470,37 @@ Wav2Vec2FeatureExtractor: typeof Wav2Vec2FeatureExtractor; | ||
* - A path to a *directory* containing processor files, e.g., `./my_model_directory/`. | ||
* @param {PretrainedOptions} options Additional options for loading the processor. | ||
* @param {import('./utils/hub.js').PretrainedOptions} options Additional options for loading the processor. | ||
* | ||
* @returns {Promise<Processor>} A new instance of the Processor class. | ||
*/ | ||
static from_pretrained(pretrained_model_name_or_path: string, { progress_callback, config, cache_dir, local_files_only, revision, }?: PretrainedOptions): Promise<Processor>; | ||
static from_pretrained(pretrained_model_name_or_path: string, { progress_callback, config, cache_dir, local_files_only, revision, }?: import('./utils/hub.js').PretrainedOptions): Promise<Processor>; | ||
} | ||
export type PretrainedOptions = import('./utils/hub.js').PretrainedOptions; | ||
/** | ||
* Named tuple to indicate the order we are using is (height x width), even though | ||
* the graphics industry standard is (width x height). | ||
*/ | ||
export type HeightWidth = [height: number, width: number]; | ||
export type ImageFeatureExtractorResult = { | ||
/** | ||
* The pixel values of the batched preprocessed images. | ||
*/ | ||
pixel_values: Tensor; | ||
/** | ||
* Array of two-dimensional tuples like [[480, 640]]. | ||
*/ | ||
original_sizes: HeightWidth[]; | ||
/** | ||
* Array of two-dimensional tuples like [[1000, 1330]]. | ||
*/ | ||
reshaped_input_sizes: HeightWidth[]; | ||
}; | ||
export type DetrFeatureExtractorResultProps = { | ||
pixel_mask: Tensor; | ||
}; | ||
export type DetrFeatureExtractorResult = ImageFeatureExtractorResult & DetrFeatureExtractorResultProps; | ||
export type SamImageProcessorResult = { | ||
pixel_values: Tensor; | ||
original_sizes: HeightWidth[]; | ||
reshaped_input_sizes: HeightWidth[]; | ||
input_points: Tensor; | ||
}; | ||
import { RawImage } from './utils/image.js'; | ||
@@ -396,0 +507,0 @@ import { Tensor } from './utils/tensor.js'; |
@@ -72,3 +72,3 @@ declare const TokenizerModel_base: new () => { | ||
* @param {string} pretrained_model_name_or_path The path to the pre-trained tokenizer. | ||
* @param {PretrainedOptions} options Additional options for loading the tokenizer. | ||
* @param {import('./utils/hub.js').PretrainedOptions} options Additional options for loading the tokenizer. | ||
* | ||
@@ -78,3 +78,3 @@ * @throws {Error} Throws an error if the tokenizer.json or tokenizer_config.json files are not found in the `pretrained_model_name_or_path`. | ||
*/ | ||
static from_pretrained(pretrained_model_name_or_path: string, { progress_callback, config, cache_dir, local_files_only, revision, }?: PretrainedOptions): Promise<PreTrainedTokenizer>; | ||
static from_pretrained(pretrained_model_name_or_path: string, { progress_callback, config, cache_dir, local_files_only, revision, }?: import('./utils/hub.js').PretrainedOptions): Promise<PreTrainedTokenizer>; | ||
/** | ||
@@ -127,2 +127,3 @@ * Create a new PreTrainedTokenizer instance. | ||
* @param {boolean} [options.padding=false] Whether to pad the input sequences. | ||
* @param {boolean} [options.add_special_tokens=true] Whether or not to add the special tokens associated with the corresponding model. | ||
* @param {boolean} [options.truncation=null] Whether to truncate the input sequences. | ||
@@ -133,5 +134,6 @@ * @param {number} [options.max_length=null] Maximum length of the returned list and optionally padding length. | ||
*/ | ||
_call(text: string | string[], { text_pair, padding, truncation, max_length, return_tensor, }?: { | ||
_call(text: string | string[], { text_pair, add_special_tokens, padding, truncation, max_length, return_tensor, }?: { | ||
text_pair?: string | string[]; | ||
padding?: boolean; | ||
add_special_tokens?: boolean; | ||
truncation?: boolean; | ||
@@ -156,5 +158,9 @@ max_length?: number; | ||
* @param {string|null} text_pair The optional second text to encode. | ||
* @param {Object} options An optional object containing the following properties: | ||
* @param {boolean} [options.add_special_tokens=true] Whether or not to add the special tokens associated with the corresponding model. | ||
* @returns {number[]} An array of token IDs representing the encoded text(s). | ||
*/ | ||
encode(text: string, text_pair?: string | null): number[]; | ||
encode(text: string, text_pair?: string | null, { add_special_tokens, }?: { | ||
add_special_tokens?: boolean; | ||
}): number[]; | ||
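A hedged example of the new `add_special_tokens` flag (the tokenizer id is illustrative; BERT-style tokenizers wrap input in [CLS]/[SEP]):

```javascript
import { AutoTokenizer } from '@xenova/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');

const withSpecial = tokenizer.encode('hello world');
const withoutSpecial = tokenizer.encode('hello world', null, { add_special_tokens: false });
console.log(withSpecial.length - withoutSpecial.length); // 2 — [CLS] and [SEP] omitted
```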
/** | ||
@@ -441,2 +447,6 @@ * Decode a batch of tokenized sequences. | ||
} | ||
export class BlenderbotTokenizer extends PreTrainedTokenizer { | ||
} | ||
export class BlenderbotSmallTokenizer extends PreTrainedTokenizer { | ||
} | ||
/** | ||
@@ -481,2 +491,4 @@ * Helper class which is used to instantiate pretrained tokenizers with the `from_pretrained` function. | ||
Wav2Vec2CTCTokenizer: typeof Wav2Vec2CTCTokenizer; | ||
BlenderbotTokenizer: typeof BlenderbotTokenizer; | ||
BlenderbotSmallTokenizer: typeof BlenderbotSmallTokenizer; | ||
PreTrainedTokenizer: typeof PreTrainedTokenizer; | ||
@@ -495,9 +507,8 @@ }; | ||
* - A path to a *directory* containing tokenizer files, e.g., `./my_model_directory/`. | ||
* @param {PretrainedOptions} options Additional options for loading the tokenizer. | ||
* @param {import('./utils/hub.js').PretrainedOptions} options Additional options for loading the tokenizer. | ||
* | ||
* @returns {Promise<PreTrainedTokenizer>} A new instance of the PreTrainedTokenizer class. | ||
*/ | ||
static from_pretrained(pretrained_model_name_or_path: string, { quantized, progress_callback, config, cache_dir, local_files_only, revision, }?: PretrainedOptions): Promise<PreTrainedTokenizer>; | ||
static from_pretrained(pretrained_model_name_or_path: string, { quantized, progress_callback, config, cache_dir, local_files_only, revision, }?: import('./utils/hub.js').PretrainedOptions): Promise<PreTrainedTokenizer>; | ||
} | ||
export type PretrainedOptions = import('./utils/hub.js').PretrainedOptions; | ||
export type BPENode = { | ||
@@ -504,0 +515,0 @@ /** |
@@ -74,13 +74,13 @@ /** | ||
* @typedef {Object} PretrainedOptions Options for loading a pretrained model. | ||
* @property {boolean?} [options.quantized=true] Whether to load the 8-bit quantized version of the model (only applicable when loading model files). | ||
* @property {function} [options.progress_callback=null] If specified, this function will be called during model construction, to provide the user with progress updates. | ||
* @property {Object} [options.config=null] Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: | ||
* @property {boolean?} [quantized=true] Whether to load the 8-bit quantized version of the model (only applicable when loading model files). | ||
* @property {function} [progress_callback=null] If specified, this function will be called during model construction, to provide the user with progress updates. | ||
* @property {Object} [config=null] Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: | ||
* - The model is a model provided by the library (loaded with the *model id* string of a pretrained model). | ||
* - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named *config.json* is found in the directory. | ||
* @property {string} [options.cache_dir=null] Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. | ||
* @property {boolean} [options.local_files_only=false] Whether or not to only look at local files (e.g., not try downloading the model). | ||
* @property {string} [options.revision='main'] The specific model version to use. It can be a branch name, a tag name, or a commit id, | ||
* @property {string} [cache_dir=null] Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. | ||
* @property {boolean} [local_files_only=false] Whether or not to only look at local files (e.g., not try downloading the model). | ||
* @property {string} [revision='main'] The specific model version to use. It can be a branch name, a tag name, or a commit id, | ||
* since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git. | ||
* NOTE: This setting is ignored for local requests. | ||
* @property {string} [options.model_file_name=null] If specified, load the model with this name (excluding the .onnx suffix). Currently only valid for encoder- or decoder-only models. | ||
* @property {string} [model_file_name=null] If specified, load the model with this name (excluding the .onnx suffix). Currently only valid for encoder- or decoder-only models. | ||
*/ | ||
@@ -87,0 +87,0 @@ declare class FileResponse { |
@@ -48,2 +48,6 @@ export class RawImage { | ||
}; | ||
data: Uint8ClampedArray; | ||
width: number; | ||
height: number; | ||
channels: 2 | 1 | 3 | 4; | ||
/** | ||
@@ -83,9 +87,6 @@ * Convert the image to grayscale format. | ||
* @param {number} height The new height of the image. | ||
* @param {1|2|3|4} channels The new number of channels of the image. | ||
* @param {1|2|3|4|null} [channels] The new number of channels of the image. | ||
* @private | ||
*/ | ||
_update(data: Uint8ClampedArray, width: number, height: number, channels?: 1 | 2 | 3 | 4): RawImage; | ||
data: Uint8ClampedArray; | ||
width: number; | ||
height: number; | ||
channels: 2 | 1 | 3 | 4; | ||
private _update; | ||
/** | ||
@@ -92,0 +93,0 @@ * Clone the image |
@@ -68,5 +68,5 @@ /** | ||
* Create a new Tensor or copy an existing Tensor. | ||
* @param {[string, Array|AnyTypedArray, number[]]|[ONNXTensor]} args | ||
* @param {[string, DataArray, number[]]|[ONNXTensor]} args | ||
*/ | ||
constructor(...args: [string, any[] | AnyTypedArray, number[]] | [any]); | ||
constructor(...args: [string, DataArray, number[]] | [any]); | ||
/** | ||
@@ -217,4 +217,4 @@ * Index into a Tensor object. | ||
export type NestArray<T, Depth extends number, Acc extends never[] = []> = Acc['length'] extends Depth ? T : NestArray<T[], Depth, [...Acc, never]>; | ||
export type AnyTypedArray = import('./maths.js').AnyTypedArray; | ||
export type DataArray = import('./maths.js').AnyTypedArray | any[]; | ||
export {}; | ||
//# sourceMappingURL=tensor.d.ts.map |
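Per the widened `DataArray` typedef, the `Tensor` constructor's typings now admit plain JavaScript arrays alongside typed arrays; a sketch of what type-checks under the new signature (assuming `Tensor` is re-exported from the package root):

```javascript
import { Tensor } from '@xenova/transformers';

// Typed array, as before:
const a = new Tensor('float32', new Float32Array([1, 2, 3, 4]), [2, 2]);

// A plain array now also satisfies the constructor's DataArray parameter:
const b = new Tensor('float32', [1, 2, 3, 4], [2, 2]);
```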
Sorry, the diffs of the remaining files are too big to display or not yet supported.
License Policy Violation
License: This package is not allowed per your license policy. Review the package's license to ensure compliance.
Found 1 instance in 1 package