
@xenova/transformers - npm Package Compare versions

Comparing version 2.13.4 to 2.14.0


package.json
  {
    "name": "@xenova/transformers",
-   "version": "2.13.4",
+   "version": "2.14.0",
    "description": "State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!",

@@ -5,0 +5,0 @@ "main": "./src/transformers.js",

@@ -104,3 +104,3 @@

  <script type="module">
-   import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.13.4';
+   import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.14.0';
  </script>
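For context, the import being bumped above is typically used as follows; a minimal sketch (the task and input string are illustrative, not part of the diff):

```js
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.14.0';

// Allocate a pipeline for a task; the model is downloaded and cached on first use.
const classifier = await pipeline('sentiment-analysis');

const output = await classifier('I love transformers!');
// e.g. [{ label: 'POSITIVE', score: 0.9998 }]
```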

@@ -138,3 +138,3 @@ ```

- By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@2.13.4/dist/), which should work out-of-the-box. You can customize this as follows:
+ By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@2.14.0/dist/), which should work out-of-the-box. You can customize this as follows:
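The customization this README line refers to goes through the library's exported `env` object. A sketch of the common knobs (the paths are placeholders):

```js
import { env } from '@xenova/transformers';

// Serve models from your own server instead of the Hugging Face Hub:
env.allowRemoteModels = false;
env.localModelPath = '/path/to/models/';

// Host the ONNX runtime WASM binaries yourself instead of fetching them from jsDelivr:
env.backends.onnx.wasm.wasmPaths = '/path/to/wasm/';
```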

@@ -333,2 +333,3 @@

1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
1. **[Segment Anything](https://huggingface.co/docs/transformers/model_doc/sam)** (from Meta AI) released with the paper [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf) by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.
1. **[SigLIP](https://huggingface.co/docs/transformers/main/model_doc/siglip)** (from Google AI) released with the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer.

@@ -335,0 +336,0 @@ 1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.

@@ -32,3 +32,3 @@ /**

- const VERSION = '2.13.4';
+ const VERSION = '2.14.0';

@@ -35,0 +35,0 @@ // Check if various APIs are available (depends on environment)

@@ -112,4 +112,4 @@

  *
- * @param {Array} arr The nested array to calculate dimensions for.
- * @returns {Array} An array containing the dimensions of the input array.
+ * @param {any[]} arr The nested array to calculate dimensions for.
+ * @returns {number[]} An array containing the dimensions of the input array.
  */

@@ -116,0 +116,0 @@ export function calculateDimensions(arr) {
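The retyped JSDoc pins down the helper's contract: walk the first element at each nesting level and collect the lengths. A minimal sketch of that behavior (not necessarily the library's exact implementation):

```js
// calculateDimensions([[1, 2], [3, 4], [5, 6]]) → [3, 2]
function calculateDimensions(arr) {
  const dimensions = [];
  let current = arr;
  while (Array.isArray(current)) {
    dimensions.push(current.length);
    current = current[0];
  }
  return dimensions;
}
```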

@@ -401,3 +401,3 @@

      .replaceAll('{model}', path_or_repo_id)
-     .replaceAll('{revision}', revision),
+     .replaceAll('{revision}', encodeURIComponent(revision)),
filename

@@ -404,0 +404,0 @@ );
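The `encodeURIComponent` change matters for revisions that contain slashes, such as Hub PR refs. An illustrative sketch (the URL template is abridged from the surrounding code, and the repo id is hypothetical):

```js
const template = 'https://huggingface.co/{model}/resolve/{revision}/{file}';

// A revision like 'refs/pr/1' contains '/', which would otherwise be read as
// extra path segments. Percent-encoding keeps it a single segment:
const url = template
  .replaceAll('{model}', 'Xenova/some-model')                // hypothetical repo id
  .replaceAll('{revision}', encodeURIComponent('refs/pr/1')) // → 'refs%2Fpr%2F1'
  .replaceAll('{file}', 'config.json');
```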

@@ -93,2 +93,6 @@

+   /**
+    * Returns the size of the image (width, height).
+    * @returns {[number, number]} The size of the image (width, height).
+    */
    get size() {

@@ -169,2 +173,6 @@ return [this.width, this.height];

    static fromTensor(tensor, channel_format = 'CHW') {
+       if (tensor.dims.length !== 3) {
+           throw new Error(`Tensor should have 3 dimensions, but has ${tensor.dims.length} dimensions.`);
+       }
        if (channel_format === 'CHW') {

@@ -171,0 +179,0 @@ tensor = tensor.transpose(1, 2, 0);
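Taken together, the image.js changes add a typed `size` tuple and a guarded `fromTensor` conversion. A usage sketch (the toy tensor and its dtype are assumptions standing in for a real model output):

```js
import { RawImage, Tensor } from '@xenova/transformers';

// A toy 3×2×2 tensor in CHW layout; fromTensor now throws for anything not 3-dimensional.
const tensor = new Tensor('uint8', new Uint8Array(3 * 2 * 2).fill(128), [3, 2, 2]);
const image = RawImage.fromTensor(tensor, 'CHW');

// The getter is now typed as a [width, height] tuple, so it destructures cleanly:
const [width, height] = image.size;
```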

@@ -22,3 +22,3 @@ export namespace env {

  declare const __dirname: any;
- declare const VERSION: "2.13.4";
+ declare const VERSION: "2.14.0";
declare const localModelPath: any;

@@ -25,0 +25,0 @@ declare const FS_AVAILABLE: boolean;

@@ -115,2 +115,16 @@ declare const FeatureExtractor_base: new () => {

+   /**
+    * Find the target (width, height) dimension of the output image after
+    * resizing given the input image and the desired size.
+    * @param {RawImage} image The image to resize.
+    * @param {any} size The size to use for resizing the image.
+    * @returns {[number, number]} The target (width, height) dimension of the output image after resizing.
+    */
+   get_resize_output_image_size(image: RawImage, size: any): [number, number];
+   /**
+    * Resizes the image.
+    * @param {RawImage} image The image to resize.
+    * @returns {Promise<RawImage>} The resized image.
+    */
+   resize(image: RawImage): Promise<RawImage>;
    /**
     * @typedef {object} PreprocessedImage
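These declarations expose resizing helpers on the feature extractor. A hypothetical call sequence (the checkpoint id and image URL are illustrative, and `extractor.size` is assumed to come from the processor's config):

```js
import { AutoProcessor, RawImage } from '@xenova/transformers';

const processor = await AutoProcessor.from_pretrained('Xenova/vit-base-patch16-224');
const image = await RawImage.read('https://example.com/cat.jpg');

const extractor = processor.feature_extractor;

// Ask what dimensions a resize would produce, without performing it:
const [targetWidth, targetHeight] =
    extractor.get_resize_output_image_size(image, extractor.size);

// Or perform the resize itself:
const resized = await extractor.resize(image);
```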

@@ -178,2 +192,8 @@ * @property {HeightWidth} original_size The original size of the image.

  export class ConvNextFeatureExtractor extends ImageFeatureExtractor {
+     constructor(config: any);
+     /**
+      * Percentage of the image to crop. Only has an effect if this.size < 384.
+      */
+     crop_pct: any;
+     resize(image: any): Promise<any>;
  }

@@ -324,13 +344,32 @@ export class ConvNextImageProcessor extends ConvNextFeatureExtractor {

   * @property {HeightWidth[]} reshaped_input_sizes
-  * @property {Tensor} input_points
+  * @property {Tensor} [input_points]
+  * @property {Tensor} [input_labels]
*/
  export class SamImageProcessor extends ImageFeatureExtractor {
      /**
-      * @param {RawImage[]} images The image(s) to extract features from.
-      * @param {*} input_points A 3D or 4D array, representing the input points provided by the user.
+      *
+      * @param {any} input_points
+      * @param {HeightWidth[]} original_sizes
+      * @param {HeightWidth[]} reshaped_input_sizes
+      * @returns {Tensor}
+      */
+     reshape_input_points(input_points: any, original_sizes: HeightWidth[], reshaped_input_sizes: HeightWidth[]): Tensor;
+     /**
+      *
+      * @param {any} input_labels
+      * @param {Tensor} input_points
+      * @returns {Tensor}
+      */
+     add_input_labels(input_labels: any, input_points: Tensor): Tensor;
+     /**
+      * @param {any[]} images The URL(s) of the image(s) to extract features from.
+      * @param {any} [input_points] A 3D or 4D array, representing the input points provided by the user.
+      * - 3D: `[point_batch_size, nb_points_per_image, 2]`. In this case, `batch_size` is assumed to be 1.
+      * - 4D: `[batch_size, point_batch_size, nb_points_per_image, 2]`.
+      * @param {any} [input_labels] A 2D or 3D array, representing the input labels for the points, used by the prompt encoder to encode the prompt.
+      * - 2D: `[point_batch_size, nb_points_per_image]`. In this case, `batch_size` is assumed to be 1.
+      * - 3D: `[batch_size, point_batch_size, nb_points_per_image]`.
+      * @returns {Promise<SamImageProcessorResult>}
+      */
-     _call(images: RawImage[], input_points: any): Promise<SamImageProcessorResult>;
+     _call(images: any[], input_points?: any, input_labels?: any): Promise<SamImageProcessorResult>;
/**

@@ -520,7 +559,5 @@ * Remove padding and upscale masks to the original image size.

      /**
-      * @param {*} images
-      * @param {*} input_points
-      * @returns {Promise<any>}
+      * @borrows SamImageProcessor#_call as _call
       */
-     _call(images: any, input_points: any): Promise<any>;
+     _call(...args: any[]): Promise<any>;
/**

@@ -530,2 +567,6 @@ * @borrows SamImageProcessor#post_process_masks as post_process_masks

      post_process_masks(...args: any[]): any;
+     /**
+      * @borrows SamImageProcessor#reshape_input_points as reshape_input_points
+      */
+     reshape_input_points(...args: any[]): any;
  }
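Put together, the new optional `input_points`/`input_labels` parameters support the usual SAM prompting flow. A sketch assuming a SAM checkpoint exported for Transformers.js (the repo id, image URL, and point coordinates are illustrative):

```js
import { SamModel, AutoProcessor, RawImage } from '@xenova/transformers';

const model = await SamModel.from_pretrained('Xenova/sam-vit-base');      // hypothetical checkpoint
const processor = await AutoProcessor.from_pretrained('Xenova/sam-vit-base');

const image = await RawImage.read('https://example.com/photo.jpg');

// 3D input_points: [point_batch_size, nb_points_per_image, 2] (batch_size assumed to be 1)
const input_points = [[[450, 600]]];
// 2D input_labels: [point_batch_size, nb_points_per_image]; 1 marks a foreground point
const input_labels = [[1]];

const inputs = await processor(image, input_points, input_labels);
const outputs = await model(inputs);

// Remove padding and upscale the predicted masks to the original image size:
const masks = await processor.post_process_masks(
    outputs.pred_masks, inputs.original_sizes, inputs.reshaped_input_sizes,
);
```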

@@ -673,3 +714,4 @@ /**

      reshaped_input_sizes: HeightWidth[];
-     input_points: Tensor;
+     input_points?: Tensor;
+     input_labels?: Tensor;
};

@@ -676,0 +718,0 @@ import { RawImage } from './utils/image.js';

@@ -56,6 +56,6 @@ /**

   *
-  * @param {Array} arr The nested array to calculate dimensions for.
-  * @returns {Array} An array containing the dimensions of the input array.
+  * @param {any[]} arr The nested array to calculate dimensions for.
+  * @returns {number[]} An array containing the dimensions of the input array.
   */
- export function calculateDimensions(arr: any[]): any[];
+ export function calculateDimensions(arr: any[]): number[];
/**

@@ -62,0 +62,0 @@ * Replicate python's .pop() method for objects.

@@ -48,4 +48,8 @@ export class RawImage {

      channels: 2 | 1 | 3 | 4;
-     get size(): number[];
+     /**
+      * Returns the size of the image (width, height).
+      * @returns {[number, number]} The size of the image (width, height).
+      */
+     get size(): [number, number];
/**
* Convert the image to grayscale format.

@@ -52,0 +56,0 @@ * @returns {RawImage} `this` to support chaining.

Sorry, the diffs of 5 files are too big to display

Sorry, the diffs of 6 files are not supported yet
