
@huggingface/tasks - npm package version comparison

Comparing version 0.0.8 to 0.0.9


package.json
 {
 "name": "@huggingface/tasks",
 "packageManager": "pnpm@8.10.5",
-"version": "0.0.8",
+"version": "0.0.9",
 "description": "List of ML tasks for huggingface.co/tasks",

@@ -6,0 +6,0 @@ "repository": "https://github.com/huggingface/huggingface.js.git",

@@ -56,3 +56,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.audioClassification({

@@ -59,0 +59,0 @@ data: await (await fetch("sample.flac")).blob(),

@@ -38,3 +38,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.audioToAudio({

@@ -41,0 +41,0 @@ data: await (await fetch("sample.flac")).blob(),

@@ -57,3 +57,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.automaticSpeechRecognition({

@@ -60,0 +60,0 @@ data: await (await fetch("sample.flac")).blob(),

@@ -37,3 +37,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.conversational({

@@ -40,0 +40,0 @@ model: "facebook/blenderbot-400M-distill",

@@ -32,3 +32,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.imageClassification({

@@ -35,0 +35,0 @@ data: await (await fetch("https://picsum.photos/300/300")).blob(),

@@ -48,3 +48,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.imageSegmentation({

@@ -51,0 +51,0 @@ data: await (await fetch("https://picsum.photos/300/300")).blob(),

@@ -46,3 +46,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.imageToImage({

@@ -49,0 +49,0 @@ data: await (await fetch("image")).blob(),

@@ -51,3 +51,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.imageToText({

@@ -54,0 +54,0 @@ data: await (await fetch("https://picsum.photos/300/300")).blob(),

@@ -28,3 +28,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 const inputs =

@@ -31,0 +31,0 @@ "Paris is the capital and most populous city of France, with an estimated population of 2,175,601 residents as of 2018, in an area of more than 105 square kilometres (41 square miles). The City of Paris is the centre and seat of government of the region and province of Île-de-France, or Paris Region, which has an estimated population of 12,174,880, or about 18 percent of the population of France as of 2017.";

@@ -120,3 +120,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.conversational({

@@ -123,0 +123,0 @@ model: "distilbert-base-uncased-finetuned-sst-2-english",

@@ -75,3 +75,3 @@ This task covers guides on both [text-generation](https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads) and [text-to-text generation](https://huggingface.co/models?pipeline_tag=text2text-generation&sort=downloads) models. Popular large language models that are used for chats or following instructions are also covered in this task. You can find the list of selected open-source large language models [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), ranked by their performance scores.

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.conversational({

@@ -78,0 +78,0 @@ model: "distilbert-base-uncased-finetuned-sst-2-english",

@@ -44,3 +44,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.textToImage({

@@ -47,0 +47,0 @@ model: "stabilityai/stable-diffusion-2",

@@ -50,3 +50,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.textToSpeech({

@@ -53,0 +53,0 @@ model: "facebook/mms-tts",

@@ -40,3 +40,3 @@ ## Use Cases

-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.translation({

@@ -43,0 +43,0 @@ model: "t5-base",
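Every documentation hunk above makes the same edit: the placeholder passed to the HfInference constructor is renamed from HF_ACCESS_TOKEN to HF_TOKEN. A minimal sketch of the updated pattern in context, assuming the token is read from an HF_TOKEN environment variable and using an illustrative model id that is not part of this changeset:

import { HfInference } from "@huggingface/inference";

// Assumption: the Hugging Face access token is supplied via the HF_TOKEN
// environment variable (not specified by this diff).
const HF_TOKEN = process.env.HF_TOKEN;

const inference = new HfInference(HF_TOKEN);

// Mirrors the audio classification snippet in the diff. The model id is
// illustrative only, and "sample.flac" is assumed to resolve to an audio file,
// as in the original snippet.
const output = await inference.audioClassification({
  data: await (await fetch("sample.flac")).blob(),
  model: "superb/hubert-large-superb-er",
});

console.log(output);

Since @huggingface/tasks ships task descriptions for huggingface.co/tasks rather than runtime code, the rename only affects the example snippets shown in those docs.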

