@xenova/transformers
Advanced tools
Run 🤗 Transformers in your browser! We currently support BERT, DistilBERT, T5, GPT2, and BART models, for a variety of tasks including: masked language modelling, text classification, translation, summarization, question answering, and text generation.
It's super easy to translate from existing code!
Python (original):
from transformers import pipeline
# Allocate a pipeline for sentiment-analysis
classifier = pipeline('sentiment-analysis')
output = classifier('I love transformers!')
# [{'label': 'POSITIVE', 'score': 0.9998069405555725}]
JavaScript (ours):
import { pipeline } from "transformers.js";
// Allocate a pipeline for sentiment-analysis
let classifier = await pipeline('sentiment-analysis');
let output = await classifier('I love transformers!');
// [{label: 'POSITIVE', score: 0.9998176857266375}]
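One difference worth noting: unlike the Python API, every call in the JavaScript version is asynchronous and returns a Promise, so both pipeline creation and inference must be awaited. The calling convention can be sketched with a stand-in `pipeline()` (a hypothetical mock used here only for illustration, since the real one downloads and runs an ONNX model):

```javascript
// Stand-in for pipeline() that mimics the shape of the real API, so the
// async calling convention can be shown without downloading a model.
// The real library's pipeline is created and called the same way.
async function pipeline(task) {
  // In the real library, this step fetches and initializes an ONNX model.
  return async (text) => [{ label: 'POSITIVE', score: 0.99 }];
}

(async () => {
  // Both steps return Promises, so both are awaited.
  const classifier = await pipeline('sentiment-analysis');
  const output = await classifier('I love transformers!');
  console.log(output[0].label); // 'POSITIVE'
})();
```

Because the model is initialized lazily, the first `await pipeline(...)` call is the expensive step; the returned classifier can then be reused for many inputs.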
Check out our demo at https://xenova.github.io/transformers.js/. As you'll see, everything runs inside the browser!
We use ONNX Runtime to run the models in the browser, so you must first convert your PyTorch model to ONNX (which can be done using our conversion script). In general, the command will look something like this:
python ./scripts/convert.py --model_id <hf_model_id> --from_hub --quantize --task <task>
For example, to use bert-base-uncased for masked language modelling, you can use the command:
python ./scripts/convert.py --model_id bert-base-uncased --from_hub --quantize --task masked-lm
If you want to use a local model, remove the --from_hub flag from above and place your PyTorch model in the ./models/pytorch/ folder. You can also choose a different location by specifying the parent input folder with --input_parent_dir /path/to/parent_dir/ (note: without the model id).
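Putting those options together, a local conversion might look like the following sketch (my-model is a placeholder for your model's folder name, not a real model id):

```shell
# Local model: no --from_hub; the model id is the folder name under ./models/pytorch/
python ./scripts/convert.py --model_id my-model --quantize --task masked-lm

# Same, but with the model folder kept under a custom parent directory
python ./scripts/convert.py --model_id my-model --input_parent_dir /path/to/parent_dir/ --quantize --task masked-lm
```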
Alternatively, you can find some of the models we have already converted here. For example, to use bert-base-uncased for masked language modelling, you can use the model found at https://huggingface.co/Xenova/transformers.js/tree/main/quantized/bert-base-uncased/masked-lm.
Note: We recommend quantizing the model (--quantize) to reduce model size and improve inference speed (at the expense of a slight decrease in accuracy). For more information, run the help command: python ./scripts/convert.py -h.
Coming soon...
Coming soon... In the meantime, check out the source code for the demo here.
Inspired by https://github.com/praeclarum/transformers-js
FAQs
State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
The npm package @xenova/transformers receives a total of 50,737 weekly downloads. As such, its popularity was classified as popular.
We found that @xenova/transformers demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.