com.johnsnowlabs.nlp:spark-nlp-gpu_2.11

spark-nlp-gpu 2.7.5 (Maven), package last updated on 05 Mar 2021

Spark NLP: State-of-the-Art Natural Language Processing & LLMs Library

Spark NLP is a state-of-the-art Natural Language Processing library built on top of Apache Spark. It provides simple, performant & accurate NLP annotations for machine learning pipelines that scale easily in a distributed environment.

Spark NLP comes with 83,000+ pretrained pipelines and models in more than 200 languages. It offers tasks such as Tokenization, Word Segmentation, Part-of-Speech Tagging, Word and Sentence Embeddings, Named Entity Recognition, Dependency Parsing, Spell Checking, Text Classification, Sentiment Analysis, Token Classification, Machine Translation (+180 languages), Summarization, Question Answering, Table Question Answering, Text Generation, Image Classification, Image to Text (captioning), Automatic Speech Recognition, Zero-Shot Learning, and many more NLP tasks.

Spark NLP is the only open-source NLP library in production that offers state-of-the-art transformers such as BERT, CamemBERT, ALBERT, ELECTRA, XLNet, DistilBERT, RoBERTa, DeBERTa, XLM-RoBERTa, Longformer, ELMo, Universal Sentence Encoder, Llama-2, M2M100, BART, Instructor, E5, Google T5, MarianMT, OpenAI GPT2, Vision Transformers (ViT), OpenAI Whisper, Llama, Mistral, Phi, Qwen2, and many more, not only to Python and R, but also to the JVM ecosystem (Java, Scala, and Kotlin) at scale by extending Apache Spark natively.

Model Importing Support

Spark NLP provides easy support for importing models from various popular frameworks:

  • TensorFlow
  • ONNX
  • OpenVINO
  • Llama.cpp (GGUF)

This wide range of support allows you to seamlessly integrate models from different sources into your Spark NLP workflows, enhancing flexibility and compatibility with existing machine learning ecosystems.
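As a rough illustration of what importing an external model can look like, here is a minimal sketch (not taken from this README): the export directory is hypothetical, and loadSavedModel is used as the generic import entry point for models exported from frameworks such as TensorFlow or ONNX; see the import documentation for the exact per-framework steps.

import sparknlp
from sparknlp.annotator import BertEmbeddings

spark = sparknlp.start()

# Load a BERT model that was previously exported to disk (e.g. from Hugging Face)
bert = BertEmbeddings.loadSavedModel("/tmp/exported_bert_base_cased", spark) \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings")

# Save it once in Spark NLP format so it can later be reloaded with .load()
bert.write().overwrite().save("/tmp/bert_base_cased_spark_nlp")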

Project's website

Take a look at our official Spark NLP page, https://sparknlp.org/, for user documentation and examples.

Features

Quick Start

This is a quick example of how to use Spark NLP pre-trained pipeline in Python and PySpark:

$ java -version
# should be Java 8 or 11 (Oracle or OpenJDK)
$ conda create -n sparknlp python=3.7 -y
$ conda activate sparknlp
# spark-nlp by default is based on pyspark 3.x
$ pip install spark-nlp==5.5.1 pyspark==3.3.1

In Python console or Jupyter Python3 kernel:

# Import Spark NLP
from sparknlp.base import *
from sparknlp.annotator import *
from sparknlp.pretrained import PretrainedPipeline
import sparknlp

# Start SparkSession with Spark NLP
# The start() function has 3 parameters: gpu, apple_silicon, and memory
# sparknlp.start(gpu=True) will start the session with GPU support
# sparknlp.start(apple_silicon=True) will start the session with macOS M1 & M2 support
# sparknlp.start(memory="16G") to change the default driver memory in SparkSession
spark = sparknlp.start()

# Download a pre-trained pipeline
pipeline = PretrainedPipeline('explain_document_dl', lang='en')

# Your testing dataset
text = """
The Mona Lisa is a 16th century oil painting created by Leonardo.
It's held at the Louvre in Paris.
"""

# Annotate your testing dataset
result = pipeline.annotate(text)

# What's in the pipeline
list(result.keys())
# Output: ['entities', 'stem', 'checked', 'lemma', 'document',
#          'pos', 'token', 'ner', 'embeddings', 'sentence']

# Check the results
result['entities']
# Output: ['Mona Lisa', 'Leonardo', 'Louvre', 'Paris']

For more examples, you can visit our dedicated examples repository, which showcases all Spark NLP use cases!

Packages Cheatsheet

This cheatsheet maps each Spark NLP Maven package to the corresponding Apache Spark / PySpark major version:

Apache Spark            | Spark NLP on CPU | Spark NLP on GPU         | Spark NLP on AArch64 (Linux) | Spark NLP on Apple Silicon
3.0/3.1/3.2/3.3/3.4/3.5 | spark-nlp        | spark-nlp-gpu            | spark-nlp-aarch64            | spark-nlp-silicon
Start Function          | sparknlp.start() | sparknlp.start(gpu=True) | sparknlp.start(aarch64=True) | sparknlp.start(apple_silicon=True)

NOTE: M1/M2 and AArch64 are under experimental support. Access to these architectures is limited in the community, and we had to build most of the dependencies ourselves to make them compatible. We support these two architectures; however, they may not work in some environments.
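To tie the cheatsheet above to code, here is a minimal sketch of starting a GPU-backed session; sparknlp.start(gpu=True) resolves the spark-nlp-gpu Maven package for you, and the version-printing lines are only for illustration.

import sparknlp

# Start a SparkSession backed by the spark-nlp-gpu package
spark = sparknlp.start(gpu=True)

print("Spark NLP version:", sparknlp.version())
print("Apache Spark version:", spark.version)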

Pipelines and Models

For a quick example of using pipelines and models, take a look at our official documentation.

Please check out our Models Hub for the full list of pre-trained models with examples, demos, benchmarks, and more. A short sketch of composing individual pretrained models into a custom pipeline follows below.
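This is a minimal sketch, not an official snippet: the glove_100d and ner_dl model names are illustrative pretrained models from the Models Hub, and the column names are the conventional ones used throughout Spark NLP examples.

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, WordEmbeddingsModel, NerDLModel
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Build a custom pipeline from individual pretrained models
document = DocumentAssembler().setInputCol("text").setOutputCol("document")
tokenizer = Tokenizer().setInputCols(["document"]).setOutputCol("token")
embeddings = WordEmbeddingsModel.pretrained("glove_100d", "en") \
    .setInputCols(["document", "token"]).setOutputCol("embeddings")
ner = NerDLModel.pretrained("ner_dl", "en") \
    .setInputCols(["document", "token", "embeddings"]).setOutputCol("ner")

pipeline = Pipeline(stages=[document, tokenizer, embeddings, ner])

data = spark.createDataFrame([["The Louvre is located in Paris."]]).toDF("text")
result = pipeline.fit(data).transform(data)
result.select("ner.result").show(truncate=False)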

Platform and Ecosystem Support

Apache Spark Support

Spark NLP 5.5.1 has been built on top of Apache Spark 3.4 and fully supports Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, 3.4.x, and 3.5.x.

Spark NLP | Spark 3.5.x | Spark 3.4.x | Spark 3.3.x | Spark 3.2.x | Spark 3.1.x | Spark 3.0.x | Spark 2.4.x | Spark 2.3.x
5.5.x     | YES         | YES         | YES         | YES         | YES         | YES         | NO          | NO
5.4.x     | YES         | YES         | YES         | YES         | YES         | YES         | NO          | NO
5.3.x     | YES         | YES         | YES         | YES         | YES         | YES         | NO          | NO
5.2.x     | YES         | YES         | YES         | YES         | YES         | YES         | NO          | NO
5.1.x     | Partially   | YES         | YES         | YES         | YES         | YES         | NO          | NO
5.0.x     | YES         | YES         | YES         | YES         | YES         | YES         | NO          | NO

Find out more about Spark NLP versions from our release notes.

Scala and Python Support

Spark NLP | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 | Scala 2.11 | Scala 2.12
5.5.x     | NO         | YES        | YES        | YES        | YES         | NO         | YES
5.4.x     | NO         | YES        | YES        | YES        | YES         | NO         | YES
5.3.x     | NO         | YES        | YES        | YES        | YES         | NO         | YES
5.2.x     | NO         | YES        | YES        | YES        | YES         | NO         | YES
5.1.x     | NO         | YES        | YES        | YES        | YES         | NO         | YES
5.0.x     | NO         | YES        | YES        | YES        | YES         | NO         | YES

Find out more about 4.x Spark NLP versions in our official documentation.

Databricks Support

Spark NLP 5.5.1 has been tested and is compatible with the following runtimes:

CPU            | GPU
14.1 / 14.1 ML | 14.1 ML & GPU
14.2 / 14.2 ML | 14.2 ML & GPU
14.3 / 14.3 ML | 14.3 ML & GPU
15.0 / 15.0 ML | 15.0 ML & GPU
15.1 / 15.1 ML | 15.1 ML & GPU
15.2 / 15.2 ML | 15.2 ML & GPU
15.3 / 15.3 ML | 15.3 ML & GPU
15.4 / 15.4 ML | 15.4 ML & GPU

We are compatible with older runtimes. For a full list, check Databricks support in our official documentation.

EMR Support

Spark NLP 5.5.1 has been tested and is compatible with the following EMR releases:

EMR Release
emr-6.13.0
emr-6.14.0
emr-6.15.0
emr-7.0.0
emr-7.1.0
emr-7.2.0

We are compatible with older EMR releases. For a full list, check EMR support in our official documentation.

  • Full list of Amazon EMR 6.x releases
  • Full list of Amazon EMR 7.x releases

NOTE: EMR 6.1.0 and 6.1.1 are not supported.

Installation

Command line (requires internet connection)

To install spark-nlp packages through the command line, follow these instructions from our official documentation.

Scala

Spark NLP supports Scala 2.12.15 if you are using Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x versions. Our packages are deployed to Maven Central. To add any of our packages as a dependency in your application, you can follow these instructions from our official documentation.

If you are interested, there is a simple SBT project for Spark NLP to guide you on how to use it in your projects: the Spark NLP SBT Starter.

Python

Spark NLP supports Python 3.7.x and above depending on your major PySpark version. Check all available installations for Python in our official documentation

Compiled JARs

To compile the jars from source, follow these instructions from our official documentation.

Platform-Specific Instructions

For detailed instructions on how to use Spark NLP on supported platforms, please refer to our official documentation:

Platform              | Supported Language(s)
Apache Zeppelin       | Scala, Python
Jupyter Notebook      | Python
Google Colab Notebook | Python
Kaggle Kernel         | Python
Databricks Cluster    | Scala, Python
EMR Cluster           | Scala, Python
GCP Dataproc Cluster  | Scala, Python

Offline

The Spark NLP library and all pre-trained models/pipelines can be used entirely offline, with no access to the Internet. Please check these instructions from our official documentation to use Spark NLP offline.
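As a minimal sketch of the offline workflow (the local paths are hypothetical and assume the pipeline and model directories were downloaded ahead of time), previously downloaded artifacts can be loaded from disk instead of being fetched over the network:

import sparknlp
from sparknlp.annotator import BertEmbeddings
from pyspark.ml import PipelineModel

spark = sparknlp.start()

# Load a pre-downloaded pretrained pipeline directory with Spark MLlib's loader
pipeline = PipelineModel.load("/models/explain_document_dl_en")

# Or load a single pre-downloaded annotator model from a local folder
embeddings = BertEmbeddings.load("/models/bert_embeddings_en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings")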

Advanced Settings

You can change Spark NLP configurations via Spark properties configuration. Please check these instructions from our official documentation.
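For illustration only, here is a minimal sketch of passing such properties when building the SparkSession manually; the cache folder path, memory values, and package version are example values, and spark.jsl.settings.pretrained.cache_folder is one of the properties listed in the configuration documentation.

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("Spark NLP") \
    .master("local[*]") \
    .config("spark.driver.memory", "16G") \
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer") \
    .config("spark.kryoserializer.buffer.max", "2000M") \
    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:5.5.1") \
    .config("spark.jsl.settings.pretrained.cache_folder", "/opt/sparknlp_cache") \
    .getOrCreate()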

S3 Integration

In Spark NLP we can define S3 locations to:

  • Export log files of training models
  • Store TensorFlow graphs used in NerDLApproach

Please check these instructions from our official documentation.
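As a rough sketch (the bucket name and paths are hypothetical, and the S3 credentials and related Spark properties still need to be configured as described in the linked instructions), training logs from NerDLApproach can be pointed at an S3 location:

from sparknlp.annotator import NerDLApproach

# Train an NER model and ship its training logs to S3
ner_approach = NerDLApproach() \
    .setInputCols(["document", "token", "embeddings"]) \
    .setLabelColumn("label") \
    .setOutputCol("ner") \
    .setMaxEpochs(5) \
    .setEnableOutputLogs(True) \
    .setOutputLogsPath("s3://my-bucket/sparknlp/ner_logs")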


Examples

Need more examples? Check out our dedicated Spark NLP Examples repository, which showcases all Spark NLP use cases!

Also, don't forget to check Spark NLP in Action built by Streamlit.

All examples: spark-nlp/examples

FAQ

Check our Articles and Videos page here

Citation

We have published a paper that you can cite for the Spark NLP library:

@article{KOCAMAN2021100058,
    title = {Spark NLP: Natural language understanding at scale},
    journal = {Software Impacts},
    pages = {100058},
    year = {2021},
    issn = {2665-9638},
    doi = {https://doi.org/10.1016/j.simpa.2021.100058},
    url = {https://www.sciencedirect.com/science/article/pii/S2665963821000063},
    author = {Veysel Kocaman and David Talby},
    keywords = {Spark, Natural language processing, Deep learning, Tensorflow, Cluster},
    abstract = {Spark NLP is a Natural Language Processing (NLP) library built on top of Apache Spark ML. It provides simple, performant & accurate NLP annotations for machine learning pipelines that can scale easily in a distributed environment. Spark NLP comes with 1100+ pretrained pipelines and models in more than 192+ languages. It supports nearly all the NLP tasks and modules that can be used seamlessly in a cluster. Downloaded more than 2.7 million times and experiencing 9x growth since January 2020, Spark NLP is used by 54% of healthcare organizations as the world’s most widely used NLP library in the enterprise.}
}

Community support

  • Slack: live discussion with the Spark NLP community and the team
  • GitHub: bug reports, feature requests, and contributions
  • Discussions: engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium: Spark NLP articles
  • YouTube: Spark NLP video tutorials

Contributing

We appreciate any sort of contributions:

  • ideas
  • feedback
  • documentation
  • bug reports
  • NLP training and testing corpora
  • development and testing

Clone the repo and submit your pull requests, or directly create issues in this repo.

John Snow Labs

http://johnsnowlabs.com
