textembed

TextEmbed provides a robust and scalable REST API for generating vector embeddings from text. Built for performance and flexibility, it supports various sentence-transformer models, allowing users to easily integrate state-of-the-art NLP techniques into their applications. Whether you need embeddings for search, recommendation, or other NLP tasks, TextEmbed delivers with high efficiency.

  • Version: 0.0.8
  • PyPI
  • Maintainers: 1

TextEmbed - Embedding Inference Server

TextEmbed is a high-throughput, low-latency REST API designed for serving vector embeddings. It supports a wide range of sentence-transformer models and frameworks, making it suitable for various applications in natural language processing.

Features

  • High Throughput & Low Latency: Handles a large volume of requests efficiently.
  • Flexible Model Support: Works with a wide range of sentence-transformer models.
  • Scalable: Integrates easily into larger systems and scales with demand.
  • Batch Processing: Batches incoming requests for faster, more efficient inference.
  • OpenAI-Compatible REST API: Exposes an endpoint compatible with the OpenAI embeddings API.
  • Single-Command Deployment: Serve multiple models with a single command.
  • Multiple Embedding Formats: Returns binary, float16, and float32 embeddings for faster retrieval.
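Since the endpoint is advertised as OpenAI-compatible, a client call can be sketched with only the standard library. The route `/v1/embeddings`, the payload shape, and the host, port, and model name below follow the OpenAI embeddings API convention and are assumptions, not guarantees about TextEmbed's exact interface:

```python
import json
import urllib.request


def build_embedding_request(model: str, texts: list[str]) -> dict:
    # Payload shape follows the OpenAI embeddings API that
    # TextEmbed advertises compatibility with.
    return {"model": model, "input": texts}


def embed(texts: list[str],
          model: str = "sentence-transformers/all-MiniLM-L6-v2",
          base_url: str = "http://localhost:8000") -> list[list[float]]:
    # Assumes a TextEmbed server is already running at base_url
    # and serving the named model.
    payload = json.dumps(build_embedding_request(model, texts)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style responses carry vectors under data[i]["embedding"].
    return [item["embedding"] for item in body["data"]]
```

Usage would be `embed(["your text here"])` once the server from the Getting Started section below is running.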

Getting Started

Prerequisites

Ensure you have Python 3.10 or higher installed. You will also need to install the required dependencies.

Installation

  1. Install the required dependencies:

    pip install -U textembed
    
  2. Start the TextEmbed server with your desired models:

    python3 -m textembed.server --models <Model1>,<Model2> --port <Port>
    

    Replace <Model1> and <Model2> with the names of the models you want to serve, separated by commas (no spaces), and <Port> with the port number on which to run the server.
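For example, to serve two models on port 8000 (the model names below are illustrative choices from the sentence-transformers ecosystem, not defaults of TextEmbed):

```shell
# Serve two embedding models from one process on port 8000.
python3 -m textembed.server \
  --models sentence-transformers/all-MiniLM-L6-v2,BAAI/bge-small-en-v1.5 \
  --port 8000
```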

For more information about Docker deployment and configuration, refer to setup.md in the documentation.
