Optimaxer Web Core

This package contains the core functionality for the Optimaxer Web Library. It is a collection of utilities and components used across all Optimaxer web libraries.

This library provides building blocks for running AI models directly in the browser. It is built on top of Transformers.js and WebLLM, and exposes simple components for building AI-powered web applications.

All classes, methods, and other exports are documented inline in the code itself.

Installation

npm install @optimaxer/web-core

Available Components

  • Embeddings: Create embeddings from text using pre-trained models directly in the browser.

  • VectorStores: Store and query vector embeddings in the browser.

  • LLM Inferences: Run Large Language Model (LLM) inferences directly in the browser.

Embeddings

Currently, the library supports the following pre-trained models:

  • Xenova/gte-small: A pre-trained model that converts text into high-dimensional vectors that can be used for text classification, semantic similarity, clustering, and other natural language processing (NLP) tasks.
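
As a minimal sketch of the underlying mechanism (this package's own wrapper API is not documented in this README), the following shows how Transformers.js, which this library builds on, produces gte-small embeddings in the browser. The pipeline call and options below are Transformers.js APIs, not necessarily the names this package exposes.

import { pipeline } from '@xenova/transformers';

// Load the feature-extraction pipeline with the gte-small model.
// The model weights are downloaded and cached by the browser on first use.
const extractor = await pipeline('feature-extraction', 'Xenova/gte-small');

// Mean-pool and normalize to get one 384-dimensional vector per input text.
const output = await extractor('What is browser-side inference?', {
  pooling: 'mean',
  normalize: true,
});

console.log(output.dims); // [1, 384]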

VectorStores

The library provides a simple interface to store and query vector embeddings in the browser. This is useful for building search engines, recommendation systems, and other applications that require similarity search.

Currently, the library supports the following VectorStores:

  • ClientVectorStore: A simple IndexedDB-backed vector store that keeps vectors in an array and performs similarity search entirely in the browser.
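
The exact ClientVectorStore API is not shown in this README, so as an illustration of what an in-browser similarity search amounts to, here is a brute-force cosine-similarity query over an in-memory array. The type and function names are hypothetical, not this package's API; a real store like ClientVectorStore would additionally persist the vectors in IndexedDB so they survive page reloads.

// Hypothetical in-memory store entry; not part of this package's API.
type Entry = { id: string; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k entries most similar to the query vector.
function topK(entries: Entry[], query: number[], k: number): Entry[] {
  return [...entries]
    .sort((a, b) => cosineSimilarity(b.vector, query) - cosineSimilarity(a.vector, query))
    .slice(0, k);
}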

LLM Inferences

The library provides a simple interface to run Large Language Model (LLM) inferences directly in the browser. This is useful for building chatbots, question-answering systems, and other applications that require natural language understanding.

Currently, the library supports the following libraries for running inferences in the browser; each can be used to run inferences on specific models. Both are built on top of ONNX Runtime Web, which runs on WebAssembly.

  • WebLLM
  • Transformers.js

The reason for using two libraries is that they support different sets of models.
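
As a minimal sketch of what in-browser LLM inference looks like with one of these backends, the following uses WebLLM's OpenAI-style chat API directly. The model ID is one example from WebLLM's prebuilt model list, and these are WebLLM's own APIs rather than this package's wrapper.

import { CreateMLCEngine } from '@mlc-ai/web-llm';

// Download and compile the model in the browser on first use; the ID must
// match one of WebLLM's prebuilt models.
const engine = await CreateMLCEngine('Llama-3.1-8B-Instruct-q4f32_1-MLC');

// OpenAI-style chat completion, run entirely client-side.
const reply = await engine.chat.completions.create({
  messages: [{ role: 'user', content: 'Summarize what a vector store does.' }],
});

console.log(reply.choices[0].message.content);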
