language-model-plugin

Enable on-device inferencing using MediaPipe LLM.

Install

npm install language-model-plugin
npx cap sync

API

  • generate(...)
  • generateStreaming(...)

generate(...)

generate(options: { value: string; }) => Promise<{ value: string; }>
Param      Type
options    { value: string; }

Returns: Promise<{ value: string; }>

generateStreaming(...)

generateStreaming(options: { value: string; }) => Promise<{ value: string; }>
Param      Type
options    { value: string; }

Returns: Promise<{ value: string; }>
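
For reference, here is a minimal sketch of the plugin's TypeScript surface as implied by the two signatures above; the event names are taken from the streaming example below, and the shape is illustrative rather than the package's published type definitions:

import type { PluginListenerHandle } from '@capacitor/core';

export interface InferencePlugin {
  // Run a prompt to completion and resolve with the full response.
  generate(options: { value: string }): Promise<{ value: string }>;
  // Start a streaming generation; chunks arrive via 'llm_partial' events.
  generateStreaming(options: { value: string }): Promise<{ value: string }>;
  // 'llm_start' fires when generation begins, 'llm_partial' once per chunk.
  addListener(
    eventName: 'llm_start' | 'llm_partial',
    listener: (result: { value: string }) => void,
  ): Promise<PluginListenerHandle>;
}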

In the Capacitor app (not the plugin), add a supported model to the Copy Bundle Resources build phase: in Xcode, select the App target, open Build Phases, expand Copy Bundle Resources, and add the model file. You can download the model from Kaggle: https://www.kaggle.com/models/google/gemma/tfLite/gemma-2b-it-gpu-int4
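
From the command line, that flow might look like the following, assuming the model file has already been downloaded from the Kaggle page above (the local file name and paths are illustrative):

# Copy the downloaded model into the iOS app sources (path is illustrative).
cp ~/Downloads/gemma-2b-it-gpu-int4.bin ios/App/App/

# Open the iOS project in Xcode, then: App target > Build Phases >
# Copy Bundle Resources > add the model file.
npx cap open ios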

Usage

Here’s a simple example of how to use the plugin in your application:

import { Inference } from 'language-model-plugin';

// Assumes component state like the streaming example below: promptRef is a
// ref to the prompt input; setMessageStream/setLastMessage are useState setters.
async function doInference(e) {
  e.preventDefault();
  setMessageStream([]); // clear any previously streamed output
  const result = await Inference.generate({ value: promptRef.current.value });
  setLastMessage(result.value);
}

The plugin also supports streaming responses:

import { useState, useEffect, useRef } from 'react';
import { Inference } from 'language-model-plugin';

function AIModel() {

    const [messageStream, setMessageStream] = useState([]);
    const promptRef = useRef();

    async function doStreamingInference(e) {
        e.preventDefault();
        await Inference.generateStreaming({ value: promptRef.current.value });
    }

    useEffect(() => {
        // 'llm_partial' delivers each new chunk; append it to the stream.
        const partial = Inference.addListener('llm_partial', (result) => {
            setMessageStream(prevData => [...prevData, result.value]);
        });
        // 'llm_start' fires when generation begins; reset the displayed stream.
        const start = Inference.addListener('llm_start', () => {
            setMessageStream([]);
        });
        // Remove the listeners on unmount so handlers are not registered twice.
        return () => {
            partial.then(handle => handle.remove());
            start.then(handle => handle.remove());
        };
    }, []);

    return (
        <>
            <input type="text" ref={promptRef} />
            <button onClick={doStreamingInference}>streaming inference</button>
            <span>{messageStream.join("")}</span>
        </>
    );
}

export default AIModel;

Keywords

capacitor
