gull-api 0.0.15 (PyPI): A REST API for running Large Language Models

GULL-API

GULL-API is a web application backend that can be used to run Large Language Models (LLMs). The interface between the front-end and the back-end is a JSON REST API.

Features

  • Exposes a /api route that returns a JSON document describing the parameters of the LLM.
  • Provides a /llm route that accepts POST requests with JSON payloads to run the LLM with the specified parameters.
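
To make the shape of these two routes concrete, here is a minimal stdlib-only mock of them. This is purely illustrative and is not the package's actual implementation (which is served from gull_api.main via uvicorn); the parameter description and response shapes are placeholders:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical parameter description; in the real app this is derived
# from the CLI JSON file, not hard-coded.
API_DESCRIPTION = {"parameters": [{"name": "Prompt", "type": "string"}]}

class MockGullHandler(BaseHTTPRequestHandler):
    def _send_json(self, payload, status=200):
        body = json.dumps(payload).encode("utf-8")
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        if self.path == "/api":
            self._send_json(API_DESCRIPTION)
        else:
            self._send_json({"error": "not found"}, status=404)

    def do_POST(self):
        if self.path == "/llm":
            length = int(self.headers.get("Content-Length", 0))
            params = json.loads(self.rfile.read(length) or b"{}")
            # A real deployment would invoke the LLM here; the mock
            # simply echoes the parameters back.
            self._send_json({"received": params})
        else:
            self._send_json({"error": "not found"}, status=404)

    def log_message(self, *args):
        pass  # silence per-request logging
```

Serving it with `HTTPServer(("127.0.0.1", 8000), MockGullHandler).serve_forever()` gives a stand-in to develop a front-end against before the real backend is running.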

Installation

Using Docker

  • Build the Docker image:

    docker build -t gull-api .
    
  • Run the Docker container:

    docker run -p 8000:8000 gull-api
    

The API will be available at http://localhost:8000.

Docker Test Mode

To build and run the Docker container in test mode, use the following commands:

docker build -t gull-api .
docker run -v $(pwd)/data:/app/data -v $(pwd)/example_cli.json:/app/cli.json -p 8000:8000 gull-api

In test mode, an included script echo_args.sh is used instead of a real LLM. This script echoes the arguments it receives, which can be helpful for local testing.
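
The script itself is not reproduced here, but based on the description above, a stand-in with the same behavior might look like the following sketch (see the repository for the real echo_args.sh):

```shell
#!/bin/sh
# Print every argument received, one per line, so the exact CLI
# invocation the API constructs can be inspected during testing.
for arg in "$@"; do
  printf '%s\n' "$arg"
done
```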

Local Installation

  • Clone the repository:

    git clone https://github.com/yourusername/gull-api.git
    
  • Change to the project directory:

    cd gull-api
    
  • Install the dependencies:

    pip install poetry
    poetry install
    
  • Configure Environment Variables (Optional):

    GULL-API can be configured using environment variables. To do this, create a file named .env in the root of the project directory, and set the environment variables there. For example:

    DB_URI=sqlite:///mydatabase.db
    CLI_JSON_PATH=/path/to/cli.json
    

    GULL-API uses the python-dotenv package to load these environment variables when the application starts.

  • Run the application:

    uvicorn gull_api.main:app --host 0.0.0.0 --port 8000
    

The API will be available at http://localhost:8000.
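
The .env loading described above is handled by python-dotenv, whose effect is roughly equivalent to this stdlib-only sketch (the helper name is ours, not part of gull-api):

```python
import os

def load_env_file(path=".env"):
    """Roughly what python-dotenv's load_dotenv() does by default:
    read KEY=VALUE lines into os.environ, letting already-set
    environment variables take precedence."""
    try:
        with open(path) as fh:
            lines = fh.read().splitlines()
    except FileNotFoundError:
        return
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```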

Usage

/api Route

Send a GET request to the /api route to retrieve a JSON document describing the parameters of the LLM:

GET http://localhost:8000/api

/llm Route

Send a POST request to the /llm route with a JSON payload containing the LLM parameters:

POST http://localhost:8000/llm
Content-Type: application/json

{
  "Prompt": "Once upon a time",
  "Top P": 0.5
}

Example Requests

curl -X POST "http://localhost:8000/llm" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{"Instruct mode": false, "Maximum length": 256, "Prompt": "Hello, world", "Stop sequences": "Goodbye, world", "Temperature": 0.7, "Top P": 0.95}'
curl -X GET "http://localhost:8000/api" -H "accept: application/json" | python -m json.tool
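
The same POST can be issued from Python with the standard library alone. The helper below is our own convenience wrapper, not part of gull-api; it assumes the default host and port:

```python
import json
import urllib.request

def build_llm_request(params, url="http://localhost:8000/llm"):
    """Build a POST request for the /llm route with a JSON payload."""
    return urllib.request.Request(
        url,
        data=json.dumps(params).encode("utf-8"),
        headers={"Content-Type": "application/json", "accept": "application/json"},
    )

# With the server running, send it like this:
#   req = build_llm_request({"Prompt": "Hello, world", "Top P": 0.95})
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read()))
```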

Example CLI JSON

An example CLI JSON file is provided in the repository as example_cli.json. This file provides an example of the expected structure for defining the command-line arguments for the LLM.

License

See LICENSE
