chatthy

An asynchronous terminal server/multiple-client setup for conducting and managing chats with LLMs.

This is the successor project to llama-farm.

The RAG/agent functionality should be split out into an API layer.

network architecture

  • client/server RPC-type architecture
  • message signing
  • ensure stream chunk ordering (signing and ordering are sketched after this list)
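
A minimal sketch of the signing and ordering ideas, assuming a pre-shared HMAC key and a per-stream sequence counter; the envelope fields here are invented for illustration, not chatthy's actual wire format:

```python
# Illustrative only: signed, sequence-numbered stream envelopes,
# assuming a key exchanged out of band.
import hashlib
import hmac
import json

SECRET = b"shared-secret"  # assumption: pre-shared between client and server

def sign(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(SECRET, body, hashlib.sha256).hexdigest()}

def verify(envelope: dict) -> bool:
    body = json.dumps(envelope["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

def stream_chunks(stream_id: str, pieces):
    # A per-stream counter lets the client detect gaps and restore order.
    for seq, text in enumerate(pieces):
        yield sign({"stream": stream_id, "seq": seq, "chunk": text})
```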

chat management

  • basic chat persistence and management
  • set and switch between saved system prompts (personalities)
  • manage prompts like chats (as files)
  • chat truncation to token length (sketched after this list)
  • rename chat
  • profiles (profile x personalities -> sets of chats)
  • import/export chat to client-side file
  • remove text between tags when saving
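
Truncation to a token budget could be as simple as the sketch below: keep system messages, then drop the oldest turns until the estimate fits. The 4-characters-per-token estimate is a stand-in for a real tokenizer.

```python
# Hedged sketch: truncate a chat to a token budget by dropping the
# oldest non-system messages first.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude stand-in for a tokenizer

def total_tokens(messages: list[dict]) -> int:
    return sum(estimate_tokens(m["content"]) for m in messages)

def truncate(messages: list[dict], budget: int) -> list[dict]:
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and total_tokens(system + rest) > budget:
        rest.pop(0)  # oldest turn goes first
    return system + rest
```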

context workspace

  • context workspace (load/drop files)
  • client inject from file
  • client inject from other sources, e.g. youtube (trag)
  • templates for standard instruction requests (trag)
  • context workspace - bench/suspend files (hidden by filename; see the sketch after this list)
  • local files / folders in transient workspace
  • checkboxes for delete / show / hide
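
One plausible reading of "hidden by filename" is a dotfile convention; the sketch below assumes exactly that (an assumption, since the README does not specify the marker):

```python
# Assumption: benched/suspended workspace files are hidden by a leading
# "." prefix; the README only says "hidden by filename".
from pathlib import Path

def suspend(path: Path) -> Path:
    return path.rename(path.with_name("." + path.name))

def resume(path: Path) -> Path:
    return path.rename(path.with_name(path.name.lstrip(".")))

def visible(workspace: Path) -> list[Path]:
    return [p for p in workspace.iterdir() if not p.name.startswith(".")]
```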

client interface

  • can switch between Anthropic, OpenAI, and tabbyAPI providers and models (switching sketched after this list)
  • streaming
  • syntax highlighting
  • decent REPL
  • REPL command mode
  • cut/copy from output
  • client-side prompt editing
  • vimish keys in output
  • client-side chat/message editing (how? temporarily set the input-field history? fire up $EDITOR in the client?) - edit via local chat import/export
  • latex rendering (this is tricky in the context of prompt-toolkit, but see flatlatex).
  • generation cancellation
  • tkinter UI
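
Provider/model switching might reduce to a small registry the REPL flips between, as in this sketch. tabbyAPI serves an OpenAI-compatible endpoint, so it mostly differs in base URL; the URLs, env-var names, and model ids below are illustrative, not chatthy's actual configuration.

```python
# Sketch of provider/model switching; all values are illustrative defaults.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Provider:
    base_url: str
    api_key_env: str
    model: str

PROVIDERS = {
    "anthropic": Provider("https://api.anthropic.com", "ANTHROPIC_API_KEY",
                          "claude-3-5-sonnet-latest"),
    "openai": Provider("https://api.openai.com/v1", "OPENAI_API_KEY", "gpt-4o"),
    # tabbyAPI speaks the OpenAI wire protocol, typically served locally.
    "tabby": Provider("http://localhost:5000/v1", "TABBY_API_KEY", "local"),
}

current = PROVIDERS["openai"]

def switch(name: str, model: str | None = None) -> Provider:
    """Flip the active provider, optionally overriding the model."""
    global current
    current = replace(PROVIDERS[name], model=model) if model else PROVIDERS[name]
    return current
```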

multimodal

  • design with multimodal models in mind
  • image sending and use
  • image display

miscellaneous / extensions

  • use proper config dir (group?)
  • dump default conf if missing

tool / agentic use

Use agents at the API level, which is to say, use an intelligent router. This separates the chatthy system from the RAG/LLM logic.
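The router idea might look like the sketch below: the chat server forwards every request to a single routing function, which decides whether plain chat, RAG, or a tool call handles it. The keyword classifier and the "!tool" prefix are stubs invented for this example, standing in for an LLM-based router.

```python
# Stub router: a keyword classifier stands in for an LLM-based one.
def route(request: str) -> str:
    if request.strip().startswith("!tool"):
        return "tools"
    if any(kw in request.lower() for kw in ("my notes", "my docs", "cite")):
        return "rag"
    return "chat"

HANDLERS = {
    "chat": lambda r: f"[chat] {r}",
    "rag": lambda r: f"[rag] {r}",
    "tools": lambda r: f"[tools] {r}",
}

def handle(request: str) -> str:
    return HANDLERS[route(request)](request)
```
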

  • (auto) tools (evolve from llama-farm -> trag)
  • user defined tool plugins
  • server uses vdb context at the LLM's discretion (tool)
  • iterative workflows (refer to llama-farm, consider smolagents)
  • tool chains
  • tool: workspace file write, delete
  • tool: workspace file patch/diff
  • tool: rag query tool
  • MCP agents?
  • smolagents / archgw?

RAG

  • summaries and standard client instructions (trag)
  • server uses vdb context on request
  • set RAG provider client-side (e.g. Mistral Small, Phi-4)
  • consider best method of pdf conversion / ingestion (fvdb), OOB (image models?)
  • full arxiv paper ingestion (fvdb) - consolidate into one latex file OOB
  • vdb result reranking with context, and winnowing (agent?) - sketched after this list
  • vdb results -> workspace (agent?)
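
Rerank-then-winnow could work roughly as below: rescore each vdb hit against the query, then drop everything under a relevance floor. The lexical-overlap scorer is a toy stand-in for a cross-encoder or LLM judge.

```python
# Toy rerank-then-winnow pass over vdb hits of the form (text, vdb_score).
# rerank_score is a placeholder for a cross-encoder or LLM judge.
def rerank_score(query: str, text: str) -> float:
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q) or 1)

def winnow(query: str, hits: list[tuple[str, float]], floor: float = 0.2):
    rescored = [(text, rerank_score(query, text)) for text, _ in hits]
    rescored.sort(key=lambda pair: pair[1], reverse=True)
    return [hit for hit in rescored if hit[1] >= floor]
```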

unallocated / out of scope

  • audio streaming? - see matatonic's servers
  • workflows (tree of instruction templates)
  • tasks
