gptme

/ʤiː piː tiː miː/

Getting Started • Website • Documentation

[Badges: Build Status • Docs Build Status • Codecov • PyPI version • PyPI downloads (all-time / per day) • Discord • X.com • Powered by gptme]

📜 Personal AI assistant in your terminal, with tools so it can:
use the terminal, run code, edit files, browse the web, use vision, and much more.
It assists with all kinds of knowledge work, especially programming, from a simple but powerful CLI.

An unconstrained local alternative to ChatGPT's "Code Interpreter".
Not limited by lack of software, internet access, timeouts, or privacy concerns (if using local models).


🎥 Demos

Note: These demos are very out of date and do not reflect the latest capabilities. We hope to update them soon!

Fibonacci (old)

[demo screencast with asciinema]

Steps
  1. Create a new dir 'gptme-test-fib' and git init
  2. Write a fib function to fib.py, commit
  3. Create a public repo and push to GitHub

Snake with curses

[demo screencast]

Steps
  1. Create a snake game with curses to snake.py
  2. Running fails, ask gptme to fix a bug
  3. Game runs
  4. Ask gptme to add color
  5. Minor struggles
  6. Finished game with green snake and red apple pie!

Mandelbrot with curses

[demo screencast]

Steps
  1. Render mandelbrot with curses to mandelbrot_curses.py
  2. Program runs
  3. Add color

Answer question from URL

[demo screencast]

Steps
  1. Ask who the CEO of Superuser Labs is, passing website URL
  2. gptme browses the website, and answers correctly

You can find more Demos and Examples in the documentation.
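
As a rough sketch of what driving a demo like these looks like, the Fibonacci steps above could be issued as a single chained prompt (prompt chaining with the '-' separator is described under Usage; the GitHub step assumes the gh tool is enabled):

# hypothetical one-liner reproducing the Fibonacci demo steps
gptme 'create a new dir gptme-test-fib and git init it' - 'write a fib function to fib.py and commit it' - 'create a public repo and push it to GitHub'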

🌟 Features

  • 💻 Code execution
    • Executes code in your local environment with the shell and python tools.
  • 🧩 Read, write, and change files
    • Makes incremental changes with the patch tool.
  • 🌐 Search and browse the web.
    • Can use a browser via Playwright with the browser tool.
  • 👀 Vision
    • Can see images referenced in prompts, screenshots of your desktop, and web pages.
  • 🔄 Self-correcting
    • Output is fed back to the assistant, allowing it to respond and self-correct.
  • 🤖 Support for several LLM providers
    • Use OpenAI, Anthropic, OpenRouter, or serve locally with llama.cpp; see the example after this list.
  • 💬 Web UI frontend and REST API (optional, see docs for server)
    • Interact with the assistant from a web interface or via REST API.
  • 💻 Computer use tool, as hyped by Anthropic (see #216)
    • Give the assistant access to a full desktop, allowing it to interact with GUI applications.
  • 🤖 Long-running agents and advanced agent architectures (see #143 and #259)
  • ✨ Many smaller features to ensure a great experience
    • 🚰 Pipe in context via stdin or as arguments.
      • Passing a filename as an argument will read the file and include it as context.
    • → Tab completion
    • 📝 Automatic naming of conversations
    • 💬 Optional basic Web UI and REST API
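
For example, provider and model selection maps directly onto the -m/--model flag shown under Usage below, and context can be piped in on stdin (error.log is just a placeholder file name):

# hosted models, using the model names listed in the CLI help
gptme -m openai/gpt-4o 'summarize the TODOs in this repo'
gptme -m anthropic/claude-3-5-sonnet-20240620 'explain this error' < error.log

For local models served with llama.cpp, see the provider documentation.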

🛠 Use Cases

  • 🖥 Development: Write and run code faster with AI assistance.
  • 🎯 Shell Expert: Get the right command using natural language (no more memorizing flags!); see the example prompts after this list.
  • 📊 Data Analysis: Process and analyze data directly in your terminal.
  • 🎓 Interactive Learning: Experiment with new technologies or codebases hands-on.
  • 🤖 Agents & Tools: Experiment with agents & tools in a local environment.
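
For the shell-expert and data-analysis use cases, prompts are just plain natural-language requests, for example:

# illustrative prompts; gptme proposes and runs the matching shell or Python code
gptme 'find all files larger than 100MB modified in the last week'
gptme 'plot the number of commits per month in this repo'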

🛠 Developer perks

  • 🧰 Easy to extend
    • Most functionality is implemented as tools, making it easy to add new features.
  • 🧪 Extensive testing, high coverage.
  • 🧹 Clean codebase, checked and formatted with mypy, ruff, and pyupgrade.
  • 🤖 GitHub Bot to request changes from comments! (see #16)
    • Operates in this repo! (see #18 for example)
    • Runs entirely in GitHub Actions.
  • 📊 Evaluation suite for testing capabilities of different models
  • 📝 gptme.vim for easy integration with vim

🚧 In progress

  • 🌳 Tree-based conversation structure (see #17)
  • 📜 RAG to automatically include context from local files (see #59)
  • 🏆 Advanced evals for testing frontier capabilities

🚀 Getting Started

Install with pipx:

# requires Python 3.10+
pipx install gptme

Now, to get started, run:

gptme
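
Before the first chat, gptme needs credentials for whichever provider you pick; the full configuration options are covered in the documentation. A minimal sketch, assuming the conventional provider environment variables:

# placeholder keys; set only the one for the provider you use
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."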

Here are some examples:

gptme 'write an impressive and colorful particle effect using three.js to particles.html'
gptme 'render mandelbrot set to mandelbrot.png'
gptme 'suggest improvements to my vimrc'
gptme 'convert to h265 and adjust the volume' video.mp4
git diff | gptme 'complete the TODOs in this diff'
make test | gptme 'fix the failing tests'

For more, see the Getting Started guide and the Examples in the documentation.

🛠 Usage

$ gptme --help
Usage: gptme [OPTIONS] [PROMPTS]...

  gptme is a chat-CLI for LLMs, empowering them with tools to run shell
  commands, execute code, read and manipulate files, and more.

  If PROMPTS are provided, a new conversation will be started with it. PROMPTS
  can be chained with the '-' separator.

  The interface provides user commands that can be used to interact with the
  system.

  Available commands:
    /undo         Undo the last action
    /log          Show the conversation log
    /tools        Show available tools
    /edit         Edit the conversation in your editor
    /rename       Rename the conversation
    /fork         Create a copy of the conversation with a new name
    /summarize    Summarize the conversation
    /replay       Re-execute codeblocks in the conversation, won't store output in log
    /impersonate  Impersonate the assistant
    /tokens       Show the number of tokens used
    /export       Export conversation as standalone HTML
    /help         Show this help message
    /exit         Exit the program

Options:
  -n, --name TEXT        Name of conversation. Defaults to generating a random
                         name.
  -m, --model TEXT       Model to use, e.g. openai/gpt-4o,
                         anthropic/claude-3-5-sonnet-20240620. If only
                         provider given, a default is used.
  -w, --workspace TEXT   Path to workspace directory. Pass '@log' to create a
                         workspace in the log directory.
  -r, --resume           Load last conversation
  -y, --no-confirm       Skips all confirmation prompts.
  -n, --non-interactive  Force non-interactive mode. Implies --no-confirm.
  --system TEXT          System prompt. Can be 'full', 'short', or something
                         custom.
  -t, --tools TEXT       Comma-separated list of tools to allow. Available:
                         read, save, append, patch, shell, subagent, tmux,
                         browser, gh, chats, screenshot, vision, computer,
                         python.
  --no-stream            Don't stream responses
  --show-hidden          Show hidden system messages.
  -v, --verbose          Show verbose output.
  --version              Show version and configuration information
  --help                 Show this message and exit.
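
Tying the options together, a short sketch of prompt chaining with the '-' separator, restricting tools, and running unattended (file names and prompt contents are illustrative):

# restrict the assistant to the shell and python tools, chaining two prompts
gptme --tools shell,python 'write a fib function to fib.py' - 'now add tests and run them'
# non-interactive mode implies --no-confirm, useful in scripts
gptme --non-interactive 'summarize the changes in this diff' < changes.diff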

📊 Stats

⭐ Stargazers over time

[chart: stargazers over time]

📈 Download Stats

[download statistics charts omitted]
