replgpt

An interactive REPL with GPT-based assistance

replgpt is a Python tool that lets you seamlessly switch between authoring code, generating code with an LLM, and executing it, all within a single terminal session. Imagine a REPL where you can move freely between Python and natural language, and execute both.

If you've ever been frustrated copying and pasting code between ChatGPT and some other environment, this is the tool for you.

What It Is Not

replgpt is not an IDE. It is not an editor-based coding agent, though it shares some functionality with one. If you are building a new feature on an existing code base, this may not be the best tool for the job. However, if you want to work with generated Python code without toggling between windows, jump-start a new idea in Python, or learn about an existing library, it might be up your alley.

Features

  • Standard Python REPL: Execute Python commands just like in the standard Python REPL. The code you write and its results are automatically added to your chat context.
  • LLM Code Generation: Enter natural language instead of Python. Ask questions about an error message without retyping it, or ask for a function to be written; it will immediately be available in your REPL session (see the example session after this list).
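
For a sense of the workflow, here is a hypothetical session. The >>> prompt, the phrasing of the request, and the generated function are illustrative only; the exact behavior depends on the model's response.

>>> orders = [3, 7, 5]
>>> sum(orders)
15
>>> write a function named order_stats that returns the min, max, and mean of a list of numbers
# the Agent replies with a definition of order_stats(), which is executed and becomes callable
>>> order_stats(orders)
(3, 7, 5.0)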

Getting Started

Installation

Install replgpt directly from PyPI:

pip install replgpt

Set Up API Key

Set the OPENAI_API_KEY environment variable with your OpenAI API key:

export OPENAI_API_KEY="your-openai-api-key"

After installing, start the REPL with:

replgpt

Functionality

Python

Enter any valid Python code. When executed, the command and its output will be included in the Agent's memory.
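
For example (the >>> prompt is shown for illustration):

>>> import json
>>> config = json.loads('{"retries": 3, "timeout": 30}')
>>> config["retries"]
3
# both the statements and their results are now part of the Agent's context,
# so a follow-up question can refer to config without re-pasting anything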

Natural Language

Enter a query for the AI Agent. It can answer questions about the code you've run or errors you've seen, and help you debug code that isn't behaving the way you expect. You can also ask the Agent to write a function for you, which will automatically become available in your REPL session.
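
A hypothetical exchange (the Agent's reply is paraphrased, and the prompt is illustrative):

>>> int("3.5")
ValueError: invalid literal for int() with base 10: '3.5'
>>> why did that fail, and how do I convert the string?
# the Agent already has the traceback in its context, so it can explain that int()
# does not parse decimal strings and suggest int(float("3.5")) instead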

Commands

There are several commands you can issue to the REPL to control its behavior:

  • /help - Print additional information about the REPL and commands you can run.

  • /file_to_context <file_path> - Read the contents of a local file and load it into the Agent's context window. This can be used to import documentation into the Agent's memory, or to give it knowledge of existing code you'd like to work with inside the REPL. Or, if you want to understand a project's dependencies better, run /file_to_context requirements.txt and ask the Agent about the libraries it uses.

  • /auto_eval - Controls what the REPL does with code generated by your AI Agent. The default strategy, 'always', means any code returned by the Agent is executed. If you have concerns about this behavior, you can set it to 'never'. Alternatively, the 'infer' strategy makes an additional LLM call to evaluate the safety of the generated code; in practice this should allow definitions (functions and classes) to run but will not execute code that could have side effects. See the example after this list.
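
For example, a session that leans on these commands might look like this. The argument syntax shown for /auto_eval is an assumption; run /help to confirm how strategies are set.

>>> /file_to_context requirements.txt
>>> which of these libraries handles HTTP requests?
# the Agent answers from the requirements file now in its context
>>> /auto_eval infer   # assumed syntax for choosing a strategy; 'always' is the default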
