README-AI

Automated README file generator, powered by large language model APIs

github-actions codecov pypi-version pepy-total-downloads license

Documentation
Quick Links

πŸ“ Overview

Objective

Readme-ai is a developer tool that auto-generates README.md files using a combination of data extraction and generative AI. Simply provide a repository URL or a local path to your codebase, and a well-structured, detailed README file will be generated for you.

Motivation

Readme-ai streamlines the creation and maintenance of documentation, enhancing developer productivity. The project aims to enable developers of all skill levels, across all domains, to better understand, use, and contribute to open-source software.


πŸ‘Ύ Demo

CLI

readmeai-cli-demo

Streamlit

readmeai-streamlit-demo

> [!TIP]
>
> Check out this YouTube tutorial created by a community member!


🧬 Features

  • Flexible README Generation: Robust repository context extraction engine combined with generative AI.
  • Customizable Output: Dozens of CLI options for styling/formatting, badges, header designs, and more.
  • Language Agnostic: Works across a wide range of programming languages and project types.
  • Multi-LLM Support: Compatible with OpenAI, Ollama, Google Gemini and Offline Mode.
    • Offline Mode: Generate a boilerplate README without calling an external API.

See a few examples of the README-AI customization options below:

default-header
--emojis --image custom --badge-color DE3163 --header-style compact --toc-style links

--image cloud --header-style compact --toc-style fold
cloud-db-logo
--align left --badge-style flat-square --image cloud
gradient-markdown-logo
--align left --badge-style flat --image gradient
custom-logo
--badge-style flat --image custom
skills-light
--badge-style skills-light --image grey
readme-ai-header
--badge-style flat-square
black-logo
--badge-style flat --image black
default-header
--image custom --badge-color 00ffe9 --badge-style flat-square --header-style classic
default-header
--image llm --badge-style plastic --header-style classic
default-header
--image custom --badge-color BA0098 --badge-style flat-square --header-style modern --toc-style fold

See the Configuration section for a complete list of CLI options.

πŸ‘‹ Overview

A preview of the README sections that readme-ai generates:

  • Overview (llm-overview)
  • 🧩 Features: a features table built from a prompt template (llm-features)
  • πŸ“„ Codebase Documentation
    • Repository Structure, generated by tree.py (directory-tree)
    • File Summaries, generated via LLM prompts (llm-summaries)
  • πŸš€ Quickstart Commands: Install, Usage, and Test instructions derived from the language/dependency parsers (quick-start)
  • πŸ”° Contributing Guidelines (contributing-guidelines)
  • Additional Sections: Project Roadmap, Contributing Guidelines, License, and Acknowledgements (contributing-and-more)

πŸš€ Getting Started

System Requirements:

  • Python 3.9+
  • Package manager or container runtime: pip, pipx, docker
  • LLM service: OpenAI, Ollama, Google Gemini, Offline Mode
    • Anthropic and LiteLLM support coming soon!
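A quick way to confirm the interpreter meets the Python 3.9+ requirement (a minimal sketch, assuming `python3` is on your PATH):

```shell
# Assert the running interpreter satisfies the Python 3.9+ requirement
python3 -c 'import sys; assert sys.version_info >= (3, 9), "Python 3.9+ required"; print("Python OK")'
```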

Repository URL or Local Path:

Make sure to have a repository URL or a local directory path ready for the CLI.

Select an LLM API Service:

  • OpenAI: recommended; requires an account and an API key.
  • Ollama: free and open-source; potentially slower and more resource-intensive.
  • Google Gemini: requires a Google Cloud account and an API key.
  • Offline Mode: generates a boilerplate README without making API calls.

          βš™οΈ Installation

          Using pip

          pip

          ❯ pip install readmeai
          
          Using pipx

          pipx

          ❯ pipx install readmeai
          

          [!TIP]

          Use pipx to install and run Python command-line applications without causing dependency conflicts with other packages!

          Using docker

          docker

          ❯ docker pull zeroxeli/readme-ai:latest
          
          From source
          Build readme-ai

          Clone repository and navigate to the project directory:

          ❯ git clone https://github.com/eli64s/readme-ai
          ❯ cd readme-ai
          
          Using bash

          bash

          ❯ bash setup/setup.sh
          
          Using poetry

          Poetry

          ❯ poetry install
          

πŸ€– Usage

Environment Variables

OpenAI

Generate an OpenAI API key and set it as the environment variable OPENAI_API_KEY.

```sh
# Using Linux or macOS
❯ export OPENAI_API_KEY=<your_api_key>

# Using Windows
❯ set OPENAI_API_KEY=<your_api_key>
```
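After exporting, you can confirm the key is visible to child processes such as the readmeai CLI (a POSIX-shell sketch; the `${VAR:+text}` expansion prints only when the variable is set and non-empty):

```shell
# Prints a confirmation only if OPENAI_API_KEY is set and non-empty
sh -c 'echo "${OPENAI_API_KEY:+OPENAI_API_KEY is set}"'
```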
          

Ollama

Pull the model of your choice from the Ollama registry:

```sh
# e.g. mistral, llama3, gemma2, etc.
❯ ollama pull mistral:latest
```

Start the Ollama server:

```sh
❯ export OLLAMA_HOST=127.0.0.1 && ollama serve
```

For more details, check out the Ollama repository.

Google Gemini

Generate a Google API key and set it as the environment variable GOOGLE_API_KEY.

```sh
❯ export GOOGLE_API_KEY=<your_api_key>
```
          
Running README-AI

Using pip

With the OpenAI API:

```sh
❯ readmeai --repository https://github.com/eli64s/readme-ai \
        --api openai \
        --model gpt-3.5-turbo
```

With Ollama:

```sh
❯ readmeai --repository https://github.com/eli64s/readme-ai \
        --api ollama \
        --model llama3
```

With Gemini:

```sh
❯ readmeai --repository https://github.com/eli64s/readme-ai \
        --api gemini \
        --model gemini-1.5-flash
```

Advanced Options:

```sh
❯ readmeai --repository https://github.com/eli64s/readme-ai \
        --output readmeai.md \
        --api openai \
        --model gpt-4-turbo \
        --badge-color A931EC \
        --badge-style flat-square \
        --header-style compact \
        --toc-style fold \
        --temperature 0.1 \
        --tree-depth 2 \
        --image llm \
        --emojis
```

Using docker

```sh
❯ docker run -it \
        -e OPENAI_API_KEY=$OPENAI_API_KEY \
        -v "$(pwd)":/app zeroxeli/readme-ai:latest \
        -r https://github.com/eli64s/readme-ai
```

Using streamlit

Try readme-ai directly in your browser on Streamlit, no installation required! For more details, see the readme-ai-streamlit repository.

From source

Using bash

```sh
❯ conda activate readmeai
❯ python3 -m readmeai.cli.main -r https://github.com/eli64s/readme-ai
```

Using poetry

```sh
❯ poetry shell
❯ poetry run python3 -m readmeai.cli.main -r https://github.com/eli64s/readme-ai
```

πŸ§ͺ Testing

Using pytest

```sh
❯ make test
```

Using nox

```sh
❯ make test-nox
```

> [!TIP]
>
> Use nox to test the application against multiple Python environments and dependency versions!

πŸ”§ Configuration

Customize your README generation using these CLI options:

| Option | Description | Default |
|--------|-------------|---------|
| `--align` | Text alignment in the header | `center` |
| `--api` | LLM API service (openai, ollama, offline) | `offline` |
| `--badge-color` | Badge color name or hex code | `0080ff` |
| `--badge-style` | Badge icon style type | `flat` |
| `--base-url` | Base URL for the repository | `v1/chat/completions` |
| `--context-window` | Maximum context window of the LLM API | `3999` |
| `--emojis` | Adds emojis to the README header sections | `False` |
| `--header-style` | Header template style | `classic` |
| `--image` | Project logo image | `blue` |
| `--model` | Specific LLM model to use | `gpt-3.5-turbo` |
| `--output` | Output filename | `readme-ai.md` |
| `--rate-limit` | Maximum API requests per minute | `5` |
| `--repository` | Repository URL or local directory path | `None` |
| `--temperature` | Creativity level for content generation | `0.9` |
| `--toc-style` | Table of contents template style | `bullet` |
| `--top-p` | Probability of the top-p sampling method | `0.9` |
| `--tree-depth` | Maximum depth of the directory tree structure | `2` |

> [!TIP]
>
> For a full list of options, run `readmeai --help` in your terminal.


Project Badges

The --badge-style option lets you select the style of the default badge set.

Available styles: default, flat, flat-square, for-the-badge, plastic, skills, skills-light, social.

When providing the --badge-style option, readme-ai does two things:

1. Formats the default badge set to match the selection (i.e. flat, flat-square, etc.).
2. Generates an additional badge set representing your project's dependencies and tech stack (i.e. Python, Docker, etc.).

Example:

```sh
❯ readmeai --badge-style flat-square --repository https://github.com/eli64s/readme-ai
```

Output:

{... project logo ...}

{... project name ...}

{...project slogan...}

Developed with the software and tools below.

{... end of header ...}

Select a project logo using the --image option.

Available logo options: blue, gradient, black, cloud, purple, grey.

For custom images, see the following options:

  • Use --image custom to invoke a prompt to upload a local image file path or URL.
  • Use --image llm to generate a project logo using an LLM API (OpenAI only).
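For example, an LLM-generated logo run might look like this (a sketch, not verified output; `--image llm` requires the OpenAI service, and the repository URL is illustrative):

```sh
❯ readmeai --repository https://github.com/eli64s/readme-ai \
        --api openai \
        --image llm
```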

🎨 Examples

| Language/Framework | Output File | Input Repository | Description |
|--------------------|-------------|------------------|-------------|
| Python | readme-python.md | readme-ai | Core readme-ai project |
| TypeScript & React | readme-typescript.md | ChatGPT App | React Native ChatGPT app |
| PostgreSQL & DuckDB | readme-postgres.md | Buenavista | Postgres proxy server |
| Kotlin & Android | readme-kotlin.md | file.io Client | Android file sharing app |
| Streamlit | readme-streamlit.md | readme-ai-streamlit | Streamlit UI for readme-ai app |
| Rust & C | readme-rust-c.md | CallMon | System call monitoring tool |
| Docker & Go | readme-go.md | docker-gs-ping | Dockerized Go app |
| Java | readme-java.md | Minimal-Todo | Minimalist todo Java app |
| FastAPI & Redis | readme-fastapi-redis.md | async-ml-inference | Async ML inference service |
| Jupyter Notebook | readme-mlops.md | mlops-course | MLOps course repository |
| Apache Flink | readme-local.md | Local Directory | Example using a local directory |

> [!NOTE]
>
> See additional README file examples here.


πŸ“Œ Roadmap

  • v1.0 release with new features, bug fixes, and improved performance.
  • Develop the readmeai-vscode extension to generate README files (WIP).
  • Add new CLI options to enhance README file customization.
    • --audit to review existing README files and suggest improvements.
    • --template to select a README template style (i.e. ai, data, web, etc.).
    • --language to generate README files in any language (i.e. zh-CN, ES, FR, JA, KO, RU).
  • Develop a robust documentation generator to build full project docs (i.e. Sphinx, MkDocs).
  • Create community-driven templates for README files and a gallery of readme-ai examples.
  • Add a GitHub Actions script to automatically update README file content on repository push.

πŸ“’ Changelog

Changelog

🀝 Contributing

To grow the project, we need your help! See the links below to get started.

πŸŽ— License

MIT

πŸ‘Š Acknowledgments

⬆️ Top

