
Security News
Axios Supply Chain Attack Reaches OpenAI macOS Signing Pipeline, Forces Certificate Rotation
OpenAI rotated macOS signing certificates after a malicious Axios package reached its CI pipeline in a broader software supply chain attack.
rules4
A universal CLI utility to configure AI rules files (e.g., .roo/rules, CLAUDE.md, .cursor/rules) for any project, based on the latest industry best practices via live Perplexity research.

- rules4 auto intelligently detects your project's language and frameworks, and generates appropriate tags
- Use the --lang and --tags options to override detection when needed
- Use the list-models command to see all available models

The following diagram shows the core execution flow:
flowchart TD
A[CLI Command] --> B[Parse language and tags]
B --> C[Loop through each tag]
C --> D{Research enabled?}
D -->|Yes| E[Call Perplexity API]
D -->|No| F[Skip research]
E --> G[Primary model generates rules]
F --> G
G --> H{Review model specified?}
H -->|Yes| I[Review model refines rules]
H -->|No| J[Use original rules]
I --> K[Save to tool-specific folder]
J --> K
K --> L{More tags?}
L -->|Yes| C
L -->|No| M[Complete]
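The flow above can be sketched in Python. The function names (call_perplexity, generate_rules, review_rules) and the output path layout are illustrative stand-ins, not the real rules4 internals:

```python
from pathlib import Path

def run(tool, lang, tags, research=False, review_model=None, out_dir="."):
    for tag in tags:                                   # Loop through each tag
        # Research enabled? Call Perplexity, otherwise skip research
        context = call_perplexity(lang, tag) if research else ""
        rules = generate_rules(lang, tag, context)     # Primary model generates rules
        if review_model:                               # Review model specified?
            rules = review_rules(rules, review_model)  # Review model refines rules
        # Save to a tool-specific folder
        path = Path(out_dir) / tool / f"{lang}-{tag}.md"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(rules)

# Stubbed model calls so the sketch is runnable without API keys:
def call_perplexity(lang, tag): return f"research notes for {lang}/{tag}"
def generate_rules(lang, tag, ctx): return f"# {lang} rules: {tag}\n{ctx}"
def review_rules(rules, model): return rules + f"\n<!-- reviewed by {model} -->"
```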
pip install rules4
Let rules4 automatically analyze your project and generate intelligent rules:
# Automatically detect language, frameworks, and generate rules for all tools
rules4 auto
# Auto-detect with research for cutting-edge best practices
rules4 auto --research
# Auto-detect for a specific tool
rules4 auto cursor
# Auto-detect with model selection
rules4 auto --primary gpt-4-turbo --review claude-3-5-sonnet-20241022
Or specify everything manually for precise control:
# For Cursor
rules4 cursor --lang python --tags "testing,security"
# For Claude with research
rules4 claude --research --lang javascript --tags "react,typescript"
# For all configured tools (requires initialization)
rules4 generate
Initialize a configuration file for your project to set defaults:
rules4 init
This creates a .rules4rc file with default settings for batch operations.
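The README does not show the .rules4rc format, so the keys below (tools, language, tags) are purely hypothetical; assuming an INI-style file, it could be read with the standard library:

```python
import configparser

# Hypothetical .rules4rc contents -- the actual keys rules4 uses may differ
SAMPLE = """\
[settings]
tools = cursor, claude
language = python
tags = testing, security
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)
tools = [t.strip() for t in config["settings"]["tools"].split(",")]
print(tools)  # ['cursor', 'claude']
```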
rules4 interacts with various AI models and research services. To use these features, you need to set up the corresponding API keys as environment variables:
- OPENAI_API_KEY: Required for OpenAI models (e.g., gpt-4-turbo, gpt-4o).
- ANTHROPIC_API_KEY: Required for Anthropic models (e.g., claude-3-5-sonnet-20241022, claude-3-opus-20240229).
- PERPLEXITY_API_KEY: Required for the --research flag to perform research with Perplexity AI.

Example (add to your shell profile, e.g., ~/.bashrc or ~/.zshrc):
export OPENAI_API_KEY="your_openai_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"
export PERPLEXITY_API_KEY="your_perplexity_api_key"
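A small preflight script can confirm these keys are set before running rules4; the variable names come from the export lines above:

```python
import os

REQUIRED = {
    "OPENAI_API_KEY": "OpenAI models via --primary/--review",
    "ANTHROPIC_API_KEY": "Anthropic models via --primary/--review",
    "PERPLEXITY_API_KEY": "the --research flag",
}

# Report any keys that are unset or empty
missing = [name for name in REQUIRED if not os.environ.get(name)]
for name in missing:
    print(f"missing {name}: needed for {REQUIRED[name]}")
```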
To generate rules for a specific tool (e.g., copilot) for a given language and tags:
rules4 copilot --lang python --tags "pytest,langgraph"
This command will:
- Use gpt-4-turbo as the primary model (default).
- Generate Python rules for the tags pytest and langgraph.
- Save the rules to .github/copilot-python-pytest,langgraph.md (or similar, depending on the tool).

You can specify a primary model, a review model, and enable research. Both --primary and --review flags support OpenAI and Anthropic models:
# Use Claude as primary, GPT-4 as reviewer
rules4 copilot --primary claude-3-5-sonnet-20241022 --review gpt-4o --research --lang javascript --tags "react,typescript"
# Use GPT-4 for both generation and review
rules4 cursor --primary gpt-4-turbo --review gpt-4o --lang python --tags "async,testing"
# Use Claude for both generation and review
rules4 claude --primary claude-3-opus-20240229 --review claude-3-5-sonnet-20241022 --lang go --tags "concurrency"
These commands demonstrate the flexibility:
- Research (when enabled) uses Perplexity's sonar-pro model

Auto-Detection (rules4 auto)

The auto command is the smartest way to generate rules. It analyzes your project structure and automatically determines the best settings:
Languages (50+):
Frameworks & Libraries (50+):
Smart Tags Generated:
# Basic auto-detection for all configured tools
rules4 auto
# Tool-specific auto-generation
rules4 auto cursor
rules4 auto claude
# Auto-detection with research and review
rules4 auto --research --review claude-3-5-sonnet-20241022
# Override auto-detected language but keep detected frameworks
rules4 auto --lang typescript
# Override auto-detected tags but keep detected language
rules4 auto --tags "performance,security,testing"
# Combination with all features
rules4 auto cursor --primary gpt-4-turbo --research --dry-run
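One plausible way such auto-detection can work is to infer the language from well-known marker files. This is a guess at the general approach, not rules4's actual detection logic, and the marker table below is illustrative:

```python
from pathlib import Path

# More specific markers listed first (e.g., tsconfig.json before package.json)
MARKERS = {
    "pyproject.toml": "python",
    "requirements.txt": "python",
    "tsconfig.json": "typescript",
    "package.json": "javascript",
    "go.mod": "go",
    "Cargo.toml": "rust",
}

def detect_language(project_path="."):
    root = Path(project_path)
    for marker, lang in MARKERS.items():
        if (root / marker).exists():
            return lang
    return None  # Unknown project type; caller should ask for --lang
```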
If you have a .rules4rc file configured (created with rules4 init), you can generate rules for all specified tools:
rules4 generate --lang go --tags "code style"
This command will:
- Read the list of tools from your .rules4rc file.
- Generate rules with the tag code style for Go projects.

Note: The generate command requires a .rules4rc configuration file. Individual tool commands (like rules4 cursor, rules4 claude) and the auto command work without any configuration.
- --primary <model_name>: Specify the primary AI model for rule generation. Supports both OpenAI and Anthropic models (e.g., gpt-4-turbo, gpt-4o, claude-3-5-sonnet-20241022).
- --review <model_name>: Specify an AI model for reviewing and refining the generated rules. Also supports both OpenAI and Anthropic models.
- --research: Enable research using Perplexity AI before rule generation.
- --lang <language>: Specify the programming language for rule generation (e.g., python, javascript, go).
- --tags <tag1,tag2,...>: Comma-separated list of tags or topics for rule generation (e.g., pytest,langgraph or react,typescript).
- --dry-run: Preview the changes without actually writing any files.
- --yes, -y: Overwrite existing files without prompting for confirmation.
- --project-path <path>: (Optional) Specify the target project directory. Defaults to the current directory.

To see all available models for use with --primary and --review:
rules4 list-models
This will display models grouped by provider (OpenAI, Anthropic, and Perplexity).
This project is in early development. For contributions, see CONTRIBUTING.md.
For maintainers, this project includes a comprehensive publishing system:
# Install publishing dependencies
pip install build twine
# Set up API tokens
export PYPI_API_TOKEN="your-pypi-token" # For PyPI
export TEST_PYPI_API_TOKEN="your-test-pypi-token" # For TestPyPI
# Test publish (recommended first)
./publish.sh --test --dry-run # Preview what would be published to TestPyPI
./publish.sh --test # Publish to TestPyPI
# Production publish
./publish.sh --dry-run # Preview what would be published to PyPI
./publish.sh # Publish to PyPI
# With version update
./publish.sh --version 1.2.3 # Update version and publish
make publish-test # Publish to TestPyPI
make publish # Publish to PyPI
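The version update step (./publish.sh --version 1.2.3) can be sketched as follows. How publish.sh actually edits files is not shown here; this assumes a simple regex rewrite of the version field in pyproject.toml:

```python
import re

def bump_version(pyproject_text, new_version):
    # Replace the first `version = "..."` assignment at the start of a line
    return re.sub(
        r'(?m)^version\s*=\s*"[^"]*"',
        f'version = "{new_version}"',
        pyproject_text,
        count=1,
    )

sample = 'name = "rules4"\nversion = "1.2.2"\n'
print(bump_version(sample, "1.2.3"))
```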
The enhanced publish.sh script includes:
- Version updates kept in sync between pyproject.toml and the CLI

# Clone and setup
git clone https://github.com/dimitritholen/airules.git
cd airules
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
make test # Run tests
make lint # Run all linting checks
make lint-fix # Auto-fix formatting issues
make format # Format code with black
make type-check # Run mypy type checking
MIT License - see LICENSE file for details.
Run make test lint to ensure quality.
