JuryLLM is an experimental framework that orchestrates multiple language models to work collaboratively, similar to a jury, to solve complex problems. By leveraging ensemble decision-making, the project aims to show how smaller, open-source LLMs can work together to produce more robust and intelligent solutions.
The major breakthrough in human intelligence came when we learned to communicate more effectively. Unlike other highly intelligent species that went extinct, we were set apart by our ability to communicate and collaborate. Our progress has always been rooted in effective communication, teamwork, and a collective focus on shared goals. Even the open-source movement embodies this spirit of collaboration, showing how working together drives innovation and success.
The primary goals of JuryLLM are to show that smaller, open-source models can collaborate to produce more robust answers than any single model would alone, and to explore ensemble decision-making with a judge providing oversight.
The system is designed as a modular framework in which specialist models discuss a case and a Judge evaluates the discussion and delivers a verdict.
We welcome contributions to this experimental project!
MIT License
The JuryLLM framework consists of three main components:
Participants (model.py)
- BaseParticipant: Abstract base class for all participants
- OllamaParticipant: Implementation for local Ollama models
- OpenAIParticipant: Implementation for OpenAI API models
- Judge: Specialized participant that evaluates discussions and provides verdicts

Discussion Management (jury.py)
- Discussion: Core class that manages the conversation flow

Message System
- Message: Data structure for communication
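The README does not show the class internals; the following is a minimal sketch of how these pieces might fit together. Apart from the class names above, every field and method name here is an assumption, not the project's actual API.

import abc
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    # Data structure passed between participants (field names are assumed)
    sender: str
    content: str

class BaseParticipant(abc.ABC):
    # Abstract base that OllamaParticipant, OpenAIParticipant, and Judge would extend
    def __init__(self, name: str, model_id: str):
        self.name = name
        self.model_id = model_id

    @abc.abstractmethod
    async def respond(self, history: List[Message]) -> Message:
        # Produce this participant's next contribution given the discussion so far
        ...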
Prerequisites
# Install Python 3.8+ and pip
# Install Ollama (for local models)
brew install ollama
Installation
# Clone the repository
git clone https://github.com/yourusername/juryLLM.git
cd juryLLM
# Create and activate virtual environment
python -m venv venv
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
Pull Required Models
# Pull Ollama models
ollama pull llama2:13b
ollama pull llama3.2:3b
ollama pull phi3.5:3.8b
Configuration
export OPENAI_API_KEY=your_api_key
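Only the OpenAIParticipant needs this key; the Ollama-backed participants run locally. A minimal sketch of picking the key up at runtime in Python (the constructor arguments are assumed to mirror OllamaParticipant's, and the model name is a placeholder; check model.py for the real signature):

import os

from juryLLM.model import OpenAIParticipant

# Fail fast if the key was not exported in the shell
api_key = os.environ["OPENAI_API_KEY"]

# "gpt-4o-mini" is a placeholder model name, not something the README prescribes
gpt_participant = OpenAIParticipant(name="GPT", model_id="gpt-4o-mini")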
Basic Usage
# Run the example discussion
python app.py
Custom Implementation
import asyncio

from juryLLM.model import OllamaParticipant, Judge
from juryLLM.jury import Discussion

# Create participants
participants = [
    OllamaParticipant(name="Model1", model_id="llama2:13b"),
    OllamaParticipant(name="Model2", model_id="phi3.5:3.8b"),
]

# Create judge
judge = Judge(name="Judge", model_id="llama2:13b")

# Initialize discussion
discussion = Discussion(participants=participants, judge=judge)

# Run discussion: discuss() is an async generator, so it must be
# consumed inside an async function
async def main():
    your_case_study = "..."  # replace with your own case study text
    async for response in discussion.discuss(your_case_study):
        print(response)

asyncio.run(main())
- Asynchronous Processing: asyncio for non-blocking operations
- Modular Architecture
- Judge Implementation
- Error Handling (see the sketch below)
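The error-handling feature is not detailed in the README; from the caller's side, a purely illustrative defensive pattern around the streaming loop (using the API shown under Custom Implementation) could look like this:

async def run_safely(discussion, case_study):
    try:
        async for response in discussion.discuss(case_study):
            print(response)
    except Exception as exc:
        # Surface model or back-end failures without crashing the caller
        print(f"Discussion aborted: {exc}")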
Complex Problem Solving
case_study = """
Case: Complex mathematical problem with multiple rules
Rules:
1. Rule one details...
2. Rule two details...
Questions:
1. Question one...
2. Question two...
"""
Decision Making
case_study = """
Case: Ethical decision scenario
Context: [Scenario details]
Questions to consider:
1. Ethical implications
2. Practical considerations
"""
Further guidance covers model selection, prompt engineering, performance optimization, and common issues and their solutions.
Note: This is an experimental project aimed at exploring collaborative AI approaches. The system is under active development and subject to change.