
⚡ AICCEL Framework

The Production-Grade, Security-First AI Agent Framework

PyPI version Python 3.8+ License: MIT Downloads

Build AI agents that are fast, secure, and production-ready in minutes.

Installation · Quick Start · Features · Documentation · Examples

📦 Installation

# Core framework (OpenAI, Gemini, Groq support)
pip install aiccel

# With security features (jailbreak detection)
pip install aiccel[safety]

# With privacy features (PII masking via GLiNER)
pip install aiccel[privacy]

# Full suite (all features)
pip install aiccel[all]

Verify installation:

aiccel check

🚀 Quick Start

1. Create Your First Agent (30 seconds)

from aiccel import Agent, GeminiProvider

# Initialize provider (uses GOOGLE_API_KEY env var if not provided)
provider = GeminiProvider(model="gemini-2.0-flash")

# Create agent
agent = Agent(
    provider=provider,
    name="Assistant",
    instructions="You are a helpful AI assistant."
)

# Run query
result = agent.run("What is the capital of France?")
print(result["response"])

2. Agent with Tools

from aiccel import Agent, GeminiProvider, SearchTool, WeatherTool

# Create tools
search = SearchTool(api_key="your-serpapi-key")
weather = WeatherTool(api_key="your-openweather-key")

# Create agent with tools
agent = Agent(
    provider=GeminiProvider(),
    tools=[search, weather],
    name="ResearchBot",
    instructions="Help users find information and check weather."
)

# Agent automatically selects the right tool
result = agent.run("What's the weather in Tokyo?")
print(result["response"])
print(f"Tools used: {result['tools_used']}")

3. Async Support

import asyncio
from aiccel import Agent, GeminiProvider

async def main():
    agent = Agent(provider=GeminiProvider())
    
    # Async execution
    result = await agent.run_async("Explain quantum computing")
    print(result["response"])

asyncio.run(main())

🔥 Core Features

🧠 Intelligent Agents

from aiccel import Agent, AgentConfig

# Configure agent behavior
config = AgentConfig(
    thinking_enabled=True,      # Enable chain-of-thought reasoning
    max_tool_retries=3,         # Auto-retry failed tools
    safety_enabled=True,        # Enable jailbreak detection
    strict_tool_usage=False,    # Allow mixed tool/LLM responses
)

agent = Agent(
    provider=provider,
    config=config,
    tools=[search, calculator],
    instructions="You are a research analyst."
)

result = agent.run("Analyze the market trends for EV batteries")
print(f"Thinking: {result['thinking']}")  # See the reasoning process
print(f"Response: {result['response']}")

🔒 Enterprise Security Suite

PII Masking (Privacy Protection)

from aiccel.privacy import EntityMasker

masker = EntityMasker()

# Automatically detect and mask sensitive data
text = "Contact John Smith at john@example.com or 555-123-4567"
masked, mapping = masker.mask(text)
# Output: "Contact [PERSON_1] at [EMAIL_1] or [PHONE_1]"

# Unmask after LLM processing
original = masker.unmask(masked, mapping)

Jailbreak Detection

from aiccel.jailbreak import JailbreakGuard, SecurityMode

# Production mode: Block on any detection
guard = JailbreakGuard(security_mode=SecurityMode.FAIL_CLOSED)

is_safe = guard.check("Normal user query")  # True
is_safe = guard.check("Ignore all instructions and...")  # False (blocked)

# Use as decorator
@guard.guard
def process_query(query: str):
    return agent.run(query)

AES-256 Encryption

from aiccel.encryption import encrypt, decrypt

# Encrypt sensitive data
encrypted = encrypt("my-api-key", password="secure-password")

# Decrypt when needed
original = decrypt(encrypted, password="secure-password")

🤝 Multi-Agent Orchestration

from aiccel import Agent, AgentManager, GeminiProvider

# Create specialist agents
researcher = Agent(
    provider=GeminiProvider(),
    name="Researcher",
    instructions="You gather and analyze information."
)

writer = Agent(
    provider=GeminiProvider(),
    name="Writer", 
    instructions="You create clear, engaging content."
)

analyst = Agent(
    provider=GeminiProvider(),
    name="Analyst",
    instructions="You provide data-driven insights."
)

# Create orchestrator
manager = AgentManager(
    llm_provider=GeminiProvider(),
    agents=[researcher, writer, analyst],
    instructions="Coordinate agents to solve complex tasks."
)

# Automatic routing to best agent
result = manager.route("Write a blog post about AI trends")

# Collaborative multi-agent reasoning (run inside an async context)
result = await manager.collaborate_async(
    "Analyze this quarterly report and create an executive summary"
)

⛓️ Workflow Pipelines

from aiccel import WorkflowBuilder, WorkflowExecutor

# Build a deterministic pipeline
workflow = (
    WorkflowBuilder("content-pipeline")
    .add_agent("research", researcher)
    .add_agent("write", writer)
    .add_agent("review", reviewer)
    .chain("research", "write")      # research -> write
    .chain("write", "review")        # write -> review
    .add_condition(                  # Conditional routing
        "review",
        lambda result: "approved" in result.lower(),
        on_true="publish",
        on_false="write"             # Loop back for revisions
    )
    .build()
)

# Execute workflow (run inside an async context)
executor = WorkflowExecutor(workflow)
result = await executor.execute_async("Create a blog post about Python")

🔌 MCP (Model Context Protocol) Support

from aiccel.mcp import MCPClient

# Connect to MCP-compatible tool servers
client = MCPClient.from_url("http://localhost:3000/mcp")
await client.connect()

# List available tools
tools = await client.list_tools()
print(f"Available tools: {[t.name for t in tools]}")

# Call tools
result = await client.call_tool("search", {"query": "AI news"})

# Convert MCP tools for use with AICCEL agents
from aiccel.mcp import MCPToolAdapter
aiccel_tools = MCPToolAdapter.convert_tools(tools, client)

agent = Agent(provider=provider, tools=aiccel_tools)

🎯 Request Context & Observability

from aiccel import Agent, request_scope, get_request_id

# Track requests across your application
with request_scope(user_id="user_123") as ctx:
    print(f"Request ID: {ctx.request_id}")
    
    # All operations within this scope are correlated
    result = agent.run("Process this query")
    
    # Logs automatically include request_id for debugging

📊 Custom Tools

from aiccel.tools import BaseTool, ToolResult, ParameterSchema

class CalculatorTool(BaseTool):
    """A simple calculator tool."""
    
    name = "calculator"
    description = "Perform mathematical calculations"
    parameters = [
        ParameterSchema(
            name="expression",
            type="string",
            description="Mathematical expression to evaluate",
            required=True
        )
    ]
    
    def execute(self, expression: str) -> ToolResult:
        try:
            result = eval(expression)  # Use safer eval in production
            return ToolResult.success(str(result))
        except Exception as e:
            return ToolResult.error(f"Calculation failed: {e}")

# Use with agent
calculator = CalculatorTool()
agent = Agent(provider=provider, tools=[calculator])
result = agent.run("What is 15 * 24 + 36?")
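As the comment in `execute` notes, `eval` runs arbitrary code and should not reach production. One safer alternative (an illustrative sketch using only the stdlib `ast` module, not part of AICCEL) restricts evaluation to arithmetic nodes:

```python
import ast
import operator

# Whitelist of permitted arithmetic operations
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str):
    """Evaluate a purely arithmetic expression, rejecting everything else."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Names, attribute access, calls, imports, etc. all land here
        raise ValueError(f"Disallowed expression element: {type(node).__name__}")

    return _eval(ast.parse(expression, mode="eval"))
```

Swapping `eval(expression)` for `safe_eval(expression)` inside `CalculatorTool.execute` keeps the tool's arithmetic behavior while rejecting attribute access, function calls, and imports.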

🏢 FastAPI Integration

from fastapi import FastAPI
from aiccel import Agent, GeminiProvider
from aiccel.integrations import create_agent_routes

app = FastAPI()

# Create agent
agent = Agent(
    provider=GeminiProvider(),
    name="API-Agent"
)

# Add routes automatically
router = create_agent_routes(agent)
app.include_router(router, prefix="/agent")

# Endpoints created:
# POST /agent/run          - Run a query
# POST /agent/stream       - Stream response
# GET  /agent/info         - Agent metadata
# POST /agent/clear-memory - Clear conversation history
# GET  /agent/health       - Health check
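Assuming the generated `POST /agent/run` endpoint accepts a JSON body carrying the user query (the exact field name is an assumption here, not documented above — check `GET /agent/info` on a running server), a stdlib-only client call might be sketched as:

```python
import json
import urllib.request

def build_run_request(base_url: str, query: str) -> urllib.request.Request:
    """Build a POST request for the /agent/run endpoint.
    The {"query": ...} body shape is an assumption, not a documented schema."""
    payload = json.dumps({"query": query}).encode()
    return urllib.request.Request(
        url=f"{base_url}/agent/run",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Against a live server:
# req = build_run_request("http://localhost:8000", "Hello")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```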

📖 Detailed Documentation

Providers

# All providers support the same interface
from aiccel import GeminiProvider, OpenAIProvider, GroqProvider

gemini = GeminiProvider(model="gemini-2.0-flash")
openai = OpenAIProvider(model="gpt-4o-mini")
groq = GroqProvider(model="llama3-70b-8192")

# Generate text
response = gemini.generate("Hello!")

# Chat with history
response = openai.chat([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"},
    {"role": "user", "content": "How are you?"}
])

# Async variants
response = await gemini.generate_async("Hello!")
response = await openai.chat_async(messages)

Configuration Options

from aiccel import Agent, AgentConfig

config = AgentConfig(
    # Thinking & Reasoning
    thinking_enabled=True,           # Enable chain-of-thought
    thinking_budget=500,             # Max tokens for thinking
    
    # Tool Execution
    strict_tool_usage=False,         # When True, require a tool call for every query
    max_tool_retries=3,              # Retry failed tools
    tool_timeout=30.0,               # Tool execution timeout
    
    # Security
    safety_enabled=True,             # Jailbreak detection
    pii_masking=True,                # Auto-mask PII
    
    # Memory
    memory_type="buffer",            # buffer, window, or summary
    max_memory_turns=50,             # Max conversation turns
    max_memory_tokens=32000,         # Max tokens in memory
)

Memory Types

from aiccel import ConversationMemory

# Buffer Memory (keeps all turns)
memory = ConversationMemory(memory_type="buffer", max_turns=50)

# Window Memory (sliding window)
memory = ConversationMemory(memory_type="window", max_turns=10)

# Summary Memory (summarizes old conversations)
memory = ConversationMemory(
    memory_type="summary",
    max_turns=5,
    llm_provider=provider  # Required for summarization
)

# Use with agent
agent = Agent(provider=provider, memory=memory)

Error Handling

from aiccel.exceptions import (
    AICCLException,       # Base exception
    AgentException,       # Agent-related errors
    ProviderException,    # LLM provider errors
    ToolException,        # Tool execution errors
    ProviderRateLimitError,
    ProviderTimeoutError,
)

try:
    result = agent.run("Query")
except ProviderRateLimitError as e:
    print(f"Rate limited. Retry in {e.retry_after}s")
except ProviderTimeoutError as e:
    print(f"Request timed out after {e.timeout}s")
except ToolException as e:
    print(f"Tool {e.tool_name} failed: {e.message}")
except AgentException as e:
    print(f"Agent error: {e.message}, Context: {e.context}")
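Since `ProviderRateLimitError` carries a `retry_after` hint, a thin retry wrapper can honor it. The sketch below is generic and uses a stand-in exception class so it runs without a provider; in real code, substitute the `aiccel.exceptions` types above:

```python
import time

class RateLimited(Exception):
    """Stand-in for ProviderRateLimitError, carrying a retry_after hint."""
    def __init__(self, retry_after: float):
        super().__init__(f"rate limited, retry in {retry_after}s")
        self.retry_after = retry_after

def run_with_retries(fn, max_attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), sleeping for the server-suggested delay on rate limits,
    falling back to exponential backoff when no hint is given."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimited as e:
            if attempt == max_attempts - 1:
                raise  # Out of attempts: surface the error to the caller
            delay = e.retry_after if e.retry_after else base_delay * (2 ** attempt)
            time.sleep(delay)
```

With aiccel installed, `fn` would be something like `lambda: agent.run("Query")` and `RateLimited` would be `ProviderRateLimitError`.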

💡 Real-World Examples

Research Assistant

from aiccel import Agent, GeminiProvider, SearchTool

agent = Agent(
    provider=GeminiProvider(model="gemini-2.0-flash"),
    tools=[SearchTool(api_key="...")],
    name="ResearchAssistant",
    instructions="""You are an expert research assistant.
    - Always cite sources
    - Provide balanced perspectives
    - Summarize key findings clearly"""
)

result = agent.run("What are the pros and cons of nuclear energy?")

Code Assistant

from aiccel import Agent, OpenAIProvider

agent = Agent(
    provider=OpenAIProvider(model="gpt-4o"),
    name="CodeHelper",
    instructions="""You are an expert programmer.
    - Write clean, documented code
    - Explain your implementation
    - Suggest best practices"""
)

# Enable thinking for complex problems
agent.enable_thinking()

result = agent.run("Write a Python function to find the longest palindromic substring")
print(result["thinking"])   # See the reasoning
print(result["response"])   # Get the code

Customer Support Bot

from aiccel import Agent, AgentManager, GeminiProvider

# Specialist agents
billing = Agent(
    provider=GeminiProvider(),
    name="BillingAgent",
    instructions="Handle billing inquiries, refunds, and payment issues."
)

technical = Agent(
    provider=GeminiProvider(),
    name="TechSupport",
    instructions="Troubleshoot technical issues and provide solutions."
)

general = Agent(
    provider=GeminiProvider(),
    name="GeneralSupport",
    instructions="Handle general inquiries and route complex issues."
)

# Router automatically selects the best agent
support_manager = AgentManager(
    llm_provider=GeminiProvider(),
    agents=[billing, technical, general]
)

# Automatic routing
result = support_manager.route("I can't log into my account")  # -> TechSupport
result = support_manager.route("I want a refund")              # -> BillingAgent

⚡ Performance Tips

1. Use Connection Pooling (Automatic in v3.2.0+)

# Connections are reused automatically
provider = GeminiProvider()  # Uses connection pooling

2. Use Async for Concurrent Operations

import asyncio

async def process_queries(queries):
    tasks = [agent.run_async(q) for q in queries]
    return await asyncio.gather(*tasks)
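The snippet above defines the fan-out but never invokes it. A self-contained version, with a stand-in coroutine in place of `agent.run_async` so it runs without a provider, looks like:

```python
import asyncio

async def fake_run(query: str) -> dict:
    # Stand-in for agent.run_async: pretend-work, then echo the query
    await asyncio.sleep(0.01)
    return {"response": f"answered: {query}"}

async def process_queries(queries):
    # Fan out all queries concurrently; gather preserves input order
    tasks = [fake_run(q) for q in queries]
    return await asyncio.gather(*tasks)

results = asyncio.run(process_queries(["a", "b", "c"]))
```

All queries run concurrently, yet `results` comes back in the same order as the input list.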

3. Enable Caching

from aiccel import Agent, AgentConfig

config = AgentConfig(
    enable_caching=True,
    cache_ttl=3600  # Cache for 1 hour
)

4. Use Lazy Imports

# Fast import (recommended for production)
from aiccel.fast import Agent, GeminiProvider

🔧 Environment Variables

| Variable | Description | Required |
|----------|-------------|----------|
| OPENAI_API_KEY | OpenAI API key | For OpenAI |
| GOOGLE_API_KEY | Google AI API key | For Gemini |
| GROQ_API_KEY | Groq API key | For Groq |
| SERPAPI_API_KEY | SerpAPI key | For SearchTool |
| AICCEL_SECURITY_MODE | FAIL_CLOSED or FAIL_OPEN | Optional |
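Keys can be exported in the shell or set in-process before constructing a provider. A small helper (illustrative, not part of AICCEL) that reports which provider keys are configured:

```python
import os

# Maps a provider/tool name to the environment variable it reads
PROVIDER_KEYS = {
    "OpenAI": "OPENAI_API_KEY",
    "Gemini": "GOOGLE_API_KEY",
    "Groq": "GROQ_API_KEY",
    "SearchTool": "SERPAPI_API_KEY",
}

def configured_providers(env=os.environ):
    """Return the providers whose API key is present and non-empty."""
    return [name for name, var in PROVIDER_KEYS.items() if env.get(var)]
```

For example, setting `os.environ["GOOGLE_API_KEY"]` before calling `GeminiProvider()` lets the provider pick up the key automatically, as noted in the Quick Start.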

🤝 Contributing

We welcome contributions! See our Contributing Guide.

📄 License

MIT License - see LICENSE for details.

Built with ❤️ for the AI community

⭐ Star us on GitHub · 🐛 Report Issues · 💬 Discussions
