selectools

Production-ready Python framework for AI agents with multi-agent graphs, hybrid RAG, guardrails, audit logging, 50 evaluators, and a visual builder. Supports OpenAI, Anthropic, Gemini, Ollama. By NichevLabs.

β”Œβ”€β”β”Œβ”€β”β”¬  β”Œβ”€β”β”Œβ”€β”β”Œβ”¬β”β”Œβ”€β”β”Œβ”€β”β”¬  β”Œβ”€β”
β””β”€β”β”œβ”€ β”‚  β”œβ”€ β”‚   β”‚ β”‚ β”‚β”‚ β”‚β”‚  └─┐
β””β”€β”˜β””β”€β”˜β”΄β”€β”˜β””β”€β”˜β””β”€β”˜ β”΄ β””β”€β”˜β””β”€β”˜β”΄β”€β”˜β””β”€β”˜


An open-source project from NichevLabs.

Multi-agent orchestration in plain Python. Build agent graphs, compose pipelines with |, deploy with one command. No DSL, no compile step, no paid debugger. Works with OpenAI, Anthropic, Gemini, and Ollama.

3 Ways to Build

# 1. Single agent β€” 5 lines
agent = Agent(tools=[search, calculate], provider=OpenAIProvider())
result = agent.run("What is 15 * 7?")

# 2. Multi-agent graph β€” 1 line
result = AgentGraph.chain(planner, writer, reviewer).run("Write a blog post")

# 3. Deploy β€” 1 command
# selectools serve agent.yaml

What's New in v0.23

v0.23.0 β€” Supabase Sessions + Builder RAG

Two user-facing features, plus a post-ship bug-hunt sweep that landed 8 code-generator fixes in the visual builder.

  • SupabaseSessionStore β€” 4th SessionStore backend alongside JSON, SQLite, and Redis. Postgres-backed via Supabase PostgREST, with idempotent upserts, namespace isolation, and the same validation guards as RedisSessionStore. Optional dep: pip install selectools[supabase]. Demo: examples/96_supabase_session_store.py.

  • Visual builder: first-class RAG + session nodes β€” drag Retriever (RAG) onto the canvas and pick any of 7 vector stores (memory, SQLite, Chroma, Pinecone, FAISS, Qdrant, pgvector), toggle Hybrid (BM25 + vector + RRF) and cross-encoder Rerank. Drag Session Store as a resource node and wire it into an agent via the new Session Store dropdown. Two new presets: Hybrid RAG and Multi-Tenant RAG. Python + YAML code generators emit real, runnable code.

from supabase import create_client
from selectools import SupabaseSessionStore, Agent, AgentConfig

store = SupabaseSessionStore(client=create_client(URL, KEY))
agent = Agent(
    tools=[...],
    config=AgentConfig(session_store=store, session_id="u-1", max_iterations=5),
)

See CHANGELOG.md for the full entry including the 8 builder code-gen fixes.

What's New in v0.22

v0.22.0 β€” Competitor-Informed Bug Fixes

22 bugs identified by mining 95+ closed bug reports from Agno (39k stars) and 60+ from PraisonAI (6.9k stars), then cross-referencing the patterns against selectools v0.21.0 source code. Six were shipping blockers. All 22 are now fixed with TDD regression tests.

# BUG-02: typing.Literal now supported in @tool()
from typing import Literal
from selectools.tools import tool

@tool()
def set_mode(mode: Literal["fast", "slow", "auto"]) -> str:
    return f"mode={mode}"

# BUG-14: session namespace isolation
store.save("session_123", memory_a, namespace="agent_a")
store.save("session_123", memory_b, namespace="agent_b")  # No collision

# BUG-21: opt-in vector store search dedup
results = store.search(query_embedding=emb, top_k=10, dedup=True)

# BUG-03: sync APIs now work in Jupyter / FastAPI handlers
agent.run("hello")  # Just works inside async contexts
  • 6 HIGH severity (shipping blockers): streaming dropped tool calls, typing.Literal crashed @tool(), asyncio.run() re-entry in 8 sync wrappers, HITL silently lost in parallel groups + subgraphs, ConversationMemory had no thread lock
  • 9 MEDIUM severity: <think> tag stripping, RAG batch limits, MCP concurrent race, strβ†’int/float/bool argument coercion, Union[str, int] support, multi-interrupt generators, GraphState fail-fast validation, session namespace isolation, summary growth cap
  • 7 LOW-MED severity: cancelled-result extraction, AgentTrace lock, async observer exception logging, batch clone isolation, OTel/Langfuse observer locks, vector store search dedup, Optional[T] without default handling
  • +57 new regression tests in tests/agent/test_regression.py, each with empirical fault-injection verification (test fails without fix, passes after)
  • Thread safety end-to-end correct across ConversationMemory, AgentTrace, OTelObserver, LangfuseObserver, MCPClient, FallbackProvider, batch clone isolation

See CHANGELOG.md for the full per-bug breakdown with cross-references to every original Agno/PraisonAI issue.

What's New in v0.21

v0.21.0 β€” Connector Expansion

Seven new subsystems land at once: three vector stores, four document loaders, eight new toolbox tools, multimodal messages, an Azure OpenAI provider, and two observability backends.

# New vector stores
from selectools.rag.stores import FAISSVectorStore, QdrantVectorStore, PgVectorStore

# New provider
from selectools import AzureOpenAIProvider

# New observers
from selectools.observe import OTelObserver, LangfuseObserver

# Multimodal messages
from selectools import image_message
agent.run([image_message("./screenshot.png", "What does this UI show?")])
  • Vector stores: FAISSVectorStore (in-process, persistable), QdrantVectorStore (REST + gRPC), PgVectorStore (PostgreSQL pgvector extension)
  • Document loaders: DocumentLoader.from_csv, from_json, from_html, from_url
  • Toolbox: execute_python, execute_shell, web_search, scrape_url, github_search_repos, github_get_file, github_list_issues, query_sqlite, query_postgres
  • Multimodal: Message.content accepts list[ContentPart]; image input works on OpenAI, Anthropic, Gemini, and Ollama vision models
  • Azure OpenAI: deployment-name routing, AAD token auth, env-var fallback (AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY)
  • OpenTelemetry: OTelObserver emits GenAI semantic-convention spans (Jaeger, Tempo, Datadog, Honeycomb, Grafana)
  • Langfuse: LangfuseObserver ships traces, generations, and spans to Langfuse Cloud or self-hosted
pip install "selectools[rag]"      # FAISS + Qdrant + beautifulsoup4 (HTML CSS selectors)
pip install "selectools[observe]"  # OpenTelemetry + Langfuse
pip install "selectools[postgres]" # pgvector (uses psycopg2-binary)

What's New in v0.20

v0.20.1 β€” Visual Agent Builder + GitHub Pages

The first AI agent framework to ship a visual graph builder in a single pip install. No React. No build step. No CDN.

Try the builder in your browser β€” no install required.


pip install selectools
selectools serve --builder
# β†’ open http://localhost:8000/builder
  • Drag START, END, and Agent nodes onto the canvas
  • Click ports to connect agents with edges
  • Add condition labels to edges (e.g. "approved") for conditional routing
  • Edit provider, model, and system prompt in the properties panel
  • Generated Python and YAML update live in the code panel
  • Export or copy to clipboard with one click

What's New in v0.19

v0.19.3 β€” Stability Markers Applied to All Public APIs

Every public class and function exported from selectools now carries a stability marker:

from selectools import Agent, AgentGraph, PlanAndExecuteAgent

print(Agent.__stability__)               # "stable"
print(AgentGraph.__stability__)          # "beta"
print(PlanAndExecuteAgent.__stability__) # "beta"

@stable β€” 60+ core symbols (Agent, AgentConfig, providers, memory, tools, evals, guardrails, sessions, knowledge, cache, cancellation)

@beta β€” 30+ newer symbols (AgentGraph, SupervisorAgent, Pipeline, @step, parallel, branch, all four patterns, compose)

v0.19.2 β€” Enterprise Hardening

from selectools.stability import stable, beta, deprecated
from selectools import trace_to_html

# Mark your own extensions with stability levels
@stable
class MyProductionAgent: ...

@beta
class MyExperimentalFeature: ...

@deprecated(since="0.19", replacement="MyProductionAgent")
class MyOldAgent: ...

# Visualise any trace as a waterfall HTML timeline
Path("trace.html").write_text(trace_to_html(result.trace))
  • Stability markers β€” @stable, @beta, @deprecated(since, replacement) for public API signalling
  • Trace HTML viewer β€” trace_to_html(trace) renders a standalone waterfall timeline
  • Deprecation policy β€” 2-minor-version window, programmatic introspection via .__stability__
  • Security audit β€” all 41 # nosec annotations reviewed and published in docs/SECURITY.md
  • Quality infrastructure β€” property-based tests (Hypothesis), thread-safety smoke suite, 5 new production simulations (5332 tests total)

v0.19.1 β€” Advanced Agent Patterns

from selectools.patterns import PlanAndExecuteAgent, ReflectiveAgent, DebateAgent, TeamLeadAgent

# PlanAndExecute β€” planner generates typed steps, executor runs them sequentially
agent = PlanAndExecuteAgent(planner=planner, executor=executor, provider=provider)
result = agent.run("Research and write a blog post about LLM safety")

# ReflectiveAgent β€” actor drafts, critic reviews, actor revises until approved
agent = ReflectiveAgent(actor=actor, critic=critic, provider=provider, max_reflections=3)
result = agent.run("Draft a product announcement email")

# DebateAgent β€” multiple agents argue, judge synthesizes conclusion
agent = DebateAgent(agents={"optimist": opt, "skeptic": skep}, judge=judge, provider=provider)
result = agent.run("Should we migrate our infrastructure to microservices?")

# TeamLeadAgent β€” lead delegates subtasks, team executes in parallel or sequentially
agent = TeamLeadAgent(lead=lead, team={"researcher": r, "writer": w}, provider=provider)
result = agent.run("Produce a competitive analysis report")
  • PlanAndExecuteAgent β€” Typed PlanStep list; optional replanning on step failure
  • ReflectiveAgent β€” Actor–critic loop with ReflectionRound records per revision
  • DebateAgent β€” N-agent debate with transcript, judge synthesis, DebateResult
  • TeamLeadAgent β€” sequential, parallel, or dynamic delegation strategies

v0.19.0 β€” Serve, Deploy & Complete Composition

# One command deploys your agent over HTTP with SSE streaming
# selectools serve agent.yaml

# Compose tools into a single callable
from selectools import compose
search_and_summarize = compose(search_web, summarize)

# Streaming composition
async for chunk in pipeline.astream("input"):
    print(chunk)
  • selectools serve β€” HTTP deployment with SSE streaming, Playground UI, /health, /schema
  • YAML config β€” AgentConfig.from_yaml("agent.yaml"), 5 built-in templates
  • compose() β€” Chain tools into composite tool; retry() and cache_step() wrappers
  • PostgresCheckpointStore β€” Durable graph checkpointing backed by PostgreSQL
v0.18.x highlights

v0.18.0 β€” Multi-Agent Orchestration

from selectools import AgentGraph, SupervisorAgent, AgentConfig, OpenAIProvider, tool

# Build a multi-agent graph in plain Python β€” no DSL, no compile step
graph = AgentGraph()
graph.add_node("planner", planner_agent)
graph.add_node("writer", writer_agent)
graph.add_node("reviewer", reviewer_agent)
graph.add_edge("planner", "writer")
graph.add_edge("writer", "reviewer")
graph.add_edge("reviewer", AgentGraph.END)
graph.set_entry("planner")
result = graph.run("Write a blog post about AI safety")

# Or use SupervisorAgent for automatic coordination
supervisor = SupervisorAgent(
    agents={"researcher": researcher, "writer": writer},
    provider=OpenAIProvider(),
    strategy="plan_and_execute",  # also: round_robin, dynamic, magentic
)
result = supervisor.run("Write a comprehensive report on LLM safety")
  • AgentGraph β€” Directed graph of agent nodes with plain Python routing
  • 4 Supervisor Strategies β€” plan_and_execute, round_robin, dynamic, magentic (Magentic-One pattern)
  • Human-in-the-Loop β€” Generator nodes with yield InterruptRequest() β€” resumes at exact yield point (LangGraph restarts the whole node)
  • Parallel Execution β€” add_parallel_nodes() with 3 merge policies (LAST_WINS, FIRST_WINS, APPEND)
  • Checkpointing β€” 3 backends (InMemory, File, SQLite) for durable mid-graph persistence
  • Subgraph Composition β€” Nest graphs inside graphs with explicit state mapping
  • ModelSplit β€” Separate planner/executor models for 70-90% cost reduction
  • Loop & Stall Detection β€” State hash tracking with observer events
  • 10 New StepTypes β€” Full trace visibility into graph execution
  • 13 New Observer Events β€” on_graph_start/end, on_node_start/end, on_graph_interrupt/resume, and more

v0.18.0 β€” Composable Pipelines

from selectools import Pipeline, step, parallel, branch

@step
def summarize(text: str) -> str:
    return agent.run(f"Summarize: {text}").content

@step
def translate(text: str, lang: str = "es") -> str:
    return agent.run(f"Translate to {lang}: {text}").content

# Compose with | operator
pipeline = summarize | translate
result = pipeline.run("Long article text here...")

# Fan-out to multiple steps, merge results
research = parallel(search_web, search_docs, search_db)

# Conditional branching
route = branch(
    lambda x: "technical" if "code" in x else "general",
    technical=code_review_pipeline,
    general=summarize_pipeline,
)
  • Pipeline β€” Chain steps sequentially with | operator or Pipeline(steps=[...])
  • @step decorator β€” Wrap any sync/async callable into a composable pipeline step
  • parallel() β€” Fan-out to multiple steps and merge results
  • branch() β€” Conditional routing based on input data
v0.17.x highlights

v0.17.7 β€” Caching & Context

from selectools.cache_semantic import SemanticCache
from selectools.embeddings.openai import OpenAIEmbeddingProvider

# Semantic cache β€” cache hits for paraphrased queries
cache = SemanticCache(
    embedding_provider=OpenAIEmbeddingProvider(),
    similarity_threshold=0.92,
)
config = AgentConfig(cache=cache)
# "Weather in NYC?" hits cache for "What's the weather in New York City?"

# Prompt compression β€” prevent context-window overflow
config = AgentConfig(
    compress_context=True,
    compress_threshold=0.75,   # trigger at 75% context fill
    compress_keep_recent=4,    # keep last 4 turns verbatim
)

# Conversation branching β€” fork history for A/B exploration
branch = agent.memory.branch()   # independent snapshot
store.branch("main", "experiment")  # fork a persisted session

v0.17.6 β€” Quick Wins

from selectools import AgentConfig, REASONING_STRATEGIES, tool

# Reasoning strategies β€” guide the LLM's thought process
config = AgentConfig(reasoning_strategy="react")   # Thought β†’ Action β†’ Observation
config = AgentConfig(reasoning_strategy="cot")      # Chain-of-Thought step-by-step
config = AgentConfig(reasoning_strategy="plan_then_act")  # Plan first, then execute

# Tool result caching β€” skip re-execution for identical calls
@tool(description="Search the web", cacheable=True, cache_ttl=60)
def web_search(query: str) -> str:
    return expensive_api_call(query)

Also: Python 3.9–3.13 CI matrix (verified zero compatibility issues).

v0.17.4 and earlier

v0.17.4 β€” Agent Intelligence

from selectools import AgentConfig, estimate_run_tokens, KnowledgeMemory, SQLiteKnowledgeStore

# Pre-execution token estimation
estimate = estimate_run_tokens(messages, tools, system_prompt, model="gpt-4o")
print(f"{estimate.total_tokens} tokens, {estimate.remaining_tokens} remaining")

# Model switching β€” cheap for tools, expensive for reasoning
config = AgentConfig(
    model="claude-haiku-4-5",
    model_selector=lambda i, tc, u: "claude-sonnet-4-6" if i > 2 else "claude-haiku-4-5",
)

# Knowledge memory with pluggable stores and importance scoring
memory = KnowledgeMemory(store=SQLiteKnowledgeStore("knowledge.db"), max_entries=50)
memory.remember("User prefers dark mode", category="preference", importance=0.9, ttl_days=30)

v0.17.3 β€” Agent Runtime Controls

from selectools import AgentConfig, CancellationToken, SimpleStepObserver
from selectools.tools import tool

# Token/cost budget β€” stop before burning money
config = AgentConfig(max_total_tokens=50000, max_cost_usd=0.20)

# Cooperative cancellation from any thread
token = CancellationToken()
result = await agent.arun("long task", cancel_token=token)
# token.cancel()  ← from UI handler, supervisor, timeout manager

# Per-tool approval gate
@tool(requires_approval=True, description="Send email to customer")
def send_email(to: str, subject: str, body: str) -> str: ...

# Single-callback observer for SSE streaming
config = AgentConfig(observers=[SimpleStepObserver(
    lambda event, run_id, **data: sse_send({"type": event, **data})
)])

v0.17.1 β€” MCP Client/Server

from selectools.mcp import mcp_tools, MCPServerConfig

with mcp_tools(MCPServerConfig(command="python", args=["server.py"])) as tools:
    agent = Agent(provider=provider, tools=tools, config=config)
  • MCPClient β€” stdio + HTTP transport, circuit breaker, retry, tool caching
  • MultiMCPClient β€” multiple servers, graceful degradation, name prefixing
  • MCPServer β€” expose @tool functions as MCP server

v0.17.0 β€” Built-in Eval Framework

from selectools.evals import EvalSuite, TestCase

suite = EvalSuite(agent=agent, cases=[
    TestCase(input="Cancel account", expect_tool="cancel_sub", expect_no_pii=True),
    TestCase(input="Balance?", expect_contains="balance", expect_latency_ms_lte=500),
])
report = suite.run()
report.to_html("report.html")
  • 50 Evaluators β€” 30 deterministic + 21 LLM-as-judge
  • A/B Testing, regression detection, snapshot testing
  • HTML reports, JUnit XML, CLI, GitHub Action integration

Full changelog: CHANGELOG.md

v0.16.x highlights
  • v0.16.6: Gemini 3.x thought_signature crash fix β€” base64 round-trip for non-UTF-8 binary signatures
  • v0.16.5: Design Patterns & Code Quality β€” terminal actions, async observers, Gemini 3.x thought signatures, agent decomposition, hooks deprecated
  • v0.16.4: Parallel execution safety β€” coherence + screening in parallel, guardrail immutability, streaming usage tracking
  • v0.16.0: Memory & Persistence β€” persistent sessions (3 backends), summarize-on-trim, entity memory, knowledge graph
v0.15.x highlights
  • v0.15.0: Enterprise Reliability β€” Guardrails engine (5 built-in), audit logging (4 privacy levels), tool output screening (15 patterns), coherence checking
v0.14.x highlights
  • v0.14.1: Critical streaming fix β€” 13 bugs fixed across all providers; 141 new tests (total: 1100)
  • v0.14.0: AgentObserver Protocol (25 events), 145 models with March 2026 pricing, OpenAI max_completion_tokens auto-detection, 11 bug fixes

Coming from LangChain?

| LangChain/LangGraph | selectools |
| --- | --- |
| `StateGraph` + `add_node` + `add_edge` + `compile()` | `AgentGraph.chain(a, b, c).run(prompt)` |
| LCEL `prompt \| llm \| parser` with Runnable protocol | `@step` + `\|` on plain functions |
| `interrupt()` restarts the whole node on resume | `yield InterruptRequest()` resumes at yield point |
| LangSmith (paid) for tracing and evals | Built-in: 50 evaluators + traces, zero cost |
| 5+ packages (langchain-core, langgraph, langsmith...) | 1 package: `pip install selectools` |
| langserve for deployment | `selectools serve agent.yaml` |

Full migration guide with code examples: Coming from LangChain

Why Selectools

| Capability | What You Get |
| --- | --- |
| Provider Agnostic | Switch between OpenAI, Anthropic, Gemini, Ollama with one line. Your tools stay identical. |
| Structured Output | Pydantic or JSON Schema response_format with auto-retry on validation failure. |
| Execution Traces | Every run() returns result.trace β€” structured timeline of LLM calls, tool picks, and executions. |
| Reasoning Visibility | result.reasoning surfaces why the agent chose a tool, extracted from LLM responses. |
| Provider Fallback | FallbackProvider tries providers in priority order with circuit breaker on failure. |
| Batch Processing | agent.batch() / agent.abatch() for concurrent multi-prompt classification. |
| Tool Policy Engine | Declarative allow/review/deny rules with glob patterns. Human-in-the-loop approval callbacks. |
| Hybrid Search | BM25 keyword + vector semantic search with RRF/weighted fusion and cross-encoder reranking. |
| Advanced Chunking | Fixed, recursive, semantic (embedding-based), and contextual (LLM-enriched) chunking strategies. |
| E2E Streaming | Token-level astream() with native tool call support. Parallel tool execution via asyncio.gather. |
| Dynamic Tools | Load tools from files/directories at runtime. Add, remove, replace tools without restarting. |
| Response Caching | LRU + TTL in-memory cache and Redis backend. Avoid redundant LLM calls for identical requests. |
| Routing Mode | Agent selects a tool without executing it. Use for intent classification and request routing. |
| Guardrails Engine | Input/output validation pipeline with PII redaction, topic blocking, toxicity detection, and format enforcement. |
| Audit Logging | JSONL audit trail with privacy controls (redact, hash, omit) and daily rotation. |
| Tool Output Screening | Prompt injection detection with 15 built-in patterns. Per-tool or global. |
| Coherence Checking | LLM-based verification that tool calls match user intent β€” catches injection-driven tool misuse. |
| Persistent Sessions | SessionStore with JSON file, SQLite, and Redis backends. Auto-save/load with TTL expiry. |
| Entity Memory | LLM-based entity extraction with deduplication, LRU pruning, and system prompt injection. |
| Knowledge Graph | Relationship triple extraction with in-memory and SQLite storage and keyword-based querying. |
| Cross-Session Knowledge | Daily logs + persistent facts with auto-registered remember tool. |
| MCP Integration | Connect to any MCP tool server (stdio + HTTP). MCPClient, MultiMCPClient, MCPServer. Circuit breaker, retry, graceful degradation. |
| Eval Framework | 50 built-in evaluators (30 deterministic + 21 LLM-as-judge). A/B testing, regression detection, snapshot testing, HTML reports, JUnit XML, CI integration. |
| Multi-Agent Orchestration | AgentGraph for directed agent graphs, SupervisorAgent with 4 strategies, HITL via generator nodes, parallel execution, checkpointing, subgraph composition. |
| Composable Pipelines | Pipeline + @step + `\|` operator + parallel() + branch() to chain agents, tools, and transforms. |
| AgentObserver Protocol | 45-event lifecycle observer with run_id/call_id correlation. Built-in LoggingObserver + SimpleStepObserver. |
| Runtime Controls | Token/cost budget limits, cooperative cancellation, per-tool approval gates, model switching per iteration. |
| Production Hardened | Retries with backoff, per-tool timeouts, iteration caps, cost warnings, observability hooks + observers. |
| Library-First | Not a framework. No magic globals, no hidden state. Use as much or as little as you need. |

What's Included

  • 5 LLM Providers: OpenAI, Azure OpenAI, Anthropic, Gemini, Ollama + FallbackProvider (auto-failover)
  • Structured Output: Pydantic / JSON Schema response_format with auto-retry
  • Execution Traces: result.trace with typed timeline of every agent step
  • Reasoning Visibility: result.reasoning explains why the agent chose a tool
  • Batch Processing: agent.batch() / agent.abatch() for concurrent classification
  • Tool Policy Engine: Declarative allow/review/deny rules with human-in-the-loop
  • 4 Embedding Providers: OpenAI, Anthropic/Voyage, Gemini (free!), Cohere
  • 7 Vector Stores: In-memory, SQLite, Chroma, Pinecone, FAISS, Qdrant, pgvector
  • Hybrid Search: BM25 + vector fusion with Cohere/Jina reranking
  • Advanced Chunking: Semantic + contextual chunking for better retrieval
  • Dynamic Tool Loading: Plugin system with hot-reload support
  • Response Caching: InMemoryCache and RedisCache with stats tracking
  • 152 Model Registry: Type-safe constants with pricing and metadata
  • Pre-built Toolbox: 24 tools for files, data, text, datetime, web
  • Persistent Sessions: 3 backends (JSON file, SQLite, Redis) with TTL
  • Entity Memory: LLM-based named entity extraction and tracking
  • Knowledge Graph: Triple extraction with in-memory and SQLite storage
  • Cross-Session Knowledge: Daily logs + persistent memory with remember tool, pluggable stores (File, SQLite), importance scoring, TTL
  • Token Budget & Cancellation: max_total_tokens, max_cost_usd hard limits; CancellationToken for cooperative stopping
  • Token Estimation: estimate_run_tokens() for pre-execution budget checks
  • Model Switching: model_selector callback for per-iteration model selection
  • Semantic Cache: SemanticCache β€” embedding-based cache hits for paraphrased queries (cosine similarity, LRU + TTL)
  • Prompt Compression: Auto-summarise old history when context window fills up; compress_context, compress_threshold, compress_keep_recent
  • Conversation Branching: ConversationMemory.branch() and SessionStore.branch() for A/B exploration and checkpointing
  • Multi-Agent Orchestration: AgentGraph with routing, parallel execution, HITL, checkpointing; SupervisorAgent with 4 strategies (plan_and_execute, round_robin, dynamic, magentic)
  • Composable Pipelines: Pipeline + @step + | operator + parallel() + branch() β€” chain agents, tools, and transforms
  • 96 Examples: Multi-agent graphs, RAG, hybrid search, streaming, structured output, traces, batch, policy, observer, guardrails, audit, sessions (incl. Supabase), entity memory, knowledge graph, eval framework, advanced agent patterns, stability markers, HTML trace viewer, and more
  • Built-in Eval Framework: 50 evaluators (30 deterministic + 21 LLM-as-judge), A/B testing, regression detection, HTML reports, JUnit XML, snapshot testing
  • AgentObserver Protocol: 45 lifecycle events with run_id correlation, LoggingObserver, SimpleStepObserver, OTel export
  • 5332 Tests: Unit, integration, regression, and E2E with real API calls

Install

pip install selectools                    # Core + basic RAG
pip install selectools[rag]               # + Chroma, Pinecone, FAISS, Qdrant, Voyage, Cohere, PyPDF, BeautifulSoup
pip install selectools[observe]           # + OpenTelemetry, Langfuse observers
pip install selectools[postgres]          # + psycopg2 (enables pgvector)
pip install selectools[cache]             # + Redis cache
pip install selectools[mcp]               # + MCP client/server
pip install "selectools[rag,observe,cache,mcp]"  # Everything

Add your provider's API key to a .env file in your project root:

OPENAI_API_KEY=sk-...
# or ANTHROPIC_API_KEY, GEMINI_API_KEY β€” whichever provider you use

Quick Start

New to Selectools? Follow the 5-minute Quickstart tutorial β€” no API key needed.

Tool Calling Agent (No API Key)

from selectools import Agent, AgentConfig, tool
from selectools.providers.stubs import LocalProvider

@tool(description="Look up the price of a product")
def get_price(product: str) -> str:
    prices = {"laptop": "$999", "phone": "$699", "headphones": "$149"}
    return prices.get(product.lower(), f"No price found for {product}")

agent = Agent(
    tools=[get_price],
    provider=LocalProvider(),
    config=AgentConfig(max_iterations=3),
)

result = agent.ask("How much is a laptop?")
print(result.content)

Tool Calling Agent (OpenAI)

from selectools import Agent, AgentConfig, OpenAIProvider, tool
from selectools.models import OpenAI

@tool(description="Search the web for information")
def search(query: str) -> str:
    return f"Results for: {query}"

agent = Agent(
    tools=[search],
    provider=OpenAIProvider(default_model=OpenAI.GPT_4O_MINI.id),
    config=AgentConfig(max_iterations=5),
)

result = agent.ask("Search for Python tutorials")
print(result.content)

RAG Agent

from selectools import OpenAIProvider
from selectools.embeddings import OpenAIEmbeddingProvider
from selectools.models import OpenAI
from selectools.rag import RAGAgent, VectorStore

embedder = OpenAIEmbeddingProvider(model=OpenAI.Embeddings.TEXT_EMBEDDING_3_SMALL.id)
store = VectorStore.create("memory", embedder=embedder)

agent = RAGAgent.from_directory(
    directory="./docs",
    provider=OpenAIProvider(default_model=OpenAI.GPT_4O_MINI.id),
    vector_store=store,
    chunk_size=500, top_k=3,
)

result = agent.ask("What are the main features?")
print(result.content)
print(agent.get_usage_summary())  # LLM + embedding costs

Hybrid Search (Keyword + Semantic)

from selectools.rag import BM25, HybridSearcher, FusionMethod, HybridSearchTool, VectorStore

store = VectorStore.create("memory", embedder=embedder)
store.add_documents(chunked_docs)

searcher = HybridSearcher(
    vector_store=store,
    vector_weight=0.6,
    keyword_weight=0.4,
    fusion=FusionMethod.RRF,
)
searcher.add_documents(chunked_docs)

# Use with agent
hybrid_tool = HybridSearchTool(searcher=searcher, top_k=5)
agent = Agent(tools=[hybrid_tool.search_knowledge_base], provider=provider)

Streaming with Parallel Tools

import asyncio
from selectools import Agent, AgentConfig
from selectools.types import StreamChunk, AgentResult

agent = Agent(
    tools=[tool_a, tool_b, tool_c],
    provider=provider,
    config=AgentConfig(parallel_tool_execution=True),  # Default: enabled
)

async for item in agent.astream("Run all tasks"):
    if isinstance(item, StreamChunk):
        print(item.content, end="", flush=True)
    elif isinstance(item, AgentResult):
        print(f"\nDone in {item.iterations} iterations")

Key Features

Hybrid Search & Reranking

Combine semantic search with BM25 keyword matching for better recall on exact terms, names, and acronyms:

from selectools.rag import BM25, HybridSearcher, CohereReranker, FusionMethod

searcher = HybridSearcher(
    vector_store=store,
    fusion=FusionMethod.RRF,
    reranker=CohereReranker(),  # Optional cross-encoder reranking
)
results = searcher.search("GDPR compliance", top_k=5)

See docs/modules/HYBRID_SEARCH.md for full documentation.
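With `FusionMethod.RRF`, the keyword and vector rankings are merged by Reciprocal Rank Fusion, where each document scores the sum of `1 / (k + rank)` across rankings (k = 60 is the conventional constant). A standalone sketch of the standard formula; selectools' internals may differ in detail:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over rankings of 1 / (k + rank(d)).

    Standard-formula sketch; not selectools' actual FusionMethod.RRF code.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Documents appearing near the top of BOTH rankings win
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits   = ["doc_gdpr", "doc_privacy", "doc_misc"]  # keyword ranking
vector_hits = ["doc_gdpr", "doc_eu", "doc_privacy"]    # semantic ranking
print(rrf_fuse([bm25_hits, vector_hits]))
# ['doc_gdpr', 'doc_privacy', 'doc_eu', 'doc_misc']
```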

Advanced Chunking

Go beyond fixed-size splitting with embedding-aware and LLM-enriched chunking:

from selectools.rag import SemanticChunker, ContextualChunker

# Split at topic boundaries using embedding similarity
semantic = SemanticChunker(embedder=embedder, similarity_threshold=0.75)

# Enrich each chunk with LLM-generated context (Anthropic-style contextual retrieval)
contextual = ContextualChunker(base_chunker=semantic, provider=provider)
enriched_docs = contextual.split_documents(documents)

See docs/modules/ADVANCED_CHUNKING.md for full documentation.

Dynamic Tool Loading

Discover and load @tool functions from files and directories at runtime:

from selectools.tools import ToolLoader

# Load tools from a plugin directory
tools = ToolLoader.from_directory("./plugins", recursive=True)
agent.add_tools(tools)

# Hot-reload after editing a plugin
updated = ToolLoader.reload_file("./plugins/search.py")
agent.replace_tool(updated[0])

# Remove tools the agent no longer needs
agent.remove_tool("deprecated_search")

See docs/modules/DYNAMIC_TOOLS.md for full documentation.

Response Caching

Avoid redundant LLM calls with pluggable caching:

from selectools import Agent, AgentConfig, InMemoryCache

cache = InMemoryCache(max_size=1000, default_ttl=300)
agent = Agent(
    tools=[...],
    provider=provider,
    config=AgentConfig(cache=cache),
)

# Same question twice -> second call is instant (cache hit)
agent.ask("What is Python?")
agent.reset()
agent.ask("What is Python?")

print(cache.stats)  # CacheStats(hits=1, misses=1, hit_rate=50.00%)

For distributed setups: from selectools.cache_redis import RedisCache

Routing Mode

Agent selects a tool without executing it -- use for intent classification:

config = AgentConfig(routing_only=True)
agent = Agent(tools=[send_email, schedule_meeting, search_kb], provider=provider, config=config)

result = agent.ask("Book a meeting with Alice tomorrow")
print(result.tool_name)  # "schedule_meeting"
print(result.tool_args)  # {"attendee": "Alice", "date": "tomorrow"}

Structured Output

Get typed, validated results from the LLM:

from pydantic import BaseModel
from typing import Literal

class Classification(BaseModel):
    intent: Literal["billing", "support", "sales", "cancel"]
    confidence: float
    priority: Literal["low", "medium", "high"]

result = agent.ask("I want to cancel my account", response_format=Classification)
print(result.parsed)  # Classification(intent="cancel", confidence=0.95, priority="high")

Auto-retries with error feedback when validation fails.
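The retry-with-feedback loop can be sketched as follows; `call_llm` and `validate` are stand-ins for illustration, not selectools internals:

```python
# On validation failure, feed the error back to the model and ask again.
def ask_structured(call_llm, validate, prompt, max_retries=3):
    feedback = ""
    for _ in range(max_retries):
        raw = call_llm(prompt + feedback)
        try:
            return validate(raw)
        except ValueError as err:
            feedback = f"\nPrevious answer was invalid ({err}); fix it."
    raise RuntimeError("validation failed after retries")
```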

Execution Traces & Reasoning

See exactly what your agent did and why:

result = agent.run("Classify this ticket")

# Structured timeline of every step
for step in result.trace:
    print(f"{step.type} | {step.duration_ms:.0f}ms | {step.summary}")

# Why the agent chose a tool
print(result.reasoning)  # "Customer is asking about billing, routing to billing_support"

# Export for dashboards
result.trace.to_json("trace.json")

Provider Fallback

Automatic failover with circuit breaker:

from selectools import FallbackProvider, OpenAIProvider, AnthropicProvider

provider = FallbackProvider([
    OpenAIProvider(default_model="gpt-4o-mini"),
    AnthropicProvider(default_model="claude-haiku"),
])
agent = Agent(tools=[...], provider=provider)
# If OpenAI is down β†’ tries Anthropic automatically
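A minimal sketch of the failover idea: try providers in order and skip any whose circuit has opened after repeated failures. Simplified for illustration; the real `FallbackProvider`'s circuit-breaker policy may differ:

```python
# Try each provider in order; skip providers whose circuit is open.
class SimpleFallback:
    def __init__(self, providers, failure_threshold=3):
        self.providers = providers
        self.failures = {id(p): 0 for p in providers}
        self.threshold = failure_threshold

    def complete(self, prompt):
        last_error = None
        for p in self.providers:
            if self.failures[id(p)] >= self.threshold:
                continue  # circuit open: skip this provider
            try:
                result = p(prompt)
                self.failures[id(p)] = 0  # success resets the counter
                return result
            except Exception as err:
                self.failures[id(p)] += 1
                last_error = err
        raise RuntimeError(f"all providers failed: {last_error}")
```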

Batch Processing

Classify multiple requests concurrently:

results = await agent.abatch(
    ["Cancel my subscription", "How do I upgrade?", "My payment failed"],
    max_concurrency=10,
)
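The `max_concurrency` bound can be implemented with an `asyncio.Semaphore`; this is an illustration of the pattern, not selectools' internals:

```python
# Run all prompts concurrently, but at most max_concurrency at a time.
import asyncio

async def bounded_batch(worker, prompts, max_concurrency=10):
    sem = asyncio.Semaphore(max_concurrency)

    async def run_one(prompt):
        async with sem:  # at most max_concurrency workers at once
            return await worker(prompt)

    return await asyncio.gather(*(run_one(p) for p in prompts))
```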

Tool Policy & Human-in-the-Loop

Declarative safety rules with approval callbacks:

from selectools import ToolPolicy

policy = ToolPolicy(
    allow=["search_*", "read_*"],
    review=["send_*", "create_*"],
    deny=["delete_*"],
)

async def confirm(tool_name, tool_args, reason):
    return await get_user_approval(tool_name, tool_args)

config = AgentConfig(tool_policy=policy, confirm_action=confirm)

AgentObserver Protocol

Class-based observability with run_id correlation for Langfuse, OpenTelemetry, Datadog, or custom integrations:

from selectools import Agent, AgentConfig, AgentObserver, LoggingObserver

class MyObserver(AgentObserver):
    def on_tool_end(self, run_id, call_id, tool_name, result, duration_ms):
        print(f"[{run_id}] {tool_name} finished in {duration_ms:.1f}ms")

    def on_provider_fallback(self, run_id, failed_provider, next_provider, error):
        print(f"[{run_id}] {failed_provider} failed, falling back to {next_provider}")

agent = Agent(
    tools=[...], provider=provider,
    config=AgentConfig(observers=[MyObserver(), LoggingObserver()]),
)

45 lifecycle events: run, LLM, tool, iteration, batch, policy, structured output, fallback, retry, memory trim, guardrail, coherence, screening, session, entity, KG, budget exceeded, cancelled, prompt compressed, plus 13 graph events (graph start/end, node start/end, routing, interrupt, resume, parallel, stall, loop, supervisor replan). See observer.py for full reference.

E2E Streaming & Parallel Execution

  • agent.astream() yields StreamChunk (text deltas) then AgentResult (final)
  • Multiple tool calls execute concurrently via asyncio.gather() (3 tools @ 0.15s each = ~0.15s total)
  • Fallback chain: astream -> acomplete -> complete via executor
  • Context propagation with contextvars for tracing/auth

See docs/modules/STREAMING.md for full documentation.
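The parallel-execution timing claim above is just `asyncio.gather` at work, as this self-contained sketch shows:

```python
# Three 0.15 s "tool calls" run concurrently finish in ~0.15 s, not 0.45 s.
import asyncio
import time

async def slow_tool(name):
    await asyncio.sleep(0.15)
    return name

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(slow_tool("a"), slow_tool("b"), slow_tool("c"))
    elapsed = time.perf_counter() - start
    return results, elapsed
```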

Providers

| Provider | Streaming | Vision | Native Tools | Cost |
|---|---|---|---|---|
| OpenAI | Yes | Yes | Yes | Paid |
| Azure OpenAI | Yes | Yes | Yes | Paid (Azure billing) |
| Anthropic | Yes | Yes | Yes | Paid |
| Gemini | Yes | Yes | Yes | Free tier |
| Ollama | Yes | No | No | Free (local) |
| Fallback | Yes | Yes | Yes | Varies (wraps others) |
| Local | No | No | No | Free (testing) |

from selectools.models import OpenAI, Anthropic, Gemini, Ollama

# IDE autocomplete for all 152 models with pricing metadata
model = OpenAI.GPT_4O_MINI
print(f"Cost: ${model.prompt_cost}/${model.completion_cost} per 1M tokens")
print(f"Context: {model.context_window:,} tokens")

Embedding Providers

from selectools.embeddings import (
    OpenAIEmbeddingProvider,     # text-embedding-3-small/large
    AnthropicEmbeddingProvider,  # Voyage AI (voyage-3, voyage-3-lite)
    GeminiEmbeddingProvider,     # FREE (text-embedding-001/004)
    CohereEmbeddingProvider,     # embed-english-v3.0
)

Vector Stores

from selectools.rag import VectorStore
from selectools.rag.stores import FAISSVectorStore, QdrantVectorStore, PgVectorStore

# Built-in / factory-style
store = VectorStore.create("memory", embedder=embedder)           # Fast, no persistence
store = VectorStore.create("sqlite", embedder=embedder, db_path="docs.db")  # Persistent
store = VectorStore.create("chroma", embedder=embedder, persist_directory="./chroma")
store = VectorStore.create("pinecone", embedder=embedder, index_name="my-index")

# v0.21.0 β€” direct imports
store = FAISSVectorStore(embedder=embedder)                       # In-process, save/load to disk
store = QdrantVectorStore(embedder=embedder, url="http://localhost:6333")  # REST + gRPC
store = PgVectorStore(embedder=embedder, connection_string="postgresql://...")

Agent Configuration

config = AgentConfig(
    model="gpt-4o-mini",
    temperature=0.0,
    max_tokens=2000,
    max_iterations=6,
    max_retries=3,
    retry_backoff_seconds=2.0,
    request_timeout=60.0,
    tool_timeout_seconds=30.0,
    cost_warning_threshold=0.50,
    parallel_tool_execution=True,
    routing_only=False,
    stream=False,
    cache=None,                  # InMemoryCache or RedisCache
    tool_policy=None,            # ToolPolicy with allow/review/deny rules
    confirm_action=None,         # Human-in-the-loop approval callback
    approval_timeout=60.0,       # Seconds before auto-deny
    enable_analytics=True,
    verbose=False,
    observers=[LoggingObserver()],  # Lifecycle observer (replaces deprecated hooks)
    system_prompt="You are a helpful assistant...",
)

Tool Definition

from selectools import tool

@tool(description="Calculate compound interest")
def calculate_interest(principal: float, rate: float, years: int) -> str:
    amount = principal * (1 + rate / 100) ** years
    return f"After {years} years: ${amount:.2f}"
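A worked example of the formula in the tool above, with illustrative numbers ($1,000 at 5% for 10 years):

```python
# Compound interest: amount = principal * (1 + rate/100) ** years
principal, rate, years = 1000.0, 5.0, 10
amount = principal * (1 + rate / 100) ** years
print(f"After {years} years: ${amount:.2f}")  # After 10 years: $1628.89
```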

Tool Registry

from selectools import ToolRegistry

registry = ToolRegistry()

@registry.tool(description="Search the knowledge base")
def search_kb(query: str, max_results: int = 5) -> str:
    return f"Results for: {query}"

agent = Agent(tools=registry.all(), provider=provider)

Injected Parameters

Keep secrets out of the LLM's view:

db_tool = Tool(
    name="query_db",
    description="Execute SQL query",
    parameters=[ToolParameter(name="sql", param_type=str, description="SQL query")],
    function=query_database,
    injected_kwargs={"db_connection": db_conn}  # Hidden from LLM
)

Streaming Tools

from typing import Generator

@tool(description="Process large file", streaming=True)
def process_file(filepath: str) -> Generator[str, None, None]:
    with open(filepath) as f:
        for i, line in enumerate(f, 1):
            yield f"[Line {i}] {line.strip()}\n"

config = AgentConfig(observers=[SimpleStepObserver(lambda event, run_id, **kw: print(kw.get("chunk", ""), end=""))])

Conversation Memory

from selectools import Agent, ConversationMemory

memory = ConversationMemory(max_messages=20)
agent = Agent(tools=[...], provider=provider, memory=memory)

agent.ask("My name is Alice")
agent.ask("What's my name?")  # Remembers "Alice"
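The `max_messages` bound amounts to a sliding window over the conversation, which a bounded deque captures; an illustration of the idea, not selectools' actual trimming logic:

```python
# Oldest turns drop off as new ones arrive once the window is full.
from collections import deque

class SimpleMemory:
    def __init__(self, max_messages=20):
        self.messages = deque(maxlen=max_messages)

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})
```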

Cost Tracking

result = agent.ask("Search and summarize")

print(f"Total cost: ${agent.total_cost:.6f}")
print(f"Total tokens: {agent.total_tokens:,}")
print(agent.get_usage_summary())
# Includes LLM + embedding costs, per-tool breakdown
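Per-call cost follows directly from token counts and per-1M-token pricing; the prices below are made up for illustration:

```python
# Cost = (prompt_tokens * prompt_price + completion_tokens * completion_price) / 1M
def call_cost(prompt_tokens, completion_tokens, prompt_price, completion_price):
    """Prices are USD per 1M tokens."""
    return (prompt_tokens * prompt_price
            + completion_tokens * completion_price) / 1_000_000

cost = call_cost(12_000, 3_000, prompt_price=0.15, completion_price=0.60)
print(f"${cost:.6f}")  # $0.003600
```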

Examples

Examples are numbered by difficulty. Start from 01 and work your way up.

| # | Example | Features | API Key? |
|---|---------|----------|----------|
| 01 | 01_hello_world.py | First agent, @tool, ask() | No |
| 02 | 02_search_weather.py | ToolRegistry, multiple tools | No |
| 03 | 03_toolbox.py | 24 pre-built tools (file, data, text, datetime, web) | No |
| 04 | 04_conversation_memory.py | Multi-turn memory | Yes |
| 05 | 05_cost_tracking.py | Token counting, cost warnings | Yes |
| 06 | 06_async_agent.py | arun(), concurrent agents, FastAPI | Yes |
| 07 | 07_streaming_tools.py | Generator-based streaming | Yes |
| 08 | 08_streaming_parallel.py | astream(), parallel execution, StreamChunk | Yes |
| 09 | 09_caching.py | InMemoryCache, RedisCache, cache stats | Yes |
| 10 | 10_routing_mode.py | Routing mode, intent classification | Yes |
| 11 | 11_tool_analytics.py | Call counts, success rates, timing | Yes |
| 12 | 12_observability_hooks.py | Lifecycle hooks, tool validation | Yes |
| 13 | 13_dynamic_tools.py | ToolLoader, plugins, hot-reload | Yes |
| 14 | 14_rag_basic.py | RAG pipeline, document loading, vector search | Yes + [rag] |
| 15 | 15_semantic_search.py | Pure semantic search, metadata filtering | Yes + [rag] |
| 16 | 16_rag_advanced.py | PDFs, SQLite persistence, custom chunking | Yes + [rag] |
| 17 | 17_rag_multi_provider.py | Embedding/store/chunk-size comparisons | Yes + [rag] |
| 18 | 18_hybrid_search.py | BM25 + vector fusion, RRF, reranking | Yes + [rag] |
| 19 | 19_advanced_chunking.py | Semantic and contextual chunking | Yes + [rag] |
| 20 | 20_customer_support_bot.py | Multi-tool customer support workflow | Yes |
| 21 | 21_data_analysis_agent.py | Data exploration and analysis | Yes |
| 22 | 22_ollama_local.py | Fully local LLM via Ollama | No (Ollama) |
| 23 | 23_structured_output.py | Pydantic response_format, auto-retry, JSON extraction | No |
| 24 | 24_traces_and_reasoning.py | AgentTrace timeline, reasoning visibility, JSON export | No |
| 25 | 25_provider_fallback.py | FallbackProvider, circuit breaker, failover chain | No |
| 26 | 26_batch_processing.py | batch(), abatch(), structured batch, error isolation | No |
| 27 | 27_tool_policy.py | ToolPolicy, deny_when, HITL approval, memory trimming | No |
| 28 | 28_agent_observer.py | AgentObserver, LoggingObserver, multiple observers, OTel export | No |
| 29 | 29_guardrails.py | Input/output guardrails, PII redaction, topic blocking | No |
| 30 | 30_audit_logging.py | JSONL audit logging, privacy controls, daily rotation | No |
| 31 | 31_tool_output_screening.py | Prompt injection detection in tool outputs | No |
| 32 | 32_coherence_checking.py | LLM-based intent verification for injection defense | Yes |
| 33 | 33_persistent_sessions.py | JsonFileSessionStore, cross-restart persistence | No |
| 34 | 34_summarize_on_trim.py | Summarize trimmed messages for context preservation | No |
| 35 | 35_entity_memory.py | Named entity extraction and tracking | No |
| 36 | 36_knowledge_graph.py | Triple extraction, in-memory and SQLite storage | No |
| 37 | 37_knowledge_memory.py | Cross-session facts, daily logs, remember tool | No |
| 38 | 38_terminal_tools.py | @tool(terminal=True), stop_condition callback | No |
| 39 | 39_eval_framework.py | EvalSuite, TestCase, evaluators, HTML reports | No |
| 40 | 40_eval_advanced.py | Pairwise A/B, regression detection, snapshots | No |
| 41 | 41_mcp_client.py | MCPClient, mcp_tools(), tool interop | No |
| 42 | 42_mcp_server.py | MCPServer, expose tools as MCP endpoints | No |
| 43 | 43_token_budget.py | max_total_tokens, max_cost_usd budget limits | No |
| 44 | 44_cancellation.py | CancellationToken, cooperative stopping | No |
| 45 | 45_approval_gate.py | @tool(requires_approval=True), confirm_action | No |
| 46 | 46_simple_observer.py | SimpleStepObserver, single-callback integration | No |
| 47 | 47_token_estimation.py | estimate_run_tokens(), pre-flight cost checks | No |
| 48 | 48_model_switching.py | model_selector callback, per-iteration model | No |
| 49 | 49_knowledge_stores.py | SQLite, Redis, Supabase knowledge stores | No |
| 50 | 50_reasoning_strategies.py | ReAct, Chain-of-Thought, Plan-then-Act | No |
| 51 | 51_tool_result_caching.py | @tool(cacheable=True, cache_ttl=300) | No |
| 52 | 52_semantic_cache.py | SemanticCache with embedding similarity | Yes |
| 53 | 53_prompt_compression.py | Auto-summarize old history on context fill | No |
| 54 | 54_conversation_branching.py | memory.branch(), store.branch() | No |
| 55 | 55_agent_graph_linear.py | Linear AgentGraph pipeline | No |
| 56 | 56_agent_graph_parallel.py | Parallel fan-out with merge policies | No |
| 57 | 57_agent_graph_conditional.py | Conditional routing with plain Python | No |
| 58 | 58_agent_graph_hitl.py | Human-in-the-loop with generator nodes | No |
| 59 | 59_agent_graph_checkpointing.py | Checkpoint, interrupt, resume | No |
| 60 | 60_supervisor_agent.py | SupervisorAgent with 4 strategies | No |
| 61 | 61_agent_graph_subgraph.py | Nested subgraph composition | No |
| 62 | 62_yaml_config.py | Load AgentConfig from YAML | No |
| 63 | 63_agent_templates.py | Built-in agent templates | No |
| 64 | 64_selectools_serve.py | Serve agent over HTTP with selectools serve | No |
| 65 | 65_tool_composition.py | compose() tool chaining | No |
| 66 | 66_streaming_pipeline.py | pipeline.astream() streaming composition | No |
| 67 | 67_type_safe_pipeline.py | Type-safe step contracts | No |
| 68 | 68_postgres_checkpoints.py | PostgresCheckpointStore for AgentGraph | Yes + [postgres] |
| 69 | 69_trace_store.py | Trace storage and querying | No |
| 70 | 70_plan_and_execute.py | PlanAndExecuteAgent with typed steps | No |
| 71 | 71_reflective_agent.py | ReflectiveAgent actor–critic loop | No |
| 72 | 72_debate_agent.py | DebateAgent with optimist/skeptic/judge | No |
| 73 | 73_team_lead_agent.py | TeamLeadAgent with all 3 delegation strategies | No |

Run any example:

python examples/01_hello_world.py   # No API key needed
python examples/14_rag_basic.py     # Needs OPENAI_API_KEY

Documentation

Read the full documentation β€” hosted on GitHub Pages with search, dark mode, and easy navigation.

Also available in docs/:

| Module | Description |
|--------|-------------|
| AGENT | Agent loop, structured output, traces, reasoning, batch, policy |
| STREAMING | E2E streaming, parallel execution, routing |
| TOOLS | Tool definition, validation, registry |
| DYNAMIC_TOOLS | ToolLoader, plugins, hot-reload |
| HYBRID_SEARCH | BM25, fusion, reranking |
| ADVANCED_CHUNKING | Semantic & contextual chunking |
| RAG | Complete RAG pipeline |
| EMBEDDINGS | Embedding providers |
| VECTOR_STORES | Storage backends |
| PROVIDERS | LLM provider adapters + FallbackProvider |
| MEMORY | Conversation memory + tool-pair trimming |
| USAGE | Cost tracking & analytics |
| MODELS | Model registry & pricing |
| SESSIONS | Persistent session stores (JSON, SQLite, Redis) |
| ENTITY_MEMORY | Entity extraction and tracking |
| KNOWLEDGE_GRAPH | Triple extraction and storage |
| KNOWLEDGE | Cross-session knowledge memory |
| GUARDRAILS | Input/output validation pipeline |
| AUDIT | JSONL audit logging |
| SECURITY | Screening & coherence checking |
| EVALS | 50 evaluators, A/B testing, regression |
| MCP | MCP client/server integration |
| BUDGET | Token/cost budget limits |
| CANCELLATION | Cooperative cancellation |
| ORCHESTRATION | AgentGraph, routing, parallel, HITL |
| SUPERVISOR | SupervisorAgent, 4 strategies |
| PATTERNS | PlanAndExecute, Reflective, Debate, TeamLead |
| PARSER | Tool call parsing |
| PROMPT | System prompt generation |

Tests

pytest tests/ -x -q          # All tests
pytest tests/ -k "not e2e"   # Skip E2E (no API keys needed)

5332 tests covering parsing, agent loop, providers, RAG pipeline, hybrid search, advanced chunking, dynamic tools, caching, streaming, guardrails, sessions, memory, eval framework, budget/cancellation, knowledge stores, orchestration, pipelines, agent patterns, stability markers, trace viewer, and E2E integration with real API calls.

License

Apache-2.0 β€” Use freely in commercial applications. No copyleft restrictions. See LICENSE.

Contributing

See CONTRIBUTING.md. We welcome contributions for new tools, providers, vector stores, examples, and documentation.

Roadmap | Changelog | Documentation

