BlockBrain API Python Client

A modern, streamlined Python client for the BlockBrain API featuring a unified chat interface, file processing, context management, model selection, and real-time streaming responses.

✨ Key Features

  • 🎯 Unified Interface: Single chat() method handles all scenarios
  • 📁 Smart File Processing: Upload documents with automatic processing detection
  • 🧠 Context Management: Guide AI responses with custom context
  • 🤖 Model Selection: Choose from multiple LLM models with default and per-chat overrides
  • 💬 Conversation Continuity: Seamless multi-turn conversations
  • ⚡ Real-time Streaming: Live response streaming (default) or batch processing
  • 🔧 Dual-Level API: High-level simplicity + low-level control
  • 🏗️ Production Ready: Type hints, error handling, and logging control
  • 🔒 Secure: Token-based authentication with tenant isolation

🚀 Quick Start

Installation

pip install blockbrain-api

Basic Usage

from blockbrain_api import BlockBrainAPI

# Initialize client
api = BlockBrainAPI(
    token="your_api_token",
    bot_id="your_bot_id"
)

# Initialize with default model
api = BlockBrainAPI(
    token="your_api_token",
    bot_id="your_bot_id",
    default_model="gpt-4o"
)

# Ask a question
response = api.chat("What is artificial intelligence?")
print(response)
# Output: "Artificial intelligence (AI) refers to the simulation of human intelligence..."

File Analysis

# Upload and analyze a document
response = api.chat(
    "Summarize the key points in this document",
    file_path="research_paper.pdf"
)
print(response)

Contextual AI Assistant

# Create a specialized assistant with context
response = api.chat(
    "Explain machine learning algorithms",
    context="You are a computer science professor teaching undergraduate students. Use simple language and provide examples."
)
print(response)

📚 API Reference

Core Method: api.chat()

The chat() method is your main interface to BlockBrain. It intelligently handles all chat scenarios:

api.chat(
    message: str,                           # Your question or message
    bot_id: Optional[str] = None,          # Bot ID (uses default if not set)
    file_path: Optional[str] = None,       # Path to file for upload
    context: Optional[str] = None,         # Context to guide the AI
    convo_id: Optional[str] = None,        # Continue existing conversation
    session_id: Optional[str] = None,      # Custom session ID
    convo_name: str = "Chat Session",      # Name for new conversations
    cleanup: bool = True,                  # Auto-delete conversation when done
    wait_for_processing: bool = True,      # Wait for file processing
    timeout: int = 300,                    # File processing timeout (seconds)
    stream: bool = True,                   # Enable streaming responses
    model: Optional[str] = None            # Model to use (overrides default)
) -> Union[str, Dict[str, Any]]
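
As a quick illustration of how these parameters combine, here is a minimal sketch; the file name and model are placeholders, and the return type depends on stream as described under "Response Formats" below:

# Placeholder file name and model; any supported values work the same way
response = api.chat(
    "List the action items in this document",
    file_path="quarterly_report.pdf",   # optional file upload
    context="You are a concise meeting assistant.",
    model="gpt-4o",                     # per-call model override
    stream=False,                       # return the full JSON payload instead of text
    cleanup=True                        # delete the conversation afterwards
)

# stream=True (default) returns an assembled string; stream=False returns a dict
if isinstance(response, dict):
    print(response.get("status"), response.get("body"))
else:
    print(response)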

Advanced API: api.core

For advanced use cases, access the low-level API:

# Manual conversation management
api.core.create_data_room(convo_name, session_id, bot_id)
api.core.user_prompt(content, session_id, convo_id, stream=True)
api.core.upload_file(file_path, convo_id, session_id)
api.core.add_context(convo_id, context)
api.core.delete_data_room(convo_id)

# File processing utilities
api.core.check_file_upload_status(convo_id)
api.core.wait_for_file_processing(convo_id, timeout=300)

# Model management
api.core.get_available_models()
api.core.change_data_room_model(convo_id, model="gpt-4")
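
The file-processing utilities can be combined with the other core methods for a fully manual upload flow. The sketch below assumes the create_data_room response shape shown in the multi-turn example further down; the file name is a placeholder:

import uuid

# Manual upload flow using the low-level core API (sketch)
session_id = str(uuid.uuid4())
data_room = api.core.create_data_room(
    convo_name="Report Review",
    session_id=session_id,
    bot_id="your_bot_id"
)
convo_id = data_room.get("body", {}).get("dataRoomId")

if convo_id:
    api.core.upload_file("report.pdf", convo_id, session_id)   # placeholder file
    api.core.wait_for_file_processing(convo_id, timeout=300)   # block until processed
    answer = api.core.user_prompt(
        "Summarize the report",
        session_id=session_id,
        convo_id=convo_id
    )
    print(answer)
    api.core.delete_data_room(convo_id)                        # clean up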

💡 Usage Examples

1. Simple Q&A

api = BlockBrainAPI(token="your_token", bot_id="your_bot")

response = api.chat("What are the benefits of renewable energy?")
print(response)

2. Model Selection

# Initialize with default model
api = BlockBrainAPI(
    token="your_token",
    bot_id="your_bot",
    default_model="gpt-4o"
)

# Use default model
response = api.chat("Explain quantum computing")

# Override with specific model
response = api.chat("Write a poem", model="claude-3.5-sonnet")

# Get available models
models = api.get_available_models()
for model in models["body"]:
    if model["isEnable"]:
        print(f"Available: {model['model']}")

# Change model for existing conversation
api.change_data_room_model(convo_id, "gpt-4")

3. Document Analysis

# Analyze a PDF document
response = api.chat(
    "What methodology was used in this research?",
    file_path="research_study.pdf",
    wait_for_processing=True
)

# Multiple questions about the same document
questions = [
    "What are the main findings?",
    "What are the limitations?",
    "What future research is suggested?"
]

for question in questions:
    answer = api.chat(question, file_path="research_study.pdf")
    print(f"Q: {question}")
    print(f"A: {answer}\n")

4. Contextual AI Assistants

# Create a coding mentor
coding_context = """
You are a senior software engineer and mentor. Provide practical,
production-ready advice. Include code examples and best practices.
Focus on clean, maintainable solutions.
"""

response = api.chat(
    "How should I structure a REST API in Python?",
    context=coding_context
)

# Create a medical advisor
medical_context = """
You are a medical professional providing health information.
Always recommend consulting healthcare providers for serious concerns.
Use clear, accessible language.
"""

response = api.chat(
    "What are the symptoms of vitamin D deficiency?",
    context=medical_context
)

5. Multi-turn Conversations

Option A: Using Core API (Recommended for conversation management)

import uuid

# Set up conversation
session_id = str(uuid.uuid4())
data_room = api.core.create_data_room(
    convo_name="Technical Discussion",
    session_id=session_id,
    bot_id="your_bot_id"
)

# Extract conversation ID
convo_id = data_room.get("body", {}).get("dataRoomId")

if convo_id:
    # First message
    response1 = api.core.user_prompt(
        "Tell me about Python web frameworks",
        session_id=session_id,
        convo_id=convo_id
    )
    print(f"AI: {response1}")

    # Follow-up questions maintain context
    response2 = api.core.user_prompt(
        "Which one is best for beginners?",
        session_id=session_id,
        convo_id=convo_id
    )
    print(f"AI: {response2}")

    response3 = api.core.user_prompt(
        "Can you show me a simple example?",
        session_id=session_id,
        convo_id=convo_id
    )
    print(f"AI: {response3}")

    # Clean up when done
    api.core.delete_data_room(convo_id)

Option B: Using chat() with cleanup=False

# Start conversation without auto-cleanup
response1 = api.chat(
    "Let's discuss climate change solutions",
    cleanup=False,
    convo_name="Climate Discussion"
)

# Note: In production, you'd need to track conversation IDs
# for proper continuation with the current API design
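
One way to handle that tracking, sketched under the assumption that a conversation created with the core API can be continued by passing its ID through chat()'s convo_id parameter:

import uuid

# Create the conversation with the core API so its ID is known, then reuse it (sketch)
session_id = str(uuid.uuid4())
room = api.core.create_data_room(
    convo_name="Climate Discussion",
    session_id=session_id,
    bot_id="your_bot_id"
)
convo_id = room.get("body", {}).get("dataRoomId")

if convo_id:
    first = api.chat(
        "Let's discuss climate change solutions",
        convo_id=convo_id,
        session_id=session_id,
        cleanup=False          # keep the conversation alive between calls
    )
    follow_up = api.chat(
        "Which of those solutions work for small cities?",
        convo_id=convo_id,
        session_id=session_id,
        cleanup=False
    )
    api.core.delete_data_room(convo_id)   # clean up when finished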

6. Streaming vs Batch Responses

# Streaming mode (default) - real-time text assembly
response = api.chat("Explain photosynthesis", stream=True)
print(f"Final response: {response}")
# Output: Complete assembled text

# Batch mode - full JSON response
response = api.chat("Explain photosynthesis", stream=False)
print(f"Response type: {type(response)}")
print(f"Status: {response.get('status')}")
print(f"Content: {response.get('body')}")

7. Error Handling

response = api.chat("Hello world")

# Check for errors
if isinstance(response, dict) and response.get('error'):
    print(f"❌ Error: {response['error']}")

    # Get detailed error info
    if 'details' in response:
        details = response['details']
        status_code = details.get('status_code')
        error_content = details.get('content', {})

        print(f"Status Code: {status_code}")
        print(f"Error Type: {error_content.get('key')}")
        print(f"Message: {error_content.get('body')}")
else:
    print(f"✅ Success: {response}")

# File upload error handling
try:
    response = api.chat("Analyze this", file_path="missing_file.pdf")
except FileNotFoundError:
    print("❌ File not found")
except Exception as e:
    print(f"❌ Unexpected error: {e}")

⚙️ Configuration

Basic Configuration

api = BlockBrainAPI(
    token="your_api_token",              # Required: Your API token
    bot_id="your_bot_id"                 # Required: Your bot ID
)

Advanced Configuration

api = BlockBrainAPI(
    token="your_api_token",
    bot_id="your_bot_id",
    base_url="https://blocky.theblockbrain.ai",  # Custom API endpoint
    tenant_domain="your_company",               # Multi-tenant setup
    enable_logging=True,                        # Enable debug logging
    log_level="DEBUG",                          # Set logging level
    default_model="gpt-4o"                      # Set default model
)
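
If you prefer not to hard-code credentials, they can be read from the same environment variables used in the end-to-end test setup below. This is a sketch of that pattern; the client does not read these variables automatically:

import os

# Load credentials from the environment (assumed pattern, not built into the client)
api = BlockBrainAPI(
    token=os.environ["BLOCKBRAIN_TOKEN"],
    bot_id=os.environ["BLOCKBRAIN_BOT_ID"],
    base_url=os.environ.get("BLOCKBRAIN_BASE_URL", "https://blocky.theblockbrain.ai")
)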

Dynamic Bot Selection

# Use different bots for different purposes
api = BlockBrainAPI(token="your_token")  # No default bot

# Use specialized bots per request
technical_response = api.chat("Explain APIs", bot_id="technical_bot_id")
creative_response = api.chat("Write a story", bot_id="creative_bot_id")

Session Management

import uuid

# Custom session for conversation tracking
custom_session = str(uuid.uuid4())

response = api.chat(
    "Start of our conversation",
    session_id=custom_session,
    cleanup=False
)

🛠️ Response Formats

Successful Streaming Response (Default)

response = api.chat("Hello")
# Type: str
# Example: "Hello! How can I help you today?"

Successful Batch Response

response = api.chat("Hello", stream=False)
# Type: dict
# Example: {
#   "body": {...},
#   "status": "success",
#   "metadata": {...}
# }

Error Response

response = api.chat("Hello")  # with invalid credentials
# Type: dict
# Example: {
#   "error": "Failed to create data room",
#   "details": {
#     "error": True,
#     "status_code": 401,
#     "content": {
#       "code": 401,
#       "key": "UNAUTHORIZED",
#       "body": "Invalid token"
#     }
#   }
# }
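
Because the return type varies with stream and with success or failure, a small helper can normalize the three shapes above. This is a hypothetical convenience wrapper, not part of the library:

from typing import Union

def unwrap_chat_response(response: Union[str, dict]) -> str:
    """Hypothetical helper: flatten the response shapes above into plain text."""
    if isinstance(response, str):          # streaming mode returns assembled text
        return response
    if response.get("error"):              # error dict
        content = response.get("details", {}).get("content", {})
        raise RuntimeError(
            f"{response['error']}: {content.get('key')} - {content.get('body')}"
        )
    return str(response.get("body", ""))   # batch mode returns a dict

# Usage
text = unwrap_chat_response(api.chat("Hello", stream=False))
print(text)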

🎯 Real-World Use Cases

Document Q&A System

class DocumentQA:
    def __init__(self, api_token, bot_id):
        self.api = BlockBrainAPI(token=api_token, bot_id=bot_id)

    def analyze_document(self, file_path, questions):
        """Analyze a document with multiple questions"""
        results = []

        for question in questions:
            try:
                answer = self.api.chat(
                    question,
                    file_path=file_path,
                    wait_for_processing=True,
                    timeout=600  # 10 minutes for large files
                )
                results.append({
                    "question": question,
                    "answer": answer,
                    "success": True
                })
            except Exception as e:
                results.append({
                    "question": question,
                    "error": str(e),
                    "success": False
                })

        return results

# Usage
qa_system = DocumentQA("your_token", "your_bot")
questions = [
    "What is the main topic of this document?",
    "What are the key findings?",
    "What recommendations are made?"
]
results = qa_system.analyze_document("report.pdf", questions)

Contextual Chatbot Factory

def create_specialist_bot(api, specialty_context, name):
    """Create a specialized chatbot with specific context"""
    def chat_with_context(message):
        return api.chat(
            message,
            context=specialty_context,
            convo_name=f"{name} Session"
        )
    return chat_with_context

# Create different specialist bots
api = BlockBrainAPI(token="your_token", bot_id="your_bot")

# Medical information bot
medical_bot = create_specialist_bot(
    api,
    "You are a medical information assistant. Provide accurate health information but always recommend consulting healthcare professionals.",
    "Medical Assistant"
)

# Coding mentor bot
coding_bot = create_specialist_bot(
    api,
    "You are a senior software engineer. Provide practical coding advice with examples and best practices.",
    "Coding Mentor"
)

# Business advisor bot
business_bot = create_specialist_bot(
    api,
    "You are a business consultant. Provide strategic advice based on industry best practices and data-driven insights.",
    "Business Advisor"
)

# Usage
health_info = medical_bot("What are the symptoms of dehydration?")
coding_help = coding_bot("How do I optimize database queries?")
business_advice = business_bot("How should I price my SaaS product?")

📖 Complete Examples

For comprehensive examples covering all features, see examples.py:

python examples.py

The examples include:

  • Basic Setup & Configuration
  • Model Selection & Management
  • Simple Chat Interactions
  • File Upload & Analysis
  • Context Management
  • Conversation Continuation
  • Streaming vs Batch Modes
  • Error Handling Patterns
  • Advanced Core API Usage
  • Production Use Cases

📋 Changelog

See CHANGELOG.md for detailed version history and release notes.

🔧 Development

Installing for Development

# Clone the repository
git clone https://github.com/blockbrain/blockbrain-api-python
cd blockbrain-api-python

# Install with development dependencies
pip install -e ".[dev]"

Building the Package

# Build distribution packages
python setup.py sdist bdist_wheel

# Install locally
pip install .

# Run examples
python examples.py

Testing

Unit Tests (No credentials required)

# Run unit tests
python -m pytest tests/ -v -k "not e2e"

# Run with coverage
python -m pytest tests/ --cov=blockbrain_api -k "not e2e"

End-to-End Tests (Requires API credentials)

# Set up environment variables
cp .env.example .env
# Edit .env with your actual credentials

# Or export directly
export BLOCKBRAIN_TOKEN="your_token"
export BLOCKBRAIN_BOT_ID="your_bot_id"
export BLOCKBRAIN_BASE_URL="https://blocky.theblockbrain.ai"

# Run E2E tests
python -m pytest tests/e2e/ -v

# Run all tests
python -m pytest tests/ -v

Pre-commit Hooks

# Install pre-commit hooks
pip install pre-commit
pre-commit install

# Run manually
pre-commit run --all-files

# Unit tests run automatically on commit

Examples

# Run examples with your credentials
python examples.py

📋 Requirements

  • Python: 3.7+
  • Dependencies: requests >= 2.25.0
  • API Access: BlockBrain API token and bot ID

🤝 Support & Resources

Documentation & Examples

  • 📋 Complete Examples: examples.py - Comprehensive usage examples
  • 📚 API Documentation: docs.blockbrain.ai
  • 🔧 Core API Reference: Access via api.core.* methods
  • 📋 Version History: CHANGELOG.md - Detailed release notes

Getting Help

Community

  • 🌟 Star on GitHub: Show your support
  • 🔀 Contribute: Pull requests welcome
  • 📢 Share: Help others discover BlockBrain

📄 License

MIT License - see LICENSE file for details.

Ready to get started? Install the package and run the examples to see BlockBrain in action! 🚀
