Apple's Foundation Models framework provides powerful on-device AI capabilities in macOS 26 Tahoe, but it is accessible only through Swift and Objective-C APIs. This package bridges that gap, offering:
🔒 Privacy-focused: All AI processing happens on-device, no data leaves your Mac
⚡ High performance: Optimized for Apple Silicon with no network latency
🚀 Streaming-first: Simulated real-time response streaming with typing indicators for modern UX
🛠️ Rich tooling: Advanced features like input validation, context cancellation, and generation control
📦 Self-contained: Embedded Swift shim library - no external dependencies
🎯 Production-ready: Comprehensive error handling, memory management, and structured logging
Features
Generation Control
Temperature control: Deterministic (0.0) to creative (1.0) output
Token limiting: Control response length with max tokens
Use `found --help` or `found [command] --help` to see all available commands and examples.
Available commands:
`found info` - Display model availability and system information
`found quest` - Interactive chat with streaming support, system instructions, and JSON output
`found stream` - Real-time streaming text generation with optional tools ✅
`found tool calc` - Mathematical calculations with real arithmetic ✅
`found tool weather` - Real-time weather data with geocoding ✅
Working Examples
Tool Calling Success Stories ✅
Weather Tool: Get real-time weather data
found tool weather "New York"
# Returns actual weather from the Open-Meteo API with temperature, conditions, humidity, etc.
Calculator Tool: Perform mathematical operations
found tool calc "add 15 plus 27"
# Returns: The result of "15 + 27" is **42.00**.
Debug Mode: See comprehensive logging in action
found tool weather --verbose "Paris"
# Shows both Go debug logs (slog) and Swift logs with detailed execution flow
Foundation Models Behavior
While tool calling is functional, Foundation Models exhibits some variability:
✅ Tool execution works: When called, tools successfully return real data
✅ Callback mechanism fixed: Swift ↔ Go communication is reliable
⚠️ Inconsistent invocation: Foundation Models sometimes refuses to call tools due to safety restrictions
✅ Error handling: Graceful failures with helpful explanations
Known Limitations
Foundation Models Safety: Some queries may be blocked by built-in safety guardrails
Context Window: The 4,096-token context limit requires a session refresh for long conversations
Tool Parameter Mapping: Complex expressions may not parse correctly into tool parameters
Streaming Implementation: Currently uses simulated streaming (post-processing chunks) as Foundation Models doesn't yet provide native streaming APIs
Roadmap
Fix tool calling reliability - ✅ COMPLETED - Tools now work with real data
Swift-Go callback mechanism - ✅ COMPLETED - Reliable bidirectional communication
Tool debugging capabilities - ✅ COMPLETED - --verbose flag for comprehensive debug logs
Direct tool testing - ✅ COMPLETED - --direct flag bypasses Foundation Models
Streaming responses - ✅ COMPLETED - Simulated streaming with word/sentence chunks (native streaming pending Foundation Models API)
Structured logging - ✅ COMPLETED - Go slog integration with consolidated debug logging
Advanced tool schemas with OpenAPI-style definitions
Multi-modal support (images, audio) when available
Performance optimizations for large contexts
Enhanced error handling with detailed diagnostics
Plugin system for extensible tool management
Native streaming support - Upgrade to Foundation Models native streaming API when available
Improve Foundation Models consistency - Research better prompting strategies
License
MIT License, Copyright (c) 2025 blacktop
Package last updated on 12 Jul 2025