friday-runtime

Friday AI agent runtime — execute Claude agents with MCP tools, permissions, and session management

Version: 0.2.1 (latest)
Source: npm
Maintainers: 1

friday-runtime

Self-contained agent runtime for Friday AI. Orchestrates Claude conversations, manages MCP servers, handles permissions, skills, and multi-modal AI providers.

This package can be published and consumed independently — no dependency on the Electron app.

Directory Structure

packages/runtime/
├── index.js                      # Public API exports
├── friday-server.js              # Stdio entry point
├── server.js                     # HTTP/WebSocket server
├── .mcp.json                     # MCP server definitions & auth schemas
├── package.json
│
├── src/
│   ├── runtime/
│   │   ├── AgentRuntime.js       # Core agent orchestrator
│   │   └── RoleBasedAgentRuntime.js
│   ├── mcp/
│   │   └── McpCredentials.js     # Secure credential storage (keytar/file fallback)
│   ├── config.js                 # Config loader, template variable substitution
│   ├── skills/
│   │   ├── SkillManager.js       # Two-tier skill loading system
│   │   └── global/               # 27+ expertise markdown files
│   ├── agents/                   # Agent routing & configuration
│   ├── scheduled-agents/         # Background automation (cron-based)
│   ├── sessions/                 # Session persistence & history
│   ├── oauth/                    # OAuth flow handlers
│   └── sandbox/                  # Process sandboxing
│
├── providers/                    # Multi-modal AI providers
│   ├── ProviderRegistry.js       # Central provider management & MediaContext
│   ├── openai.js                 # OpenAI: GPT-5.2, gpt-image-1.5, Sora 2, TTS/STT
│   ├── google.js                 # Google: Gemini 3, Imagen 4, Veo 3.1, Cloud TTS/STT
│   └── elevenlabs.js             # ElevenLabs: Eleven v3, Flash v2.5, Turbo v2.5
│
├── mcp-servers/
│   ├── media-server.js           # MCP server: image/video/audio generation tools
│   ├── terminal-server.js        # MCP server: shell command execution
│   └── resend/                   # MCP server: email via Resend
│
├── config/
│   └── GlobalConfig.js           # Persistent user configuration
│
├── rules/
│   └── rules.json                # Agent behavior rules
│
└── docs/
    └── multi-modal-providers-plan.md

Quick Start

import { AgentRuntime, loadBackendConfig } from 'friday-runtime';

const config = await loadBackendConfig();
const runtime = new AgentRuntime({
  workspacePath: config.workspacePath,
  rules: config.rules,
  mcpServers: config.mcpServers,
  sessionsPath: config.sessionsPath,
});

runtime.on('message', (msg) => console.log(msg));
await runtime.handleQuery('Hello, Friday');

Agent Loop

User Query → AgentRuntime.handleQuery()
  ├── 1. Build system prompt (with skills)
  ├── 2. Prepare MCP servers from .mcp.json
  ├── 3. Call Claude SDK query() with:
  │       └── model, mcpServers, systemPrompt, canUseTool
  ├── 4. SDK spawns MCP servers, discovers tools via listTools()
  ├── 5. Claude decides which tools to use
  ├── 6. Permission gate checks each tool call
  ├── 7. Tool executes → result back to Claude
  └── 8. Stream response chunks to consumer
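The permission gate in step 6 corresponds to the canUseTool callback handed to the SDK in step 3: the SDK invokes it before executing each tool call. A minimal sketch of such a gate — the allow-list shape and return format are illustrative assumptions, not this package's actual rule schema:

```javascript
// Hypothetical permission gate: allow-lists tool names before execution.
function makePermissionGate(allowedTools) {
  return async function canUseTool(toolName, input) {
    if (allowedTools.includes(toolName)) {
      // Permit the call; the SDK proceeds to execute the tool.
      return { behavior: 'allow', updatedInput: input };
    }
    // Deny everything else; the model sees the denial message instead of a result.
    return { behavior: 'deny', message: `Tool ${toolName} is not permitted` };
  };
}

const gate = makePermissionGate(['mcp__filesystem__read_file']);
```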

MCP Servers

Server                   Purpose
filesystem               File read/write via @modelcontextprotocol/server-filesystem
terminal                 Shell command execution (custom server)
github                   GitHub API via @modelcontextprotocol/server-github
friday-media             Image gen, video gen, TTS, STT, multi-model queries
firecrawl                Web scraping
figma                    Design file access
resend                   Email sending
discord/reddit/twitter   Social platform APIs
gmail/google-drive       Google Workspace
supabase                 Database operations
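Server definitions and their auth schemas live in .mcp.json. A typical stdio entry might look like the following sketch — the exact schema this package uses (including any auth-schema fields) is an assumption based on the common MCP client config shape, not confirmed from the source:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
    },
    "terminal": {
      "command": "node",
      "args": ["mcp-servers/terminal-server.js"]
    }
  }
}
```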

Multi-Modal Providers

The provider system enables image generation, video generation, TTS, STT, and multi-model chat:

Capability   OpenAI                 Google                          ElevenLabs
Image gen    gpt-image-1.5          Imagen 4 Ultra/Standard/Fast
Video gen    Sora 2 / Sora 2 Pro    Veo 3.1 / Veo 3.1 Fast
TTS          gpt-4o-mini-tts        Google Cloud TTS                Eleven v3, Flash v2.5
STT          Whisper                Google Cloud STT
Chat         GPT-5.2                Gemini 3 Pro/Flash

The registry auto-selects the best available provider based on API key availability and user preferences.
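That selection could be as simple as walking a preference-ordered list per capability and picking the first provider with an API key configured. A minimal sketch — the function name, candidate ordering, and data shapes are illustrative assumptions, not ProviderRegistry's real implementation:

```javascript
// Hypothetical provider picker: the first provider in preference order
// that has an API key configured wins; null if none are available.
function pickProvider(capability, preferences, apiKeys) {
  const defaults = {
    image: ['openai', 'google'],
    video: ['openai', 'google'],
    tts: ['elevenlabs', 'openai', 'google'],
    stt: ['openai', 'google'],
    chat: ['openai', 'google'],
  };
  const order = preferences[capability] || defaults[capability] || [];
  return order.find((name) => Boolean(apiKeys[name])) || null;
}
```

A user preference for a capability overrides the default ordering; with no keys at all, the caller gets null and can surface a configuration error.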

Skills System

Two-tier expertise injection into system prompts:

  • Tier 1 — Expert Skills (max 2): User-selected via @mention tags
  • Tier 2 — Internal Skills (max 2): Agent-selected via [REQUEST_SKILLS: ...]
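Capping each tier at two and splicing the selected skills into the system prompt could look like the following sketch (the function, field names, and section format are assumptions for illustration, not SkillManager's real API):

```javascript
// Hypothetical two-tier skill injection with a cap of 2 per tier.
function buildSkillPrompt(expertSkills, internalSkills, basePrompt) {
  const tier1 = expertSkills.slice(0, 2);   // user-selected via @mention tags
  const tier2 = internalSkills.slice(0, 2); // agent-selected via [REQUEST_SKILLS: ...]
  const sections = [...tier1, ...tier2].map(
    (s) => `## Skill: ${s.name}\n${s.content}`
  );
  return [basePrompt, ...sections].join('\n\n');
}
```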

Dependencies

Package                          Purpose
@anthropic-ai/claude-agent-sdk   Core agent SDK
@anthropic-ai/sdk                Anthropic API client
@modelcontextprotocol/sdk        MCP protocol
openai                           OpenAI API (image, video, TTS, STT, chat)
@google/genai                    Google Gemini, Imagen, Veo API
@elevenlabs/elevenlabs-js        ElevenLabs TTS API
ws                               WebSocket support
pino                             Structured logging
zod                              Schema validation

Detailed Plans

  • Multi-Modal Providers Plan — Full architecture for image/video/audio capabilities

Keywords: ai

Package last updated on 16 Feb 2026
