# Changelog

All notable changes to the `@every-env/compound-plugin` CLI tool will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.6.0] - 2026-02-12

### Added

- **Droid sync target** — `sync --target droid` symlinks personal skills to `~/.factory/skills/`
- **Cursor sync target** — `sync --target cursor` symlinks skills to `.cursor/skills/` and merges MCP servers into `.cursor/mcp.json`
- **Pi target** — First-class `--to pi` converter with MCPorter config and subagent compatibility ([#181](https://github.com/EveryInc/compound-engineering-plugin/pull/181)) — thanks [@gvkhosla](https://github.com/gvkhosla)!

### Fixed

- **Bare Claude model alias resolution** — Fixed the OpenCode converter not resolving bare model aliases like `claude-sonnet-4-5-20250514` ([#182](https://github.com/EveryInc/compound-engineering-plugin/pull/182)) — thanks [@waltbeaman](https://github.com/waltbeaman)!

### Changed

- Extracted shared `expandHome` / `resolveTargetHome` helpers to `src/utils/resolve-home.ts`, removing duplication across `convert.ts`, `install.ts`, and `sync.ts`

---

## [0.5.2] - 2026-02-09

### Fixed

- Fix cursor install defaulting to cwd instead of opencode config dir

## [0.5.1] - 2026-02-08

- Initial npm publish
---
title: Add Gemini CLI as a Target Provider
type: feat
status: completed
completed_date: 2026-02-14
completed_by: "Claude Opus 4.6"
actual_effort: "Completed in one session"
date: 2026-02-14
---
# Add Gemini CLI as a Target Provider

## Overview

Add `gemini` as a sixth target provider in the converter CLI, alongside `opencode`, `codex`, `droid`, `cursor`, and `pi`. This enables `--to gemini` for both the `convert` and `install` commands, converting Claude Code plugins into Gemini CLI-compatible format.

Gemini CLI ([google-gemini/gemini-cli](https://github.com/google-gemini/gemini-cli)) is Google's open-source AI agent for the terminal. It supports GEMINI.md context files, custom commands (TOML format), agent skills (the SKILL.md standard), MCP servers, and extensions -- making it a strong conversion target with good coverage of Claude Code plugin concepts.
## Component Mapping

| Claude Code | Gemini Equivalent | Notes |
|---|---|---|
| `agents/*.md` | `.gemini/skills/*/SKILL.md` | Agents become skills -- Gemini activates them on demand via the `activate_skill` tool based on description matching |
| `commands/*.md` | `.gemini/commands/*.toml` | TOML format with `prompt` and `description` fields; namespaced via directory structure |
| `skills/*/SKILL.md` | `.gemini/skills/*/SKILL.md` | **Identical standard** -- copy directly |
| MCP servers | `settings.json` `mcpServers` | Same MCP protocol; different config location (`settings.json` vs `.mcp.json`) |
| `hooks/` | `settings.json` hooks | Gemini has hooks (`BeforeTool`, `AfterTool`, `SessionStart`, etc.) but a different format; emit `console.warn` and skip for now |
| `.claude/` paths | `.gemini/` paths | Content rewriting needed |
### Key Design Decisions

**1. Agents become skills (not GEMINI.md context)**

With 29 agents, dumping them into GEMINI.md would flood every session's context. Instead, agents convert to skills -- Gemini autonomously activates them based on the skill description when relevant. This matches how Claude Code agents are invoked on demand via the Task tool.

**2. Commands use TOML format with directory-based namespacing**

Gemini CLI commands are `.toml` files where the path determines the command name: `.gemini/commands/git/commit.toml` becomes `/git:commit`. This maps cleanly from Claude Code's colon-namespaced commands (`workflows:plan` -> `.gemini/commands/workflows/plan.toml`).
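The mapping can be sketched in a few lines (a hedged illustration; `commandFilePath` is a hypothetical helper name, not the converter's actual API):

```typescript
// Hypothetical sketch of the colon-to-directory mapping described above.
function commandFilePath(name: string): string {
  const segments = name
    .split(":")
    .map((segment) => segment.trim().toLowerCase())
    .filter((segment) => segment.length > 0)
  return `${segments.join("/")}.toml`
}

// "workflows:plan" -> "workflows/plan.toml", which Gemini exposes as /workflows:plan
```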
**3. Commands use the `{{args}}` placeholder**

Gemini's TOML commands support `{{args}}` for argument injection, mapping from Claude Code's `argument-hint` field. Commands with `argument-hint` get `{{args}}` appended to the prompt.

**4. MCP servers go into project-level settings.json**

Gemini CLI reads MCP config from `.gemini/settings.json` under the `mcpServers` key. The format is compatible -- same `command`, `args`, `env` fields, plus Gemini-specific `cwd`, `timeout`, `trust`, `includeTools`, `excludeTools`.

**5. Skills pass through unchanged**

Gemini adopted the same SKILL.md standard (YAML frontmatter with `name` and `description`, markdown body). Skills copy directly.
### TOML Command Format

```toml
description = "Brief description of the command"
prompt = """
The prompt content that will be sent to Gemini.
User request: {{args}}
"""
```

- `description` (string): One-line description shown in `/help`
- `prompt` (string): The prompt sent to the model; supports `{{args}}`, `!{shell}`, and `@{file}` placeholders

### Skill (SKILL.md) Format

```yaml
---
name: skill-name
description: When and how Gemini should use this skill
---

# Skill Title

Detailed instructions...
```

Identical to Claude Code's format. The `description` field is critical -- Gemini uses it to decide when to activate the skill.

### MCP Server Format (settings.json)

```json
{
  "mcpServers": {
    "server-name": {
      "command": "npx",
      "args": ["-y", "package-name"],
      "env": { "KEY": "value" }
    }
  }
}
```
## Acceptance Criteria

- [x] `bun run src/index.ts convert --to gemini ./plugins/compound-engineering` produces valid Gemini config
- [x] Agents convert to `.gemini/skills/*/SKILL.md` with a populated `description` in frontmatter
- [x] Commands convert to `.gemini/commands/*.toml` with `prompt` and `description` fields
- [x] Namespaced commands create directory structure (`workflows:plan` -> `commands/workflows/plan.toml`)
- [x] Commands with `argument-hint` include the `{{args}}` placeholder in the prompt
- [x] Commands with `disable-model-invocation: true` are still included (TOML commands are prompts, not code)
- [x] Skills copied to `.gemini/skills/` (identical format)
- [x] MCP servers written to `.gemini/settings.json` under the `mcpServers` key
- [x] Existing `.gemini/settings.json` is backed up before overwrite, and MCP config is merged (not clobbered)
- [x] Content transformation rewrites `.claude/` and `~/.claude/` paths to `.gemini/` and `~/.gemini/`
- [x] `/workflows:plan` references are left unchanged (Gemini preserves colon namespacing via directories)
- [x] `Task agent-name(args)` transformed to `Use the agent-name skill to: args`
- [x] Plugins with hooks emit a `console.warn` about format differences
- [x] Writer does not double-nest `.gemini/.gemini/`
- [x] `model` and `allowedTools` fields silently dropped (no Gemini equivalent in skills/commands)
- [x] Converter and writer tests pass
- [x] Existing tests still pass (`bun test`)
## Implementation

### Phase 1: Types

**Create `src/types/gemini.ts`**

```typescript
export type GeminiSkill = {
  name: string
  content: string // Full SKILL.md with YAML frontmatter
}

export type GeminiSkillDir = {
  name: string
  sourceDir: string
}

export type GeminiCommand = {
  name: string // e.g. "plan" or "workflows/plan"
  content: string // Full TOML content
}

export type GeminiBundle = {
  generatedSkills: GeminiSkill[] // From agents
  skillDirs: GeminiSkillDir[] // From skills (pass-through)
  commands: GeminiCommand[]
  mcpServers?: Record<string, {
    command?: string
    args?: string[]
    env?: Record<string, string>
    url?: string
    headers?: Record<string, string>
  }>
}
```
### Phase 2: Converter

**Create `src/converters/claude-to-gemini.ts`**

Core functions:

1. **`convertClaudeToGemini(plugin, options)`** -- main entry point
   - Convert each agent to a skill via `convertAgentToSkill()`
   - Convert each command via `convertCommand()`
   - Pass skills through as directory references
   - Convert MCP servers to a settings-compatible object
   - Emit `console.warn` if `plugin.hooks` has entries
2. **`convertAgentToSkill(agent)`** -- agent -> SKILL.md
   - Frontmatter: `name` (from agent name), `description` (from agent description, max ~300 chars)
   - Body: agent body with content transformations applied
   - Prepend capabilities section if present
   - Silently drop `model` field (no Gemini equivalent)
   - If description is empty, generate from agent name: `"Use this skill for ${agent.name} tasks"`
3. **`convertCommand(command, usedNames)`** -- command -> TOML file
   - Preserve namespace structure: `workflows:plan` -> path `workflows/plan`
   - `description` field from command description
   - `prompt` field from command body with content transformations
   - If command has `argument-hint`, append `\n\nUser request: {{args}}` to prompt
   - Body: apply `transformContentForGemini()` transformations
   - Silently drop `allowedTools` (no Gemini equivalent)
4. **`transformContentForGemini(body)`** -- content rewriting
   - `.claude/` -> `.gemini/` and `~/.claude/` -> `~/.gemini/`
   - `Task agent-name(args)` -> `Use the agent-name skill to: args`
   - `@agent-name` references -> `the agent-name skill`
   - Skip file paths (containing `/`) and common non-command patterns
5. **`convertMcpServers(servers)`** -- MCP config
   - Map each `ClaudeMcpServer` entry to Gemini-compatible JSON
   - Pass through: `command`, `args`, `env`, `url`, `headers`
   - Drop `type` field (Gemini infers transport)
6. **`toToml(description, prompt)`** -- TOML serializer
   - Escape TOML strings properly
   - Use multi-line strings (`"""`) for the prompt field
   - Simple string for the description
### Phase 3: Writer

**Create `src/targets/gemini.ts`**

Output structure:

```
.gemini/
├── commands/
│   ├── plan.toml
│   └── workflows/
│       └── plan.toml
├── skills/
│   ├── agent-name-1/
│   │   └── SKILL.md
│   ├── agent-name-2/
│   │   └── SKILL.md
│   └── original-skill/
│       └── SKILL.md
└── settings.json (only mcpServers key)
```

Core function: `writeGeminiBundle(outputRoot, bundle)`

- `resolveGeminiPaths(outputRoot)` -- detect if the path already ends in `.gemini` to avoid double-nesting (follow the droid writer pattern)
- Write generated skills to `skills/<name>/SKILL.md`
- Copy original skill directories to `skills/` via `copyDir()`
- Write commands to `commands/` as `.toml` files, creating subdirectories for namespaced commands
- Write `settings.json` with `{ "mcpServers": {...} }` via `writeJson()` with `backupFile()` for existing files
- If `settings.json` exists, read it first and merge the `mcpServers` key (don't clobber other settings)
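The merge-not-clobber behavior can be sketched as follows (a hedged sketch: the real writer goes through `writeJson()`/`backupFile()` helpers; `mergeMcpServers` and the `.bak` suffix here are illustrative assumptions):

```typescript
import * as fs from "node:fs"
import * as os from "node:os"
import * as path from "node:path"

// Illustrative sketch: merge mcpServers into an existing settings.json
// without clobbering unrelated user settings, backing the file up first.
function mergeMcpServers(settingsPath: string, mcpServers: Record<string, unknown>): void {
  let existing: Record<string, unknown> = {}
  if (fs.existsSync(settingsPath)) {
    fs.copyFileSync(settingsPath, `${settingsPath}.bak`) // back up before overwrite
    existing = JSON.parse(fs.readFileSync(settingsPath, "utf8"))
  }
  const merged = {
    ...existing,
    mcpServers: {
      ...((existing.mcpServers as Record<string, unknown>) ?? {}),
      ...mcpServers,
    },
  }
  fs.mkdirSync(path.dirname(settingsPath), { recursive: true })
  fs.writeFileSync(settingsPath, JSON.stringify(merged, null, 2) + "\n")
}
```

Keys outside `mcpServers` (model, tools, sandbox) survive untouched, and existing servers are only overwritten on a name collision.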
### Phase 4: Wire into CLI

**Modify `src/targets/index.ts`**

```typescript
import { convertClaudeToGemini } from "../converters/claude-to-gemini"
import { writeGeminiBundle } from "./gemini"
import type { GeminiBundle } from "../types/gemini"

// Add to targets:
gemini: {
  name: "gemini",
  implemented: true,
  convert: convertClaudeToGemini as TargetHandler<GeminiBundle>["convert"],
  write: writeGeminiBundle as TargetHandler<GeminiBundle>["write"],
},
```

**Modify `src/commands/convert.ts`**

- Update the `--to` description: `"Target format (opencode | codex | droid | cursor | pi | gemini)"`
- Add to `resolveTargetOutputRoot`: `if (targetName === "gemini") return path.join(outputRoot, ".gemini")`

**Modify `src/commands/install.ts`**

- Same two changes as convert.ts
### Phase 5: Tests

**Create `tests/gemini-converter.test.ts`**

Test cases (use inline `ClaudePlugin` fixtures, following existing converter test patterns):

- Agent converts to a skill with SKILL.md frontmatter (`name` and `description` populated)
- Agent with an empty description gets default description text
- Agent with capabilities gets them prepended to the body
- Agent `model` field silently dropped
- Agent with an empty body gets default body text
- Command converts to TOML with `prompt` and `description` fields
- Namespaced command creates the correct path (`workflows:plan` -> `workflows/plan`)
- Command with `disable-model-invocation` is still included
- Command `allowedTools` silently dropped
- Command with `argument-hint` gets the `{{args}}` placeholder in the prompt
- Skills pass through as directory references
- MCP servers convert to settings.json-compatible config
- Content transformation: `.claude/` paths -> `.gemini/`
- Content transformation: `~/.claude/` paths -> `~/.gemini/`
- Content transformation: `Task agent(args)` -> natural-language skill reference
- Hooks present -> `console.warn` emitted
- Plugin with zero agents produces an empty `generatedSkills` array
- Plugin with only skills works correctly
- TOML output is valid (description and prompt properly escaped)
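The escaping case can be sketched as a standalone check (a hedged sketch: `toTomlLike` is a local stand-in mirroring the serializer described in Phase 2, not the converter's real export):

```typescript
// Local stand-in for the two-field TOML serializer sketched in Phase 2.
function toTomlLike(description: string, prompt: string): string {
  const escapedPrompt = prompt.replace(/\\/g, "\\\\").replace(/"""/g, '\\"\\"\\"')
  return [
    `description = ${JSON.stringify(description)}`, // JSON escaping is a valid TOML basic-string subset
    `prompt = """`,
    escapedPrompt,
    `"""`,
  ].join("\n")
}

const out = toTomlLike('Say "hello"', 'Body with """ inside')
// Quotes in the description must be escaped...
if (!out.includes('description = "Say \\"hello\\""')) throw new Error("description not escaped")
// ...and no raw triple quote may survive inside the prompt body
const promptBody = out.split('prompt = """\n')[1].split('\n"""')[0]
if (promptBody.includes('"""')) throw new Error("unescaped triple quote in prompt")
```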
**Create `tests/gemini-writer.test.ts`**

Test cases (use temp directories, following existing writer test patterns):

- Full bundle writes skills, commands, and settings.json
- Generated skills written as `skills/<name>/SKILL.md`
- Original skills copied to the `skills/` directory
- Commands written as `.toml` files in the `commands/` directory
- Namespaced commands create subdirectories (`commands/workflows/plan.toml`)
- MCP config written as valid JSON `settings.json` with the `mcpServers` key
- Existing `settings.json` is backed up before overwrite
- Output root already ending in `.gemini` does NOT double-nest
- Empty bundle produces no output
### Phase 6: Documentation

**Create `docs/specs/gemini.md`**

Document the Gemini CLI spec as a reference, following the existing `docs/specs/codex.md` pattern:

- GEMINI.md context file format
- Custom commands format (TOML with `prompt`, `description`)
- Skills format (identical SKILL.md standard)
- MCP server configuration (`settings.json`)
- Extensions system (for reference, not converted)
- Hooks system (for reference, format differences noted)
- Config file locations (user-level `~/.gemini/` vs project-level `.gemini/`)
- Directory layout conventions

**Update `README.md`**

Add `gemini` to the supported targets in the CLI usage section.
## What We're NOT Doing

- Not converting hooks (Gemini has hooks but a different format -- `BeforeTool`/`AfterTool` with matchers -- warn and skip)
- Not generating a full `settings.json` (only the `mcpServers` key -- user-specific settings like `model` and `tools.sandbox` are out of scope)
- Not creating extensions (the extension format is for distributing packages, not for converted plugins)
- Not using `@{file}` or `!{shell}` placeholders in converted commands (would require analyzing command intent)
- Not transforming content inside copied SKILL.md files (known limitation -- skills may reference `.claude/` paths internally)
- Not clearing old output before writing (matches existing target behavior)
- Not merging into existing settings.json intelligently beyond the `mcpServers` key (too risky to modify user config)
## Complexity Assessment

This is a **medium change**. The converter architecture is well established with five existing targets, so this is mostly pattern-following. The key novelties are:

1. The TOML command format (unique among all targets -- needs a simple TOML serializer)
2. Agents map to skills rather than to a direct 1:1 concept (the same pattern as codex)
3. Namespaced commands use a directory structure (a new approach vs flattening in cursor/codex)
4. MCP config goes into a broader `settings.json` file (need to merge, not clobber)

Skills being identical across platforms simplifies things significantly. The TOML serialization is simple (only two fields: a `description` string and a `prompt` multi-line string).
## References

- [Gemini CLI Repository](https://github.com/google-gemini/gemini-cli)
- [Gemini CLI Configuration](https://geminicli.com/docs/get-started/configuration/)
- [Custom Commands (TOML)](https://geminicli.com/docs/cli/custom-commands/)
- [Agent Skills](https://geminicli.com/docs/cli/skills/)
- [Creating Skills](https://geminicli.com/docs/cli/creating-skills/)
- [Extensions](https://geminicli.com/docs/extensions/writing-extensions/)
- [MCP Servers](https://google-gemini.github.io/gemini-cli/docs/tools/mcp-server.html)
- Existing cursor plan: `docs/plans/2026-02-12-feat-add-cursor-cli-target-provider-plan.md`
- Existing codex converter: `src/converters/claude-to-codex.ts` (has `uniqueName()` and skill generation patterns)
- Existing droid writer: `src/targets/droid.ts` (has the double-nesting guard pattern)
- Target registry: `src/targets/index.ts`
## Completion Summary

### What Was Delivered

- [x] Phase 1: Types (`src/types/gemini.ts`)
- [x] Phase 2: Converter (`src/converters/claude-to-gemini.ts`)
- [x] Phase 3: Writer (`src/targets/gemini.ts`)
- [x] Phase 4: CLI wiring (`src/targets/index.ts`, `src/commands/convert.ts`, `src/commands/install.ts`)
- [x] Phase 5: Tests (`tests/gemini-converter.test.ts`, `tests/gemini-writer.test.ts`)
- [x] Phase 6: Documentation (`docs/specs/gemini.md`, `README.md`)

### Implementation Statistics

- 10 files changed
- 27 new tests added (129 total, all passing)
- 148 output files generated from the compound-engineering plugin conversion
- 0 dependencies added

### Git Commits

- `201ad6d` feat(gemini): add Gemini CLI as sixth target provider
- `8351851` docs: add Gemini CLI spec and update README with gemini target

### Completion Details

- **Completed By:** Claude Opus 4.6
- **Date:** 2026-02-14
- **Session:** Single session
# Gemini CLI Spec (GEMINI.md, Commands, Skills, MCP, Settings)

Last verified: 2026-02-14

## Primary sources

```
https://github.com/google-gemini/gemini-cli
https://geminicli.com/docs/get-started/configuration/
https://geminicli.com/docs/cli/custom-commands/
https://geminicli.com/docs/cli/skills/
https://geminicli.com/docs/cli/creating-skills/
https://geminicli.com/docs/extensions/writing-extensions/
https://google-gemini.github.io/gemini-cli/docs/tools/mcp-server.html
```

## Config locations

- User-level config: `~/.gemini/settings.json`
- Project-level config: `.gemini/settings.json`
- Project-level takes precedence over user-level for most settings.
- The GEMINI.md context file lives at the project root (similar to CLAUDE.md).
## GEMINI.md context file

- A markdown file at the project root loaded into every session's context.
- Used for project-wide instructions, coding standards, and conventions.
- Equivalent to Claude Code's CLAUDE.md.

## Custom commands (TOML format)

- Custom commands are TOML files stored in `.gemini/commands/`.
- The command name is derived from the file path: `.gemini/commands/git/commit.toml` becomes `/git:commit`.
- Directory-based namespacing: subdirectories create namespaced commands.
- Each command file has two fields:
  - `description` (string): One-line description shown in `/help`
  - `prompt` (string): The prompt sent to the model
- Supported placeholders:
  - `{{args}}` — user-provided arguments
  - `!{shell}` — output of a shell command
  - `@{file}` — contents of a file
- Example:

```toml
description = "Create a git commit with a good message"
prompt = """
Look at the current git diff and create a commit with a descriptive message.
User request: {{args}}
"""
```
## Skills (SKILL.md standard)

- A skill is a folder containing `SKILL.md` plus optional supporting files.
- Skills live in `.gemini/skills/`.
- `SKILL.md` uses YAML frontmatter with `name` and `description` fields.
- Gemini activates skills on demand via the `activate_skill` tool, based on description matching.
- The `description` field is critical — Gemini uses it to decide when to activate the skill.
- The format is identical to Claude Code's SKILL.md standard.
- Example:

```yaml
---
name: security-reviewer
description: Review code for security vulnerabilities and OWASP compliance
---

# Security Reviewer

Detailed instructions for security review...
```
## MCP server configuration

- MCP servers are configured in `settings.json` under the `mcpServers` key.
- Same MCP protocol as Claude Code; different config location.
- Supports `command`, `args`, `env` for stdio transport.
- Supports `url`, `headers` for HTTP/SSE transport.
- Additional Gemini-specific fields: `cwd`, `timeout`, `trust`, `includeTools`, `excludeTools`.
- Example:

```json
{
  "mcpServers": {
    "context7": {
      "url": "https://mcp.context7.com/mcp"
    },
    "playwright": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-playwright"]
    }
  }
}
```
## Hooks

- Gemini supports hooks: `BeforeTool`, `AfterTool`, `SessionStart`, etc.
- Hooks use a different format from Claude Code hooks (matcher-based).
- Not converted by the plugin converter — a warning is emitted instead.

## Extensions

- Extensions are distributable packages for Gemini CLI.
- They extend functionality with custom tools, hooks, and commands.
- Not used for plugin conversion (a different purpose from Claude Code plugins).

## Settings.json structure

```json
{
  "model": "gemini-2.5-pro",
  "mcpServers": { ... },
  "tools": {
    "sandbox": true
  }
}
```

- Only the `mcpServers` key is written during plugin conversion.
- Other settings (model, tools, sandbox) are user-specific and out of scope.
import { formatFrontmatter } from "../utils/frontmatter"
import type { ClaudeAgent, ClaudeCommand, ClaudeMcpServer, ClaudePlugin } from "../types/claude"
import type { GeminiBundle, GeminiCommand, GeminiMcpServer, GeminiSkill } from "../types/gemini"
import type { ClaudeToOpenCodeOptions } from "./claude-to-opencode"

export type ClaudeToGeminiOptions = ClaudeToOpenCodeOptions

const GEMINI_DESCRIPTION_MAX_LENGTH = 1024

export function convertClaudeToGemini(
  plugin: ClaudePlugin,
  _options: ClaudeToGeminiOptions,
): GeminiBundle {
  const usedSkillNames = new Set<string>()
  const usedCommandNames = new Set<string>()
  const skillDirs = plugin.skills.map((skill) => ({
    name: skill.name,
    sourceDir: skill.sourceDir,
  }))

  // Reserve skill names from pass-through skills
  for (const skill of skillDirs) {
    usedSkillNames.add(normalizeName(skill.name))
  }

  const generatedSkills = plugin.agents.map((agent) => convertAgentToSkill(agent, usedSkillNames))
  const commands = plugin.commands.map((command) => convertCommand(command, usedCommandNames))
  const mcpServers = convertMcpServers(plugin.mcpServers)

  if (plugin.hooks && Object.keys(plugin.hooks.hooks).length > 0) {
    console.warn(
      "Warning: Gemini CLI hooks use a different format (BeforeTool/AfterTool with matchers). Hooks were skipped during conversion.",
    )
  }

  return { generatedSkills, skillDirs, commands, mcpServers }
}

function convertAgentToSkill(agent: ClaudeAgent, usedNames: Set<string>): GeminiSkill {
  const name = uniqueName(normalizeName(agent.name), usedNames)
  const description = sanitizeDescription(
    agent.description ?? `Use this skill for ${agent.name} tasks`,
  )
  const frontmatter: Record<string, unknown> = { name, description }

  let body = transformContentForGemini(agent.body.trim())
  if (agent.capabilities && agent.capabilities.length > 0) {
    const capabilities = agent.capabilities.map((c) => `- ${c}`).join("\n")
    body = `## Capabilities\n${capabilities}\n\n${body}`.trim()
  }
  if (body.length === 0) {
    body = `Instructions converted from the ${agent.name} agent.`
  }

  const content = formatFrontmatter(frontmatter, body)
  return { name, content }
}
function convertCommand(command: ClaudeCommand, usedNames: Set<string>): GeminiCommand {
  // Preserve namespace structure: workflows:plan -> workflows/plan
  const commandPath = resolveCommandPath(command.name)
  const pathKey = commandPath.join("/")
  // Use the deduplicated path so a second command mapping to the same
  // file gets a "-2" suffix instead of silently overwriting the first
  const name = uniqueName(pathKey, usedNames)

  const description = command.description ?? `Converted from Claude command ${command.name}`
  const transformedBody = transformContentForGemini(command.body.trim())

  let prompt = transformedBody
  if (command.argumentHint) {
    prompt += `\n\nUser request: {{args}}`
  }

  const content = toToml(description, prompt)
  return { name, content }
}
/**
 * Transform Claude Code content to Gemini-compatible content.
 *
 * 1. Task agent calls: Task agent-name(args) -> Use the agent-name skill to: args
 * 2. Path rewriting: .claude/ -> .gemini/, ~/.claude/ -> ~/.gemini/
 * 3. Agent references: @agent-name -> the agent-name skill
 */
export function transformContentForGemini(body: string): string {
  let result = body

  // 1. Transform Task agent calls
  const taskPattern = /^(\s*-?\s*)Task\s+([a-z][a-z0-9-]*)\(([^)]+)\)/gm
  result = result.replace(taskPattern, (_match, prefix: string, agentName: string, args: string) => {
    const skillName = normalizeName(agentName)
    return `${prefix}Use the ${skillName} skill to: ${args.trim()}`
  })

  // 2. Rewrite .claude/ paths to .gemini/
  result = result
    .replace(/~\/\.claude\//g, "~/.gemini/")
    .replace(/\.claude\//g, ".gemini/")

  // 3. Transform @agent-name references
  const agentRefPattern = /@([a-z][a-z0-9-]*-(?:agent|reviewer|researcher|analyst|specialist|oracle|sentinel|guardian|strategist))/gi
  result = result.replace(agentRefPattern, (_match, agentName: string) => {
    return `the ${normalizeName(agentName)} skill`
  })

  return result
}

function convertMcpServers(
  servers?: Record<string, ClaudeMcpServer>,
): Record<string, GeminiMcpServer> | undefined {
  if (!servers || Object.keys(servers).length === 0) return undefined

  const result: Record<string, GeminiMcpServer> = {}
  for (const [name, server] of Object.entries(servers)) {
    const entry: GeminiMcpServer = {}
    if (server.command) {
      entry.command = server.command
      if (server.args && server.args.length > 0) entry.args = server.args
      if (server.env && Object.keys(server.env).length > 0) entry.env = server.env
    } else if (server.url) {
      entry.url = server.url
      if (server.headers && Object.keys(server.headers).length > 0) entry.headers = server.headers
    }
    result[name] = entry
  }
  return result
}

/**
 * Resolve a command name to path segments.
 * workflows:plan -> ["workflows", "plan"]
 * plan -> ["plan"]
 */
function resolveCommandPath(name: string): string[] {
  return name.split(":").map((segment) => normalizeName(segment))
}

/**
 * Serialize to the TOML command format.
 * Uses multi-line strings (""") for the prompt field.
 */
export function toToml(description: string, prompt: string): string {
  const lines: string[] = []
  lines.push(`description = ${formatTomlString(description)}`)

  // Escape backslashes and any embedded triple quotes so the
  // multi-line delimiter cannot be terminated early
  const escapedPrompt = prompt.replace(/\\/g, "\\\\").replace(/"""/g, '\\"\\"\\"')
  lines.push(`prompt = """`)
  lines.push(escapedPrompt)
  lines.push(`"""`)
  return lines.join("\n")
}

function formatTomlString(value: string): string {
  // JSON string escaping is a valid subset of TOML basic-string escaping
  return JSON.stringify(value)
}

function normalizeName(value: string): string {
  const trimmed = value.trim()
  if (!trimmed) return "item"
  const normalized = trimmed
    .toLowerCase()
    .replace(/[\\/]+/g, "-")
    .replace(/[:\s]+/g, "-")
    .replace(/[^a-z0-9_-]+/g, "-")
    .replace(/-+/g, "-")
    .replace(/^-+|-+$/g, "")
  return normalized || "item"
}

function sanitizeDescription(value: string, maxLength = GEMINI_DESCRIPTION_MAX_LENGTH): string {
  const normalized = value.replace(/\s+/g, " ").trim()
  if (normalized.length <= maxLength) return normalized
  const ellipsis = "..."
  return normalized.slice(0, Math.max(0, maxLength - ellipsis.length)).trimEnd() + ellipsis
}

function uniqueName(base: string, used: Set<string>): string {
  if (!used.has(base)) {
    used.add(base)
    return base
  }
  let index = 2
  while (used.has(`${base}-${index}`)) {
    index += 1
  }
  const name = `${base}-${index}`
  used.add(name)
  return name
}
import { formatFrontmatter } from "../utils/frontmatter"
import type { ClaudeAgent, ClaudeCommand, ClaudeMcpServer, ClaudePlugin } from "../types/claude"
import type {
  PiBundle,
  PiGeneratedSkill,
  PiMcporterConfig,
  PiMcporterServer,
} from "../types/pi"
import type { ClaudeToOpenCodeOptions } from "./claude-to-opencode"
import { PI_COMPAT_EXTENSION_SOURCE } from "../templates/pi/compat-extension"

export type ClaudeToPiOptions = ClaudeToOpenCodeOptions

const PI_DESCRIPTION_MAX_LENGTH = 1024

export function convertClaudeToPi(
  plugin: ClaudePlugin,
  _options: ClaudeToPiOptions,
): PiBundle {
  const promptNames = new Set<string>()
  const usedSkillNames = new Set<string>(plugin.skills.map((skill) => normalizeName(skill.name)))

  const prompts = plugin.commands
    .filter((command) => !command.disableModelInvocation)
    .map((command) => convertPrompt(command, promptNames))
  const generatedSkills = plugin.agents.map((agent) => convertAgent(agent, usedSkillNames))

  const extensions = [
    {
      name: "compound-engineering-compat.ts",
      content: PI_COMPAT_EXTENSION_SOURCE,
    },
  ]

  return {
    prompts,
    skillDirs: plugin.skills.map((skill) => ({
      name: skill.name,
      sourceDir: skill.sourceDir,
    })),
    generatedSkills,
    extensions,
    mcporterConfig: plugin.mcpServers ? convertMcpToMcporter(plugin.mcpServers) : undefined,
  }
}

function convertPrompt(command: ClaudeCommand, usedNames: Set<string>) {
  const name = uniqueName(normalizeName(command.name), usedNames)
  const frontmatter: Record<string, unknown> = {
    description: command.description,
    "argument-hint": command.argumentHint,
  }
  let body = transformContentForPi(command.body)
  body = appendCompatibilityNoteIfNeeded(body)
  return {
    name,
    content: formatFrontmatter(frontmatter, body.trim()),
  }
}

function convertAgent(agent: ClaudeAgent, usedNames: Set<string>): PiGeneratedSkill {
  const name = uniqueName(normalizeName(agent.name), usedNames)
  const description = sanitizeDescription(
    agent.description ?? `Converted from Claude agent ${agent.name}`,
  )
  const frontmatter: Record<string, unknown> = {
    name,
    description,
  }
  const sections: string[] = []
  if (agent.capabilities && agent.capabilities.length > 0) {
    sections.push(`## Capabilities\n${agent.capabilities.map((capability) => `- ${capability}`).join("\n")}`)
  }
  const body = [
    ...sections,
    agent.body.trim().length > 0
      ? agent.body.trim()
      : `Instructions converted from the ${agent.name} agent.`,
  ].join("\n\n")
  return {
    name,
    content: formatFrontmatter(frontmatter, body),
  }
}

function transformContentForPi(body: string): string {
  let result = body
  // Task repo-research-analyst(feature_description)
  // -> Run subagent with agent="repo-research-analyst" and task="feature_description"
  const taskPattern = /^(\s*-?\s*)Task\s+([a-z][a-z0-9-]*)\(([^)]+)\)/gm
  result = result.replace(taskPattern, (_match, prefix: string, agentName: string, args: string) => {
| const skillName = normalizeName(agentName) | ||
| const trimmedArgs = args.trim().replace(/\s+/g, " ") | ||
| return `${prefix}Run subagent with agent=\"${skillName}\" and task=\"${trimmedArgs}\".` | ||
| }) | ||
| // Claude-specific tool references | ||
| result = result.replace(/\bAskUserQuestion\b/g, "ask_user_question") | ||
| result = result.replace(/\bTodoWrite\b/g, "file-based todos (todos/ + /skill:file-todos)") | ||
| result = result.replace(/\bTodoRead\b/g, "file-based todos (todos/ + /skill:file-todos)") | ||
| // /command-name or /workflows:command-name -> /workflows-command-name | ||
| const slashCommandPattern = /(?<![:\w])\/([a-z][a-z0-9_:-]*?)(?=[\s,."')\]}`]|$)/gi | ||
| result = result.replace(slashCommandPattern, (match, commandName: string) => { | ||
| if (commandName.includes("/")) return match | ||
| if (["dev", "tmp", "etc", "usr", "var", "bin", "home"].includes(commandName)) { | ||
| return match | ||
| } | ||
| if (commandName.startsWith("skill:")) { | ||
| const skillName = commandName.slice("skill:".length) | ||
| return `/skill:${normalizeName(skillName)}` | ||
| } | ||
| const withoutPrefix = commandName.startsWith("prompts:") | ||
| ? commandName.slice("prompts:".length) | ||
| : commandName | ||
| return `/${normalizeName(withoutPrefix)}` | ||
| }) | ||
| return result | ||
| } | ||
| function appendCompatibilityNoteIfNeeded(body: string): string { | ||
| if (!/\bmcp\b/i.test(body)) return body | ||
| const note = [ | ||
| "", | ||
| "## Pi + MCPorter note", | ||
| "For MCP access in Pi, use MCPorter via the generated tools:", | ||
| "- `mcporter_list` to inspect available MCP tools", | ||
| "- `mcporter_call` to invoke a tool", | ||
| "", | ||
| ].join("\n") | ||
| return body + note | ||
| } | ||
| function convertMcpToMcporter(servers: Record<string, ClaudeMcpServer>): PiMcporterConfig { | ||
| const mcpServers: Record<string, PiMcporterServer> = {} | ||
| for (const [name, server] of Object.entries(servers)) { | ||
| if (server.command) { | ||
| mcpServers[name] = { | ||
| command: server.command, | ||
| args: server.args, | ||
| env: server.env, | ||
| headers: server.headers, | ||
| } | ||
| continue | ||
| } | ||
| if (server.url) { | ||
| mcpServers[name] = { | ||
| baseUrl: server.url, | ||
| headers: server.headers, | ||
| } | ||
| } | ||
| } | ||
| return { mcpServers } | ||
| } | ||
| function normalizeName(value: string): string { | ||
| const trimmed = value.trim() | ||
| if (!trimmed) return "item" | ||
| const normalized = trimmed | ||
| .toLowerCase() | ||
| .replace(/[\\/]+/g, "-") | ||
| .replace(/[:\s]+/g, "-") | ||
| .replace(/[^a-z0-9_-]+/g, "-") | ||
| .replace(/-+/g, "-") | ||
| .replace(/^-+|-+$/g, "") | ||
| return normalized || "item" | ||
| } | ||
| function sanitizeDescription(value: string, maxLength = PI_DESCRIPTION_MAX_LENGTH): string { | ||
| const normalized = value.replace(/\s+/g, " ").trim() | ||
| if (normalized.length <= maxLength) return normalized | ||
| const ellipsis = "..." | ||
| return normalized.slice(0, Math.max(0, maxLength - ellipsis.length)).trimEnd() + ellipsis | ||
| } | ||
| function uniqueName(base: string, used: Set<string>): string { | ||
| if (!used.has(base)) { | ||
| used.add(base) | ||
| return base | ||
| } | ||
| let index = 2 | ||
| while (used.has(`${base}-${index}`)) { | ||
| index += 1 | ||
| } | ||
| const name = `${base}-${index}` | ||
| used.add(name) | ||
| return name | ||
| } |
| import fs from "fs/promises" | ||
| import path from "path" | ||
| import type { ClaudeHomeConfig } from "../parsers/claude-home" | ||
| import type { ClaudeMcpServer } from "../types/claude" | ||
| import { forceSymlink, isValidSkillName } from "../utils/symlink" | ||
| type CursorMcpServer = { | ||
| command?: string | ||
| args?: string[] | ||
| url?: string | ||
| env?: Record<string, string> | ||
| headers?: Record<string, string> | ||
| } | ||
| type CursorMcpConfig = { | ||
| mcpServers: Record<string, CursorMcpServer> | ||
| } | ||
| export async function syncToCursor( | ||
| config: ClaudeHomeConfig, | ||
| outputRoot: string, | ||
| ): Promise<void> { | ||
| const skillsDir = path.join(outputRoot, "skills") | ||
| await fs.mkdir(skillsDir, { recursive: true }) | ||
| for (const skill of config.skills) { | ||
| if (!isValidSkillName(skill.name)) { | ||
| console.warn(`Skipping skill with invalid name: ${skill.name}`) | ||
| continue | ||
| } | ||
| const target = path.join(skillsDir, skill.name) | ||
| await forceSymlink(skill.sourceDir, target) | ||
| } | ||
| if (Object.keys(config.mcpServers).length > 0) { | ||
| const mcpPath = path.join(outputRoot, "mcp.json") | ||
| const existing = await readJsonSafe(mcpPath) | ||
| const converted = convertMcpForCursor(config.mcpServers) | ||
| const merged: CursorMcpConfig = { | ||
| mcpServers: { | ||
| ...(existing.mcpServers ?? {}), | ||
| ...converted, | ||
| }, | ||
| } | ||
| await fs.writeFile(mcpPath, JSON.stringify(merged, null, 2), { mode: 0o600 }) | ||
| } | ||
| } | ||
| async function readJsonSafe(filePath: string): Promise<Partial<CursorMcpConfig>> { | ||
| try { | ||
| const content = await fs.readFile(filePath, "utf-8") | ||
| return JSON.parse(content) as Partial<CursorMcpConfig> | ||
| } catch (err) { | ||
| if ((err as NodeJS.ErrnoException).code === "ENOENT") { | ||
| return {} | ||
| } | ||
| throw err | ||
| } | ||
| } | ||
| function convertMcpForCursor( | ||
| servers: Record<string, ClaudeMcpServer>, | ||
| ): Record<string, CursorMcpServer> { | ||
| const result: Record<string, CursorMcpServer> = {} | ||
| for (const [name, server] of Object.entries(servers)) { | ||
| const entry: CursorMcpServer = {} | ||
| if (server.command) { | ||
| entry.command = server.command | ||
| if (server.args && server.args.length > 0) entry.args = server.args | ||
| if (server.env && Object.keys(server.env).length > 0) entry.env = server.env | ||
| } else if (server.url) { | ||
| entry.url = server.url | ||
| if (server.headers && Object.keys(server.headers).length > 0) entry.headers = server.headers | ||
| } | ||
| result[name] = entry | ||
| } | ||
| return result | ||
| } |
| import fs from "fs/promises" | ||
| import path from "path" | ||
| import type { ClaudeHomeConfig } from "../parsers/claude-home" | ||
| import { forceSymlink, isValidSkillName } from "../utils/symlink" | ||
| export async function syncToDroid( | ||
| config: ClaudeHomeConfig, | ||
| outputRoot: string, | ||
| ): Promise<void> { | ||
| const skillsDir = path.join(outputRoot, "skills") | ||
| await fs.mkdir(skillsDir, { recursive: true }) | ||
| for (const skill of config.skills) { | ||
| if (!isValidSkillName(skill.name)) { | ||
| console.warn(`Skipping skill with invalid name: ${skill.name}`) | ||
| continue | ||
| } | ||
| const target = path.join(skillsDir, skill.name) | ||
| await forceSymlink(skill.sourceDir, target) | ||
| } | ||
| } |
| import fs from "fs/promises" | ||
| import path from "path" | ||
| import type { ClaudeHomeConfig } from "../parsers/claude-home" | ||
| import type { ClaudeMcpServer } from "../types/claude" | ||
| import { forceSymlink, isValidSkillName } from "../utils/symlink" | ||
| type McporterServer = { | ||
| baseUrl?: string | ||
| command?: string | ||
| args?: string[] | ||
| env?: Record<string, string> | ||
| headers?: Record<string, string> | ||
| } | ||
| type McporterConfig = { | ||
| mcpServers: Record<string, McporterServer> | ||
| } | ||
| export async function syncToPi( | ||
| config: ClaudeHomeConfig, | ||
| outputRoot: string, | ||
| ): Promise<void> { | ||
| const skillsDir = path.join(outputRoot, "skills") | ||
| const mcporterPath = path.join(outputRoot, "compound-engineering", "mcporter.json") | ||
| await fs.mkdir(skillsDir, { recursive: true }) | ||
| for (const skill of config.skills) { | ||
| if (!isValidSkillName(skill.name)) { | ||
| console.warn(`Skipping skill with invalid name: ${skill.name}`) | ||
| continue | ||
| } | ||
| const target = path.join(skillsDir, skill.name) | ||
| await forceSymlink(skill.sourceDir, target) | ||
| } | ||
| if (Object.keys(config.mcpServers).length > 0) { | ||
| await fs.mkdir(path.dirname(mcporterPath), { recursive: true }) | ||
| const existing = await readJsonSafe(mcporterPath) | ||
| const converted = convertMcpToMcporter(config.mcpServers) | ||
| const merged: McporterConfig = { | ||
| mcpServers: { | ||
| ...(existing.mcpServers ?? {}), | ||
| ...converted.mcpServers, | ||
| }, | ||
| } | ||
| await fs.writeFile(mcporterPath, JSON.stringify(merged, null, 2), { mode: 0o600 }) | ||
| } | ||
| } | ||
| async function readJsonSafe(filePath: string): Promise<Partial<McporterConfig>> { | ||
| try { | ||
| const content = await fs.readFile(filePath, "utf-8") | ||
| return JSON.parse(content) as Partial<McporterConfig> | ||
| } catch (err) { | ||
| if ((err as NodeJS.ErrnoException).code === "ENOENT") { | ||
| return {} | ||
| } | ||
| throw err | ||
| } | ||
| } | ||
| function convertMcpToMcporter(servers: Record<string, ClaudeMcpServer>): McporterConfig { | ||
| const mcpServers: Record<string, McporterServer> = {} | ||
| for (const [name, server] of Object.entries(servers)) { | ||
| if (server.command) { | ||
| mcpServers[name] = { | ||
| command: server.command, | ||
| args: server.args, | ||
| env: server.env, | ||
| headers: server.headers, | ||
| } | ||
| continue | ||
| } | ||
| if (server.url) { | ||
| mcpServers[name] = { | ||
| baseUrl: server.url, | ||
| headers: server.headers, | ||
| } | ||
| } | ||
| } | ||
| return { mcpServers } | ||
| } |
| import path from "path" | ||
| import { backupFile, copyDir, ensureDir, pathExists, readJson, writeJson, writeText } from "../utils/files" | ||
| import type { GeminiBundle } from "../types/gemini" | ||
| export async function writeGeminiBundle(outputRoot: string, bundle: GeminiBundle): Promise<void> { | ||
| const paths = resolveGeminiPaths(outputRoot) | ||
| await ensureDir(paths.geminiDir) | ||
| for (const skill of bundle.generatedSkills) { | ||
| await writeText(path.join(paths.skillsDir, skill.name, "SKILL.md"), skill.content + "\n") | ||
| } | ||
| for (const skill of bundle.skillDirs) { | ||
| await copyDir(skill.sourceDir, path.join(paths.skillsDir, skill.name)) | ||
| } | ||
| for (const command of bundle.commands) { | ||
| await writeText(path.join(paths.commandsDir, `${command.name}.toml`), command.content + "\n") | ||
| } | ||
| if (bundle.mcpServers && Object.keys(bundle.mcpServers).length > 0) { | ||
| const settingsPath = path.join(paths.geminiDir, "settings.json") | ||
| const backupPath = await backupFile(settingsPath) | ||
| if (backupPath) { | ||
| console.log(`Backed up existing settings.json to ${backupPath}`) | ||
| } | ||
| // Merge mcpServers into existing settings if present | ||
| let existingSettings: Record<string, unknown> = {} | ||
| if (await pathExists(settingsPath)) { | ||
| try { | ||
| existingSettings = await readJson<Record<string, unknown>>(settingsPath) | ||
| } catch { | ||
| console.warn("Warning: existing settings.json could not be parsed and will be replaced.") | ||
| } | ||
| } | ||
| const existingMcp = (existingSettings.mcpServers && typeof existingSettings.mcpServers === "object") | ||
| ? existingSettings.mcpServers as Record<string, unknown> | ||
| : {} | ||
| const merged = { ...existingSettings, mcpServers: { ...existingMcp, ...bundle.mcpServers } } | ||
| await writeJson(settingsPath, merged) | ||
| } | ||
| } | ||
| function resolveGeminiPaths(outputRoot: string) { | ||
| const base = path.basename(outputRoot) | ||
| // If already pointing at .gemini, write directly into it | ||
| if (base === ".gemini") { | ||
| return { | ||
| geminiDir: outputRoot, | ||
| skillsDir: path.join(outputRoot, "skills"), | ||
| commandsDir: path.join(outputRoot, "commands"), | ||
| } | ||
| } | ||
| // Otherwise nest under .gemini | ||
| return { | ||
| geminiDir: path.join(outputRoot, ".gemini"), | ||
| skillsDir: path.join(outputRoot, ".gemini", "skills"), | ||
| commandsDir: path.join(outputRoot, ".gemini", "commands"), | ||
| } | ||
| } |
| import path from "path" | ||
| import { | ||
| backupFile, | ||
| copyDir, | ||
| ensureDir, | ||
| pathExists, | ||
| readText, | ||
| writeJson, | ||
| writeText, | ||
| } from "../utils/files" | ||
| import type { PiBundle } from "../types/pi" | ||
| const PI_AGENTS_BLOCK_START = "<!-- BEGIN COMPOUND PI TOOL MAP -->" | ||
| const PI_AGENTS_BLOCK_END = "<!-- END COMPOUND PI TOOL MAP -->" | ||
| const PI_AGENTS_BLOCK_BODY = `## Compound Engineering (Pi compatibility) | ||
| This block is managed by compound-plugin. | ||
| Compatibility notes: | ||
| - Claude Task(agent, args) maps to the subagent extension tool | ||
| - For parallel agent runs, batch multiple subagent calls with multi_tool_use.parallel | ||
| - AskUserQuestion maps to the ask_user_question extension tool | ||
| - MCP access uses MCPorter via mcporter_list and mcporter_call extension tools | ||
| - MCPorter config path: .pi/compound-engineering/mcporter.json (project) or ~/.pi/agent/compound-engineering/mcporter.json (global) | ||
| ` | ||
| export async function writePiBundle(outputRoot: string, bundle: PiBundle): Promise<void> { | ||
| const paths = resolvePiPaths(outputRoot) | ||
| await ensureDir(paths.skillsDir) | ||
| await ensureDir(paths.promptsDir) | ||
| await ensureDir(paths.extensionsDir) | ||
| for (const prompt of bundle.prompts) { | ||
| await writeText(path.join(paths.promptsDir, `${prompt.name}.md`), prompt.content + "\n") | ||
| } | ||
| for (const skill of bundle.skillDirs) { | ||
| await copyDir(skill.sourceDir, path.join(paths.skillsDir, skill.name)) | ||
| } | ||
| for (const skill of bundle.generatedSkills) { | ||
| await writeText(path.join(paths.skillsDir, skill.name, "SKILL.md"), skill.content + "\n") | ||
| } | ||
| for (const extension of bundle.extensions) { | ||
| await writeText(path.join(paths.extensionsDir, extension.name), extension.content + "\n") | ||
| } | ||
| if (bundle.mcporterConfig) { | ||
| const backupPath = await backupFile(paths.mcporterConfigPath) | ||
| if (backupPath) { | ||
| console.log(`Backed up existing MCPorter config to ${backupPath}`) | ||
| } | ||
| await writeJson(paths.mcporterConfigPath, bundle.mcporterConfig) | ||
| } | ||
| await ensurePiAgentsBlock(paths.agentsPath) | ||
| } | ||
| function resolvePiPaths(outputRoot: string) { | ||
| const base = path.basename(outputRoot) | ||
| // Global install root: ~/.pi/agent | ||
| if (base === "agent") { | ||
| return { | ||
| skillsDir: path.join(outputRoot, "skills"), | ||
| promptsDir: path.join(outputRoot, "prompts"), | ||
| extensionsDir: path.join(outputRoot, "extensions"), | ||
| mcporterConfigPath: path.join(outputRoot, "compound-engineering", "mcporter.json"), | ||
| agentsPath: path.join(outputRoot, "AGENTS.md"), | ||
| } | ||
| } | ||
| // Project local .pi directory | ||
| if (base === ".pi") { | ||
| return { | ||
| skillsDir: path.join(outputRoot, "skills"), | ||
| promptsDir: path.join(outputRoot, "prompts"), | ||
| extensionsDir: path.join(outputRoot, "extensions"), | ||
| mcporterConfigPath: path.join(outputRoot, "compound-engineering", "mcporter.json"), | ||
| agentsPath: path.join(outputRoot, "AGENTS.md"), | ||
| } | ||
| } | ||
| // Custom output root -> nest under .pi | ||
| return { | ||
| skillsDir: path.join(outputRoot, ".pi", "skills"), | ||
| promptsDir: path.join(outputRoot, ".pi", "prompts"), | ||
| extensionsDir: path.join(outputRoot, ".pi", "extensions"), | ||
| mcporterConfigPath: path.join(outputRoot, ".pi", "compound-engineering", "mcporter.json"), | ||
| agentsPath: path.join(outputRoot, "AGENTS.md"), | ||
| } | ||
| } | ||
| async function ensurePiAgentsBlock(filePath: string): Promise<void> { | ||
| const block = buildPiAgentsBlock() | ||
| if (!(await pathExists(filePath))) { | ||
| await writeText(filePath, block + "\n") | ||
| return | ||
| } | ||
| const existing = await readText(filePath) | ||
| const updated = upsertBlock(existing, block) | ||
| if (updated !== existing) { | ||
| await writeText(filePath, updated) | ||
| } | ||
| } | ||
| function buildPiAgentsBlock(): string { | ||
| return [PI_AGENTS_BLOCK_START, PI_AGENTS_BLOCK_BODY.trim(), PI_AGENTS_BLOCK_END].join("\n") | ||
| } | ||
| function upsertBlock(existing: string, block: string): string { | ||
| const startIndex = existing.indexOf(PI_AGENTS_BLOCK_START) | ||
| const endIndex = existing.indexOf(PI_AGENTS_BLOCK_END) | ||
| if (startIndex !== -1 && endIndex !== -1 && endIndex > startIndex) { | ||
| const before = existing.slice(0, startIndex).trimEnd() | ||
| const after = existing.slice(endIndex + PI_AGENTS_BLOCK_END.length).trimStart() | ||
| return [before, block, after].filter(Boolean).join("\n\n") + "\n" | ||
| } | ||
| if (existing.trim().length === 0) { | ||
| return block + "\n" | ||
| } | ||
| return existing.trimEnd() + "\n\n" + block + "\n" | ||
| } |
| export const PI_COMPAT_EXTENSION_SOURCE = `import fs from "node:fs" | ||
| import os from "node:os" | ||
| import path from "node:path" | ||
| import { fileURLToPath } from "node:url" | ||
| import type { ExtensionAPI } from "@mariozechner/pi-coding-agent" | ||
| import { Type } from "@sinclair/typebox" | ||
| const MAX_BYTES = 50 * 1024 | ||
| const DEFAULT_SUBAGENT_TIMEOUT_MS = 10 * 60 * 1000 | ||
| const MAX_PARALLEL_SUBAGENTS = 8 | ||
| type SubagentTask = { | ||
| agent: string | ||
| task: string | ||
| cwd?: string | ||
| } | ||
| type SubagentResult = { | ||
| agent: string | ||
| task: string | ||
| cwd: string | ||
| exitCode: number | ||
| output: string | ||
| stderr: string | ||
| } | ||
| function truncate(value: string): string { | ||
| const input = value ?? "" | ||
| if (Buffer.byteLength(input, "utf8") <= MAX_BYTES) return input | ||
| // Slice by bytes, not characters, so multibyte output cannot exceed the cap | ||
| const head = Buffer.from(input, "utf8").subarray(0, MAX_BYTES).toString("utf8") | ||
| return head + "\\n\\n[Output truncated to 50KB]" | ||
| } | ||
| function shellEscape(value: string): string { | ||
| return "'" + value.replace(/'/g, "'\\"'\\"'") + "'" | ||
| } | ||
| function normalizeName(value: string): string { | ||
| return String(value || "") | ||
| .trim() | ||
| .toLowerCase() | ||
| .replace(/[^a-z0-9_-]+/g, "-") | ||
| .replace(/-+/g, "-") | ||
| .replace(/^-+|-+$/g, "") | ||
| } | ||
| function resolveBundledMcporterConfigPath(): string | undefined { | ||
| try { | ||
| const extensionDir = path.dirname(fileURLToPath(import.meta.url)) | ||
| const candidates = [ | ||
| path.join(extensionDir, "..", "pi-resources", "compound-engineering", "mcporter.json"), | ||
| path.join(extensionDir, "..", "compound-engineering", "mcporter.json"), | ||
| ] | ||
| for (const candidate of candidates) { | ||
| if (fs.existsSync(candidate)) return candidate | ||
| } | ||
| } catch { | ||
| // noop: bundled path is best-effort fallback | ||
| } | ||
| return undefined | ||
| } | ||
| function resolveMcporterConfigPath(cwd: string, explicit?: string): string | undefined { | ||
| if (explicit && explicit.trim()) { | ||
| return path.resolve(explicit) | ||
| } | ||
| const projectPath = path.join(cwd, ".pi", "compound-engineering", "mcporter.json") | ||
| if (fs.existsSync(projectPath)) return projectPath | ||
| const globalPath = path.join(os.homedir(), ".pi", "agent", "compound-engineering", "mcporter.json") | ||
| if (fs.existsSync(globalPath)) return globalPath | ||
| return resolveBundledMcporterConfigPath() | ||
| } | ||
| function resolveTaskCwd(baseCwd: string, taskCwd?: string): string { | ||
| if (!taskCwd || !taskCwd.trim()) return baseCwd | ||
| const expanded = taskCwd === "~" | ||
| ? os.homedir() | ||
| : taskCwd.startsWith("~" + path.sep) | ||
| ? path.join(os.homedir(), taskCwd.slice(2)) | ||
| : taskCwd | ||
| return path.resolve(baseCwd, expanded) | ||
| } | ||
| async function runSingleSubagent( | ||
| pi: ExtensionAPI, | ||
| baseCwd: string, | ||
| task: SubagentTask, | ||
| signal?: AbortSignal, | ||
| timeoutMs = DEFAULT_SUBAGENT_TIMEOUT_MS, | ||
| ): Promise<SubagentResult> { | ||
| const agent = normalizeName(task.agent) | ||
| if (!agent) { | ||
| throw new Error("Subagent task is missing a valid agent name") | ||
| } | ||
| const taskText = String(task.task ?? "").trim() | ||
| if (!taskText) { | ||
| throw new Error("Subagent task for " + agent + " is empty") | ||
| } | ||
| const cwd = resolveTaskCwd(baseCwd, task.cwd) | ||
| const prompt = "/skill:" + agent + " " + taskText | ||
| const script = "cd " + shellEscape(cwd) + " && pi --no-session -p " + shellEscape(prompt) | ||
| const result = await pi.exec("bash", ["-lc", script], { signal, timeout: timeoutMs }) | ||
| return { | ||
| agent, | ||
| task: taskText, | ||
| cwd, | ||
| exitCode: result.code, | ||
| output: truncate(result.stdout || ""), | ||
| stderr: truncate(result.stderr || ""), | ||
| } | ||
| } | ||
| async function runParallelSubagents( | ||
| pi: ExtensionAPI, | ||
| baseCwd: string, | ||
| tasks: SubagentTask[], | ||
| signal?: AbortSignal, | ||
| timeoutMs = DEFAULT_SUBAGENT_TIMEOUT_MS, | ||
| maxConcurrency = 4, | ||
| onProgress?: (completed: number, total: number) => void, | ||
| ): Promise<SubagentResult[]> { | ||
| const safeConcurrency = Math.max(1, Math.min(maxConcurrency, MAX_PARALLEL_SUBAGENTS, tasks.length)) | ||
| const results: SubagentResult[] = new Array(tasks.length) | ||
| let nextIndex = 0 | ||
| let completed = 0 | ||
| const workers = Array.from({ length: safeConcurrency }, async () => { | ||
| while (true) { | ||
| const current = nextIndex | ||
| nextIndex += 1 | ||
| if (current >= tasks.length) return | ||
| results[current] = await runSingleSubagent(pi, baseCwd, tasks[current], signal, timeoutMs) | ||
| completed += 1 | ||
| onProgress?.(completed, tasks.length) | ||
| } | ||
| }) | ||
| await Promise.all(workers) | ||
| return results | ||
| } | ||
| function formatSubagentSummary(results: SubagentResult[]): string { | ||
| if (results.length === 0) return "No subagent work was executed." | ||
| const success = results.filter((result) => result.exitCode === 0).length | ||
| const failed = results.length - success | ||
| const header = failed === 0 | ||
| ? "Subagent run completed: " + success + "/" + results.length + " succeeded." | ||
| : "Subagent run completed: " + success + "/" + results.length + " succeeded, " + failed + " failed." | ||
| const lines = results.map((result) => { | ||
| const status = result.exitCode === 0 ? "ok" : "error" | ||
| const body = result.output || result.stderr || "(no output)" | ||
| const preview = body.split("\\n").slice(0, 6).join("\\n") | ||
| return "\\n[" + status + "] " + result.agent + "\\n" + preview | ||
| }) | ||
| return header + lines.join("\\n") | ||
| } | ||
| export default function (pi: ExtensionAPI) { | ||
| pi.registerTool({ | ||
| name: "ask_user_question", | ||
| label: "Ask User Question", | ||
| description: "Ask the user a question with optional choices.", | ||
| parameters: Type.Object({ | ||
| question: Type.String({ description: "Question shown to the user" }), | ||
| options: Type.Optional(Type.Array(Type.String(), { description: "Selectable options" })), | ||
| allowCustom: Type.Optional(Type.Boolean({ default: true })), | ||
| }), | ||
| async execute(_toolCallId, params, _signal, _onUpdate, ctx) { | ||
| if (!ctx.hasUI) { | ||
| return { | ||
| isError: true, | ||
| content: [{ type: "text", text: "UI is unavailable in this mode." }], | ||
| details: {}, | ||
| } | ||
| } | ||
| const options = params.options ?? [] | ||
| const allowCustom = params.allowCustom ?? true | ||
| if (options.length === 0) { | ||
| const answer = await ctx.ui.input(params.question) | ||
| if (!answer) { | ||
| return { | ||
| content: [{ type: "text", text: "User cancelled." }], | ||
| details: { answer: null }, | ||
| } | ||
| } | ||
| return { | ||
| content: [{ type: "text", text: "User answered: " + answer }], | ||
| details: { answer, mode: "input" }, | ||
| } | ||
| } | ||
| const customLabel = "Other (type custom answer)" | ||
| const selectable = allowCustom ? [...options, customLabel] : options | ||
| const selected = await ctx.ui.select(params.question, selectable) | ||
| if (!selected) { | ||
| return { | ||
| content: [{ type: "text", text: "User cancelled." }], | ||
| details: { answer: null }, | ||
| } | ||
| } | ||
| if (selected === customLabel) { | ||
| const custom = await ctx.ui.input("Your answer") | ||
| if (!custom) { | ||
| return { | ||
| content: [{ type: "text", text: "User cancelled." }], | ||
| details: { answer: null }, | ||
| } | ||
| } | ||
| return { | ||
| content: [{ type: "text", text: "User answered: " + custom }], | ||
| details: { answer: custom, mode: "custom" }, | ||
| } | ||
| } | ||
| return { | ||
| content: [{ type: "text", text: "User selected: " + selected }], | ||
| details: { answer: selected, mode: "select" }, | ||
| } | ||
| }, | ||
| }) | ||
| const subagentTaskSchema = Type.Object({ | ||
| agent: Type.String({ description: "Skill/agent name to invoke" }), | ||
| task: Type.String({ description: "Task instructions for that skill" }), | ||
| cwd: Type.Optional(Type.String({ description: "Optional working directory for this task" })), | ||
| }) | ||
| pi.registerTool({ | ||
| name: "subagent", | ||
| label: "Subagent", | ||
| description: "Run one or more skill-based subagent tasks. Supports single, parallel, and chained execution.", | ||
| parameters: Type.Object({ | ||
| agent: Type.Optional(Type.String({ description: "Single subagent name" })), | ||
| task: Type.Optional(Type.String({ description: "Single subagent task" })), | ||
| cwd: Type.Optional(Type.String({ description: "Working directory for single mode" })), | ||
| tasks: Type.Optional(Type.Array(subagentTaskSchema, { description: "Parallel subagent tasks" })), | ||
| chain: Type.Optional(Type.Array(subagentTaskSchema, { description: "Sequential tasks; supports {previous} placeholder" })), | ||
| maxConcurrency: Type.Optional(Type.Number({ default: 4 })), | ||
| timeoutMs: Type.Optional(Type.Number({ default: DEFAULT_SUBAGENT_TIMEOUT_MS })), | ||
| }), | ||
| async execute(_toolCallId, params, signal, onUpdate, ctx) { | ||
| const hasSingle = Boolean(params.agent && params.task) | ||
| const hasTasks = Boolean(params.tasks && params.tasks.length > 0) | ||
| const hasChain = Boolean(params.chain && params.chain.length > 0) | ||
| const modeCount = Number(hasSingle) + Number(hasTasks) + Number(hasChain) | ||
| if (modeCount !== 1) { | ||
| return { | ||
| isError: true, | ||
| content: [{ type: "text", text: "Provide exactly one mode: single (agent+task), tasks, or chain." }], | ||
| details: {}, | ||
| } | ||
| } | ||
| const timeoutMs = Number(params.timeoutMs || DEFAULT_SUBAGENT_TIMEOUT_MS) | ||
| try { | ||
| if (hasSingle) { | ||
| const result = await runSingleSubagent( | ||
| pi, | ||
| ctx.cwd, | ||
| { agent: params.agent!, task: params.task!, cwd: params.cwd }, | ||
| signal, | ||
| timeoutMs, | ||
| ) | ||
| const body = formatSubagentSummary([result]) | ||
| return { | ||
| isError: result.exitCode !== 0, | ||
| content: [{ type: "text", text: body }], | ||
| details: { mode: "single", results: [result] }, | ||
| } | ||
| } | ||
| if (hasTasks) { | ||
| const tasks = params.tasks as SubagentTask[] | ||
| const maxConcurrency = Number(params.maxConcurrency || 4) | ||
| const results = await runParallelSubagents( | ||
| pi, | ||
| ctx.cwd, | ||
| tasks, | ||
| signal, | ||
| timeoutMs, | ||
| maxConcurrency, | ||
| (completed, total) => { | ||
| onUpdate?.({ | ||
| content: [{ type: "text", text: "Subagent progress: " + completed + "/" + total }], | ||
| details: { mode: "parallel", completed, total }, | ||
| }) | ||
| }, | ||
| ) | ||
| const body = formatSubagentSummary(results) | ||
| const hasFailure = results.some((result) => result.exitCode !== 0) | ||
| return { | ||
| isError: hasFailure, | ||
| content: [{ type: "text", text: body }], | ||
| details: { mode: "parallel", results }, | ||
| } | ||
| } | ||
| const chain = params.chain as SubagentTask[] | ||
| const results: SubagentResult[] = [] | ||
| let previous = "" | ||
| for (const step of chain) { | ||
| // Use a function replacer so "$" sequences in the previous output are not expanded | ||
| const resolvedTask = step.task.replace(/\\{previous\\}/g, () => previous) | ||
| const result = await runSingleSubagent( | ||
| pi, | ||
| ctx.cwd, | ||
| { agent: step.agent, task: resolvedTask, cwd: step.cwd }, | ||
| signal, | ||
| timeoutMs, | ||
| ) | ||
| results.push(result) | ||
| previous = result.output || result.stderr | ||
| onUpdate?.({ | ||
| content: [{ type: "text", text: "Subagent chain progress: " + results.length + "/" + chain.length }], | ||
| details: { mode: "chain", completed: results.length, total: chain.length }, | ||
| }) | ||
| if (result.exitCode !== 0) break | ||
| } | ||
| const body = formatSubagentSummary(results) | ||
| const hasFailure = results.some((result) => result.exitCode !== 0) | ||
| return { | ||
| isError: hasFailure, | ||
| content: [{ type: "text", text: body }], | ||
| details: { mode: "chain", results }, | ||
| } | ||
| } catch (error) { | ||
| return { | ||
| isError: true, | ||
| content: [{ type: "text", text: error instanceof Error ? error.message : String(error) }], | ||
| details: {}, | ||
| } | ||
| } | ||
| }, | ||
| }) | ||
| pi.registerTool({ | ||
| name: "mcporter_list", | ||
| label: "MCPorter List", | ||
| description: "List tools on an MCP server through MCPorter.", | ||
| parameters: Type.Object({ | ||
| server: Type.String({ description: "Configured MCP server name" }), | ||
| allParameters: Type.Optional(Type.Boolean({ default: false })), | ||
| json: Type.Optional(Type.Boolean({ default: true })), | ||
| configPath: Type.Optional(Type.String({ description: "Optional mcporter config path" })), | ||
| }), | ||
| async execute(_toolCallId, params, signal, _onUpdate, ctx) { | ||
| const args = ["list", params.server] | ||
| if (params.allParameters) args.push("--all-parameters") | ||
| if (params.json ?? true) args.push("--json") | ||
| const configPath = resolveMcporterConfigPath(ctx.cwd, params.configPath) | ||
| if (configPath) { | ||
| args.push("--config", configPath) | ||
| } | ||
| const result = await pi.exec("mcporter", args, { signal }) | ||
| const output = truncate(result.stdout || result.stderr || "") | ||
| return { | ||
| isError: result.code !== 0, | ||
| content: [{ type: "text", text: output || "(no output)" }], | ||
| details: { | ||
| exitCode: result.code, | ||
| command: "mcporter " + args.join(" "), | ||
| configPath, | ||
| }, | ||
| } | ||
| }, | ||
| }) | ||
| pi.registerTool({ | ||
| name: "mcporter_call", | ||
| label: "MCPorter Call", | ||
| description: "Call a specific MCP tool through MCPorter.", | ||
| parameters: Type.Object({ | ||
| call: Type.Optional(Type.String({ description: "Function-style call, e.g. linear.list_issues(limit: 5)" })), | ||
| server: Type.Optional(Type.String({ description: "Server name (if call is omitted)" })), | ||
| tool: Type.Optional(Type.String({ description: "Tool name (if call is omitted)" })), | ||
| args: Type.Optional(Type.Record(Type.String(), Type.Any(), { description: "JSON arguments object" })), | ||
| configPath: Type.Optional(Type.String({ description: "Optional mcporter config path" })), | ||
| }), | ||
| async execute(_toolCallId, params, signal, _onUpdate, ctx) { | ||
| const args = ["call"] | ||
| if (params.call && params.call.trim()) { | ||
| args.push(params.call.trim()) | ||
| } else { | ||
| if (!params.server || !params.tool) { | ||
| return { | ||
| isError: true, | ||
| content: [{ type: "text", text: "Provide either call, or server + tool." }], | ||
| details: {}, | ||
| } | ||
| } | ||
| args.push(params.server + "." + params.tool) | ||
| if (params.args) { | ||
| args.push("--args", JSON.stringify(params.args)) | ||
| } | ||
| } | ||
| args.push("--output", "json") | ||
| const configPath = resolveMcporterConfigPath(ctx.cwd, params.configPath) | ||
| if (configPath) { | ||
| args.push("--config", configPath) | ||
| } | ||
| const result = await pi.exec("mcporter", args, { signal }) | ||
| const output = truncate(result.stdout || result.stderr || "") | ||
| return { | ||
| isError: result.code !== 0, | ||
| content: [{ type: "text", text: output || "(no output)" }], | ||
| details: { | ||
| exitCode: result.code, | ||
| command: "mcporter " + args.join(" "), | ||
| configPath, | ||
| }, | ||
| } | ||
| }, | ||
| }) | ||
| } | ||
| ` |
| export type GeminiSkill = { | ||
| name: string | ||
| content: string // Full SKILL.md with YAML frontmatter | ||
| } | ||
| export type GeminiSkillDir = { | ||
| name: string | ||
| sourceDir: string | ||
| } | ||
| export type GeminiCommand = { | ||
| name: string // e.g. "plan" or "workflows/plan" | ||
| content: string // Full TOML content | ||
| } | ||
| export type GeminiMcpServer = { | ||
| command?: string | ||
| args?: string[] | ||
| env?: Record<string, string> | ||
| url?: string | ||
| headers?: Record<string, string> | ||
| } | ||
| export type GeminiBundle = { | ||
| generatedSkills: GeminiSkill[] // From agents | ||
| skillDirs: GeminiSkillDir[] // From skills (pass-through) | ||
| commands: GeminiCommand[] | ||
| mcpServers?: Record<string, GeminiMcpServer> | ||
| } |
| export type PiPrompt = { | ||
| name: string | ||
| content: string | ||
| } | ||
| export type PiSkillDir = { | ||
| name: string | ||
| sourceDir: string | ||
| } | ||
| export type PiGeneratedSkill = { | ||
| name: string | ||
| content: string | ||
| } | ||
| export type PiExtensionFile = { | ||
| name: string | ||
| content: string | ||
| } | ||
| export type PiMcporterServer = { | ||
| description?: string | ||
| baseUrl?: string | ||
| command?: string | ||
| args?: string[] | ||
| env?: Record<string, string> | ||
| headers?: Record<string, string> | ||
| } | ||
| export type PiMcporterConfig = { | ||
| mcpServers: Record<string, PiMcporterServer> | ||
| } | ||
| export type PiBundle = { | ||
| prompts: PiPrompt[] | ||
| skillDirs: PiSkillDir[] | ||
| generatedSkills: PiGeneratedSkill[] | ||
| extensions: PiExtensionFile[] | ||
| mcporterConfig?: PiMcporterConfig | ||
| } |
| import os from "os" | ||
| import path from "path" | ||
| export function expandHome(value: string): string { | ||
| if (value === "~") return os.homedir() | ||
| if (value.startsWith(`~${path.sep}`)) { | ||
| return path.join(os.homedir(), value.slice(2)) | ||
| } | ||
| return value | ||
| } | ||
| export function resolveTargetHome(value: unknown, defaultPath: string): string { | ||
| if (!value) return defaultPath | ||
| const raw = String(value).trim() | ||
| if (!raw) return defaultPath | ||
| return path.resolve(expandHome(raw)) | ||
| } |
| import { describe, expect, test } from "bun:test" | ||
| import { convertClaudeToGemini, toToml, transformContentForGemini } from "../src/converters/claude-to-gemini" | ||
| import { parseFrontmatter } from "../src/utils/frontmatter" | ||
| import type { ClaudePlugin } from "../src/types/claude" | ||
| const fixturePlugin: ClaudePlugin = { | ||
| root: "/tmp/plugin", | ||
| manifest: { name: "fixture", version: "1.0.0" }, | ||
| agents: [ | ||
| { | ||
| name: "Security Reviewer", | ||
| description: "Security-focused agent", | ||
| capabilities: ["Threat modeling", "OWASP"], | ||
| model: "claude-sonnet-4-20250514", | ||
| body: "Focus on vulnerabilities.", | ||
| sourcePath: "/tmp/plugin/agents/security-reviewer.md", | ||
| }, | ||
| ], | ||
| commands: [ | ||
| { | ||
| name: "workflows:plan", | ||
| description: "Planning command", | ||
| argumentHint: "[FOCUS]", | ||
| model: "inherit", | ||
| allowedTools: ["Read"], | ||
| body: "Plan the work.", | ||
| sourcePath: "/tmp/plugin/commands/workflows/plan.md", | ||
| }, | ||
| ], | ||
| skills: [ | ||
| { | ||
| name: "existing-skill", | ||
| description: "Existing skill", | ||
| sourceDir: "/tmp/plugin/skills/existing-skill", | ||
| skillPath: "/tmp/plugin/skills/existing-skill/SKILL.md", | ||
| }, | ||
| ], | ||
| hooks: undefined, | ||
| mcpServers: { | ||
| local: { command: "echo", args: ["hello"] }, | ||
| }, | ||
| } | ||
| describe("convertClaudeToGemini", () => { | ||
| test("converts agents to skills with SKILL.md frontmatter", () => { | ||
| const bundle = convertClaudeToGemini(fixturePlugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| const skill = bundle.generatedSkills.find((s) => s.name === "security-reviewer") | ||
| expect(skill).toBeDefined() | ||
| const parsed = parseFrontmatter(skill!.content) | ||
| expect(parsed.data.name).toBe("security-reviewer") | ||
| expect(parsed.data.description).toBe("Security-focused agent") | ||
| expect(parsed.body).toContain("Focus on vulnerabilities.") | ||
| }) | ||
| test("agent with capabilities prepended to body", () => { | ||
| const bundle = convertClaudeToGemini(fixturePlugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| const skill = bundle.generatedSkills.find((s) => s.name === "security-reviewer") | ||
| expect(skill).toBeDefined() | ||
| const parsed = parseFrontmatter(skill!.content) | ||
| expect(parsed.body).toContain("## Capabilities") | ||
| expect(parsed.body).toContain("- Threat modeling") | ||
| expect(parsed.body).toContain("- OWASP") | ||
| }) | ||
| test("agent with empty description gets default description", () => { | ||
| const plugin: ClaudePlugin = { | ||
| ...fixturePlugin, | ||
| agents: [ | ||
| { | ||
| name: "my-agent", | ||
| body: "Do things.", | ||
| sourcePath: "/tmp/plugin/agents/my-agent.md", | ||
| }, | ||
| ], | ||
| commands: [], | ||
| skills: [], | ||
| } | ||
| const bundle = convertClaudeToGemini(plugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| const parsed = parseFrontmatter(bundle.generatedSkills[0].content) | ||
| expect(parsed.data.description).toBe("Use this skill for my-agent tasks") | ||
| }) | ||
| test("agent model field silently dropped", () => { | ||
| const bundle = convertClaudeToGemini(fixturePlugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| const skill = bundle.generatedSkills.find((s) => s.name === "security-reviewer") | ||
| const parsed = parseFrontmatter(skill!.content) | ||
| expect(parsed.data.model).toBeUndefined() | ||
| }) | ||
| test("agent with empty body gets default body text", () => { | ||
| const plugin: ClaudePlugin = { | ||
| ...fixturePlugin, | ||
| agents: [ | ||
| { | ||
| name: "Empty Agent", | ||
| description: "An empty agent", | ||
| body: "", | ||
| sourcePath: "/tmp/plugin/agents/empty.md", | ||
| }, | ||
| ], | ||
| commands: [], | ||
| skills: [], | ||
| } | ||
| const bundle = convertClaudeToGemini(plugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| const parsed = parseFrontmatter(bundle.generatedSkills[0].content) | ||
| expect(parsed.body).toContain("Instructions converted from the Empty Agent agent.") | ||
| }) | ||
| test("converts commands to TOML with prompt and description", () => { | ||
| const bundle = convertClaudeToGemini(fixturePlugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| expect(bundle.commands).toHaveLength(1) | ||
| const command = bundle.commands[0] | ||
| expect(command.name).toBe("workflows/plan") | ||
| expect(command.content).toContain('description = "Planning command"') | ||
| expect(command.content).toContain('prompt = """') | ||
| expect(command.content).toContain("Plan the work.") | ||
| }) | ||
| test("namespaced command creates correct path", () => { | ||
| const bundle = convertClaudeToGemini(fixturePlugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| const command = bundle.commands.find((c) => c.name === "workflows/plan") | ||
| expect(command).toBeDefined() | ||
| }) | ||
| test("command with argument-hint gets {{args}} placeholder", () => { | ||
| const bundle = convertClaudeToGemini(fixturePlugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| const command = bundle.commands[0] | ||
| expect(command.content).toContain("{{args}}") | ||
| }) | ||
| test("command with disable-model-invocation is still included", () => { | ||
| const plugin: ClaudePlugin = { | ||
| ...fixturePlugin, | ||
| commands: [ | ||
| { | ||
| name: "disabled-command", | ||
| description: "Disabled command", | ||
| disableModelInvocation: true, | ||
| body: "Disabled body.", | ||
| sourcePath: "/tmp/plugin/commands/disabled.md", | ||
| }, | ||
| ], | ||
| agents: [], | ||
| skills: [], | ||
| } | ||
| const bundle = convertClaudeToGemini(plugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| // Gemini TOML commands are prompts, not code — always include | ||
| expect(bundle.commands).toHaveLength(1) | ||
| expect(bundle.commands[0].name).toBe("disabled-command") | ||
| }) | ||
| test("command allowedTools silently dropped", () => { | ||
| const bundle = convertClaudeToGemini(fixturePlugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| const command = bundle.commands[0] | ||
| expect(command.content).not.toContain("allowedTools") | ||
| expect(command.content).not.toContain("Read") | ||
| }) | ||
| test("skills pass through as directory references", () => { | ||
| const bundle = convertClaudeToGemini(fixturePlugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| expect(bundle.skillDirs).toHaveLength(1) | ||
| expect(bundle.skillDirs[0].name).toBe("existing-skill") | ||
| expect(bundle.skillDirs[0].sourceDir).toBe("/tmp/plugin/skills/existing-skill") | ||
| }) | ||
| test("MCP servers convert to settings.json-compatible config", () => { | ||
| const bundle = convertClaudeToGemini(fixturePlugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| expect(bundle.mcpServers?.local?.command).toBe("echo") | ||
| expect(bundle.mcpServers?.local?.args).toEqual(["hello"]) | ||
| }) | ||
| test("plugin with zero agents produces empty generatedSkills", () => { | ||
| const plugin: ClaudePlugin = { | ||
| ...fixturePlugin, | ||
| agents: [], | ||
| commands: [], | ||
| skills: [], | ||
| } | ||
| const bundle = convertClaudeToGemini(plugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| expect(bundle.generatedSkills).toHaveLength(0) | ||
| }) | ||
| test("plugin with only skills works correctly", () => { | ||
| const plugin: ClaudePlugin = { | ||
| ...fixturePlugin, | ||
| agents: [], | ||
| commands: [], | ||
| } | ||
| const bundle = convertClaudeToGemini(plugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| expect(bundle.generatedSkills).toHaveLength(0) | ||
| expect(bundle.skillDirs).toHaveLength(1) | ||
| expect(bundle.commands).toHaveLength(0) | ||
| }) | ||
| test("agent name colliding with skill name gets deduplicated", () => { | ||
| const plugin: ClaudePlugin = { | ||
| ...fixturePlugin, | ||
| skills: [{ name: "security-reviewer", description: "Existing skill", sourceDir: "/tmp/skill", skillPath: "/tmp/skill/SKILL.md" }], | ||
| agents: [{ name: "Security Reviewer", description: "Agent version", body: "Body.", sourcePath: "/tmp/agents/sr.md" }], | ||
| commands: [], | ||
| } | ||
| const bundle = convertClaudeToGemini(plugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| // Agent should be deduplicated since skill already has "security-reviewer" | ||
| expect(bundle.generatedSkills[0].name).toBe("security-reviewer-2") | ||
| expect(bundle.skillDirs[0].name).toBe("security-reviewer") | ||
| }) | ||
| test("hooks present emits console.warn", () => { | ||
| const warnings: string[] = [] | ||
| const originalWarn = console.warn | ||
| console.warn = (msg: string) => warnings.push(msg) | ||
| const plugin: ClaudePlugin = { | ||
| ...fixturePlugin, | ||
| hooks: { hooks: { PreToolUse: [{ matcher: "*", body: "hook body" }] } }, | ||
| agents: [], | ||
| commands: [], | ||
| skills: [], | ||
| } | ||
| convertClaudeToGemini(plugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| console.warn = originalWarn | ||
| expect(warnings.some((w) => w.includes("Gemini"))).toBe(true) | ||
| }) | ||
| }) | ||
| describe("transformContentForGemini", () => { | ||
| test("transforms .claude/ paths to .gemini/", () => { | ||
| const result = transformContentForGemini("Read .claude/settings.json for config.") | ||
| expect(result).toContain(".gemini/settings.json") | ||
| expect(result).not.toContain(".claude/") | ||
| }) | ||
| test("transforms ~/.claude/ paths to ~/.gemini/", () => { | ||
| const result = transformContentForGemini("Check ~/.claude/config for settings.") | ||
| expect(result).toContain("~/.gemini/config") | ||
| expect(result).not.toContain("~/.claude/") | ||
| }) | ||
| test("transforms Task agent(args) to natural language skill reference", () => { | ||
| const input = `Run these: | ||
| - Task repo-research-analyst(feature_description) | ||
| - Task learnings-researcher(feature_description) | ||
| Task best-practices-researcher(topic)` | ||
| const result = transformContentForGemini(input) | ||
| expect(result).toContain("Use the repo-research-analyst skill to: feature_description") | ||
| expect(result).toContain("Use the learnings-researcher skill to: feature_description") | ||
| expect(result).toContain("Use the best-practices-researcher skill to: topic") | ||
| expect(result).not.toContain("Task repo-research-analyst") | ||
| }) | ||
| test("transforms @agent references to skill references", () => { | ||
| const result = transformContentForGemini("Ask @security-sentinel for a review.") | ||
| expect(result).toContain("the security-sentinel skill") | ||
| expect(result).not.toContain("@security-sentinel") | ||
| }) | ||
| }) | ||
| describe("toToml", () => { | ||
| test("produces valid TOML with description and prompt", () => { | ||
| const result = toToml("A description", "The prompt content") | ||
| expect(result).toContain('description = "A description"') | ||
| expect(result).toContain('prompt = """') | ||
| expect(result).toContain("The prompt content") | ||
| expect(result).toContain('"""') | ||
| }) | ||
| test("escapes quotes in description", () => { | ||
| const result = toToml('Say "hello"', "Prompt") | ||
| expect(result).toContain('description = "Say \\"hello\\""') | ||
| }) | ||
| test("escapes triple quotes in prompt", () => { | ||
| const result = toToml("A command", 'Content with """ inside it') | ||
| // Should not contain an unescaped """ that would close the TOML multi-line string prematurely | ||
| // The prompt section should have the escaped version | ||
| expect(result).toContain('description = "A command"') | ||
| expect(result).toContain('prompt = """') | ||
| // The inner """ should be escaped | ||
expect(result).not.toMatch(/""".*""".*"""/s) // An unescaped inner """ would yield three triple-quote sequences (opening, inner, closing) | ||
| // Verify it contains the escaped form | ||
| expect(result).toContain('\\"\\"\\"') | ||
| }) | ||
| }) |
| import { describe, expect, test } from "bun:test" | ||
| import { promises as fs } from "fs" | ||
| import path from "path" | ||
| import os from "os" | ||
| import { writeGeminiBundle } from "../src/targets/gemini" | ||
| import type { GeminiBundle } from "../src/types/gemini" | ||
| async function exists(filePath: string): Promise<boolean> { | ||
| try { | ||
| await fs.access(filePath) | ||
| return true | ||
| } catch { | ||
| return false | ||
| } | ||
| } | ||
| describe("writeGeminiBundle", () => { | ||
| test("writes skills, commands, and settings.json", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "gemini-test-")) | ||
| const bundle: GeminiBundle = { | ||
| generatedSkills: [ | ||
| { | ||
| name: "security-reviewer", | ||
| content: "---\nname: security-reviewer\ndescription: Security\n---\n\nReview code.", | ||
| }, | ||
| ], | ||
| skillDirs: [ | ||
| { | ||
| name: "skill-one", | ||
| sourceDir: path.join(import.meta.dir, "fixtures", "sample-plugin", "skills", "skill-one"), | ||
| }, | ||
| ], | ||
| commands: [ | ||
| { | ||
| name: "plan", | ||
| content: 'description = "Plan"\nprompt = """\nPlan the work.\n"""', | ||
| }, | ||
| ], | ||
| mcpServers: { | ||
| playwright: { command: "npx", args: ["-y", "@anthropic/mcp-playwright"] }, | ||
| }, | ||
| } | ||
| await writeGeminiBundle(tempRoot, bundle) | ||
| expect(await exists(path.join(tempRoot, ".gemini", "skills", "security-reviewer", "SKILL.md"))).toBe(true) | ||
| expect(await exists(path.join(tempRoot, ".gemini", "skills", "skill-one", "SKILL.md"))).toBe(true) | ||
| expect(await exists(path.join(tempRoot, ".gemini", "commands", "plan.toml"))).toBe(true) | ||
| expect(await exists(path.join(tempRoot, ".gemini", "settings.json"))).toBe(true) | ||
| const skillContent = await fs.readFile( | ||
| path.join(tempRoot, ".gemini", "skills", "security-reviewer", "SKILL.md"), | ||
| "utf8", | ||
| ) | ||
| expect(skillContent).toContain("Review code.") | ||
| const commandContent = await fs.readFile( | ||
| path.join(tempRoot, ".gemini", "commands", "plan.toml"), | ||
| "utf8", | ||
| ) | ||
| expect(commandContent).toContain("Plan the work.") | ||
| const settingsContent = JSON.parse( | ||
| await fs.readFile(path.join(tempRoot, ".gemini", "settings.json"), "utf8"), | ||
| ) | ||
| expect(settingsContent.mcpServers.playwright.command).toBe("npx") | ||
| }) | ||
| test("namespaced commands create subdirectories", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "gemini-ns-")) | ||
| const bundle: GeminiBundle = { | ||
| generatedSkills: [], | ||
| skillDirs: [], | ||
| commands: [ | ||
| { | ||
| name: "workflows/plan", | ||
| content: 'description = "Plan"\nprompt = """\nPlan.\n"""', | ||
| }, | ||
| ], | ||
| } | ||
| await writeGeminiBundle(tempRoot, bundle) | ||
| expect(await exists(path.join(tempRoot, ".gemini", "commands", "workflows", "plan.toml"))).toBe(true) | ||
| }) | ||
| test("does not double-nest when output root is .gemini", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "gemini-home-")) | ||
| const geminiRoot = path.join(tempRoot, ".gemini") | ||
| const bundle: GeminiBundle = { | ||
| generatedSkills: [ | ||
| { name: "reviewer", content: "Reviewer skill content" }, | ||
| ], | ||
| skillDirs: [], | ||
| commands: [ | ||
| { name: "plan", content: "Plan content" }, | ||
| ], | ||
| } | ||
| await writeGeminiBundle(geminiRoot, bundle) | ||
| expect(await exists(path.join(geminiRoot, "skills", "reviewer", "SKILL.md"))).toBe(true) | ||
| expect(await exists(path.join(geminiRoot, "commands", "plan.toml"))).toBe(true) | ||
| // Should NOT double-nest under .gemini/.gemini | ||
| expect(await exists(path.join(geminiRoot, ".gemini"))).toBe(false) | ||
| }) | ||
| test("handles empty bundles gracefully", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "gemini-empty-")) | ||
| const bundle: GeminiBundle = { | ||
| generatedSkills: [], | ||
| skillDirs: [], | ||
| commands: [], | ||
| } | ||
| await writeGeminiBundle(tempRoot, bundle) | ||
| expect(await exists(tempRoot)).toBe(true) | ||
| }) | ||
| test("backs up existing settings.json before overwrite", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "gemini-backup-")) | ||
| const geminiRoot = path.join(tempRoot, ".gemini") | ||
| await fs.mkdir(geminiRoot, { recursive: true }) | ||
| // Write existing settings.json | ||
| const settingsPath = path.join(geminiRoot, "settings.json") | ||
| await fs.writeFile(settingsPath, JSON.stringify({ mcpServers: { old: { command: "old-cmd" } } })) | ||
| const bundle: GeminiBundle = { | ||
| generatedSkills: [], | ||
| skillDirs: [], | ||
| commands: [], | ||
| mcpServers: { | ||
| newServer: { command: "new-cmd" }, | ||
| }, | ||
| } | ||
| await writeGeminiBundle(geminiRoot, bundle) | ||
| // New settings.json should have the new content | ||
| const newContent = JSON.parse(await fs.readFile(settingsPath, "utf8")) | ||
| expect(newContent.mcpServers.newServer.command).toBe("new-cmd") | ||
| // A backup file should exist | ||
| const files = await fs.readdir(geminiRoot) | ||
| const backupFiles = files.filter((f) => f.startsWith("settings.json.bak.")) | ||
| expect(backupFiles.length).toBeGreaterThanOrEqual(1) | ||
| }) | ||
| test("merges mcpServers into existing settings.json without clobbering other keys", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "gemini-merge-")) | ||
| const geminiRoot = path.join(tempRoot, ".gemini") | ||
| await fs.mkdir(geminiRoot, { recursive: true }) | ||
| // Write existing settings.json with other keys | ||
| const settingsPath = path.join(geminiRoot, "settings.json") | ||
| await fs.writeFile(settingsPath, JSON.stringify({ | ||
| model: "gemini-2.5-pro", | ||
| mcpServers: { old: { command: "old-cmd" } }, | ||
| })) | ||
| const bundle: GeminiBundle = { | ||
| generatedSkills: [], | ||
| skillDirs: [], | ||
| commands: [], | ||
| mcpServers: { | ||
| newServer: { command: "new-cmd" }, | ||
| }, | ||
| } | ||
| await writeGeminiBundle(geminiRoot, bundle) | ||
| const content = JSON.parse(await fs.readFile(settingsPath, "utf8")) | ||
| // Should preserve existing model key | ||
| expect(content.model).toBe("gemini-2.5-pro") | ||
| // Should preserve existing MCP server | ||
| expect(content.mcpServers.old.command).toBe("old-cmd") | ||
| // Should add new MCP server | ||
| expect(content.mcpServers.newServer.command).toBe("new-cmd") | ||
| }) | ||
| }) |
| import { describe, expect, test } from "bun:test" | ||
| import path from "path" | ||
| import { loadClaudePlugin } from "../src/parsers/claude" | ||
| import { convertClaudeToPi } from "../src/converters/claude-to-pi" | ||
| import { parseFrontmatter } from "../src/utils/frontmatter" | ||
| import type { ClaudePlugin } from "../src/types/claude" | ||
| const fixtureRoot = path.join(import.meta.dir, "fixtures", "sample-plugin") | ||
| describe("convertClaudeToPi", () => { | ||
| test("converts commands, skills, extensions, and MCPorter config", async () => { | ||
| const plugin = await loadClaudePlugin(fixtureRoot) | ||
| const bundle = convertClaudeToPi(plugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| // Prompts are normalized command names | ||
| expect(bundle.prompts.some((prompt) => prompt.name === "workflows-review")).toBe(true) | ||
| expect(bundle.prompts.some((prompt) => prompt.name === "plan_review")).toBe(true) | ||
| // Commands with disable-model-invocation are excluded | ||
| expect(bundle.prompts.some((prompt) => prompt.name === "deploy-docs")).toBe(false) | ||
| const workflowsReview = bundle.prompts.find((prompt) => prompt.name === "workflows-review") | ||
| expect(workflowsReview).toBeDefined() | ||
| const parsedPrompt = parseFrontmatter(workflowsReview!.content) | ||
| expect(parsedPrompt.data.description).toBe("Run a multi-agent review workflow") | ||
| // Existing skills are copied and agents are converted into generated Pi skills | ||
| expect(bundle.skillDirs.some((skill) => skill.name === "skill-one")).toBe(true) | ||
| expect(bundle.generatedSkills.some((skill) => skill.name === "repo-research-analyst")).toBe(true) | ||
| // Pi compatibility extension is included (with subagent + MCPorter tools) | ||
| const compatExtension = bundle.extensions.find((extension) => extension.name === "compound-engineering-compat.ts") | ||
| expect(compatExtension).toBeDefined() | ||
| expect(compatExtension!.content).toContain('name: "subagent"') | ||
| expect(compatExtension!.content).toContain('name: "mcporter_call"') | ||
| // Claude MCP config is translated to MCPorter config | ||
| expect(bundle.mcporterConfig?.mcpServers.context7?.baseUrl).toBe("https://mcp.context7.com/mcp") | ||
| expect(bundle.mcporterConfig?.mcpServers["local-tooling"]?.command).toBe("echo") | ||
| }) | ||
| test("transforms Task calls, AskUserQuestion, slash commands, and todo tool references", () => { | ||
| const plugin: ClaudePlugin = { | ||
| root: "/tmp/plugin", | ||
| manifest: { name: "fixture", version: "1.0.0" }, | ||
| agents: [], | ||
| commands: [ | ||
| { | ||
| name: "workflows:plan", | ||
| description: "Plan workflow", | ||
| body: [ | ||
| "Run these in order:", | ||
| "- Task repo-research-analyst(feature_description)", | ||
| "- Task learnings-researcher(feature_description)", | ||
| "Use AskUserQuestion tool for follow-up.", | ||
| "Then use /workflows:work and /prompts:deepen-plan.", | ||
| "Track progress with TodoWrite and TodoRead.", | ||
| ].join("\n"), | ||
| sourcePath: "/tmp/plugin/commands/plan.md", | ||
| }, | ||
| ], | ||
| skills: [], | ||
| hooks: undefined, | ||
| mcpServers: undefined, | ||
| } | ||
| const bundle = convertClaudeToPi(plugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| expect(bundle.prompts).toHaveLength(1) | ||
| const parsedPrompt = parseFrontmatter(bundle.prompts[0].content) | ||
| expect(parsedPrompt.body).toContain("Run subagent with agent=\"repo-research-analyst\" and task=\"feature_description\".") | ||
| expect(parsedPrompt.body).toContain("Run subagent with agent=\"learnings-researcher\" and task=\"feature_description\".") | ||
| expect(parsedPrompt.body).toContain("ask_user_question") | ||
| expect(parsedPrompt.body).toContain("/workflows-work") | ||
| expect(parsedPrompt.body).toContain("/deepen-plan") | ||
| expect(parsedPrompt.body).toContain("file-based todos (todos/ + /skill:file-todos)") | ||
| }) | ||
| test("appends MCPorter compatibility note when command references MCP", () => { | ||
| const plugin: ClaudePlugin = { | ||
| root: "/tmp/plugin", | ||
| manifest: { name: "fixture", version: "1.0.0" }, | ||
| agents: [], | ||
| commands: [ | ||
| { | ||
| name: "docs", | ||
| description: "Read MCP docs", | ||
| body: "Use MCP servers for docs lookup.", | ||
| sourcePath: "/tmp/plugin/commands/docs.md", | ||
| }, | ||
| ], | ||
| skills: [], | ||
| hooks: undefined, | ||
| mcpServers: undefined, | ||
| } | ||
| const bundle = convertClaudeToPi(plugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| const parsedPrompt = parseFrontmatter(bundle.prompts[0].content) | ||
| expect(parsedPrompt.body).toContain("Pi + MCPorter note") | ||
| expect(parsedPrompt.body).toContain("mcporter_call") | ||
| }) | ||
| }) |
| import { describe, expect, test } from "bun:test" | ||
| import { promises as fs } from "fs" | ||
| import path from "path" | ||
| import os from "os" | ||
| import { writePiBundle } from "../src/targets/pi" | ||
| import type { PiBundle } from "../src/types/pi" | ||
| async function exists(filePath: string): Promise<boolean> { | ||
| try { | ||
| await fs.access(filePath) | ||
| return true | ||
| } catch { | ||
| return false | ||
| } | ||
| } | ||
| describe("writePiBundle", () => { | ||
| test("writes prompts, skills, extensions, mcporter config, and AGENTS.md block", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "pi-writer-")) | ||
| const outputRoot = path.join(tempRoot, ".pi") | ||
| const bundle: PiBundle = { | ||
| prompts: [{ name: "workflows-plan", content: "Prompt content" }], | ||
| skillDirs: [ | ||
| { | ||
| name: "skill-one", | ||
| sourceDir: path.join(import.meta.dir, "fixtures", "sample-plugin", "skills", "skill-one"), | ||
| }, | ||
| ], | ||
| generatedSkills: [{ name: "repo-research-analyst", content: "---\nname: repo-research-analyst\n---\n\nBody" }], | ||
| extensions: [{ name: "compound-engineering-compat.ts", content: "export default function () {}" }], | ||
| mcporterConfig: { | ||
| mcpServers: { | ||
| context7: { baseUrl: "https://mcp.context7.com/mcp" }, | ||
| }, | ||
| }, | ||
| } | ||
| await writePiBundle(outputRoot, bundle) | ||
| expect(await exists(path.join(outputRoot, "prompts", "workflows-plan.md"))).toBe(true) | ||
| expect(await exists(path.join(outputRoot, "skills", "skill-one", "SKILL.md"))).toBe(true) | ||
| expect(await exists(path.join(outputRoot, "skills", "repo-research-analyst", "SKILL.md"))).toBe(true) | ||
| expect(await exists(path.join(outputRoot, "extensions", "compound-engineering-compat.ts"))).toBe(true) | ||
| expect(await exists(path.join(outputRoot, "compound-engineering", "mcporter.json"))).toBe(true) | ||
| const agentsPath = path.join(outputRoot, "AGENTS.md") | ||
| const agentsContent = await fs.readFile(agentsPath, "utf8") | ||
| expect(agentsContent).toContain("BEGIN COMPOUND PI TOOL MAP") | ||
| expect(agentsContent).toContain("MCPorter") | ||
| }) | ||
| test("writes to ~/.pi/agent style roots without nesting under .pi", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "pi-agent-root-")) | ||
| const outputRoot = path.join(tempRoot, "agent") | ||
| const bundle: PiBundle = { | ||
| prompts: [{ name: "workflows-work", content: "Prompt content" }], | ||
| skillDirs: [], | ||
| generatedSkills: [], | ||
| extensions: [], | ||
| } | ||
| await writePiBundle(outputRoot, bundle) | ||
| expect(await exists(path.join(outputRoot, "prompts", "workflows-work.md"))).toBe(true) | ||
| expect(await exists(path.join(outputRoot, ".pi"))).toBe(false) | ||
| }) | ||
| test("backs up existing mcporter config before overwriting", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "pi-backup-")) | ||
| const outputRoot = path.join(tempRoot, ".pi") | ||
| const configPath = path.join(outputRoot, "compound-engineering", "mcporter.json") | ||
| await fs.mkdir(path.dirname(configPath), { recursive: true }) | ||
| await fs.writeFile(configPath, JSON.stringify({ previous: true }, null, 2)) | ||
| const bundle: PiBundle = { | ||
| prompts: [], | ||
| skillDirs: [], | ||
| generatedSkills: [], | ||
| extensions: [], | ||
| mcporterConfig: { | ||
| mcpServers: { | ||
| linear: { baseUrl: "https://mcp.linear.app/mcp" }, | ||
| }, | ||
| }, | ||
| } | ||
| await writePiBundle(outputRoot, bundle) | ||
| const files = await fs.readdir(path.dirname(configPath)) | ||
| const backupFileName = files.find((file) => file.startsWith("mcporter.json.bak.")) | ||
| expect(backupFileName).toBeDefined() | ||
| const currentConfig = JSON.parse(await fs.readFile(configPath, "utf8")) as { mcpServers: Record<string, unknown> } | ||
| expect(currentConfig.mcpServers.linear).toBeDefined() | ||
| }) | ||
| }) |
| import { describe, expect, test } from "bun:test" | ||
| import { promises as fs } from "fs" | ||
| import path from "path" | ||
| import os from "os" | ||
| import { syncToCursor } from "../src/sync/cursor" | ||
| import type { ClaudeHomeConfig } from "../src/parsers/claude-home" | ||
| describe("syncToCursor", () => { | ||
| test("symlinks skills and writes mcp.json", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "sync-cursor-")) | ||
| const fixtureSkillDir = path.join(import.meta.dir, "fixtures", "sample-plugin", "skills", "skill-one") | ||
| const config: ClaudeHomeConfig = { | ||
| skills: [ | ||
| { | ||
| name: "skill-one", | ||
| sourceDir: fixtureSkillDir, | ||
| skillPath: path.join(fixtureSkillDir, "SKILL.md"), | ||
| }, | ||
| ], | ||
| mcpServers: { | ||
| context7: { url: "https://mcp.context7.com/mcp" }, | ||
| local: { command: "echo", args: ["hello"], env: { FOO: "bar" } }, | ||
| }, | ||
| } | ||
| await syncToCursor(config, tempRoot) | ||
| // Check skill symlink | ||
| const linkedSkillPath = path.join(tempRoot, "skills", "skill-one") | ||
| const linkedStat = await fs.lstat(linkedSkillPath) | ||
| expect(linkedStat.isSymbolicLink()).toBe(true) | ||
| // Check mcp.json | ||
| const mcpPath = path.join(tempRoot, "mcp.json") | ||
| const mcpConfig = JSON.parse(await fs.readFile(mcpPath, "utf8")) as { | ||
| mcpServers: Record<string, { url?: string; command?: string; args?: string[]; env?: Record<string, string> }> | ||
| } | ||
| expect(mcpConfig.mcpServers.context7?.url).toBe("https://mcp.context7.com/mcp") | ||
| expect(mcpConfig.mcpServers.local?.command).toBe("echo") | ||
| expect(mcpConfig.mcpServers.local?.args).toEqual(["hello"]) | ||
| expect(mcpConfig.mcpServers.local?.env).toEqual({ FOO: "bar" }) | ||
| }) | ||
| test("merges existing mcp.json", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "sync-cursor-merge-")) | ||
| const mcpPath = path.join(tempRoot, "mcp.json") | ||
| await fs.writeFile( | ||
| mcpPath, | ||
| JSON.stringify({ mcpServers: { existing: { command: "node", args: ["server.js"] } } }, null, 2), | ||
| ) | ||
| const config: ClaudeHomeConfig = { | ||
| skills: [], | ||
| mcpServers: { | ||
| context7: { url: "https://mcp.context7.com/mcp" }, | ||
| }, | ||
| } | ||
| await syncToCursor(config, tempRoot) | ||
| const merged = JSON.parse(await fs.readFile(mcpPath, "utf8")) as { | ||
| mcpServers: Record<string, { command?: string; url?: string }> | ||
| } | ||
| expect(merged.mcpServers.existing?.command).toBe("node") | ||
| expect(merged.mcpServers.context7?.url).toBe("https://mcp.context7.com/mcp") | ||
| }) | ||
| test("does not write mcp.json when no MCP servers", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "sync-cursor-nomcp-")) | ||
| const fixtureSkillDir = path.join(import.meta.dir, "fixtures", "sample-plugin", "skills", "skill-one") | ||
| const config: ClaudeHomeConfig = { | ||
| skills: [ | ||
| { | ||
| name: "skill-one", | ||
| sourceDir: fixtureSkillDir, | ||
| skillPath: path.join(fixtureSkillDir, "SKILL.md"), | ||
| }, | ||
| ], | ||
| mcpServers: {}, | ||
| } | ||
| await syncToCursor(config, tempRoot) | ||
| const mcpExists = await fs.access(path.join(tempRoot, "mcp.json")).then(() => true).catch(() => false) | ||
| expect(mcpExists).toBe(false) | ||
| }) | ||
| }) |
| import { describe, expect, test } from "bun:test" | ||
| import { promises as fs } from "fs" | ||
| import path from "path" | ||
| import os from "os" | ||
| import { syncToDroid } from "../src/sync/droid" | ||
| import type { ClaudeHomeConfig } from "../src/parsers/claude-home" | ||
| describe("syncToDroid", () => { | ||
| test("symlinks skills to factory skills dir", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "sync-droid-")) | ||
| const fixtureSkillDir = path.join(import.meta.dir, "fixtures", "sample-plugin", "skills", "skill-one") | ||
| const config: ClaudeHomeConfig = { | ||
| skills: [ | ||
| { | ||
| name: "skill-one", | ||
| sourceDir: fixtureSkillDir, | ||
| skillPath: path.join(fixtureSkillDir, "SKILL.md"), | ||
| }, | ||
| ], | ||
| mcpServers: { | ||
| context7: { url: "https://mcp.context7.com/mcp" }, | ||
| }, | ||
| } | ||
| await syncToDroid(config, tempRoot) | ||
| const linkedSkillPath = path.join(tempRoot, "skills", "skill-one") | ||
| const linkedStat = await fs.lstat(linkedSkillPath) | ||
| expect(linkedStat.isSymbolicLink()).toBe(true) | ||
| // Droid does not write MCP config | ||
| const mcpExists = await fs.access(path.join(tempRoot, "mcp.json")).then(() => true).catch(() => false) | ||
| expect(mcpExists).toBe(false) | ||
| }) | ||
| test("skips skills with invalid names", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "sync-droid-invalid-")) | ||
| const fixtureSkillDir = path.join(import.meta.dir, "fixtures", "sample-plugin", "skills", "skill-one") | ||
| const config: ClaudeHomeConfig = { | ||
| skills: [ | ||
| { | ||
| name: "../escape", | ||
| sourceDir: fixtureSkillDir, | ||
| skillPath: path.join(fixtureSkillDir, "SKILL.md"), | ||
| }, | ||
| ], | ||
| mcpServers: {}, | ||
| } | ||
| await syncToDroid(config, tempRoot) | ||
| const entries = await fs.readdir(path.join(tempRoot, "skills")) | ||
| expect(entries).toHaveLength(0) | ||
| }) | ||
| }) |
| import { describe, expect, test } from "bun:test" | ||
| import { promises as fs } from "fs" | ||
| import path from "path" | ||
| import os from "os" | ||
| import { syncToPi } from "../src/sync/pi" | ||
| import type { ClaudeHomeConfig } from "../src/parsers/claude-home" | ||
| describe("syncToPi", () => { | ||
| test("symlinks skills and writes MCPorter config", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "sync-pi-")) | ||
| const fixtureSkillDir = path.join(import.meta.dir, "fixtures", "sample-plugin", "skills", "skill-one") | ||
| const config: ClaudeHomeConfig = { | ||
| skills: [ | ||
| { | ||
| name: "skill-one", | ||
| sourceDir: fixtureSkillDir, | ||
| skillPath: path.join(fixtureSkillDir, "SKILL.md"), | ||
| }, | ||
| ], | ||
| mcpServers: { | ||
| context7: { url: "https://mcp.context7.com/mcp" }, | ||
| local: { command: "echo", args: ["hello"] }, | ||
| }, | ||
| } | ||
| await syncToPi(config, tempRoot) | ||
| const linkedSkillPath = path.join(tempRoot, "skills", "skill-one") | ||
| const linkedStat = await fs.lstat(linkedSkillPath) | ||
| expect(linkedStat.isSymbolicLink()).toBe(true) | ||
| const mcporterPath = path.join(tempRoot, "compound-engineering", "mcporter.json") | ||
| const mcporterConfig = JSON.parse(await fs.readFile(mcporterPath, "utf8")) as { | ||
| mcpServers: Record<string, { baseUrl?: string; command?: string }> | ||
| } | ||
| expect(mcporterConfig.mcpServers.context7?.baseUrl).toBe("https://mcp.context7.com/mcp") | ||
| expect(mcporterConfig.mcpServers.local?.command).toBe("echo") | ||
| }) | ||
| test("merges existing MCPorter config", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "sync-pi-merge-")) | ||
| const mcporterPath = path.join(tempRoot, "compound-engineering", "mcporter.json") | ||
| await fs.mkdir(path.dirname(mcporterPath), { recursive: true }) | ||
| await fs.writeFile( | ||
| mcporterPath, | ||
| JSON.stringify({ mcpServers: { existing: { baseUrl: "https://example.com/mcp" } } }, null, 2), | ||
| ) | ||
| const config: ClaudeHomeConfig = { | ||
| skills: [], | ||
| mcpServers: { | ||
| context7: { url: "https://mcp.context7.com/mcp" }, | ||
| }, | ||
| } | ||
| await syncToPi(config, tempRoot) | ||
| const merged = JSON.parse(await fs.readFile(mcporterPath, "utf8")) as { | ||
| mcpServers: Record<string, { baseUrl?: string }> | ||
| } | ||
| expect(merged.mcpServers.existing?.baseUrl).toBe("https://example.com/mcp") | ||
| expect(merged.mcpServers.context7?.baseUrl).toBe("https://mcp.context7.com/mcp") | ||
| }) | ||
| }) |
@@ -15,3 +15,3 @@ { | ||
| "description": "AI-powered development tools that get smarter with every use. Make each unit of engineering work easier than the last. Includes 29 specialized agents, 22 commands, and 19 skills.", | ||
| "version": "2.33.0", | ||
| "version": "2.34.0", | ||
| "author": { | ||
@@ -18,0 +18,0 @@ "name": "Kieran Klaassen", |
+1
-1
| { | ||
| "name": "@every-env/compound-plugin", | ||
| "version": "0.5.2", | ||
| "version": "0.7.0", | ||
| "type": "module", | ||
@@ -5,0 +5,0 @@ "private": false, |
| { | ||
| "name": "compound-engineering", | ||
| "version": "2.33.0", | ||
| "version": "2.34.0", | ||
| "description": "AI-powered development tools. 29 agents, 22 commands, 19 skills, 1 MCP server for code review, research, design, and workflow automation.", | ||
@@ -5,0 +5,0 @@ "author": { |
@@ -8,2 +8,19 @@ # Changelog | ||
| ## [2.34.0] - 2026-02-14 | ||
| ### Added | ||
| - **Gemini CLI target** — New converter target for [Gemini CLI](https://github.com/google-gemini/gemini-cli). Install with `--to gemini` to convert agents to `.gemini/skills/*/SKILL.md`, commands to `.gemini/commands/*.toml` (TOML format with `description` + `prompt`), and MCP servers to `.gemini/settings.json`. Skills pass through unchanged (identical SKILL.md standard). Namespaced commands create directory structure (`workflows:plan` → `commands/workflows/plan.toml`). 29 new tests. ([#190](https://github.com/EveryInc/compound-engineering-plugin/pull/190)) | ||
| --- | ||
| ## [2.33.1] - 2026-02-13 | ||
| ### Changed | ||
| - **`/workflows:plan` command** - All plan templates now include `status: active` in YAML frontmatter. Plans are created with `status: active` and marked `status: completed` when work finishes. | ||
| - **`/workflows:work` command** - Phase 4 now updates plan frontmatter from `status: active` to `status: completed` after shipping. Agents can grep for status to distinguish current vs historical plans. | ||
| --- | ||
| ## [2.33.0] - 2026-02-12 | ||
@@ -10,0 +27,0 @@ |
@@ -181,2 +181,3 @@ --- | ||
| type: [feat|fix|refactor] | ||
| status: active | ||
| date: YYYY-MM-DD | ||
@@ -234,2 +235,3 @@ --- | ||
| type: [feat|fix|refactor] | ||
| status: active | ||
| date: YYYY-MM-DD | ||
@@ -299,2 +301,3 @@ --- | ||
| type: [feat|fix|refactor] | ||
| status: active | ||
| date: YYYY-MM-DD | ||
@@ -301,0 +304,0 @@ --- |
@@ -300,3 +300,10 @@ --- | ||
| 4. **Notify User** | ||
| 4. **Update Plan Status** | ||
| If the input document has YAML frontmatter with a `status` field, update it to `completed`: | ||
| ``` | ||
| status: active → status: completed | ||
| ``` | ||
| 5. **Notify User** | ||
| - Summarize what was completed | ||
@@ -303,0 +310,0 @@ - Link to PR |
+20
-3
@@ -15,5 +15,5 @@ # Compound Marketplace | ||
| ## OpenCode, Codex, Droid & Cursor (experimental) Install | ||
| ## OpenCode, Codex, Droid, Cursor, Pi & Gemini (experimental) Install | ||
| This repo includes a Bun/TypeScript CLI that converts Claude Code plugins to OpenCode, Codex, Factory Droid, and Cursor. | ||
| This repo includes a Bun/TypeScript CLI that converts Claude Code plugins to OpenCode, Codex, Factory Droid, Cursor, Pi, and Gemini CLI. | ||
@@ -32,2 +32,8 @@ ```bash | ||
| bunx @every-env/compound-plugin install compound-engineering --to cursor | ||
| # convert to Pi format | ||
| bunx @every-env/compound-plugin install compound-engineering --to pi | ||
| # convert to Gemini CLI format | ||
| bunx @every-env/compound-plugin install compound-engineering --to gemini | ||
| ``` | ||
@@ -45,2 +51,4 @@ | ||
| Cursor output is written to `.cursor/` with rules (`.mdc`), commands, skills, and `mcp.json`. Agents become "Agent Requested" rules (`alwaysApply: false`) so Cursor's AI activates them on demand. Works with both the Cursor IDE and Cursor CLI (`cursor-agent`) — they share the same `.cursor/` config directory. | ||
| Pi output is written to `~/.pi/agent/` by default with prompts, skills, extensions, and `compound-engineering/mcporter.json` for MCPorter interoperability. | ||
| Gemini output is written to `.gemini/` with skills (from agents), commands (`.toml`), and `settings.json` (MCP servers). Namespaced commands create directory structure (`workflows:plan` → `commands/workflows/plan.toml`). Skills use the identical SKILL.md standard and pass through unchanged. | ||
@@ -51,3 +59,3 @@ All provider targets are experimental and may change as the formats evolve. | ||
| Sync your personal Claude Code config (`~/.claude/`) to OpenCode or Codex: | ||
| Sync your personal Claude Code config (`~/.claude/`) to other AI coding tools: | ||
@@ -60,2 +68,11 @@ ```bash | ||
| bunx @every-env/compound-plugin sync --target codex | ||
| # Sync to Pi | ||
| bunx @every-env/compound-plugin sync --target pi | ||
| # Sync to Droid (skills only) | ||
| bunx @every-env/compound-plugin sync --target droid | ||
| # Sync to Cursor (skills + MCP servers) | ||
| bunx @every-env/compound-plugin sync --target cursor | ||
| ``` | ||
@@ -62,0 +79,0 @@ |
+14
-25
@@ -8,2 +8,3 @@ import { defineCommand } from "citty" | ||
| import { ensureCodexAgentsFile } from "../utils/codex-agents" | ||
| import { expandHome, resolveTargetHome } from "../utils/resolve-home" | ||
@@ -26,3 +27,3 @@ const permissionModes: PermissionMode[] = ["none", "broad", "from-commands"] | ||
| default: "opencode", | ||
| description: "Target format (opencode | codex | droid | cursor)", | ||
| description: "Target format (opencode | codex | droid | cursor | pi | gemini)", | ||
| }, | ||
@@ -39,2 +40,7 @@ output: { | ||
| }, | ||
| piHome: { | ||
| type: "string", | ||
| alias: "pi-home", | ||
| description: "Write Pi output to this Pi root (ex: ~/.pi/agent or ./.pi)", | ||
| }, | ||
| also: { | ||
@@ -78,3 +84,4 @@ type: "string", | ||
| const outputRoot = resolveOutputRoot(args.output) | ||
| const codexHome = resolveCodexRoot(args.codexHome) | ||
| const codexHome = resolveTargetHome(args.codexHome, path.join(os.homedir(), ".codex")) | ||
| const piHome = resolveTargetHome(args.piHome, path.join(os.homedir(), ".pi", "agent")) | ||
@@ -87,3 +94,3 @@ const options = { | ||
| const primaryOutputRoot = resolveTargetOutputRoot(targetName, outputRoot, codexHome) | ||
| const primaryOutputRoot = resolveTargetOutputRoot(targetName, outputRoot, codexHome, piHome) | ||
| const bundle = target.convert(plugin, options) | ||
@@ -114,3 +121,3 @@ if (!bundle) { | ||
| } | ||
| const extraRoot = resolveTargetOutputRoot(extra, path.join(outputRoot, extra), codexHome) | ||
| const extraRoot = resolveTargetOutputRoot(extra, path.join(outputRoot, extra), codexHome, piHome) | ||
| await handler.write(extraRoot, extraBundle) | ||
@@ -134,22 +141,2 @@ console.log(`Converted ${plugin.manifest.name} to ${extra} at ${extraRoot}`) | ||
| function resolveCodexHome(value: unknown): string | null { | ||
| if (!value) return null | ||
| const raw = String(value).trim() | ||
| if (!raw) return null | ||
| const expanded = expandHome(raw) | ||
| return path.resolve(expanded) | ||
| } | ||
| function resolveCodexRoot(value: unknown): string { | ||
| return resolveCodexHome(value) ?? path.join(os.homedir(), ".codex") | ||
| } | ||
| function expandHome(value: string): string { | ||
| if (value === "~") return os.homedir() | ||
| if (value.startsWith(`~${path.sep}`)) { | ||
| return path.join(os.homedir(), value.slice(2)) | ||
| } | ||
| return value | ||
| } | ||
| function resolveOutputRoot(value: unknown): string { | ||
@@ -163,7 +150,9 @@ if (value && String(value).trim()) { | ||
| function resolveTargetOutputRoot(targetName: string, outputRoot: string, codexHome: string): string { | ||
| function resolveTargetOutputRoot(targetName: string, outputRoot: string, codexHome: string, piHome: string): string { | ||
| if (targetName === "codex") return codexHome | ||
| if (targetName === "pi") return piHome | ||
| if (targetName === "droid") return path.join(os.homedir(), ".factory") | ||
| if (targetName === "cursor") return path.join(outputRoot, ".cursor") | ||
| if (targetName === "gemini") return path.join(outputRoot, ".gemini") | ||
| return outputRoot | ||
| } |
+23
-25
@@ -10,2 +10,3 @@ import { defineCommand } from "citty" | ||
| import { ensureCodexAgentsFile } from "../utils/codex-agents" | ||
| import { expandHome, resolveTargetHome } from "../utils/resolve-home" | ||
@@ -28,3 +29,3 @@ const permissionModes: PermissionMode[] = ["none", "broad", "from-commands"] | ||
| default: "opencode", | ||
| description: "Target format (opencode | codex | droid | cursor)", | ||
| description: "Target format (opencode | codex | droid | cursor | pi | gemini)", | ||
| }, | ||
@@ -41,2 +42,7 @@ output: { | ||
| }, | ||
| piHome: { | ||
| type: "string", | ||
| alias: "pi-home", | ||
| description: "Write Pi output to this Pi root (ex: ~/.pi/agent or ./.pi)", | ||
| }, | ||
| also: { | ||
@@ -82,3 +88,4 @@ type: "string", | ||
| const outputRoot = resolveOutputRoot(args.output) | ||
| const codexHome = resolveCodexRoot(args.codexHome) | ||
| const codexHome = resolveTargetHome(args.codexHome, path.join(os.homedir(), ".codex")) | ||
| const piHome = resolveTargetHome(args.piHome, path.join(os.homedir(), ".pi", "agent")) | ||
@@ -96,3 +103,3 @@ const options = { | ||
| const hasExplicitOutput = Boolean(args.output && String(args.output).trim()) | ||
| const primaryOutputRoot = resolveTargetOutputRoot(targetName, outputRoot, codexHome, hasExplicitOutput) | ||
| const primaryOutputRoot = resolveTargetOutputRoot(targetName, outputRoot, codexHome, piHome, hasExplicitOutput) | ||
| await target.write(primaryOutputRoot, bundle) | ||
@@ -118,3 +125,3 @@ console.log(`Installed ${plugin.manifest.name} to ${primaryOutputRoot}`) | ||
| } | ||
| const extraRoot = resolveTargetOutputRoot(extra, path.join(outputRoot, extra), codexHome, hasExplicitOutput) | ||
| const extraRoot = resolveTargetOutputRoot(extra, path.join(outputRoot, extra), codexHome, piHome, hasExplicitOutput) | ||
| await handler.write(extraRoot, extraBundle) | ||
@@ -161,22 +168,2 @@ console.log(`Installed ${plugin.manifest.name} to ${extraRoot}`) | ||
| function resolveCodexHome(value: unknown): string | null { | ||
| if (!value) return null | ||
| const raw = String(value).trim() | ||
| if (!raw) return null | ||
| const expanded = expandHome(raw) | ||
| return path.resolve(expanded) | ||
| } | ||
| function resolveCodexRoot(value: unknown): string { | ||
| return resolveCodexHome(value) ?? path.join(os.homedir(), ".codex") | ||
| } | ||
| function expandHome(value: string): string { | ||
| if (value === "~") return os.homedir() | ||
| if (value.startsWith(`~${path.sep}`)) { | ||
| return path.join(os.homedir(), value.slice(2)) | ||
| } | ||
| return value | ||
| } | ||
| function resolveOutputRoot(value: unknown): string { | ||
@@ -192,4 +179,11 @@ if (value && String(value).trim()) { | ||
| function resolveTargetOutputRoot(targetName: string, outputRoot: string, codexHome: string, hasExplicitOutput: boolean): string { | ||
| function resolveTargetOutputRoot( | ||
| targetName: string, | ||
| outputRoot: string, | ||
| codexHome: string, | ||
| piHome: string, | ||
| hasExplicitOutput: boolean, | ||
| ): string { | ||
| if (targetName === "codex") return codexHome | ||
| if (targetName === "pi") return piHome | ||
| if (targetName === "droid") return path.join(os.homedir(), ".factory") | ||
@@ -200,2 +194,6 @@ if (targetName === "cursor") { | ||
| } | ||
| if (targetName === "gemini") { | ||
| const base = hasExplicitOutput ? outputRoot : process.cwd() | ||
| return path.join(base, ".gemini") | ||
| } | ||
| return outputRoot | ||
@@ -202,0 +200,0 @@ } |
+44
-21
@@ -7,5 +7,12 @@ import { defineCommand } from "citty" | ||
| import { syncToCodex } from "../sync/codex" | ||
| import { syncToPi } from "../sync/pi" | ||
| import { syncToDroid } from "../sync/droid" | ||
| import { syncToCursor } from "../sync/cursor" | ||
| import { expandHome } from "../utils/resolve-home" | ||
| function isValidTarget(value: string): value is "opencode" | "codex" { | ||
| return value === "opencode" || value === "codex" | ||
| const validTargets = ["opencode", "codex", "pi", "droid", "cursor"] as const | ||
| type SyncTarget = (typeof validTargets)[number] | ||
| function isValidTarget(value: string): value is SyncTarget { | ||
| return (validTargets as readonly string[]).includes(value) | ||
| } | ||
@@ -27,6 +34,21 @@ | ||
| function resolveOutputRoot(target: SyncTarget): string { | ||
| switch (target) { | ||
| case "opencode": | ||
| return path.join(os.homedir(), ".config", "opencode") | ||
| case "codex": | ||
| return path.join(os.homedir(), ".codex") | ||
| case "pi": | ||
| return path.join(os.homedir(), ".pi", "agent") | ||
| case "droid": | ||
| return path.join(os.homedir(), ".factory") | ||
| case "cursor": | ||
| return path.join(process.cwd(), ".cursor") | ||
| } | ||
| } | ||
| export default defineCommand({ | ||
| meta: { | ||
| name: "sync", | ||
| description: "Sync Claude Code config (~/.claude/) to OpenCode or Codex", | ||
| description: "Sync Claude Code config (~/.claude/) to OpenCode, Codex, Pi, Droid, or Cursor", | ||
| }, | ||
@@ -37,3 +59,3 @@ args: { | ||
| required: true, | ||
| description: "Target: opencode | codex", | ||
| description: "Target: opencode | codex | pi | droid | cursor", | ||
| }, | ||
@@ -48,3 +70,3 @@ claudeHome: { | ||
| if (!isValidTarget(args.target)) { | ||
| throw new Error(`Unknown target: ${args.target}. Use 'opencode' or 'codex'.`) | ||
| throw new Error(`Unknown target: ${args.target}. Use one of: ${validTargets.join(", ")}`) | ||
| } | ||
@@ -67,11 +89,20 @@ | ||
| const outputRoot = | ||
| args.target === "opencode" | ||
| ? path.join(os.homedir(), ".config", "opencode") | ||
| : path.join(os.homedir(), ".codex") | ||
| const outputRoot = resolveOutputRoot(args.target) | ||
| if (args.target === "opencode") { | ||
| await syncToOpenCode(config, outputRoot) | ||
| } else { | ||
| await syncToCodex(config, outputRoot) | ||
| switch (args.target) { | ||
| case "opencode": | ||
| await syncToOpenCode(config, outputRoot) | ||
| break | ||
| case "codex": | ||
| await syncToCodex(config, outputRoot) | ||
| break | ||
| case "pi": | ||
| await syncToPi(config, outputRoot) | ||
| break | ||
| case "droid": | ||
| await syncToDroid(config, outputRoot) | ||
| break | ||
| case "cursor": | ||
| await syncToCursor(config, outputRoot) | ||
| break | ||
| } | ||
@@ -82,9 +113,1 @@ | ||
| }) | ||
| function expandHome(value: string): string { | ||
| if (value === "~") return os.homedir() | ||
| if (value.startsWith(`~${path.sep}`)) { | ||
| return path.join(os.homedir(), value.slice(2)) | ||
| } | ||
| return value | ||
| } |
@@ -253,4 +253,20 @@ import { formatFrontmatter } from "../utils/frontmatter" | ||
| // Bare Claude family aliases used in Claude Code (e.g. `model: haiku`). | ||
| // Update these when new model generations are released. | ||
| const CLAUDE_FAMILY_ALIASES: Record<string, string> = { | ||
| haiku: "claude-haiku-4-5", | ||
| sonnet: "claude-sonnet-4-5", | ||
| opus: "claude-opus-4-6", | ||
| } | ||
| function normalizeModel(model: string): string { | ||
| if (model.includes("/")) return model | ||
| if (CLAUDE_FAMILY_ALIASES[model]) { | ||
| const resolved = `anthropic/${CLAUDE_FAMILY_ALIASES[model]}` | ||
| console.warn( | ||
| `Warning: bare model alias "${model}" mapped to "${resolved}". ` + | ||
| `Update CLAUDE_FAMILY_ALIASES if a newer version is available.`, | ||
| ) | ||
| return resolved | ||
| } | ||
| if (/^claude-/.test(model)) return `anthropic/${model}` | ||
@@ -257,0 +273,0 @@ if (/^(gpt-|o1-|o3-)/.test(model)) return `openai/${model}` |
+18
-0
@@ -6,2 +6,4 @@ import type { ClaudePlugin } from "../types/claude" | ||
| import type { CursorBundle } from "../types/cursor" | ||
| import type { PiBundle } from "../types/pi" | ||
| import type { GeminiBundle } from "../types/gemini" | ||
| import { convertClaudeToOpenCode, type ClaudeToOpenCodeOptions } from "../converters/claude-to-opencode" | ||
@@ -11,2 +13,4 @@ import { convertClaudeToCodex } from "../converters/claude-to-codex" | ||
| import { convertClaudeToCursor } from "../converters/claude-to-cursor" | ||
| import { convertClaudeToPi } from "../converters/claude-to-pi" | ||
| import { convertClaudeToGemini } from "../converters/claude-to-gemini" | ||
| import { writeOpenCodeBundle } from "./opencode" | ||
@@ -16,2 +20,4 @@ import { writeCodexBundle } from "./codex" | ||
| import { writeCursorBundle } from "./cursor" | ||
| import { writePiBundle } from "./pi" | ||
| import { writeGeminiBundle } from "./gemini" | ||
@@ -50,2 +56,14 @@ export type TargetHandler<TBundle = unknown> = { | ||
| }, | ||
| pi: { | ||
| name: "pi", | ||
| implemented: true, | ||
| convert: convertClaudeToPi as TargetHandler<PiBundle>["convert"], | ||
| write: writePiBundle as TargetHandler<PiBundle>["write"], | ||
| }, | ||
| gemini: { | ||
| name: "gemini", | ||
| implemented: true, | ||
| convert: convertClaudeToGemini as TargetHandler<GeminiBundle>["convert"], | ||
| write: writeGeminiBundle as TargetHandler<GeminiBundle>["write"], | ||
| }, | ||
| } |
+76
-0
@@ -353,2 +353,78 @@ import { describe, expect, test } from "bun:test" | ||
| }) | ||
| test("convert supports --pi-home for pi output", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "cli-pi-home-")) | ||
| const piRoot = path.join(tempRoot, ".pi") | ||
| const fixtureRoot = path.join(import.meta.dir, "fixtures", "sample-plugin") | ||
| const proc = Bun.spawn([ | ||
| "bun", | ||
| "run", | ||
| "src/index.ts", | ||
| "convert", | ||
| fixtureRoot, | ||
| "--to", | ||
| "pi", | ||
| "--pi-home", | ||
| piRoot, | ||
| ], { | ||
| cwd: path.join(import.meta.dir, ".."), | ||
| stdout: "pipe", | ||
| stderr: "pipe", | ||
| }) | ||
| const exitCode = await proc.exited | ||
| const stdout = await new Response(proc.stdout).text() | ||
| const stderr = await new Response(proc.stderr).text() | ||
| if (exitCode !== 0) { | ||
| throw new Error(`CLI failed (exit ${exitCode}).\nstdout: ${stdout}\nstderr: ${stderr}`) | ||
| } | ||
| expect(stdout).toContain("Converted compound-engineering") | ||
| expect(stdout).toContain(piRoot) | ||
| expect(await exists(path.join(piRoot, "prompts", "workflows-review.md"))).toBe(true) | ||
| expect(await exists(path.join(piRoot, "skills", "repo-research-analyst", "SKILL.md"))).toBe(true) | ||
| expect(await exists(path.join(piRoot, "extensions", "compound-engineering-compat.ts"))).toBe(true) | ||
| expect(await exists(path.join(piRoot, "compound-engineering", "mcporter.json"))).toBe(true) | ||
| }) | ||
| test("install supports --also with pi output", async () => { | ||
| const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), "cli-also-pi-")) | ||
| const fixtureRoot = path.join(import.meta.dir, "fixtures", "sample-plugin") | ||
| const piRoot = path.join(tempRoot, ".pi") | ||
| const proc = Bun.spawn([ | ||
| "bun", | ||
| "run", | ||
| "src/index.ts", | ||
| "install", | ||
| fixtureRoot, | ||
| "--to", | ||
| "opencode", | ||
| "--also", | ||
| "pi", | ||
| "--pi-home", | ||
| piRoot, | ||
| "--output", | ||
| tempRoot, | ||
| ], { | ||
| cwd: path.join(import.meta.dir, ".."), | ||
| stdout: "pipe", | ||
| stderr: "pipe", | ||
| }) | ||
| const exitCode = await proc.exited | ||
| const stdout = await new Response(proc.stdout).text() | ||
| const stderr = await new Response(proc.stderr).text() | ||
| if (exitCode !== 0) { | ||
| throw new Error(`CLI failed (exit ${exitCode}).\nstdout: ${stdout}\nstderr: ${stderr}`) | ||
| } | ||
| expect(stdout).toContain("Installed compound-engineering") | ||
| expect(stdout).toContain(piRoot) | ||
| expect(await exists(path.join(piRoot, "prompts", "workflows-review.md"))).toBe(true) | ||
| expect(await exists(path.join(piRoot, "extensions", "compound-engineering-compat.ts"))).toBe(true) | ||
| }) | ||
| }) |
@@ -78,2 +78,31 @@ import { describe, expect, test } from "bun:test" | ||
| test("resolves bare Claude model aliases to full IDs", () => { | ||
| const plugin: ClaudePlugin = { | ||
| root: "/tmp/plugin", | ||
| manifest: { name: "fixture", version: "1.0.0" }, | ||
| agents: [ | ||
| { | ||
| name: "cheap-agent", | ||
| description: "Agent using bare alias", | ||
| body: "Test agent.", | ||
| sourcePath: "/tmp/plugin/agents/cheap-agent.md", | ||
| model: "haiku", | ||
| }, | ||
| ], | ||
| commands: [], | ||
| skills: [], | ||
| } | ||
| const bundle = convertClaudeToOpenCode(plugin, { | ||
| agentMode: "subagent", | ||
| inferTemperature: false, | ||
| permissions: "none", | ||
| }) | ||
| const agent = bundle.agents.find((a) => a.name === "cheap-agent") | ||
| expect(agent).toBeDefined() | ||
| const parsed = parseFrontmatter(agent!.content) | ||
| expect(parsed.data.model).toBe("anthropic/claude-haiku-4-5") | ||
| }) | ||
| test("converts hooks into plugin file", async () => { | ||
@@ -80,0 +109,0 @@ const plugin = await loadClaudePlugin(fixtureRoot) |
Supply chain risk findings for this release:
- Network access: this module accesses the network. Found 1 instance in 1 package.
- Environment variable access: package accesses environment variables, which may be a sign of credential stuffing or data theft. Found 2 instances in 1 package.
- Filesystem access: accesses the file system, and could potentially read sensitive data. Found 2 instances in 1 package.
- Long strings: contains long string literals, which may be a sign of obfuscated or packed code. Found 1 instance in 1 package.
- URL strings: package contains fragments of external URLs or IP addresses, which the package may be accessing at runtime. Found 1 instance in 1 package.