
@github/copilot-sdk
TypeScript SDK for programmatic control of GitHub Copilot CLI via JSON-RPC.
Note: This SDK is in technical preview and may change in breaking ways.
npm install @github/copilot-sdk
import { CopilotClient } from "@github/copilot-sdk";
// Create and start client
const client = new CopilotClient();
await client.start();
// Create a session
const session = await client.createSession({
model: "gpt-5",
});
// Wait for response using typed event handlers
const done = new Promise<void>((resolve) => {
session.on("assistant.message", (event) => {
console.log(event.data.content);
});
session.on("session.idle", () => {
resolve();
});
});
// Send a message and wait for completion
await session.send({ prompt: "What is 2+2?" });
await done;
// Clean up
await session.destroy();
await client.stop();
new CopilotClient(options?: CopilotClientOptions)
Options:
- cliPath?: string - Path to CLI executable (default: "copilot" from PATH)
- cliArgs?: string[] - Extra arguments prepended before SDK-managed flags (e.g. ["./dist-cli/index.js"] when using node)
- cliUrl?: string - URL of an existing CLI server to connect to (e.g., "localhost:8080", "http://127.0.0.1:9000", or just "8080"). When provided, the client will not spawn a CLI process.
- port?: number - Server port (default: 0 for random)
- useStdio?: boolean - Use stdio transport instead of TCP (default: true)
- logLevel?: string - Log level (default: "info")
- autoStart?: boolean - Auto-start server (default: true)
- autoRestart?: boolean - Auto-restart on crash (default: true)
- githubToken?: string - GitHub token for authentication. When provided, takes priority over other auth methods.
- useLoggedInUser?: boolean - Whether to use the logged-in user for authentication (default: true, but false when githubToken is provided). Cannot be used with cliUrl.

start(): Promise<void>
Start the CLI server and establish connection.
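For example, a minimal construction sketch using a few of the options above (the path and values are illustrative, not required defaults):
// Spawn the CLI from an explicit path with verbose logging and no auto-restart.
const client = new CopilotClient({
  cliPath: "/usr/local/bin/copilot", // illustrative; defaults to "copilot" on PATH
  logLevel: "debug",
  autoRestart: false,
});
await client.start();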
stop(): Promise<Error[]>
Stop the server and close all sessions. Returns a list of any errors encountered during cleanup.
forceStop(): Promise<void>
Force stop the CLI server without graceful cleanup. Use when stop() takes too long.
createSession(config?: SessionConfig): Promise<CopilotSession>
Create a new conversation session.
Config:
- sessionId?: string - Custom session ID
- model?: string - Model to use ("gpt-5", "claude-sonnet-4.5", etc.). Required when using a custom provider.
- tools?: Tool[] - Custom tools exposed to the CLI
- systemMessage?: SystemMessageConfig - System message customization (see below)
- infiniteSessions?: InfiniteSessionConfig - Configure automatic context compaction (see below)
- provider?: ProviderConfig - Custom API provider configuration (BYOK - Bring Your Own Key). See Custom Providers section.
- onUserInputRequest?: UserInputHandler - Handler for user input requests from the agent. Enables the ask_user tool. See User Input Requests section.
- hooks?: SessionHooks - Hook handlers for session lifecycle events. See Session Hooks section.

resumeSession(sessionId: string, config?: ResumeSessionConfig): Promise<CopilotSession>
Resume an existing session. Returns the session with workspacePath populated if infinite sessions were enabled.
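A short sketch of resuming a session by ID (the ID is a placeholder):
const resumed = await client.resumeSession("my-session-id");
console.log(resumed.workspacePath); // set if infinite sessions were enabled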
ping(message?: string): Promise<{ message: string; timestamp: number }>
Ping the server to check connectivity.

getState(): ConnectionState
Get current connection state.

listSessions(): Promise<SessionMetadata[]>
List all available sessions.

deleteSession(sessionId: string): Promise<void>
Delete a session and its data from disk.
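A sketch combining these calls; note the assumption that SessionMetadata exposes the session's ID as sessionId, which the SDK's session APIs suggest but this page does not spell out:
const sessions = await client.listSessions();
for (const meta of sessions) {
  // Assumption: SessionMetadata carries the session's ID as `sessionId`.
  console.log(meta.sessionId);
}
await client.deleteSession("obsolete-session-id"); // placeholder ID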
CopilotSession
Represents a single conversation session.
sessionId: string
The unique identifier for this session.

workspacePath?: string
Path to the session workspace directory when infinite sessions are enabled. Contains checkpoints/, plan.md, and files/ subdirectories. Undefined if infinite sessions are disabled.
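For instance, a sketch that reads the session's plan file from the workspace using Node's fs/promises (the subdirectory layout is the one documented above):
import { readFile } from "node:fs/promises";
import { join } from "node:path";

if (session.workspacePath) {
  const plan = await readFile(join(session.workspacePath, "plan.md"), "utf8");
  console.log(plan);
}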
send(options: MessageOptions): Promise<string>
Send a message to the session. Returns immediately after the message is queued; use event handlers or sendAndWait() to wait for completion.
Options:
- prompt: string - The message/prompt to send
- attachments?: Array<{type, path, displayName}> - File attachments
- mode?: "enqueue" | "immediate" - Delivery mode

Returns the message ID.
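For example, sending with immediate delivery and keeping the returned message ID:
const messageId = await session.send({
  prompt: "Summarize the open TODOs",
  mode: "immediate", // skip the queue; "enqueue" is the other documented mode
});
console.log(messageId);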
sendAndWait(options: MessageOptions, timeout?: number): Promise<AssistantMessageEvent | undefined>
Send a message and wait until the session becomes idle.
Options:
- prompt: string - The message/prompt to send
- attachments?: Array<{type, path, displayName}> - File attachments
- mode?: "enqueue" | "immediate" - Delivery mode
- timeout?: number - Optional timeout in milliseconds

Returns the final assistant message event, or undefined if none was received.
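A short sketch with a 60-second timeout, handling the undefined case (assuming the returned event carries its text under data.content, as the assistant.message examples below show):
const finalMessage = await session.sendAndWait({ prompt: "What is 2+2?" }, 60_000);
if (finalMessage) {
  console.log(finalMessage.data.content);
} else {
  console.log("No assistant message received");
}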
on(eventType: string, handler: TypedSessionEventHandler): () => void
Subscribe to a specific event type. The handler receives properly typed events.
// Listen for specific event types with full type inference
session.on("assistant.message", (event) => {
console.log(event.data.content); // TypeScript knows about event.data.content
});
session.on("session.idle", () => {
console.log("Session is idle");
});
// Listen to streaming events
session.on("assistant.message_delta", (event) => {
process.stdout.write(event.data.deltaContent);
});
on(handler: SessionEventHandler): () => void
Subscribe to all session events. Returns an unsubscribe function.
const unsubscribe = session.on((event) => {
// Handle any event type
console.log(event.type, event);
});
// Later...
unsubscribe();
abort(): Promise<void>
Abort the currently processing message in this session.

getMessages(): Promise<SessionEvent[]>
Get all events/messages from this session.

destroy(): Promise<void>
Destroy the session and free resources.
Sessions emit various events during processing:
- user.message - User message added
- assistant.message - Assistant response
- assistant.message_delta - Streaming response chunk
- tool.execution_start - Tool execution started
- tool.execution_end - Tool execution completed

See the SessionEvent type in the source for full details.
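For example, the catch-all subscription shown above can log tool activity without knowing each payload's shape:
const unsubscribe = session.on((event) => {
  if (event.type.startsWith("tool.")) {
    console.log(`tool event: ${event.type}`);
  }
});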
The SDK supports image attachments via the attachments parameter. You can attach images by providing their file path:
await session.send({
prompt: "What's in this image?",
attachments: [
{
type: "file",
path: "/path/to/image.jpg",
},
],
});
Supported image formats include JPG, PNG, GIF, and other common image types. The agent's view tool can also read images directly from the filesystem, so you can ask questions like:
await session.send({ prompt: "What does the most recent jpg in this directory portray?" });
Enable streaming to receive assistant response chunks as they're generated:
const session = await client.createSession({
model: "gpt-5",
streaming: true,
});
// Wait for completion using typed event handlers
const done = new Promise<void>((resolve) => {
session.on("assistant.message_delta", (event) => {
// Streaming message chunk - print incrementally
process.stdout.write(event.data.deltaContent);
});
session.on("assistant.reasoning_delta", (event) => {
// Streaming reasoning chunk (if model supports reasoning)
process.stdout.write(event.data.deltaContent);
});
session.on("assistant.message", (event) => {
// Final message - complete content
console.log("\n--- Final message ---");
console.log(event.data.content);
});
session.on("assistant.reasoning", (event) => {
// Final reasoning content (if model supports reasoning)
console.log("--- Reasoning ---");
console.log(event.data.content);
});
session.on("session.idle", () => {
// Session finished processing
resolve();
});
});
await session.send({ prompt: "Tell me a short story" });
await done; // Wait for streaming to complete
When streaming: true:
- assistant.message_delta events are sent with deltaContent containing incremental text
- assistant.reasoning_delta events are sent with deltaContent for reasoning/chain-of-thought (model-dependent)
- Accumulate deltaContent values to build the full response progressively
- assistant.message and assistant.reasoning events contain the complete content

Note: assistant.message and assistant.reasoning (final events) are always sent regardless of the streaming setting.
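A minimal sketch of accumulating deltas into the full text:
let fullText = "";
session.on("assistant.message_delta", (event) => {
  fullText += event.data.deltaContent; // append each incremental chunk
});
session.on("assistant.message", (event) => {
  // The final event carries the complete content; it should match the accumulated text.
  console.log(fullText === event.data.content);
});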
To manage the client lifecycle yourself, create it with autoStart disabled:
const client = new CopilotClient({ autoStart: false });
// Start manually
await client.start();
// Use client...
// Stop manually
await client.stop();
You can let the CLI call back into your process when the model needs capabilities you own. Use defineTool with Zod schemas for type-safe tool definitions:
import { z } from "zod";
import { CopilotClient, defineTool } from "@github/copilot-sdk";
const session = await client.createSession({
model: "gpt-5",
tools: [
defineTool("lookup_issue", {
description: "Fetch issue details from our tracker",
parameters: z.object({
id: z.string().describe("Issue identifier"),
}),
handler: async ({ id }) => {
const issue = await fetchIssue(id);
return issue;
},
}),
],
});
When Copilot invokes lookup_issue, the client automatically runs your handler and responds to the CLI. Handlers can return any JSON-serializable value (automatically wrapped), a simple string, or a ToolResultObject for full control over result metadata. Raw JSON schemas are also supported if Zod isn't desired.
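For instance, a handler can return a bare string directly; a sketch (the tool name and logic are illustrative):
const echoTool = defineTool("echo_upper", {
  description: "Echo the input in upper case",
  parameters: z.object({
    text: z.string().describe("Text to echo"),
  }),
  handler: async ({ text }) => text.toUpperCase(), // plain string result, auto-wrapped
});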
Control the system prompt using systemMessage in session config:
const session = await client.createSession({
model: "gpt-5",
systemMessage: {
content: `
<workflow_rules>
- Always check for security vulnerabilities
- Suggest performance improvements when applicable
</workflow_rules>
`,
},
});
The SDK auto-injects environment context, tool instructions, and security guardrails. The default CLI persona is preserved, and your content is appended after SDK-managed sections. To change the persona or fully redefine the prompt, use mode: "replace".
For full control (removes all guardrails), use mode: "replace":
const session = await client.createSession({
model: "gpt-5",
systemMessage: {
mode: "replace",
content: "You are a helpful assistant.",
},
});
By default, sessions run with infinite sessions enabled, which automatically manages context window limits through background compaction and persists state to a workspace directory.
// Default: infinite sessions enabled with default thresholds
const session = await client.createSession({ model: "gpt-5" });
// Access the workspace path for checkpoints and files
console.log(session.workspacePath);
// => ~/.copilot/session-state/{sessionId}/
// Custom thresholds
const session = await client.createSession({
model: "gpt-5",
infiniteSessions: {
enabled: true,
backgroundCompactionThreshold: 0.80, // Start compacting at 80% context usage
bufferExhaustionThreshold: 0.95, // Block at 95% until compaction completes
},
});
// Disable infinite sessions
const session = await client.createSession({
model: "gpt-5",
infiniteSessions: { enabled: false },
});
When enabled, sessions emit compaction events:
- session.compaction_start - Background compaction started
- session.compaction_complete - Compaction finished (includes token counts)

You can run multiple independent sessions on a single client:
const session1 = await client.createSession({ model: "gpt-5" });
const session2 = await client.createSession({ model: "claude-sonnet-4.5" });
// Both sessions are independent
await session1.sendAndWait({ prompt: "Hello from session 1" });
await session2.sendAndWait({ prompt: "Hello from session 2" });
You can also pin a session to a custom identifier:
const session = await client.createSession({
sessionId: "my-custom-session-id",
model: "gpt-5",
});
Attach files to a message, optionally with a display name:
await session.send({
prompt: "Analyze this file",
attachments: [
{
type: "file",
path: "/path/to/file.js",
displayName: "My File",
},
],
});
The SDK supports custom OpenAI-compatible API providers (BYOK - Bring Your Own Key), including local providers like Ollama. When using a custom provider, you must specify the model explicitly.
ProviderConfig:
type?: "openai" | "azure" | "anthropic" - Provider type (default: "openai")baseUrl: string - API endpoint URL (required)apiKey?: string - API key (optional for local providers like Ollama)bearerToken?: string - Bearer token for authentication (takes precedence over apiKey)wireApi?: "completions" | "responses" - API format for OpenAI/Azure (default: "completions")azure?.apiVersion?: string - Azure API version (default: "2024-10-21")Example with Ollama:
const session = await client.createSession({
model: "deepseek-coder-v2:16b", // Required when using custom provider
provider: {
type: "openai",
baseUrl: "http://localhost:11434/v1", // Ollama endpoint
// apiKey not required for Ollama
},
});
await session.sendAndWait({ prompt: "Hello!" });
Example with custom OpenAI-compatible API:
const session = await client.createSession({
model: "gpt-4",
provider: {
type: "openai",
baseUrl: "https://my-api.example.com/v1",
apiKey: process.env.MY_API_KEY,
},
});
Example with Azure OpenAI:
const session = await client.createSession({
model: "gpt-4",
provider: {
type: "azure", // Must be "azure" for Azure endpoints, NOT "openai"
baseUrl: "https://my-resource.openai.azure.com", // Just the host, no path
apiKey: process.env.AZURE_OPENAI_KEY,
azure: {
apiVersion: "2024-10-21",
},
},
});
Important notes:
- When using a custom provider, the model parameter is required. The SDK will throw an error if no model is specified.
- For Azure OpenAI endpoints (*.openai.azure.com), you must use type: "azure", not type: "openai".
- The baseUrl should be just the host (e.g., https://my-resource.openai.azure.com). Do not include /openai/v1 in the URL; the SDK handles path construction automatically.
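And a sketch of bearer-token authentication against a self-hosted OpenAI-compatible endpoint (the URL, model name, and env var are placeholders):
const session = await client.createSession({
  model: "internal-llm-v1", // required with a custom provider
  provider: {
    type: "openai",
    baseUrl: "https://llm.internal.example.com/v1",
    bearerToken: process.env.LLM_BEARER_TOKEN, // takes precedence over apiKey
  },
});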
Let the agent ask the user questions via the ask_user tool by providing an onUserInputRequest handler:
const session = await client.createSession({
model: "gpt-5",
onUserInputRequest: async (request, invocation) => {
// request.question - The question to ask
// request.choices - Optional array of choices for multiple choice
// request.allowFreeform - Whether freeform input is allowed (default: true)
console.log(`Agent asks: ${request.question}`);
if (request.choices) {
console.log(`Choices: ${request.choices.join(", ")}`);
}
// Return the user's response
return {
answer: "User's answer here",
wasFreeform: true, // Whether the answer was freeform (not from choices)
};
},
});
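A sketch wiring the handler to stdin with Node's readline/promises; the terminal wiring is an assumption about your environment, not part of the SDK:
import { createInterface } from "node:readline/promises";

const session = await client.createSession({
  model: "gpt-5",
  onUserInputRequest: async (request) => {
    const rl = createInterface({ input: process.stdin, output: process.stdout });
    const answer = await rl.question(`${request.question} `);
    rl.close();
    return { answer, wasFreeform: true };
  },
});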
Hook into session lifecycle events by providing handlers in the hooks configuration:
const session = await client.createSession({
model: "gpt-5",
hooks: {
// Called before each tool execution
onPreToolUse: async (input, invocation) => {
console.log(`About to run tool: ${input.toolName}`);
// Return permission decision and optionally modify args
return {
permissionDecision: "allow", // "allow", "deny", or "ask"
modifiedArgs: input.toolArgs, // Optionally modify tool arguments
additionalContext: "Extra context for the model",
};
},
// Called after each tool execution
onPostToolUse: async (input, invocation) => {
console.log(`Tool ${input.toolName} completed`);
// Optionally modify the result or add context
return {
additionalContext: "Post-execution notes",
};
},
// Called when user submits a prompt
onUserPromptSubmitted: async (input, invocation) => {
console.log(`User prompt: ${input.prompt}`);
return {
modifiedPrompt: input.prompt, // Optionally modify the prompt
};
},
// Called when session starts
onSessionStart: async (input, invocation) => {
console.log(`Session started from: ${input.source}`); // "startup", "resume", "new"
return {
additionalContext: "Session initialization context",
};
},
// Called when session ends
onSessionEnd: async (input, invocation) => {
console.log(`Session ended: ${input.reason}`);
},
// Called when an error occurs
onErrorOccurred: async (input, invocation) => {
console.error(`Error in ${input.errorContext}: ${input.error}`);
return {
errorHandling: "retry", // "retry", "skip", or "abort"
};
},
},
});
Available hooks:
- onPreToolUse - Intercept tool calls before execution. Can allow/deny or modify arguments.
- onPostToolUse - Process tool results after execution. Can modify results or add context.
- onUserPromptSubmitted - Intercept user prompts. Can modify the prompt before processing.
- onSessionStart - Run logic when a session starts or resumes.
- onSessionEnd - Cleanup or logging when a session ends.
- onErrorOccurred - Handle errors with retry/skip/abort strategies.

Wrap SDK calls in try/catch to handle errors:
try {
const session = await client.createSession();
await session.send({ prompt: "Hello" });
} catch (error) {
console.error("Error:", error.message);
}
License: MIT