
@github/copilot-sdk
TypeScript SDK for programmatic control of GitHub Copilot CLI via JSON-RPC.
Note: This SDK is in technical preview and may change in breaking ways.
npm install @github/copilot-sdk
import { CopilotClient } from "@github/copilot-sdk";
// Create and start client
const client = new CopilotClient();
await client.start();
// Create a session
const session = await client.createSession({
  model: "gpt-5",
});
// Wait for response using session.idle event
const done = new Promise<void>((resolve) => {
  session.on((event) => {
    if (event.type === "assistant.message") {
      console.log(event.data.content);
    } else if (event.type === "session.idle") {
      resolve();
    }
  });
});
// Send a message and wait for completion
await session.send({ prompt: "What is 2+2?" });
await done;
// Clean up
await session.destroy();
await client.stop();
new CopilotClient(options?: CopilotClientOptions)
Options:
cliPath?: string - Path to CLI executable (default: "copilot" from PATH)
cliArgs?: string[] - Extra arguments prepended before SDK-managed flags (e.g. ["./dist-cli/index.js"] when using node)
cliUrl?: string - URL of existing CLI server to connect to (e.g., "localhost:8080", "http://127.0.0.1:9000", or just "8080"). When provided, the client will not spawn a CLI process.
port?: number - Server port (default: 0 for random)
useStdio?: boolean - Use stdio transport instead of TCP (default: true)
logLevel?: string - Log level (default: "info")
autoStart?: boolean - Auto-start server (default: true)
autoRestart?: boolean - Auto-restart on crash (default: true)
start(): Promise<void>
Start the CLI server and establish connection.
stop(): Promise<Error[]>
Stop the server and close all sessions. Returns a list of any errors encountered during cleanup.
forceStop(): Promise<void>
Force stop the CLI server without graceful cleanup. Use when stop() takes too long.
createSession(config?: SessionConfig): Promise<CopilotSession>
Create a new conversation session.
Config:
sessionId?: string - Custom session ID
model?: string - Model to use ("gpt-5", "claude-sonnet-4.5", etc.)
tools?: Tool[] - Custom tools exposed to the CLI
systemMessage?: SystemMessageConfig - System message customization (see below)
infiniteSessions?: InfiniteSessionConfig - Configure automatic context compaction (see below)
resumeSession(sessionId: string, config?: ResumeSessionConfig): Promise<CopilotSession>
Resume an existing session. Returns the session with workspacePath populated if infinite sessions were enabled.
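For example, a session created earlier can be picked up again by its ID (a minimal sketch; the ID shown is a placeholder for a session you previously created):
// Resume a previously created session (placeholder ID).
const resumed = await client.resumeSession("my-custom-session-id");
// If infinite sessions were enabled, the workspace path is populated.
console.log(resumed.workspacePath);
// Continue the conversation where it left off.
await resumed.sendAndWait({ prompt: "Summarize what we discussed so far." });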
ping(message?: string): Promise<{ message: string; timestamp: number }>
Ping the server to check connectivity.
getState(): ConnectionState
Get current connection state.
listSessions(): Promise<SessionMetadata[]>
List all available sessions.
deleteSession(sessionId: string): Promise<void>
Delete a session and its data from disk.
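A rough sketch of these options and methods together, assuming a CLI server is already listening locally (the localhost:8080 address is illustrative):
import { CopilotClient } from "@github/copilot-sdk";

// Attach to an already-running CLI server instead of spawning a process.
const client = new CopilotClient({ cliUrl: "localhost:8080" });
await client.start();

// Check connectivity and the current connection state.
console.log(await client.ping("hello"));
console.log(client.getState());

// Enumerate known sessions, then shut down.
const sessions = await client.listSessions();
console.log(`Found ${sessions.length} sessions`);
await client.stop();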
A CopilotSession represents a single conversation session.
sessionId: string
The unique identifier for this session.
workspacePath?: string
Path to the session workspace directory when infinite sessions are enabled. Contains checkpoints/, plan.md, and files/ subdirectories. Undefined if infinite sessions are disabled.
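For instance, the agent's plan can be read straight from that directory (a minimal sketch; it assumes infinite sessions are enabled and that plan.md has already been written):
import { readFile } from "node:fs/promises";
import { join } from "node:path";

// Inspect the plan file in the session workspace (may not exist yet).
if (session.workspacePath) {
  const plan = await readFile(join(session.workspacePath, "plan.md"), "utf8");
  console.log(plan);
}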
send(options: MessageOptions): Promise<string>
Send a message to the session. Returns immediately after the message is queued; use event handlers or sendAndWait() to wait for completion.
Options:
prompt: string - The message/prompt to send
attachments?: Array<{type, path, displayName}> - File attachments
mode?: "enqueue" | "immediate" - Delivery mode
Returns the message ID.
sendAndWait(options: MessageOptions, timeout?: number): Promise<AssistantMessageEvent | undefined>
Send a message and wait until the session becomes idle.
Options:
prompt: string - The message/prompt to send
attachments?: Array<{type, path, displayName}> - File attachments
mode?: "enqueue" | "immediate" - Delivery mode
timeout?: number - Optional timeout in milliseconds
Returns the final assistant message event, or undefined if none was received.
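A minimal sketch of sendAndWait with a timeout (the 30-second value is arbitrary):
// Block until the session goes idle, or give up after 30 seconds.
const reply = await session.sendAndWait({ prompt: "What is 2+2?" }, 30_000);
if (reply) {
  console.log(reply.data.content);
} else {
  console.log("No assistant message was received.");
}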
on(handler: SessionEventHandler): () => void
Subscribe to session events. Returns an unsubscribe function.
const unsubscribe = session.on((event) => {
  console.log(event);
});
// Later...
unsubscribe();
abort(): Promise<void>
Abort the currently processing message in this session.
getMessages(): Promise<SessionEvent[]>
Get all events/messages from this session.
destroy(): Promise<void>
Destroy the session and free resources.
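A small sketch tying these together: start a long request, cancel it, then inspect what the session recorded (the 5-second delay is arbitrary):
// Start a long-running request without waiting for completion.
await session.send({ prompt: "Write a very long essay about distributed systems." });

// Give it a few seconds, then cancel the in-flight message.
await new Promise((resolve) => setTimeout(resolve, 5_000));
await session.abort();

// Review everything the session recorded, then free its resources.
const events = await session.getMessages();
console.log(`Session recorded ${events.length} events`);
await session.destroy();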
Sessions emit various events during processing:
user.message - User message added
assistant.message - Assistant response
assistant.message_delta - Streaming response chunk
tool.execution_start - Tool execution started
tool.execution_end - Tool execution completed
See SessionEvent type in the source for full details.
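For example, a handler can log tool activity as it happens (a rough sketch; only event.type is used here, since payload shapes vary by event):
// Log tool executions without inspecting event payloads.
const unsubscribe = session.on((event) => {
  if (event.type === "tool.execution_start" || event.type === "tool.execution_end") {
    console.log(`[${new Date().toISOString()}] ${event.type}`);
  }
});

// Stop logging when no longer needed.
unsubscribe();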
The SDK supports image attachments via the attachments parameter. You can attach images by providing their file path:
await session.send({
prompt: "What's in this image?",
attachments: [
{
type: "file",
path: "/path/to/image.jpg",
},
],
});
Supported image formats include JPG, PNG, GIF, and other common image types. The agent's view tool can also read images directly from the filesystem, so you can ask questions like:
await session.send({ prompt: "What does the most recent jpg in this directory portray?" });
Enable streaming to receive assistant response chunks as they're generated:
const session = await client.createSession({
  model: "gpt-5",
  streaming: true,
});
// Wait for completion using session.idle event
const done = new Promise<void>((resolve) => {
  session.on((event) => {
    if (event.type === "assistant.message_delta") {
      // Streaming message chunk - print incrementally
      process.stdout.write(event.data.deltaContent);
    } else if (event.type === "assistant.reasoning_delta") {
      // Streaming reasoning chunk (if model supports reasoning)
      process.stdout.write(event.data.deltaContent);
    } else if (event.type === "assistant.message") {
      // Final message - complete content
      console.log("\n--- Final message ---");
      console.log(event.data.content);
    } else if (event.type === "assistant.reasoning") {
      // Final reasoning content (if model supports reasoning)
      console.log("--- Reasoning ---");
      console.log(event.data.content);
    } else if (event.type === "session.idle") {
      // Session finished processing
      resolve();
    }
  });
});
await session.send({ prompt: "Tell me a short story" });
await done; // Wait for streaming to complete
When streaming: true:
assistant.message_delta events are sent with deltaContent containing incremental text
assistant.reasoning_delta events are sent with deltaContent for reasoning/chain-of-thought (model-dependent)
Concatenate the deltaContent values to build the full response progressively
assistant.message and assistant.reasoning events contain the complete content
Note: assistant.message and assistant.reasoning (final events) are always sent regardless of the streaming setting.
To manage the client lifecycle manually, disable autoStart and call start() and stop() yourself:
const client = new CopilotClient({ autoStart: false });
// Start manually
await client.start();
// Use client...
// Stop manually
await client.stop();
You can let the CLI call back into your process when the model needs capabilities you own. Use defineTool with Zod schemas for type-safe tool definitions:
import { z } from "zod";
import { CopilotClient, defineTool } from "@github/copilot-sdk";
const session = await client.createSession({
  model: "gpt-5",
  tools: [
    defineTool("lookup_issue", {
      description: "Fetch issue details from our tracker",
      parameters: z.object({
        id: z.string().describe("Issue identifier"),
      }),
      handler: async ({ id }) => {
        const issue = await fetchIssue(id);
        return issue;
      },
    }),
  ],
});
When Copilot invokes lookup_issue, the client automatically runs your handler and responds to the CLI. Handlers can return any JSON-serializable value (automatically wrapped), a simple string, or a ToolResultObject for full control over result metadata. Raw JSON schemas are also supported if Zod isn't desired.
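As a sketch of the simplest case, a handler can just return a string (get_server_time is a made-up tool name for illustration):
import { z } from "zod";
import { defineTool } from "@github/copilot-sdk";

// A tool whose handler returns a plain string result.
const timeTool = defineTool("get_server_time", {
  description: "Return the current server time as an ISO 8601 string",
  parameters: z.object({}),
  handler: async () => new Date().toISOString(),
});
Pass it to a session via the tools array, as with lookup_issue above.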
Control the system prompt using systemMessage in session config:
const session = await client.createSession({
  model: "gpt-5",
  systemMessage: {
    content: `
<workflow_rules>
- Always check for security vulnerabilities
- Suggest performance improvements when applicable
</workflow_rules>
`,
  },
});
The SDK auto-injects environment context, tool instructions, and security guardrails. The default CLI persona is preserved, and your content is appended after SDK-managed sections. To change the persona or fully redefine the prompt, use mode: "replace".
For full control (removes all guardrails), use mode: "replace":
const session = await client.createSession({
  model: "gpt-5",
  systemMessage: {
    mode: "replace",
    content: "You are a helpful assistant.",
  },
});
By default, sessions run with infinite sessions enabled, which automatically manages context window limits through background compaction and persists state to a workspace directory.
// Default: infinite sessions enabled with default thresholds
const session = await client.createSession({ model: "gpt-5" });
// Access the workspace path for checkpoints and files
console.log(session.workspacePath);
// => ~/.copilot/session-state/{sessionId}/
// Custom thresholds
const session = await client.createSession({
  model: "gpt-5",
  infiniteSessions: {
    enabled: true,
    backgroundCompactionThreshold: 0.80, // Start compacting at 80% context usage
    bufferExhaustionThreshold: 0.95, // Block at 95% until compaction completes
  },
});
// Disable infinite sessions
const session = await client.createSession({
  model: "gpt-5",
  infiniteSessions: { enabled: false },
});
When enabled, sessions emit compaction events:
session.compaction_start - Background compaction started
session.compaction_complete - Compaction finished (includes token counts)
Multiple sessions can be created from a single client and run independently:
const session1 = await client.createSession({ model: "gpt-5" });
const session2 = await client.createSession({ model: "claude-sonnet-4.5" });
// Both sessions are independent
await session1.sendAndWait({ prompt: "Hello from session 1" });
await session2.sendAndWait({ prompt: "Hello from session 2" });
You can assign a custom session ID when creating a session:
const session = await client.createSession({
  sessionId: "my-custom-session-id",
  model: "gpt-5",
});
Arbitrary files can be attached with an optional display name:
await session.send({
  prompt: "Analyze this file",
  attachments: [
    {
      type: "file",
      path: "/path/to/file.js",
      displayName: "My File",
    },
  ],
});
Wrap SDK calls in try/catch to handle errors:
try {
  const session = await client.createSession();
  await session.send({ prompt: "Hello" });
} catch (error) {
  console.error("Error:", error.message);
}
The GitHub Copilot CLI must be installed and available on PATH (or specify its location via cliPath).
License: MIT