# OpenAI API Mock

This is a Node.js module for mocking OpenAI API responses in a development environment.

It's useful for testing and development purposes when you don't want to make actual API calls.
The module supports the following OpenAI API endpoints:
- chat completions
- chat completions with streaming
- chat completions with functions
- image generations
This module powers the sandbox mode for Aipify.
## Table of Contents

- [Installation](#installation)
- [Usage](#usage)
- [Using with OpenAI-Compatible Services](#using-with-openai-compatible-services)
- [Example responses](#example-responses)
- [Consistent Outputs for Testing](#consistent-outputs-for-testing)
- [Intercepted URLs](#intercepted-urls)
- [TypeScript Support](#typescript-support)
- [Dependencies](#dependencies)
- [License](#license)
## Installation
You can install this module using npm as a dev dependency:

```bash
npm install -D openai-api-mock
```
## Usage
The module supports both ESM and CommonJS imports:

```js
// ESM
import { mockOpenAIResponse } from 'openai-api-mock';

// CommonJS
const { mockOpenAIResponse } = require('openai-api-mock');
```
Then call the `mockOpenAIResponse` function to set up the mock response:

```js
// Mock only in a development environment
mockOpenAIResponse();

// Force mocking regardless of the environment
mockOpenAIResponse(true);

// Pass additional options
mockOpenAIResponse(false, {
  includeErrors: true,
  latency: 1000,
  logRequests: true,
  seed: 12345,
  useFixedResponses: true,
  baseUrl: 'https://api.openai.com',
});
```
The function accepts two parameters:

- `force` (boolean): Determines whether the mock response should be used regardless of the environment. If `false` or not provided, mocking only occurs in a development environment.
- `options` (object): Additional configuration options:
  - `includeErrors` (boolean): When `true`, randomly simulates API errors.
  - `latency` (number): Adds an artificial delay to responses, in milliseconds.
  - `logRequests` (boolean): Logs incoming requests to the console for debugging.
  - `seed` (number|string): Seed value for consistent, deterministic responses generated with faker.js.
  - `useFixedResponses` (boolean): Use predefined fixed response templates for completely consistent responses.
  - `baseUrl` (string): Base URL for the OpenAI API or an OpenAI-compatible service (defaults to `https://api.openai.com`).
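As an illustration of what `includeErrors` and `latency` mean for callers, here is a hypothetical sketch of a mock handler that applies an artificial delay and occasional simulated failures. `makeMockHandler` is an invented name for illustration, not part of this library's API:

```js
// Hypothetical sketch (not the library's internals) of how latency and
// random errors could be applied to every mocked reply.
function makeMockHandler({ includeErrors = false, latency = 0 } = {}) {
  return async function handle() {
    // Artificial delay in milliseconds
    if (latency > 0) await new Promise((resolve) => setTimeout(resolve, latency));
    // Occasionally return a simulated API error
    if (includeErrors && Math.random() < 0.2) {
      return { status: 500, body: { error: { message: 'mock error' } } };
    }
    return { status: 200, body: { object: 'chat.completion', choices: [] } };
  };
}
```

With `includeErrors` enabled, test code should therefore be prepared for the client to reject some calls, just as it would against the real API.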
The function returns an object with control methods:

```js
const mock = mockOpenAIResponse();

console.log(mock.isActive); // whether mocking is currently active
mock.stopMocking();         // remove the interceptors
mock.setSeed(12345);        // switch to deterministic responses
mock.resetSeed();           // return to random responses

// Inspect and extend response templates
const templates = mock.getResponseTemplates();
const customTemplate = mock.createResponseTemplate('SIMPLE_CHAT', {
  choices: [{ message: { content: 'Custom response' } }],
});

// Register an additional mocked endpoint
mock.addCustomEndpoint('POST', '/v1/custom', (uri, body) => {
  return [200, { custom: 'response' }];
});
```
## Using with OpenAI-Compatible Services
The library supports mocking any OpenAI-compatible API by configuring the baseUrl option. This is useful when working with services like Azure OpenAI, local models, or other OpenAI-compatible endpoints.
```js
// Azure OpenAI
mockOpenAIResponse(true, {
  baseUrl: 'https://your-resource.openai.azure.com',
  logRequests: true,
});

// Local model server
mockOpenAIResponse(true, {
  baseUrl: 'http://localhost:11434',
  logRequests: true,
});

// Another OpenAI-compatible endpoint
mockOpenAIResponse(true, {
  baseUrl: 'https://api.anthropic.com',
  logRequests: true,
});
```
Point your OpenAI client at the same base URL:

```js
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'your-api-key',
  baseURL: 'https://your-resource.openai.azure.com',
});

const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```
When using a custom `baseUrl`, the mock will:

- Intercept requests to the specified base URL instead of `api.openai.com`
- Block network connections to that specific host while allowing other network requests
- Apply all the same mocking behavior (errors, latency, seeding, etc.) to the custom endpoint
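The origin-matching behavior described above can be sketched as a small standalone function. `shouldIntercept` is a hypothetical name used only for this illustration, not part of the library's API:

```js
// Sketch: decide whether a request should be intercepted by comparing
// the request URL's origin with the configured base URL's origin.
function shouldIntercept(requestUrl, baseUrl = 'https://api.openai.com') {
  const request = new URL(requestUrl);
  const base = new URL(baseUrl);
  // Only requests to the configured host are mocked; everything else
  // is allowed through to the network.
  return request.origin === base.origin;
}

shouldIntercept('https://your-api.example.com/v1/chat/completions',
                'https://your-api.example.com'); // true
shouldIntercept('https://api.openai.com/v1/chat/completions',
                'https://your-api.example.com'); // false
```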
## Example responses
```js
mockOpenAIResponse();

const response = await openai.chat.completions.create({
  model: 'gpt-3.5',
  messages: [
    { role: 'system', content: "You're an expert chef" },
    { role: 'user', content: 'Suggest at least 5 recipes' },
  ],
});
```
In this example, the `response` constant will contain mock data simulating a response from the OpenAI API:

```js
{
  choices: [
    {
      finish_reason: 'stop',
      index: 0,
      message: [Object],
      logprobs: null
    }
  ],
  created: 1707040459,
  id: 'chatcmpl-tggOnwW8Lp2XiwQ8dmHHAcNYJ8CfzR',
  model: 'gpt-3.5-mock',
  object: 'chat.completion',
  usage: { completion_tokens: 17, prompt_tokens: 57, total_tokens: 74 }
}
```
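The `message` field is collapsed to `[Object]` above. Assuming a response of this shape, the reply text is read as follows; the literal below is illustrative, not the library's exact output:

```js
// A response shaped like the mock output above, with the message expanded.
const mockResponse = {
  object: 'chat.completion',
  model: 'gpt-3.5-mock',
  choices: [
    {
      finish_reason: 'stop',
      index: 0,
      logprobs: null,
      message: { role: 'assistant', content: 'Here are five recipes...' },
    },
  ],
};

// The reply text lives at choices[0].message.content, as with the real API.
const text = mockResponse.choices[0].message.content;
```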
The library also supports mocking stream responses:

```js
mockOpenAIResponse();

const response = await openai.chat.completions.create({
  model: 'gpt-3.5',
  stream: true,
  messages: [
    { role: 'system', content: "You're an expert chef" },
    { role: 'user', content: 'Suggest at least 5 recipes' },
  ],
});

for await (const part of response) {
  console.log(part.choices[0]?.delta?.content || '');
}
```
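Conceptually, a streamed reply is just a sequence of `chat.completion.chunk` objects whose deltas concatenate into the full text. A minimal standalone sketch of that shape, using a synchronous generator for simplicity (the real stream is asynchronous, and this is not the library's internal implementation):

```js
// Yield one chunk per word, then a final chunk with the finish reason.
function* mockChatStream(text) {
  for (const word of text.split(' ')) {
    yield {
      object: 'chat.completion.chunk',
      choices: [{ index: 0, delta: { content: word + ' ' } }],
    };
  }
  // The closing chunk carries an empty delta and finish_reason: 'stop'.
  yield {
    object: 'chat.completion.chunk',
    choices: [{ index: 0, delta: {}, finish_reason: 'stop' }],
  };
}

// Consumers reassemble the reply by concatenating the deltas.
let out = '';
for (const part of mockChatStream('five quick recipes')) {
  out += part.choices[0]?.delta?.content || '';
}
// out === 'five quick recipes ' (each word chunk carries a trailing space)
```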
## Consistent Outputs for Testing
The library provides several mechanisms to achieve consistent, deterministic outputs for reliable testing:
### Seed-based Consistency
Use seeds to ensure reproducible responses across test runs:
```js
const mock = mockOpenAIResponse(true, { seed: 12345 });

const response1 = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello' }],
});
const response2 = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello' }],
});

// Same seed, same input: identical responses
console.log(JSON.stringify(response1) === JSON.stringify(response2));
```
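Why a fixed seed yields identical output can be illustrated with a tiny standalone seeded PRNG (mulberry32). The library itself delegates seeding to faker.js, so this is only a sketch of the mechanism, not its actual code:

```js
// mulberry32: a small seeded PRNG. The same seed always produces the
// same sequence of values in [0, 1), which is what makes seeded fake
// data reproducible across test runs.
function mulberry32(seed) {
  return function () {
    seed |= 0;
    seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const a = mulberry32(12345);
const b = mulberry32(12345);
const seqA = [a(), a(), a()];
const seqB = [b(), b(), b()];
console.log(seqA.join() === seqB.join()); // true: same seed, same sequence
```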
### Fixed Response Templates
For maximum consistency, use predefined response templates:
```js
const mock = mockOpenAIResponse(true, { useFixedResponses: true });

const response = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Any message' }],
});

console.log(response.choices[0].message.content);
```
### Runtime Seed Management
Change seeds during runtime for different test scenarios:
```js
const mock = mockOpenAIResponse(true);

mock.setSeed(12345);
const responseA = await openai.chat.completions.create({...});

mock.setSeed(54321);
const responseB = await openai.chat.completions.create({...});

mock.resetSeed(); // back to random responses
const responseRandom = await openai.chat.completions.create({...});
```
For comprehensive examples and best practices, see `CONSISTENCY_EXAMPLES.md`.
## Intercepted URLs
This module uses the nock library to intercept HTTP calls to OpenAI API endpoints. By default, it intercepts:

- `https://api.openai.com/v1/chat/completions`: used for generating chat completions.
- `https://api.openai.com/v1/images/generations`: used for generating images.

When using the `baseUrl` option, the intercepted URLs will use your configured base URL instead:

```js
mockOpenAIResponse(true, { baseUrl: 'https://your-api.example.com' });
```
## TypeScript Support
This package includes TypeScript definitions out of the box. After installing the package, you can use it with full type support:
```ts
import { mockOpenAIResponse, MockOptions } from 'openai-api-mock';

const options: MockOptions = {
  includeErrors: true,
  latency: 1000,
  logRequests: true,
  seed: 12345,
  useFixedResponses: true,
  baseUrl: 'https://api.openai.com',
};

const mock = mockOpenAIResponse(true, options);

console.log(mock.isActive);
mock.stopMocking();
mock.setSeed(54321);
mock.resetSeed();

const templates = mock.getResponseTemplates();
const customTemplate = mock.createResponseTemplate('SIMPLE_CHAT', {
  choices: [{ message: { content: 'Custom content' } }],
});

mock.addCustomEndpoint('POST', '/v1/custom', (uri, body) => {
  return [200, { custom: 'response' }];
});
```
## Dependencies
This module depends on the following npm packages:

- `nock`: for intercepting HTTP calls.
- `@faker-js/faker`: for generating fake data.
## License
This project is licensed under the MIT License.