nodejs-gpt_clone
A ChatGPT implementation using the official ChatGPT model via OpenAI's API.
Support for the official ChatGPT model has been added! You can now use the gpt-3.5-turbo model with the official OpenAI API, using ChatGPTClient. This is the same model that ChatGPT uses, and it's the most powerful model available right now. Usage of this model is not free; however, it is 10x cheaper (priced at $0.002 per 1k tokens) than text-davinci-003.
See OpenAI's post, Introducing ChatGPT and Whisper APIs for more information.
To use it, set modelOptions.model to gpt-3.5-turbo, and ChatGPTClient will handle the rest. The default model used in ChatGPTClient is now gpt-3.5-turbo.
You can still set userLabel, chatGptLabel, and promptPrefix (system instructions) as usual.
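For example, the client options for this setup might look like the following sketch (the labels and prefix are placeholders, not defaults):

const clientOptions = {
    modelOptions: {
        // The official ChatGPT model.
        model: 'gpt-3.5-turbo',
    },
    // (Optional) custom name for the user.
    userLabel: 'User',
    // (Optional) custom name for the assistant.
    chatGptLabel: 'Bob',
    // (Optional) system instructions.
    promptPrefix: 'You are Bob, a helpful assistant.',
};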
There may be a higher chance of your account being banned if you continue to automate chat.openai.com. Continue doing so at your own risk.
I've added an experimental ChatGPTBrowserClient which depends on a reverse proxy server that makes use of a Cloudflare bypass, allowing you to talk to ChatGPT (chat.openai.com) without requiring browser automation. All you need is your access token from https://chat.openai.com/api/auth/session.
As always, please note that if you choose to go this route, you are exposing your access token to a closed-source third-party server. If you are concerned about this, you may choose to either use a free ChatGPT account to minimize risks, or continue using ChatGPTClient instead with the text-davinci-003 model.
The method we were using to access the ChatGPT underlying models has been patched, unfortunately. Your options right now are to either use the official OpenAI API with the text-davinci-003 model (which costs money), or use a browser-based solution to interface with ChatGPT's backend (which is less powerful, more rate-limited, and is not supported by this library at this time).
With the help of @PawanOsman, we've figured out a way to continue using the ChatGPT underlying models. To hopefully prevent losing access again, we've decided to provide reverse proxy servers compatible with the OpenAI API. I've updated ChatGPTClient to support using a reverse proxy server instead of the OpenAI API server. See Using a Reverse Proxy for more information on available proxy servers and how they work.
Please note that if you choose to go this route, you are exposing your access token to a closed-source third-party server. If you are concerned about this, you may choose to either use a free ChatGPT account to minimize risks, or continue using the official OpenAI API instead with the text-davinci-003 model.
I've found a new working model for text-chat-davinci-002, text-chat-davinci-002-sh-alpha-aoruigiofdj83. This is the underlying model that the ChatGPT Plus "Turbo" version uses. Responses are blazing fast. I've updated the library to use this model.
Bad timing; text-chat-davinci-002-sh-alpha-aoruigiofdj83 was removed shortly after, possibly due to a new model somewhere out there?
Experience the power of Copilot's GPT-4 version of ChatGPT with CopilotClient (experimental).
The API server and CLI still need to be updated to support this, but you can use the client directly right now.
Please note that if your account is still wait-listed, you will not be able to use this client.
Even though text-chat-davinci-002-20221122 is back up again, it seems like it's constantly overloaded and returns a 429 error. It's likely that OpenAI only dedicated a small amount of resources to this model to prevent it being widely used by the public. Additionally, I've heard that newer versions are now access-locked to OpenAI employees and partners, so it's unlikely that we'll be able to find any workarounds until the model is officially released.
You may use the text-davinci-003 model instead as a drop-in replacement. Keep in mind that text-davinci-003 is not as good as text-chat-davinci-002 (which is trained via RLHF and fine-tuned to be a conversational AI), though results are still pretty good in most cases. Please note that using text-davinci-003 will cost you credits ($).
I will be re-adding support for the browser-based ChatGPT for the API server and CLI. Please star and watch this repository for updates.
The roller coaster has reached the next stop. text-chat-davinci-002-20221122 is back up again.
Trying to use text-chat-davinci-002-20221122 with the OpenAI API now returns a 404 error.
You may use the text-davinci-003 model instead as a drop-in replacement. Keep in mind that text-davinci-003 is not as good as text-chat-davinci-002 (which is trained via RLHF and fine-tuned to be a conversational AI), though results are still very good. Please note that using text-davinci-003 will cost you credits ($).
Please hold for further updates as we investigate further workarounds.
Trying to use text-chat-davinci-002-20230126 with the OpenAI API now returns a 404 error. Someone has already found the new model name, but they are unwilling to share at this time. I will update this repository once I find the new model. If you have any leads, please open an issue or a pull request.
In the meantime, I've added support for models like text-davinci-003, which you can use as a drop-in replacement. Keep in mind that text-davinci-003 is not as good as text-chat-davinci-002 (which is trained via RLHF and fine-tuned to be a conversational AI), though results are still very good. Please note that using text-davinci-003 will cost you credits ($).
Discord user @pig#8932 has found a working text-chat-davinci-002 model, text-chat-davinci-002-20221122. I've updated the library to use this model.
A client implementation for ChatGPT and Copilot AI. Available as a Node.js module, REST API server, and CLI app.
ChatGPTClient: support for the official ChatGPT underlying model, gpt-3.5-turbo, via OpenAI's API. A keyv-file adapter is also included in this package and can be used to store conversations in a JSON file if you're using the API server or CLI (see settings.example.js). Other models such as text-davinci-003 are also supported.
CopilotClient: support for Copilot's version of ChatGPT, powered by GPT-4.
ChatGPTBrowserClient: support for the official ChatGPT website, using a reverse proxy server for a Cloudflare bypass.
npm i @waylaidwanderer/chatgpt-api
See demos/use-client.js.
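A rough sketch of module usage, assuming this package keeps the same ChatGPTClient constructor and sendMessage signature as the upstream node-chatgpt-api demos (the API key and messages are placeholders):

import { ChatGPTClient } from '@waylaidwanderer/chatgpt-api';

const clientOptions = {
    modelOptions: { model: 'gpt-3.5-turbo' },
    debug: false,
};
const cacheOptions = {}; // Keyv options; conversations are stored in memory by default.

const client = new ChatGPTClient('OPENAI_API_KEY', clientOptions, cacheOptions);

// Start a conversation...
const response = await client.sendMessage('Hello!');
console.log(response); // { response: '...', conversationId: '...', messageId: '...' }

// ...and continue it by passing the IDs back in.
const response2 = await client.sendMessage('Write a poem about cats.', {
    conversationId: response.conversationId,
    parentMessageId: response.messageId,
});
console.log(response2.response);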
You can install the package using npm i -g @waylaidwanderer/chatgpt-api, then run it using chatgpt-api. This takes an optional --settings=<path_to_settings.js> parameter, or looks for settings.js in the current directory if not set, with the following contents:
module.exports = {
// Options for the Keyv cache, see https://www.npmjs.com/package/keyv.
// This is used for storing conversations, and supports additional drivers (conversations are stored in memory by default).
// Only necessary when using `ChatGPTClient`, or `CopilotClient` in jailbreak mode.
cacheOptions: {},
// If set, `ChatGPTClient` and `CopilotClient` will use `keyv-file` to store conversations to this JSON file instead of in memory.
// However, `cacheOptions.store` will override this if set.
storageFilePath: process.env.STORAGE_FILE_PATH || './cache.json',
chatGptClient: {
// Your OpenAI API key (for `ChatGPTClient`)
openaiApiKey: process.env.OPENAI_API_KEY || '',
// (Optional) Support for a reverse proxy for the completions endpoint (private API server).
// Warning: This will expose your `openaiApiKey` to a third party. Consider the risks before using this.
// reverseProxyUrl: 'https://chatgpt.hato.ai/completions',
// (Optional) Parameters as described in https://platform.openai.com/docs/api-reference/completions
modelOptions: {
// You can override the model name and any other parameters here.
// The default model is `gpt-3.5-turbo`.
model: 'gpt-3.5-turbo',
// Set max_tokens here to override the default max_tokens of 1000 for the completion.
// max_tokens: 1000,
},
// (Optional) Davinci models have a max context length of 4097 tokens, but you may need to change this for other models.
// maxContextTokens: 4097,
// (Optional) You might want to lower this to save money if using a paid model like `text-davinci-003`.
// Earlier messages will be dropped until the prompt is within the limit.
// maxPromptTokens: 3097,
// (Optional) Set custom instructions instead of "You are ChatGPT...".
// promptPrefix: 'You are Bob, a cowboy in Western times...',
// (Optional) Set a custom name for the user
// userLabel: 'User',
// (Optional) Set a custom name for ChatGPT ("ChatGPT" by default)
// chatGptLabel: 'Bob',
// A proxy string like "http://<ip>:<port>"
proxy: '',
// (Optional) Set to true to enable `console.debug()` logging
debug: false,
},
// Options for the Copilot client
copilotClient: {
// Necessary for some people in different countries, e.g. China (https://cn.Copilot.com)
host: '',
// The "_U" cookie value from Copilot.com
userToken: '',
// If the above doesn't work, provide all your cookies as a string instead
cookies: '',
// A proxy string like "http://<ip>:<port>"
proxy: '',
// (Optional) Set to true to enable `console.debug()` logging
debug: false,
},
chatGptBrowserClient: {
// (Optional) Support for a reverse proxy for the conversation endpoint (private API server).
// Warning: This will expose your access token to a third party. Consider the risks before using this.
reverseProxyUrl: 'https://bypass.churchless.tech/api/conversation',
// Access token from https://chat.openai.com/api/auth/session
accessToken: '',
// Cookies from chat.openai.com (likely not required if using reverse proxy server).
cookies: '',
// A proxy string like "http://<ip>:<port>"
proxy: '',
// (Optional) Set to true to enable `console.debug()` logging
debug: false,
},
// Options for the API server
apiOptions: {
port: process.env.API_PORT || 3000,
host: process.env.API_HOST || 'localhost',
// (Optional) Set to true to enable `console.debug()` logging
debug: false,
// (Optional) Possible options: "chatgpt", "chatgpt-browser", "Copilot". (Default: "chatgpt")
clientToUse: 'chatgpt',
// (Optional) Generate titles for each conversation for clients that support it (only ChatGPTClient for now).
// This will be returned as a `title` property in the first response of the conversation.
generateTitles: false,
// (Optional) Set this to allow changing the client or client options in POST /conversation.
// To disable, set to `null`.
perMessageClientOptionsWhitelist: {
// The ability to switch clients using `clientOptions.clientToUse` will be disabled if `validClientsToUse` is not set.
// To allow switching clients per message, you must set `validClientsToUse` to a non-empty array.
validClientsToUse: ['Copilot', 'chatgpt', 'chatgpt-browser'], // values from possible `clientToUse` options above
// The Object key, e.g. "chatgpt", is a value from `validClientsToUse`.
// If not set, ALL options will be ALLOWED to be changed. For example, `Copilot` is not defined in `perMessageClientOptionsWhitelist` above,
// so all options for `copilotClient` will be allowed to be changed.
// If set, ONLY the options listed here will be allowed to be changed.
// In this example, each array element is a string representing a property in `chatGptClient` above.
chatgpt: [
'promptPrefix',
'userLabel',
'chatGptLabel',
// Setting `modelOptions.temperature` here will allow changing ONLY the temperature.
// Other options like `modelOptions.model` will not be allowed to be changed.
// If you want to allow changing all `modelOptions`, define `modelOptions` here instead of `modelOptions.temperature`.
'modelOptions.temperature',
],
},
},
// Options for the CLI app
cliOptions: {
// (Optional) Possible options: "chatgpt", "Copilot".
// clientToUse: 'Copilot',
},
};
Alternatively, you can install and run the package directly:
1. Clone this repository: git clone https://github.com/waylaidwanderer/node-chatgpt-api
2. Install dependencies with npm install (if not using Docker).
3. Rename settings.example.js to settings.js in the root directory and change the settings where required.
4. Start the server using npm start or npm run server (if not using Docker), or docker-compose up (requires Docker).

POST /conversation

Start or continue a conversation. Optional parameters are only necessary for conversations that span multiple requests.
| Field | Description |
|---|---|
| message | The message to be displayed to the user. |
| conversationId | (Optional) An ID for the conversation you want to continue. |
| jailbreakConversationId | (Optional, for CopilotClient only) Set to true to start a conversation in jailbreak mode. After that, this should be the ID for the jailbreak conversation (given in the response as a parameter also named jailbreakConversationId). |
| parentMessageId | (Optional, for ChatGPTClient, and CopilotClient in jailbreak mode) The ID of the parent message (i.e. response.messageId) when continuing a conversation. |
| conversationSignature | (Optional, for CopilotClient only) A signature for the conversation (given in the response as a parameter also named conversationSignature). Required when continuing a conversation unless in jailbreak mode. |
| clientId | (Optional, for CopilotClient only) The ID of the client. Required when continuing a conversation unless in jailbreak mode. |
| invocationId | (Optional, for CopilotClient only) The ID of the invocation. Required when continuing a conversation unless in jailbreak mode. |
| clientOptions | (Optional) An object containing options for the client. |
| clientOptions.clientToUse | (Optional) The client to use for this message. Possible values: chatgpt, chatgpt-browser, Copilot. |
| clientOptions.* | (Optional) Any valid options for the client. For example, for ChatGPTClient, you can set clientOptions.openaiApiKey to set an API key for this message only, or clientOptions.promptPrefix to give the AI custom instructions for this message only, etc. |
To configure which options can be changed per message (default: all), see the comments for perMessageClientOptionsWhitelist in settings.example.js. To allow changing clients, perMessageClientOptionsWhitelist.validClientsToUse must be set to a non-empty array as described in the example settings file.
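For example, with the whitelist shown in settings.example.js above, a request could override only the whitelisted chatgpt options (the values here are placeholders):

{
    "message": "Hello, how are you today?",
    "clientOptions": {
        "clientToUse": "chatgpt",
        "promptPrefix": "You are a helpful assistant.",
        "modelOptions": {
            "temperature": 0.7
        }
    }
}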
To start a conversation with ChatGPT, send a POST request to the server's /conversation endpoint with a JSON body containing parameters as described in Endpoints > POST /conversation above.
{
"message": "Hello, how are you today?",
"conversationId": "your-conversation-id (optional)",
"parentMessageId": "your-parent-message-id (optional, for `ChatGPTClient` only)",
"conversationSignature": "your-conversation-signature (optional, for `CopilotClient` only)",
"clientId": "your-client-id (optional, for `CopilotClient` only)",
"invocationId": "your-invocation-id (optional, for `CopilotClient` only)",
}
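For example, assuming the server is running locally on the default host and port from apiOptions (localhost:3000), the request can be sent from Node.js like this sketch (adjust the URL to your setup):

// Send a single message to the API server and read the reply.
const reply = await fetch('http://localhost:3000/conversation', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: 'Hello, how are you today?' }),
});
const result = await reply.json();
console.log(result.response);
// Pass result.conversationId (and result.messageId as parentMessageId for ChatGPTClient)
// back in the next request to continue the conversation.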
The server will return a JSON object containing ChatGPT's response:
// HTTP/1.1 200 OK
{
"response": "I'm doing well, thank you! How are you?",
"conversationId": "your-conversation-id",
"messageId": "response-message-id (for `ChatGPTClient` only)",
"conversationSignature": "your-conversation-signature (for `CopilotClient` only)",
"clientId": "your-client-id (for `CopilotClient` only)",
"invocationId": "your-invocation-id (for `CopilotClient` only - pass this new value back into subsequent requests as-is)",
"details": "an object containing the raw response from the client"
}
If the request is unsuccessful, the server will return a JSON object with an error message.
If the request object is missing a required property (e.g. message):
// HTTP/1.1 400 Bad Request
{
"error": "The message parameter is required."
}
If there was an error sending the message to ChatGPT:
// HTTP/1.1 503 Service Unavailable
{
"error": "There was an error communicating with ChatGPT."
}
You can set "stream": true in the request body to receive a stream of tokens as they are generated.
import { fetchEventSource } from '@waylaidwanderer/fetch-event-source'; // use `@microsoft/fetch-event-source` instead if in a browser environment
const opts = {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
"message": "Write a poem about cats.",
"conversationId": "your-conversation-id (optional)",
"parentMessageId": "your-parent-message-id (optional)",
"stream": true,
// Any other parameters per `Endpoints > POST /conversation` above
}),
};
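The opts object above is then passed to fetchEventSource together with the server URL. A minimal sketch, assuming the fork keeps the same fetchEventSource(url, options) signature as @microsoft/fetch-event-source and the server runs on localhost:3000 (see demos/use-api-server-streaming.js for the exact handling of each event):

let reply = '';
await fetchEventSource('http://localhost:3000/conversation', {
    ...opts,
    onmessage(message) {
        if (message.event === 'error') {
            // The server reports errors as an "error" event with a JSON payload.
            console.error(JSON.parse(message.data).error);
        } else if (message.event === 'result') {
            // Final result object, same shape as the non-streaming response.
            const result = JSON.parse(message.data);
            console.log(result.conversationId);
        } else if (message.data !== '[DONE]') {
            // Token messages arrive one at a time; concatenate them yourself.
            reply += message.data;
        }
    },
});
console.log(reply);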
See demos/use-api-server-streaming.js for an example of how to receive the response as it's generated. You will receive one token at a time, so you will need to concatenate them yourself.
Successful output:
{ data: '', event: '', id: '', retry: 3000 }
{ data: 'Hello', event: '', id: '', retry: undefined }
{ data: '!', event: '', id: '', retry: undefined }
{ data: ' How', event: '', id: '', retry: undefined }
{ data: ' can', event: '', id: '', retry: undefined }
{ data: ' I', event: '', id: '', retry: undefined }
{ data: ' help', event: '', id: '', retry: undefined }
{ data: ' you', event: '', id: '', retry: undefined }
{ data: ' today', event: '', id: '', retry: undefined }
{ data: '?', event: '', id: '', retry: undefined }
{ data: '<result JSON here, see Method 1>', event: 'result', id: '', retry: undefined }
{ data: '[DONE]', event: '', id: '', retry: undefined }
// Hello! How can I help you today?
Error output:
const message = {
data: '{"code":503,"error":"There was an error communicating with ChatGPT."}',
event: 'error',
id: '',
retry: undefined
};
if (message.event === 'error') {
console.error(JSON.parse(message.data).error); // There was an error communicating with ChatGPT.
}
You will need to install fetch-event-source first and use the POST method as shown above.

Follow the same setup instructions for the API server, creating settings.js.
If installed globally:
chatgpt-cli
If installed locally:
npm run cli
ChatGPT's responses are automatically copied to your clipboard, so you can paste them into other applications.
As shown in the examples above, you can set reverseProxyUrl in ChatGPTClient's options to use a reverse proxy server instead of the official ChatGPT API.
Previously, this was the only way to use the ChatGPT underlying models, but that method has since been patched and the instructions below are no longer relevant. You may still want to use a reverse proxy for other reasons, however.
Currently, reverse proxy servers are still used for performing a Cloudflare bypass for ChatGPTBrowserClient.
How does it work? Simple answer: ChatGPTClient > reverse proxy > OpenAI server. The reverse proxy server does some magic under the hood to access the underlying model directly via OpenAI's server and then returns the response to ChatGPTClient.
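For reference, once the steps below are done, the relevant part of settings.js might look like this sketch (the token value is a placeholder for your ChatGPT access token):

chatGptClient: {
    // One of the reverse proxy servers listed below.
    reverseProxyUrl: 'https://chatgpt.hato.ai/completions',
    // The ChatGPT access token from https://chat.openai.com/api/auth/session, not an OpenAI API key.
    openaiApiKey: 'YOUR_CHATGPT_ACCESS_TOKEN',
    modelOptions: {
        // Must be a ChatGPT model name your account has access to.
        model: 'text-davinci-002-render-sha',
    },
},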
Instructions are provided below.

Using https://chatgpt.hato.ai/completions:
1. Get your ChatGPT access token from https://chat.openai.com/api/auth/session (the accessToken property).
2. Set reverseProxyUrl to https://chatgpt.hato.ai/completions in settings.js > chatGptClient or ChatGPTClient's options.
3. Set the "OpenAI API key" (settings.chatGptClient.openaiApiKey) to the ChatGPT access token you got in step 1.
4. Set the model to text-davinci-002-render, text-davinci-002-render-paid, or text-davinci-002-render-sha depending on which ChatGPT models your account has access to. The model must be a ChatGPT model name, not the underlying model name, and you cannot use a model that your account does not have access to.
5. If you run into issues, use stream: true (API) or onProgress (client) as a workaround.

Using https://chatgpt.pawan.krd/api/completions:
1. Get your ChatGPT access token from https://chat.openai.com/api/auth/session (the accessToken property).
2. Set reverseProxyUrl to https://chatgpt.pawan.krd/api/completions in settings.js > chatGptClient or ChatGPTClient's options.
3. Set the "OpenAI API key" (settings.chatGptClient.openaiApiKey) to the ChatGPT access token you got in step 1.
4. Set the model to text-davinci-002-render, text-davinci-002-render-paid, or text-davinci-002-render-sha depending on which ChatGPT models your account has access to. The model must be a ChatGPT model name, not the underlying model name, and you cannot use a model that your account does not have access to.
5. If you run into issues, use stream: true (API) or onProgress (client) as a workaround.

🚀 A list of awesome projects using @waylaidwanderer/chatgpt-api:
Add yours to the list by editing this README and creating a pull request!
A web client for this project is also available at waylaidwanderer/PandoraAI.
ChatGPTClient: Since gpt-3.5-turbo is ChatGPT's underlying model, I had to do my best to replicate the way the official ChatGPT website uses it. This means my implementation or the underlying model may not behave exactly the same in some ways.
If you'd like to contribute to this project, please create a pull request with a detailed description of your changes.
This project is licensed under the MIT License.