
gifted-gpt

Gifted-Gpt is a free GPT-4 npm package: no API key or auth is needed, and all models are included, as well as image generators.

This package can be used in both TypeScript and JavaScript (CommonJS or ES Module) environments.

🛠️ Installation

Using npm:

npm install gifted-gpt

Using yarn:

yarn add gifted-gpt


🎯 Examples

📤 Chat completion

With the chatCompletion function, you can obtain a textual response to a conversation with some context, using providers and models designed for this task. You can also manipulate the answer before converting it to a stream, or force the AI toward a particular kind of answer by allowing several retries.

⚙️ Basic usage

Simple fetch

It captures the messages and their context, and the chosen provider responds with a string.

const { GiftedGpt } = require("gifted-gpt");
const gpt4 = new GiftedGpt();
const messages = [
    { role: "user", content: "Hi, what's up?"}
];
gpt4.chatCompletion(messages).then(console.log);
// Hello! I'm here to help you with anything you need. What can I do for you today? 😊

Note: The conversation needs to include at least one message with the role user to provide a proper answer.

Give your instructions

You can provide your own instructions for the conversation before it starts using the system role.

const { GiftedGpt } = require("gifted-gpt");
const gpt4 = new GiftedGpt();
const messages = [
    { role: "system", content: "You're an expert bot in poetry."},
    { role: "user", content: "Hi, write me something."}
];
gpt4.chatCompletion(messages).then(console.log);
/*
Sure, I can write you a poem. Here is a short one: 
The Wind:
The wind is a curious thing,
It can make you dance and sing,
It can make you feel alive,
And help you thrive.
...
*/

Follow up on the conversation context

const { GiftedGpt } = require("gifted-gpt");
const gpt4 = new GiftedGpt();
const messages = [
    { role: "system", content: "You're a math teacher."},
    { role: "user", content: "How much is 2 plus 2?" },
    { role: "assistant", content: "2 plus 2 equals 4." },
    { role: "user", content: "You're really good at math!" },
    { role: "assistant", content: "Thank you! I'm glad I could help you with your math question." },
    { role: "user", content: "What was the first question I asked you?" }
];

gpt4.chatCompletion(messages).then(console.log);
// The first question you asked me was "How much is 2 plus 2?".

Note: AI responses use the assistant role and an appropriate conversation structure alternates between the user and the assistant, as seen in the previous example.

โœ๏ธ RESUME: Conversation roles

| Role | Description |
| --- | --- |
| system | Used for providing instructions and context prior to the conversation. |
| user | Used to identify user messages. |
| assistant | Used to identify AI messages. |
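As a quick illustration of the roles above, here is a hypothetical helper (not part of the package) that checks a conversation uses only valid roles and contains at least one user message, which is required for a proper answer:

```javascript
// Illustrative only: sanity-check a conversation before passing it to
// chatCompletion. A conversation is usable when every message has a known
// role and at least one message comes from the user.
function isValidConversation(messages) {
    const roles = new Set(["system", "user", "assistant"]);
    return messages.every((m) => roles.has(m.role)) &&
        messages.some((m) => m.role === "user");
}

console.log(isValidConversation([
    { role: "system", content: "You're a math teacher." },
    { role: "user", content: "How much is 2 plus 2?" }
])); // true
console.log(isValidConversation([
    { role: "system", content: "Be brief." }
])); // false: no user message
```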

🔩 Add configurable options

Basic options

You can optionally select a provider, a model, debug mode, and a proxy URL.

const { GiftedGpt } = require("gifted-gpt");
const gpt4 = new GiftedGpt();
const messages = [
    { role: "user", content: "Hi, what's up?"}
];
const options = {
    provider: gpt4.providers.GPT,
    model: "gpt-3.5-turbo",
    debug: true,
    proxy: ""
};

(async() => {
    const text = await gpt4.chatCompletion(messages, options);
	console.log(text);
})();
/*
[provider] » √  success   Provider found: GPT
[model] » √  success   Using the model: gpt-3.5-turbo
[provider] » √  success   Data was successfully fetched from the GPT provider

In the realm of words, where verses dance and rhyme,
I shall craft a poem, a moment frozen in time.
With ink as my brush, I paint a vivid scene,
Where dreams and emotions intertwine, serene.
Through lines and stanzas, I'll weave a tale,
Of love, of loss, of hope that will never fail.
So close your eyes, and let your heart unfurl,
As I conjure a poem, a gift for your soul to swirl. 💕🌹
*/

Note: You can specify the provider, model, debug, and proxy options according to your needs; they are entirely optional.

Advanced options

You can force an expected response using retry, and manipulate the final response using output.

const { GiftedGpt } = require("gifted-gpt");
const gpt4 = new GiftedGpt();
const messages = [
    { role: "system", content: "You're an expert bot in poetry."},
    { role: "user", content: "Let's see, write a single paragraph-long poem for me." },
];
const options = {
    model: "gpt-4",
    debug: true,
    retry: {
        times: 3,
        condition: (text) => {
            const words = text.split(" ");
            return words.length > 10;
        }
    },
    output: (text) => {
        return text + " ๐Ÿ’•๐ŸŒน";
    }
};

(async() => {
    const text = await gpt4.chatCompletion(messages, options);	
    console.log(text);
})();
/*
[provider] » √  success   Provider found: GPT
[model] » √  success   Using the model: gpt-4
[fetch] » √  success   [1/3] - Retry #1
[output] » √  success   Output function runtime finalized.

I'll try to create that.
Is what you asked me to say
I hope it brings you joy
And your heart it does employ 💕🌹
*/

Note: retry executes the fetch operation up to N times in a row, stopping early when the condition function returns true. The output function only edits the final response.
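Conceptually, the retry flow works like this self-contained sketch (illustrative only, not the package's implementation; a mock fetch stands in for the real provider call):

```javascript
// Sketch of the retry semantics: call fetchText up to `times` times,
// stopping early once `condition(text)` returns true.
async function retryUntil(fetchText, { times, condition }) {
    let text = "";
    for (let attempt = 1; attempt <= times; attempt++) {
        text = await fetchText(attempt);
        if (condition(text)) break; // condition met: stop retrying
    }
    return text; // last response, whether or not the condition passed
}

// Mock provider: returns a long enough answer only on the second attempt.
const mockFetch = async (attempt) =>
    attempt < 2 ? "too short" : "a much longer answer with more than ten words in it here";

retryUntil(mockFetch, {
    times: 3,
    condition: (text) => text.split(" ").length > 10,
}).then(console.log); // logs the second (longer) response
```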

What is the difference between basic options and advanced options?

If you use the retry option, the output option, or both, keep in mind that they involve extra processing before the final response is delivered. The impact on performance and response time depends on the functions you supply.

๐Ÿ“ Streaming

When using the stream option, the chatCompletion function will return an object with the streamable data and the name of the provider.

Basic usage

const { GiftedGpt } = require("gifted-gpt");
const gpt4 = new GiftedGpt();
const messages = [
    { role: "system", content: "You're an expert bot in poetry."},
    { role: "user", content: "Let's see, write a single paragraph-long poem for me." },
];
const options = {
    provider: gpt4.providers.ChatBase,
    stream: true
};

(async() => {
    const response = await gpt4.chatCompletion(messages, options);	
    console.log(response);
})();
/*
{ 
    data: <ref *1> BrotliDecompress { ... }, 
    name: "ChatBase" 
}
*/

So, how should you handle the streamable data?

I highly recommend using the integrated chunkProcessor function so that you don't have to format each chunk into a single string response yourself.

const { GiftedGpt, chunkProcessor } = require("gifted-gpt");
const gpt4 = new GiftedGpt();
const messages = [
    { role: "system", content: "You're an expert bot in poetry."},
    { role: "user", content: "Let's see, write a single paragraph-long poem for me." },
];
const options = {
    provider: gpt4.providers.ChatBase,
    stream: true
};

(async() => {
    const response = await gpt4.chatCompletion(messages, options);
    let text = "";
    for await (const chunk of chunkProcessor(response)) {
        text += chunk;
    }
    console.log(text);
})();
/* 
I'll try to create that.
To keep your worries at bay.
A smile on your face,
And a heart full of grace.
*/

Stream on postprocessing

When using the retry option, the output option, or both, you can choose the size of each streamed chunk.

const { GiftedGpt, chunkProcessor } = require("gifted-gpt");
const gpt4 = new GiftedGpt();
const messages = [
    { role: "system", content: "You're an expert bot in poetry."},
    { role: "user", content: "Let's see, write a single paragraph-long poem for me." },
];
const options = {
    provider: gpt4.providers.Bing,
    stream: true,
    chunkSize: 15,
    retry: {
        times: 3,
        condition: (text) => {
            const words = text.split(" ");
            return words.length > 10;
        }
    },
    output: (text) => {
        return text + " ๐Ÿ’•๐ŸŒน";
    }
};

(async() => {
    const response = await gpt4.chatCompletion(messages, options);
    for await (const chunk of chunkProcessor(response)) {
        console.log(chunk);    
    }
})();
/*
I'll try to cre
ate that. 
  Is what you a
sked me to say
n    I hope it
brings you joy
n    And makes
your heart feel
 gay 💕🌹
*/

Note: the chunkSize option takes effect only when the stream option is enabled together with the retry and/or output options.
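Conceptually, chunkSize just slices the postprocessed response into fixed-size pieces before streaming them. A standalone sketch (illustrative, not the package's code):

```javascript
// Split a finished response string into fixed-size chunks, mimicking what
// the chunkSize option does to a postprocessed reply.
function* toChunks(text, chunkSize) {
    for (let i = 0; i < text.length; i += chunkSize) {
        yield text.slice(i, i + chunkSize);
    }
}

const pieces = [...toChunks("I'll try to create that.", 15)];
console.log(pieces); // [ "I'll try to cre", "ate that." ]
```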

โœ๏ธ RESUME: Configurable options

| Option | Type | Description |
| --- | --- | --- |
| provider | gpt4.providers.any | Choose the provider to use for chat completions. |
| model | string | Choose the model to use, for providers that support it. |
| debug | boolean | Enable or disable debug mode. |
| proxy | string | Specify a proxy as a URL string in host:port format. |
| retry | object | Execute the fetch operation up to N times in a row, until it finishes or the callback function returns true. |
| retry.times | number | The maximum number of times the fetch operation will execute. |
| retry.condition | function: boolean | Callback that receives the text of each fetch attempt; return true to stop retrying. |
| output | function: string | Callback that receives the final text response so you can edit it. Runs after the retry fetch operations and must return a string. |
| conversationStyle | string | Choose the conversation style to use. Only supported by the Bing provider. |
| markdown | boolean | Determine whether the response should be in markdown format. |
| stream | boolean | Determine whether the data should be streamed in parts. |
| chunkSize | number | The size of the streamed chunks. Only works when stream is true and retry or output is used. |

🚀 Chat completion providers

| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status |
| --- | --- | --- | --- | --- | --- |
| GPT.ai | gpt4.providers.GPT | ✔️ | ✔️ | ❌ | Active |
| chatbase.co | gpt4.providers.ChatBase | ✔️ | ❌ | ✔️ | Inactive |
| bing.com | gpt4.providers.Bing | ❌ | ✔️ | ✔️ | Active |

📚 Chat completion models

| Model | Providers that support it |
| --- | --- |
| gpt-4 | gpt4.providers.GPT, gpt4.providers.Bing |
| gpt-4-0613 | gpt4.providers.GPT |
| gpt-4-32k | gpt4.providers.GPT |
| gpt-4-0314 | gpt4.providers.GPT |
| gpt-4-32k-0314 | gpt4.providers.GPT |
| gpt-3.5-turbo | gpt4.providers.GPT, gpt4.providers.ChatBase |
| gpt-3.5-turbo-16k | gpt4.providers.GPT |
| gpt-3.5-turbo-0613 | gpt4.providers.GPT |
| gpt-3.5-turbo-16k-0613 | gpt4.providers.GPT |
| gpt-3.5-turbo-0301 | gpt4.providers.GPT |
| text-davinci-003 | gpt4.providers.GPT |
| text-davinci-002 | gpt4.providers.GPT |
| code-davinci-002 | gpt4.providers.GPT |
| gpt-3 | gpt4.providers.GPT |
| text-curie-001 | gpt4.providers.GPT |
| text-babbage-001 | gpt4.providers.GPT |
| text-ada-001 | gpt4.providers.GPT |
| davinci | gpt4.providers.GPT |
| curie | gpt4.providers.GPT |
| babbage | gpt4.providers.GPT |
| ada | gpt4.providers.GPT |
| babbage-002 | gpt4.providers.GPT |
| davinci-002 | gpt4.providers.GPT |



📡 Translation

With the translation function, you can convert a text to a target language using AI.

Usage

const { GiftedGpt } = require("gifted-gpt");

const gpt4 = new GiftedGpt();
const options = {
    text: "Hello World",
    source: "en",
    target: "ko"
};

(async() => {
    const text = await gpt4.translation(options);
    console.log(text);
})();
/* 
{
  source: { code: 'en', lang: 'English' },
  target: { code: 'ko', lang: '한국어' },
  translation: { parts: [ [Object] ], result: '안녕하세요 세계' }
}
*/

Note: for now you need to identify the source language code and include it yourself; in the future this will be handled with AI, so you won't need to specify it.

โœ๏ธ RESUME: Translation options

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| provider | gpt4.providers.any | ❌ | Choose the provider to use for translations. |
| debug | boolean | ❌ | Enable or disable debug mode. |
| text | string | ✔️ | The text to translate. |
| source | string | ✔️ | The source text language. |
| target | string | ✔️ | The target language to translate into. |
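Given the response shape shown in the usage example above, the translated string is read from translation.result. A sketch with a hard-coded response object standing in for a real call:

```javascript
// Shape returned by translation(), per the example output above.
const response = {
    source: { code: "en", lang: "English" },
    target: { code: "ko", lang: "한국어" },
    translation: { parts: [{}], result: "안녕하세요 세계" },
};

// The final translated text lives at translation.result.
const translated = response.translation.result;
console.log(translated); // 안녕하세요 세계
```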

🌐 Languages available

| Provider | Status | Languages supported |
| --- | --- | --- |
| gpt4.providers.TranslateAI | Active | https://rentry.co/3qi3wqnr |



📷 Image generation (BETA)

With the imageGeneration function, you can generate images from a text prompt, with optional parameters that provide millions of combinations to stylize each image.


Cartoon style example

const { GiftedGpt } = require("gifted-gpt");
const fs = require("fs");

const gpt4 = new GiftedGpt();
(async() => {
    const base64Image = await gpt4.imageGeneration("A squirrel", { 
        debug: true,
        provider: gpt4.providers.Emi
    });	
    fs.writeFile('image.jpg', base64Image, { encoding: 'base64' }, function(err) {
      if (err) return console.error('Error writing the file: ', err);
      console.log('The image has been successfully saved as image.jpg.');
    });
})();

A squirrel in cartoon style from the Emi provider


Paint style example

const { GiftedGpt } = require("gifted-gpt");
const fs = require("fs");

const gpt4 = new GiftedGpt();
(async() => {
    const base64Image = await gpt4.imageGeneration("A village", { 
        debug: true,
        provider: gpt4.providers.Pixart,
        providerOptions: {
            height: 512,
            width: 512,
            samplingMethod: "SA-Solver"
        }
    });	
    fs.writeFile('image.jpg', base64Image, { encoding: 'base64' }, function(err) {
      if (err) return console.error('Error writing the file: ', err);
      console.log('The image has been successfully saved as image.jpg.');
    });
})();

A village painting from the Pixart provider


Realistic style example

const { GiftedGpt } = require("gifted-gpt");
const fs = require("fs");

const gpt4 = new GiftedGpt();
(async() => {
    const base64Image = await gpt4.imageGeneration("A colorful photo of a young lady", { 
        debug: true,
        provider: gpt4.providers.Prodia,
        providerOptions: {
            model: "ICantBelieveItsNotPhotography_seco.safetensors [4e7a3dfd]",
            samplingSteps: 15,
            cfgScale: 30
        }
    });	
    fs.writeFile('image.jpg', base64Image, { encoding: 'base64' }, function(err) {
      if (err) return console.error('Error writing the file: ', err);
      console.log('The image has been successfully saved as image.jpg.');
    });
})();

A photo of a young lady in realistic style from the Prodia provider

โœ๏ธ RESUME: Image generation options

| Option | Type | Description |
| --- | --- | --- |
| debug | boolean | Enable or disable debug mode. |
| provider | gpt4.providers.any | Choose the provider to use for image generations. |
| providerOptions | object | Provider-specific generation options; see the lists below. |

Note: The value of providerOptions should be an object containing instructions for image generation, such as the base model, image style, sampling methods, among others. Not all providers support the same instructions, so refer to the following list.

โœ๏ธ RESUME: Image generation provider options

| Option | Type | Description | Limits | Providers that support it |
| --- | --- | --- | --- | --- |
| model | string | Choose a model as a base for generation. | 🤖 Check lists | Prodia, ProdiaStableDiffusion, ProdiaStableDiffusionXL |
| negativePrompt | string | Tell the provider what not to do. | None | Pixart, PixartLCM, Prodia, ProdiaStableDiffusion, ProdiaStableDiffusionXL |
| imageStyle | string | Specify the drawing style. | 🎨 Check lists | Pixart, PixartLCM |
| height | number | Specify the image height. | 🧮 Check lists | Pixart, PixartLCM, ProdiaStableDiffusion, ProdiaStableDiffusionXL |
| width | number | Specify the image width. | 🧮 Check lists | Pixart, PixartLCM, ProdiaStableDiffusion, ProdiaStableDiffusionXL |
| samplingSteps | number | Specify the number of iterations; a higher number results in more quality. | 🧮 Check lists | Prodia, ProdiaStableDiffusion, ProdiaStableDiffusionXL |
| samplingMethod | string | Choose a sampling method to control the diversity, quality, and coherence of images. | ✒️ Check lists | Pixart, Prodia, ProdiaStableDiffusion, ProdiaStableDiffusionXL |
| cfgScale | number | Specify the Classifier-Free Guidance scale, which controls how closely the generated image adheres to the given text prompt. | 🧮 Check lists | Pixart, Prodia, ProdiaStableDiffusion, ProdiaStableDiffusionXL |
| dpmInferenceSteps | number | Specify the DPM inference steps for refining object detection accuracy. | 🧮 Check lists | Pixart |
| saGuidanceScale | number | Specify the Style-Aware guidance scale for fine-tuning style and composition. | 🧮 Check lists | Pixart, StableDiffusionPlus |
| saInferenceSteps | number | Specify the Style-Aware inference steps for refining or adjusting the generated image during style transfer or style-based image synthesis. | 🧮 Check lists | Pixart |
| lcmInferenceSteps | number | Specify the LCM inference steps for enhancing image generation by leveraging latent consistency models. | 🧮 Check lists | PixartLCM |
| useGpu | boolean | Determine whether to use the GPU for generation. | None | Dalle2 |
| promptImprovement | boolean | Determine whether the prompt should be enhanced using AI. | None | Dalle2 |

🤖 Image generation models

| Provider | Models supported |
| --- | --- |
| Prodia | https://rentry.co/b6i53fnm |
| ProdiaStableDiffusion | https://rentry.co/pfwmx6y5 |
| ProdiaStableDiffusionXL | https://rentry.co/wfhsk8sv |

🎨 Image generation styles

| Provider | Image styles supported |
| --- | --- |
| Pixart | https://rentry.co/hcggg36n |
| PixartLCM | https://rentry.co/gzxa3wv2 |

✒️ Image generation sampling methods

| Provider | Sampling methods supported |
| --- | --- |
| Pixart | https://rentry.co/x7i8gko9 |
| Prodia | https://rentry.co/8bwtqeh9 |
| ProdiaStableDiffusion | https://rentry.co/iyrkxmzr |
| ProdiaStableDiffusionXL | https://rentry.co/p2ad6y3f |

🧮 Number type options

Pixart

| Option | Default | Min | Max |
| --- | --- | --- | --- |
| height | 1024 | 256 | 2048 |
| width | 1024 | 256 | 2048 |
| dpmInferenceSteps | 14 | 5 | 40 |
| saGuidanceScale | 3 | 1 | 10 |
| saInferenceSteps | 25 | 10 | 40 |
| cfgScale | 4.5 | 1 | 10 |

PixartLCM

| Option | Default | Min | Max |
| --- | --- | --- | --- |
| height | 1024 | 256 | 2048 |
| width | 1024 | 256 | 2048 |
| lcmInferenceSteps | 9 | 1 | 30 |

Prodia

| Option | Default | Min | Max |
| --- | --- | --- | --- |
| samplingSteps | 7 | 0 | 20 |
| cfgScale | 25 | 1 | 30 |

ProdiaStableDiffusion

| Option | Default | Min | Max |
| --- | --- | --- | --- |
| height | 512 | 50 | 1024 |
| width | 512 | 50 | 1024 |
| samplingSteps | 25 | 1 | 30 |
| cfgScale | 7 | 1 | 20 |

ProdiaStableDiffusionXL

| Option | Default | Min | Max |
| --- | --- | --- | --- |
| height | 1024 | 512 | 1536 |
| width | 1024 | 512 | 1536 |
| samplingSteps | 25 | 1 | 30 |
| cfgScale | 7 | 1 | 20 |

StableDiffusionPlus

| Option | Default | Min | Max |
| --- | --- | --- | --- |
| saGuidanceScale | 9 | 0 | 50 |
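When building providerOptions, it can help to keep numeric values inside the provider's documented range, e.g. a Pixart height must stay between 256 and 2048 per the table above. A hypothetical clamp helper (not part of the package):

```javascript
// Clamp a numeric provider option into its documented [min, max] range.
function clampOption(value, min, max) {
    return Math.min(Math.max(value, min), max);
}

// Pixart height limits from the table above: min 256, max 2048.
console.log(clampOption(4096, 256, 2048)); // 2048
console.log(clampOption(512, 256, 2048));  // 512
```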

🖼️ Image generation providers

| Provider | Status | Default style |
| --- | --- | --- |
| Pixart | Inactive | Realistic with a touch of exaggeration, characterized by detailed textures, vibrant colors, and enhanced features. |
| PixartLCM | Inactive | Exhibits a detailed and vibrant use of color, creating a visually rich and textured representation. It's a blend of realism with a touch of artistic interpretation. |
| Emi | Active | Characterized by a colorful and whimsical animation, reminiscent of a children's storybook illustration. |
| Dalle | Active | Realistic, capturing intricate details and textures to depict a lifelike representation. |
| DalleMini | Active | Leans towards the abstract, with a digital artistry touch that emphasizes detailed textures and vibrant colors. It captures the essence of the subject through shape, color, and form rather than attempting to represent it accurately. |
| Dalle2 | Inactive | Characterized by its semi-realism, with a focus on fine details, vivid colors, and natural lighting. |
| Prodia | Active | Can be described as "photorealistic": artwork that is extremely detailed and lifelike, closely resembling a high-resolution photograph. |
| ProdiaStableDiffusion | Inactive | Photorealistic, capturing intricate details and textures to mimic the appearance of a real-life scene. |
| ProdiaStableDiffusionXL | Inactive | Semi-realistic, meticulously incorporating fine details and textures to emulate a real-world scenario. |
| StableDiffusionLite | Active | Can be described as folk art: a naive perspective, lacking realistic proportions, and evoking simplicity. |
| StableDiffusionPlus | Active | Impressionism, characterized by visible brushstrokes, open composition, emphasis on light in its changing qualities, and ordinary subject matter. |

Note

It's important to review the possibilities each provider offers within its limitations in order to access more detailed creations. That said, you might at some point combine options that are not supported by the provider you're using. In such cases the image generation won't stop; instead (as long as you're using the debug option), you'll receive a warning alerting you to the error.

Package last updated on 28 Aug 2024