Malicious npm Package Targets Solana Developers and Hijacks Funds
A malicious npm package targets Solana developers, rerouting funds in 2% of transactions to a hardcoded address.
AI-driven tool for translation, summarization, text moderation, and sensitive image detection.
ai-driven is a versatile module that integrates two cutting-edge AI platforms: Claude AI and OpenAI's GPT. This powerful combination offers a wide array of natural language processing and computer vision capabilities.
Key features include translation, summarization, text moderation, and sensitive image detection. A quick example:

```typescript
import { Assistant } from 'ai-driven';

const assistant = new Assistant({ apiVendor: 'OpenAI', apiKey: 'your_api_key_here' });
const translatedText = await assistant.translateText('Hello, world!', 'it');
console.log(translatedText); // => Ciao, mondo!
```
You can find more usage examples here.
ai-driven offers easy-to-use methods (see the API Methods list) for a wide range of tasks, including:
- Text Processing
- Image Analysis
- Audio Processing
- Free-form ask [more]
This versatile module simplifies complex AI tasks, making it easier for developers to integrate advanced AI capabilities into their applications.
To install the ai-driven module, run the following command:

```shell
npm i -S ai-driven
```
You can configure the assistant in two ways:
1. Provide the configuration when creating the assistant:

```typescript
const assistant = new Assistant({
  apiKey: 'your_api_key_here',
  apiVendor: 'Claude', // 'OpenAI' or 'Claude'
  apiUrl: 'https://api.anthropic.com/v1/messages', // optional
  apiModel: 'claude-3-haiku-20240307' // optional
});
```
2. Create a `.env` file in your project's root directory and add the following to the `.env` file:

2.1. For OpenAI:

```
OPENAI_API_KEY=your_OpenAI_api_key_here
OPENAI_API_URL=https://api.openai.com/v1/chat/completions
OPENAI_API_MODEL=gpt-3.5-turbo
```
2.2. For Claude:

```
CLAUDE_API_KEY=your_Claude_api_key_here
CLAUDE_API_URL=https://api.anthropic.com/v1/messages
CLAUDE_API_MODEL=claude-3-haiku-20240307
```
The assistant will automatically use these environment variables if no configuration is provided during initialization.
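A minimal sketch of what such fallback resolution typically looks like; the `resolveConfig` helper, its signature, and its vendor-detection rule are illustrative assumptions, not the module's actual internals:

```typescript
type AssistantConfig = {
  apiVendor: 'OpenAI' | 'Claude';
  apiKey: string;
  apiUrl?: string;
  apiModel?: string;
};

// Illustrative only, not the module's real implementation. Explicit options
// win; otherwise fall back to the environment variables documented above
// (in a real Node.js program, pass process.env as `env`).
function resolveConfig(
  env: Record<string, string | undefined>,
  options: Partial<AssistantConfig> = {},
): AssistantConfig {
  // Hypothetical rule: prefer Claude when a Claude key is present.
  const apiVendor = options.apiVendor ?? (env.CLAUDE_API_KEY ? 'Claude' : 'OpenAI');
  const prefix = apiVendor === 'Claude' ? 'CLAUDE' : 'OPENAI';
  const apiKey = options.apiKey ?? env[`${prefix}_API_KEY`];
  if (!apiKey) throw new Error(`No ${apiVendor} API key provided`);
  return {
    apiVendor,
    apiKey,
    apiUrl: options.apiUrl ?? env[`${prefix}_API_URL`],
    apiModel: options.apiModel ?? env[`${prefix}_API_MODEL`],
  };
}
```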
Here's a basic example of how to use the ai-driven module:

```typescript
import { Assistant } from 'ai-driven';
import fs from 'fs/promises';

async function main() {
  const assistant = new Assistant({ apiKey: 'your_api_key_here' });

  // Translate text
  const translatedText = await assistant.translateText('Hello, world!', 'it');
  console.log('Translated text:', translatedText);

  // Bulk translate text
  const bulkTranslations = await assistant.translateBulkText('Hello, world!', ['it', 'fr', 'es']);
  console.log('Bulk translations:', bulkTranslations);

  // Check for offensive language
  const offensiveLevel = await assistant.checkForOffensiveLanguage('You are stupid!');
  console.log('Offensive level:', offensiveLevel);

  // Check for profanity
  const profanityLevel = await assistant.checkForProfanity('Damn it!');
  console.log('Profanity level:', profanityLevel);

  // Check an image for violence
  const imageBuffer = await fs.readFile('path/to/your/image.jpg');
  const violenceLevel = await assistant.checkImageForViolence(imageBuffer);
  console.log('Violence level in image:', violenceLevel);

  // Check an image for pornography
  const pornographyLevel = await assistant.checkImageForPornography(imageBuffer);
  console.log('Pornography level in image:', pornographyLevel);
}

main().catch(console.error);
```
The cost of using OpenAI's models varies depending on the model and usage. As of now, pricing for the GPT-4o model is as follows:
- $0.005 per 1,000 tokens for input
- $0.015 per 1,000 tokens for output

For more detailed and up-to-date pricing, please refer to the OpenAI Pricing page.
If you use the text examples from example.ts and consume 739 tokens for input and 384 tokens for output, the cost would be approximately $0.009.
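The arithmetic behind that estimate is simple; here is a sketch using the GPT-4o rates quoted above (the `estimateOpenAICost` helper is illustrative, and the rates may change):

```typescript
// GPT-4o rates quoted above: $0.005 per 1K input tokens, $0.015 per 1K output tokens.
const OPENAI_INPUT_PER_1K = 0.005;
const OPENAI_OUTPUT_PER_1K = 0.015;

// Estimate a request's cost in dollars from its token counts.
function estimateOpenAICost(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1000) * OPENAI_INPUT_PER_1K
       + (outputTokens / 1000) * OPENAI_OUTPUT_PER_1K;
}

// The example.ts run quoted above: 739 input tokens, 384 output tokens.
console.log(estimateOpenAICost(739, 384)); // about $0.009
```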
However, the cost will increase significantly if you use image and audio processing models, as the pricing for these services depends on the size and complexity of the files you're working with—larger files incur higher costs.
For the most up-to-date information on rate limits, please refer to the OpenAI Rate Limits page.
To use this library, you'll need an API key. You can obtain one from the OpenAI console: https://platform.openai.com/account/api-keys
GPT Models:
gpt-4
gpt-4-turbo
gpt-4-vision-preview
gpt-4o
gpt-4-32k
gpt-3.5-turbo (default)
gpt-3.5-turbo-16k
gpt-3.5-turbo-instruct
DALL-E Models:
dall-e-3
dall-e-2
Whisper Models:
whisper
Embedding Models:
text-embedding-3-large
text-embedding-3-small
text-embedding-ada-002
Text-to-Speech Models:
tts-1
tts-1-hd
More about models: https://platform.openai.com/docs/models
Currently, the most affordable model costs $0.25 per million tokens (MTok) for input and $1.25 per MTok for output. More details here.
If you only use the text examples from example.ts, you'll consume 739 tokens for input and 384 tokens for output, resulting in a cost of approximately $0.0007.
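The same back-of-the-envelope arithmetic applies here, just with per-MTok rates (the `estimateClaudeCost` helper is illustrative, and the rates may change):

```typescript
// Rates quoted above for the most affordable Claude model:
// $0.25 per MTok input, $1.25 per MTok output.
const MTOK = 1_000_000;

function estimateClaudeCost(inputTokens: number, outputTokens: number): number {
  return (inputTokens / MTOK) * 0.25 + (outputTokens / MTOK) * 1.25;
}

// 739 input + 384 output tokens, as in the example.ts run above:
console.log(estimateClaudeCost(739, 384)); // about $0.0007
```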
However, this cost will increase significantly if you use image and audio processing, as it depends entirely on the size of the files you're working with—larger files incur higher costs.
For the most up-to-date information on rate limits, please refer to the Rate Limits page.
To use this library, you'll need an API key. You can obtain one from the Anthropic console: https://console.anthropic.com/settings/keys
More about models: https://docs.anthropic.com/en/docs/about-claude/models#model-names
ai-driven leverages the power of Claude AI and OpenAI's GPT models to perform various tasks such as:
Text translation: Convert text from one language to another while preserving meaning and context - translateText(text: string, lang?: string, context?: string ): Promise<string>
Bulk text translation: Convert text from one language into multiple languages while preserving meaning and context; returns a JSON object - translateBulkText(text: string, lang: string[], context?: string): Promise<string>
Language detection: Automatically identify the language of a given text; returns a two-letter ISO 639-1 language code - detectLanguage(text: string): Promise<string>
Grammar and spelling correction: Identify and correct grammatical errors and spelling mistakes in text - correctText(text: string): Promise<string>
Text Summarization: Generate concise summaries of longer text documents - summarizeText(text: string, maxWords?: number): Promise<string>
Text Generation: Create coherent and contextually relevant text based on given prompts - generateText(prompt: string, maxWords?: number): Promise<string>
Text paraphrasing: Rewrite text to convey the same meaning using different words and sentence structures - paraphraseText(text: string): Promise<string>
Text classification: Categorize text into predefined classes or topics - classifyText(text: string, categories: string[]): Promise<string>
Keyword extraction: Identify and extract the most important or relevant words or phrases from a text - extractKeywords(text: string, count?: number): Promise<string[]>
Named Entity Recognition (NER): Extract entities like names, dates, locations, and organizations from text - extractEntities(text: string): Promise<Record<string, string[]>>
Sentiment Analysis: Detect the sentiment (positive, negative, neutral) in text data - analyzeSentiment(text: string): Promise<string>
Offensive language detection: Identify and flag inappropriate, offensive, or harmful language in text - checkForOffensiveLanguage(text: string): Promise<number>
Profanity checking: Detect and filter out profane or vulgar words and expressions in text - checkForProfanity(text: string): Promise<number>
Emotion Detection: Identify specific emotions (e.g., joy, sadness, anger) in text - detectEmotion(text: string): Promise<string>
Question Answering: Provide accurate answers to questions based on a given context or dataset - answerQuestion(question: string, context: string): Promise<string>
Image Captioning: Generate descriptive captions for images - captionImage(imageBuffer: Buffer): Promise<string>
Optical Character Recognition (OCR): Extract text from images of documents or handwritten notes (not supported by OpenAI vendor) - extractTextFromImage(imageBuffer: Buffer): Promise<string>
Object Detection in Images: Identify and locate objects within images - detectObjectsInImage(imageBuffer: Buffer): Promise<Record<string, number[]>>
Search Object in Images: Locate specific objects within images based on user queries - searchObjectInImage(imageBuffer: Buffer, objectQuery: string): Promise<number[] | null>
Violence detection in images: Identify and flag images containing violent content or scenes - checkImageForViolence(imageBuffer: Buffer): Promise<number>
Pornographic content detection in images: Detect and filter out images containing explicit or pornographic content - checkImageForPornography(imageBuffer: Buffer): Promise<number>
Facial expression analysis in images: Recognize and categorize facial expressions in images to determine emotions - analyzeFacialExpression(imageBuffer: Buffer): Promise<Record<string, string>>
Emotion Detection in Voice: Identify specific emotions (e.g., joy, sadness, anger) in voice data (not supported by OpenAI vendor) - detectEmotionInVoice(audioBuffer: Buffer): Promise<string>
Speech-to-text conversion: Transcribe spoken words from audio recordings into written text (not supported by OpenAI vendor) - speechToText(audioBuffer: Buffer): Promise<string>
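The four moderation methods in the list above each resolve to a numeric score from 1 to 10. A small gate over such scores might look like this; `isContentAcceptable`, the assumption that 1 means clean and 10 means severe, and the threshold of 5 are all hypothetical, not part of the ai-driven API:

```typescript
// Gate a moderation score from checkForOffensiveLanguage, checkForProfanity,
// checkImageForViolence, or checkImageForPornography. Illustrative only:
// assumes 1 = clean and 10 = severe, and the default threshold is arbitrary.
function isContentAcceptable(score: number, threshold = 5): boolean {
  if (score < 1 || score > 10) {
    throw new RangeError(`score must be between 1 and 10, got ${score}`);
  }
  return score < threshold;
}
```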
The ai-driven module provides the following methods:
Method | Description | Parameters | Return Promise Type |
---|---|---|---|
ask | Ask a question with customizable options [more] | text: string, options?: askOptionsType | string |
translateText | Translates the given text to selected language (English by default) | text: string, lang?: string, context?: string | string |
translateBulkText | Translates the given text to selected languages | text: string, lang: string[], context?: string | Record<string, string> |
detectLanguage | Detects the language of the provided text | text: string | string |
correctText | Corrects grammar and spelling errors in the given text | text: string | string |
summarizeText | Generates a summary of the provided text, optionally limiting the summary length | text: string, maxWords?: number | string |
generateText | Creates coherent and contextually relevant text based on the given prompt | prompt: string, maxWords?: number | string |
paraphraseText | Rewrites the given text to convey the same meaning using different words and sentence structures | text: string | string |
classifyText | Categorizes the given text into one of the predefined classes or topics | text: string, categories: string[] | string |
extractKeywords | Identifies and extracts the most important or relevant words or phrases from the text | text: string, count?: number | string[] |
extractEntities | Extracts named entities (names, dates, locations, organizations) from the text | text: string | Record<string, string[]> |
analyzeSentiment | Detects the sentiment (positive, negative, neutral) in the given text | text: string | string |
checkForOffensiveLanguage | Checks the given text for offensive language and returns a score from 1 to 10 | text: string | number |
checkForProfanity | Checks the given text for profanity and returns a score from 1 to 10 | text: string | number |
detectEmotion | Identifies specific emotions (e.g., joy, sadness, anger) in the given text | text: string | string |
answerQuestion | Provides an accurate answer to the question based on the given context | question: string, context: string | string |
captionImage | Generates a descriptive caption for the given image | imageBuffer: Buffer | string |
extractTextFromImage | Extracts text from images of documents or handwritten notes (not supported by OpenAI vendor) | imageBuffer: Buffer | string |
detectObjectsInImage | Identifies and locates objects within the given image | imageBuffer: Buffer | Record<string, number[]> |
searchObjectInImage | Locates a specific object within the image based on the user query | imageBuffer: Buffer, objectQuery: string | number[] \| null |
checkImageForViolence | Analyzes the given image for violent content and returns a score from 1 to 10 | imageBuffer: Buffer | number |
checkImageForPornography | Analyzes the given image for pornographic content and returns a score from 1 to 10 | imageBuffer: Buffer | number |
analyzeFacialExpression | Recognizes and categorizes facial expressions in the given image to determine emotions | imageBuffer: Buffer | Record<string, string> |
detectEmotionInVoice | Identifies specific emotions in the given voice data (not supported by OpenAI vendor) | audioBuffer: Buffer | string |
speechToText | Transcribes spoken words from the given audio recording into written text (not supported by OpenAI vendor) | audioBuffer: Buffer | string |
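As an example of consuming the return types in the table, `translateBulkText` resolves to a `Record<string, string>` keyed by language code. The display helper and the sample object below are illustrative only; `formatTranslations` is not part of the module, and the sample translations are made up rather than real API output:

```typescript
// Format a translateBulkText-style result ({ lang: translation }) for display.
function formatTranslations(translations: Record<string, string>): string {
  return Object.entries(translations)
    .map(([lang, text]) => `${lang}: ${text}`)
    .join('\n');
}

// Hypothetical result shape for translateBulkText('Hello, world!', ['it', 'es']):
const sample: Record<string, string> = { it: 'Ciao, mondo!', es: 'Hola, mundo!' };
console.log(formatTranslations(sample));
```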
Method: ask
This method is used to ask a question with customizable options.
```typescript
import { Assistant } from 'ai-driven';

const assistant = new Assistant({ apiVendor: 'OpenAI', apiKey: 'your_api_key_here' });

const result = await assistant.ask(
  'bubble sort function',
  {
    format: 'TypeScript',
    answerOnly: false,
    language: 'en',
    role: 'Fitness Trainer',
    tone: 'Informative',
    style: 'Poetic',
    emotion: 'Love',
    context: 'Sort colors',
  }
);

console.log(result);
```
Example output:

Bubble Sort Function for Sorting Colors
Ah, the dance of colors, a captivating sight,
Where hues embrace, in a harmonious flight.
Let us embark on a journey, with grace and might,
To sort these vibrant shades, with all our might.
Fitness Trainer's Perspective:
Just as our bodies crave a well-ordered routine,
Our colors, too, deserve a rhythm, serene.
Through the Bubble Sort, we'll find the way,
To arrange these hues, in a beautiful display.
With each gentle swap, a transformation unfolds,
Allowing the spectrum to shine, its story untold.
From the lightest hue to the darkest hue,
We'll navigate this dance, with love anew.
So, let's dive in, and embrace the flow,
As we sort these colors, with a rhythmic glow.
For in this process, we'll find the art,
Of bringing order to the canvas of our heart.
```typescript
function bubbleSort(colors: string[]): string[] {
  const n = colors.length;
  for (let i = 0; i < n - 1; i++) {
    for (let j = 0; j < n - i - 1; j++) {
      if (colors[j] > colors[j + 1]) {
        // Swap colors[j] and colors[j+1]
        [colors[j], colors[j + 1]] = [colors[j + 1], colors[j]];
      }
    }
  }
  return colors;
}
```
```typescript
public async ask(question: string, options?: askOptionsType): Promise<string>
```

Parameters:
- question (string): The question to ask.
- options (askOptionsType): Optional parameters to customize the question:
  - answerOnly (boolean): Return only the answer. Default is true.
  - language (string): Answer in the specified language.
  - context (string): In the specified context.
  - role (string): Act as a specific role.
  - task (string): Create a specific task.
  - format (string): Response format.
  - tone (string): Tone of the response.
  - style (string): Style of writing.
  - emotion (string): Emotion to convey.

This module requires a valid Claude API key to function. Ensure you have the necessary permissions and comply with Claude's terms of service when using this module.
Dimitry Ivanov 2@ivanoff.org.ua # curl -A cv ivanoff.org.ua
FAQs
The npm package ai-driven receives a total of 49 weekly downloads. As such, ai-driven popularity was classified as not popular.
We found that ai-driven demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 0 open source maintainers collaborating on the project.