
Create type-safe functions using AI Language Models with ease.

Install aifn with the package manager of your choice:
# Using npm
npm install aifn
# Using yarn
yarn add aifn
# Using pnpm
pnpm add aifn
You'll also need to install the provider SDKs you want to use:
# For OpenAI
pnpm add openai
# For Anthropic
pnpm add @anthropic-ai/sdk
# For Google's Gemini
pnpm add @google/generative-ai
# For Ollama
pnpm add ollama
Define a type-safe function by pairing an LLM provider with Zod schemas for the input and output. With OpenAI:
import { z } from 'zod'
import { llm, fn } from 'aifn'
import { OpenAI } from 'openai'
const toFrench = fn({
  llm: llm.openai(new OpenAI({ apiKey: 'YOUR_OPENAI_API_KEY' }), 'gpt-4o-mini'),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
})
const res = await toFrench('Hello, how are you?')
console.log(res.translation) // 'Bonjour, comment ça va?'
The same function with Anthropic:
import { z } from 'zod'
import { llm, fn } from 'aifn'
import { Anthropic } from '@anthropic-ai/sdk'
const toFrench = fn({
  llm: llm.anthropic(new Anthropic({ apiKey: 'YOUR_ANTHROPIC_API_KEY' }), 'claude-3-5-haiku-20241022'),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
})
const res = await toFrench('Hello, how are you?')
console.log(res.translation) // 'Bonjour, comment ça va?'
With Google Gemini:
import { z } from 'zod'
import { llm, fn } from 'aifn'
import { GoogleGenerativeAI } from '@google/generative-ai'
const toFrench = fn({
  llm: llm.gemini(new GoogleGenerativeAI('YOUR_GEMINI_API_KEY'), 'gemini-1.5-flash'),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
})
const res = await toFrench('Hello, how are you?')
console.log(res.translation) // 'Bonjour, comment ça va?'
With Ollama (for local models):
import { z } from 'zod'
import { llm, fn } from 'aifn'
import { Ollama } from 'ollama'
const toFrench = fn({
  llm: llm.ollama(new Ollama(), 'mistral:7b'),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
})
const res = await toFrench('Hello, how are you?')
console.log(res.translation) // 'Bonjour, comment ça va?'
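The hard-coded API keys above are for illustration only; in practice, read them from the environment. A minimal sketch (assuming Node, where the official OpenAI SDK also falls back to the OPENAI_API_KEY environment variable when no key is passed):
import { OpenAI } from 'openai'
import { llm } from 'aifn'

// Illustrative: pull the key from the environment instead of committing
// it to source control.
const provider = llm.openai(
  new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  'gpt-4o-mini'
)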
You can specify examples for your function to improve the quality of the output.
import { z } from 'zod'
import { llm, fn } from 'aifn'
import { OpenAI } from 'openai'
const toFrench = fn({
  llm: llm.openai(new OpenAI({ apiKey: 'YOUR_OPENAI_API_KEY' }), 'gpt-4o-mini'),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
  examples: [
    { input: 'Hello', output: { translation: 'Bonjour' } },
    { input: 'How are you?', output: { translation: 'Comment ça va?' } },
  ],
})
You can use custom LLM providers by passing your own implementation to the llm.custom method:
import { z } from 'zod'
import { llm, fn, LLMRequest, LLMResponse } from 'aifn'
const toFrench = fn({
  llm: llm.custom(async (req: LLMRequest): Promise<LLMResponse> => {
    // Implement your custom LLM calling logic here and return one of the
    // LLMResponse variants described below.
    return { type: 'error', error: new Error('not implemented') }
  }),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
})
The request and response types look as follows:
type LLMRequest = {
  system: string
  messages: Message[]
  output_schema?: ZodSchema<any>
}
type Message = {
  role: 'user' | 'assistant'
  content: string
}
type LLMResponse =
  | { type: 'text'; content: string; response: any }
  | { type: 'json'; data: unknown; response: any }
  | { type: 'error'; error: unknown }
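For instance, here is a minimal sketch of a custom provider backed by a plain HTTP endpoint. The URL and request payload are hypothetical; only the LLMRequest and LLMResponse types come from aifn:
import { llm, LLMRequest, LLMResponse } from 'aifn'

// Hypothetical endpoint and payload shape; adapt to your backend.
const myProvider = llm.custom(async (req: LLMRequest): Promise<LLMResponse> => {
  try {
    const res = await fetch('https://example.com/v1/generate', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ system: req.system, messages: req.messages }),
    })
    const raw = await res.json()
    // Return structured data when an output schema was requested, text otherwise.
    return req.output_schema
      ? { type: 'json', data: raw, response: raw }
      : { type: 'text', content: String(raw), response: raw }
  } catch (error) {
    return { type: 'error', error }
  }
})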
The function created with fn has a config property that contains the configuration used to create the function.
import { z } from 'zod'
import { OpenAI } from 'openai'
import { llm, fn } from 'aifn'
const toFrench = fn({
  llm: llm.openai(new OpenAI({ apiKey: 'YOUR_OPENAI_API_KEY' }), 'gpt-4o-mini'),
  description: `Translate the user message from English to French`,
  input: z.string().describe('The text to translate'),
  output: z.object({
    translation: z.string().describe('The translated text'),
  }),
})
console.log(toFrench.config)
// {
//   llm: {
//     provider: 'openai',
//     client: OpenAI {...},
//     model: 'gpt-4o-mini',
//     ...
//   },
//   description: 'Translate the user message from English to French',
//   input: ZodString {...},
//   output: ZodObject {...},
// }
You can use this configuration to duplicate the function with a different LLM, for example:
import { Ollama } from 'ollama'

const otherToFrench = fn({
  ...toFrench.config,
  llm: llm.ollama(new Ollama(), 'llama3.1'),
})
The function created with fn has mock and unmock methods that can be used to mock the function during tests.
// Test runner imports are illustrative; this example assumes mocha + chai.
import { describe, it, before, after } from 'mocha'
import { expect } from 'chai'
import { toFrench } from './my/file.js'

describe('my awesome feature', () => {
  before(() => {
    toFrench.mock(async text => ({ translation: `Translated(${text})` }))
  })
  after(() => {
    toFrench.unmock()
  })
  it('translates text', async () => {
    const res = await toFrench('Hello, how are you?')
    expect(res.translation).to.equal('Translated(Hello, how are you?)')
  })
})
function fn<Args, R>(config: FnConfig<Args, R>): Fn<Args, R>
Creates a type-safe function that uses an LLM to transform inputs into outputs.
Parameters:
- config: Configuration object with the following properties:
  - llm: LLM provider instance (see LLM Providers below)
  - description: String describing what the function does (used as the system prompt)
  - input: Zod schema for the input type
  - output: Zod schema for the output type
  - examples?: Optional array of input/output examples to guide the LLM

Returns: A function with the following properties:
- (args: Args) => Promise<R>: The main function that processes inputs
- config: The configuration object used to create the function
- mock(implementation: (args: Args) => Promise<R>): Method to set a mock implementation
- unmock(): Method to remove the mock implementation

Example:
import { z } from 'zod'
import { fn, llm } from 'aifn'
import { OpenAI } from 'openai'
const summarize = fn({
  llm: llm.openai(new OpenAI({ apiKey: 'YOUR_API_KEY' }), 'gpt-3.5-turbo'),
  description: 'Summarize the given text in a concise way',
  input: z.object({
    text: z.string().describe('The text to summarize'),
    maxWords: z.number().describe('Maximum number of words in the summary')
  }),
  output: z.object({
    summary: z.string().describe('The summarized text'),
    wordCount: z.number().describe('Number of words in the summary')
  }),
  examples: [{
    input: { text: 'TypeScript is a programming language...', maxWords: 10 },
    output: { summary: 'TypeScript: JavaScript with static typing.', wordCount: 5 }
  }]
})
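A hypothetical call to the function above; the result shape follows the declared output schema:
const res = await summarize({
  text: 'TypeScript is a typed superset of JavaScript that compiles to plain JavaScript.',
  maxWords: 12,
})
console.log(res.summary)   // e.g. 'TypeScript adds static types to JavaScript.'
console.log(res.wordCount) // e.g. 6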
function openai(client: OpenAI, model: string): LLM
Creates an OpenAI LLM provider.
Parameters:
- client: OpenAI client instance
- model: Model name (e.g., 'gpt-4', 'gpt-4o-mini')

Example:
import { OpenAI } from 'openai'
import { llm } from 'aifn'
const provider = llm.openai(
  new OpenAI({ apiKey: 'YOUR_API_KEY' }),
  'gpt-4o-mini'
)
function anthropic(client: Anthropic, model: string): LLM
Creates an Anthropic LLM provider.
Parameters:
- client: Anthropic client instance
- model: Model name (e.g., 'claude-3-5-haiku-20241022')

Example:
import Anthropic from '@anthropic-ai/sdk'
import { llm } from 'aifn'
const provider = llm.anthropic(
  new Anthropic({ apiKey: 'YOUR_API_KEY' }),
  'claude-3-5-haiku-20241022'
)
function gemini(client: GoogleGenerativeAI, model: string): LLM
Creates a Google Gemini LLM provider.
Parameters:
- client: GoogleGenerativeAI client instance
- model: Model name (e.g., 'gemini-1.5-flash')

Example:
import { GoogleGenerativeAI } from '@google/generative-ai'
import { llm } from 'aifn'
const provider = llm.gemini(
  new GoogleGenerativeAI('YOUR_API_KEY'),
  'gemini-1.5-flash'
)
function ollama(client: Ollama, model: string): LLM
Creates an Ollama LLM provider for local models.
Parameters:
- client: Ollama client instance
- model: Model name (e.g., 'llama3.1', 'mistral')

Example:
import { Ollama } from 'ollama'
import { llm } from 'aifn'
const provider = llm.ollama(new Ollama(), 'llama3.1')
function custom(generate: (req: LLMRequest) => Promise<LLMResponse>): LLM
Creates a custom LLM provider with your own implementation.
Parameters:
- generate: Function that implements the LLM request/response cycle

Example:
import { llm, LLMRequest, LLMResponse } from 'aifn'
const provider = llm.custom(async (req: LLMRequest): Promise<LLMResponse> => {
  // Your custom implementation here
  return {
    type: 'json',
    data: { /* your response data */ },
    response: { /* raw response data */ }
  }
})
type LLMRequest = {
  system: string            // System prompt
  messages: Message[]       // Conversation history
  output_schema?: ZodSchema // Expected output schema
}
type LLMResponse =
  | { type: 'text'; content: string; response: any }
  | { type: 'json'; data: unknown; response: any }
  | { type: 'error'; error: unknown }
type Message = {
  role: 'user' | 'assistant'
  content: string
}
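Since LLMResponse is a discriminated union on the type field, TypeScript narrows each branch automatically. A small illustrative helper (the handle function is ours, not part of aifn):
import { LLMResponse } from 'aifn'

// Illustrative: switch on the discriminant to narrow each variant.
function handle(res: LLMResponse): string {
  switch (res.type) {
    case 'text':
      return res.content
    case 'json':
      return JSON.stringify(res.data)
    case 'error':
      throw res.error
  }
}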
Changelog:
2.0.0-alpha.1 (Dec 8th 2024)
2.0.0-alpha.0 (Dec 2nd 2024)
1.0.0 (Nov 25th 2024)