# @ts-dspy/gemini
Google Gemini API integration for TS-DSPy - enables type-safe LLM interactions with Gemini models for TypeScript applications.
This package provides seamless integration between TS-DSPy and Google's Gemini language models, allowing you to build powerful, type-safe applications with Gemini 2.0 Flash and other Gemini models.
## Installation

```bash
npm install @ts-dspy/gemini @ts-dspy/core

# Install ts-node for proper execution (recommended)
npm install -g ts-node
```

> ⚠️ **Important:** Use ts-node to run TypeScript files directly. Transpiling to JavaScript may cause issues with decorators and type information.

```bash
# Run your scripts with ts-node
npx ts-node your-script.ts

# Or install globally and use directly
npm install -g ts-node
ts-node your-script.ts
```
## Setup

Get your Gemini API key from Google AI Studio.

```typescript
import { GeminiLM } from '@ts-dspy/gemini';

const lm = new GeminiLM({
  apiKey: process.env.GEMINI_API_KEY, // Your Gemini API key
  model: 'gemini-2.0-flash', // or 'gemini-1.0-pro', etc.
});
```
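A missing or empty key typically surfaces as a confusing API error later, so it can help to fail fast at startup. The helper below is a sketch, not part of `@ts-dspy/gemini`; the name `requireApiKey` is illustrative:

```typescript
// Hypothetical helper (not part of @ts-dspy/gemini): read the key from an
// environment-like object and fail fast with a clear message when it is missing.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.GEMINI_API_KEY;
  if (!key) {
    throw new Error('GEMINI_API_KEY is not set; create one in Google AI Studio.');
  }
  return key;
}
```

You could then construct the model with `apiKey: requireApiKey(process.env)` so a misconfigured environment is caught before the first request.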
## Quick Start

```typescript
import { Signature, InputField, OutputField, Predict, configure } from '@ts-dspy/core';
import { GeminiLM } from '@ts-dspy/gemini';

// Define your signature
class Translator extends Signature {
  static description = "Translate text between languages";

  @InputField({ description: "Text to translate" })
  text!: string;

  @InputField({ description: "Target language" })
  target_language!: string;

  @OutputField({ description: "Translated text" })
  translation!: string;

  @OutputField({ description: "Confidence score from 0-1", type: "number" })
  confidence!: number;
}

// Setup Gemini model
configure({
  lm: new GeminiLM({
    apiKey: process.env.GEMINI_API_KEY,
    model: 'gemini-2.0-flash'
  })
});

// Create and use predictor
const translator = new Predict(Translator);
const result = await translator.forward({
  text: "Hello, how are you?",
  target_language: "Spanish"
});

console.log(result.translation); // "Hola, ¿cómo estás?"
console.log(result.confidence); // 0.95
```
## Global Configuration

```typescript
import { configure } from '@ts-dspy/core';
import { GeminiLM } from '@ts-dspy/gemini';

// Configure globally
configure({
  lm: new GeminiLM({
    apiKey: process.env.GEMINI_API_KEY,
    model: 'gemini-2.0-flash'
  })
});

// Now you can use modules without passing the language model
const predictor = new Predict("question -> answer");
```
## Configuration Options

```typescript
const lm = new GeminiLM({
  // Required
  apiKey: 'your-api-key',

  // Model selection
  model: 'gemini-2.0-flash', // gemini-2.0-flash, gemini-1.0-pro, etc.
});

// You can also set generation parameters through LLMCallOptions
const result = await predictor.forward(
  { question: "What is AI?" },
  {
    temperature: 0.7, // 0-1, controls creativity
    maxTokens: 1000, // Maximum tokens to generate
    topP: 0.9, // Nucleus sampling parameter
    stopSequences: ['END'], // Stop generation at these sequences
  }
);
```
## Supported Models

- `gemini-2.0-flash` (latest and fastest)
- `gemini-1.0-pro`

## Model Capabilities

```typescript
const capabilities = lm.getCapabilities();
console.log(capabilities);
// {
//   supportsStreaming: false,
//   supportsStructuredOutput: true,
//   supportsFunctionCalling: false,
//   supportsVision: false,
//   maxContextLength: 32768,
//   supportedFormats: ['json_object']
// }
```
## Structured Output

Gemini excels at generating structured JSON responses:

```typescript
import { Signature, InputField, OutputField } from '@ts-dspy/core';

class ProductAnalysis extends Signature {
  @InputField({ description: "Product description" })
  description!: string;

  @OutputField({ description: "Product category" })
  category!: string;

  @OutputField({ description: "Price range", type: "string" })
  priceRange!: 'budget' | 'mid-range' | 'premium';

  @OutputField({ description: "Key features as array", type: "array" })
  features!: string[];

  @OutputField({ description: "Sentiment score", type: "number" })
  sentiment!: number;
}

const analyzer = new Predict(ProductAnalysis);
const result = await analyzer.forward({
  description: "Latest smartphone with AI camera, long battery life, and premium design"
});

console.log(result.category); // "Electronics"
console.log(result.priceRange); // "premium"
console.log(result.features); // ["AI camera", "long battery", "premium design"]
console.log(result.sentiment); // 0.8
```
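One caveat worth noting: literal union types like `'budget' | 'mid-range' | 'premium'` are erased at compile time, so nothing guarantees at runtime that the model actually returned one of those values. A minimal runtime guard is sketched below; the names `PRICE_RANGES` and `asPriceRange` are illustrative, not part of TS-DSPy:

```typescript
// Illustrative runtime guard: validate the model's output against the
// declared union before trusting it, since TypeScript types are erased.
const PRICE_RANGES = ['budget', 'mid-range', 'premium'] as const;
type PriceRange = (typeof PRICE_RANGES)[number];

function asPriceRange(value: unknown): PriceRange {
  if (PRICE_RANGES.includes(value as PriceRange)) {
    return value as PriceRange;
  }
  throw new Error(`Unexpected price range: ${String(value)}`);
}
```

Calling `asPriceRange(result.priceRange)` then either narrows the value or fails loudly instead of letting an unexpected string flow through the application.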
## Safety Settings

Gemini includes built-in safety settings to filter harmful content:

```typescript
// The GeminiLM automatically configures safety settings
// Default: BLOCK_MEDIUM_AND_ABOVE for harassment

// If content is blocked, you'll receive a descriptive error:
try {
  const result = await predictor.forward({ question: "Inappropriate content..." });
} catch (error) {
  if (error.message.includes('blockReason')) {
    console.log('Content was blocked by Gemini safety filters');
    // Handle content filtering gracefully
  }
}
```
## Chain of Thought

```typescript
import { ChainOfThought } from '@ts-dspy/core';

const reasoner = new ChainOfThought("problem -> solution: int");
const result = await reasoner.forward({
  problem: "A store sells apples for $2 each and oranges for $3 each. If someone buys 4 apples and 3 oranges, how much do they pay in total?"
});

console.log(result.reasoning); // "First, calculate apples: 4 × $2 = $8..."
console.log(result.solution); // 17
```
## Tool-Using Agents (RespAct)

```typescript
import { RespAct } from '@ts-dspy/core';

const agent = new RespAct("question -> answer", {
  tools: {
    calculate: {
      description: "Performs mathematical calculations including arithmetic operations. Use this when you need to compute numerical results.",
      function: (expression: string) => {
        try {
          // Note: this evaluates arbitrary JavaScript; restrict input in production
          return new Function('return ' + expression)();
        } catch (error) {
          return `Error: ${error}`;
        }
      }
    },
    convertCurrency: {
      description: "Converts between currencies. Provide amount and currency codes (e.g., '100 USD to EUR').",
      function: async (query: string) => {
        // Implementation would call a currency API
        return "Converted amount: ...";
      }
    }
  },
  maxSteps: 5
});

const result = await agent.forward({
  question: "What is 15 * 24 + 100 - 50 in USD converted to EUR?"
});
```
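Because the `calculate` tool evaluates model-produced strings with `new Function`, it can execute arbitrary JavaScript. One way to harden it is to whitelist arithmetic characters before evaluating. This is a sketch under that assumption; `safeCalculate` is not part of the package:

```typescript
// Illustrative hardening for a calculator tool: reject anything that is not
// plain arithmetic before evaluating, so the model cannot run arbitrary code.
function safeCalculate(expression: string): number {
  // Allow only digits, whitespace, parentheses, and basic operators.
  if (!/^[\d\s+\-*/().%]+$/.test(expression)) {
    throw new Error(`Unsupported characters in expression: ${expression}`);
  }
  // The whitelist above limits the input to arithmetic before evaluation.
  return new Function(`"use strict"; return (${expression});`)() as number;
}
```

A tool defined this way still answers the agent's arithmetic questions (e.g. `safeCalculate('15 * 24 + 100 - 50')`) but throws on inputs like `process.exit(1)`.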
## Classification Example

```typescript
class SentimentAnalysis extends Signature {
  static description = "Analyze the sentiment and emotions in the given text";

  @InputField({ description: "Text to analyze for sentiment" })
  text!: string;

  @OutputField({ description: "Sentiment classification" })
  sentiment!: 'positive' | 'negative' | 'neutral';

  @OutputField({ description: "Confidence score between 0 and 1" })
  confidence!: number;

  @OutputField({ description: "Key emotional indicators found" })
  emotions!: string[];
}

const classifier = new Predict(SentimentAnalysis);
const result = await classifier.forward({
  text: "I absolutely love this new framework! It's incredibly powerful and easy to use."
});

console.log(result.sentiment); // "positive"
console.log(result.confidence); // 0.92
console.log(result.emotions); // ["love", "enthusiasm", "satisfaction"]
```
## Multi-Turn Conversations

```typescript
const chatModel = new GeminiLM({
  apiKey: process.env.GEMINI_API_KEY,
  model: 'gemini-2.0-flash'
});

const messages = [
  { role: 'user', content: 'Hello! What is TypeScript?' },
  { role: 'assistant', content: 'TypeScript is a superset of JavaScript...' },
  { role: 'user', content: 'How does it help with large applications?' }
];

const response = await chatModel.chat(messages);
console.log(response);
```
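Since the full message history is passed to each `chat()` call, the caller is responsible for maintaining it between turns. A small sketch of appending a completed turn, assuming the message shape shown above (`appendTurn` is illustrative, not part of the package):

```typescript
// Illustrative helper: maintain the conversation history between chat() calls,
// assuming messages are { role, content } objects as in the example above.
type ChatMessage = { role: 'user' | 'assistant'; content: string };

function appendTurn(
  history: ChatMessage[],
  userContent: string,
  assistantContent: string
): ChatMessage[] {
  // Returns a new array so earlier snapshots of the history stay unchanged.
  return [
    ...history,
    { role: 'user', content: userContent },
    { role: 'assistant', content: assistantContent },
  ];
}
```

After each model reply you would call `history = appendTurn(history, question, response)` and pass the updated array to the next `chat()` call.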
## Error Handling

Comprehensive error handling for Gemini-specific issues:

```typescript
try {
  const result = await predictor.forward({ question: "Complex question" });
} catch (error) {
  if (error.message.includes('blockReason')) {
    console.log('Content blocked by safety filters');
    // Try rephrasing the question
  } else if (error.message.includes('API key')) {
    console.log('Invalid or missing API key');
    // Check your API key configuration
  } else if (error.message.includes('quota')) {
    console.log('API quota exceeded');
    // Implement retry with backoff
  } else {
    console.log('Unexpected error:', error.message);
  }
}
```
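For the quota case, a generic retry wrapper with exponential backoff is one option. This is a sketch, not part of the package; the retry predicate (matching `'quota'` in the message) and the delay schedule are assumptions you should adapt:

```typescript
// Illustrative retry wrapper: retries only when the error message mentions
// 'quota', with exponential backoff (baseDelayMs, then 2x, 4x, ...).
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      // Give up on non-quota errors or once the retry budget is spent.
      if (!message.includes('quota') || attempt >= maxRetries) {
        throw error;
      }
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Usage would look like `await withBackoff(() => predictor.forward({ question }))`.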
## Usage Tracking

> **Note:** The Gemini API currently doesn't provide detailed token usage statistics through the SDK:

```typescript
const lm = new GeminiLM({
  apiKey: process.env.GEMINI_API_KEY,
  model: 'gemini-2.0-flash'
});

// Make some predictions
const predictor = new Predict("question -> answer", lm);
await predictor.forward({ question: "What is AI?" });

// Get usage statistics (currently limited)
const usage = lm.getUsage();
console.log(`Requests made: ${usage.requestCount || 'Not available'}`);
console.log('Note: Detailed token usage not yet available in Gemini API');
```
## Gemini vs. OpenAI

| Feature | Gemini | OpenAI |
|---|---|---|
| Models | Gemini 2.0 Flash, 1.0 Pro | GPT-4, GPT-3.5-turbo |
| Context Length | 32k+ tokens | 8k-128k tokens |
| Structured Output | ✅ Native JSON | ✅ JSON mode |
| Streaming | ❌ Not yet supported | ✅ Full support |
| Function Calling | ❌ Not yet supported | ✅ Full support |
| Safety Filtering | ✅ Built-in | ⚠️ Moderation API |
| Cost | Generally lower | Higher for advanced models |
| Speed | Very fast (Flash model) | Variable by model |
## Troubleshooting

**API Key Issues**

```bash
# Set your API key
export GEMINI_API_KEY="your-key-here"
```

**Content Blocked**: the error message will include `blockReason`; try rephrasing the input or handling the error as shown in the error-handling section.

**Model Not Found**: check that the `model` option matches a supported model name, e.g. `gemini-2.0-flash`.

**Rate Limiting**: the error message will mention quota; back off and retry the request.
## Examples

Check out the complete examples in the TS-DSPy repository:

- `basic-gemini-example.ts` - Comprehensive usage examples

## Contributing

Contributions are welcome! Please see the main TS-DSPy repository for contribution guidelines.

## License

MIT License - see the LICENSE file for details.

## Related Packages

- `@ts-dspy/core` - Core TS-DSPy framework
- `@ts-dspy/openai` - OpenAI integration

Made with ❤️ for the TypeScript + AI community