
vectorcache
Official TypeScript/JavaScript SDK for Vectorcache - Intelligent LLM caching layer
Official JavaScript/TypeScript SDK for Vectorcache - the intelligent caching layer for LLMs that uses semantic similarity to dramatically reduce API costs and improve response times.
```bash
npm install vectorcache
# or
yarn add vectorcache
# or
pnpm add vectorcache
```
```typescript
import { VectorcacheClient } from 'vectorcache';

const client = new VectorcacheClient({
  apiKey: 'your-api-key-here'
});

const result = await client.query({
  query: "What is machine learning?",
  context: "AI tutorial",
  model: "gpt-4"
});

console.log(result.response);
console.log('Cache hit:', result.is_cache_hit);
console.log('Response time:', result.response_time_ms + 'ms');
```
For production use, load your API key from an environment variable rather than hardcoding it:

```typescript
const client = new VectorcacheClient({
  apiKey: process.env.VECTORCACHE_API_KEY
});
```
The client accepts the following configuration options:

```typescript
const client = new VectorcacheClient({
  apiKey: 'your-api-key',                  // Required: Your Vectorcache API key
  baseUrl: 'https://api.vectorcache.com',  // Optional: API base URL
  projectId: 'your-project-id',            // Optional: Default project ID
  timeout: 30000,                          // Optional: Request timeout (ms)
  logLevel: 'warn'                         // Optional: 'debug' | 'info' | 'warn' | 'error' | 'none'
});
```
client.query(request)
Make a cached query to your LLM.
```typescript
const result = await client.query({
  query: "Explain quantum computing",
  context: "For a beginner audience",  // Optional
  model: "gpt-4",                      // Optional, defaults to 'gpt-4'
  max_tokens: 150,                     // Optional, defaults to 1000
  temperature: 0.7,                    // Optional, defaults to 0.7
  metadata: { userId: "123" }          // Optional
});
```
Response:

```typescript
{
  response: string;           // The cached or generated response
  is_cache_hit: boolean;      // Whether this was a cache hit
  similarity_score?: number;  // Similarity score for cache hits
  response_time_ms: number;   // Response time in milliseconds
  tokens_used?: number;       // Tokens used (for cache misses)
  estimated_cost?: number;    // Estimated cost in USD
  cache_entry_id?: string;    // Cache entry ID for hits
  query_id: string;           // Unique query ID for tracking
}
```
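Because `estimated_cost` and `tokens_used` are only populated on cache misses, tallying spend across a batch of results takes a little care. The helper below is not part of the SDK; it is a small sketch that relies only on the `is_cache_hit` and `estimated_cost` fields documented above:

```typescript
// Hypothetical helper (not part of the SDK): tally cache hits, misses,
// and estimated spend from a batch of query results.
interface QuerySummary {
  hits: number;
  misses: number;
  estimatedSpendUsd: number;
}

function summarize(
  results: { is_cache_hit: boolean; estimated_cost?: number }[]
): QuerySummary {
  return results.reduce<QuerySummary>(
    (acc, r) => ({
      hits: acc.hits + (r.is_cache_hit ? 1 : 0),
      misses: acc.misses + (r.is_cache_hit ? 0 : 1),
      // estimated_cost is only present on cache misses, so default to 0
      estimatedSpendUsd: acc.estimatedSpendUsd + (r.estimated_cost ?? 0),
    }),
    { hits: 0, misses: 0, estimatedSpendUsd: 0 }
  );
}
```

Feed it the resolved results of one or more `client.query(...)` calls to see how much of your traffic is being served from cache.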
client.testCache(request)
Test the cache workflow with debugging information.
```typescript
const result = await client.testCache({
  project_id: 'your-project-id',
  prompt: "What is AI?",
  similarity_threshold: 0.8,  // Optional, defaults to 0.85
  include_debug: true         // Optional, defaults to true
});
```
client.getCacheStats(projectId?)
Get cache statistics for a project.
```typescript
const stats = await client.getCacheStats('your-project-id');

console.log('Total entries:', stats.total_entries);
console.log('Cache hits:', stats.total_hits);
console.log('Cost saved:', stats.total_cost_saved);
```
client.findSimilarQueries(query, projectId?)
Find similar queries in your cache.
```typescript
const similar = await client.findSimilarQueries(
  "What is artificial intelligence?",
  'your-project-id'
);

console.log('Found', similar.total_found, 'similar queries');
```
client.clearCache(request)
Clear cache entries.
```typescript
const result = await client.clearCache({
  project_id: 'your-project-id',  // Optional
  expired_only: true,             // Optional, only clear expired entries
  older_than_hours: 24            // Optional, clear entries older than X hours
});
```
client.health()
Check API health and connectivity.
```typescript
const health = await client.health();
console.log('Status:', health.status);
```
The SDK provides custom error types for different scenarios:
```typescript
import {
  VectorcacheAPIError,
  VectorcacheAuthenticationError,
  VectorcacheRateLimitError,
  VectorcacheValidationError,
  VectorcacheNetworkError,
  VectorcacheTimeoutError
} from 'vectorcache';

try {
  const result = await client.query({
    query: "What is AI?",
    model: "gpt-4"
  });
} catch (error) {
  if (error instanceof VectorcacheAuthenticationError) {
    console.error('Invalid API key:', error.message);
  } else if (error instanceof VectorcacheRateLimitError) {
    console.error('Rate limited. Retry after:', error.retryAfter);
  } else if (error instanceof VectorcacheValidationError) {
    console.error('Validation error:', error.message);
    console.error('Details:', error.details);
  } else {
    console.error('Unexpected error:', error.message);
  }
}
```
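Rate-limit errors are usually worth retrying rather than surfacing to the caller. The sketch below is not part of the SDK: it wraps any async call with a retry loop, preferring the server's retry-after hint when one is available (this example assumes `retryAfter` is expressed in seconds, which the SDK's docs do not specify) and falling back to capped exponential backoff otherwise:

```typescript
// Compute how long to wait before the next attempt. Prefers the server's
// retry-after hint (assumed to be in seconds); otherwise uses exponential
// backoff starting at 500 ms and capped at 10 s.
function backoffDelayMs(attempt: number, retryAfterSec?: number): number {
  if (retryAfterSec !== undefined) return retryAfterSec * 1000;
  return Math.min(10_000, 500 * 2 ** attempt);
}

// Generic retry wrapper: retries `fn` while `isRetryable(err)` is true,
// up to `maxAttempts` total attempts, sleeping between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  maxAttempts = 3
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (!isRetryable(err) || attempt === maxAttempts - 1) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
  throw lastErr;
}
```

With the SDK's error types, a call might look like `withRetry(() => client.query({ query: "What is AI?" }), (e) => e instanceof VectorcacheRateLimitError)`.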
```typescript
import { VectorcacheClient } from 'vectorcache';

const client = new VectorcacheClient({
  apiKey: process.env.VECTORCACHE_API_KEY
});

// Simple query
const result = await client.query({
  query: "What is the capital of France?",
  model: "gpt-4"
});

console.log(result.response); // "The capital of France is Paris."
```
```typescript
const queries = [
  "What is machine learning?",
  "Explain neural networks",
  "What is deep learning?"
];

const results = await Promise.all(
  queries.map(query => client.query({ query, model: "gpt-4" }))
);

results.forEach((result, index) => {
  console.log(`Query ${index + 1}:`);
  console.log(`  Cache hit: ${result.is_cache_hit}`);
  console.log(`  Response: ${result.response}\n`);
});
```
```typescript
// Get cache statistics
const stats = await client.getCacheStats();
console.log(`Cache hit rate: ${(stats.total_hits / stats.total_entries * 100).toFixed(1)}%`);
console.log(`Total cost saved: $${stats.total_cost_saved.toFixed(4)}`);

// Find similar queries
const similar = await client.findSimilarQueries("What is AI?");
console.log(`Found ${similar.total_found} similar queries`);

// Clear old cache entries
await client.clearCache({
  expired_only: true,
  older_than_hours: 168 // 1 week
});
```
You can set these environment variables for convenience:
```bash
VECTORCACHE_API_KEY=your-api-key-here
VECTORCACHE_PROJECT_ID=your-default-project-id
```
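The README does not say whether the client reads these variables automatically, so a safe pattern is to pass them explicitly. The `configFromEnv` helper below is hypothetical (not part of the SDK); it builds a client config from an environment map and fails fast when the API key is missing:

```typescript
// Hypothetical helper: assemble a client config from environment variables.
interface ClientConfig {
  apiKey: string;
  projectId?: string;
}

function configFromEnv(env: Record<string, string | undefined>): ClientConfig {
  const apiKey = env.VECTORCACHE_API_KEY;
  if (!apiKey) throw new Error('VECTORCACHE_API_KEY is not set');
  return { apiKey, projectId: env.VECTORCACHE_PROJECT_ID };
}
```

In an application you would call `new VectorcacheClient(configFromEnv(process.env))`, getting a clear startup error instead of an authentication failure on the first request.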
This SDK works in both Node.js and modern browsers. For browser usage:
```html
<script type="module">
  import { VectorcacheClient } from 'https://cdn.skypack.dev/vectorcache';

  const client = new VectorcacheClient({
    apiKey: 'your-api-key'
  });
</script>
```
The SDK is written in TypeScript and includes comprehensive type definitions:
```typescript
import {
  VectorcacheClient,
  SemanticQueryRequest,
  SemanticQueryResponse,
  CacheStatsResponse
} from 'vectorcache';

const client = new VectorcacheClient({ apiKey: 'key' });

// Full type safety
const request: SemanticQueryRequest = {
  query: "What is AI?",
  model: "gpt-4",
  max_tokens: 100
};

const response: SemanticQueryResponse = await client.query(request);
```
```bash
# Clone the repository
git clone https://github.com/vectorcache/vectorcache-js.git

# Install dependencies
npm install

# Build the project
npm run build

# Run tests
npm test

# Run examples
node examples/basic-usage.js
```
We welcome contributions! Please see our Contributing Guide for details.
MIT License - see LICENSE file for details.