
@khaveeai/providers-mock
Mock providers for KhaveeAI SDK development and testing. Perfect for developing VRM avatar applications without requiring API keys or external services.
```bash
npm install @khaveeai/providers-mock @khaveeai/react @khaveeai/core
```
```tsx
import { KhaveeProvider, VRMAvatar } from "@khaveeai/react";
import { MockLLM, MockTTS } from "@khaveeai/providers-mock";
import { Canvas } from "@react-three/fiber";

function App() {
  const mockConfig = {
    llm: new MockLLM(),
    tts: new MockTTS(),
  };

  return (
    <KhaveeProvider config={mockConfig}>
      <Canvas>
        <VRMAvatar src="/models/avatar.vrm" />
      </Canvas>
      {/* Your UI components */}
    </KhaveeProvider>
  );
}
```
```tsx
"use client";

import { useState } from "react";
import { KhaveeProvider, VRMAvatar } from "@khaveeai/react";
import { MockLLM, MockTTS } from "@khaveeai/providers-mock";
import { Canvas } from "@react-three/fiber";

// Instantiate once, outside the component, so it isn't recreated on every render
const mockLLM = new MockLLM();

function Chat() {
  const [messages, setMessages] = useState<Array<{ role: string; content: string }>>([]);
  const [input, setInput] = useState("");

  const handleSend = async () => {
    if (!input.trim()) return;

    const userMessage = { role: "user", content: input };
    setMessages((prev) => [...prev, userMessage]);
    setInput("");

    // Stream response from MockLLM
    let response = "";
    for await (const chunk of mockLLM.streamChat({
      messages: [...messages, userMessage],
    })) {
      if (chunk.type === "text") {
        response += chunk.delta;
      }
    }

    setMessages((prev) => [...prev, { role: "assistant", content: response }]);
  };

  return (
    <div className="chat">
      <div className="messages">
        {messages.map((msg, i) => (
          <div key={i} className={msg.role}>
            <strong>{msg.role}:</strong> {msg.content}
          </div>
        ))}
      </div>
      <div className="input-area">
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === "Enter" && handleSend()}
          placeholder="Type a message..."
        />
        <button onClick={handleSend}>Send</button>
      </div>
    </div>
  );
}

export default function App() {
  return (
    <KhaveeProvider config={{ llm: new MockLLM(), tts: new MockTTS() }}>
      <div className="app">
        <Canvas className="canvas">
          <VRMAvatar src="/models/avatar.vrm" />
          <ambientLight intensity={0.5} />
        </Canvas>
        <Chat />
      </div>
    </KhaveeProvider>
  );
}
```
Simulated Large Language Model with context-aware responses and animation triggers.
```ts
import { MockLLM } from "@khaveeai/providers-mock";

const mockLLM = new MockLLM();

// Stream chat responses
for await (const chunk of mockLLM.streamChat({
  messages: [{ role: "user", content: "Hello!" }],
})) {
  console.log(chunk); // { type: 'text', delta: 'H' }
}
```
MockLLM intelligently responds based on keywords in your messages:
| Keyword | Response Type | Animation Trigger |
|---|---|---|
| hello, hi, hey | Greeting | wave_small 👋 |
| dance, move | Dancing | swing_dance 💃 |
| sad, cry, upset | Empathy | sad 💙 |
| happy, good, great | Celebration | laugh 😊 |
| fight, angry, mad | Conflict | punch 🥊 |
| think, question, wonder | Thoughtful | thinking 🤔 |
| yes, agree, correct | Agreement | nod_yes ✓ |
| no, disagree, wrong | Disagreement | shake_no ✗ |
| anything else | Random response | Various |
Responses include embedded animation commands in the format `*trigger_animation: animation_name*`:

```ts
// Example responses
"Hello! *trigger_animation: wave_small* 👋"
"I'd love to dance! *trigger_animation: swing_dance* 💃"
"Let me think... *trigger_animation: thinking* 🤔"
```
You can parse these triggers in your UI to play corresponding VRM animations:
```ts
const parseAnimationTrigger = (text: string) => {
  const match = text.match(/\*trigger_animation:\s*(\w+)\*/);
  return match ? match[1] : null;
};

// Usage
const animation = parseAnimationTrigger(response);
if (animation) {
  animate(animation); // Play the corresponding VRM animation
}
```
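In a chat UI you will usually also want to hide the trigger markup from the rendered message. A minimal sketch (`stripAnimationTriggers` is a hypothetical helper for illustration, not part of the SDK):

```typescript
// Hypothetical helper: remove animation markup before displaying chat text.
const stripAnimationTriggers = (text: string): string =>
  text.replace(/\*trigger_animation:\s*\w+\*\s?/g, "").trim();

console.log(stripAnimationTriggers("Hello! *trigger_animation: wave_small* 👋"));
// "Hello! 👋"
```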
Simulated Text-to-Speech with realistic timing and viseme logging.
```ts
import { MockTTS } from "@khaveeai/providers-mock";

const mockTTS = new MockTTS();

// Simulate speech
await mockTTS.speak({
  text: "Hello, I'm a VRM avatar!",
  voice: "mock-voice",
});
```
MockTTS provides detailed logging for development:
```text
🔊 [Mock TTS] Speaking with mock-voice:
"Hello, I'm a VRM avatar!"
👄 [Mock Visemes] Simulating lip-sync patterns...
📊 Detected: 7 vowels, 11 consonants
🎭 Viseme sequence: Hello, I'm a VRM avatar!
⏱️ [Mock TTS] Speech duration: 1600ms
✅ [Mock TTS] Speech completed
```
MockTTS simulates phoneme/viseme data for lip-sync development:
```text
// Vowel mapping
'a' → 'aa' (open mouth)
'e' → 'ee' (half open)
'i' → 'ih' (smile)
'o' → 'oh' (round)
'u' → 'ou' (pucker)

// Consonant mapping
'b', 'm', 'p' → 'PP' (lips together)
'f', 'v' → 'FF' (teeth on lip)
't', 'd', 'n', 'l' → 'TH' (tongue)
's', 'z' → 'SS' (hiss)
// ... and more
```
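To see how a mapping like this turns text into a viseme sequence, here is an illustrative, simplified mapper; MockTTS implements its own version internally, so treat the table and function below as a sketch rather than the SDK's actual logic:

```typescript
// Illustrative only: a simplified character-to-viseme mapper following the
// table above. Unmapped characters are skipped.
const VISEME_MAP: Record<string, string> = {
  a: "aa", e: "ee", i: "ih", o: "oh", u: "ou",
  b: "PP", m: "PP", p: "PP",
  f: "FF", v: "FF",
  t: "TH", d: "TH", n: "TH", l: "TH",
  s: "SS", z: "SS",
};

function textToVisemes(text: string): string[] {
  return [...text.toLowerCase()]
    .map((ch) => VISEME_MAP[ch])
    .filter((v): v is string => v !== undefined);
}

console.log(textToVisemes("map")); // ["PP", "aa", "PP"]
```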
Perfect for building UI and testing animations without OpenAI API costs:
```tsx
// Development environment
const isDev = process.env.NODE_ENV === "development";

const config = isDev
  ? { llm: new MockLLM(), tts: new MockTTS() }
  : { realtime: new OpenAIRealtimeProvider({ apiKey: process.env.OPENAI_API_KEY! }) };

<KhaveeProvider config={config}>
  <VRMAvatar src="/models/avatar.vrm" />
</KhaveeProvider>
```
Test your animation system with predictable triggers:
```tsx
import { MockLLM } from "@khaveeai/providers-mock";
import { useVRMAnimations } from "@khaveeai/react";

function AnimationTest() {
  const { animate } = useVRMAnimations();
  const mockLLM = new MockLLM();

  const testAnimations = async () => {
    const testMessages = [
      "Say hello",     // Triggers wave animation
      "Let's dance",   // Triggers dance animation
      "Are you sad?",  // Triggers sad animation
      "That's great!", // Triggers happy animation
    ];

    for (const msg of testMessages) {
      let response = "";
      for await (const chunk of mockLLM.streamChat({
        messages: [{ role: "user", content: msg }],
      })) {
        if (chunk.type === "text") response += chunk.delta;
      }

      // Parse and trigger animation
      const match = response.match(/\*trigger_animation:\s*(\w+)\*/);
      if (match) {
        console.log(`Playing animation: ${match[1]}`);
        animate(match[1]);
      }

      await new Promise((resolve) => setTimeout(resolve, 2000));
    }
  };

  return <button onClick={testAnimations}>Test Animations</button>;
}
```
Focus on UI/UX without worrying about API integration:
```tsx
function DevelopmentUI() {
  return (
    <KhaveeProvider config={{ llm: new MockLLM() }}>
      {/* Design your UI components */}
      <ChatInterface />
      <ExpressionControls />
      <AnimationPanel />

      {/* Avatar responds with mock data */}
      <Canvas>
        <VRMAvatar src="/models/avatar.vrm" />
      </Canvas>
    </KhaveeProvider>
  );
}
```
Write tests without external API dependencies:
```ts
import { MockLLM, MockTTS } from "@khaveeai/providers-mock";

describe("Chat Component", () => {
  it("should respond to user messages", async () => {
    const mockLLM = new MockLLM();
    const messages = [{ role: "user", content: "Hello" }];

    let response = "";
    for await (const chunk of mockLLM.streamChat({ messages })) {
      if (chunk.type === "text") response += chunk.delta;
    }

    expect(response).toContain("Hello");
    expect(response).toContain("wave_small");
  });

  it("should simulate TTS with proper timing", async () => {
    const mockTTS = new MockTTS();
    const start = Date.now();

    await mockTTS.speak({ text: "Test message" });

    const duration = Date.now() - start;
    expect(duration).toBeGreaterThan(0);
  });
});
```
MockLLM includes 8 built-in responses with various animations. These are randomly selected when no specific keyword matches, ensuring variety during development.
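The keyword-then-fallback routing can be pictured with a small self-contained sketch. This is illustrative only: the response table and trigger format come from the docs above, but `mockReply`, `KEYWORD_RESPONSES`, and `FALLBACKS` are hypothetical names, and MockLLM's real implementation may differ:

```typescript
// Illustrative keyword router: match known keywords first, otherwise pick a
// random built-in fallback (as MockLLM's docs describe).
const KEYWORD_RESPONSES: Array<{ keywords: string[]; reply: string }> = [
  { keywords: ["hello", "hi", "hey"], reply: "Hello! *trigger_animation: wave_small*" },
  { keywords: ["dance", "move"], reply: "I'd love to dance! *trigger_animation: swing_dance*" },
];

const FALLBACKS = [
  "Interesting! *trigger_animation: thinking*",
  "Tell me more! *trigger_animation: nod_yes*",
];

function mockReply(message: string): string {
  const lower = message.toLowerCase();
  const hit = KEYWORD_RESPONSES.find((r) =>
    r.keywords.some((k) => lower.includes(k)),
  );
  // No keyword match: random fallback keeps dev conversations varied.
  return hit ? hit.reply : FALLBACKS[Math.floor(Math.random() * FALLBACKS.length)];
}

console.log(mockReply("hey there")); // "Hello! *trigger_animation: wave_small*"
```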
Add your own responses and behaviors:
```ts
import { MockLLM } from "@khaveeai/providers-mock";

class CustomMockLLM extends MockLLM {
  async *streamChat({ messages }: { messages: any[] }) {
    const lastMessage = messages[messages.length - 1]?.content || "";

    // Add custom logic
    if (lastMessage.includes("your-keyword")) {
      const response = "Your custom response! *trigger_animation: your_animation*";
      for (const char of response) {
        await new Promise((resolve) => setTimeout(resolve, 30));
        yield { type: "text", delta: char };
      }
      return;
    }

    // Fall back to default behavior
    yield* super.streamChat({ messages });
  }
}

// Use the custom implementation
const config = { llm: new CustomMockLLM() };
```
Adjust speech simulation duration:
```ts
import { MockTTS } from "@khaveeai/providers-mock";

class CustomMockTTS extends MockTTS {
  async speak({ text, voice = "custom-voice" }: { text: string; voice?: string }) {
    console.log(`Speaking with ${voice}: "${text}"`);

    // Custom timing logic: 120 words per minute
    const words = text.split(" ").length;
    const duration = (words / 120) * 60 * 1000;
    await new Promise((resolve) => setTimeout(resolve, duration));

    console.log("Done speaking");
  }
}
```
Use mock providers in development, real providers in production:
```tsx
const getConfig = () => {
  if (process.env.NODE_ENV === "development") {
    return {
      llm: new MockLLM(),
      tts: new MockTTS(),
    };
  }

  return {
    realtime: new OpenAIRealtimeProvider({
      apiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY!,
    }),
  };
};

<KhaveeProvider config={getConfig()}>
  {/* Your app */}
</KhaveeProvider>
```
Extract animation commands from responses:
```ts
const extractAnimations = (text: string): string[] => {
  const matches = text.matchAll(/\*trigger_animation:\s*(\w+)\*/g);
  return Array.from(matches, (m) => m[1]);
};

// Usage
const animations = extractAnimations(response);
animations.forEach((anim) => animate(anim));
```
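Firing every extracted animation at once can make triggers overlap. A hedged sketch of sequential playback, where `playSequentially` is a hypothetical helper and `animate` plus the delay are placeholders for your own animation system:

```typescript
// Illustrative: play extracted animations one at a time with a pause between
// them, instead of triggering all of them simultaneously.
const extractTriggers = (text: string): string[] =>
  Array.from(text.matchAll(/\*trigger_animation:\s*(\w+)\*/g), (m) => m[1]);

async function playSequentially(
  text: string,
  animate: (name: string) => void,
  delayMs = 1500,
): Promise<void> {
  for (const name of extractTriggers(text)) {
    animate(name); // hand off to your VRM animation system
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```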
Add realistic delays between interactions:
```ts
const handleChat = async (message: string) => {
  // Simulate "thinking" time before the response starts
  await new Promise((resolve) => setTimeout(resolve, 500));

  // Stream response
  for await (const chunk of mockLLM.streamChat({
    messages: [{ role: "user", content: message }],
  })) {
    // Process chunk
  }
};
```
Enable verbose logging:
```ts
import { MockLLM, MockTTS } from "@khaveeai/providers-mock";

const mockLLM = new MockLLM();
const mockTTS = new MockTTS();

// All activity is logged to the browser console automatically. Watch for:
// - 🔊 TTS speaking events
// - 👄 Viseme simulations
// - ⏱️ Duration estimates
// - ✅ Completion confirmations
```
Full TypeScript support with proper interfaces:
```ts
import type { LLMProvider, TTSProvider } from "@khaveeai/core";
import { MockLLM, MockTTS } from "@khaveeai/providers-mock";

const llm: LLMProvider = new MockLLM();
const tts: TTSProvider = new MockTTS();

// Type-safe streaming
async function chat(messages: Array<{ role: string; content: string }>) {
  for await (const chunk of llm.streamChat({ messages })) {
    if (chunk.type === "text") {
      console.log(chunk.delta); // TypeScript knows this is a string
    }
  }
}
```
Check out the complete examples in the examples directory:

- basic-mock - Simple mock provider setup
- animation-testing - Testing animations with mock responses
- development-workflow - Development environment setup

We welcome contributions! Please see our Contributing Guide.
MIT © KhaveeAI
Need help? Open an issue or check our documentation.