πŸŽ™οΈ LipSyncEngine.js

High-quality lip-sync animation from audio in the browser

WebAssembly port of Rhubarb Lip Sync with TypeScript support.

✨ Features

  • πŸš€ High Performance - Runs natively in browser via WebAssembly
  • 🎯 Accurate - Uses PocketSphinx speech recognition for precise phoneme detection
  • πŸ“¦ Small Bundle - Only ~80KB JavaScript + 2.2MB WASM + models
  • πŸ”§ TypeScript - Full type definitions included
  • 🌐 Framework-Agnostic - Works with React, Vue, Svelte, vanilla JS, and any framework
  • 🧡 Web Workers - Non-blocking analysis with worker pool
  • πŸ”„ Streaming Support - Dynamic real-time chunk processing for live audio
  • 🎨 Complete API - Audio utilities, format conversion, microphone recording
  • πŸ“± Browser-Native - No server required, runs entirely client-side

πŸ“¦ Installation

npm install lip-sync-engine

πŸš€ Quick Start

Vanilla JavaScript / TypeScript

import { analyze, recordAudio } from 'lip-sync-engine';

// Record audio from microphone
const { pcm16 } = await recordAudio(5000); // 5 seconds

// Analyze
const result = await analyze(pcm16, {
  dialogText: "Hello world", // Optional, improves accuracy
  sampleRate: 16000
});

// Use mouth cues for animation
result.mouthCues.forEach(cue => {
  console.log(`${cue.start}s - ${cue.end}s: ${cue.value}`);
  // Output: 0.00s - 0.35s: X
  //         0.35s - 0.50s: D
  //         0.50s - 0.85s: B
  //         ...
});

React

import { useState, useEffect, useRef } from 'react';
import { LipSyncEngine, recordAudio } from 'lip-sync-engine';

function useLipSyncEngine() {
  const [result, setResult] = useState(null);
  const lipSyncEngineRef = useRef(LipSyncEngine.getInstance());

  useEffect(() => {
    lipSyncEngineRef.current.init();
    return () => lipSyncEngineRef.current.destroy();
  }, []);

  const analyze = async (pcm16, options) => {
    const result = await lipSyncEngineRef.current.analyze(pcm16, options);
    setResult(result);
  };

  return { analyze, result };
}

function MyComponent() {
  const { analyze, result } = useLipSyncEngine();

  const handleRecord = async () => {
    const { pcm16 } = await recordAudio(5000);
    await analyze(pcm16, { dialogText: "Hello world" });
  };

  return (
    <div>
      <button onClick={handleRecord}>Record & Analyze</button>
      {result && <div>Found {result.mouthCues.length} mouth cues!</div>}
    </div>
  );
}

See examples/react for a complete example.

Vue

<script setup lang="ts">
import { ref, onMounted, onUnmounted } from 'vue';
import { LipSyncEngine, recordAudio } from 'lip-sync-engine';

const result = ref(null);
const lipSyncEngine = LipSyncEngine.getInstance();

onMounted(() => lipSyncEngine.init());
onUnmounted(() => lipSyncEngine.destroy());

const handleRecord = async () => {
  const { pcm16 } = await recordAudio(5000);
  result.value = await lipSyncEngine.analyze(pcm16, { dialogText: "Hello world" });
};
</script>

<template>
  <div>
    <button @click="handleRecord">Record & Analyze</button>
    <div v-if="result">Found {{ result.mouthCues.length }} mouth cues!</div>
  </div>
</template>

See examples/vue for a complete example.

Svelte

<script>
import { onDestroy } from 'svelte';
import { writable } from 'svelte/store';
import { LipSyncEngine, recordAudio } from 'lip-sync-engine';

const result = writable(null);
const lipSyncEngine = LipSyncEngine.getInstance();
lipSyncEngine.init();

// Clean up the engine when the component is destroyed (matches the React/Vue examples)
onDestroy(() => lipSyncEngine.destroy());

async function handleRecord() {
  const { pcm16 } = await recordAudio(5000);
  const res = await lipSyncEngine.analyze(pcm16, { dialogText: "Hello world" });
  result.set(res);
}
</script>

<button on:click={handleRecord}>Record & Analyze</button>
{#if $result}
  <div>Found {$result.mouthCues.length} mouth cues!</div>
{/if}

See examples/svelte for a complete example.

πŸ“š Documentation

🎨 Mouth Shapes (Visemes)

LipSyncEngine.js generates 9 mouth shapes based on Preston Blair's phoneme categorization:

Shape  Description     Example Sounds
X      Closed/Rest     Silence
A      Open            ah, aa, aw
B      Lips together   p, b, m
C      Rounded         sh, ch, zh
D      Tongue-teeth    th, dh, t, d, n, l
E      Slightly open   eh, ae, uh
F      F/V sound       f, v
G      Open back       k, g, ng
H      Wide open       ee, ih, ey
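
A typical way to drive an animation with these shapes is to map each viseme to a sprite frame and swap the frame while the audio plays. A minimal sketch using the MouthCue shape from the Types section below; the sprite path pattern and the audio/img element lookups are illustrative assumptions, not part of the API:

// Sketch: swap a mouth sprite based on the cue covering the current playback time.
// Assumes sprites named /sprites/mouth-X.png ... /sprites/mouth-H.png and existing
// <audio id="voice"> and <img id="mouth"> elements on the page.
import type { MouthCue } from 'lip-sync-engine';

function playWithLipSync(mouthCues: MouthCue[]) {
  const audio = document.getElementById('voice') as HTMLAudioElement;
  const mouth = document.getElementById('mouth') as HTMLImageElement;

  const update = () => {
    const t = audio.currentTime;
    // Find the cue covering the current time; fall back to the rest shape X
    const cue = mouthCues.find(c => t >= c.start && t < c.end);
    mouth.src = `/sprites/mouth-${cue ? cue.value : 'X'}.png`;
    if (!audio.ended) requestAnimationFrame(update);
  };

  audio.play();
  requestAnimationFrame(update);
}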

πŸ”¬ How It Works

  1. Speech Recognition - PocketSphinx analyzes audio to detect phonemes
  2. Phoneme Mapping - Phonemes are mapped to Preston Blair mouth shapes
  3. Timing Optimization - Animation is smoothed for natural transitions
  4. JSON Output - Returns timestamped mouth shape cues
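
The timestamped JSON output (step 4) looks like this for a short clip; the values below are purely illustrative, matching the Quick Start output above:

{
  "mouthCues": [
    { "start": 0.00, "end": 0.35, "value": "X" },
    { "start": 0.35, "end": 0.50, "value": "D" },
    { "start": 0.50, "end": 0.85, "value": "B" }
  ]
}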

πŸ“Š API Overview

Core Functions

// Simple one-off analysis
import { analyze } from 'lip-sync-engine';
const result = await analyze(pcm16, options);

// Async analysis (non-blocking)
import { analyzeAsync } from 'lip-sync-engine';
const result = await analyzeAsync(pcm16, options);

// Using the main class
import { LipSyncEngine } from 'lip-sync-engine';
const lipSyncEngine = LipSyncEngine.getInstance();
await lipSyncEngine.init();
const result = await lipSyncEngine.analyze(pcm16, options);

Streaming Analysis (Real-Time)

import { WorkerPool } from 'lip-sync-engine';

const pool = WorkerPool.getInstance(4);
await pool.init({ /* paths */ });
await pool.warmup(); // Pre-create workers

// Create streaming analyzer
const stream = pool.createStreamAnalyzer({
  dialogText: "Expected dialog",
  sampleRate: 16000
});

// Add chunks as they arrive from WebSocket, MediaRecorder, etc.
for await (const chunk of audioStream) {
  stream.addChunk(chunk); // Non-blocking!
}

// Get all results in order
const results = await stream.finalize();

See Streaming Analysis Guide for complete usage patterns.
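
If you do not have a live source handy, you can exercise the same API by slicing an already-loaded file into chunks. A rough sketch; the one-second chunk size and the Int16Array chunk format passed to addChunk are assumptions, not documented requirements:

// Sketch: feed the stream analyzer by slicing loaded PCM into chunks
import { WorkerPool, loadAudio } from 'lip-sync-engine';

const pool = WorkerPool.getInstance(4);
await pool.init({ /* paths */ });

const stream = pool.createStreamAnalyzer({
  dialogText: "Expected dialog",
  sampleRate: 16000
});

const { pcm16 } = await loadAudio('audio.mp3');
const chunkSamples = 16000; // ~1 second of 16 kHz samples (assumption)

for (let offset = 0; offset < pcm16.length; offset += chunkSamples) {
  stream.addChunk(pcm16.subarray(offset, offset + chunkSamples));
}

const results = await stream.finalize();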

Audio Utilities

import {
  recordAudio,
  loadAudio,
  audioBufferToInt16,
  float32ToInt16,
  resample
} from 'lip-sync-engine';

// Record from microphone
const recording = await recordAudio(5000); // 5 seconds -> { pcm16, audioBuffer }

// Load from file or URL
const { pcm16, audioBuffer } = await loadAudio('audio.mp3');

// Convert formats (float32Array is any Float32Array of samples)
const fromBuffer = audioBufferToInt16(audioBuffer, 16000);
const fromFloat32 = float32ToInt16(float32Array);
const resampled = resample(float32Array, 44100, 16000);
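
For example, an AudioBuffer you already have (say from decodeAudioData or an OfflineAudioContext) can be analyzed directly. A minimal sketch, assuming audioBufferToInt16 converts and resamples to the rate given as its second argument:

// Sketch: analyze an existing AudioBuffer
import { analyze, audioBufferToInt16 } from 'lip-sync-engine';

async function analyzeBuffer(audioBuffer: AudioBuffer, dialogText?: string) {
  // Convert to 16 kHz Int16 PCM, the format analyze expects
  const pcm16 = audioBufferToInt16(audioBuffer, 16000);
  return analyze(pcm16, { dialogText, sampleRate: 16000 });
}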

Types

interface MouthCue {
  start: number;  // seconds
  end: number;    // seconds
  value: string;  // X, A, B, C, D, E, F, G, or H
}

interface LipSyncEngineResult {
  mouthCues: MouthCue[];
  metadata?: {
    duration: number;
    sampleRate: number;
    dialogText?: string;
  };
}

interface LipSyncEngineOptions {
  dialogText?: string;  // Improves accuracy significantly
  sampleRate?: number;  // Default: 16000 (recommended)
}
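
Putting the types together, a result can be consumed like this (a sketch, assuming LipSyncEngineResult is exported as shown above; whether metadata is populated may depend on the analysis options):

import { analyze, recordAudio } from 'lip-sync-engine';
import type { LipSyncEngineResult } from 'lip-sync-engine';

const { pcm16 } = await recordAudio(3000);
const result: LipSyncEngineResult = await analyze(pcm16, { dialogText: 'Hello world' });

console.log(`Cues: ${result.mouthCues.length}`);
if (result.metadata) {
  console.log(`Duration: ${result.metadata.duration}s at ${result.metadata.sampleRate} Hz`);
}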

πŸ› οΈ Development

# Install dependencies
npm install

# Build WASM module
npm run build:wasm

# Build TypeScript
npm run build:ts

# Build everything
npm run build

# Type check
npm run typecheck

# Clean build artifacts
npm run clean

πŸ“„ License

MIT License - see LICENSE

πŸ™ Credits

πŸ› Issues

Report issues at https://github.com/biolimbo/lip-sync-engine/issues
