
@loqalabs/loqa-audio-dsp
Production-grade Expo native module for audio DSP analysis (FFT, pitch detection, formant extraction, spectral analysis)
@loqalabs/loqa-audio-dsp provides high-performance audio Digital Signal Processing (DSP) functions for React Native/Expo applications. It wraps the loqa-voice-dsp Rust crate with native iOS (Swift) and Android (Kotlin) bindings.
Features:

- computeFFT(): Fast Fourier Transform for frequency spectrum
- detectPitch(): YIN algorithm for fundamental frequency
- extractFormants(): LPC-based formant analysis (F1, F2, F3)
- analyzeSpectrum(): Spectral centroid, tilt, rolloff

Works seamlessly with @loqalabs/loqa-audio-bridge for real-time audio streaming:
```typescript
import { startAudioStream, addAudioSampleListener } from '@loqalabs/loqa-audio-bridge';
import { detectPitch, computeFFT } from '@loqalabs/loqa-audio-dsp';

// Stream audio from microphone
await startAudioStream({ sampleRate: 16000, bufferSize: 2048 });

// Analyze each audio buffer
addAudioSampleListener(async (event) => {
  const pitch = await detectPitch(event.samples, event.sampleRate);
  const spectrum = await computeFFT(event.samples, { fftSize: 2048 });
  console.log(`Detected pitch: ${pitch.frequency} Hz (confidence: ${pitch.confidence})`);
});
```
Installation:

```bash
npx expo install @loqalabs/loqa-audio-dsp
```
```typescript
import { computeFFT } from '@loqalabs/loqa-audio-dsp';

// Example: Analyze audio frequency content
const audioBuffer = new Float32Array(2048); // Your audio samples
// ... fill buffer with audio data from microphone or file ...

// Compute FFT with options
const result = await computeFFT(audioBuffer, {
  fftSize: 2048,
  windowType: 'hanning',
  includePhase: false,
});

// Find the dominant frequency
const maxMagnitudeIndex = result.magnitude.indexOf(Math.max(...result.magnitude));
const dominantFrequency = result.frequencies[maxMagnitudeIndex];
console.log(`Dominant frequency: ${dominantFrequency.toFixed(2)} Hz`);
console.log(`Magnitude bins: ${result.magnitude.length}`);
console.log(
  `Frequency range: ${result.frequencies[0]} Hz - ${
    result.frequencies[result.frequencies.length - 1]
  } Hz`
);
```
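The magnitudes above are linear; for display, a decibel scale is often easier to read. A minimal sketch of the conversion (the 0 dB reference and the floor value here are arbitrary choices for illustration, not part of the loqa-audio-dsp API):

```typescript
// Convert linear FFT magnitudes to decibels for display.
// Zero (or negative) magnitudes are clamped to a floor to avoid -Infinity.
function toDecibels(magnitude: ArrayLike<number>, floorDb = -120): number[] {
  const out: number[] = [];
  for (let i = 0; i < magnitude.length; i++) {
    const m = magnitude[i];
    out.push(m > 0 ? Math.max(floorDb, 20 * Math.log10(m)) : floorDb);
  }
  return out;
}
```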
```typescript
import { detectPitch } from '@loqalabs/loqa-audio-dsp';

// Example: Real-time pitch detection for a tuner app
const audioBuffer = new Float32Array(2048); // Your audio samples
// ... fill buffer with microphone data ...

const pitch = await detectPitch(audioBuffer, 44100, {
  minFrequency: 80, // Minimum pitch (human voice range)
  maxFrequency: 400, // Maximum pitch (human voice range)
});

if (pitch.isVoiced) {
  console.log(`Detected pitch: ${pitch.frequency.toFixed(2)} Hz`);
  console.log(`Confidence: ${(pitch.confidence * 100).toFixed(1)}%`);

  // Convert to musical note for tuner display
  const noteNames = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];
  const a4 = 440;
  const semitones = 12 * Math.log2(pitch.frequency / a4);
  // A4 is index 9 in noteNames; the double modulo keeps the index non-negative
  const noteIndex = (((Math.round(semitones) + 9) % 12) + 12) % 12;
  console.log(`Closest note: ${noteNames[noteIndex]}`);
} else {
  console.log('No pitch detected (silence or unvoiced segment)');
}
```
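For a tuner needle, the note name alone isn't enough: you usually also want the deviation from the nearest equal-tempered note in cents (100 cents = one semitone). A small hypothetical helper, not part of the package API:

```typescript
// Cents deviation from the nearest equal-tempered note, relative to A4.
// Positive means sharp, negative means flat.
function centsFromNearestNote(frequency: number, a4 = 440): number {
  const semitones = 12 * Math.log2(frequency / a4);
  return (semitones - Math.round(semitones)) * 100;
}
```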
```typescript
import { extractFormants } from '@loqalabs/loqa-audio-dsp';

// Example: Analyze vowel formants for pronunciation feedback
const voiceBuffer = new Float32Array(2048); // Your voice samples
// ... fill buffer with voiced audio (vowel sound) ...

const formants = await extractFormants(voiceBuffer, 16000, {
  lpcOrder: 14, // Optional: defaults to sampleRate/1000 + 2
});

console.log(`Formant frequencies:`);
console.log(
  `  F1: ${formants.f1.toFixed(1)} Hz (bandwidth: ${formants.bandwidths.f1.toFixed(1)} Hz)`
);
console.log(
  `  F2: ${formants.f2.toFixed(1)} Hz (bandwidth: ${formants.bandwidths.f2.toFixed(1)} Hz)`
);
console.log(
  `  F3: ${formants.f3.toFixed(1)} Hz (bandwidth: ${formants.bandwidths.f3.toFixed(1)} Hz)`
);

// Identify vowel based on F1/F2 values (simplified example)
if (formants.f1 < 400 && formants.f2 > 2000) {
  console.log('Detected vowel: /i/ (as in "see")');
} else if (formants.f1 > 700 && formants.f2 < 1200) {
  console.log('Detected vowel: /a/ (as in "father")');
}
```
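The two hard-coded thresholds above generalize naturally to a nearest-neighbour lookup in (F1, F2) space. The formant targets below are approximate textbook averages for adult male speakers and are illustrative only, not part of the loqa-audio-dsp API:

```typescript
// Rough average formant targets for a few English vowels (illustrative values).
const VOWEL_TARGETS: Record<string, { f1: number; f2: number }> = {
  '/i/': { f1: 270, f2: 2290 }, // "see"
  '/u/': { f1: 300, f2: 870 },  // "boot"
  '/a/': { f1: 730, f2: 1090 }, // "father"
  '/ae/': { f1: 660, f2: 1720 }, // "cat"
};

// Classify a vowel by Euclidean distance in (F1, F2) space.
function nearestVowel(f1: number, f2: number): string {
  let best = '';
  let bestDist = Infinity;
  for (const [vowel, t] of Object.entries(VOWEL_TARGETS)) {
    const dist = Math.hypot(f1 - t.f1, f2 - t.f2);
    if (dist < bestDist) {
      bestDist = dist;
      best = vowel;
    }
  }
  return best;
}
```

A production classifier would also normalize for speaker differences (e.g. vocal tract length), but the distance lookup is the core idea.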
```typescript
import { analyzeSpectrum } from '@loqalabs/loqa-audio-dsp';

// Example: Analyze spectral features for audio classification
const audioBuffer = new Float32Array(2048); // Your audio samples
// ... fill buffer with audio data ...

const spectrum = await analyzeSpectrum(audioBuffer, 44100);

console.log(`Spectral centroid: ${spectrum.centroid.toFixed(1)} Hz`);
console.log(`Spectral rolloff: ${spectrum.rolloff.toFixed(1)} Hz`);
console.log(`Spectral tilt: ${spectrum.tilt.toFixed(3)}`);

// Use spectral features for audio classification
if (spectrum.centroid > 3000) {
  console.log('Bright sound (high-frequency content)');
} else if (spectrum.centroid < 1500) {
  console.log('Dark sound (low-frequency content)');
}

// Spectral tilt indicates timbre: negative = more bass, positive = more treble
if (spectrum.tilt < -0.01) {
  console.log('Bass-heavy timbre');
} else if (spectrum.tilt > 0.01) {
  console.log('Treble-heavy timbre');
}
```
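For intuition, the spectral centroid is just the magnitude-weighted mean of the bin frequencies. A sketch of that computation, assuming magnitude/frequencies arrays like those returned by computeFFT (analyzeSpectrum computes this natively, so you would not normally do it by hand):

```typescript
// Spectral centroid: magnitude-weighted mean frequency of the spectrum.
// Returns 0 for an all-zero (silent) spectrum.
function spectralCentroid(
  magnitude: ArrayLike<number>,
  frequencies: ArrayLike<number>
): number {
  let weighted = 0;
  let total = 0;
  for (let i = 0; i < magnitude.length; i++) {
    weighted += frequencies[i] * magnitude[i];
    total += magnitude[i];
  }
  return total > 0 ? weighted / total : 0;
}
```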
```typescript
import { detectPitch, extractFormants, computeFFT } from '@loqalabs/loqa-audio-dsp';

// Example: Comprehensive voice analysis for coaching apps
async function analyzeVoice(samples: Float32Array, sampleRate: number) {
  // 1. Detect pitch
  const pitch = await detectPitch(samples, sampleRate, {
    minFrequency: 80,
    maxFrequency: 400,
  });

  // 2. Extract formants (for voiced segments only)
  let formants = null;
  if (pitch.isVoiced) {
    formants = await extractFormants(samples, sampleRate);
  }

  // 3. Compute frequency spectrum
  const fft = await computeFFT(samples, { fftSize: 2048 });

  return {
    pitch: {
      frequency: pitch.frequency,
      confidence: pitch.confidence,
      isVoiced: pitch.isVoiced,
    },
    formants: formants
      ? {
          f1: formants.f1,
          f2: formants.f2,
          f3: formants.f3,
        }
      : null,
    spectrum: {
      bins: fft.magnitude.length,
      dominantFreq: fft.frequencies[fft.magnitude.indexOf(Math.max(...fft.magnitude))],
    },
  };
}

// Usage
const result = await analyzeVoice(audioSamples, 16000);
console.log('Voice analysis:', result);
```
@loqalabs/loqa-audio-dsp wraps the high-performance loqa-voice-dsp Rust library:
```
┌─────────────────────────────────────────┐
│ React Native / Expo Application         │
│ ┌───────────────────────────────────┐   │
│ │ @loqalabs/loqa-audio-dsp (TS)     │   │
│ │ - TypeScript API                  │   │
│ └────────────┬──────────────────────┘   │
└──────────────┼──────────────────────────┘
               │ Expo Modules Core
      ┌────────┴────────┐
      │                 │
┌─────▼───────┐  ┌──────▼──────┐
│iOS (Swift)  │  │Android (Kt) │
│FFI bindings │  │JNI bindings │
└─────┬───────┘  └──────┬──────┘
      │                 │
      └────────┬────────┘
               │
      ┌────────▼───────┐
      │ loqa-voice-dsp │
      │ (Rust crate)   │
      │ - YIN pitch    │
      │ - LPC formants │
      │ - FFT/DFT      │
      └────────────────┘
```
DSP algorithms are implemented in Rust for optimal performance.
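As a rough illustration of what the crate's YIN pitch detector does internally, here is a heavily simplified TypeScript sketch of the YIN difference function and threshold search. The actual Rust implementation is optimized and handles parabolic interpolation and edge cases this sketch omits:

```typescript
// Simplified YIN pitch estimator (illustrative only).
// Returns an estimated fundamental frequency in Hz, or null if unvoiced.
function yinPitch(
  samples: Float32Array,
  sampleRate: number,
  threshold = 0.15
): number | null {
  const maxLag = Math.floor(samples.length / 2);

  // Step 1: difference function d(tau)
  const d = new Float64Array(maxLag);
  for (let tau = 1; tau < maxLag; tau++) {
    let sum = 0;
    for (let j = 0; j < maxLag; j++) {
      const diff = samples[j] - samples[j + tau];
      sum += diff * diff;
    }
    d[tau] = sum;
  }

  // Step 2: cumulative mean normalized difference d'(tau)
  const cmnd = new Float64Array(maxLag);
  cmnd[0] = 1;
  let runningSum = 0;
  for (let tau = 1; tau < maxLag; tau++) {
    runningSum += d[tau];
    cmnd[tau] = runningSum > 0 ? (d[tau] * tau) / runningSum : 1;
  }

  // Step 3: find the first dip below the absolute threshold,
  // then walk to its local minimum.
  for (let tau = 2; tau < maxLag; tau++) {
    if (cmnd[tau] < threshold) {
      while (tau + 1 < maxLag && cmnd[tau + 1] < cmnd[tau]) tau++;
      return sampleRate / tau;
    }
  }
  return null; // no clear periodicity: silence or unvoiced
}
```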
TypeScript Tests (Jest):

```bash
npm test              # Run all tests
npm run test:watch    # Run tests in watch mode
npm run test:coverage # Run tests with coverage report
```

iOS Tests (XCTest):

iOS tests are located in ios/Tests/ and run through the example app's Xcode test target:

```bash
cd example
npx expo run:ios
# Then run tests via Xcode's test navigator (Cmd+U)
```

Android Tests (JUnit):

Android tests are located in android/src/test/ and run through the example app's Gradle build:

```bash
cd example
npx expo run:android
# Tests run via Gradle: ./gradlew testDebugUnitTest
```
MIT License - see LICENSE for details.
Contributions welcome! Please see CONTRIBUTING.md for guidelines.
For real-time audio streaming capabilities, see @loqalabs/loqa-audio-bridge.