
@classytic/react-stream
World-class React hooks for browser media device control (camera, microphone, screen share, WebRTC)
Production-ready React 19 hooks for building pro-tier video apps (Discord, Teams, Meet).
Built on useSyncExternalStore for perfect hydration and zero-tearing state management.
pnpm add @classytic/react-stream
# or
npm install @classytic/react-stream
import { useMediaManager } from "@classytic/react-stream";
function Room() {
const {
camera,
microphone,
cameraStream,
isInitialized,
initialize,
toggleCamera,
toggleMicrophone,
switchAudioDevice,
switchVideoDevice,
getVideoTrack,
getAudioTrack,
} = useMediaManager();
return (
<div>
{!isInitialized ? (
<button onClick={initialize}>Start Media</button>
) : (
<>
<video
ref={(el) => el && (el.srcObject = cameraStream)}
autoPlay
muted
playsInline
/>
<button onClick={toggleCamera}>
{camera.status === "active" ? "Camera Off" : "Camera On"}
</button>
<button onClick={toggleMicrophone}>
{microphone.trackEnabled ? "Mute" : "Unmute"}
</button>
</>
)}
</div>
);
}
This library uses React 19's useSyncExternalStore for state management, ensuring consistent hydration and tear-free reads across concurrent renders.
┌─────────────────────────────────────────────────────────────────┐
│                           MediaStore                            │
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐            │
│  │ Microphone  │   │   Camera    │   │   Screen    │            │
│  │   Stream    │   │   Stream    │   │    Share    │            │
│  └─────────────┘   └─────────────┘   └─────────────┘            │
│         │                 │                 │                   │
│         └─────────────────┼─────────────────┘                   │
│                           ▼                                     │
│               ┌──────────────────────┐                          │
│               │ useSyncExternalStore │                          │
│               └──────────────────────┘                          │
│                           │                                     │
└───────────────────────────┼─────────────────────────────────────┘
                            ▼
                 ┌─────────────────────┐
                 │  React Components   │
                 └─────────────────────┘
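The external-store half of the diagram can be sketched as a plain subscribe/snapshot object, which a component then reads through React's useSyncExternalStore. The shape below is illustrative, not the library's actual internals (though `createMediaStore` is the name its debug logger uses):

```typescript
// Sketch: a minimal external store with the subscribe/getSnapshot contract
// that useSyncExternalStore expects.
type Listener = () => void;

function createMediaStore<T>(initial: T) {
  let state = initial;
  const listeners = new Set<Listener>();
  return {
    // Must return the same reference until state actually changes,
    // so React can skip re-renders on identical snapshots.
    getSnapshot: () => state,
    setState(next: T) {
      state = next;
      listeners.forEach((l) => l()); // notify all React subscribers
    },
    // Returns an unsubscribe function, as useSyncExternalStore requires.
    subscribe(listener: Listener) {
      listeners.add(listener);
      return () => listeners.delete(listener);
    },
  };
}

// In a component:
//   const snapshot = useSyncExternalStore(store.subscribe, store.getSnapshot);
```

Because every component reads through the same snapshot function, all subscribers see a single consistent state per render pass, which is what eliminates tearing.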
All action callbacks (toggleCamera, toggleMicrophone, etc.) are stable references that never change. This means:
They are safe to list in useEffect dependency arrays, and no useCallback wrapper is needed:
// ✅ Safe - callbacks are stable
useEffect(() => {
if (someCondition) {
toggleCamera();
}
}, [someCondition, toggleCamera]); // toggleCamera never changes
The library uses refs internally for mutable state that shouldn't trigger re-renders:
// Internal pattern - refs for control flow, state for UI
const isActiveRef = useRef(false); // For internal checks
const [isActive, setIsActive] = useState(false); // For UI updates
useMediaManager

The main orchestration hook for camera, microphone, and screen share.
const {
// State (reactive - triggers re-renders)
camera, // { status, stream, trackEnabled, error }
microphone, // { status, stream, trackEnabled, error }
screen, // { status, stream, trackEnabled, error }
cameraStream, // MediaStream | null
screenStream, // MediaStream | null
audioLevel, // number (0-100)
isSpeaking, // boolean
isInitialized, // boolean
isInitializing, // boolean
// Actions (stable - never change identity)
initialize, // () => Promise<boolean>
toggleMicrophone, // () => void
toggleCamera, // () => Promise<void>
toggleScreenShare,// () => Promise<void>
switchAudioDevice,// (deviceId: string) => Promise<boolean>
switchVideoDevice,// (deviceId: string) => Promise<boolean>
cleanup, // () => void
// Track access (for WebRTC)
getVideoTrack, // () => MediaStreamTrack | null
getAudioTrack, // () => MediaStreamTrack | null
} = useMediaManager(options);
Options:
interface UseMediaManagerOptions {
videoConstraints?: MediaTrackConstraints | false; // false = no camera
audioConstraints?: MediaTrackConstraints | false; // false = no mic
screenShareOptions?: DisplayMediaStreamOptions;
autoInitialize?: boolean; // Auto-request permissions on mount
// Callbacks
onMicrophoneChange?: (state: DeviceState) => void;
onCameraChange?: (state: DeviceState) => void;
onScreenShareChange?: (state: DeviceState) => void;
onAudioLevel?: (data: AudioLevelData) => void;
onError?: (type: MediaDeviceType, error: Error) => void;
}
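For example, an audio-only setup would disable video and opt into mic processing. This is a sketch using the option names from the interface above; the constraint fields are narrowed to the two shown so the snippet stands alone outside a browser project:

```typescript
// Sketch: audio-only options. In a real project the constraint fields
// come from the DOM lib's MediaTrackConstraints.
interface AudioOnlyOptions {
  videoConstraints: false;                  // false disables the camera entirely
  audioConstraints: { echoCancellation: boolean; noiseSuppression: boolean };
  autoInitialize: boolean;                  // request the mic on mount
  onError: (type: string, error: Error) => void;
}

const audioOnlyOptions: AudioOnlyOptions = {
  videoConstraints: false,
  audioConstraints: { echoCancellation: true, noiseSuppression: true },
  autoInitialize: true,
  onError: (type, error) => console.error(`${type} failed:`, error.message),
};
```

Passing `false` for videoConstraints means no video is requested at all, so the user never sees a camera permission prompt.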
useDevices

Enumerate available media devices.
const {
videoInputs, // DeviceInfo[] - cameras
audioInputs, // DeviceInfo[] - microphones
audioOutputs, // DeviceInfo[] - speakers
allDevices, // DeviceInfo[] - all devices
isLoading, // boolean
error, // Error | null
refresh, // () => Promise<void>
} = useDevices();
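One practical detail when rendering these lists: before getUserMedia permission is granted, browsers return empty `label` strings for devices, so a picker needs a fallback. A sketch (the `DeviceInfo` fields used here are assumed to mirror the browser's `MediaDeviceInfo`):

```typescript
// Sketch: label fallback for devices whose label is empty because
// permission has not been granted yet.
interface DeviceInfo {
  deviceId: string;
  label: string;
}

function deviceLabel(d: DeviceInfo, index: number, kind: string): string {
  // e.g. "Camera 2" when the browser withholds the real label
  return d.label || `${kind} ${index + 1}`;
}
```

Calling `refresh()` after permissions are granted re-enumerates devices and fills in the real labels.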
useAudioAnalyzer

Real-time audio level monitoring with voice activity detection.
const {
level, // number (0-100) - normalized audio level
raw, // number - raw FFT average
isSpeaking, // boolean - above threshold
isActive, // boolean - analyzer running
start, // () => void
stop, // () => void
} = useAudioAnalyzer(stream, {
fftSize: 256,
smoothingTimeConstant: 0.8,
speakingThreshold: 5,
updateInterval: 100, // ms
});
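The relationship between `raw`, `level`, and `isSpeaking` can be sketched as follows. This is an assumption about the normalization (byte-frequency FFT averages fall in 0-255), not the library's exact internals:

```typescript
// Sketch: normalize a raw FFT byte average (0-255) into the 0-100 level,
// then apply the speakingThreshold option for voice activity detection.
function toLevel(rawAverage: number): number {
  return Math.min(100, Math.round((rawAverage / 255) * 100));
}

function isSpeaking(level: number, speakingThreshold = 5): boolean {
  return level > speakingThreshold;
}
```

With the default threshold of 5, even quiet speech registers, while line noise near zero does not; raise `speakingThreshold` for noisy environments.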
import { useMediaManager } from "@classytic/react-stream";
import { useLocalParticipant } from "@livekit/components-react";
function LiveKitRoom() {
const { getVideoTrack, getAudioTrack, toggleCamera, toggleMicrophone } =
useMediaManager({ autoInitialize: true });
const { localParticipant } = useLocalParticipant();
// Publish tracks to LiveKit
useEffect(() => {
const videoTrack = getVideoTrack();
const audioTrack = getAudioTrack();
if (videoTrack && localParticipant) {
localParticipant.publishTrack(videoTrack, { name: 'camera' });
}
if (audioTrack && localParticipant) {
localParticipant.publishTrack(audioTrack, { name: 'microphone' });
}
}, [localParticipant, getVideoTrack, getAudioTrack]);
return (
<div>
<button onClick={toggleCamera}>Toggle Camera</button>
<button onClick={toggleMicrophone}>Toggle Mic</button>
</div>
);
}
import { useMediaManager } from "@classytic/react-stream";
function WebRTCCall() {
const { getVideoTrack, getAudioTrack, isInitialized } = useMediaManager();
const pcRef = useRef<RTCPeerConnection | null>(null);
const sendersRef = useRef<Map<string, RTCRtpSender>>(new Map());
// Setup peer connection
useEffect(() => {
pcRef.current = new RTCPeerConnection({
iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});
return () => pcRef.current?.close();
}, []);
// Add tracks when ready
useEffect(() => {
if (!isInitialized || !pcRef.current) return;
const videoTrack = getVideoTrack();
const audioTrack = getAudioTrack();
if (videoTrack) {
const sender = pcRef.current.addTrack(videoTrack);
sendersRef.current.set('video', sender);
}
if (audioTrack) {
const sender = pcRef.current.addTrack(audioTrack);
sendersRef.current.set('audio', sender);
}
}, [isInitialized, getVideoTrack, getAudioTrack]);
// Handle track replacement (e.g., device switch)
const replaceTrack = async (kind: 'video' | 'audio') => {
const sender = sendersRef.current.get(kind);
const track = kind === 'video' ? getVideoTrack() : getAudioTrack();
if (sender && track) {
await sender.replaceTrack(track);
}
};
return <div>...</div>;
}
import { useMediaManager } from "@classytic/react-stream";
import { useDaily } from "@daily-co/daily-react";
function DailyRoom() {
const daily = useDaily();
const { getVideoTrack, getAudioTrack, switchVideoDevice } = useMediaManager();
// Update Daily when tracks change
useEffect(() => {
if (!daily) return;
const videoTrack = getVideoTrack();
if (videoTrack) {
daily.setLocalVideo(true);
}
}, [daily, getVideoTrack]);
// Switch camera
const handleCameraSwitch = async (deviceId: string) => {
await switchVideoDevice(deviceId);
// Daily will automatically pick up the new track
};
return <div>...</div>;
}
Problem: Creating new MediaStream() during render causes infinite loops.
// ❌ BAD - creates new MediaStream every render
function BadExample({ track }) {
const stream = new MediaStream([track]); // New object every render!
return <video srcObject={stream} />;
}
// ✅ GOOD - memoize the MediaStream
function GoodExample({ track }) {
const stream = useMemo(
() => track ? new MediaStream([track]) : null,
[track]
);
return <video srcObject={stream} />;
}
Problem: Inline objects change identity every render.
// ❌ BAD - options object changes every render
function BadExample() {
const { level } = useAudioAnalyzer(stream, {
fftSize: 256, // New object every render!
});
}
// ✅ GOOD - stable options reference
const ANALYZER_OPTIONS = { fftSize: 256 };
function GoodExample() {
const { level } = useAudioAnalyzer(stream, ANALYZER_OPTIONS);
}
// ✅ ALSO GOOD - useMemo for dynamic options
function GoodExample2({ fftSize }) {
const options = useMemo(() => ({ fftSize }), [fftSize]);
const { level } = useAudioAnalyzer(stream, options);
}
Problem: Media tracks keep running after component unmount.
// ❌ BAD - tracks leak
function BadExample() {
const { initialize } = useMediaManager();
useEffect(() => { initialize(); }, []);
// No cleanup!
}
// ✅ GOOD - cleanup in useEffect
function GoodExample() {
const { initialize, cleanup } = useMediaManager();
useEffect(() => {
initialize();
return () => cleanup(); // Stop tracks on unmount
}, [initialize, cleanup]);
}
Problem: Using state in a callback's dependencies when that callback sets the same state.
// ❌ BAD - infinite loop
const start = useCallback(() => {
if (isActive) return; // Depends on isActive
setIsActive(true); // Sets isActive
}, [isActive]); // isActive changes → start changes → effect runs
// ✅ GOOD - use ref for internal checks
const isActiveRef = useRef(false);
const start = useCallback(() => {
if (isActiveRef.current) return; // Check ref
isActiveRef.current = true;
setIsActive(true); // State for UI only
}, []); // Stable callback
Problem: Camera/mic gets unplugged but UI doesn't update.
// ✅ GOOD - handle device changes
function GoodExample() {
const { camera, microphone } = useMediaManager({
autoSwitchDevices: true, // Auto-reacquire on disconnect
onError: (type, error) => {
console.error(`${type} error:`, error);
// Show user notification
},
});
// Check for ended tracks
if (camera.status === 'error') {
return <div>Camera disconnected: {camera.error}</div>;
}
}
Problem: Forgetting to include system audio in screen share.
// ✅ GOOD - request system audio
const { toggleScreenShare } = useMediaManager({
screenShareOptions: {
video: true,
audio: true, // Include system audio (tab audio)
},
});
type DeviceStatus =
| 'idle' // Not started
| 'acquiring' // Requesting permission
| 'active' // Track is live and enabled
| 'muted' // Track is live but disabled
| 'stopped' // Track was stopped (camera off)
| 'error'; // Error occurred
interface DeviceState {
status: DeviceStatus;
stream: MediaStream | null;
trackEnabled: boolean;
error: string | null;
}
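A typical consumer maps each status to a UI label, for example on a camera badge. A sketch (the label strings are illustrative; the union is the one documented above):

```typescript
// The documented status union, restated so the snippet is self-contained.
type DeviceStatus = "idle" | "acquiring" | "active" | "muted" | "stopped" | "error";

// Sketch: one label per status. The exhaustive switch means TypeScript
// flags this function if a new status is ever added to the union.
function statusLabel(status: DeviceStatus): string {
  switch (status) {
    case "idle": return "Not started";
    case "acquiring": return "Requesting permission…";
    case "active": return "Live";
    case "muted": return "Muted";
    case "stopped": return "Off";
    case "error": return "Error";
  }
}
```

Note that `muted` (track live but disabled) and `stopped` (track ended) are distinct: un-muting is instant, while restarting a stopped track requires re-acquiring the device.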
// Only import what you need for smaller bundles
import { useDevices } from '@classytic/react-stream/devices';
import { useConstraints, QUALITY_PRESETS } from '@classytic/react-stream/constraints';
import { useScreenShare } from '@classytic/react-stream/screen';
import { useAudioAnalyzer } from '@classytic/react-stream/audio';
import { useTrackPublisher } from '@classytic/react-stream/webrtc';
import { useNoiseSuppression } from '@classytic/react-stream/fx/audio';
import { useWorkerProcessor } from '@classytic/react-stream/fx/processor';
import { MediaProvider, useMediaContext } from '@classytic/react-stream/context';
import { useNoiseSuppression } from "@classytic/react-stream/fx/audio";
function NoiseControl({ micTrack }) {
const ns = useNoiseSuppression({
wasmUrl: "/models/rnnoise.wasm",
onReady: () => console.log("NS ready"),
onError: (err) => console.error(err),
});
// Start processing
const enableNS = () => ns.start(micTrack);
// Use ns.processedTrack for WebRTC
return (
<button onClick={enableNS} disabled={ns.isActive}>
{ns.isActive ? "NS Active" : "Enable Noise Suppression"}
</button>
);
}
import { useWorkerProcessor } from "@classytic/react-stream/fx/processor";
function BackgroundBlur({ videoTrack }) {
const processor = useWorkerProcessor({
workerUrl: "/workers/blur-worker.js",
config: { blurRadius: 15 },
onReady: () => console.log("Worker ready"),
});
// processor.processedTrack is the blurred video
return (
<button onClick={() => processor.start(videoTrack)}>
Enable Background Blur
</button>
);
}
| Feature | Chrome | Firefox | Safari | Edge |
|---|---|---|---|---|
| Core Media | 74+ | 66+ | 14+ | 79+ |
| Worker Processing | 94+ | - | - | 94+ |
| WebTransport | 97+ | 114+ | - | 97+ |
| WebCodecs | 94+ | 130+ | 16.4+ | 94+ |
| AudioWorklet | 66+ | 76+ | 14.1+ | 79+ |
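Because support for the advanced features is uneven, it is worth probing at runtime rather than relying on version numbers. A sketch (the global names checked are the standard entry points for each API; probing through `globalThis` lets the snippet also run outside a browser, where every check simply reports false):

```typescript
// Sketch: runtime feature detection matching the table above.
function hasFeature(name: string): boolean {
  return typeof (globalThis as Record<string, unknown>)[name] !== "undefined";
}

const support = {
  workerProcessing: hasFeature("MediaStreamTrackProcessor"), // Chrome/Edge 94+
  webTransport: hasFeature("WebTransport"),                  // Chrome/Edge 97+, Firefox 114+
  webCodecs: hasFeature("VideoEncoder"),                     // Chrome/Edge 94+, Safari 16.4+
  audioWorklet: hasFeature("AudioWorklet"),                  // broadly available
};
```

Gating effects like background blur on `support.workerProcessing` lets the rest of the library keep working in Firefox and Safari.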
Enable debug logging to see internal state changes:
import { enableDebug, disableDebug } from "@classytic/react-stream";
// In development
if (process.env.NODE_ENV === 'development') {
enableDebug();
}
// Or enable specific loggers
enableDebug('useMediaManager');
enableDebug('createMediaStore');
This package includes an agent skill for AI coding agents (Claude Code, Cursor, Copilot, Cline, etc.):
npx skills add classytic/rtc --skill react-stream
The skill provides context-aware guidance for using the library — API patterns, WebRTC integration, common pitfalls, and browser support.
MIT