@classytic/react-stream

Production-ready React 19 hooks for building full-featured video apps like Discord, Teams, or Meet. Built on useSyncExternalStore for SSR-safe hydration and zero-tearing state management.

Installation

pnpm add @classytic/react-stream
# or
npm install @classytic/react-stream

Quick Start

import { useMediaManager } from "@classytic/react-stream";

function Room() {
  const {
    camera,
    microphone,
    cameraStream,
    isInitialized,
    initialize,
    toggleCamera,
    toggleMicrophone,
    switchAudioDevice,
    switchVideoDevice,
    getVideoTrack,
    getAudioTrack,
  } = useMediaManager();

  return (
    <div>
      {!isInitialized ? (
        <button onClick={initialize}>Start Media</button>
      ) : (
        <>
          <video
            ref={(el) => el && (el.srcObject = cameraStream)}
            autoPlay
            muted
            playsInline
          />
          <button onClick={toggleCamera}>
            {camera.status === "active" ? "Camera Off" : "Camera On"}
          </button>
          <button onClick={toggleMicrophone}>
            {microphone.trackEnabled ? "Mute" : "Unmute"}
          </button>
        </>
      )}
    </div>
  );
}

Architecture

Store Pattern with useSyncExternalStore

This library uses useSyncExternalStore (part of React since version 18) for state management, ensuring:

  • Zero tearing - State is always consistent across concurrent renders
  • SSR-safe - Proper hydration with getServerSnapshot
  • Granular subscriptions - Only re-render components that need updates
┌──────────────────────────────────────────────────┐
│                    MediaStore                    │
│  ┌────────────┐  ┌────────────┐  ┌────────────┐  │
│  │ Microphone │  │   Camera   │  │   Screen   │  │
│  │   Stream   │  │   Stream   │  │   Share    │  │
│  └────────────┘  └────────────┘  └────────────┘  │
│        │               │               │         │
│        └───────────────┼───────────────┘         │
│                        ▼                         │
│            ┌───────────────────────┐             │
│            │ useSyncExternalStore  │             │
│            └───────────────────────┘             │
│                        │                         │
└────────────────────────┼─────────────────────────┘
                         ▼
             ┌───────────────────────┐
             │   React Components    │
             └───────────────────────┘
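The store contract consumed by useSyncExternalStore can be sketched in a few lines. This is a generic illustration of the pattern, not the library's actual MediaStore implementation:

```javascript
// Minimal external-store sketch (illustrative, not the real MediaStore).
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getSnapshot: () => state,                  // current snapshot for React
    getServerSnapshot: () => initialState,     // SSR-safe initial value
    subscribe: (listener) => {                 // React calls this on mount
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe on unmount
    },
    setState: (partial) => {
      state = { ...state, ...partial };        // immutable update = new snapshot
      listeners.forEach((l) => l());           // notify all subscribers
    },
  };
}

// A component would then read it with:
//   useSyncExternalStore(store.subscribe, store.getSnapshot, store.getServerSnapshot)
const store = createStore({ cameraOn: false });
```

Because every update replaces the snapshot object immutably, concurrent renders that read `getSnapshot` at different times still see internally consistent state, which is what "zero tearing" refers to above.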

Callback Stability

All action callbacks (toggleCamera, toggleMicrophone, etc.) are stable references that never change. This means:

  • Safe to use in useEffect dependencies
  • Safe to pass to child components without useCallback wrapper
  • No infinite re-render loops from callback identity changes
// ✅ Safe - callbacks are stable
useEffect(() => {
  if (someCondition) {
    toggleCamera();
  }
}, [someCondition, toggleCamera]); // toggleCamera never changes

State vs Refs Pattern

The library uses refs internally for mutable state that shouldn't trigger re-renders:

// Internal pattern - refs for control flow, state for UI
const isActiveRef = useRef(false);  // For internal checks
const [isActive, setIsActive] = useState(false);  // For UI updates

Core Hooks

useMediaManager

The main orchestration hook for camera, microphone, and screen share.

const {
  // State (reactive - triggers re-renders)
  camera,           // { status, stream, trackEnabled, error }
  microphone,       // { status, stream, trackEnabled, error }
  screen,           // { status, stream, trackEnabled, error }
  cameraStream,     // MediaStream | null
  screenStream,     // MediaStream | null
  audioLevel,       // number (0-100)
  isSpeaking,       // boolean
  isInitialized,    // boolean
  isInitializing,   // boolean

  // Actions (stable - never change identity)
  initialize,       // () => Promise<boolean>
  toggleMicrophone, // () => void
  toggleCamera,     // () => Promise<void>
  toggleScreenShare, // () => Promise<void>
  switchAudioDevice, // (deviceId: string) => Promise<boolean>
  switchVideoDevice, // (deviceId: string) => Promise<boolean>
  cleanup,          // () => void

  // Track access (for WebRTC)
  getVideoTrack,    // () => MediaStreamTrack | null
  getAudioTrack,    // () => MediaStreamTrack | null
} = useMediaManager(options);

Options:

interface UseMediaManagerOptions {
  videoConstraints?: MediaTrackConstraints | false;  // false = no camera
  audioConstraints?: MediaTrackConstraints | false;  // false = no mic
  screenShareOptions?: DisplayMediaStreamOptions;
  autoInitialize?: boolean;  // Auto-request permissions on mount

  // Callbacks
  onMicrophoneChange?: (state: DeviceState) => void;
  onCameraChange?: (state: DeviceState) => void;
  onScreenShareChange?: (state: DeviceState) => void;
  onAudioLevel?: (data: AudioLevelData) => void;
  onError?: (type: MediaDeviceType, error: Error) => void;
}
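As an example, an audio-only configuration might look like the following. The option names come from the interface above; the constraint values and handler body are illustrative:

```javascript
// Audio-only setup: no camera, tuned mic constraints, error logging.
const options = {
  videoConstraints: false,  // false = skip camera entirely
  audioConstraints: { echoCancellation: true, noiseSuppression: true },
  autoInitialize: true,     // request permissions on mount
  onError: (type, error) => {
    // type is a MediaDeviceType ('camera' | 'microphone' | ...), error an Error
    console.error(`media ${type} failed:`, error.message);
  },
};

// Inside a component:
//   const manager = useMediaManager(options);
```

Defining the options object once at module scope (or memoizing it) also sidesteps the identity pitfall described in Common Pitfalls below.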

useDevices

Enumerate available media devices.

const {
  videoInputs,   // DeviceInfo[] - cameras
  audioInputs,   // DeviceInfo[] - microphones
  audioOutputs,  // DeviceInfo[] - speakers
  allDevices,    // DeviceInfo[] - all devices
  isLoading,     // boolean
  error,         // Error | null
  refresh,       // () => Promise<void>
} = useDevices();
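Under the hood this amounts to grouping the output of navigator.mediaDevices.enumerateDevices() by kind. A hedged sketch of that grouping (the library's actual implementation may differ):

```javascript
// Roughly what useDevices does internally: bucket enumerateDevices() results
// by their standard MediaDeviceInfo.kind values. Illustrative sketch only.
function groupDevices(devices) {
  return {
    videoInputs: devices.filter((d) => d.kind === 'videoinput'),   // cameras
    audioInputs: devices.filter((d) => d.kind === 'audioinput'),   // microphones
    audioOutputs: devices.filter((d) => d.kind === 'audiooutput'), // speakers
    allDevices: devices,
  };
}

// In a browser:
//   const devices = await navigator.mediaDevices.enumerateDevices();
//   const { videoInputs } = groupDevices(devices);
```

Note that device labels are empty strings until the user has granted a media permission, which is why enumeration is usually refreshed after initialize() succeeds.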

useAudioAnalyzer

Real-time audio level monitoring with voice activity detection.

const {
  level,      // number (0-100) - normalized audio level
  raw,        // number - raw FFT average
  isSpeaking, // boolean - above threshold
  isActive,   // boolean - analyzer running
  start,      // () => void
  stop,       // () => void
} = useAudioAnalyzer(stream, {
  fftSize: 256,
  smoothingTimeConstant: 0.8,
  speakingThreshold: 5,
  updateInterval: 100, // ms
});
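Conceptually, level and isSpeaking come from normalizing the analyser's raw FFT average and comparing it to speakingThreshold. A sketch of that mapping, assuming a 0-255 byte average as produced by AnalyserNode.getByteFrequencyData (the library's exact scaling may differ):

```javascript
// Normalize a raw FFT byte-average (0-255) to a 0-100 level and apply
// the speaking threshold. Illustrative; not the library's exact formula.
function analyzeLevel(raw, speakingThreshold = 5) {
  const level = Math.min(100, Math.round((raw / 255) * 100));
  return { level, raw, isSpeaking: level > speakingThreshold };
}
```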

WebRTC Integration

With LiveKit

import { useMediaManager } from "@classytic/react-stream";
import { useLocalParticipant, useTracks } from "@livekit/components-react";

function LiveKitRoom() {
  const { getVideoTrack, getAudioTrack, toggleCamera, toggleMicrophone } =
    useMediaManager({ autoInitialize: true });

  const { localParticipant } = useLocalParticipant();

  // Publish tracks to LiveKit
  useEffect(() => {
    const videoTrack = getVideoTrack();
    const audioTrack = getAudioTrack();

    if (videoTrack && localParticipant) {
      localParticipant.publishTrack(videoTrack, { name: 'camera' });
    }
    if (audioTrack && localParticipant) {
      localParticipant.publishTrack(audioTrack, { name: 'microphone' });
    }
  }, [localParticipant, getVideoTrack, getAudioTrack]);

  return (
    <div>
      <button onClick={toggleCamera}>Toggle Camera</button>
      <button onClick={toggleMicrophone}>Toggle Mic</button>
    </div>
  );
}

With Raw RTCPeerConnection

import { useMediaManager } from "@classytic/react-stream";

function WebRTCCall() {
  const { getVideoTrack, getAudioTrack, isInitialized } = useMediaManager();
  const pcRef = useRef<RTCPeerConnection | null>(null);
  const sendersRef = useRef<Map<string, RTCRtpSender>>(new Map());

  // Setup peer connection
  useEffect(() => {
    pcRef.current = new RTCPeerConnection({
      iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
    });
    return () => pcRef.current?.close();
  }, []);

  // Add tracks when ready
  useEffect(() => {
    if (!isInitialized || !pcRef.current) return;

    const videoTrack = getVideoTrack();
    const audioTrack = getAudioTrack();

    if (videoTrack) {
      const sender = pcRef.current.addTrack(videoTrack);
      sendersRef.current.set('video', sender);
    }
    if (audioTrack) {
      const sender = pcRef.current.addTrack(audioTrack);
      sendersRef.current.set('audio', sender);
    }
  }, [isInitialized, getVideoTrack, getAudioTrack]);

  // Handle track replacement (e.g., device switch)
  const replaceTrack = async (kind: 'video' | 'audio') => {
    const sender = sendersRef.current.get(kind);
    const track = kind === 'video' ? getVideoTrack() : getAudioTrack();
    if (sender && track) {
      await sender.replaceTrack(track);
    }
  };

  return <div>...</div>;
}

With Daily.co

import { useMediaManager } from "@classytic/react-stream";
import { useDaily } from "@daily-co/daily-react";

function DailyRoom() {
  const daily = useDaily();
  const { getVideoTrack, getAudioTrack, switchVideoDevice } = useMediaManager();

  // Update Daily when tracks change
  useEffect(() => {
    if (!daily) return;

    const videoTrack = getVideoTrack();
    if (videoTrack) {
      daily.setLocalVideo(true);
    }
  }, [daily, getVideoTrack]);

  // Switch camera
  const handleCameraSwitch = async (deviceId: string) => {
    await switchVideoDevice(deviceId);
    // Daily will automatically pick up the new track
  };

  return <div>...</div>;
}

Common Pitfalls

1. Creating MediaStream in Render

Problem: calling new MediaStream() during render creates a new object identity on every render, which retriggers any effect or hook that depends on it and can loop infinitely.

// ❌ BAD - creates a new MediaStream every render
function BadExample({ track }) {
  const stream = new MediaStream([track]); // New object every render!
  // React has no srcObject prop, so assign it via a ref callback
  return <video ref={(el) => el && (el.srcObject = stream)} autoPlay />;
}

// ✅ GOOD - memoize the MediaStream
function GoodExample({ track }) {
  const stream = useMemo(
    () => (track ? new MediaStream([track]) : null),
    [track]
  );
  return <video ref={(el) => el && (el.srcObject = stream)} autoPlay />;
}

2. Object Options in Dependencies

Problem: Inline objects change identity every render.

// ❌ BAD - options object changes every render
function BadExample() {
  const { level } = useAudioAnalyzer(stream, {
    fftSize: 256,  // New object every render!
  });
}

// ✅ GOOD - stable options reference
const ANALYZER_OPTIONS = { fftSize: 256 };

function GoodExample() {
  const { level } = useAudioAnalyzer(stream, ANALYZER_OPTIONS);
}

// ✅ ALSO GOOD - useMemo for dynamic options
function GoodExample2({ fftSize }) {
  const options = useMemo(() => ({ fftSize }), [fftSize]);
  const { level } = useAudioAnalyzer(stream, options);
}

3. Forgetting Cleanup

Problem: Media tracks keep running after component unmount.

// ❌ BAD - tracks leak
function BadExample() {
  const { initialize } = useMediaManager();
  useEffect(() => { initialize(); }, []);
  // No cleanup!
}

// ✅ GOOD - cleanup in useEffect
function GoodExample() {
  const { initialize, cleanup } = useMediaManager();

  useEffect(() => {
    initialize();
    return () => cleanup();  // Stop tracks on unmount
  }, [initialize, cleanup]);
}

4. Using State in Callbacks That Set State

Problem: Using state in a callback's dependencies when that callback sets the same state.

// ❌ BAD - infinite loop
const start = useCallback(() => {
  if (isActive) return;  // Depends on isActive
  setIsActive(true);     // Sets isActive
}, [isActive]);          // isActive changes → start changes → effect runs

// ✅ GOOD - use ref for internal checks
const isActiveRef = useRef(false);
const start = useCallback(() => {
  if (isActiveRef.current) return;  // Check ref
  isActiveRef.current = true;
  setIsActive(true);  // State for UI only
}, []);  // Stable callback

5. Not Handling Device Disconnection

Problem: Camera/mic gets unplugged but UI doesn't update.

// ✅ GOOD - handle device changes
function GoodExample() {
  const { camera, microphone } = useMediaManager({
    autoSwitchDevices: true,  // Auto-reacquire on disconnect
    onError: (type, error) => {
      console.error(`${type} error:`, error);
      // Show user notification
    },
  });

  // Check for ended tracks
  if (camera.status === 'error') {
    return <div>Camera disconnected: {camera.error}</div>;
  }
}

6. Screen Share Audio

Problem: Forgetting to include system audio in screen share.

// ✅ GOOD - request system audio
const { toggleScreenShare } = useMediaManager({
  screenShareOptions: {
    video: true,
    audio: true,  // Include system audio (tab audio)
  },
});

API Reference

Device Status

type DeviceStatus =
  | 'idle'       // Not started
  | 'acquiring'  // Requesting permission
  | 'active'     // Track is live and enabled
  | 'muted'      // Track is live but disabled
  | 'stopped'    // Track was stopped (camera off)
  | 'error';     // Error occurred
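When rendering UI from this union it helps to switch exhaustively over the statuses; a small hypothetical helper (the label strings are illustrative, not part of the library):

```javascript
// Map a DeviceStatus to a short UI label. Exhaustive over the union above;
// the default branch catches any status the switch does not handle.
function statusLabel(status) {
  switch (status) {
    case 'idle':      return 'Not started';
    case 'acquiring': return 'Requesting permission...';
    case 'active':    return 'Live';
    case 'muted':     return 'Muted';
    case 'stopped':   return 'Off';
    case 'error':     return 'Error';
    default:          throw new Error(`Unknown status: ${status}`);
  }
}
```

In TypeScript the same idea is usually written with a `never`-typed default so the compiler flags any status added to the union later.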

DeviceState

interface DeviceState {
  status: DeviceStatus;
  stream: MediaStream | null;
  trackEnabled: boolean;
  error: string | null;
}

Subpath Imports (Tree-Shaking)

// Only import what you need for smaller bundles
import { useDevices } from '@classytic/react-stream/devices';
import { useConstraints, QUALITY_PRESETS } from '@classytic/react-stream/constraints';
import { useScreenShare } from '@classytic/react-stream/screen';
import { useAudioAnalyzer } from '@classytic/react-stream/audio';
import { useTrackPublisher } from '@classytic/react-stream/webrtc';
import { useNoiseSuppression } from '@classytic/react-stream/fx/audio';
import { useWorkerProcessor } from '@classytic/react-stream/fx/processor';
import { MediaProvider, useMediaContext } from '@classytic/react-stream/context';

AI & Processing

Audio Noise Suppression (WASM)

import { useNoiseSuppression } from "@classytic/react-stream/fx/audio";

function NoiseControl({ micTrack }) {
  const ns = useNoiseSuppression({
    wasmUrl: "/models/rnnoise.wasm",
    onReady: () => console.log("NS ready"),
    onError: (err) => console.error(err),
  });

  // Start processing
  const enableNS = () => ns.start(micTrack);

  // Use ns.processedTrack for WebRTC
  return (
    <button onClick={enableNS} disabled={ns.isActive}>
      {ns.isActive ? "NS Active" : "Enable Noise Suppression"}
    </button>
  );
}

Video Processing (Off-Thread)

import { useWorkerProcessor } from "@classytic/react-stream/fx/processor";

function BackgroundBlur({ videoTrack }) {
  const processor = useWorkerProcessor({
    workerUrl: "/workers/blur-worker.js",
    config: { blurRadius: 15 },
    onReady: () => console.log("Worker ready"),
  });

  // processor.processedTrack is the blurred video
  return (
    <button onClick={() => processor.start(videoTrack)}>
      Enable Background Blur
    </button>
  );
}

Browser Support

Feature            Chrome   Firefox   Safari   Edge
Core Media         74+      66+       14+      79+
Worker Processing  94+      -         -        94+
WebTransport       97+      114+      -        97+
WebCodecs          94+      130+      16.4+    94+
AudioWorklet       66+      76+       14.1+    79+
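Given the uneven support above, runtime feature detection is more reliable than version sniffing. A minimal sketch using standard web globals (the `scope` parameter is only there to make the function testable outside a browser; in page code you would call it with no argument):

```javascript
// Detect optional capabilities at runtime rather than sniffing versions.
// Every property checked here is a standard web API global.
function detectCapabilities(scope = globalThis) {
  return {
    coreMedia: typeof scope.navigator?.mediaDevices?.getUserMedia === 'function',
    webCodecs: typeof scope.VideoEncoder === 'function',
    webTransport: typeof scope.WebTransport === 'function',
    audioWorklet: typeof scope.AudioWorklet === 'function',
    // Needed for off-thread video processing (insertable streams)
    workerProcessing: typeof scope.MediaStreamTrackProcessor === 'function',
  };
}

// In a browser: const caps = detectCapabilities();
```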

Debug Mode

Enable debug logging to see internal state changes:

import { enableDebug, disableDebug } from "@classytic/react-stream";

// In development
if (process.env.NODE_ENV === 'development') {
  enableDebug();
}

// Or enable specific loggers
enableDebug('useMediaManager');
enableDebug('createMediaStore');

Agent Skill

This package includes an agent skill for AI coding agents (Claude Code, Cursor, Copilot, Cline, etc.):

npx skills add classytic/rtc --skill react-stream

The skill provides context-aware guidance for using the library: API patterns, WebRTC integration, common pitfalls, and browser support.

License

MIT


Package last updated on 19 Mar 2026
