npm package · Version 0.0.13 (latest) · Maintainers: 3
@immich/walkrs

High-performance file tree walker for Node.js, built with Rust and the battle-tested ignore crate from ripgrep.

Background

This project grew out of Immich's need for fast, reliable external library scanning: very large photo libraries, often containing hundreds of thousands of files across complex directory structures, have to be scanned efficiently.

By leveraging Rust's performance and the same ignore logic used in ripgrep (one of the fastest file search tools available), walkrs delivers exceptional speed and reliability for file tree traversal.

Installation

pnpm add @immich/walkrs

Usage

import { walk } from '@immich/walkrs';

// Simple usage - walk a directory
const files: string[] = [];
for await (const batch of walk({ paths: ['/path/to/scan'] })) {
  files.push(...JSON.parse(batch));
}

// Advanced usage with filtering
const photos: string[] = [];
for await (const batch of walk({
  paths: ['/photos', '/backup/photos'],
  extensions: ['.jpg', '.png', '.heic', '.webp'],
  exclusionPatterns: ['**/.stfolder/**'],
  includeHidden: false,
})) {
  photos.push(...JSON.parse(batch));
}
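As the loops above show, walk() yields JSON-encoded batches rather than individual paths. A small helper (hypothetical, not part of the package) can flatten such batches into one array:

```typescript
// Hypothetical helper (not part of @immich/walkrs): flatten the
// JSON-encoded batches that walk() yields into a single string array.
function mergeBatches(batches: string[]): string[] {
  // Each batch is a JSON array of file paths.
  return batches.flatMap((batch) => JSON.parse(batch) as string[]);
}

// Example with two batches, mimicking the shape walk() produces.
const merged = mergeBatches(['["/photos/a.jpg","/photos/b.png"]', '["/photos/c.heic"]']);
// merged contains all three paths in order
```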

Performance

walkrs is designed to handle massive directory trees efficiently, and multithreading has a major impact on its throughput. In our benchmarks, walkrs scanned 11 million files over NFS in under 30 seconds on a machine with 32 available CPU threads. Restricted to a single thread, the same scan took 208 seconds; for comparison, the single-threaded, Node.js-based fast-glob took 360 seconds for the same task.
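To put those timings in perspective, here is the arithmetic on the reported figures (the file count and timings are the benchmark numbers quoted above, not freshly measured):

```typescript
// Reported benchmark timings (seconds) for scanning 11M files over NFS.
const multiThreaded = 30;   // walkrs with 32 CPU threads
const singleThreaded = 208; // walkrs restricted to 1 thread
const fastGlob = 360;       // fast-glob (Node.js, single-threaded)

// Speedup from multithreading alone: roughly 6.9x.
const threadSpeedup = singleThreaded / multiThreaded;

// Multithreaded walkrs vs. single-threaded fast-glob: 12x.
const vsFastGlob = fastGlob / multiThreaded;
```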

Benchmarking

Since performance is critical, we provide dedicated benchmark scripts.

Setup

Before running benchmarks, you need to create benchmark datasets. This is a one-time setup that generates test directories with various file counts. Note: This can take several minutes to complete depending on your system.

pnpm run bench:setup

This creates datasets in the bench/datasets/ directory:

  • 10 - 10 files
  • 100 - 100 files
  • 1k - 1,000 files
  • 10k - 10,000 files
  • 100k - 100,000 files
  • 1m - 1,000,000 files
  • 10m - 10,000,000 files
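As a rough sketch of what such a setup step does, each dataset is essentially a directory populated with the given number of files. This is illustrative only; the actual bench:setup script may lay out deeper directory trees:

```typescript
import { mkdirSync, mkdtempSync, readdirSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Hypothetical dataset generator: create a directory containing
// `fileCount` empty files, similar in spirit to bench:setup.
function createDataset(root: string, fileCount: number): void {
  mkdirSync(root, { recursive: true });
  for (let i = 0; i < fileCount; i++) {
    writeFileSync(join(root, `file-${i}.txt`), '');
  }
}

// Example: generate a tiny "10" dataset in a temp directory.
const root = mkdtempSync(join(tmpdir(), 'walkrs-bench-'));
createDataset(root, 10);
```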

Running Benchmarks

Run benchmarks against any dataset:

# Run with default settings on all datasets
pnpm run ts:bench

# Run on a specific dataset
pnpm run ts:bench 1m

# Run multiple datasets
pnpm run ts:bench 100 10k 1m

Package last updated on 20 Feb 2026