Supabase Storage provides specialized analytics buckets using Apache Iceberg table format, optimized for analytical workloads and large-scale data processing. These buckets are designed for data lake architectures, time-series data, and business intelligence applications.
What are Analytics Buckets?
Analytics buckets use the Apache Iceberg open table format, providing:
ACID transactions for data consistency
Schema evolution without data rewrites
Time travel to query historical data
Efficient metadata management for large datasets
Optimized for analytical queries rather than individual file operations
When to Use Analytics Buckets
Use analytics buckets for:
Time-series data (logs, metrics, events)
Data lake architectures
Business intelligence and reporting
Large-scale batch processing
Analytical workloads requiring ACID guarantees
Use regular storage buckets for:
User file uploads (images, documents, videos)
Individual file management
Content delivery
Simple object storage needs
Quick Start
You can access analytics functionality through the analytics property on your storage client:
Returns: IcebergRestCatalog instance from iceberg-js
Note: The from() method returns an Iceberg REST Catalog client that provides full access to the Apache Iceberg REST API. For complete documentation of available operations, see the iceberg-js documentation.
Error Handling
Analytics buckets use the same error handling pattern as the rest of the Storage SDK:
async function ensureAnalyticsBucket(bucketName: string) {
  // Try to create the bucket
  const { data, error } = await analytics.createBucket(bucketName)

  if (error) {
    // Check if the bucket already exists (conflict error)
    if (error.statusCode === '409') {
      console.log(`Bucket '${bucketName}' already exists`)
      return { success: true, created: false }
    }

    // Other error occurred
    console.error('Failed to create bucket:', error.message)
    return { success: false, error }
  }

  console.log(`Created new bucket: '${bucketName}'`)
  return { success: true, created: true, data }
}
Listing All Buckets with Pagination
async function getAllAnalyticsBuckets() {
  const allBuckets: AnalyticBucket[] = []
  let offset = 0
  const limit = 100

  while (true) {
    const { data, error } = await analytics.listBuckets({
      limit,
      offset,
      sortColumn: 'created_at',
      sortOrder: 'desc',
    })

    if (error) {
      console.error('Error fetching buckets:', error.message)
      break
    }

    if (!data || data.length === 0) {
      break
    }

    allBuckets.push(...data)

    // If we got fewer results than the limit, we've reached the end
    if (data.length < limit) {
      break
    }

    offset += limit
  }

  return allBuckets
}
Vector Embeddings
Supabase Storage provides built-in support for storing and querying high-dimensional vector embeddings, powered by S3 Vectors. This enables semantic search, similarity matching, and AI-powered applications without needing a separate vector database.
Note: Vector embeddings functionality is available in @supabase/storage-js v2.76 and later.
Features
Vector Buckets: Organize vector indexes into logical containers
Vector Indexes: Define schemas with configurable dimensions and distance metrics
Batch Operations: Insert/update/delete up to 500 vectors per request
Similarity Search: Query for nearest neighbors using cosine, euclidean, or dot product distance
Metadata Filtering: Store and filter vectors by arbitrary JSON metadata
Pagination: Efficiently scan large vector datasets
Parallel Scanning: Distribute scans across multiple workers for high throughput
Cross-platform: Works in Node.js, browsers, and edge runtimes
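Because batch operations cap out at 500 vectors per request, larger upserts need to be split client-side. A minimal chunking sketch (the `VectorRecord` shape here is illustrative, not the SDK's exact type):

```typescript
// Illustrative record shape; the real SDK type may differ
type VectorRecord = {
  key: string
  data: number[]
  metadata?: Record<string, unknown>
}

// Per-request cap described above
const MAX_BATCH_SIZE = 500

// Split an arbitrarily large list of vectors into batches that each
// respect the 500-vector limit
function chunkVectors(
  vectors: VectorRecord[],
  size: number = MAX_BATCH_SIZE
): VectorRecord[][] {
  const batches: VectorRecord[][] = []
  for (let i = 0; i < vectors.length; i += size) {
    batches.push(vectors.slice(i, i + size))
  }
  return batches
}
```

Each batch can then be passed to the SDK's upsert call in sequence, or with bounded concurrency if you need more throughput.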
Quick Start
You can access vector functionality in three ways, depending on your use case:
Filter Syntax:
The filter parameter accepts arbitrary JSON for metadata filtering. Non-filterable keys (configured at index creation) cannot be used in filters but can still be returned.
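As a mental model for how a metadata filter is applied, consider this simplified predicate. It is not the server's implementation and only handles exact-equality matches; real filters may support richer operators, so check the SDK reference:

```typescript
// Simplified illustration: a JSON filter matches a vector when every
// key/value pair in the filter equals the corresponding metadata value.
function matchesFilter(
  metadata: Record<string, unknown>,
  filter: Record<string, unknown>
): boolean {
  return Object.entries(filter).every(([key, value]) => metadata[key] === value)
}
```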
# Build the package
npx nx build storage-js
# Watch mode for development
npx nx build storage-js --watch
# Generate documentation
npx nx docs storage-js
Testing
Important: The storage-js tests require a local test infrastructure running in Docker. This is NOT the same as a regular Supabase instance - it's a specialized test setup with its own storage API, database, and Kong gateway.
Prerequisites
Docker must be installed and running
Port availability - The following ports must be free:
5432 (PostgreSQL database)
5050 (Storage API - port 5000 can conflict with macOS AirPlay)
8000 (Kong API Gateway)
50020 (imgproxy for image transformations)
Note: If port 5000 conflicts with macOS AirPlay Receiver, the docker-compose.yml has been configured to use port 5050 instead.
Test Scripts Overview
| Script | Description | What it does |
| --- | --- | --- |
| test:storage | Complete test workflow | Runs the full test cycle: clean → start infra → run tests → clean |
| test:suite | Jest tests only | Runs Jest tests with coverage (requires infra to be running) |
| test:infra | Start test infrastructure | Starts Docker containers for storage API, database, and Kong |
| test:clean | Stop and clean infrastructure | Stops all Docker containers and removes them |
Running Tests
Option 1: Complete Test Run (Recommended)
This handles everything automatically - starting infrastructure, running tests, and cleaning up:
# From monorepo root
npx nx test:storage storage-js
This command will:
Stop any existing test containers
Build and start fresh test infrastructure
Wait for services to be ready
Run all Jest tests with coverage
Clean up all containers after tests complete
Option 2: Manual Infrastructure Management
Useful for development when you want to run tests multiple times without restarting Docker:
# Step 1: Start the test infrastructure (from root)
npx nx test:infra storage-js
# This starts: PostgreSQL, Storage API, Kong Gateway, and imgproxy

# Step 2: Run tests (can run multiple times)
npx nx test:suite storage-js
# Step 3: When done, clean up the infrastructure
npx nx test:clean storage-js
Option 3: Development Mode
For actively developing and debugging tests:
# Start infrastructure once (from root)
npx nx test:infra storage-js
# Run tests in watch mode
npx nx test:suite storage-js --watch
# Clean up when done
npx nx test:clean storage-js
Test Infrastructure Details
The test infrastructure (infra/docker-compose.yml) includes:
PostgreSQL Database (port 5432)
Initialized with storage schema and test data
Contains bucket configurations and permissions
Storage API (port 5050, internal 5000)
Supabase Storage service for handling file operations
Configured with test authentication keys
Kong Gateway (port 8000)
API gateway that routes requests to storage service
Handles authentication and CORS
imgproxy (port 50020)
Image transformation service for on-the-fly image processing
Common Issues and Solutions
| Issue | Solution |
| --- | --- |
| Port 5000 already in use | macOS AirPlay uses this port. Either disable AirPlay Receiver in System Settings or use the modified docker-compose.yml with port 5050 |
| Port 5432 already in use | Another PostgreSQL instance is running. Stop it or modify the port in docker-compose.yml |
| "request failed, reason:" errors | Infrastructure isn't running. Run npx nx test:infra storage-js first |
| Tests fail with connection errors | Ensure Docker is running and healthy |
| "Container name already exists" | Run npx nx test:clean storage-js to remove existing containers |
Understanding Test Failures
StorageUnknownError with "request failed": Infrastructure not running
Port binding errors: Ports are already in use by other services
Snapshot failures: Expected test data has changed - review and update snapshots if needed
What About Supabase CLI?
You don't need supabase start or a regular Supabase instance for these tests. The storage-js tests use their own specialized Docker setup that's lighter and focused specifically on testing the storage SDK. This test infrastructure:
Is completely independent from any Supabase CLI projects
Uses fixed test authentication keys
Has predictable test data and bucket configurations
Runs faster than a full Supabase stack
Doesn't interfere with your local Supabase development projects
Contributing
We welcome contributions! Please see our Contributing Guide for details on how to get started.
For major changes or if you're unsure about something, please open an issue first to discuss your proposed changes.