bun-types - npm Package Compare versions

Comparing version 1.1.43-canary.20250106T140553 to 1.1.43-canary.20250107T145807


docs/api/s3.md

@@ -6,10 +6,7 @@ Production servers often read, upload, and write files to S3-compatible object storage services instead of the local filesystem. Historically, that means local filesystem APIs you use in development can't be used in production. When you use Bun, things are different.

```ts
import { s3, write, S3 } from "bun";
import { s3, write, S3Client } from "bun";
const metadata = await s3("123.json", {
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
// endpoint: "https://s3.us-east-1.amazonaws.com",
});
// Bun.s3 reads environment variables for credentials
// file() returns a lazy reference to a file on S3
const metadata = s3.file("123.json");

@@ -27,5 +24,8 @@ // Download from S3 as JSON

});
// Delete the file
await metadata.delete();
```
S3 is the [de facto standard](https://en.wikipedia.org/wiki/De_facto_standard) internet filesystem. You can use Bun's S3 API with S3-compatible storage services like:
S3 is the [de facto standard](https://en.wikipedia.org/wiki/De_facto_standard) internet filesystem. Bun's S3 API works with S3-compatible storage services like:

@@ -43,15 +43,18 @@ - AWS S3

### Using `Bun.s3()`
### `Bun.S3Client` & `Bun.s3`
The `s3()` helper function is used to create one-off `S3File` instances for a single file.
`Bun.s3` is equivalent to `new Bun.S3Client()`, relying on environment variables for credentials.
To explicitly set credentials, pass them to the `Bun.S3Client` constructor.
```ts
import { s3 } from "bun";
import { S3Client } from "bun";
// Using the s3() helper
const s3file = s3("my-file.txt", {
const client = new S3Client({
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
// endpoint: "https://s3.us-east-1.amazonaws.com", // optional
// sessionToken: "..."
// acl: "public-read",
// endpoint: "https://s3.us-east-1.amazonaws.com",
// endpoint: "https://<account-id>.r2.cloudflarestorage.com", // Cloudflare R2

@@ -61,9 +64,23 @@ // endpoint: "https://<region>.digitaloceanspaces.com", // DigitalOcean Spaces

});
// Bun.s3 is a global singleton that is equivalent to `new Bun.S3Client()`
Bun.s3 = client;
```
### Reading Files
### Working with S3 Files
You can read files from S3 using similar methods to Bun's file system APIs:
The **`file`** method in `S3Client` returns a **lazy reference to a file on S3**.
```ts
// A lazy reference to a file on S3
const s3file: S3File = client.file("123.json");
```
Like `Bun.file(path)`, the `S3Client`'s `file` method is synchronous. It makes zero network requests until you call a method that requires one.
### Reading files from S3
If you've used the `fetch` API, you're familiar with the `Response` and `Blob` APIs. `S3File` extends `Blob`. The same methods that work on `Blob` also work on `S3File`.
```ts
// Read an S3File as text

@@ -88,6 +105,14 @@ const text = await s3file.text();

## Writing Files
#### Memory optimization
Writing to S3 is just as simple:
Methods like `text()`, `json()`, `bytes()`, or `arrayBuffer()` avoid duplicating the string or bytes in memory when possible.
If the text happens to be ASCII, Bun directly transfers the string to JavaScriptCore (the engine) without transcoding and without duplicating the string in memory. When you use `.bytes()` or `.arrayBuffer()`, it will also avoid duplicating the bytes in memory.
These helper methods not only simplify the API, they also make it faster.
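Since `S3File` extends `Blob`, these helpers can be sketched against a plain `Blob` — a stand-in used here purely for illustration, with no S3 client or network involved:

```typescript
// S3File extends Blob, so its read helpers mirror the standard Blob API.
// An actual S3File from client.file("...") exposes the same methods.
const blob = new Blob([JSON.stringify({ name: "John", age: 30 })]);

const data = JSON.parse(await blob.text()); // text() decodes the payload once
const bytes = new Uint8Array(await blob.arrayBuffer()); // raw bytes

console.log(data.name, bytes.byteLength);
```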
### Writing & uploading files to S3
Writing to S3 is just as simple.
```ts

@@ -97,2 +122,8 @@ // Write a string (replacing the file)

// Write a Buffer (replacing the file)
await s3file.write(Buffer.from("Hello World!"));
// Write a Response (replacing the file)
await s3file.write(new Response("Hello World!"));
// Write with content type

@@ -128,3 +159,3 @@ await s3file.write(JSON.stringify({ name: "John", age: 30 }), {

// Upload in 5 MB chunks
partSize: 5 * 1024 * 1024,
partSize: 5,
});

@@ -144,16 +175,7 @@ for (let i = 0; i < 10; i++) {

```ts
import { s3 } from "bun";
// Generate a presigned URL that expires in 24 hours (default)
const url = s3file.presign();
// Custom expiration time (in seconds)
const url2 = s3file.presign({ expiresIn: 3600 }); // 1 hour
// Using static method
const url3 = Bun.S3.presign("my-file.txt", {
bucket: "my-bucket",
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
// endpoint: "https://s3.us-east-1.amazonaws.com",
// endpoint: "https://<account-id>.r2.cloudflarestorage.com", // Cloudflare R2
expiresIn: 3600,
const url = s3.presign("my-file.txt", {
expiresIn: 3600, // 1 hour
});

@@ -194,2 +216,8 @@ ```

expiresIn: 3600, // 1 hour
// access control list
acl: "public-read",
// HTTP method
method: "PUT",
});

@@ -215,3 +243,3 @@ ```

To quickly redirect users to a presigned URL for an S3 file, you can pass an `S3File` instance to a `Response` object as the body.
To quickly redirect users to a presigned URL for an S3 file, pass an `S3File` instance to a `Response` object as the body.

@@ -243,7 +271,44 @@ ```ts

### Using Bun's S3Client with AWS S3
AWS S3 is the default. You can also pass a `region` option instead of an `endpoint` option for AWS S3.
```ts
import { s3 } from "bun";
import { S3Client } from "bun";
// AWS S3
const s3 = new S3Client({
accessKeyId: "access-key",
secretAccessKey: "secret-key",
bucket: "my-bucket",
// endpoint: "https://s3.us-east-1.amazonaws.com",
// region: "us-east-1",
});
```
### Using Bun's S3Client with Google Cloud Storage
To use Bun's S3 client with [Google Cloud Storage](https://cloud.google.com/storage), set `endpoint` to `"https://storage.googleapis.com"` in the `S3Client` constructor.
```ts
import { S3Client } from "bun";
// Google Cloud Storage
const gcs = new S3Client({
accessKeyId: "access-key",
secretAccessKey: "secret-key",
bucket: "my-bucket",
endpoint: "https://storage.googleapis.com",
});
```
### Using Bun's S3Client with Cloudflare R2
To use Bun's S3 client with [Cloudflare R2](https://developers.cloudflare.com/r2/), set `endpoint` to the R2 endpoint in the `S3Client` constructor. The R2 endpoint includes your account ID.
```ts
import { S3Client } from "bun";
// Cloudflare R2
const r2file = s3("my-file.txt", {
const r2 = new S3Client({
accessKeyId: "access-key",

@@ -254,16 +319,34 @@ secretAccessKey: "secret-key",

});
```
// DigitalOcean Spaces
const spacesFile = s3("my-file.txt", {
### Using Bun's S3Client with DigitalOcean Spaces
To use Bun's S3 client with [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/), set `endpoint` to the DigitalOcean Spaces endpoint in the `S3Client` constructor.
```ts
import { S3Client } from "bun";
const spaces = new S3Client({
accessKeyId: "access-key",
secretAccessKey: "secret-key",
bucket: "my-bucket",
// region: "nyc3",
endpoint: "https://<region>.digitaloceanspaces.com",
});
```
// MinIO
const minioFile = s3("my-file.txt", {
### Using Bun's S3Client with MinIO
To use Bun's S3 client with [MinIO](https://min.io/), set `endpoint` to the URL that MinIO is running on in the `S3Client` constructor.
```ts
import { S3Client } from "bun";
const minio = new S3Client({
accessKeyId: "access-key",
secretAccessKey: "secret-key",
bucket: "my-bucket",
// Make sure to use the correct endpoint URL
// It might not be localhost in production!
endpoint: "http://localhost:9000",

@@ -299,12 +382,12 @@ });

These defaults are overridden by the options you pass to `s3(credentials)`, `new Bun.S3(credentials)`, or any of the methods that accept credentials. So if, for example, you use the same credentials for different buckets, you can set the credentials once in your `.env` file and then pass `bucket: "my-bucket"` to the `s3()` helper function without having to specify all the credentials again.
These defaults are overridden by the options you pass to `s3(credentials)`, `new Bun.S3Client(credentials)`, or any of the methods that accept credentials. So if, for example, you use the same credentials for different buckets, you can set the credentials once in your `.env` file and then pass `bucket: "my-bucket"` to the `s3()` helper function without having to specify all the credentials again.
### `S3` Buckets
### `S3Client` objects
Passing around all of these credentials can be cumbersome. To make it easier, you can create a `S3` bucket instance.
When you're not using environment variables or using multiple buckets, you can create a `S3Client` object to explicitly set credentials.
```ts
import { S3 } from "bun";
import { S3Client } from "bun";
const bucket = new S3({
const client = new S3Client({
accessKeyId: "your-access-key",

@@ -319,11 +402,2 @@ secretAccessKey: "your-secret-key",

// bucket is a function that creates `S3File` instances (lazy)
const file = bucket("my-file.txt");
// Write to S3
await file.write("Hello World!");
// Read from S3
const text = await file.text();
// Write using a Response

@@ -339,48 +413,45 @@ await file.write(new Response("Hello World!"));

// Delete the file
await file.unlink();
await file.delete();
```
#### Read a file from an `S3` bucket
### `S3Client.prototype.write`
The `S3` bucket instance is itself a function that creates `S3File` instances. It provides a more convenient API for interacting with S3.
To upload or write a file to S3, call `write` on the `S3Client` instance.
```ts
const s3file = bucket("my-file.txt");
const text = await s3file.text();
const json = await s3file.json();
const bytes = await s3file.bytes();
const arrayBuffer = await s3file.arrayBuffer();
const client = new Bun.S3Client({
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
endpoint: "https://s3.us-east-1.amazonaws.com",
bucket: "my-bucket",
});
await client.write("my-file.txt", "Hello World!");
await client.write("my-file.txt", new Response("Hello World!"));
// equivalent to
// await client.file("my-file.txt").write("Hello World!");
```
#### Write a file to S3
### `S3Client.prototype.delete`
To write a file to the bucket, you can use the `write` method.
To delete a file from S3, call `delete` on the `S3Client` instance.
```ts
const bucket = new Bun.S3({
const client = new Bun.S3Client({
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
endpoint: "https://s3.us-east-1.amazonaws.com",
bucket: "my-bucket",
});
await bucket.write("my-file.txt", "Hello World!");
await bucket.write("my-file.txt", new Response("Hello World!"));
```
You can also call `.write` on the `S3File` instance created by the `S3` bucket instance.
```ts
const s3file = bucket("my-file.txt");
await s3file.write("Hello World!", {
type: "text/plain",
});
await s3file.write(new Response("Hello World!"));
await client.delete("my-file.txt");
// equivalent to
// await client.file("my-file.txt").delete();
```
#### Delete a file from S3
### `S3Client.prototype.exists`
To delete a file from the bucket, you can use the `delete` method.
To check if a file exists in S3, call `exists` on the `S3Client` instance.
```ts
const bucket = new Bun.S3({
const client = new Bun.S3Client({
accessKeyId: "your-access-key",

@@ -391,12 +462,7 @@ secretAccessKey: "your-secret-key",

await bucket.delete("my-file.txt");
const exists = await client.exists("my-file.txt");
// equivalent to
// const exists = await client.file("my-file.txt").exists();
```
You can also use the `unlink` method, which is an alias for `delete`.
```ts
// "delete" and "unlink" are aliases of each other.
await bucket.unlink("my-file.txt");
```
## `S3File`

@@ -429,4 +495,15 @@

readonly size: Promise<number>;
exists(options?: S3Options): Promise<boolean>;
unlink(options?: S3Options): Promise<void>;
delete(options?: S3Options): Promise<void>;
presign(options?: S3Options): string;
stat(options?: S3Options): Promise<S3Stat>;
/**
* Size is not synchronously available because it requires a network request.
*
* @deprecated Use `stat()` instead.
*/
size: NaN;
// ... more omitted for brevity

@@ -448,3 +525,3 @@ }

### Partial reads
### Partial reads with `slice`

@@ -465,2 +542,13 @@ To read a partial range of a file, you can use the `slice` method.

### Deleting files from S3
To delete a file from S3, you can use the `delete` method.
```ts
await s3file.delete();
// await s3File.unlink();
```
`delete` is the same as `unlink`.
## Error codes

@@ -479,33 +567,37 @@

## `S3` static methods
## `S3Client` static methods
The `S3` class provides several static methods for interacting with S3.
The `S3Client` class provides several static methods for interacting with S3.
### `S3.presign`
### `S3Client.presign` (static)
To generate a presigned URL for an S3 file, you can use the `S3.presign` method.
To generate a presigned URL for an S3 file, you can use the `S3Client.presign` static method.
```ts
import { S3 } from "bun";
import { S3Client } from "bun";
const url = S3.presign("my-file.txt", {
const credentials = {
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
expiresIn: 3600,
// endpoint: "https://s3.us-east-1.amazonaws.com",
// endpoint: "https://<account-id>.r2.cloudflarestorage.com", // Cloudflare R2
};
const url = S3Client.presign("my-file.txt", {
...credentials,
expiresIn: 3600,
});
```
This is the same as `S3File.prototype.presign` and `new S3(credentials).presign`, as a static method on the `S3` class.
This is equivalent to calling `new S3Client(credentials).presign("my-file.txt", { expiresIn: 3600 })`.
### `S3.exists`
### `S3Client.exists` (static)
To check if an S3 file exists, you can use the `S3.exists` method.
To check if an S3 file exists, you can use the `S3Client.exists` static method.
```ts
import { S3 } from "bun";
import { S3Client } from "bun";
const exists = await S3.exists("my-file.txt", {
const credentials = {
accessKeyId: "your-access-key",

@@ -515,3 +607,5 @@ secretAccessKey: "your-secret-key",

// endpoint: "https://s3.us-east-1.amazonaws.com",
});
};
const exists = await S3Client.exists("my-file.txt", credentials);
```

@@ -523,5 +617,3 @@

const s3file = Bun.s3("my-file.txt", {
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
bucket: "my-bucket",
...credentials,
});

@@ -531,9 +623,10 @@ const exists = await s3file.exists();

### `S3.size`
### `S3Client.stat` (static)
To get the size of an S3 file, you can use the `S3.size` method.
To get the size, etag, and other metadata of an S3 file, you can use the `S3Client.stat` static method.
```ts
import { S3 } from "bun";
const size = await S3.size("my-file.txt", {
import { S3Client } from "bun";
const credentials = {
accessKeyId: "your-access-key",

@@ -543,13 +636,20 @@ secretAccessKey: "your-secret-key",

// endpoint: "https://s3.us-east-1.amazonaws.com",
});
};
const stat = await S3Client.stat("my-file.txt", credentials);
// {
// size: 1024,
// etag: "1234567890",
// lastModified: new Date(),
// }
```
### `S3.unlink`
### `S3Client.delete` (static)
To delete an S3 file, you can use the `S3.unlink` method.
To delete an S3 file, you can use the `S3Client.delete` static method.
```ts
import { S3 } from "bun";
import { S3Client } from "bun";
await S3.unlink("my-file.txt", {
const credentials = {
accessKeyId: "your-access-key",

@@ -559,3 +659,10 @@ secretAccessKey: "your-secret-key",

// endpoint: "https://s3.us-east-1.amazonaws.com",
});
};
await S3Client.delete("my-file.txt", credentials);
// equivalent to
// await new S3Client(credentials).delete("my-file.txt");
// S3Client.unlink is alias of S3Client.delete
await S3Client.unlink("my-file.txt", credentials);
```

@@ -572,4 +679,25 @@

This is the equivalent of calling `Bun.s3("my-file.txt", { bucket: "my-bucket" })`.
You can additionally pass `s3` options to the `fetch` and `Bun.file` functions.
This `s3://` protocol exists to make it easier to use the same code for local files and S3 files.
```ts
const response = await fetch("s3://my-bucket/my-file.txt", {
s3: {
accessKeyId: "your-access-key",
secretAccessKey: "your-secret-key",
endpoint: "https://s3.us-east-1.amazonaws.com",
},
headers: {
"x-amz-meta-foo": "bar",
},
});
```
### UTF-8, UTF-16, and BOM (byte order mark)
Like `Response` and `Blob`, `S3File` assumes UTF-8 encoding by default.
When calling one of the `text()` or `json()` methods on an `S3File`:
- When a UTF-16 byte order mark (BOM) is detected, the content is treated as UTF-16. JavaScriptCore natively supports UTF-16, so it skips the UTF-8 transcoding process (and strips the BOM). This is mostly good, but it does mean that invalid surrogate pairs in your UTF-16 string will be passed through to JavaScriptCore (same as source code).
- When a UTF-8 BOM is detected, it gets stripped before the string is passed to JavaScriptCore and invalid UTF-8 codepoints are replaced with the Unicode replacement character (`\uFFFD`).
- UTF-32 is not supported.
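These decoding rules match the defaults of the standard `TextDecoder`; the following sketch uses a plain `TextDecoder` (no `S3File`) to show the same two behaviors:

```typescript
// Default WHATWG decoding behavior: a leading UTF-8 BOM is stripped,
// and invalid UTF-8 bytes become the replacement character U+FFFD.
const utf8 = new TextDecoder("utf-8");

// UTF-8 BOM (EF BB BF) followed by "hi" — the BOM is stripped
const withBom = new Uint8Array([0xef, 0xbb, 0xbf, 0x68, 0x69]);
console.log(utf8.decode(withBom)); // "hi"

// 0xFF is never valid UTF-8 — replaced with U+FFFD
const invalid = new Uint8Array([0x68, 0xff, 0x69]);
console.log(utf8.decode(invalid)); // "h\uFFFDi"
```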

@@ -36,2 +36,10 @@ To add a particular package:

## `--peer`
To add a package as a peer dependency (`"peerDependencies"`):
```bash
$ bun add --peer @types/bun
```
## `--exact`

@@ -38,0 +46,0 @@

@@ -5,3 +5,3 @@ ---

To add an npm package as a peer dependency, use the `--optional` flag.
To add an npm package as an optional dependency, use the `--optional` flag.

@@ -8,0 +8,0 @@ ```sh

@@ -146,2 +146,8 @@ The `bun` CLI contains an `npm`-compatible package manager designed to be a faster replacement for existing package management tools like `npm`, `yarn`, and `pnpm`. It's designed for Node.js compatibility; use it in any Bun or Node.js project.

To add a package as a peer dependency (`"peerDependencies"`):
```bash
$ bun add --peer @types/bun
```
To install a package globally:

@@ -148,0 +154,0 @@

@@ -56,2 +56,12 @@ Bun supports [`workspaces`](https://docs.npmjs.com/cli/v9/using-npm/workspaces?v=true#description) in `package.json`. Workspaces make it easy to develop complex software as a _monorepo_ consisting of several independent packages.

`bun install` will install dependencies for all workspaces in the monorepo, de-duplicating packages if possible. If you only want to install dependencies for specific workspaces, you can use the `--filter` flag.
```bash
# Install dependencies for all workspaces starting with `pkg-` except for `pkg-c`
$ bun install --filter "pkg-*" --filter "!pkg-c"
# Paths can also be used. This is equivalent to the command above.
$ bun install --filter "./packages/pkg-*" --filter "!pkg-c" # or --filter "!./packages/pkg-c"
```
Workspaces have a couple major benefits.

@@ -58,0 +68,0 @@

@@ -344,3 +344,3 @@ Bun aims for complete Node.js API compatibility. Most `npm` packages intended for `Node.js` environments will work with Bun out of the box; the best way to know for certain is to try it.

🟡 Missing `domain` `initgroups` `setegid` `seteuid` `setgid` `setgroups` `setuid` `allowedNodeEnvironmentFlags` `getActiveResourcesInfo` `setActiveResourcesInfo` `moduleLoadList` `setSourceMapsEnabled`. `process.binding` is partially implemented.
🟡 Missing `initgroups` `allowedNodeEnvironmentFlags` `getActiveResourcesInfo` `setActiveResourcesInfo` `moduleLoadList` `setSourceMapsEnabled`. `process.binding` is partially implemented.

@@ -347,0 +347,0 @@ ### [`queueMicrotask()`](https://developer.mozilla.org/en-US/docs/Web/API/queueMicrotask)

{
"version": "1.1.43-canary.20250106T140553",
"version": "1.1.43-canary.20250107T145807",
"name": "bun-types",

@@ -4,0 +4,0 @@ "license": "MIT",

Sorry, the diff of this file is too big to display
