@tus/s3-store
👉 **Note**: since 1.0.0 packages are split and published under the `@tus` scope. The old package, `tus-node-server`, is considered unstable and will only receive security fixes. Make sure to use the new package.
In Node.js (16.0+), install with npm:

```shell
npm install @tus/s3-store
```
```javascript
const {Server} = require('@tus/server')
const {S3Store} = require('@tus/s3-store')

const s3Store = new S3Store({
  partSize: 8 * 1024 * 1024, // each uploaded part will be ~8 MiB
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    },
  },
})
const server = new Server({path: '/files', datastore: s3Store})
// ...
```
This package exports `S3Store`. There is no default export.
`new S3Store(options)`

Creates a new AWS S3 store with options.
`options.bucket`

The bucket name.

`options.partSize`

The preferred size for parts sent to S3. Cannot be lower than 5 MiB or higher than 5 GiB. The server calculates the optimal part size, which takes this preference into account, but may increase it so the upload does not exceed S3's limit of 10,000 parts.
`options.minPartSize`

The minimum size for parts. Setting `partSize` and `minPartSize` to the same value ensures that all non-trailing parts are exactly the same size. Cannot be lower than 5 MiB or higher than 5 GiB. The server calculates the optimal part size, which takes this value into account, but may increase it so the upload does not exceed S3's limit of 10,000 parts.
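To make the 10,000-part limit concrete, here is a small illustrative calculation (a sketch for intuition, not the library's actual internal code): once the preferred part size would produce more than 10,000 parts for a given upload, the effective part size has to grow.

```javascript
// Illustrative sketch (not the library's internal implementation):
// how a preferred part size is adjusted to respect S3's limits.
const MIN_PART_SIZE = 5 * 1024 * 1024 // 5 MiB, the S3 minimum
const MAX_PARTS = 10000               // S3 multipart upload part limit

function effectivePartSize(uploadSize, preferredPartSize) {
  // If the preferred size would produce more than 10,000 parts,
  // grow the part size until the whole upload fits.
  return Math.max(
    preferredPartSize,
    Math.ceil(uploadSize / MAX_PARTS),
    MIN_PART_SIZE,
  )
}

// A 100 GiB upload with an 8 MiB preferred part size would need 12,800
// parts, so the part size is raised to roughly 10.24 MiB.
const size = effectivePartSize(100 * 1024 ** 3, 8 * 1024 * 1024)
console.log(size) // larger than 8 MiB
```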
`options.s3ClientConfig`

Options to pass to the AWS S3 SDK. Check out the `S3ClientConfig` docs for the supported options. You need to at least set the `region`, the `bucket` name, and your preferred method of authentication.
`options.expirationPeriodInMilliseconds`

Enables the expiration extension and sets the expiration period of an upload URL in milliseconds. Once the expiration period has passed, the upload URL will return a 410 Gone status code.
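For example, a store that expires unfinished uploads after 24 hours might be configured like this (a sketch; the bucket and region values are placeholders, and authentication is omitted for brevity — see the full example above):

```javascript
const {S3Store} = require('@tus/s3-store')

const s3Store = new S3Store({
  partSize: 8 * 1024 * 1024,
  expirationPeriodInMilliseconds: 24 * 60 * 60 * 1000, // 24 hours
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
  },
})
```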
`options.useTags`

Some S3 providers don't support tagging objects. If you are using features that rely on tagging, such as the expiration extension, and your provider doesn't support it, you can set this option to `false` to disable tagging.
`options.cache`

An optional cache implementation (`KvStore`). By default an in-memory cache (`MemoryKvStore`) is used. When running multiple instances of the server, you need to provide a cache implementation that is shared between all instances, such as the `RedisKvStore`. See the exported KV stores from `@tus/server` for more information.
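When scaling horizontally, wiring up a shared cache might look roughly like this. This is a sketch under assumptions: it assumes `RedisKvStore` is exported from `@tus/server` and takes a Redis client as its first argument — check the `@tus/server` documentation for the exact constructor signature.

```javascript
// Sketch (unverified constructor signature -- consult the @tus/server
// docs): share upload metadata between server instances through Redis.
const {Server, RedisKvStore} = require('@tus/server')
const {S3Store} = require('@tus/s3-store')
const redis = require('redis')

const client = redis.createClient({url: process.env.REDIS_URL})

const s3Store = new S3Store({
  cache: new RedisKvStore(client), // shared across all server instances
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
  },
})
```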
`options.maxConcurrentPartUploads`

This setting determines the maximum number of simultaneous part uploads to the S3 storage service. The default value is 60, chosen in conjunction with the typical `partSize` of 8 MiB to aim for an effective transfer rate of 3.84 Gbit/s.
Considerations: the ideal value for `maxConcurrentPartUploads` varies based on your `partSize` and the upload bandwidth to your S3 bucket. A larger `partSize` means less overall upload bandwidth available for other concurrent uploads.

Lowering the value: reducing `maxConcurrentPartUploads` decreases the number of simultaneous upload requests to S3. This can be beneficial for conserving memory, CPU, and disk I/O resources, especially in environments with limited system resources, where the upload speed is low, or where the part size is large.

Increasing the value: a higher value potentially enhances the data transfer rate to the server, but at the cost of increased resource usage (memory, CPU, and disk I/O). This can be advantageous when the goal is to maximize throughput and sufficient system resources are available.

Bandwidth considerations: if your upload bandwidth to S3 is the limiting factor, increasing `maxConcurrentPartUploads` won't lead to higher throughput. Instead, it will result in additional resource consumption without proportional gains in transfer speed.
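As a sanity check on the default, the 3.84 Gbit/s figure quoted above falls out of simple arithmetic if you approximate each 8 MiB part as 8 MB and assume every part takes about one second to upload (both assumptions are for illustration only, not measurements):

```javascript
// 60 concurrent parts, each approximated as 8 MB, assuming roughly
// one second per part upload (illustrative assumptions only).
const maxConcurrentPartUploads = 60
const partSizeMB = 8 // 8 MiB approximated as 8 MB for round numbers

// megabits in flight per second, converted to gigabits per second
const gbitPerSecond = (maxConcurrentPartUploads * partSizeMB * 8) / 1000
console.log(gbitPerSecond) // 3.84
```

Plugging in your own part size and a realistic per-part upload time gives a rough ceiling for what a given concurrency setting can achieve.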
The tus protocol supports optional extensions. Below is a table of the supported extensions in `@tus/s3-store`.

| Extension            | `@tus/s3-store` |
| -------------------- | --------------- |
| Creation             | ✅              |
| Creation With Upload | ✅              |
| Expiration           | ✅              |
| Checksum             | ❌              |
| Termination          | ✅              |
| Concatenation        | ❌              |
After a multipart upload is aborted, no additional parts can be uploaded using that upload ID. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to set an S3 Lifecycle configuration to abort incomplete multipart uploads.
Unlike other stores, the expiration extension on the S3 store does not need to call `server.cleanUpExpiredUploads()`. The store creates a `Tus-Completed` tag for all objects, including `.part` and `.info` files, to indicate whether an upload is finished. This means you could set up a lifecycle policy to automatically clean them up without a cron job.
```json
{
  "Rules": [
    {
      "Filter": {
        "Tag": {
          "Key": "Tus-Completed",
          "Value": "false"
        }
      },
      "Expiration": {
        "Days": 2
      }
    }
  ]
}
```
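Assuming the JSON above is saved as `lifecycle.json`, one way to apply it is with the standard AWS CLI (the bucket name here is a placeholder):

```shell
# Apply the lifecycle configuration to your bucket. The file must
# contain the rules under a top-level "Rules" key, as shown above.
aws s3api put-bucket-lifecycle-configuration \
  --bucket "$AWS_BUCKET" \
  --lifecycle-configuration file://lifecycle.json
```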
If you want more granularity, it is still possible to configure a cron job to call `server.cleanUpExpiredUploads()` yourself.
Example: use `credentials` to fetch credentials inside an AWS container

The `credentials` config is directly passed into the AWS SDK, so you can refer to the AWS docs for the supported values of `credentials`.
```javascript
const aws = require('aws-sdk')
const {Server} = require('@tus/server')
const {S3Store} = require('@tus/s3-store')

const s3Store = new S3Store({
  partSize: 8 * 1024 * 1024,
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
    credentials: new aws.ECSCredentials({
      httpOptions: {timeout: 5000},
      maxRetries: 10,
    }),
  },
})
const server = new Server({path: '/files', datastore: s3Store})
// ...
```
`@tus/s3-store` can be used with all S3-compatible storage solutions, including Cloudflare R2. However, R2 requires that all non-trailing parts are exactly the same size. This can be achieved by setting `partSize` and `minPartSize` to the same value.
```javascript
// ...
const s3Store = new S3Store({
  partSize: 8 * 1024 * 1024,
  minPartSize: 8 * 1024 * 1024,
  // ...
})
```
This package is fully typed with TypeScript.
This package requires Node.js 16.0+.
See `contributing.md`.