# abstract-tus-store

Black box test suite and interface specification for Tus-like stores.
Inspired by abstract-blob-store.

Tus stores implement an API for creating and writing sequentially to
"upload resources".
The required interface consists of 4 functions:

- `create(key[, opts])` creates a new upload resource
- `info(uploadId)` returns the current offset, upload length, and metadata of an upload resource
- `append(uploadId, readStream, [offset,] [opts])` appends to an upload resource
- `createReadStream(key)` creates a readable stream for the key
  (primarily used to test implementations)
Optional interface:

- `createPartial([opts])` creates a new "partial" upload resource
- `concat(key, uploadIds, [opts])` concatenates "partial" upload resources to `key`
- `del(uploadId)` deletes an upload resource to free up resources
- `minChunkSize` optional property that announces the minimum number of
  bytes to write in an `append` call (except for the last one)
## Some modules that use this

Send a PR adding yours if you write a new one.
## Badge

Include this badge in your readme if you make a new module that uses the
abstract-tus-store API:
## Install

```shell
npm install --save-dev abstract-tus-store
```

Requires Node v6+.
## Usage

See the ./test directory for a usage example.

In your tests:

```js
import testStore from 'abstract-tus-store'
import memStore from 'abstract-tus-store/mem-store'

testStore({
  setup() {
    return memStore()
  },
  teardown() {
    // release any resources acquired in setup
  },
})
```
## API

This documentation is mainly targeted at implementers.

General notes:

- All functions must return promises.
- Error classes are available as keys on an `errors` object exported by
  `abstract-tus-store`, e.g.:

  ```js
  import { errors } from 'abstract-tus-store'

  new errors.UnknownKey('some unknown key')
  ```
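In an implementation, these error classes can be plain `Error` subclasses. A minimal sketch follows; only the class names come from this document, while the constructor arguments and messages are assumptions:

```js
// Minimal error classes a store implementation might export.
// Constructor signatures here are illustrative, not specified.
class UploadNotFound extends Error {
  constructor(uploadId) {
    super(`upload not found: ${uploadId}`)
    this.name = 'UploadNotFound'
  }
}

class OffsetMismatch extends Error {
  constructor(expected, actual) {
    super(`offset mismatch: expected ${expected}, got ${actual}`)
    this.name = 'OffsetMismatch'
  }
}

class UnknownKey extends Error {
  constructor(key) {
    super(`unknown key: ${key}`)
    this.name = 'UnknownKey'
  }
}

// Exposed as keys on a single `errors` object, as described above.
const errors = { UploadNotFound, OffsetMismatch, UnknownKey }
```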
Some terminology and definitions:

- A *key* represents the final destination (in the store) of an upload
  resource's content and metadata (once the upload is complete).
- Keys should be used "as is" to ensure interoperability with
  other libraries and external systems. For example, an AWS S3 store
  should use the key directly as the S3 key.
- An *upload resource*:
  - represents an upload in progress;
  - is linked to exactly one key, or to no key at all in the case of
    partial uploads.
- A *deferred upload* is an upload resource that has not been assigned
  an upload length yet.
- A *partial upload* is an upload resource with no linked key.
- An *upload length* is the total expected size of an upload resource
  (e.g. the size of a file to upload).
- An *offset* represents how many bytes have been written to an upload
  resource so far.
- An upload resource is *complete* when its offset reaches its
  upload length.
- An *upload ID* uniquely identifies an upload resource.
  - Upload IDs should be unique and unpredictable so that library users
    can send/receive them directly to/from untrusted systems.
### create(key[, opts])

Create a new upload resource that will be stored at `key` once
completed.

- `key`: *String*, **required**. Location of the content and metadata of
  the completed upload resource.
- `opts.uploadLength`: *Number*. Expected size of the upload in bytes.
- `opts.metadata`: *Object*. Arbitrary map of keys/values.

Must resolve to an object with the following properties:

- `uploadId`: *String*, **required**. Unique, unpredictable identifier for
  the created upload resource.

If `opts.uploadLength` is not supplied, it must be passed to a
subsequent call to `append`, or else the upload will never complete.

Calls to `create` should always return a new and unique upload ID.
### info(uploadId)

Get the `offset` and `uploadLength` of an upload resource.

- `uploadId`: *String*, **required**. A known upload ID.

Must resolve to an object with the following properties:

- `offset`: *Number*, **required**. Offset of the upload resource. Must be
  present even if the offset is 0 or the upload is already completed.
- `uploadLength`: *Number*, required if known. Upload length of the
  upload resource.
- `metadata`: *Object*. Metadata that was set at creation.
- `isPartial`: *Boolean*. `true` if the upload is a partial upload.

Must throw an `UploadNotFound` error if the upload resource does not
exist. Must not attempt to create the upload resource if it does not
already exist.
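The lookup-only behaviour might be sketched as follows; the in-memory state and the "no key means partial" convention are assumptions for illustration:

```js
const uploads = new Map([
  // illustrative pre-existing upload resource
  ['id-1', { key: 'cat.jpg', offset: 100, uploadLength: 4096, metadata: { type: 'jpg' } }],
])

async function info(uploadId) {
  const u = uploads.get(uploadId)
  if (!u) {
    // Report, never create: a missing resource stays missing.
    throw Object.assign(new Error(`upload not found: ${uploadId}`), { name: 'UploadNotFound' })
  }
  return {
    offset: u.offset, // always present, even when 0 or already complete
    uploadLength: u.uploadLength,
    metadata: u.metadata,
    isPartial: !u.key, // partial uploads have no linked key
  }
}
```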
### append(uploadId, data, [offset,] [opts])

Append data to an upload resource.

- `uploadId`: *String*, **required**. A known upload ID.
- `data`: *Readable stream*, **required**. Data that will be appended to
  the upload resource.
- `offset`: *Number*, optional. Offset to help prevent data corruption.
- `opts.beforeComplete`: *Function*. (Async) function that will be called
  as `beforeComplete(upload, uploadId)` before completing the upload.
- `opts.uploadLength`: *Number*. Used to set the length of a
  deferred upload.

Resolves to an object with the following properties:

- `offset`: *Number*, **required**. The new offset of the upload.
- `upload`: *Object*, required if the append causes the upload to
  complete. The upload object as returned by `info(uploadId)`.

Data must be read and written to the upload resource until the data
stream ends or the upload completes (`offset === uploadLength`).
The optional `offset` parameter can be used to prevent data corruption.
Say you want to resume uploading a file: you get the current offset of
the upload with a call to `info`, then call `append` with a read stream
that starts reading the file at `offset`. If the offset has changed
between your call to `info` and your call to `append`, `append` will
throw an `OffsetMismatch` error.
If the call to `append` causes the upload resource to complete, `append`
must not resolve until the upload resource's content and metadata become
available at its linked key (except for partial uploads, which do not
have a key). If an object already exists at that key, it must be
overwritten.

Must throw an `OffsetMismatch` error if the supplied offset is
incorrect.

Must throw an `UploadNotFound` error if the upload does not exist.
### createReadStream(key[, onInfo])

Create a readable stream to read the key's content from the backing
store. This is mostly used for testing implementations.

- `key`: *String*, **required**. Key to read from the store.
- `onInfo`: *Function*, optional. Callback that will be called with
  an `{ contentLength, metadata }` object.
## Optional APIs

TODO...

- `createPartial(opts)` creates a new "partial" upload resource
- `concat(key, uploadIds, [opts])` concatenates "partial" upload resources to `key`
- `del(uploadId)` deletes an upload resource to free up resources
- `minChunkSize` optional property that announces the minimum number of
  bytes that must be written in an `append` call (except for the last one)