s3 - npm Package Compare versions

Comparing version 3.1.3 to 4.0.0

CHANGELOG.md

package.json
 {
   "name": "s3",
-  "version": "3.1.3",
+  "version": "4.0.0",
   "description": "high level amazon s3 client. upload and download files and directories",
-  "main": "index.js",
+  "main": "lib/index.js",
   "scripts": {

@@ -22,3 +22,6 @@ "test": "mocha"

   "stream",
-  "async"
+  "async",
+  "parallel",
+  "multipart",
+  "size"
   ],

@@ -31,14 +34,14 @@ "author": "Andrew Kelley",

   "devDependencies": {
-    "mocha": "~1.18.2",
-    "ncp": "~0.5.1"
+    "mocha": "~1.21.4",
+    "ncp": "~0.6.0"
   },
   "dependencies": {
-    "aws-sdk": "~2.0.8",
+    "aws-sdk": "~2.0.12",
     "findit": "~2.0.0",
     "graceful-fs": "~3.0.2",
     "mkdirp": "~0.5.0",
-    "pend": "~1.1.1",
+    "pend": "~1.1.2",
     "rimraf": "~2.2.8",
-    "stream-counter": "~1.0.0"
+    "fd-slicer": "~0.1.0"
   }
 }
 # High Level Amazon S3 Client

-## Features and Limitations
+## Features

@@ -11,6 +11,9 @@ * Automatically retry a configurable number of times when S3 returns an error.

 * Progress reporting.
-* Limited to files less than 5GB.
-* Limited to objects which were not uploaded using a multipart request.
+* Supports files of any size (up to S3's maximum 5 TB object size limit).
+* Uploads large files quickly using parallel multipart uploads.
+* Uses heuristics to compute multipart ETags client-side to avoid uploading
+  or downloading files unnecessarily.

-See also the companion CLI tool, [s3-cli](https://github.com/andrewrk/node-s3-cli).
+See also the companion CLI tool which is meant to be a drop-in replacement for
+s3cmd: [s3-cli](https://github.com/andrewrk/node-s3-cli).

@@ -25,5 +28,7 @@ ## Synopsis

 var client = s3.createClient({
-  maxAsyncS3: 14, // this is the default
-  s3RetryCount: 3 // this is the default
+  maxAsyncS3: 20, // this is the default
+  s3RetryCount: 3, // this is the default
+  s3RetryDelay: 1000, // this is the default
+  multipartUploadThreshold: 20971520, // this is the default (20 MB)
+  multipartUploadSize: 15728640, // this is the default (15 MB)
   s3Options: {

@@ -150,3 +155,3 @@ accessKeyId: "your s3 key",

 * `maxAsyncS3` - maximum number of simultaneous requests this client will
-  ever have open to S3. defaults to `14`.
+  ever have open to S3. defaults to `20`.
 * `s3RetryCount` - how many times to try an S3 operation before giving up.

@@ -156,2 +161,10 @@ Default 3.

   operation. Default 1000.
+* `multipartUploadThreshold` - if a file is this many bytes or greater, it
+  will be uploaded via a multipart request. Default is 20MB. Minimum is 5MB.
+  Maximum is 5GB.
+* `multipartUploadSize` - when uploading via multipart, this is the part size.
+  The minimum size is 5MB. The maximum size is 5GB. Default is 15MB. Note that
+  S3 has a maximum of 10000 parts for a multipart upload, so if this value is
+  too small, it will be ignored in favor of the minimum necessary value
+  required to upload the file.
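The interaction between `multipartUploadSize` and the 10000-part cap can be sketched numerically (the function name here is illustrative, not part of the module's API):

```javascript
// Sketch: if the configured part size would exceed S3's 10000-part limit
// for a given file, fall back to the minimum part size that fits.
function effectivePartSize(fileSize, multipartUploadSize) {
  const MAX_PARTS = 10000;
  const minNeeded = Math.ceil(fileSize / MAX_PARTS);
  return Math.max(multipartUploadSize, minNeeded);
}
```

For example, a 5 TB file split into 10000 parts needs parts of roughly 550 MB, so the default 15 MB part size would be overridden for such a file.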

@@ -200,4 +213,2 @@ ### s3.getPublicUrl(bucket, key, [bucketLocation])

 * `localFile`: path to the file on disk you want to upload to S3.
-* `localFileStat`: optional - if you happen to already have the stat object
-  from `fs.stat`, you can provide it here.

@@ -208,2 +219,4 @@ The difference between using AWS SDK `putObject` and this one:

 * If the reported MD5 upon upload completion does not match, it retries.
+* If the file size is large enough, uses multipart upload to upload parts in
+  parallel.
 * Retry based on the client's retry settings.

@@ -224,7 +237,14 @@ * Progress reporting.

 * `'progress'` - emitted when `progressMd5Amount`, `progressAmount`, and
-  `progressTotal` properties change.
-* `'stream' (stream)` - emitted when a `ReadableStream` for `localFile` has
-  been opened. Be aware that this might fire multiple times if a request to S3
-  must be retried.
+  `progressTotal` properties change. Note that it is possible for progress to
+  go backwards when an upload fails and must be retried.
+* `'fileOpened' (fdSlicer)` - emitted when `localFile` has been opened. The file
+  is opened with the [fd-slicer](https://github.com/andrewrk/node-fd-slicer)
+  module because we might need to read from multiple locations in the file at
+  the same time. `fdSlicer` is an object for which you can call
+  `createReadStream(options)`. See the fd-slicer README for more information.

 And these methods:
 * `abort()` - call this to stop the find operation.
### client.downloadFile(params)

@@ -415,3 +435,3 @@

 };
-// pass `null` for `s3Params` if you want to skip dowlnoading this object.
+// pass `null` for `s3Params` if you want to skip downloading this object.
 callback(err, s3Params);
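A minimal `getS3Params` using the skip convention above might look like the following. The `(localFile, s3Object, callback)` signature, the `.tmp` filter, and the empty-params default are assumptions for this sketch, not taken from the module's docs:

```javascript
// Illustrative getS3Params callback: pass `null` for s3Params to skip an
// object. The `.tmp` filter here is a hypothetical example policy.
function getS3Params(localFile, s3Object, callback) {
  if (s3Object.Key && s3Object.Key.endsWith('.tmp')) {
    return callback(null, null); // skip downloading this object
  }
  callback(null, {}); // download with default s3Params
}
```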

@@ -520,92 +540,3 @@ }

-## History
-### 3.1.3
-* `uploadDir` and `downloadDir`: fix incorrectly deleting files
-### 3.1.2
-* add license
-* update aws-sdk to 2.0.6. Fixes SSL download reliability.
-### 3.1.1
-* `uploadDir` handles source directory not existing error correctly
-### 3.1.0
-* `uploadFile` computes MD5 and sends bytes at the same time
-* `getPublicUrl` handles `us-east-1` bucket location correctly
-### 3.0.2
-* fix upload path on Windows
-### 3.0.1
-* Default `maxAsyncS3` setting change from `30` to `14`.
-* Add `Expect: 100-continue` header to downloads.
-### 3.0.0
-* `uploadDir` and `downloadDir` completely rewritten with more efficient
-  algorithm, which is explained in the documentation.
-* Default `maxAsyncS3` setting changed from `Infinity` to `30`.
-* No longer recommend adding graceful-fs to your app.
-* No longer recommend increasing ulimit for number of open files.
-* Add `followSymlinks` option to `uploadDir` and `downloadDir`
-* `uploadDir` and `downloadDir` support these additional progress properties:
-  - `filesFound`
-  - `objectsFound`
-  - `deleteAmount`
-  - `deleteTotal`
-  - `doneFindingFiles`
-  - `doneFindingObjects`
-  - `progressMd5Amount`
-  - `progressMd5Total`
-  - `doneMd5`
-### 2.0.0
-* `getPublicUrl` API changed to support bucket regions. Use `getPublicUrlHttp`
-  if you want an insecure URL.
-### 1.3.0
-* `downloadFile` respects `maxAsyncS3`
-* Add `copyObject` API
-* AWS JS SDK updated to 2.0.0-rc.18
-* errors with `retryable` set to `false` are not retried
-* Add `moveObject` API
-* `uploadFile` emits a `stream` event.
-### 1.2.1
-* fix `listObjects` for greater than 1000 objects
-* `downloadDir` supports `getS3Params` parameter
-* `uploadDir` and `downloadDir` expose `objectsFound` progress
-### 1.2.0
-* `uploadDir` accepts `getS3Params` function parameter
-### 1.1.1
-* fix handling of directory separator in Windows
-* allow `uploadDir` and `downloadDir` with empty `Prefix`
-### 1.1.0
-* Add an API function to get the HTTP url to an S3 resource
-### 1.0.0
-* complete module rewrite
-* depend on official AWS SDK instead of knox
-* support `uploadDir`, `downloadDir`, `listObjects`, `deleteObject`, and `deleteDir`
-### 0.3.1
-* fix `resp.req.url` sometimes not defined causing crash
-* fix emitting `end` event before write completely finished

 Tests upload and download large amounts of data to and from S3. The test
 timeout is set to 40 seconds because Internet connectivity varies wildly.