See also the companion CLI tool, s3-cli.
```js
var s3 = require('s3');

var client = s3.createClient({
  maxAsyncS3: 14,     // this is the default
  s3RetryCount: 3,    // this is the default
  s3RetryDelay: 1000, // this is the default
  s3Options: {
    accessKeyId: "your s3 key",
    secretAccessKey: "your s3 secret",
    // any other options are passed to new AWS.S3()
    // See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Config.html#constructor-property
  },
});
```
```js
// create a client from an existing AWS.S3 object
var AWS = require('aws-sdk');
var s3 = require('s3');

var awsS3Client = new AWS.S3(s3Options); // s3Options as described above
var options = {
  s3Client: awsS3Client,
};
var client = s3.fromAwsSdkS3(options);
```
```js
// upload a file to S3
var params = {
  localFile: "some/local/file",

  s3Params: {
    Bucket: "s3 bucket name",
    Key: "some/remote/file",
    // other options supported by putObject, except Body and ContentLength.
    // See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
  },
};
var uploader = client.uploadFile(params);
uploader.on('error', function(err) {
  console.error("unable to upload:", err.stack);
});
uploader.on('progress', function() {
  console.log("progress", uploader.progressMd5Amount,
    uploader.progressAmount, uploader.progressTotal);
});
uploader.on('end', function() {
  console.log("done uploading");
});
```
```js
// download a file from S3
var params = {
  localFile: "some/local/file",

  s3Params: {
    Bucket: "s3 bucket name",
    Key: "some/remote/file",
    // other options supported by getObject
    // See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getObject-property
  },
};
var downloader = client.downloadFile(params);
downloader.on('error', function(err) {
  console.error("unable to download:", err.stack);
});
downloader.on('progress', function() {
  console.log("progress", downloader.progressAmount, downloader.progressTotal);
});
downloader.on('end', function() {
  console.log("done downloading");
});
```
```js
// sync a directory to S3
var params = {
  localDir: "some/local/dir",
  deleteRemoved: true, // default false, whether to remove s3 objects
                       // that have no corresponding local file.

  s3Params: {
    Bucket: "s3 bucket name",
    Prefix: "some/remote/dir/",
    // other options supported by putObject, except Body and ContentLength.
    // See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
  },
};
var uploader = client.uploadDir(params);
uploader.on('error', function(err) {
  console.error("unable to sync:", err.stack);
});
uploader.on('progress', function() {
  console.log("progress", uploader.progressAmount, uploader.progressTotal);
});
uploader.on('end', function() {
  console.log("done uploading");
});
```
Consider increasing the socket pool size in the `http` and `https` global agents. This will improve bandwidth when using the `uploadDir` and `downloadDir` functions. For example:

```js
http.globalAgent.maxSockets = https.globalAgent.maxSockets = 20;
```
### s3.createClient(options)

Creates an S3 client.

`options`:

* `s3Client` - optional, an instance of `AWS.S3`. Leave blank if you provide `s3Options`.
* `s3Options` - optional. Leave blank if you provide `s3Client`. These options are passed to `new AWS.S3()`. See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Config.html#constructor-property
* `maxAsyncS3` - maximum number of simultaneous requests this client will ever have open to S3. Defaults to `14`.
* `s3RetryCount` - how many times to try an S3 operation before giving up. Default `3`.
* `s3RetryDelay` - how many milliseconds to wait before retrying an S3 operation. Default `1000`.

### s3.getPublicUrl(bucket, key, [bucketLocation])

* `bucket` - S3 bucket
* `key` - S3 key
* `bucketLocation` - string, the bucket's region, for example `eu-west-1`; omit for US Standard

You can find out your bucket location programmatically by using this API: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getBucketLocation-property

Returns a string which looks like this:

`https://s3.amazonaws.com/bucket/key`

or maybe this if you are not in US Standard:

`https://s3-eu-west-1.amazonaws.com/bucket/key`

### s3.getPublicUrlHttp(bucket, key)

* `bucket` - S3 Bucket
* `key` - S3 Key

Works for any region, and returns a string which looks like this:

`http://bucket.s3.amazonaws.com/key`
### client.uploadFile(params)

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property

`params`:

* `s3Params` - params to pass to AWS SDK `putObject`.
* `localFile` - path to the file on disk you want to upload to S3.
* `localFileStat` - optional. If you happen to already have the stat object from `fs.stat`, you can provide it here.

Note that this behaves differently from the AWS SDK `putObject`.

Returns an `EventEmitter` with these properties:

* `progressMd5Amount`
* `progressAmount`
* `progressTotal`

And these events:

* `'error' (err)`
* `'end' (data)` - emitted when the file is uploaded successfully. `data` is the same object that you get from `putObject` in the AWS SDK.
* `'progress'` - emitted when the `progressMd5Amount`, `progressAmount`, and `progressTotal` properties change.
* `'stream' (stream)` - emitted when a `ReadableStream` for `localFile` has been opened. Be aware that this might fire multiple times if a request to S3 must be retried.

### client.downloadFile(params)

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getObject-property
`params`:

* `localFile` - the destination path on disk to write the S3 object into
* `s3Params` - params to pass to AWS SDK `getObject`.

Note that this behaves differently from the AWS SDK `getObject`.

Returns an `EventEmitter` with these properties:

* `progressAmount`
* `progressTotal`

And these events:

* `'error' (err)`
* `'end'` - emitted when the file is downloaded successfully
* `'progress'` - emitted when the `progressAmount` and `progressTotal` properties change.

### client.listObjects(params)

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property
`params`:

* `recursive` - `true` or `false`, whether or not you want to recurse into directories.
* `s3Params` - params to pass to AWS SDK `listObjects`.

Note that if you set `Delimiter` in `s3Params` then you will get a list of objects and folders in the directory you specify. You probably do not want to set `recursive` to `true` at the same time as specifying a `Delimiter`, because this will cause a request per directory. If you want all objects that share a prefix, leave the `Delimiter` option `null` or `undefined`.

Be sure that `s3Params.Prefix` ends with a trailing slash (`/`) unless you are requesting the top-level listing, in which case `s3Params.Prefix` should be an empty string.

Note that this behaves differently from the AWS SDK `listObjects`.

Returns an `EventEmitter` with these properties:

* `progressAmount`
* `objectsFound`
* `dirsFound`

And these events:

* `'error' (err)`
* `'end'` - emitted when done listing; no more `'data'` events will be emitted.
* `'data' (data)` - emitted when a batch of objects is found. This is the same as the `data` object in the AWS SDK.
* `'progress'` - emitted when the `progressAmount`, `objectsFound`, and `dirsFound` properties change.

And these methods:

* `abort()` - call this to stop the find operation.

### client.deleteObjects(s3Params)

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#deleteObjects-property
`s3Params` are the same.

Note that this behaves differently from the AWS SDK `deleteObjects`.

Returns an `EventEmitter` with these properties:

* `progressAmount`
* `progressTotal`

And these events:

* `'error' (err)`
* `'end'` - emitted when all objects are deleted.
* `'progress'` - emitted when the `progressAmount` or `progressTotal` properties change.
* `'data' (data)` - emitted when a request completes. There may be more.

### client.uploadDir(params)

Syncs an entire directory to S3.
`params`:

* `localDir` - source path on the local file system to sync to S3
* `deleteRemoved` - delete S3 objects with no corresponding local file. Default `false`.
* `getS3Params` - optional function which will be called for every file that needs to be uploaded. See below.
* `followSymlinks` - defaults to `false`
* `s3Params`
  * `Prefix` (required)
  * `Bucket` (required)
(required)function getS3Params(localFile, stat, callback) {
// call callback like this:
var err = new Error(...); // only if there is an error
var s3Params = { // if there is no error
ContentType: getMimeType(localFile), // just an example
};
// pass `null` for `s3Params` if you want to skip uploading this file.
callback(err, s3Params);
}
Returns an `EventEmitter` with these properties:

* `progressAmount`
* `progressTotal`
* `progressMd5Amount`
* `progressMd5Total`
* `deleteAmount`
* `deleteTotal`
* `filesFound`
* `objectsFound`
* `doneFindingFiles`
* `doneFindingObjects`
* `doneMd5`

And these events:

* `'error' (err)`
* `'end'` - emitted when all files are uploaded
* `'progress'` - emitted when any of the above progress properties change.

`uploadDir` works like this:

1. It lists all S3 objects under `Prefix`. S3 guarantees returned objects to be in sorted order.
2. Meanwhile, it finds all files in `localDir`.
3. It then uploads, and, if `deleteRemoved` is set, deletes remote objects whose corresponding local files are missing.

### client.downloadDir(params)

Syncs an entire directory from S3.
`params`:

* `localDir` - destination directory on the local file system to sync to
* `deleteRemoved` - delete local files with no corresponding S3 object. Default `false`.
* `getS3Params` - optional function which will be called for every object that needs to be downloaded. See below.
* `s3Params`
  * `Prefix` (required)
  * `Bucket` (required)

```js
function getS3Params(localFile, s3Object, callback) {
  // localFile is the destination path where the object will be written to
  // s3Object is the same as one element in the `Contents` array from here:
  // http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property
  // call callback like this:
  var err = new Error(...); // only if there is an error
  var s3Params = { // if there is no error
    VersionId: "abcd", // just an example
  };
  // pass `null` for `s3Params` if you want to skip downloading this object.
  callback(err, s3Params);
}
```
Returns an `EventEmitter` with these properties:

* `progressAmount`
* `progressTotal`
* `progressMd5Amount`
* `progressMd5Total`
* `deleteAmount`
* `deleteTotal`
* `filesFound`
* `objectsFound`
* `doneFindingFiles`
* `doneFindingObjects`
* `doneMd5`

And these events:

* `'error' (err)`
* `'end'` - emitted when all files are downloaded
* `'progress'` - emitted when any of the progress properties above change.

`downloadDir` works like this:

1. It lists all S3 objects under `Prefix`. S3 guarantees returned objects to be in sorted order.
2. Meanwhile, it finds all files in `localDir`.
3. It then downloads, and, if `deleteRemoved` is set, deletes local files whose corresponding objects are missing.

### client.deleteDir(s3Params)

Deletes an entire directory on S3.
`s3Params`:

* `Bucket`
* `Prefix`
* `MFA` (optional)

Returns an `EventEmitter` with these properties:

* `progressAmount`
* `progressTotal`

And these events:

* `'error' (err)`
* `'end'` - emitted when all objects are deleted.
* `'progress'` - emitted when the `progressAmount` or `progressTotal` properties change.

`deleteDir` works by listing all objects under `Prefix` and then deleting them in batches.

### client.copyObject(s3Params)
See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#copyObject-property
`s3Params` are the same. Don't forget that `CopySource` must contain the source bucket name as well as the source key name.

Note that this behaves differently from the AWS SDK `copyObject`.

Returns an `EventEmitter` with these events:

* `'error' (err)`
* `'end' (data)`

### client.moveObject(s3Params)
See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#copyObject-property
`s3Params` are the same. Don't forget that `CopySource` must contain the source bucket name as well as the source key name.

Under the hood, this uses `copyObject` and then `deleteObjects`, only if the copy succeeded.

Returns an `EventEmitter` with these events:

* `'error' (err)`
* `'copySuccess' (data)`
* `'end' (data)`
### Testing

```
S3_KEY=<valid_s3_key> S3_SECRET=<valid_s3_secret> S3_BUCKET=<valid_s3_bucket> npm test
```
### History

* `maxAsyncS3` setting changed from `30` to `14`.
* Add `Expect: 100-continue` header to downloads.
* `uploadDir` and `downloadDir` completely rewritten with a more efficient algorithm, which is explained in the documentation.
* `maxAsyncS3` setting changed from `Infinity` to `30`.
* Add `followSymlinks` option to `uploadDir` and `downloadDir`.
* `uploadDir` and `downloadDir` support these additional progress properties: `filesFound`, `objectsFound`, `deleteAmount`, `deleteTotal`, `doneFindingFiles`, `doneFindingObjects`, `progressMd5Amount`, `progressMd5Total`, `doneMd5`.
* `getPublicUrl` API changed to support bucket regions. Use `getPublicUrlHttp` if you want an insecure URL.
* `downloadFile` respects `maxAsyncS3`.
* Add `copyObject` API.
* Errors with `retryable` set to `false` are not retried.
* Add `moveObject` API.
* `uploadFile` emits a `stream` event.
* Fix `listObjects` for greater than 1000 objects.
* `downloadDir` supports the `getS3Params` parameter.
* `uploadDir` and `downloadDir` expose `objectsFound` progress.
* `uploadDir` accepts a `getS3Params` function parameter.
* Fix `uploadDir` and `downloadDir` with an empty `Prefix`.
* Fix `uploadDir`, `downloadDir`, `listObjects`, `deleteObject`, and `deleteDir`.
* Fix `resp.req.url` sometimes not defined, causing a crash.
* Fix `end` event emitted before the write completely finished.
High-level Amazon S3 client: upload and download files and directories.
The npm package s3 receives a total of 17,255 weekly downloads; as such, its popularity is classified as popular. We found that s3 demonstrates an unhealthy version release cadence and low project activity, because the last version was released a year ago. It has 1 open source maintainer collaborating on the project.