# 6xs

6xs stands for **S**imple **S**torage **S**ervice **S**tatic **S**ite **S**ync.
It takes your `/public` directory (or whatever you call it) and pushes its
contents (optionally filtered with node-glob patterns) into a selected S3 bucket.

It also can:

- remove remote files that are not found in your local directory
- create an invalidation for a chosen CloudFront distribution
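Surplus removal amounts to a set difference: any remote key with no matching local file is a deletion candidate. A minimal sketch of the idea (a hypothetical helper, not the library's actual implementation):

```javascript
// Remote keys that have no matching local file are the ones
// that would be deleted when surplus removal is enabled.
function surplusKeys(remoteKeys, localFiles) {
  return remoteKeys.filter(function (key) {
    return localFiles.indexOf(key) === -1;
  });
}

console.log(surplusKeys(
  ['index.html', 'old-page.html', 'css/site.css'],
  ['index.html', 'css/site.css']
)); // → [ 'old-page.html' ]
```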
## Usage

This usage example presents all available configuration options:
```javascript
var sync = require('6xs');
var path = require('path');

sync({
  base: path.join(__dirname, 'public'),
  patterns: ['*.html', 'font/*'],
  logger: function () {
    return console.log.apply(console, arguments);
  },
  contentTypeMap: {
    html: 'text/html',
    css: 'text/css',
    js: 'application/javascript',
    json: 'application/json'
  },
  aws: {
    access_key_id: 'abcdef...',
    secret_access_key: 'xyz987...',
    ssl: true,
    retries: 3,
    concurrency: 10
  },
  s3: {
    region: 'eu-west-1',
    bucket: 'your-bucket-name',
    remove_remote_surplus: true,
    max_age: 365,
    s_max_age: 1
  },
  cf_distribution_id: 'qwerty...'
}, function (err, uploadedFiles) {
  // handle the error or inspect the list of uploaded files here
});
```
## CLI usage

```
$ 6xs <settings/options>
```

This will upload the current working directory to the specified S3 bucket.
### Required settings

```
-i, --id        AWS Access Key ID
-s, --secret    AWS Secret Access Key
-b, --bucket    AWS S3 bucket name
-r, --region    AWS region
```
### Options

```
-p, --patterns     Glob patterns of the files to upload
                   default: **
                   e.g. *.html
                   e.g. *.html,fonts/*
-ma, --max-age     Cache-Control max-age header, in days
                   default: 365
-sa, --s-max-age   Cache-Control s-maxage header, in days
                   default: 1
--retries          Number of retries
                   default: 3
--concurrency      Number of concurrent uploads
                   default: 10
--remove-surplus   Remove remote files that are not found
                   in your local directory
--no-ssl           Don't use SSL
-cf, --cloudfront  The ID of the CloudFront distribution to invalidate
```
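`--max-age` and `--s-max-age` are given in days, while the `Cache-Control` header itself counts seconds, so the values presumably get converted before upload. A sketch under that assumption (not the library's actual code):

```javascript
// Build a Cache-Control header value from day counts;
// the header's unit is seconds (1 day = 86400 seconds).
function cacheControl(maxAgeDays, sMaxAgeDays) {
  var day = 24 * 60 * 60;
  return 'max-age=' + (maxAgeDays * day) +
    ', s-maxage=' + (sMaxAgeDays * day);
}

console.log(cacheControl(365, 1)); // → 'max-age=31536000, s-maxage=86400'
```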
### Examples

```
$ 6xs -i I2B -s KPAvL4GR -b my-s3-site.gov -r us-west-2 --remove-surplus
Uploading: ...
```
## Contributing

Pull requests and/or issue reports are warmly welcomed!

### Running tests

```
$ npm run test
$ npm run coverage
```
Running integration tests locally
Travis build won't run integration tests if your PR originates in a fork.
You'll need to provide 4 environmental variables to run integration tests
locally. The user identified by the access key has to have an appropriate
allowing policy for the S3 bucket assigned.
$ AWS_ACCESS_KEY_ID=key-id \
AWS_SECRET_ACCESS_KEY=secret \
S3_REGION=your-region \
S3_BUCKET=your-test-bucket \
npm run test-integration
If you understand implications you can copy integration-test.sh.dist
and
adjust it to your needs.
## Contributors

## License

MIT