Deploy

Easy deployment to aws using docker and aws cli

Features:

  • Easy to use
  • Deploy to your elastic beanstalk environments
  • Build and tag docker images
  • Push to private ECR registry
  • Interactive cmd line utility

About

We needed a better way to package and deploy apps. This package aims to make deployments to multiple elastic beanstalk apps and environments really simple. It can be used as a non-interactive cmd line utility, required and used programmatically, or as an interactive cmd line prompt.
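
The programmatic API is not documented here, so the snippet below is only a hypothetical sketch: it assumes the module's export is a function that accepts the same options as the cmd line flags and returns a promise.

// Hypothetical usage sketch: the deploy() signature and promise return
// are assumptions, not documented behavior
const deploy = require('@danmasta/deploy');

deploy({
    application: 'appname',
    environment: 'envname',
    region: 'us-east-1',
    interactive: false
})
.then(() => console.log('deploy complete'))
.catch(err => console.error('deploy failed', err));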

Usage

Install globally with npm

npm install -g @danmasta/deploy

Run the deploy script

deploy

For more usage info check out the examples

Options

Options can be passed via cmd line arguments, as an object if you require the package programmatically, or stored with your other configuration if using a config package.

| Name | Type | Description |
| --- | --- | --- |
| -a, --application | string | Name of the elastic beanstalk application to deploy to. Default is null |
| -e, --environment | string | Name of the elastic beanstalk environment to deploy to. Default is null |
| -u, --ecr_url | string | EC2 container registry url. Default is null |
| -b, --eb_bucket | string | s3 bucket url for pushing application zip. Default is null |
| -r, --region | string | AWS region. Default is us-east-1 |
| -o, --output_dir | string | Location to save application zip before pushing to s3. Default is ./dist/deploy |
| -v, --version | string | Version string used to tag docker image. Default is null |
| -d, --dockerrun | string | Location of your dockerrun file. Default is ./Dockerrun.aws.json |
| -i, --interactive | boolean | If true, runs the interactive cmd prompt. Default is true |
| -s, --silent | boolean | If true, disables log output. Default is false |
| --regions | array | Optional list of regions to show in interactive prompt. Default is all aws elastic beanstalk regions |
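
For example, a deploy with options passed as cmd line flags might look like this (the application, environment, registry, and bucket values below are placeholders):

deploy -a appname -e envname -r us-east-1 -u <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com -b elasticbeanstalk-us-east-1-<ACCOUNT_ID> -v 1.0.0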

It's really simple to store default deploy opts in config and just run deploy

Setup

Here are some basic instructions to help get you started with docker and aws. They may not be exhaustive, but they should be enough to get you headed in the right direction.

Docker - Mac

Install docker for mac and follow the instructions. I don't currently have any more input on this platform.

Docker - Windows

Install

Install docker native for Windows (v17.06 or later). We will be following a setup example similar to the one here: https://docs.docker.com/machine/drivers/hyper-v/

Hyper-V

Enable Hyper-V in the BIOS

Enable Hyper-V in Windows by searching for 'Turn Windows features on and off'

Make sure the Hyper-V options are enabled
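
If you prefer the shell to the GUI, the same feature can be enabled from an elevated PowerShell (a reboot is required afterwards):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All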

Virtual Switch

You will also need to create a virtual switch to use docker on windows

Search for and open the 'Hyper-V Manager'

Open 'Virtual Switch Manager' in the actions pane on the right side

Make sure External is highlighted, then click 'Create Virtual Switch'

Select your NIC (the default is usually fine), set the name to something like 'external-switch', and click OK
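
The equivalent from an elevated PowerShell looks roughly like this (the adapter name below is a placeholder, use Get-NetAdapter to find yours):

New-VMSwitch -Name 'external-switch' -NetAdapterName 'Ethernet' -AllowManagementOS $true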

Docker Machine

You need to create at least a default docker machine. You can use the following script:

docker-machine create -d hyperv --hyperv-virtual-switch 'external-switch' default

Note: when interacting with docker machines that use the hyperv driver, you will need an elevated/admin shell
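
Once the machine is created you will usually want to point your shell at it; from an elevated PowerShell, for example:

docker-machine env default | Invoke-Expression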

AWS

Install

First add python to your path, then use pip to install aws cli and eb cli

pip install awscli awsebcli --upgrade --user

Configure

Next you need to configure aws cli with your credentials so you can access aws tools

aws configure

It will look like this

AWS Access Key ID [None]: <KEY>
AWS Secret Access Key [None]: <SECRET>
Default region name [None]: us-east-1
Default output format [None]: json

You will be asked for your access key and secret

Now login to your docker registry by running the get-login command

aws ecr get-login --no-include-email --region us-east-1

This will output the docker login command which looks something like this

docker login -u AWS -p <KEY> https://<ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com

Copy and paste that command into your shell to complete login.

Note: the login cmd is run automatically each time you deploy, but it's still good to test it right now to make sure you have everything working

Create an application in elastic beanstalk, then go to IAM and attach the following policies to the aws-elasticbeanstalk-ec2-role:

  • AmazonEC2ContainerRegistryReadOnly
  • AmazonAPIGatewayPushToCloudWatchLogs

Create an ECR Registry for your app

Create log groups in cloudwatch for your app (optional)

These do two things: they let your eb environments pull from your private ecr registry, and they give your ec2 instances the ability to push logs to cloudwatch.
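
If you prefer the aws cli over the console for these steps, the equivalent commands look roughly like this (the repository and log group names are placeholders, and the policy ARNs should be double-checked in IAM):

aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role --policy-arn arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs
aws ecr create-repository --repository-name appname
aws logs create-log-group --log-group-name envname-app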

Now you are ready to set up an application and deploy!

Examples

Use config to set defaults

// ./config/default.js
module.exports = {
    deploy: {
        application: 'appname',
        environment: 'envname',
        ecr_url: '<ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com',
        eb_bucket: 'elasticbeanstalk-us-west-2-<ACCOUNT_ID>',
        region: 'us-west-2',
        output_dir: './dist/deploy',
        version: null,
        dockerrun: './Dockerrun.aws.json',
        interactive: true,
        regions: [
            'us-west-2',
            'us-east-1',
            'eu-west-1',
            'ap-northeast-1'
        ]
    }
}

Multiple Configs / Environments

Since this package uses the env and config packages, you can easily switch config values with cmd args. So if you have a config structure like this:

./config/default.js
./config/production.js
./config/staging.js
./config/qa1.js
./config/qa2.js

You can load the deploy values for the qa1 environment by just running:

deploy --env production --config qa1
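
In that layout, each environment file only needs to override the values that differ from default. A hypothetical qa1 config might look like this (exact merge behavior depends on the config package):

// ./config/qa1.js (hypothetical override, merged over ./config/default.js)
module.exports = {
    deploy: {
        environment: 'qa1-envname',
        eb_bucket: 'elasticbeanstalk-us-west-2-<ACCOUNT_ID>'
    }
}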

Dockerrun Example - Multi-Container

Deploy can interpolate values from your dockerrun template; just use handlebars syntax.

{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
  ],
  "containerDefinitions": [
    {
      "name": "{{application}}",
      "image": "{{image}}",
      "essential": true,
      "memory": 512,
      "mountPoints": [
        {
          "sourceVolume": "awseb-logs-{{application}}",
          "containerPath": "/var/log/app"
        }
      ],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "{{environment}}-app",
          "awslogs-region": "{{region}}"
        }
      }
    }
  ]
}
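
For example, with application set to appname, environment to envname, region to us-west-2, and a version tag of 1.0.0, the interpolated fields would render roughly like this (assuming {{image}} resolves to the ecr url plus the application name and version tag, which is an assumption rather than documented behavior):

"name": "appname",
"image": "<ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/appname:1.0.0",
"awslogs-group": "envname-app",
"awslogs-region": "us-west-2"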

Contact

If you have any questions, feel free to get in touch.
