@aws-cdk/aws-batch
AWS CDK v1 reached End-of-Support on 2023-06-01. This package is no longer being updated, and users should migrate to AWS CDK v2.
For more information on how to migrate, see the Migrating to AWS CDK v2 guide.
This module is part of the AWS Cloud Development Kit project.
AWS Batch is a batch processing tool for efficiently running hundreds of thousands of computing jobs in AWS. Batch can dynamically provision different types of compute resources based on the resource requirements of submitted jobs.
AWS Batch simplifies the planning, scheduling, and execution of your batch workloads across a full range of compute services, such as Amazon EC2 and Spot Instances.
Batch achieves this by utilizing queue processing of batch job requests. To successfully submit a job for execution, you need the following resources:
1. Job Definition - groups job properties (container image, resource requirements, environment variables, and so on) into a single definition used at job submission time
2. Compute Environment - the execution runtime for submitted batch jobs
3. Job Queue - the queue to which batch jobs are submitted via the AWS SDK or CLI
For more information on AWS Batch visit the AWS Docs for Batch.
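To see how these pieces fit together before diving into each one, here is a minimal sketch; the construct ids and the docker/whalesay image are illustrative placeholders, and each resource is covered in detail in the sections below:
import * as batch from '@aws-cdk/aws-batch';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as ecs from '@aws-cdk/aws-ecs';

declare const vpc: ec2.Vpc;

// 1. Compute environment: the execution runtime (managed by AWS by default)
const computeEnvironment = new batch.ComputeEnvironment(this, 'MinimalComputeEnv', {
  computeResources: { vpc },
});

// 2. Job queue: where jobs are submitted, backed by the compute environment
const jobQueue = new batch.JobQueue(this, 'MinimalJobQueue', {
  computeEnvironments: [{ computeEnvironment, order: 1 }],
});

// 3. Job definition: what each job runs (container image, resource requirements, ...)
const jobDefinition = new batch.JobDefinition(this, 'MinimalJobDef', {
  container: {
    image: ecs.ContainerImage.fromRegistry('docker/whalesay'),
  },
});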
At the core of AWS Batch is the compute environment. All batch jobs are processed within a compute environment, which uses resources like On-Demand/Spot EC2 instances or Fargate.
In MANAGED mode, AWS will handle the provisioning of compute resources to accommodate the demand. Otherwise, in UNMANAGED mode, you will need to manage the provisioning of those resources.
Below is an example of each available type of compute environment:
declare const vpc: ec2.Vpc;
// default is managed
const awsManagedEnvironment = new batch.ComputeEnvironment(this, 'AWS-Managed-Compute-Env', {
computeResources: {
vpc,
}
});
const customerManagedEnvironment = new batch.ComputeEnvironment(this, 'Customer-Managed-Compute-Env', {
managed: false, // unmanaged environment
});
It is possible to have AWS Batch submit Spot Fleet requests for obtaining compute resources. Below is an example of how this can be done:
const vpc = new ec2.Vpc(this, 'VPC');
const spotEnvironment = new batch.ComputeEnvironment(this, 'MySpotEnvironment', {
computeResources: {
type: batch.ComputeResourceType.SPOT,
bidPercentage: 75, // Bids for resources at 75% of the on-demand price
vpc,
},
});
It is possible to have AWS Batch submit jobs to be run on Fargate compute resources. Below is an example of how this can be done:
const vpc = new ec2.Vpc(this, 'VPC');
const fargateSpotEnvironment = new batch.ComputeEnvironment(this, 'MyFargateEnvironment', {
computeResources: {
type: batch.ComputeResourceType.FARGATE_SPOT,
vpc,
},
});
AWS Batch uses an allocation strategy to determine which compute resource will efficiently handle incoming job requests. By default, BEST_FIT will pick an available compute instance based on vCPU requirements. If none exist, the job will wait until resources become available. However, with this strategy, jobs may wait in the queue unnecessarily even though more powerful instances are available. Below is an example of how that situation might look:
Compute Environment:
1. m5.xlarge => 4 vCPU
2. m5.2xlarge => 8 vCPU
Job Queue:
---------
| A | B |
---------
Job Requirements:
A => 4 vCPU - ALLOCATED TO m5.xlarge
B => 2 vCPU - WAITING
In this situation, Batch will allocate Job A to compute resource #1 because it is the most cost-efficient resource that matches the vCPU requirement. However, with the BEST_FIT strategy, Job B will not be allocated to our other available compute resource even though it is powerful enough to handle the job. Instead, it will wait until the first job finishes processing, or until a similar m5.xlarge resource is provisioned. The alternative is the BEST_FIT_PROGRESSIVE strategy, which lets the remaining job be handled on larger instances regardless of vCPU requirement and cost, as shown in the sketch below.
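As a short sketch of how the strategy is selected, assuming the allocationStrategy property exposed on computeResources by this module:
declare const vpc: ec2.Vpc;

const progressiveEnvironment = new batch.ComputeEnvironment(this, 'BestFitProgressiveEnv', {
  computeResources: {
    // Let Batch pick progressively larger instance types when the
    // best-fitting ones are not available
    allocationStrategy: batch.AllocationStrategy.BEST_FIT_PROGRESSIVE,
    vpc,
  },
});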
AWS Batch also supports customizing the compute resources it provisions with an EC2 Launch Template. Simply define your Launch Template:
// This example is only available in TypeScript
const myLaunchTemplate = new ec2.CfnLaunchTemplate(this, 'LaunchTemplate', {
launchTemplateName: 'extra-storage-template',
launchTemplateData: {
blockDeviceMappings: [
{
deviceName: '/dev/xvdcz',
ebs: {
encrypted: true,
volumeSize: 100,
volumeType: 'gp2',
},
},
],
},
});
and use it:
declare const vpc: ec2.Vpc;
declare const myLaunchTemplate: ec2.CfnLaunchTemplate;
const myComputeEnv = new batch.ComputeEnvironment(this, 'ComputeEnv', {
computeResources: {
launchTemplate: {
launchTemplateName: myLaunchTemplate.launchTemplateName as string, //or simply use an existing template name
},
vpc,
},
computeEnvironmentName: 'MyStorageCapableComputeEnvironment',
});
To import an existing batch compute environment, call ComputeEnvironment.fromComputeEnvironmentArn(). Below is an example:
const computeEnv = batch.ComputeEnvironment.fromComputeEnvironmentArn(this, 'imported-compute-env', 'arn:aws:batch:us-east-1:555555555555:compute-environment/My-Compute-Env');
Occasionally, you will need to deviate from the default processing AMI.
ECS Optimized Amazon Linux 2 example:
declare const vpc: ec2.Vpc;
const myComputeEnv = new batch.ComputeEnvironment(this, 'ComputeEnv', {
computeResources: {
image: new ecs.EcsOptimizedAmi({
generation: ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
}),
vpc,
}
});
Custom AMI example:
declare const vpc: ec2.Vpc;
const myComputeEnv = new batch.ComputeEnvironment(this, 'ComputeEnv', {
computeResources: {
image: ec2.MachineImage.genericLinux({
"[aws-region]": "[ami-ID]",
}),
vpc,
}
});
Jobs are always submitted to a specific queue. This means that you have to create a queue before you can start submitting jobs. Each queue is mapped to at least one (and no more than three) compute environments. When a job is scheduled for execution, AWS Batch selects the compute environment based on ordinal priority and available capacity in each environment.
declare const computeEnvironment: batch.ComputeEnvironment;
const jobQueue = new batch.JobQueue(this, 'JobQueue', {
computeEnvironments: [
{
// Defines a collection of compute resources to handle assigned batch jobs
computeEnvironment,
// Order determines the allocation order for jobs (i.e. Lower means higher preference for job assignment)
order: 1,
},
],
});
Sometimes you might have jobs that are more important than others and, when submitted, should take precedence over existing jobs. To achieve this, you can create a priority-based execution strategy by assigning each queue its own priority:
declare const sharedComputeEnvs: batch.ComputeEnvironment;
const highPrioQueue = new batch.JobQueue(this, 'HighPrioJobQueue', {
computeEnvironments: [{
computeEnvironment: sharedComputeEnvs,
order: 1,
}],
priority: 2,
});
const lowPrioQueue = new batch.JobQueue(this, 'LowPrioJobQueue', {
computeEnvironments: [{
computeEnvironment: sharedComputeEnvs,
order: 1,
}],
priority: 1,
});
By using the same compute environments for both job queues, we give precedence to the highPrioQueue when assigning jobs to the available compute environments.
To import an existing batch job queue, call JobQueue.fromJobQueueArn(). Below is an example:
const jobQueue = batch.JobQueue.fromJobQueueArn(this, 'imported-job-queue', 'arn:aws:batch:us-east-1:555555555555:job-queue/High-Prio-Queue');
A Batch Job definition helps AWS Batch understand important details about how to run your application in the scope of a Batch Job. This involves key information like resource requirements, what containers to run, how the compute environment should be prepared, and more. Below is a simple example of how to create a job definition:
import * as ecr from '@aws-cdk/aws-ecr';
import * as ecs from '@aws-cdk/aws-ecs';
const repo = ecr.Repository.fromRepositoryName(this, 'batch-job-repo', 'todo-list');
new batch.JobDefinition(this, 'batch-job-def-from-ecr', {
container: {
image: new ecs.EcrImage(repo, 'latest'),
},
});
Below is an example of how you can create a Batch Job Definition from a local Docker application.
new batch.JobDefinition(this, 'batch-job-def-from-local', {
container: {
// todo-list is a directory containing a Dockerfile to build the application
image: ecs.ContainerImage.fromAsset('../todo-list'),
},
});
You can provide a custom log driver and its configuration for the container.
import * as ssm from '@aws-cdk/aws-ssm';
new batch.JobDefinition(this, 'job-def', {
container: {
image: ecs.ContainerImage.fromRegistry('docker/whalesay'),
logConfiguration: {
logDriver: batch.LogDriver.AWSLOGS,
options: { 'awslogs-region': 'us-east-1' },
secretOptions: [
batch.ExposedSecret.fromParametersStore('xyz', ssm.StringParameter.fromStringParameterName(this, 'parameter', 'xyz')),
],
},
},
});
To import an existing batch job definition from its ARN, call JobDefinition.fromJobDefinitionArn(). Below is an example:
const job = batch.JobDefinition.fromJobDefinitionArn(this, 'imported-job-definition', 'arn:aws:batch:us-east-1:555555555555:job-definition/my-job-definition');
To import an existing batch job definition from its name, call JobDefinition.fromJobDefinitionName(). If the name is specified without a revision, the latest active revision is used. Below is an example:
// Without revision
const job1 = batch.JobDefinition.fromJobDefinitionName(this, 'imported-job-definition', 'my-job-definition');
// With revision
const job2 = batch.JobDefinition.fromJobDefinitionName(this, 'imported-job-definition-with-revision', 'my-job-definition:3');