@cloudlessopenlabs/pulumix
Pulumi guide: To learn more about Pulumi, please refer to https://gist.github.com/nicolasdao/6cdd85d94b8ee992297d351c248f4092.

IAM roles & policies: Managing AWS resources almost always involves managing IAM roles and policies. For a quick recap on that topic, please refer to this document: https://gist.github.com/nicolasdao/6cdd85d94b8ee992297d351c248f4092#iam-recap.
(test -f .npmrc || echo @cloudlesslabs:registry=https://npm.pkg.github.com/cloudlesslabs >> .npmrc) && \
npm i @cloudlesslabs/pulumix
- Pulumi
	- Helper methods
	- Docker
	- NPM `package.json` scripts
	- Automation API
- AWS
	- AppSync
	- Aurora
	- ECR
	- EC2
	- EFS
	- Lambda
		- A few words about AWS Lambda
		- API Gateway with explicit Lambda handlers
		- Basic Lambda with an API Gateway
		- Configuring Cloudwatch
		- Configuring IAM policies to enable Lambda access to other resources
		- Letting other AWS services access a lambda
		- Scheduling a lambda
		- Lambda with container
		- Lambda with EFS
		- Lambda with Layers
		- Lambda versions and aliases
	- Policy
	- Role
	- S3
	- Secret
	- Security Group
	- SSM
	- Step-function
	- VPC
- GCP
- Troubleshooting
	- AWS
		- Terminal utilities are failing with `ETIMEDOUT` timeout errors
		- AWS Lambda cannot access the public internet
		- `failed to create '/home/sbx_userxxxx/.pulumi'`
		- `no resource plugin 'aws-v4.17.0' found in the workspace or on your $PATH`
		- AWS Lambda: `IMAGE Launch error: fork/exec /lambda-entrypoint.sh: exec format error`
- Annexes
- References
const yourStack = new pulumi.StackReference('your-stack-name')
The `yourStack` object is similar to this:
{
id: 'some-string',
name: 'some-string',
outputs: {
'aurora-endpoint': 'some-string',
'aurora-readonly-endpoint': 'some-string',
'instance-1-endpoint': 'some-string',
'private-bucket': 'some-string',
'public-file-bucket': 'some-string',
services: [
'some-string',
'some-string',
'some-string',
'some-string'
]
},
urn: 'some-string'
}
Outputs cannot be accessed directly. Instead, you must use the `getOutput` method:
const endpoint = yourStack.getOutput('aurora-endpoint')
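Since `getOutput` returns an `Output<T>`, its value must be consumed via `apply` (or the `resolve` helper shown further below), e.g.:

const endpoint = yourStack.getOutput('aurora-endpoint')
// 'apply' runs once the output's value has been resolved.
endpoint.apply(v => console.log(`Aurora endpoint: ${v}`))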
const pulumi = require('@pulumi/pulumi')
const aws = require('@pulumi/aws')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const REGION = aws.config.region
const ACCOUNT_ID = aws.config.allowedAccountIds[0]
Output<T>
To know more about the issue this helper fixes, please refer to this document: https://gist.github.com/nicolasdao/6cdd85d94b8ee992297d351c248f4092#the-outputt-type-the-pulumiinterpolate-and-apply-functions
const pulumi = require('@pulumi/pulumi')
/**
* Converts an Output<T> to a Promise<T>
*
 * @param {Output<T>|[Output<T>]} resource
 * @return {Promise<T>|Promise<[T]>}
*/
const resolve = resource => new Promise((next, fail) => {
	if (!resource)
		return next(resource)
try {
if (Array.isArray(resource)) {
if (resource.every(r => r.apply))
pulumi.all(resource).apply(data => next(data))
else
Promise.all(resource.map(r => resolve(r))).then(data => next(data)).catch(fail)
} else if (resource.apply)
resource.apply(data => next(data))
else
next(resource)
} catch(err) {
fail(err)
}
})
module.exports = {
resolve
}
Use this helper as follows:
const getAvailabilityZones = async () => {
const [subnetsA, subnetsB] = await resolve([vpc.publicSubnets, vpc.privateSubnets])
const subnets = [...subnetsA, ...subnetsB]
const azs = []
for (let i=0;i<subnets.length;i++) {
const subnet = subnets[i].subnet
const az = await resolve(subnet.availabilityZone)
if (azs.indexOf(az) < 0)
azs.push(az)
}
return azs
}
Please refer to the Google Cloud Run example.
The previous link shows how to pass environment variables to the container, which is the best practice when it comes to creating flexible and reusable Docker images. It is also better from a security standpoint, as baking secrets into an image could lead to them leaking. However, there are scenarios where the image must be configured with specific environment variables at build time. The following code snippet demonstrates how to leverage the native `--build-arg` option of the `docker build` command to achieve that:
const pulumi = require('@pulumi/pulumi')
const gcp = require('@pulumi/gcp')
const docker = require('@pulumi/docker')
const config = new pulumi.Config()
const gcpAccessToken = pulumi.output(gcp.organizations.getClientConfig({}).then(c => c.accessToken))
// Uploads new Docker image with your app to Google Cloud Container Registry (doc: https://www.pulumi.com/docs/reference/pkg/docker/image/)
const dockerImage = new docker.Image('your-image', {
imageName: pulumi.interpolate`gcr.io/${gcp.config.project}/your-app:v1`,
build: {
context: './app',
extraOptions: [
'--build-arg',
`DB_USER='${process.env.DB_USER}'`,
'--build-arg',
`DB_PASSWORD='${process.env.DB_PASSWORD}'`
]
},
registry: {
server: 'gcr.io',
username: 'oauth2accesstoken',
password: pulumi.interpolate`${gcpAccessToken}`
}
})
This method means that the `Dockerfile` must also define those variables:
FROM node:12-slim
ARG DB_USER
ARG DB_PASSWORD
# ...
`package.json` scripts:
{
"scripts": {
"up": "func() { pulumi up -s YOUR_ORG/$1 -y; }; func",
"prev": "func() { pulumi preview -s YOUR_ORG/$1; }; func",
"out": "func() { pulumi stack output -s YOUR_ORG/$1; }; func",
"refresh": "func() { pulumi refresh -s YOUR_ORG/$1 -y; }; func",
"blast": "func() { pulumi destroy -s YOUR_ORG/$1; }; func",
"clean": "func() { cp Pulumi.$1.yaml Pulumi.$1.backup.yaml; pulumi stack rm YOUR_ORG/$1; cp Pulumi.$1.backup.yaml Pulumi.$1.yaml; rm -rf Pulumi.$1.backup.yaml; }; func",
"import": "func() { pulumi stack export -s YOUR_ORG/$1 > stack.json; }; func",
"export": "func() { pulumi stack import -s YOUR_ORG/$1 --file stack.json; }; func"
}
}
NOTE: When your stack lives under an organization, the stack must be prefixed with your organization's name. In the samples above, replace `YOUR_ORG` with your organization's name. If you wish to use your default Pulumi account, then delete the `YOUR_ORG/` prefix.
- `npm run up dev`: Deploys the dev stack.
- `npm run prev dev`: Previews the dev stack.
- `npm run out dev`: Prints the dev stack's outputs.
- `npm run refresh dev`: Updates the Pulumi stack using the real stack as reference. Used to remove drift. This has no consequences on your physical files.
- `npm run blast dev`: Destroys the dev stack.
- `npm run clean dev`: Removes the dev stack (the script backs up and restores the stack's YAML config file).
- `npm run import dev`: Imports the Pulumi dev state into a local ./stack.json file. Use this to inspect all resources or to fix pending_operations issues.
- `npm run export dev`: Exports the local ./stack.json file to the Pulumi dev state.
{
"scripts": {
"id": "func() { aws ec2 describe-instances --filter \"Name=tag:Name,Values=your-project-name-$1\" --query \"Reservations[].Instances[?State.Name == 'running'].InstanceId[]\" --output text; }; func",
"conn": "func() { aws ssm start-session --target $(npm run id $1 | tail -1); }; func",
"ssh": "func(){ echo Forwarding traffic from local port $2 to $1 EC2 on port 22; aws ssm start-session --target $(npm run id $1 | tail -1) --document-name AWS-StartPortForwardingSession --parameters '{\"portNumber\":[\"22\"], \"localPortNumber\":[\"'$2'\"]}'; };func",
"rds": "func(){ aws rds describe-db-clusters --query 'DBClusters[].{DBClusterIdentifier:DBClusterIdentifier,Endpoint:Endpoint,ReaderEndpoint:ReaderEndpoint} | [?DBClusterIdentifier == `your-project-name'$1'`]' | grep -Eo '\"Endpoint\":\\s\"(.*?)\\.com' | cut -c 14-; };func"
}
}
- `npm run id dev`: Gets the EC2 instance ID.
- `npm run rds dev`: Gets the RDS endpoint.
- `npm run conn dev`: Connects to the EC2 instance via SSM session manager.
- `npm run ssh dev 9999`: Starts a port-forwarding session via SSM. Traffic sent to 127.0.0.1:9999 is forwarded to the EC2 on port 22.

The following example shows what a `Dockerfile` for an AWS Lambda would look like:
FROM amazon/aws-lambda-nodejs:14.2021.09.29.20
ARG FUNCTION_DIR="/var/task"
# Pulumi setup
## 1. Configure the Pulumi environment variables
ENV PULUMI_SKIP_UPDATE_CHECK true
ENV PULUMI_HOME "/tmp"
ENV PULUMI_CONFIG_PASSPHRASE "your-passphrase"
## 2. Install Pulumi dependencies
RUN yum install -y \
which \
tar \
gzip
## 3. Install Pulumi. All version at https://www.pulumi.com/docs/get-started/install/versions/
RUN curl -fsSL https://get.pulumi.com/ | bash -s -- --version 3.10.0 && \
mv ~/.pulumi/bin/* /usr/bin
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Install all dependencies
COPY package*.json ${FUNCTION_DIR}
RUN npm install --only=prod --prefix ${FUNCTION_DIR}
# Copy app files
COPY . ${FUNCTION_DIR}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "index.handler" ]
Notice:
- `PULUMI_SKIP_UPDATE_CHECK` must be set to true to prevent the pesky warnings about updating Pulumi to the latest version.
- `PULUMI_HOME` must be set to a folder where the Lambda has write access (by default, it only has write access to the /tmp folder; use EFS for more options). The default PULUMI_HOME value is `~`. Unfortunately, Lambdas don't have access to that folder. Not configuring the PULUMI_HOME variable results in a `failed to create '/home/sbx_userxxxx/.pulumi'` error message when the lambda executes the `pulumi login file:///tmp/` command. For a detailed example of what files are contained inside this folder, please refer to this document.
- `PULUMI_CONFIG_PASSPHRASE` must be set, even if you don't use secrets; otherwise, you'll receive a `passphrase must be set with PULUMI_CONFIG_PASSPHRASE or PULUMI_CONFIG_PASSPHRASE_FILE environment variables` error message during the `pulumi up` execution.
- `bash -s -- --version 3.10.0`: Use an explicit version to make sure Pulumi updates don't break your code.
- `mv ~/.pulumi/bin/* /usr/bin` moves the executable files to where the lambda can access them (i.e., /usr/bin).

Because Pulumi relies on the standard AWS SDK to access AWS's APIs, the appropriate policies must be set in your hosting environment. For example, in order to provision S3 buckets, the following policy must be attached:
const createBucketsPolicy = new aws.iam.Policy(`create-bucket`, {
path: '/',
description: 'Allows the creation of S3 buckets',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
's3:CreateBucket',
's3:Delete*',
's3:Get*',
's3:List*',
's3:Put*'
],
Resource: '*',
Effect: 'Allow'
}]
})
})
In your Lambda code, you can now use the Automation API, or call Pulumi via the `child_process` module (which is actually what the Automation API does):
const { automationApi, aws:{ s3 } } = require('@cloudlesslabs/pulumix')
const main = async () => {
const [errors, result] = await automationApi.up({
project: 'my-project-name',
provider: {
name:'aws',
version: '4.17.0' // IMPORTANT: This cannot be any version. Please refer to the note below.
},
stack: {
name: 'dev',
config: {
'aws:region': 'ap-southeast-2',
'aws:allowedAccountIds': [123456]
}
},
program: async () => {
const myBucket = await s3.bucket({
name:'my-unique-website-name',
website: {
indexDocument: 'index.html'
}
})
return myBucket
}
})
console.log(`Pulumi home dir: ${result.stack.workspace.pulumiHome}`)
console.log(`Pulumi work dir(contains checkpoints): ${result.stack.workspace.workDir}`)
console.log(`Pulumi output:`)
console.log(result.outputs.myBucket.value)
// Example
// {
// id: 'lu-20210922kogrikvuow',
// arn: 'arn:aws:s3:::lu-20210922kogrikvuow',
// bucket: 'lu-20210922kogrikvuow',
// bucketDomainName: 'lu-20210922kogrikvuow.s3.amazonaws.com',
// bucketRegionalDomainName: 'lu-20210922kogrikvuow.s3.ap-southeast-2.amazonaws.com',
// websiteDomain: 's3-website-ap-southeast-2.amazonaws.com',
// websiteEndpoint: 'lu-20210922kogrikvuow.s3-website-ap-southeast-2.amazonaws.com'
// }
}
console.log('RESULT')
console.log(result)
console.log('RESULT OUTPUTS')
console.log((result||{}).outputs)

// Clean Pulumi checkpoints
const workspace = ((result||{}).stack||{}).workspace||{}
const { pulumiHome, workDir } = workspace
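The fragment above stops short of the actual cleanup. A hedged completion (assuming the local checkpoint files under `workDir` can simply be deleted once the deployment is done):

const fs = require('fs')

// Removing the work dir deletes the local checkpoints created by the Automation API.
if (workDir)
	fs.rmSync(workDir, { recursive: true, force: true })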
IMPORTANT: The `provider.version` is required and is tied to the Pulumi version you're using (`3.10.0` in this example). Configuring the wrong AWS version will throw an error similar to `no resource plugin 'aws-v4.17.0' found in the workspace or on your $PATH`. To know which AWS version to use, set one up, deploy, and check the error message.
The following example provisions a Lambda resolver on the `products` field of the `Query` type. That lambda will receive the following payload:
/**
* Processes the GraphQL request.
*
* @param {Object} event
* @param {Object} ...rest Depends on the value of 'mappingTemplate.payload'
* @param {Object} .args Arguments, e.g., { where: { id:1 , name:'jeans' }, limit:20 }
* @param {Object} .identity Identity object. It depends on the authentication method. It will typically contain claims.
* @param {Object} .source GraphQL response object from parent.
*
* @return {Object}
*/
exports.handler = async event => {
const { field, hello, ...rest } = event
const { source, args, identity } = rest
console.log('FIELD CONTROLLED VIA THE mappingTemplate.payload')
console.log({
field,
hello
})
console.log('RESERVED FIELDS')
console.log({
source, // GraphQL response object from a parent.
args, // Arguments. In the example below { id:1, name:'jeans' }
identity // Identity object. It depends on the authentication method. It will typically contain claims.
})
}
To learn more about the identity
object, please refer to the Cognito $context.identity
object example.
const pulumi = require('@pulumi/pulumi')
const { resolve, aws: { appSync } } = require('@cloudlesslabs/pulumix')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const PRODUCT_STACK = `your-product-stack/${ENV}`
const productStack = new pulumi.StackReference(PRODUCT_STACK)
const productApi = productStack.getOutput('lambda')
const main = async () => {
const tags = {
Project: PROJ,
Env: ENV
}
const productLambda = await resolve(productApi.lambda)
const schema = `
type Product {
id: ID!
name: String
}
type User {
id: ID!
}
type Query {
products(id: Int, name: String): [Product]
users: [User]
}
schema {
query: Query
}`
// Create the GraphQL API with its Schema.
const graphql = await appSync.api({
name: PROJECT,
description: `Lineup ${ENV} GraphQL API`,
schema,
resolver: {
// All the lambdas used as data sources must be listed here
// in order to configure access from this GraphQL API.
lambdaArns:[productLambda.arn]
},
cloudwatch: true,
tags
})
// Create a data source to retrieve and store data.
const dataSource = await appSync.dataSource({
name: PROJECT,
api: {
id: graphql.api.id,
roleArn: graphql.roleArn
},
functionArn: productLambda.arn,
tags
})
// Create a VTL resolver that can bridge between a field and data source.
const productResolver = await appSync.resolver({
name: `${PROJECT}-resolver-product`,
api:{
id: graphql.api.id,
roleArn: graphql.roleArn
},
type: 'Query',
field: 'products',
mappingTemplate:{
payload: {
field: 'products',
hello: 'world'
}
},
dataSource,
tags
})
return {
graphql,
dataSource,
resolvers: {
productResolver
}
}
}
module.exports = main()
NOTE: The sample above is similar to:
const graphql = await appSync.api({
// ...
authConfig: {
apiKey: true
}
})
Because AppSync resolvers that use a Lambda data source can be straightforward (most of the time, they're just a pass-through to the lambda), we've created a `lambdaResolvers` helper method which creates a single data source for that lambda and then uses GraphQL schema inspection to isolate the fields for which resolvers must be created to route HTTP requests to that Lambda data source.
const pulumi = require('@pulumi/pulumi')
const { resolve, aws: { appSync } } = require('@cloudlesslabs/pulumix')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const PRODUCT_STACK = `your-product-stack/${ENV}`
const productStack = new pulumi.StackReference(PRODUCT_STACK)
const productApi = productStack.getOutput('lambda')
const main = async () => {
const tags = {
Project: PROJ,
Env: ENV
}
const productLambda = await resolve(productApi.lambda)
const schema = `
type Product {
id: ID!
name: String
}
type User {
id: ID!
}
type Query {
products(id: Int, name: String): [Product]
users: [User]
}
schema {
query: Query
}`
// Create the GraphQL API with its Schema.
const graphql = await appSync.api({
name: PROJECT,
description: `Lineup ${ENV} GraphQL API`,
schema,
resolver: {
// All the lambdas used as data sources must be listed here
// in order to configure access from this GraphQL API.
lambdaArns:[productLambda.arn]
},
cloudwatch: true,
tags
})
// Create a single data source using the 'functionArn' value and then create as many resolvers as
// there are fields in the 'Query' type.
const { dataSource, resolvers } = await appSync.lambdaResolvers({
name: PROJECT,
api: {
id: graphql.api.id,
roleArn: graphql.roleArn
},
schema: {
value: schema,
includes:['Query'] // This means resolvers for all the `Query` fields will be created.
},
functionArn: productLambda.arn,
tags
})
return {
graphql,
productAPI: {
dataSource,
resolvers
}
}
}
module.exports = main()
Use the `authConfig` property. For example, Cognito:
const graphql = await appSync.api({
name: 'my-api',
description: `My GraphQL API`,
schema:`
schema {
query: Query
}
type Product {
id: ID!
name: String
}
type User {
id: ID!
}
type Query {
products: [Product]
users: [User]
}`,
resolver: {
lambdaArns:[productLambda.arn]
},
authConfig: {
cognito: {
userPoolId: '1234',
awsRegion: 'ap-southeast-2'
}
},
cloudwatch: true,
tags
})
IAM `authConfig`:
{
iam: true
}
Cognito `authConfig`:
{
cognito: {
userPoolId: '1234', // Required
awsRegion: 'ap-southeast-2', // Required
// appIdClientRegex: '^my-app.*', // Optional
// defaultAction: 'DENY' // Default is 'ALLOW'. Allowed values: 'DENY', 'ALLOW'
}
}
The `$context.identity` object

This object is both accessible in the VTL mapping template and passed to the Lambda under the `event.identity` property. It is similar to this sample:
{
claims: {
sub: '3c5b5034-1975-4889-a839-d43a7e0fbc48',
iss: 'https://cognito-idp.ap-southeast-2.amazonaws.com/ap-southeast-2_k63pzVJgQ',
version: 2,
client_id: '7n06fpr1t4ntm1hofbh8bnhp96',
origin_jti: '84c72cd1-eaad-40e5-a98f-9d7cd7a586cd',
event_id: 'c95393c0-bab7-40a8-b9e9-48e17b8d23fd',
token_use: 'access',
scope: 'phone openid profile email',
auth_time: 1634788385,
exp: 1634791985,
iat: 1634788385,
jti: 'ade2fe51-4b56-4a8f-9d9f-a9f3d03fd0aa',
username: '3c5b5034-1975-4889-a839-d43a7e0fbc48'
},
defaultAuthStrategy: 'ALLOW',
groups: null,
issuer: 'https://cognito-idp.ap-southeast-2.amazonaws.com/ap-southeast-2_k63pzVJgQ',
sourceIp: [ '49.179.157.39' ],
sub: '3c5b5034-1975-4889-a839-d43a7e0fbc48',
username: '3c5b5034-1975-4889-a839-d43a7e0fbc48'
}
OIDC `authConfig`:
{
oidc: {
issuer: 'dewd',
clientId: '1121321',
authTtl: '60000', // 60,000 ms (1 min)
iatTtl: '60000' // 60,000 ms (1 min)
}
}
WARNING: If both an Aurora cluster and an RDS proxy are provisioned at the same time, the initial `pulumi up` will probably fail with the following error:

`registering RDS DB Proxy (xxxxxx/default) Target: InvalidDBInstanceState: DB Instance xxxxxxxxxx is in an unsupported state - CONFIGURING_LOG_EXPORTS, needs to be in [AVAILABLE, MODIFYING, BACKING_UP]`

This is because the RDS target can only be created with DB instances that are running. Because the initial setup takes time, the DB instance won't be running by the time the RDS target creation process starts. There is no option other than to wait and run `pulumi up` again later.
WARNING: Once the `masterUsername` is set, it cannot be changed. Attempting to change it will trigger a delete-and-replace operation, which is obviously not what you want.
const { aws:{ rds:{ aurora } } } = require('@cloudlesslabs/pulumix')
const auroraOutput = aurora({
name: 'my-db',
engine: 'mysql',
availabilityZones: ['ap-southeast-2a', 'ap-southeast-2b', 'ap-southeast-2c'],
backupRetentionPeriod: 30, // 30 days
auth: {
masterUsername: process.env.DB_USERNAME,
masterPassword: process.env.DB_PASSWORD,
},
instanceNbr: 1,
instanceSize: 'db.t2.small',
vpcId: 'vpc-1234',
subnetIds: ['subnet-1234', 'subnet-4567'],
ingress:[
{ protocol: 'tcp', fromPort: 3306, toPort: 3306, cidrBlocks: ['10.0.1.204/32'], description:`Bastion host access` }
],
protect:false,
publicAccess:false,
tags: {
Project:'my-project',
Env: 'dev'
}
})
Notice that we're adding an `ingress` rule that gives access to an EC2 instance. In practice, create a dedicated security group that can access the RDS cluster, then add this SG to any system that needs access, as sketched below.
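A minimal sketch of that approach (assuming the `securityGroup.sg` helper documented later in this README; `vpc-1234` is a placeholder):

const { aws: { securityGroup } } = require('@cloudlesslabs/pulumix')

const main = async () => {
	// Dedicated SG: anything carrying this SG is allowed to reach the RDS cluster.
	const { securityGroup: accessRdsSecurityGroup } = await securityGroup.sg({
		name: 'my-project-access-rds',
		description: 'Grants access to the RDS cluster.',
		vpcId: 'vpc-1234',
		tags: { Project: 'my-project' }
	})
	// In the aurora input, reference the SG instead of hardcoding CIDR blocks
	// (the same 'securityGroups' ingress style is used in the EFS example below):
	// ingress: [{ protocol: 'tcp', fromPort: 3306, toPort: 3306, securityGroups: [accessRdsSecurityGroup.id], description: 'RDS access SG' }]
	return accessRdsSecurityGroup
}

module.exports = main()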
Use the `ec2` function described in the EC2 with SSM section and the `aurora` function described in the RDS Aurora section. The important bit in the next sample is the aurora `ingress`, which allows the bastion to access Aurora:
ingress:[
{ protocol: 'tcp', fromPort: 3306, toPort: 3306, cidrBlocks: [pulumi.interpolate`${bastionOutput.privateIp}/32`], description:`Bastion host ${ec2Name} access` }
]
const { aws:{ ec2, rds:{ aurora } } } = require('@cloudlesslabs/pulumix')
// Bastion server
const ec2Name = `${PROJECT}-rds-bastion`
const { ami, instanceType } = config.requireObject('bastion')
const bastionOutput = ec2({
name: ec2Name,
ami,
instanceType,
availabilityZone: vpc.availabilityZones[0],
subnetId: vpc.publicSubnetIds[0],
publicKey,
toggleSSM: true,
ssmVpcId:vpc.id,
ssmVpcSecurityGroupId: vpc.defaultSecurityGroupId,
tags
})
// Aurora
const { backupRetentionPeriod, instanceSize, instanceNbr } = config.requireObject('aurora')
const auroraOutput = aurora({
name: PROJECT,
engine: 'mysql',
availabilityZones: vpc.availabilityZones,
backupRetentionPeriod,
auth: {
masterUsername: process.env.DB_USERNAME,
masterPassword: process.env.DB_PASSWORD,
},
instanceNbr,
instanceSize,
vpcId:vpc.id,
subnetIds: vpc.isolatedSubnetIds,
ingress:[
{ protocol: 'tcp', fromPort: 3306, toPort: 3306, cidrBlocks: [pulumi.interpolate`${bastionOutput.privateIp}/32`], description:`Bastion host ${ec2Name} access` }
],
protect:false,
publicAccess:false,
tags
})
The basic setup consists of adding the proxy's `ingress` rules. You may want to create a dedicated security group that can access the RDS proxy; this way, you can simply add this SG to any resource you wish to grant access to the proxy rather than having to add those resources to the ingress list.

WARNING: If both an Aurora cluster and an RDS proxy are provisioned at the same time, the initial `pulumi up` will probably fail with the following error:

`registering RDS DB Proxy (xxxxxx/default) Target: InvalidDBInstanceState: DB Instance xxxxxxxxxx is in an unsupported state - CONFIGURING_LOG_EXPORTS, needs to be in [AVAILABLE, MODIFYING, BACKING_UP]`

This is because the RDS target can only be created with DB instances that are running. Because the initial setup takes time, the DB instance won't be running by the time the RDS target creation process starts. There is no option other than to wait and run `pulumi up` again later.
Use the `proxy` property. When this feature is enabled, an additional security group is created for the RDS proxy.
const auroraOutput = aurora({
name: 'my-db',
engine: 'mysql',
availabilityZones: ['ap-southeast-2a', 'ap-southeast-2b', 'ap-southeast-2c'],
backupRetentionPeriod: 30, // 30 days
auth: {
masterUsername: process.env.DB_USERNAME,
masterPassword: process.env.DB_PASSWORD,
},
instanceNbr: 1,
instanceSize: 'db.t2.small',
vpcId: 'vpc-1234',
subnetIds: ['subnet-1234', 'subnet-4567'],
ingress:[
{ protocol: 'tcp', fromPort: 3306, toPort: 3306, cidrBlocks: ['10.0.1.204/32'], description:`Bastion host access` }
],
proxy: true
})
To configure it in greater detail, use an object instead:
{
proxy: {
enabled: true, // Default true.
subnetIds: null, // Default is the RDS's subnetIds.
logSQLqueries: false, // Default false
idleClientTimeout: 1800, // Default 1800 seconds
requireTls: true, // Default true.
iam: false // Default false. If true, the RDS credentials are disabled and the only way to connect is via IAM.
}
}
By default, all the `ingress` rules apply identically to both RDS and the RDS proxy. The first example above is equivalent to this:
{
ingress:[
{
protocol: 'tcp',
fromPort: 3306,
toPort: 3306,
cidrBlocks: ['10.0.1.204/32'],
description:`Bastion host access`,
rds: true,
proxy: true
}
],
}
To create ingress rules that are specific to RDS or the RDS proxy, set the `rds` or `proxy` flag on each rule, as shown below.
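A hypothetical sketch (the CIDR values are placeholders): the first rule only opens the RDS cluster, the second only opens the proxy:

{
	ingress:[
		// Only applies to the RDS cluster.
		{ protocol: 'tcp', fromPort: 3306, toPort: 3306, cidrBlocks: ['10.0.1.204/32'], description: 'Bastion to RDS only', rds: true },
		// Only applies to the RDS proxy.
		{ protocol: 'tcp', fromPort: 3306, toPort: 3306, cidrBlocks: ['10.0.2.0/24'], description: 'App subnet to proxy only', proxy: true }
	]
}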
To connect via IAM (which is the only option when the `iam` flag is turned on), you must add these additional steps to your client configuration:

1. Attach a policy allowing the `rds-db:connect` action to your resource's IAM role (see the `createConnectPolicy` helper below).
2. Generate a short-lived password with the RDS signer:

const AWS = require('aws-sdk')
const config = {
region: 'ap-southeast-2',
hostname: 'my-project.proxy-12345.ap-southeast-2.rds.amazonaws.com',
port: 3306,
username: 'admin'
}
const signer = new AWS.RDS.Signer(config)
signer.getAuthToken({ username:config.username }, (err, password) => {
if (err)
console.log(`Something went wrong: ${err.stack}`)
else
console.log(`Great! the password is: ${password}`)
})
To integrate this signer with the mysql2
package:
const mysql = require('mysql2/promise')
const db = mysql.createPool({
host: 'my-project.proxy-12345.ap-southeast-2.rds.amazonaws.com', // can also be an IP
user: 'admin',
ssl: { rejectUnauthorized: false},
database: 'my-db-name',
multipleStatements: true,
waitForConnections: true,
connectionLimit: 2, // connection pool size
queueLimit: 0,
timezone: '+00:00', // UTC
authPlugins: {
mysql_clear_password: () => () => {
return signer.getAuthToken({ username:'admin' })
}
}
})
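A quick sanity check of the pool (a sketch, to be run inside an async function; it assumes the IAM token, network path and DB user are all valid):

const [rows] = await db.query('SELECT 1 AS ok')
console.log(rows) // [ { ok: 1 } ]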
To create a policy that allows the `rds-db:connect` action on the IAM role:

const { aws:{ lambda, rds:{ policy: { createConnectPolicy } } } } = require('@cloudlesslabs/pulumix')
const rdsAccessPolicy = createConnectPolicy({ name:`my-project-access-rds`, rdsArn:proxy.arn })
const lambdaOutput = await lambda.fn({
//...
policies:[rdsAccessPolicy],
//...
})
`createConnectPolicy` accepts the following input:

- `rdsArn`: Required. Examples: `arn:aws:rds:ap-southeast-2:1234:db-proxy:prx-123`, `arn:aws:rds:ap-southeast-2:1234:cluster:blabla` or `arn:aws:rds:ap-southeast-2:1234:db:blibli`.
- `resourceId`: Optional. Default: resource name (1).
- `username`: Optional. Default `*`. Other examples: 'mark', 'peter'.

Only the RDS proxy embeds its resource ID in its ARN. This means that the `resourceId` should not be provided when the `rdsArn` is an RDS proxy. For all the other RDS resources (clusters and instances), the `resourceId` is required. For an Aurora cluster, this resource is called `clusterResourceId`, while for an instance, it is called `dbiResourceId`.
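For example, a sketch for an Aurora cluster (the ARN and `clusterResourceId` values are placeholders):

const clusterConnectPolicy = createConnectPolicy({
	name: 'my-project-access-aurora',
	rdsArn: 'arn:aws:rds:ap-southeast-2:1234:cluster:blabla',
	resourceId: 'cluster-ABCDEFGHIJ', // Placeholder: the Aurora cluster's 'clusterResourceId'
	username: 'mark'
})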
For more details around creating this policy, please refer to this article Creating and using an IAM policy for IAM database access
This section is not about the code sample (which is trivial and added below), but about the approach. It is NOT RECOMMENDED to use Pulumi to provision a secret in AWS Secrets Manager and then use it directly in Aurora. The reason is that on each `pulumi up`, there is a risk of updating the DB credentials, which could break clients relying on your DB.

Instead, you should create the secret separately and reference it by name, ID or ARN:
const auroraOutput = aurora({
...
auth: {
secretId: 'my-db-creds-dev' // This can be the secret's name, id or arn.
},
...
})
(1) For example `my-db-creds-<STACKNAME>` (e.g., `my-db-creds-dev`).
The next sample shows how to provision an EC2 bastion host secured via SSM in a private subnet. A private subnet does not need a NAT gateway to work with SSM, but in this example one is required in order to run the `EC2_SHELL` script, which needs internet access to install telnet (this is just an example; in theory, you would use SSM to install telnet, which would remove the need for this userData script and therefore also the need for a NAT gateway).

Also, notice that we are passing the RSA public key to this instance. This sets up the RSA key for the `ec2-user` SSH user. The RSA private key is intended to be shared with any engineer that needs to establish a secured SSH tunnel between their local machine and this bastion host. Private RSA keys are usually not supposed to be shared lightly, but in this case, the security and accesses are managed by SSM, which relaxes the restrictions around sharing the RSA private key. For more details about SSH tunneling with SSM, please refer to this document: https://gist.github.com/nicolasdao/4808f0a1e5e50fdd29ede50d2e56024d#ssh-tunnel-to-private-rds-instances.
const { aws: { ec2 } } = require('@cloudlesslabs/pulumix')
const EC2_SHELL = `#!/bin/bash
set -ex
cd /tmp
sudo yum install -y telnet`
const EC2_RSA_PUBLIC_KEY = 'ssh-rsa AAAA...' // You'll give the private key to your dev so they use it to connect
const ec2Output = ec2.instance({
name: 'my-ec2-machine',
ami: 'ami-02dc2e45afd1dc0db', // That's Amazon Linux 2 for 64-bits ARM which comes pre-installed with the SSM agent.
instanceType: 't4g.nano', // EC2 ARM graviton 2
availabilityZone: 'ap-southeast-2a', // Tip: Use `npx get-regions` to find an AZ.
subnetId: privateSubnetId,
userData: EC2_SHELL,
publicKey:EC2_RSA_PUBLIC_KEY,
ssm: { // Toggles SSM
vpcId:vpc.id,
vpcDefaultSecurityGroupId: vpc.vpc.defaultSecurityGroupId
},
tags: {
Project: 'my-cool-project',
Env: 'dev'
}
})
const awsx = require('@pulumi/awsx')
const path = require('path')
// ECR images. Doc:
// - buildAndPushImage API: https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/awsx/ecr/#buildAndPushImage
// - 2nd argument is a DockerBuild object: https://www.pulumi.com/docs/reference/pkg/docker/image/#dockerbuild
const image = awsx.ecr.buildAndPushImage('my-image-name', {
context: path.resolve('../app'),
args:{
SOME_ARG: 'hello'
},
tags: {
Name: 'my-image-name'
}
})
Where `args` is what is passed to the `--build-arg` option of the `docker build` command.

The URL for this new image is inside the `image.imageValue` property, which can be fed straight into other resources, as sketched below.
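A hedged sketch (not from this README) of consuming `imageValue` in a container-based Lambda via the standard Pulumi AWS API (`lambdaRole` is an assumed, pre-existing aws.iam.Role):

const aws = require('@pulumi/aws')

// Container-based Lambda pointing at the image pushed above.
const fn = new aws.lambda.Function('my-containerized-fn', {
	packageType: 'Image', // Required for container-based Lambdas.
	imageUri: image.imageValue, // The URI produced by buildAndPushImage.
	role: lambdaRole.arn, // Assumption: a role with the Lambda assume-role policy.
	timeout: 30,
	memorySize: 128
})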
const { aws:{ ecr } } = require('@cloudlesslabs/pulumix')
const myImage = await ecr.image({
name: 'my-image',
tag: 'v2',
dir: path.resolve('./app')
})
Where `myImage` is structured as follows:

- `myImage.imageValues`: Contains the values you can use in the `FROM` directive of another Dockerfile (e.g., `FROM 12345.dkr.ecr.ap-southeast-2.amazonaws.com/my-image:v2`). If the `tag` property is set, this array contains two values: the first item is tagged with the `tag` value, and the second is tagged with `<tag>-<SHA-digest>`. If the `tag` is not set, this array contains only one item, tagged with the SHA-digest.
- `myImage.repository`: Output object with the repository's details.
- `myImage.lifecyclePolicy`: Output object with the lifecycle policy.

const myImage = await ecr.image({
name: 'my-image',
tag: 'v3',
dir: path.resolve('./app'),
args: {
DB_USER: '1234',
DB_PASSWORD: '4567'
},
imageTagMutable: false, // the default is true
lifecyclePolicies:[{
description: 'Only keep up to 50 tagged images',
tagPrefixList:['v'],
countNumber: 50
}],
tags: {
Project: 'my-cool-project',
Env: 'prod',
Name: 'my-image'
}
})
NOTICE: When `imageTagMutable` is set to false, each tagged version becomes immutable, which means your deployment will fail if you push a tag that already exists.
By default, repositories are private. To make them public, use:
const myImage = await ecr.image({
name: 'my-image',
tag: 'v3',
dir: path.resolve('./app'),
args: {
DB_USER: '1234',
DB_PASSWORD: '4567'
},
imageTagMutable: false, // the default is true
lifecyclePolicies:[{
description: 'Only keep up to 50 tagged images',
tagPrefixList:['v'],
countNumber: 50
}],
publicConfig: {
aboutText: 'This is a public repo',
description: 'This is a public repo',
usageText: 'Use it as follow...',
architectures: ['ARM', 'ARM 64', 'x86', 'x86-64'],
operatingSystems: ['Linux']
},
tags: {
Project: 'my-cool-project',
Env: 'prod',
Name: 'my-image'
}
})
const pulumi = require('@pulumi/pulumi')
const { aws:{ securityGroup, vpc, lambda, efs } } = require('@cloudlesslabs/pulumix')
const { resolve } = require('path')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const tags = {
Project: PROJ,
Env: ENV
}
const main = async () => {
// VPC with a public subnet and an isolated subnet (i.e., private with no NAT)
const vpcOutput = await vpc({
name: PROJECT,
subnets: [{ type: 'public' }, { type: 'isolated', name: 'efs' }],
numberOfAvailabilityZones: 3,
protect: true,
tags
})
// Security group that can access EFS
const { securityGroup:accessToEfsSecurityGroup } = await securityGroup.sg({
name: `${PROJECT}-access-efs`,
description: `Access to the EFS filesystem ${PROJECT}.`,
egress: [{
protocol: '-1',
fromPort: 0,
toPort: 65535,
cidrBlocks: ['0.0.0.0/0'],
ipv6CidrBlocks: ['::/0'],
description:'Allows to respond to all traffic'
}],
vpcId: vpcOutput.id,
tags
})
// EFS
const efsOutput = await efs({
name: PROJECT,
accessPointDir: '/projects',
vpcId: vpcOutput.id,
subnetIds: vpcOutput.isolatedSubnetIds,
ingress:[{
// Allows traffic from resources with the 'accessToEfsSecurityGroup' SG.
protocol: 'tcp', fromPort: 2049, toPort: 2049, securityGroups: [accessToEfsSecurityGroup.id], description: 'SG for NFS access to EFS'
}],
protect: true,
tags
})
// Lambda
const lambdaOutput = await lambda.fn({
name: PROJECT,
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
timeout: 30,
vpcConfig: {
subnetIds: vpcOutput.isolatedSubnetIds,
securityGroupIds:[
// Use the 'accessToEfsSecurityGroup' so that this lambda can access the EFS filesystem.
accessToEfsSecurityGroup.id
],
enableENIcreation: true
},
fileSystemConfig: {
arn: efsOutput.accessPoint.arn,
localMountPath: '/mnt/somefolder'
},
cloudwatch: true,
logsRetentionInDays: 7,
tags
})
return {
vpc: vpcOutput,
accessToEfsSecurityGroup,
efs: efsOutput,
lambda: lambdaOutput
}
}
module.exports = main()
It is important to know the key design principles behind AWS Lambdas before using them. Please refer to this document for a quick refresher course: https://gist.github.com/nicolasdao/e72beb55f3550351e777a4a52d18f0be#a-few-words-about-aws-lambda
As of September 29, 2021, ARM-based lambdas are powered by the AWS Graviton2 processor. This results in a significantly better performance/price ratio.
This is why `@cloudlesslabs/pulumix` uses the `arm64` architecture as default rather than `x86_64` (which is the normal AWS SDK and Pulumi default). This configuration can be changed via the `architecture` property:
const { resolve } = require('path')
const { aws:{ lambda } } = require('@cloudlesslabs/pulumix')
lambda.fn({
name: 'my-lambda',
architecture: 'x86_64', // Default is 'arm64'
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
}
})
IMPORTANT: When using Docker, please make sure that your image uses the same architecture (i.e., `x86_64` vs `arm64`) as your Lambda OS. DO NOT USE something like `FROM amazon/aws-lambda-nodejs:14`, as this is equivalent to the latest digest, and who knows what architecture the latest digest uses. Instead, browse the Docker Hub registry and find the tag that explicitly supports your OS architecture. For example, `FROM amazon/aws-lambda-nodejs:14.2021.09.29.20` uses `linux/arm64` while `14.2021.10.14.13` uses `linux/amd64`.
const { resolve } = require('path')
const { aws:{ lambda } } = require('@cloudlesslabs/pulumix')
lambda.fn({
name: 'my-lambda',
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
timeout: 30, // Optional. Default 3 seconds.
memorySize: 128, // Optional. Default 128MB
cloudwatch: true, // Optional. Default false.
logsRetentionInDays: 7, // Optional. The default is 0 (i.e., never expires).
policies: [somePolicy], // Optional. Default null.
tags: { // Optional.
Project: 'my-project',
Env: 'dev'
}
}).then(output => {
console.log(output.lambda)
console.log(output.role)
console.log(output.logGroup)
})
const pulumi = require('@pulumi/pulumi')
const aws = require('@pulumi/aws')
const awsx = require('@pulumi/awsx')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const api = new awsx.apigateway.API(PROJECT, {
routes: [
{
method: 'GET',
path: '/{subFolder}/{subSubFolders+}',
eventHandler: async ev => {
return {
statusCode: 200,
body: JSON.stringify({
subFolder: ev.pathParameters.subFolder,
subSubFolders: ev.pathParameters.subSubFolders
})
}
}
}
],
})
exports.url = api.url
CloudWatch is automatically configured for each Lambda provisioned via each route.
This next sample is more explicit than the previous example. It assumes that the root folder contains an app/
folder which contains the actual NodeJS lambda code:
app/
|__ src/
|__ index.js
|__ index.js
|__ package.json
The `package.json` is not always required. If your `index.js` is simple and does not contain external NodeJS dependencies, then the `index.js` alone will suffice.
Where `./index.js` is similar to:
const { doSomething } = require('./src')
exports.handler = async ev => {
const message = await doSomething()
return {
statusCode: 200,
body: message
}
}
// https://www.pulumi.com/docs/reference/pkg/aws/lambda/function/
const pulumi = require('@pulumi/pulumi')
const aws = require('@pulumi/aws')
const awsx = require('@pulumi/awsx')
const { resolve } = require('path')
const { aws:{ lambda } } = require('@cloudlesslabs/pulumix')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const REGION = aws.config.region
const tags = {
Project: PROJ,
Env: ENV,
Region: REGION
}
const main = async () => {
const lambdaOutput = await lambda.fn({
name: PROJECT,
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
timeout:30,
memorySize:128,
tags
})
// API GATEWAY: https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/awsx/apigateway/
const api = new awsx.apigateway.API(PROJECT, {
routes: [
{
method: 'GET',
path: '/{subFolder}/{subSubFolders+}',
eventHandler: lambdaOutput.lambda
}
]
})
return api.url
}
module.exports = main()
Cloudwatch could be set up via policies as explained in the next section, but because this setup is common, we've added support for it via the Lambda API:
const { aws:{ lambda } } = require('@cloudlesslabs/pulumix')
const lambdaOutput = await lambda.fn({
// ...
cloudwatch: true,
logsRetentionInDays: 7 // This is optional. The default is 0 (i.e., never expires).
})
Tips:
- Inspect AWS managed policies to see how their statements are structured. You can easily do this with `npx get-policies`.
- To find the right action, use this link: https://iam.cloudonaut.io/
- Please refer to the Annexes in the Policies examples section for common examples.
To illustrate this topic, let's see how we could configure CloudWatch so the Lambda can send its logs to a log group. To enable this setup, we need to create a new policy that allows the creation of log groups, log streams and log events, and associate that policy with the Lambda's role.
// IAM: Allow lambda to create log groups, log streams and log events.
// Doc: https://www.pulumi.com/docs/reference/pkg/aws/iam/policy/
const cloudWatchPolicy = new aws.iam.Policy(PROJECT, {
path: '/',
description: 'IAM policy for logging from a lambda',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'logs:CreateLogGroup',
'logs:CreateLogStream',
'logs:PutLogEvents'
],
Resource: 'arn:aws:logs:*:*:*',
Effect: 'Allow'
}]
})
})
const lambdaOutput = await lambda.fn({
name: PROJECT,
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
timeout:30,
memorySize:128,
policies: [cloudWatchPolicy],
tags
})
TIPS: Leverage existing AWS managed policies instead of creating your own each time (use `npx get-policies` to find them). This example could be re-written as follows:

const lambdaOutput = await lambda.fn({
	name: PROJECT,
	fn: {
		runtime: 'nodejs12.x',
		dir: resolve('./app')
	},
	timeout: 30,
	memorySize: 128,
	policies: [{ arn: 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole' }],
	tags
})
Because enabling CloudWatch on a Lambda is so common, this policy can be automatically toggled as follow:
const lambdaOutput = await lambda.fn({
	// ...
	cloudwatch: true,
	logsRetentionInDays: 7 // This is optional. The default is 0 (i.e., never expires).
})
For God knows what reason, not all services can invoke AWS Lambdas via the standard Identity-based policies strategy. That's why it is recommended to use the Resource-based policies strategy instead via the Pulumi aws.lambda.Permission
API. For example, this is how you would allow AWS Cognito to invoke a lambda:
new aws.lambda.Permission(name, {
action: 'lambda:InvokeFunction',
function: lambda.name,
principal: 'cognito-idp.amazonaws.com',
sourceArn: userPool.arn
})
To easily find the principal's name, use the command `npx get-principals`.
const { aws:{ lambda } } = require('@cloudlesslabs/pulumix')
const { resolve } = require('path')
const lambdaOutput = await lambda.fn({
name: 'my-example',
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
schedule: {
expression: 'rate(1 minute)'
}
})
To learn more about the `expression` syntax, please refer to the official AWS doc at https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html.
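Besides `rate(...)`, scheduled events also accept a cron expression, e.g.:

schedule: {
	expression: 'cron(0 12 * * ? *)' // Every day at 12:00pm UTC
}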
By default, the event object sent to the Lambda is similar to this:
{
version: '0',
id: 'cee5b84f-57b6-c60b-2c8c-9e1867b7e9ac',
'detail-type': 'Scheduled Event',
source: 'aws.events',
account: '12345677',
time: '2022-01-27T02:18:59Z',
region: 'ap-southeast-2',
resources: [
'arn:aws:events:ap-southeast-2:12345677:rule/some-event-name'
],
detail: {}
}
This object can be fully replaced with your own via the optional schedule.payload
property:
const lambdaOutput = await lambda.fn({
name: 'my-example',
fn: {
runtime: 'nodejs12.x',
dir: resolve('./app')
},
schedule: {
expression: 'rate(1 minute)',
payload: {
hello: 'World'
}
}
})
WARNING: You must make sure that the Docker image is compatible with the Lambda architecture (i.e., x86_64 vs arm64). For a list of all the AWS lambda images with their associated OS, please refer to https://hub.docker.com/r/amazon/aws-lambda-nodejs/tags?page=1&ordering=last_updated.
1. Create an `app` folder as follows:

mkdir app && \
cd app && \
touch index.js && \
touch Dockerfile
2. Create the `Dockerfile`:

FROM amazon/aws-lambda-nodejs:14.2021.09.29.20
ARG FUNCTION_DIR="/var/task"
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy handler function and package.json
COPY index.js ${FUNCTION_DIR}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "index.handler" ]
To see how to deal with `npm install`, please refer to https://gist.github.com/nicolasdao/f440e76b8fd748d84ad3b9ca7cf5fd12#the-instructions-order-in-your-dockerfile-matters-for-performance. More about this AWS image below (1).
3. Create the `index.js`:

// IMPORTANT: IT MUST BE AN ASYNC FUNCTION OR THE CALLBACK VERSION: (event, context, callback) => callback(null, { statusCode:200, body: 'Hello' })
exports.handler = async event => {
return {
statusCode: 200,
body: `Hello world!`
}
}
4. Build and test the image locally:

docker build -t my-app .
docker run -p 127.0.0.1:4000:8080 my-app:latest
curl -XPOST "http://localhost:4000/2015-03-31/functions/function/invocations" -d '{}'
More details about these commands below (2).
5. Create the Pulumi `index.js`:

const pulumi = require('@pulumi/pulumi')
const { resolve } = require('path')
const { aws:{ lambda } } = require('@cloudlesslabs/pulumix')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const main = async () => {
	const lambdaOutput = await lambda.fn({
		name: PROJECT,
		fn: {
			dir: resolve('./app'),
			type: 'image' // If './app' contains a 'Dockerfile', this prop is not needed. 'lambda' can automatically infer that the type is an 'image'.
		},
		timeout: 30,
		memorySize: 128
	})
	return lambdaOutput
}

module.exports = main()
(1) The amazon/aws-lambda-nodejs:14.2021.09.29.20 docker image hosts a node web server listening on port 8080. The CMD expects a string or array following the `<file name without its extension>.<handler name>` naming convention (e.g., "index.handler").

(2) Once the container is running, the only way to test it is to perform a POST to this path: `2015-03-31/functions/function/invocations`. This container won't listen to anything else; no GET, no PUT, no DELETE.
You may also want to add a `.dockerignore`. We've added a Dockerfile and a .dockerignore example in the Annexes under the Docker files examples section.
As a quick refresher, the following Dockerfile
:
FROM amazon/aws-lambda-nodejs:14.2021.09.29.20
ARG FUNCTION_DIR="/var/task"
ENV HELLO Mike Davis
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy handler function and package.json
COPY index.js ${FUNCTION_DIR}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "index.handler" ]
Sets up an HELLO
environment variable that can be accessed by the Lambda code as follow:
exports.handler = async event => {
return {
statusCode: 200,
body: `Hello ${process.env.HELLO}!`
}
}
This could also have been set up via the `docker build` command with an `ARG` in the `Dockerfile`:
FROM amazon/aws-lambda-nodejs:14.2021.09.29.20
ARG FUNCTION_DIR="/var/task"
ARG MSG
ENV HELLO $MSG
...
docker build --build-arg MSG=buddy -t my-app .
docker run -p 127.0.0.1:4000:8080 my-app:latest
To define one or many --build-arg
via Pulumi, use the following API:
// ECR images. Doc:
// - buildAndPushImage API: https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/awsx/ecr/#buildAndPushImage
// - 2nd argument is a DockerBuild object: https://www.pulumi.com/docs/reference/pkg/docker/image/#dockerbuild
const image = awsx.ecr.buildAndPushImage(PROJECT, {
context: './app',
args: {
MSG: 'Mr Dao. How do you do?'
}
})
Please refer to the Mounting an EFS access point on a Lambda section.
For a full example of a project that uses Lambda with Docker and Git installed to save files on EFS, please refer to this project: https://github.com/nicolasdao/example-aws-lambda-efs
IMPORTANT: Your layer code must be under `/your-layer/nodejs/`, not `your-layer/`.

For a refresher on how Lambda Layers work, please refer to this document: https://gist.github.com/nicolasdao/e72beb55f3550351e777a4a52d18f0be#layers
Pulumi file `index.js`:
const pulumi = require('@pulumi/pulumi')
const aws = require('@pulumi/aws')
const { resolve } = require('path')
const { aws:{ lambda } } = require('@cloudlesslabs/pulumix')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const REGION = aws.config.region
const RUNTIME = 'nodejs12.x'
const tags = {
Project: PROJ,
Env: ENV,
Region: REGION
}
const main = async () => {
const lambdaLayerOutput1 = await lambda.layer({
name: `${PROJECT}-layer-01`,
description: 'Includes puffy',
runtime: RUNTIME,
dir: resolve('./layers/layer01'),
tags
})
const lambdaLayerOutput2 = await lambda.layer({
name: `${PROJECT}-layer-02`,
description: 'Do something else',
runtime: RUNTIME,
dir: resolve('./layers/layer02'),
tags
})
const lambdaOutput = await lambda.fn({
name: PROJECT,
fn: {
runtime: RUNTIME,
dir: resolve('./app')
},
layers:[
lambdaLayerOutput1.arn,
lambdaLayerOutput2.arn
],
timeout:30,
memorySize:128,
tags
})
return {
lambda: lambdaOutput,
lambdaLayer: lambdaLayerOutput1
}
}
module.exports = main()
Lambda file:
exports.handler = async () => {
console.log('Welcome to lambda test layers!')
try {
require('puffy')
console.log('puffy is ready')
} catch (err) {
console.error('ERROR')
console.log(err)
}
try {
const { sayHi } = require('/opt/nodejs/utils')
sayHi()
} catch (err) {
console.error('ERROR IN LAYER ONE')
console.log(err)
}
try {
const { sayHi } = require('/opt/nodejs')
sayHi()
} catch (err) {
console.error('ERROR IN LAYER TWO')
console.log(err)
}
}
Layer01 code ./layers/layer01/nodejs/utils.js
module.exports = {
sayHi: () => console.log('Hello, I am layer One')
}
Layer02 code ./layers/layer02/nodejs/index.js
module.exports = {
sayHi: () => console.log('Hello, I am layer Two')
}
To learn more about what versions and aliases are and why they are useful, please refer to this document: AWS LAMBDA/Deployment strategies
To publish the latest deployment to a new version, use the `publish` property:
const lambdaOutput = await lambda.fn({
name: PROJECT,
fn: {
runtime: RUNTIME,
dir: resolve('./app')
},
publish: true,
timeout:30,
memorySize:128,
tags
})
To create an alias:
// Doc: https://www.pulumi.com/registry/packages/aws/api-docs/lambda/alias/
const testLambdaAlias = new aws.lambda.Alias('testLambdaAlias', {
name: 'prod',
description: 'a sample description',
functionName: lambdaOutput.arn,
functionVersion: '1',
routingConfig: {
additionalVersionWeights: {
'2': 0.5,
}
}
})
Full API doc at https://www.pulumi.com/registry/packages/aws/api-docs/lambda/alias/.
// Doc: https://www.pulumi.com/registry/packages/aws/api-docs/iam/policy/
const cloudWatchPolicy = new aws.iam.Policy('my-custom-policy', {
name: 'my-custom-policy',
description: 'IAM policy for logging from a lambda',
path: '/',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'logs:CreateLogGroup',
'logs:CreateLogStream',
'logs:PutLogEvents'
],
Resource: 'arn:aws:logs:*:*:*',
Effect: 'Allow'
}]
})
})
To see a concrete example that combines a role and a policy to allow multiple services to invoke a Lambda, please refer to this example under the AWS role section.
// Doc: https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/
const lambdaRole = new aws.iam.Role('lambda-role', {
name: 'lambda-role',
description: 'IAM role for a Lambda',
assumeRolePolicy: {
Version: '2012-10-17',
Statement: [{
Action: 'sts:AssumeRole',
Principal: {
Service: 'lambda.amazonaws.com', // tip: Use the command `npx get-principals` to find any AWS principal
},
Effect: 'Allow',
Sid: ''
}],
}
})
TIPS: The `Service` property supports both the string type and the string array type. The `Statement` for a role with multiple services would look like this:

[{
	Action: 'sts:AssumeRole',
	Principal: {
		Service: [
			'lambda.amazonaws.com',
			'cognito-idp.amazonaws.com'
		]
	},
	Effect: 'Allow',
	Sid: ''
}]
This example assumes we have already acquired a lambda's ARN (string):
const lambdaArnString = getLambdaArn() // Just for demo.
// 1. Create a multi-services IAM role.
const myRole = new aws.iam.Role('my-multi-services-role', {
name: 'my-multi-services-role',
description: 'IAM role for a multi-services role',
assumeRolePolicy: {
Version: '2012-10-17',
Statement: [{
Action: 'sts:AssumeRole',
Principal: {
Service: [// tip: Use the command `npx get-principals` to find any AWS principal
'events.amazonaws.com',
'cognito-idp.amazonaws.com'
]
},
Effect: 'Allow',
Sid: ''
}],
}
})
// 2. Create a policy that can invoke the lambda.
const invokePolicy = new aws.iam.Policy('my-custom-policy', {
name: 'my-custom-policy',
description: 'IAM policy for invoking a lambda',
path: '/',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'lambda:InvokeFunction'
],
Resource: lambdaArnString,
Effect: 'Allow'
}]
})
})
// 3. Attach the policy to the role
const lambdaRolePolicyAttachment = new aws.iam.RolePolicyAttachment(`attached-policy`, {
role: myRole.name,
policyArn: invokePolicy.arn
})
const { aws:{ s3 }, resolve } = require('@cloudlesslabs/pulumix')
const createBucket = async name => {
const { bucket } = await s3.bucket({
name,
website: { // When this property is set, the bucket is public. Otherwise, the bucket is private.
indexDocument: 'index.html'
}
})
const [websiteEndpoint, bucketDomainName, bucketRegionalDomainName] = await resolve([
bucket.websiteEndpoint,
bucket.bucketDomainName,
bucket.bucketRegionalDomainName])
console.log(`Website URL: ${websiteEndpoint}`)
console.log(`Bucket domain name: ${bucketDomainName}`) // e.g., 'bucketname.s3.amazonaws.com'
console.log(`Bucket regional domain name: ${bucketRegionalDomainName}`) // e.g., 'bucketname.s3.ap-southeast-2.amazonaws.com'
}
createBucket('my-unique-name')
This feature does not use native Pulumi APIs. Instead, it uses the AWS SDK to sync files via the S3 API after the bucket has been created. When the `content` property of the `s3.bucket` input is set, a new `files` property is added to the output. The new `files` property is an array containing objects similar to this:
[{
key: "favicon.png",
hash: "5efd4dc4c28ef3548aec63ae88865ff9"
},{
key: "global.css",
hash: "8ff861b6a5b09e7d5fa681d8dd31262a"
}]
Because this array is stored in Pulumi, we can use this reference object to determine which file must be updated (based on its hash), which file must be added (based on its key) and which file must be deleted (based on its key). This is demoed in the sample below, where you can see that the `existingContent` is passed from the stack to the `s3.bucket` API.
The following example syncs the files stored under the ./app/public
folder and excludes all files under the node_modules
folder.
const pulumi = require('@pulumi/pulumi')
const { resolve, aws: { s3 } } = require('@cloudlesslabs/pulumix')
const { join } = require('path')
const ENV = pulumi.getStack()
const PROJ = pulumi.getProject()
const PROJECT = `${PROJ}-${ENV}`
const thisStack = new pulumi.StackReference(`${PROJ}/${ENV}`)
const oldFiles = thisStack.getOutput('files')
const main = async () => {
const existingContent = (await resolve(oldFiles)) || []
const { bucket, files } = await s3.bucket({
name: PROJECT,
website: { // When this property is set, the bucket is public. Otherwise, the bucket is private.
indexDocument: 'index.html',
content: {
dir:join(__dirname, './app/public'),
ignore: '**/node_modules/**',
existingContent, // e.g., [{key: "favicon.png",hash: "5efd4dc4c28ef3548aec63ae88865ff9" },{ key: "global.css",hash: "8ff861b6a5b09e7d5fa681d8dd31262a" }]
// remove:true
}
}
})
return {
bucket,
files
}
}
module.exports = main()
IMPORTANT: To delete a bucket, its content must be removed first. Re-deploy the stack by uncommenting the
// remove:true
line. This will remove all the content.
Using the exact same sample from above:
const main = async () => {
const existingContent = (await resolve(oldFiles)) || []
const { bucket, files, cloudfront } = await s3.bucket({
name: PROJECT,
website: { // When this property is set, the bucket is public. Otherwise, the bucket is private.
indexDocument: 'index.html',
content: {
dir:join(__dirname, './app/public'),
ignore: '**/node_modules/**',
existingContent, // e.g., [{key: "favicon.png",hash: "5efd4dc4c28ef3548aec63ae88865ff9" },{ key: "global.css",hash: "8ff861b6a5b09e7d5fa681d8dd31262a" }]
// remove:true
},
cloudfront: {
invalidateOnUpdate: true
}
}
})
return {
bucket,
files,
cloudfront
}
}
const { aws:{ secret } } = require('@cloudlesslabs/pulumix')
secret.get('my-secret-name').then(({ version, data }) => {
console.log(version)
console.log(data) // Actual secret object
})
WARNING: Don't forget to also define an egress rule to allow traffic out from your resource. This is a typical mistake that causes systems to be unable to contact any other services. The most common egress rule is:
{ protocol: '-1', fromPort:0, toPort:65535, cidrBlocks: ['0.0.0.0/0'], ipv6CidrBlocks: ['::/0'], description:'Allow all traffic' }
const { aws:{ securityGroup } } = require('@cloudlesslabs/pulumix')
const { securityGroup:mySecurityGroup, securityGroupRules:myRules } = await securityGroup.sg({
name: `my-special-sg`,
description: `Controls something special.`,
vpcId: 'vpc-1234',
egress: [{
protocol: '-1',
fromPort:0, toPort:65535, cidrBlocks: ['0.0.0.0/0'],
ipv6CidrBlocks: ['::/0'],
description:'Allow all traffic'
}],
tags: {
Project: 'demo'
}
})
const { aws: { ssm } } = require('@cloudlesslabs/pulumix')
const main = async () => {
// Full parameters list at https://www.pulumi.com/registry/packages/aws/api-docs/ssm/parameter/
const foo = await ssm.parameterStore.parameter({
name: 'foo',
value: { hello:'world' }
})
return foo
}
main()
To retrieve a value from Parameter store:
const { aws: { ssm } } = require('@cloudlesslabs/pulumix')
const main = async () => {
const { version, value } = await ssm.parameterStore.get({ name:'foo', version:2, json:true })
console.log({
version,
value
})
}
NOTICE: This method does not use the Pulumi API, as doing so creates `registered twice` issues when both a `get` and a `create` operation using the same name are put in the same script.
To store or update data in Parameter Store without using Pulumi:
const { aws: { ssm } } = require('@cloudlesslabs/pulumix')
const main = async () => {
// Full parameters list at https://www.pulumi.com/registry/packages/aws/api-docs/ssm/parameter/
const data = await ssm.parameterStore.create({
name: 'foo',
value: {
hello: 'World'
},
overWrite:true // Default false. True means you can overwrite the value.
})
return data // { version: 1, tier: 'Standard' }
}
main()
The previous example demonstrates how to read the value of a parameter store variable. However, this API does not use Pulumi under the hood. To get a specific version using the native Pulumi API:
const param = aws.ssm.Parameter.get('foo','foo:12')
When the version is not used with the parameter store's ID, the latest version is returned.
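For example, a minimal sketch (assuming a parameter named 'foo'):

const aws = require('@pulumi/aws')

// Pin to version 12 by suffixing the ID with ':12'.
const pinned = aws.ssm.Parameter.get('foo', 'foo:12')
// Without a version suffix, the latest version is returned.
const latest = aws.ssm.Parameter.get('foo', 'foo')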
By default, this utility creates a policy that allows the step-function to invoke any lambda.
const { aws: { stepFunction } } = require('@cloudlesslabs/pulumix')
const main = async () => {
const preProvision = await stepFunction.stateMachine({
name: 'my-step-function',
type: 'standard', // Valid values: 'standard' (default) or 'express'
description: 'Does something.',
states: preProvisionWorkflow,
// policies: [],
cloudWatchLevel: 'all', // Default is 'off'. Valid values: 'all', 'error', 'fatal'
		logsRetentionInDays: 7, // Default 0 (i.e., never expires). Only applies when 'cloudWatchLevel' is not 'off'.
tags:{
Name: 'my-step-function'
}
})
return {
preProvision
}
}
module.exports = main()
The preProvisionWorkflow is a JSON object that you can export from the Step Function designer in the AWS console. This object is rather complex, so we recommend using the designer.
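For reference, a minimal hand-written states object (valid Amazon States Language, shown only as a sketch; real workflows from the designer are much richer) looks like this:
const preProvisionWorkflow = {
    StartAt: 'HelloWorld',
    States: {
        HelloWorld: {
            Type: 'Pass', // A no-op state; real workflows would use 'Task' states pointing to lambda ARNs.
            Result: 'Hello world!',
            End: true
        }
    }
}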
WARNING: Once the VPC's subnets have been created, updating them will produce a replace, which can have dire consequences for your entire infrastructure. Therefore, think twice when setting them up.
The following setup is quite safe:
const vpcOutput = vpc({
name: 'my-project-dev',
subnets: [{ type: 'public' }, { type: 'private' }],
numberOfAvailabilityZones: 3, // Provide the maximum number of AZs based on your region. The default is 2
protect: false,
tags: {
Project: 'my-project',
Env: 'dev'
}
})
This setup will divide the VPC's CIDR block in equal portions based on the total number of subnets created. The above example shows 6 subnets (3 public and 3 private). Because the example above did not specify any CIDR block for the VPC, it is set to 10.0.0.0/16, which represents 65,536 IP addresses. This means each subnet can use up to ~10,922 IP addresses.
The last thing to be aware of is that the private subnets will also provision 3 NATs in the public subnets. The temptation would be to use isolated subnets instead of private ones to save money, but from my experience, this is pointless. You'll always end up needing internet access from your isolated subnets, so don't bother and set up private subnets from the beginning.
Full API doc at https://www.pulumi.com/docs/reference/pkg/gcp/
const pulumi = require('@pulumi/pulumi')
const gcp = require('@pulumi/gcp')
if (!process.env.PROJECT)
throw new Error('Missing required environment variable \'process.env.PROJECT\'')
const config = new pulumi.Config()
const { location } = config.requireObject('gcp_bucket')
const STACK_NAME = pulumi.getStack()
const RESOURCE_PREFIX = `${process.env.PROJECT}-${STACK_NAME}`
const FILE_BUCKET = `${RESOURCE_PREFIX}-storage-pb`
const PRIVATE_BUCKET = `${RESOURCE_PREFIX}-nosql-db`
// Create the public file storage
const publicFileBucket = new gcp.storage.Bucket(FILE_BUCKET, {
name: FILE_BUCKET, // This seems redundant, but it is not. It forces Pulumi to not add a unique suffix on your bucket.
bucketPolicyOnly: true, // Means the policy applies on the entire bucket rather than on a per object basis
cors: [{
maxAgeSeconds: 3600,
methods: [ 'GET', 'OPTIONS', 'HEAD', 'POST', 'PUT', 'DELETE' ],
origins: ['*'],
responseHeaders: ['*'],
}],
location
})
// Create the private bucket
const privateBucket = new gcp.storage.Bucket(PRIVATE_BUCKET, {
name: PRIVATE_BUCKET,
location
})
module.exports = {
publicFileBucket: {
id: publicFileBucket.id,
publicUrl: publicFileBucket.selfLink,
url: publicFileBucket.url,
storageClass: publicFileBucket.storageClass,
location: publicFileBucket.location
},
privateBucket: {
id: privateBucket.id,
publicUrl: privateBucket.selfLink,
url: privateBucket.url,
storageClass: privateBucket.storageClass,
location: privateBucket.location
}
}
require('@pulumi/pulumi')
const gcp = require('@pulumi/gcp')
if (!process.env.PROJECT)
throw new Error('Missing required environment variable \'process.env.PROJECT\'')
const SERVICES = [
'cloudbuild.googleapis.com',
'containerregistry.googleapis.com',
'run.googleapis.com',
'secretmanager.googleapis.com'
]
const services = []
for(const service of SERVICES) {
const { id } = new gcp.projects.Service(service, {
project: process.env.PROJECT,
service
})
services.push(id)
}
module.exports = {
services
}
WARNING: Enabling Firebase on a Google project cannot be undone. I would suggest not deleting the Pulumi code that enables that service, even if you wish to stop using Firebase in your project. You might think you just want to clean the Pulumi project, but the truth is that this will create issues, as the Firebase project cannot be disabled.
Firebase is kind of a weird service. In essence, it is part of the GCP suite, but from a brand perspective, it is a separate product. Though there are a few Firebase services(1) that can be enabled in a GCP project the way it was explained in the previous section, this is not how to enable Firebase on a Google project. The correct Pulumi API is the following:
const gcp = require('@pulumi/gcp')
const firebase = new gcp.firebase.Project('your-firebase-project-name', {
project: 'your-gcp-project-id'
})
module.exports = {
firebase: firebase.id
}
The above snippet has a few side effects. It will provision the following:
- The Firebase Admin SDK is added.
(1) The GCP Firebase services are:
firebase.googleapis.com
firebaseappdistribution.googleapis.com
firebaseapptesters.googleapis.com
firebasedynamiclinks.googleapis.com
firebaseextensions.googleapis.com
firebasehosting.googleapis.com
firebaseinappmessaging.googleapis.com
firebaseinstallations.googleapis.com
firebaseml.googleapis.com
firebasemods.googleapis.com
firebasepredictions.googleapis.com
firebaseremoteconfig.googleapis.com
firebaserules.googleapis.com
firebasestorage.googleapis.com
firestore.googleapis.com
Unfortunately, as of August 2020, it is not possible to automate the enabling of Identity Platform via Pulumi, because Identity Platform is an app in the Google Cloud Marketplace rather than a first-class Google Cloud service.
To enable that service, manually log in to the Google Cloud console here.
The following steps show how to provision a Cloud Run service named <PULUMI PROJECT NAME>-<STACK>, where <PULUMI PROJECT NAME> is the name property in the Pulumi.yaml. For example, if the stack is called test, the service's name could be: yourproject-test.
To use this sample, make sure to:
- Install the dependencies: npm i @pulumi/pulumi @pulumi/gcp @pulumi/docker
- Configure the Pulumi.<STACK NAME>.yaml so it contains at a minimum the following settings:
config:
  your-project-name:memory: 512Mi
  gcp:project: your-gcp-project-id
  gcp:region: australia-southeast1
- Set the following environment variables (e.g., via dotenv or your build server):
  - DB_USER
  - DB_PASSWORD
- Put your app in the app folder. It does not need any cloudbuild.yaml since the build is automated with Pulumi, but it still needs a Dockerfile as per usual.
const pulumi = require('@pulumi/pulumi')
const gcp = require('@pulumi/gcp')
const docker = require('@pulumi/docker')
const { git } = require('./utils')
// Validates that the environment variables are set up
const ENV_VARS = [
'DB_USER',
'DB_PASSWORD'
]
for (let varName of ENV_VARS)
if (!process.env[varName])
throw new Error(`Missing required environment variables 'process.env.${varName}'`)
const config = new pulumi.Config()
const STACK_NAME = pulumi.getStack()
const MEMORY = config.get('memory') || '512Mi' // 'get' (not 'require') so the fallback can apply when 'memory' is not set
const SHORT_SHA = git.shortSha()
const SERVICE_NAME = `${config.name}-${STACK_NAME}`
const IMAGE_NAME = `${SERVICE_NAME}-image`
const SERVICE_ACCOUNT_NAME = `${SERVICE_NAME}-cloudrun`
if (!SHORT_SHA)
throw new Error('This project is not a git repository')
if (!gcp.config.project)
throw new Error(`Missing required 'gcp:project' in the '${STACK_NAME}' stack config`)
if (!gcp.config.region)
throw new Error(`Missing required 'gcp:region' in the '${STACK_NAME}' stack config`)
// Enables the Cloud Run service (doc: https://www.pulumi.com/docs/reference/pkg/gcp/projects/service/)
const enableCloudRun = new gcp.projects.Service('run.googleapis.com', {
service: 'run.googleapis.com'
})
const gcpAccessToken = pulumi.output(gcp.organizations.getClientConfig({}).then(c => c.accessToken))
// Uploads new Docker image with your app to Google Cloud Container Registry (doc: https://www.pulumi.com/docs/reference/pkg/docker/image/)
const dockerImage = new docker.Image(IMAGE_NAME, {
imageName: pulumi.interpolate`gcr.io/${gcp.config.project}/${config.name}:${SHORT_SHA}`,
build: {
context: './app'
},
registry: {
server: 'gcr.io',
username: 'oauth2accesstoken',
password: pulumi.interpolate`${gcpAccessToken}`
}
})
// Creates a new service account for that Cloud Run service (doc: https://www.pulumi.com/docs/reference/pkg/gcp/serviceaccount/account/)
const serviceAccount = new gcp.serviceAccount.Account(SERVICE_ACCOUNT_NAME, {
accountId: SERVICE_ACCOUNT_NAME, // This will automatically create the service account email as follow: <SERVICE_ACCOUNT_NAME>@<PROJECT ID>.iam.gserviceaccount.com
displayName: SERVICE_ACCOUNT_NAME
})
// Deploys the new Docker image to Google Cloud Run (doc: https://www.pulumi.com/docs/reference/pkg/gcp/cloudrun/)
const cloudRunService = new gcp.cloudrun.Service(SERVICE_NAME, {
name: SERVICE_NAME,
location: gcp.config.region,
template: {
// doc: https://www.pulumi.com/docs/reference/pkg/gcp/cloudrun/service/#servicetemplatespec
spec: {
// doc: https://www.pulumi.com/docs/reference/pkg/gcp/cloudrun/service/#servicetemplatespeccontainer
containers: [{
envs: ENV_VARS.map(name => ({ name, value:process.env[name] })),
image: dockerImage.imageName,
// doc: https://www.pulumi.com/docs/reference/pkg/gcp/cloudrun/service/#servicetemplatespeccontainerresources
resources: {
limits: {
memory: MEMORY // Available units are 'Gi', 'Mi' and 'Ki'
},
},
}],
serviceAccountName: serviceAccount.email, // This is optional. The default is the project's default service account
containerConcurrency: 80, // 80 is the max. Above this limit, Cloud Run spawn another container.
},
},
}, {
dependsOn: [
enableCloudRun
]
})
module.exports = {
serviceAccount: {
id: serviceAccount.id,
name: serviceAccount.name,
accountId: serviceAccount.accountId,
email: serviceAccount.email,
project: serviceAccount.project
},
cloudRunService: {
id: cloudRunService.id,
name: cloudRunService.name,
project: cloudRunService.project,
location: cloudRunService.location,
url: cloudRunService.status.url,
serviceAccount: cloudRunService.template.spec.serviceAccountName
},
dockerImage: dockerImage.imageName,
enableCloudRun: enableCloudRun.id
}
What's interesting in this template:
- envs: ENV_VARS.map(name => ({ name, value:process.env[name] })) passes the environment variables to the Docker container. If your use case requires passing some of those variables to the Docker image instead, please refer to the Passing environment variables to the Docker image rather than the Docker container section.
- The registry property in the docker.Image instantiation. The Pulumi documentation on how to set this up for Google Cloud Container Registry was not really clear:
  - server: Must be hardcoded to gcr.io.
  - username: Must be hardcoded to oauth2accesstoken.
  - password: This is the short-lived OAuth2 access token retrieved based on your Google credentials. That token can be retrieved with the gcp.organizations.getClientConfig({}).then(c => c.accessToken) API. However, because this is a Promise that resolves to a string, it must first be converted to an Output with pulumi.output. The string can finally be passed to the docker.Image instance with the pulumi.interpolate function.
- The Cloud Run service runs under its own service account rather than the project's default one:
const serviceAccount = new gcp.serviceAccount.Account(...)
...
const cloudRunService = new gcp.cloudrun.Service(SERVICE_NAME, {
template: {
spec: {
...
serviceAccountName: serviceAccount.email,
...
}
}
})
As mentioned earlier, this step is optional, but it is considered a best practice to manage IAM policies between services. If the line serviceAccountName: serviceAccount.email is omitted, the Cloud Run service is associated with the project's default service account.
By default, Cloud Run services are protected. This means that they cannot be accessed via HTTPS outside of your Google Cloud project's VPC. To enable HTTPS access to the public, add the following snippet at the bottom of the previous code snippet:
// Allows this service to be accessed via public HTTPS
const PUBLIC_MEMBER = `${SERVICE_NAME}-public-member`
const publicAccessMember = new gcp.cloudrun.IamMember(PUBLIC_MEMBER, {
service: cloudRunService.name,
location: cloudRunService.location,
role: 'roles/run.invoker',
member: 'allUsers'
})
This section demonstrates how to create a Cloud Run service that can invoke another protected Cloud Run service.
It is considered a best practice to not expose your Cloud Run services publicly unless this is a business requirement (e.g., exposing a web API for a mobile or web app). This means that for service-to-service communication, roles must be explicitly configured to allow specific agents to interact with each other. The approach is quite straightforward:
- Reference the protected service's stack to get its name, project and location. This means that those pieces of information must have been added to the stack's outputs:
const otherProtectedStack = new pulumi.StackReference('your-other-protected-stack')
- Add the roles/run.invoker role to the current Cloud Run's service account:
const binding = new gcp.cloudrun.IamBinding('your-new-binding-name', {
service: otherProtectedStack.outputs.cloudRunService.name,
location: otherProtectedStack.outputs.cloudRunService.location,
project: otherProtectedStack.outputs.cloudRunService.project,
role: 'roles/run.invoker',
members: [
pulumi.interpolate`serviceAccount:${serviceAccount.email}`
]
})
IMPORTANT: Notice the convention used to define the
members
:
- We need to use
pulumi.interpolate
becauseserviceAccount.email
is an Output.- We need to prefix the service account email with
serviceAccount
(careful, this is case-sensitive!), otherwise, aError 400: The member ... is of an unknown type
error is thrown.
To allow multi-tenancy, first enable it manually in the Google Cloud console:
- In the Identity Platform page, click on Tenants in the menu.
- In Settings, select the Security tab and then click on the Allow tenants button.
doc: https://www.pulumi.com/docs/reference/pkg/gcp/identityplatform/tenant/
const tenant = new gcp.identityplatform.Tenant('your-tenant-name', {
allowPasswordSignup: true,
displayName: 'your-tenant-name'
})
module.exports = {
tenant: {
id: tenant.id,
tenantId: tenant.name // Value required in the client: firebase.auth().tenantId = tenantId
}
}
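On the client, the tenant ID exported above is then set on the Firebase Auth instance before signing in. A sketch using the Firebase v8 JS SDK (the config object and credentials are placeholders):
const firebase = require('firebase/app')
require('firebase/auth')

firebase.initializeApp({ /* your web app's Firebase config */ })
firebase.auth().tenantId = 'your-tenant-id' // The 'tenantId' value exported by the stack above.
firebase.auth().signInWithEmailAndPassword('user@example.com', 'password')
    .then(cred => console.log(cred.user.tenantId))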
There are no Pulumi APIs to list all the project's service accounts, but it is easy to call the official Google Cloud REST API to get that information. Convert that Promise into an Output with pulumi.output so you can use it with other resources.
const pulumi = require('@pulumi/pulumi')
const gcp = require('@pulumi/gcp')
const fetch = require('node-fetch')
/**
* Selects service accounts in the current project.
*
* @param {String} query.where.email
* @param {String} query.where.emailContains
*
* @return {String} serviceAccounts[].description
* @return {String} serviceAccounts[].displayName
* @return {String} serviceAccounts[].email
* @return {String} serviceAccounts[].etag
* @return {String} serviceAccounts[].name
* @return {String} serviceAccounts[].oauth2ClientId
* @return {String} serviceAccounts[].projectId
* @return {String} serviceAccounts[].uniqueId
*/
const select = async query => {
const where = (query || {}).where || {}
const { accessToken } = await gcp.organizations.getClientConfig({})
const uri = `https://iam.googleapis.com/v1/projects/${gcp.config.project}/serviceAccounts`
const data = await fetch(uri, {
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${accessToken}`
}
}).then(res => res.json())
if (!data || !data.accounts || !data.accounts.length)
return []
const filters = []
if (where.email)
filters.push(account => account.email == where.email)
if (where.emailContains)
filters.push(account => account.email.indexOf(where.emailContains) >= 0)
return data.accounts.filter(account => filters.every(f => f(account)))
}
const find = query => select(query).then(data => data[0])
module.exports = {
select,
find
}
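As suggested above, wrap the returned Promise with pulumi.output so it can feed other resources. A sketch, assuming the module above is saved at the hypothetical path './service-accounts':
const pulumi = require('@pulumi/pulumi')
const { find } = require('./service-accounts') // hypothetical local path to the module above

// Wrap the Promise in an Output so it can be composed with other resources.
const cloudRunAccount = pulumi.output(find({ where: { emailContains: 'cloudrun' } }))
const cloudRunEmail = cloudRunAccount.apply(account => account ? account.email : undefined)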
ETIMEDOUT
This is most likely a networking issue, typically a lambda in a VPC that cannot access the public internet. Please refer to the A few words about AWS Lambda section.
failed to create '/home/sbx_userxxxx/.pulumi'
Please refer to the Setting it up in Docker section.
This typically happens with the Automation API. The AWS Pulumi plugin is not found because it was either never installed with stack.workspace.installPlugin('aws', 'v4.17.0'), or the wrong version was installed (e.g., stack.workspace.installPlugin('aws', 'v4.0.0')). The plugin is version sensitive. Which version is required depends on the Pulumi version you're using. The best way to find out is to try to deploy without installing the plugin, then read the error message to figure out the expected version.
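A sketch of installing the plugin explicitly with the Automation API (the version shown is only an example; match it to the error message):
const { LocalWorkspace } = require('@pulumi/pulumi/automation')

const main = async () => {
    const stack = await LocalWorkspace.createOrSelectStack({
        stackName: 'dev',
        projectName: 'my-project',
        program: async () => ({}) // your inline Pulumi program
    })
    // Install the plugin before 'up'. The version must match what the error message expects.
    await stack.workspace.installPlugin('aws', 'v4.17.0')
    await stack.up({ onOutput: console.log })
}

main()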
IMAGE Launch error: fork/exec /lambda-entrypoint.sh: exec format error
This typically happens when the image used to run Lambda containers is using an OS that is incompatible with the expected Lambda OS. For example, amazon/aws-lambda-nodejs:14.2021.09.29.20
uses the arm64
architecture. This error will occur if the Lambda has been configured with its default x86_64
architecture.
To fix this issue, please refer to the ARM architecture recommended section.
There are 2 main ways to grant a service access to a resource:
- Identity-based policies: attach policies to a role that the service assumes.
- Resource-based policies: attach a policy to the resource itself (e.g., an AWS lambda permission).
Choosing one strategy over the other depends on your use case. That being said, some scenarios only accept one. For example, when configuring a lambda to be triggered by a scheduled CRON job (i.e., a Cloudwatch event), only the resource-based policy via an AWS lambda permission works. Go figure...
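For instance, a resource-based permission that lets a Cloudwatch event rule invoke a lambda could look like this (a sketch; the 'lambda' and 'rule' variables are assumed to exist in your stack):
const allowCloudwatch = new aws.lambda.Permission('allow-cloudwatch', {
    action: 'lambda:InvokeFunction',
    function: lambda.name, // assumed aws.lambda.Function
    principal: 'events.amazonaws.com',
    sourceArn: rule.arn // assumed aws.cloudwatch.EventRule that holds the schedule
})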
The standard way to allow a service to access a resource is to:
- Create a role (e.g., lambda-role) that can only be assumed by a specific principal (e.g., lambda.amazonaws.com).
- Create a policy, or reuse a managed AWS policy.
- Attach the policy to the role.
- Reference that role on the resource.
Tip: Use npx get-principals to find the principal URI.
Tip: Use npx get-policies to search AWS managed policies and get their ARN.
For example:
// Step 1: Create a role that identifies the resource (mainly the principal).
const lambdaRole = new aws.iam.Role('lambda-role', {
assumeRolePolicy: {
Version: '2012-10-17',
Statement: [{
Action: 'sts:AssumeRole',
Principal: {
Service: 'lambda.amazonaws.com', // tip: Use the command `npx get-principals` to find any AWS principal
},
Effect: 'Allow',
Sid: ''
}],
}
})
// Step 2: Create a policy or use the `npx get-policies` to get a managed AWS policy ARN
const cloudWatchPolicy = new aws.iam.Policy('cw-policy', {
path: '/',
description: 'IAM policy for logging from a lambda',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'logs:CreateLogGroup',
'logs:CreateLogStream',
'logs:PutLogEvents'
],
Resource: 'arn:aws:logs:*:*:*',
Effect: 'Allow'
}]
})
})
// Step 3: Attach the policy to the role. You can attach more than one.
const lambdaLogs = new aws.iam.RolePolicyAttachment(`attached-policy`, {
role: lambdaRole.name,
policyArn: cloudWatchPolicy.arn
})
// Step 4: Reference that role on the resource
const lambda = new aws.lambda.Function('my-lambda', {
    // ... other properties
    role: lambdaRole.arn
}, {
    dependsOn: [lambdaLogs] // 'dependsOn' is a resource option, so it belongs in this third argument rather than in the resource's properties.
})
const s3ObjectPolicyName = `my-project-s3-access`
const s3ObjectPolicy = new aws.iam.Policy(s3ObjectPolicyName, {
name: s3ObjectPolicyName,
description: `Allow to read/write objects in an S3 bucket.`,
path: '/',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
's3:Get*',
's3:List*',
's3:PutObject'
],
			Resource: [join(logBucketArn,'*')], // 'join' is Node's path.join. Notice that you cannot simply use the bucket's ARN; object-level actions require '<bucket ARN>/*'.
Effect: 'Allow'
}]
})
})
const parameterStorePolicyName = `my-project-parameter-store`
const parameterStorePolicy = new aws.iam.Policy(parameterStorePolicyName, {
name: parameterStorePolicyName,
description: `Allow to read Parameter Store.`,
path: '/',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'ssm:GetParameters',
'ssm:GetParameter'
],
Resource: ['*'],
Effect: 'Allow'
}]
})
})
// IAM: Allow lambda to read Cloudwatch logs.
const cloudwatchLogGroupPolicyName = `my-project-read-log-group`
const cloudwatchLogGroupPolicy = new aws.iam.Policy(cloudwatchLogGroupPolicyName, {
name: cloudwatchLogGroupPolicyName,
description: `Allow to read Cloudwatch log group.`,
path: '/',
policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Action: [
'logs:FilterLogEvents'
],
Resource: ['*'],
Effect: 'Allow'
}]
})
})
Dockerfile example
This example shows how you would set up two environment variables as well as the GitHub auth token needed to install private NPM packages hosted on GitHub:
WARNING: The
amazon/aws-lambda-nodejs:14.2021.09.29.20
image targets ARM architecture. Therefore, make sure your Lambda usesarm64
. To find the tag that explicitly supports your OS architecture, browse the official AWS Lambda Docker Hub registry.
FROM amazon/aws-lambda-nodejs:14.2021.09.29.20
ARG FUNCTION_DIR="/var/task"
ARG GITHUB_ACCESS_TOKEN
ARG SOME_ENV_DEMO
ENV SOME_ENV_DEMO $SOME_ENV_DEMO
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Setup access to the private GitHub package
RUN echo "//npm.pkg.github.com/:_authToken=$GITHUB_ACCESS_TOKEN" >> ~/.npmrc
COPY .npmrc ${FUNCTION_DIR}
# Install all dependencies
COPY package*.json ${FUNCTION_DIR}
RUN npm install --only=prod --prefix ${FUNCTION_DIR}
# Copy app files
COPY . ${FUNCTION_DIR}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "index.handler" ]
.dockerignore example
Dockerfile
README.md
LICENSE
node_modules
npm-debug.log
.env