Cloudsplaining is an AWS IAM Security Assessment tool that identifies violations of least privilege and generates a risk-prioritized HTML report.
For full documentation, please visit the project on ReadTheDocs.
Cloudsplaining identifies violations of least privilege in AWS IAM policies and generates a pretty HTML report with a triage worksheet. It can scan all the policies in your AWS account or it can scan a single policy file.
It helps to identify IAM actions that do not leverage resource constraints. It also helps prioritize the remediation process by flagging IAM policies that present the following risks to the AWS account in question without restriction:
- Data Exfiltration (read-only actions such as s3:GetObject, ssm:GetParameter, and secretsmanager:GetSecretValue)
- Resource Exposure (the ability to modify resource-based policies)
- Unrestricted Infrastructure Modification
- Privilege Escalation
Cloudsplaining also identifies IAM Roles that can be assumed by AWS Compute Services (such as EC2, ECS, EKS, or Lambda), as they can present greater risk than user-defined roles - especially if the AWS Compute service is on an instance that is directly or indirectly exposed to the internet. Flagging these roles is particularly useful to penetration testers (or attackers) under certain scenarios. For example, if an attacker obtains privileges to execute ssm:SendCommand and there are privileged EC2 instances with the SSM agent installed, they effectively have the privileges of those EC2 instances. Remote Code Execution via AWS Systems Manager Agent was already a known escalation/exploitation path, but Cloudsplaining can make the process of identifying these cases easier. See the sample report for some examples.
You can also specify a custom exclusions file to filter out results that are False Positives for various reasons. For example, User Policies are permissive by design, whereas System roles are generally more restrictive. You might also have exclusions that are specific to your organization's multi-account strategy or AWS application architecture.
Policy Sentry revealed to us that it is finally possible to write IAM policies according to least privilege in a scalable manner. Before Policy Sentry was released, it was too easy to find IAM policy documents that lacked resource constraints. Consider the policy below, which allows the IAM principal (a role or user) to run s3:PutObject on any S3 bucket in the AWS account:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "*"
    }
  ]
}
This is bad. Ideally, access should be restricted according to resource ARNs, like so:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
Policy Sentry makes it really easy to do this. Once Infrastructure as Code developers or AWS Administrators gain familiarity with the tool (which is quite easy to use), we've found that adoption starts very quickly. However, if you've been using AWS, there is probably a very large backlog of IAM policies that could use an uplift. If you have hundreds of AWS accounts with dozens of policies in each, how can we lock down those AWS accounts by programmatically identifying the policies that should be fixed?
That's why we wrote Cloudsplaining.
Cloudsplaining identifies violations of least privilege in AWS IAM policies and generates a pretty HTML report with a triage worksheet. It can scan all the policies in your AWS account or it can scan a single policy file.
You can install Cloudsplaining via Homebrew:
brew tap salesforce/cloudsplaining https://github.com/salesforce/cloudsplaining
brew install cloudsplaining
Or via pip3:
pip3 install --user cloudsplaining
You should now be able to execute cloudsplaining from the command line by running cloudsplaining --help.
To enable Bash completion, put this in your .bashrc:
eval "$(_CLOUDSPLAINING_COMPLETE=source cloudsplaining)"
To enable ZSH completion, put this in your .zshrc:
eval "$(_CLOUDSPLAINING_COMPLETE=source_zsh cloudsplaining)"
You can also scan a single policy file to identify risks instead of an entire account.
cloudsplaining scan-policy-file --input-file examples/policies/explicit-actions.json
The output will include a finding description and a list of the IAM actions that do not leverage resource constraints. It will resemble the following:
Issue found: Data Exfiltration
Actions: s3:GetObject
Issue found: Resource Exposure
Actions: ecr:DeleteRepositoryPolicy, ecr:SetRepositoryPolicy, s3:BypassGovernanceRetention, s3:DeleteAccessPointPolicy, s3:DeleteBucketPolicy, s3:ObjectOwnerOverrideToBucketOwner, s3:PutAccessPointPolicy, s3:PutAccountPublicAccessBlock, s3:PutBucketAcl, s3:PutBucketPolicy, s3:PutBucketPublicAccessBlock, s3:PutObjectAcl, s3:PutObjectVersionAcl
Issue found: Unrestricted Infrastructure Modification
Actions: ecr:BatchDeleteImage, ecr:CompleteLayerUpload, ecr:CreateRepository, ecr:DeleteLifecyclePolicy, ecr:DeleteRepository, ecr:DeleteRepositoryPolicy, ecr:InitiateLayerUpload, ecr:PutImage, ecr:PutImageScanningConfiguration, ecr:PutImageTagMutability, ecr:PutLifecyclePolicy, ecr:SetRepositoryPolicy, ecr:StartImageScan, ecr:StartLifecyclePolicyPreview, ecr:TagResource, ecr:UntagResource, ecr:UploadLayerPart, s3:AbortMultipartUpload, s3:BypassGovernanceRetention, s3:CreateAccessPoint, s3:CreateBucket, s3:DeleteAccessPoint, s3:DeleteAccessPointPolicy, s3:DeleteBucket, s3:DeleteBucketPolicy, s3:DeleteBucketWebsite, s3:DeleteObject, s3:DeleteObjectTagging, s3:DeleteObjectVersion, s3:DeleteObjectVersionTagging, s3:GetObject, s3:ObjectOwnerOverrideToBucketOwner, s3:PutAccelerateConfiguration, s3:PutAccessPointPolicy, s3:PutAnalyticsConfiguration, s3:PutBucketAcl, s3:PutBucketCORS, s3:PutBucketLogging, s3:PutBucketNotification, s3:PutBucketObjectLockConfiguration, s3:PutBucketPolicy, s3:PutBucketPublicAccessBlock, s3:PutBucketRequestPayment, s3:PutBucketTagging, s3:PutBucketVersioning, s3:PutBucketWebsite, s3:PutEncryptionConfiguration, s3:PutInventoryConfiguration, s3:PutLifecycleConfiguration, s3:PutMetricsConfiguration, s3:PutObject, s3:PutObjectAcl, s3:PutObjectLegalHold, s3:PutObjectRetention, s3:PutObjectTagging, s3:PutObjectVersionAcl, s3:PutObjectVersionTagging, s3:PutReplicationConfiguration, s3:ReplicateDelete, s3:ReplicateObject, s3:ReplicateTags, s3:RestoreObject, s3:UpdateJobPriority, s3:UpdateJobStatus
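If you have a whole directory of policy files to review, you can drive the same command from a short script. The sketch below is just one way to do it; the policies/ directory name is an assumption for illustration, and it simply shells out to the documented scan-policy-file command for each JSON file:
import subprocess
from pathlib import Path

# Hypothetical directory containing the IAM policy JSON files you want to review.
POLICY_DIR = Path("policies")

for policy_file in sorted(POLICY_DIR.glob("*.json")):
    print(f"=== Scanning {policy_file} ===")
    # Invoke the documented scan-policy-file command for each file.
    subprocess.run(
        ["cloudsplaining", "scan-policy-file", "--input-file", str(policy_file)],
        check=False,  # keep going even if a file produces findings or errors
    )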
We can scan an entire AWS account and generate reports. To do this, we leverage the AWS IAM get-account-authorization-details API call, which downloads a large JSON file (around 100KB per account) that contains all of the IAM details for the account. This includes data on users, groups, roles, customer-managed policies, and AWS-managed policies.
You must have AWS credentials configured that can be used by the CLI.
You must have the privileges to run iam:GetAccountAuthorizationDetails. The arn:aws:iam::aws:policy/SecurityAudit policy includes this, as do many others that allow Read access to the IAM service.
To download the account authorization details, ensure you are authenticated to AWS, then run cloudsplaining's download command:
cloudsplaining download
If you prefer to use your ~/.aws/credentials file instead of environment variables, you can specify the profile name:
cloudsplaining download --profile myprofile
It will download a JSON file in your current directory that contains your account authorization detail information.
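Under the hood, this is the iam:GetAccountAuthorizationDetails API call mentioned above. The snippet below is not Cloudsplaining's implementation - it is only a rough boto3 sketch of what that call collects (the output file name is an assumption for illustration):
import json
import boto3

# Collect the paginated output of iam:GetAccountAuthorizationDetails,
# roughly what the `cloudsplaining download` command gathers for the scan.
iam = boto3.client("iam")
paginator = iam.get_paginator("get_account_authorization_details")

details = {"UserDetailList": [], "GroupDetailList": [], "RoleDetailList": [], "Policies": []}
for page in paginator.paginate():
    for key in details:
        details[key].extend(page.get(key, []))

# Illustrative file name; `cloudsplaining download` chooses its own.
with open("account-authorization-details.json", "w") as f:
    json.dump(details, f, indent=2, default=str)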
Cloudsplaining does not attempt to understand the context behind everything in your AWS account. It's possible to understand some of that context programmatically - whether the policy is applied to an instance profile, whether the policy is attached, whether inline IAM policies are in use, and whether or not AWS Managed Policies are in use - but only you know the context behind the design of your AWS infrastructure and the IAM strategy.
As such, it's important to eliminate False Positives that are context-dependent. You can do this with an exclusions file. We've included a command that will generate an exclusions file for you so you don't have to remember the required format.
You can create an exclusions template via the following command:
cloudsplaining create-exclusions-file
This will generate a file in your current directory titled exclusions.yml.
Now when you run the scan command, you can use the exclusions file like this:
cloudsplaining scan --exclusions-file exclusions.yml --input-file examples/files/example.json --output examples/files/
For more information on the structure of the exclusions file, see the Filtering False Positives section.
Now that we've downloaded the account authorization file, we can scan all of the AWS IAM policies with cloudsplaining.
Run the following command:
cloudsplaining scan --exclusions-file exclusions.yml --input-file examples/files/example.json --output examples/files/
It will create an HTML report in the output directory (see the sample report for an example).
It will also create a raw JSON data file:
default-iam-results.json: This contains the raw JSON output of the report. You can use this data file to operate on the scan results for various purposes. For example, you could write a Python script that parses this data and opens automated JIRA issues or Salesforce Work Items (a minimal sketch is shown after the sample output below). An example entry is shown below. The full example can be viewed at examples/files/iam-results-example.json.
{
"example-authz-details": [
{
"AccountID": "012345678901",
"ManagedBy": "Customer",
"PolicyName": "InsecureUserPolicy",
"Arn": "arn:aws:iam::012345678901:user/userwithlotsofpermissions",
"ActionsCount": 2,
"ServicesCount": 1,
"Actions": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Services": [
"s3"
]
}
]
}
See the examples/files folder for sample output.
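As a starting point for that kind of automation, here is a minimal sketch that reads a results file with the structure shown above and prints the customer-managed policies with the most flagged actions first. The file name and the fields used are taken from the example entry; adapt them to whatever your scan actually produces:
import json

# Assumed file name for illustration; point this at the JSON data file your scan produced.
with open("default-iam-results.json") as f:
    results = json.load(f)

# Flatten the per-authorization-details lists into one list of findings.
findings = [entry for entries in results.values() for entry in entries]

# Customer-managed policies first, sorted by the number of flagged actions.
customer_managed = [f for f in findings if f.get("ManagedBy") == "Customer"]
for finding in sorted(customer_managed, key=lambda f: f.get("ActionsCount", 0), reverse=True):
    print(f'{finding["PolicyName"]} ({finding["Arn"]}): '
          f'{finding["ActionsCount"]} actions across {finding["ServicesCount"]} service(s)')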
Resource constraints are best practice - especially for system roles/instance profiles - but sometimes broad permissions are by design. For example, consider a situation where a custom IAM policy is used on an instance profile for an EC2 instance that runs Terraform to provision infrastructure. In this case, broad permissions are a design requirement, so we don't want to include these in the results.
You can create an exclusions template via the following command:
cloudsplaining create-exclusions-file
This will generate a file in your current directory titled exclusions.yml.
The default exclusions file looks like this:
# Policy names to exclude from evaluation
# Suggestion: Add policies here that are known to be overly permissive by design, after you run the initial report.
policies:
- "AWSServiceRoleFor*"
- "*ServiceRolePolicy"
- "*ServiceLinkedRolePolicy"
- "AdministratorAccess" # Otherwise, this will take a long time
- "service-role*"
- "aws-service-role*"
# Don't evaluate these roles, users, or groups as part of the evaluation
roles:
- "service-role*"
- "aws-service-role*"
users:
- ""
groups:
- ""
# Read-only actions to include in the results, such as s3:GetObject
# By default, it includes Actions that could lead to Data Exfiltration
include-actions:
- "s3:GetObject"
- "ssm:GetParameter"
- "ssm:GetParameters"
- "ssm:GetParametersByPath"
- "secretsmanager:GetSecretValue"
# Write actions to exclude from the results, such as kms:Decrypt
exclude-actions:
- ""
- Under policies, list the path of policy names that you want to exclude.
- If you want to exclude a role titled MyRole, list MyRole or MyR* in the roles list (a short illustration of this wildcard matching follows the scan command below).
- Follow the same approach for the users and groups lists.
Now when you run the scan command, you can use the exclusions file like this:
cloudsplaining scan --exclusions-file exclusions.yml --input-file examples/files/example.json --output examples/files/
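The patterns in the exclusions file (such as AWSServiceRoleFor* or MyR*) are shell-style wildcards. The snippet below only illustrates that matching behavior with Python's fnmatch; it is not Cloudsplaining's internal matcher, and the names are made up:
from fnmatch import fnmatch

# Example exclusion patterns and some made-up policy/role names.
patterns = ["AWSServiceRoleFor*", "*ServiceRolePolicy", "MyR*"]
names = ["AWSServiceRoleForECS", "AmazonEKSServiceRolePolicy", "MyRole", "SomeOtherPolicy"]

for name in names:
    matched = [p for p in patterns if fnmatch(name, p)]
    # A name that matches any pattern would be excluded from the evaluation.
    print(f"{name}: {'excluded by ' + ', '.join(matched) if matched else 'evaluated'}")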
If your IAM user or IAM role has sts:AssumeRole permissions to a common IAM role across multiple AWS accounts, you can use the scan-multi-account command.
The project documentation includes a diagram that depicts how this process works.
Note: If you are new to setting up cross-account access, check out the official AWS tutorial on delegating access across AWS accounts using IAM roles. It can help you set up this architecture.
You can create the multi-account config file via the following command:
cloudsplaining create-multi-account-config-file \
  -o multi-account-config.yml
This will generate a file called multi-account-config.yml with the following contents:
accounts:
  default_account: 123456789012
  prod: 123456789013
  test: 123456789014
Note: Observe how the format of the file above is account_name: accountID. Edit the file contents to match your desired account names and account IDs. Include as many account IDs as you like.
For the next step, let's say that:
- The IAM role that exists in each of your target accounts is named CommonSecurityRole.
- Your local AWS credentials profile is named scanning-user.
- That profile has sts:AssumeRole permissions to assume the CommonSecurityRole in all your target accounts specified in the YAML file we created previously.
- You want to store the results in an S3 bucket named my-results-bucket.
Using the data above, you can run the following command:
cloudsplaining scan-multi-account \
-c multi-account-config.yml \
--profile scanning-user \
--role-name CommonSecurityRole \
--output-bucket my-results-bucket
Note that if you run the above without the --profile flag, it will execute in the standard AWS credentials order of precedence (i.e., environment variables, credentials profiles, ECS container credentials, then finally EC2 instance profile credentials).
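For reference, the cross-account mechanics that scan-multi-account relies on look roughly like the sketch below. This is not Cloudsplaining's actual code - it only illustrates reading the account IDs from the config file, assuming the common role in each target account, and pulling that account's authorization details with the temporary credentials. The file, profile, and role names are the ones assumed in the example above (boto3 and PyYAML are required):
import boto3
import yaml

ROLE_NAME = "CommonSecurityRole"  # the common role assumed in each target account

with open("multi-account-config.yml") as f:
    config = yaml.safe_load(f)

# Use the scanning-user profile described above to call sts:AssumeRole.
session = boto3.Session(profile_name="scanning-user")
sts = session.client("sts")

for account_name, account_id in config["accounts"].items():
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/{ROLE_NAME}",
        RoleSessionName="cloudsplaining-scan",
    )["Credentials"]

    # Temporary credentials scoped to the target account.
    target_iam = boto3.client(
        "iam",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    paginator = target_iam.get_paginator("get_account_authorization_details")
    page_count = sum(1 for _ in paginator.paginate())
    print(f"{account_name} ({account_id}): fetched {page_count} page(s) of authorization details")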
# Download authorization details
cloudsplaining download
# Download from a specific AWS profile
cloudsplaining download --profile someprofile
# Scan Authorization details
cloudsplaining scan --input-file default.json
# Scan Authorization details with custom exclusions
cloudsplaining scan --input-file default.json --exclusions-file exclusions.yml
# Scan Policy Files
cloudsplaining scan-policy-file --input-file examples/policies/wildcards.json
cloudsplaining scan-policy-file --input-file examples/policies/wildcards.json --exclusions-file examples/example-exclusions.yml
# Scan Multiple Accounts
# Generate the multi account config file
cloudsplaining create-multi-account-config-file -o accounts.yml
cloudsplaining scan-multi-account -c accounts.yml -r TargetRole --output-directory ./
Will it scan all policies by default?
No, it will only scan policies that are attached to IAM principals.
Will the download command download all policy versions?
Not by default. If you want to do this, specify the --include-non-default-policy-versions flag. Note that the scan tool does not currently operate on non-default versions.
I followed the installation instructions but can't execute the program via command line at all. What do I do?
This is likely an issue with your PATH. Your PATH environment variable does not include the directory where pip3 installs executables. On a Mac, you can likely fix this by entering the command below, depending on the Python version you have installed. YMMV.
export PATH=$HOME/Library/Python/3.7/bin/:$PATH
I followed the installation instructions, but I am receiving a ModuleNotFoundError that says No module named policy_sentry.analysis.expand. What should I do?
Try upgrading to the latest version of Cloudsplaining. This error was fixed in version 0.0.10.