

CIS Amazon Web Services Foundations Benchmark, Combined Levels, v3.0.0



AD ID

0003956

AD STATUS

CIS Amazon Web Services Foundations Benchmark, Combined Levels, v3.0.0

ORIGINATOR

The Center for Internet Security

TYPE

Best Practice Guideline

AVAILABILITY

Free

SYNONYMS

CIS Amazon Web Services Foundations Benchmark, v3.0.0, Combined Levels

CIS Amazon Web Services Foundations Benchmark, Combined Levels

EFFECTIVE

2024-01-31

ADDED

The document as a whole was last reviewed and released on 2024-08-02T00:00:00-0700.


Important Notice

This Authority Document In Depth Report is copyrighted - © 2024 - Network Frontiers LLC. All rights reserved. Copyright in the Authority Document analyzed herein is held by its authors. Network Frontiers makes no claims of copyright in this Authority Document.

This Authority Document In Depth Report is provided for informational purposes only and does not constitute, and should not be construed as, legal advice. The reader is encouraged to consult with an attorney experienced in these areas for further explanation and advice.

This Authority Document In Depth Report provides analysis and guidance for use and implementation of the Authority Document but it is not a substitute for the original authority document itself. Readers should refer to the original authority document as the definitive resource on obligations and compliance requirements.

The process we used to tag and map this document

This document has been mapped into the Unified Compliance Framework using a patented methodology and patented tools (you can research our patents HERE). The mapping team has made every effort to ensure the quality of mapping is of the highest degree. To learn more about the process we use to map Authority Documents, or to become involved in that process, click HERE.

Controls and associated Citations breakdown

When the UCF Mapping Teams tag Citations and their associated mandates within an Authority Document, those Citations and mandates are tied to Common Controls. Because those Citations and mandates are tied to Common Controls, three sets of metadata are associated with each Citation: Controls by Impact Zone, Controls by Type, and Controls by Classification.

The online version of the mapping analysis you see here is just a fraction of the work the UCF Mapping Team has done. The downloadable version of this document, available within the Common Controls Hub (HERE), contains the following:

Document implementation analysis – statistics about the document’s alignment with Common Controls as compared to other Authority Documents and statistics on usage of key terms and non-standard terms.

Citation and Mandate Tagging and Mapping – A complete listing of each and every Citation we found within CIS Amazon Web Services Foundations Benchmark, Combined Levels, v3.0.0 that has been tagged with its primary and secondary nouns and primary and secondary verbs, in four-column format. The first column shows the Citation (the marker within the Authority Document that points to where we found the guidance). The second column shows the Citation guidance per se, along with the tagging for the mandate we found within the Citation. The third column shows the Common Control ID that the mandate is linked to, and the final column gives us the Common Control itself.

Dictionary Terms – The dictionary terms listed for CIS Amazon Web Services Foundations Benchmark, Combined Levels, v3.0.0 are based upon terms found within the Authority Document's defined terms section (which most legal documents have), its glossary, and, for the most part, as tagged within each mandate. Terms with links are the standardized versions of those terms.



Common Controls and mandates by Impact Zone
54 Mandated Controls - bold    15 Implied Controls - italic    135 Implementation Controls - regular

An Impact Zone is a hierarchical way of organizing our suite of Common Controls — it is a taxonomy. The top levels of the UCF hierarchy are called Impact Zones. Common Controls are mapped within the UCF’s Impact Zones and are maintained in a legal hierarchy within that Impact Zone. Each Impact Zone deals with a separate area of policies, standards, and procedures: technology acquisition, physical security, continuity, records management, etc.


The UCF created its taxonomy by looking at the corpus of standards and regulations through the lens of unification and a view toward how the controls impact the organization. Thus, we created a hierarchical structure for each impact zone that takes into account regulatory and standards bodies, doctrines, and language.

Number of Controls
204 Total
  • Leadership and high level objectives
    5
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular TYPE CLASS
    Establish, implement, and maintain a data classification scheme. CC ID 11628
    [Ensure all data in Amazon S3 has been discovered, classified and secured when required. (Manual) Description: Amazon S3 buckets can contain sensitive data, that for security purposes should be discovered, monitored, classified and protected. Macie along with other 3rd party tools can automatically provide an inventory of Amazon S3 buckets. Rationale: Using a Cloud service or 3rd Party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Impact: There is a cost associated with using Amazon Macie. There is also typically a cost associated with 3rd Party tools that perform similar processes and protection. Audit: Perform the following steps to determine if Macie is running: From Console: 1. Login to the Macie console at https://console.aws.amazon.com/macie/ 2. In the left hand pane click on By job under findings. 3. Confirm that you have a Job setup for your S3 Buckets When you log into the Macie console if you aren't taken to the summary page and you don't have a job setup and running then refer to the remediation procedure below. If you are using a 3rd Party tool to manage and protect your s3 data you meet this recommendation. Remediation: Perform the steps below to enable and configure Amazon Macie From Console: 1. Log on to the Macie console at https://console.aws.amazon.com/macie/ 2. Click Get started. 3. Click Enable Macie. Setup a repository for sensitive data discovery results 1. In the Left pane, under Settings, click Discovery results. 2. Make sure Create bucket is selected. 3. Create a bucket, enter a name for the bucket. The name must be unique across all S3 buckets. 
In addition, the name must start with a lowercase letter or a number. 4. Click on Advanced. 5. Block all public access, make sure Yes is selected. 6. KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric, customer master key (CMK) that's in the same Region as the S3 bucket. 7. Click on Save Create a job to discover sensitive data 1. In the left pane, click S3 buckets. Macie displays a list of all the S3 buckets for your account. 2. Select the check box for each bucket that you want Macie to analyze as part of the job 3. Click Create job. 4. Click Quick create. 5. For the Name and description step, enter a name and, optionally, a description of the job. 6. Then click Next. 7. For the Review and create step, click Submit. Review your findings 1. In the left pane, click Findings. 2. To view the details of a specific finding, choose any field other than the check box for the finding. If you are using a 3rd Party tool to manage and protect your s3 data, follow the Vendor documentation for implementing and configuring that tool. 2.1.3]
    Establish/Maintain Documentation Preventive
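The remediation quoted above states two naming rules for the Macie discovery-results bucket (unique across all S3 buckets; must start with a lowercase letter or a number). As a quick local sanity check, those character-level rules can be sketched as follows; the helper name and regex are ours, not part of the benchmark, and AWS enforces further rules (such as global uniqueness) that cannot be checked locally:

```python
import re

# Rough S3 bucket-name check reflecting the rules quoted in the remediation:
# 3-63 characters of lowercase letters, digits, dots, and hyphens; the name
# must begin (and end) with a lowercase letter or a digit.
_BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_plausible_bucket_name(name: str) -> bool:
    """Return True when `name` passes the basic character and length rules."""
    return _BUCKET_NAME_RE.fullmatch(name) is not None
```

A name such as `macie-discovery-results` passes, while one with uppercase letters or a leading hyphen does not.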
    Take into account the characteristics of the geographical, behavioral and functional setting for all datasets. CC ID 15046 Data and Information Management Preventive
    Approve the data classification scheme. CC ID 13858 Establish/Maintain Documentation Detective
    Disseminate and communicate the data classification scheme to interested personnel and affected parties. CC ID 16804 Communicate Preventive
    Identify roles, tasks, information, systems, and assets that fall under the organization's mandated Authority Documents. CC ID 00688
    [Ensure all data in Amazon S3 has been discovered, classified and secured when required. (Manual) Description: Amazon S3 buckets can contain sensitive data, that for security purposes should be discovered, monitored, classified and protected. Macie along with other 3rd party tools can automatically provide an inventory of Amazon S3 buckets. Rationale: Using a Cloud service or 3rd Party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Impact: There is a cost associated with using Amazon Macie. There is also typically a cost associated with 3rd Party tools that perform similar processes and protection. Audit: Perform the following steps to determine if Macie is running: From Console: 1. Login to the Macie console at https://console.aws.amazon.com/macie/ 2. In the left hand pane click on By job under findings. 3. Confirm that you have a Job setup for your S3 Buckets When you log into the Macie console if you aren't taken to the summary page and you don't have a job setup and running then refer to the remediation procedure below. If you are using a 3rd Party tool to manage and protect your s3 data you meet this recommendation. Remediation: Perform the steps below to enable and configure Amazon Macie From Console: 1. Log on to the Macie console at https://console.aws.amazon.com/macie/ 2. Click Get started. 3. Click Enable Macie. Setup a repository for sensitive data discovery results 1. In the Left pane, under Settings, click Discovery results. 2. Make sure Create bucket is selected. 3. Create a bucket, enter a name for the bucket. The name must be unique across all S3 buckets. 
In addition, the name must start with a lowercase letter or a number. 4. Click on Advanced. 5. Block all public access, make sure Yes is selected. 6. KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric, customer master key (CMK) that's in the same Region as the S3 bucket. 7. Click on Save Create a job to discover sensitive data 1. In the left pane, click S3 buckets. Macie displays a list of all the S3 buckets for your account. 2. Select the check box for each bucket that you want Macie to analyze as part of the job 3. Click Create job. 4. Click Quick create. 5. For the Name and description step, enter a name and, optionally, a description of the job. 6. Then click Next. 7. For the Review and create step, click Submit. Review your findings 1. In the left pane, click Findings. 2. To view the details of a specific finding, choose any field other than the check box for the finding. If you are using a 3rd Party tool to manage and protect your s3 data, follow the Vendor documentation for implementing and configuring that tool. 2.1.3]
    Business Processes Preventive
  • Monitoring and measurement
    1
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular TYPE CLASS
    Enable and configure logging on network access controls in accordance with organizational standards. CC ID 01963
    [Ensure Network Access Control Lists (NACL) changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. NACLs are used as a stateless packet filter to control ingress and egress traffic for subnets within a VPC. It is recommended that a metric filter and alarm be established for changes made to NACLs. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring changes to NACLs will help ensure that AWS resources and services are not unintentionally exposed. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From value associated with CloudWatchLogsLogGroupArn note Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure Identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. 
Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }" 4. Note the value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn at least one subscription should have "SubscriptionArn" with valid aws ARN. Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on filter pattern provided which checks for NACL changes and the taken from audit step 1. aws logs put-metric-filter --log-group-name --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. 
Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn --protocol --notification-endpoint Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions 4.11]
    Configuration Preventive
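The audit's metric filter pattern keys on six NACL-related eventName values. That predicate can be approximated locally as a minimal sketch; the function name and dict shape are ours, and this is not AWS's filter-pattern engine:

```python
# The six CloudTrail eventNames named in the NACL metric filter pattern above.
NACL_EVENT_NAMES = {
    "CreateNetworkAcl",
    "CreateNetworkAclEntry",
    "DeleteNetworkAcl",
    "DeleteNetworkAclEntry",
    "ReplaceNetworkAclEntry",
    "ReplaceNetworkAclAssociation",
}

def matches_nacl_filter(event: dict) -> bool:
    """Approximate the filter pattern: match when the CloudTrail event's
    eventName is one of the six NACL API calls."""
    return event.get("eventName") in NACL_EVENT_NAMES
```

Such a helper is useful for testing sample CloudTrail records before committing to a metric filter.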
  • Operational management
    3
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular TYPE CLASS
    Establish, implement, and maintain information security procedures. CC ID 12006
    [Ensure all data in Amazon S3 has been discovered, classified and secured when required. (Manual) Description: Amazon S3 buckets can contain sensitive data, that for security purposes should be discovered, monitored, classified and protected. Macie along with other 3rd party tools can automatically provide an inventory of Amazon S3 buckets. Rationale: Using a Cloud service or 3rd Party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Impact: There is a cost associated with using Amazon Macie. There is also typically a cost associated with 3rd Party tools that perform similar processes and protection. Audit: Perform the following steps to determine if Macie is running: From Console: 1. Login to the Macie console at https://console.aws.amazon.com/macie/ 2. In the left hand pane click on By job under findings. 3. Confirm that you have a Job setup for your S3 Buckets When you log into the Macie console if you aren't taken to the summary page and you don't have a job setup and running then refer to the remediation procedure below. If you are using a 3rd Party tool to manage and protect your s3 data you meet this recommendation. Remediation: Perform the steps below to enable and configure Amazon Macie From Console: 1. Log on to the Macie console at https://console.aws.amazon.com/macie/ 2. Click Get started. 3. Click Enable Macie. Setup a repository for sensitive data discovery results 1. In the Left pane, under Settings, click Discovery results. 2. Make sure Create bucket is selected. 3. Create a bucket, enter a name for the bucket. The name must be unique across all S3 buckets. 
In addition, the name must start with a lowercase letter or a number. 4. Click on Advanced. 5. Block all public access, make sure Yes is selected. 6. KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric, customer master key (CMK) that's in the same Region as the S3 bucket. 7. Click on Save Create a job to discover sensitive data 1. In the left pane, click S3 buckets. Macie displays a list of all the S3 buckets for your account. 2. Select the check box for each bucket that you want Macie to analyze as part of the job 3. Click Create job. 4. Click Quick create. 5. For the Name and description step, enter a name and, optionally, a description of the job. 6. Then click Next. 7. For the Review and create step, click Submit. Review your findings 1. In the left pane, click Findings. 2. To view the details of a specific finding, choose any field other than the check box for the finding. If you are using a 3rd Party tool to manage and protect your s3 data, follow the Vendor documentation for implementing and configuring that tool. 2.1.3]
    Business Processes Preventive
    Disseminate and communicate the information security procedures to all interested personnel and affected parties. CC ID 16303 Communicate Preventive
    Document the roles and responsibilities for all activities that protect restricted data in the information security procedures. CC ID 12304 Establish/Maintain Documentation Preventive
  • System hardening through configuration management
    172
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular TYPE CLASS
    System hardening through configuration management CC ID 00860 IT Impact Zone IT Impact Zone
    Establish, implement, and maintain system hardening procedures. CC ID 12001 Establish/Maintain Documentation Preventive
    Use the latest approved version of all assets. CC ID 00897
    [{Instance Metadata Service} Ensure that EC2 Metadata Service only allows IMDSv2 (Automated) Description: When enabling the Metadata Service on AWS EC2 instances, users have the option of using either Instance Metadata Service Version 1 (IMDSv1; a request/response method) or Instance Metadata Service Version 2 (IMDSv2; a session-oriented method). Rationale: Instance metadata is data about your instance that you can use to configure or manage the running instance. Instance metadata is divided into categories, for example, host name, events, and security groups. When enabling the Metadata Service on AWS EC2 instances, users have the option of using either Instance Metadata Service Version 1 (IMDSv1; a request/response method) or Instance Metadata Service Version 2 (IMDSv2; a session-oriented method). With IMDSv2, every request is now protected by session authentication. A session begins and ends a series of requests that software running on an EC2 instance uses to access the locally-stored EC2 instance metadata and credentials. Allowing Version 1 of the service may open EC2 instances to Server-Side Request Forgery (SSRF) attacks, so Amazon recommends utilizing Version 2 for better instance security. Audit: From Console: 1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at https://console.aws.amazon.com/ec2/. 2. In the left navigation panel, under the INSTANCES section, choose Instances. 3. Select the EC2 instance that you want to examine. 4. Check for the IMDSv2 status, and ensure that it is set to Required. From Command Line: 1. Run the describe-instances command using appropriate filtering to list the IDs of all the existing EC2 instances currently available in the selected region: aws ec2 describe-instances --region --output table --query "Reservations[*].Instances[*].InstanceId" 2. The command output should return a table with the requested instance IDs. 3. 
Now run the describe-instances command using an instance ID returned at the previous step and custom filtering to determine whether the selected instance has IMDSv2: aws ec2 describe-instances --region --instance-ids --query "Reservations[*].Instances[*].MetadataOptions" --output table 4. Ensure for all ec2 instances HttpTokens is set to required and State is set to applied. 5. Repeat steps no. 3 and 4 to verify other EC2 instances provisioned within the current region. 6. Repeat steps no. 1 – 5 to perform the audit process for other AWS regions. Remediation: From Console: 1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at https://console.aws.amazon.com/ec2/. 2. In the left navigation panel, under the INSTANCES section, choose Instances. 3. Select the EC2 instance that you want to examine. 4. Choose Actions > Instance Settings > Modify instance metadata options. 5. Ensure Instance metadata service is set to Enable and set IMDSv2 to Required. 6. Repeat steps no. 1 – 5 to perform the remediation process for other EC2 Instances in all applicable AWS region(s). From Command Line: 1. Run the describe-instances command using appropriate filtering to list the IDs of all the existing EC2 instances currently available in the selected region: aws ec2 describe-instances --region --output table --query "Reservations[*].Instances[*].InstanceId" 2. The command output should return a table with the requested instance IDs. 3. Now run the modify-instance-metadata-options command using an instance ID returned at the previous step to update the Instance Metadata Version: aws ec2 modify-instance-metadata-options --instance-id --http-tokens required --region 4. Repeat steps no. 1 – 3 to perform the remediation process for other EC2 Instances in the same AWS region. 5. Change the region by updating --region and repeat the entire process for other regions. 5.6]
    Technical Security Preventive
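Step 4 of the command-line audit checks two fields of the MetadataOptions structure returned by describe-instances. That check can be sketched as a small helper (the function name is ours; the field names and expected values follow the audit text):

```python
def imdsv2_enforced(metadata_options: dict) -> bool:
    """True when an instance's MetadataOptions require IMDSv2, per the
    audit step: HttpTokens must be "required" and State "applied"."""
    return (
        metadata_options.get("HttpTokens") == "required"
        and metadata_options.get("State") == "applied"
    )
```

An instance with HttpTokens set to "optional" (IMDSv1 still allowed) fails this check and falls through to the remediation.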
    Include risk information when communicating critical security updates. CC ID 14948 Communicate Preventive
    Configure Least Functionality and Least Privilege settings to organizational standards. CC ID 07599 Configuration Preventive
    Configure "Block public access (bucket settings)" to organizational standards. CC ID 15444
    [Ensure that S3 Buckets are configured with 'Block public access (bucket settings)' (Automated) Description: Amazon S3 provides Block public access (bucket settings) and Block public access (account settings) to help you manage public access to Amazon S3 resources. By default, S3 buckets and objects are created with public access disabled. However, an IAM principal with sufficient S3 permissions can enable public access at the bucket and/or object level. While enabled, Block public access (bucket settings) prevents an individual bucket, and its contained objects, from becoming publicly accessible. Similarly, Block public access (account settings) prevents all buckets, and contained objects, from becoming publicly accessible across the entire account. Rationale: Amazon S3 Block public access (bucket settings) prevents the accidental or malicious public exposure of data contained within the respective bucket(s). Amazon S3 Block public access (account settings) prevents the accidental or malicious public exposure of data contained within all buckets of the respective AWS account. Whether blocking public access to all or some buckets is an organizational decision that should be based on data sensitivity, least privilege, and use case. Impact: When you apply Block Public Access settings to an account, the settings apply to all AWS Regions globally. The settings might not take effect in all Regions immediately or simultaneously, but they eventually propagate to all Regions. Audit: If utilizing Block Public Access (bucket settings) From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Edit public access settings'. 4. Ensure that block public access settings are set appropriately for this bucket 5. Repeat for all the buckets in your AWS account. From Command Line: 1. List all of the S3 Buckets aws s3 ls 2. 
Find the public access setting on that bucket aws s3api get-public-access-block --bucket Output if Block Public access is enabled: { "PublicAccessBlockConfiguration": { "BlockPublicAcls": true, "IgnorePublicAcls": true, "BlockPublicPolicy": true, "RestrictPublicBuckets": true } } If the output reads false for the separate configuration settings then proceed to the remediation. If utilizing Block Public Access (account settings) From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Choose Block public access (account settings) 3. Ensure that block public access settings are set appropriately for your AWS account. From Command Line: To check Public access settings for this account status, run the following command, aws s3control get-public-access-block --account-id --region Output if Block Public access is enabled: { "PublicAccessBlockConfiguration": { "IgnorePublicAcls": true, "BlockPublicPolicy": true, "BlockPublicAcls": true, "RestrictPublicBuckets": true } } If the output reads false for the separate configuration settings then proceed to the remediation. Remediation: If utilizing Block Public Access (bucket settings) From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Edit public access settings'. 4. Click 'Block all public access' 5. Repeat for all the buckets in your AWS account that contain sensitive data. From Command Line: 1. List all of the S3 Buckets aws s3 ls 2. Set the Block Public Access to true on that bucket aws s3api put-public-access-block --bucket --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true" If utilizing Block Public Access (account settings) From Console: If the output reads true for the separate configuration settings then it is set on the account. 1. 
Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Choose Block Public Access (account settings) 3. Choose Edit to change the block public access settings for all the buckets in your AWS account 4. Choose the settings you want to change, and then choose Save. For details about each setting, pause on the i icons. 5. When you're asked for confirmation, enter confirm. Then Click Confirm to save your changes. From Command Line: To set Block Public access settings for this account, run the following command: aws s3control put-public-access-block --public-access-block-configuration BlockPublicAcls=true, IgnorePublicAcls=true, BlockPublicPolicy=true, RestrictPublicBuckets=true --account-id 2.1.4]
    Configuration Preventive
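The audit output above is compliant only when all four Block Public Access flags read true. Evaluating a parsed get-public-access-block response can be sketched as follows (the helper name is ours; the key names match the audit output):

```python
# The four Block Public Access flags shown in the audit output.
REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def public_access_fully_blocked(output: dict) -> bool:
    """True only when every Block Public Access flag in a parsed
    get-public-access-block response is enabled."""
    config = output.get("PublicAccessBlockConfiguration", {})
    return all(config.get(flag) is True for flag in REQUIRED_FLAGS)
```

If any flag is false or missing, the check fails and the remediation steps apply.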
    Configure S3 Bucket Policies to organizational standards. CC ID 15431
    [Ensure S3 Bucket Policy is set to deny HTTP requests (Automated) Description: At the Amazon S3 bucket level, you can configure permissions through a bucket policy, making the objects accessible only through HTTPS. Rationale: By default, Amazon S3 allows both HTTP and HTTPS requests. To allow access to Amazon S3 objects through HTTPS only, you also have to explicitly deny access to HTTP requests. Bucket policies that allow HTTPS requests without explicitly denying HTTP requests will not comply with this recommendation. Audit: To allow access over HTTPS you can use a condition that checks for the key "aws:SecureTransport": "true". This means that the request is sent through HTTPS but that HTTP can still be used. So to make sure you do not allow HTTP access, confirm that there is a bucket policy that explicitly denies access for HTTP requests and that it contains the key "aws:SecureTransport": "false". From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the check box next to the bucket. 3. Click on 'Permissions', then click on Bucket Policy. 4. Ensure that a policy is listed that matches: '{ "Sid": "<policy_id>", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::<bucket_name>/*", "Condition": { "Bool": { "aws:SecureTransport": "false" } } }' and will be specific to your account 5. Repeat for all the buckets in your AWS account. From Command Line: 1. List all of the S3 buckets: aws s3 ls 2. Using the list of buckets, run this command on each of them: aws s3api get-bucket-policy --bucket <bucket_name> | grep aws:SecureTransport NOTE: If an error is thrown by the CLI, it means no policy has been configured for the specified S3 bucket, and by default it is allowing both HTTP and HTTPS requests. 3. Confirm that aws:SecureTransport is set to false: "aws:SecureTransport": "false" 4. Confirm that the policy line has Effect set to Deny: "Effect": "Deny" Remediation: From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the check box next to the bucket. 3. Click on 'Permissions'. 4. Click 'Bucket Policy' 5. Add this to the existing policy, filling in the required information: { "Sid": "<policy_id>", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::<bucket_name>/*", "Condition": { "Bool": { "aws:SecureTransport": "false" } } } 6. Save 7. Repeat for all the buckets in your AWS account that contain sensitive data. From Console using AWS Policy Generator: 1. Repeat steps 1-4 above. 2. Click on Policy Generator at the bottom of the Bucket Policy Editor 3. Select Policy Type S3 Bucket Policy 4. Add Statements • Effect = Deny • Principal = * • AWS Service = Amazon S3 • Actions = * • Amazon Resource Name = <bucket_arn> 5. Generate Policy 6. Copy the text and add it to the Bucket Policy. From Command Line: 1. Export the bucket policy to a json file: aws s3api get-bucket-policy --bucket <bucket_name> --query Policy --output text > policy.json 2. Modify the policy.json file by adding in this statement: { "Sid": "<policy_id>", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::<bucket_name>/*", "Condition": { "Bool": { "aws:SecureTransport": "false" } } } 3. Apply this modified policy back to the S3 bucket: aws s3api put-bucket-policy --bucket <bucket_name> --policy file://policy.json Default Value: Both HTTP and HTTPS requests are allowed 2.1.1]
    Configuration Preventive
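    The deny-HTTP check above can be sketched offline. The Python snippet below is a minimal illustration (not part of the benchmark): it builds the Deny statement from the remediation step and scans an exported policy document for it. The function names and the Sid value are invented for the example.

```python
def deny_http_statement(bucket_name):
    """Build the Deny-HTTP statement from the remediation step.
    bucket_name stands in for the <bucket_name> placeholder."""
    return {
        "Sid": "DenyHTTP",  # illustrative Sid; the benchmark leaves it to you
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": f"arn:aws:s3:::{bucket_name}/*",
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }

def policy_denies_http(policy):
    """Mirror the audit check: some statement must have Effect=Deny
    with the aws:SecureTransport=false condition."""
    for stmt in policy.get("Statement", []):
        cond = stmt.get("Condition", {}).get("Bool", {})
        if stmt.get("Effect") == "Deny" and cond.get("aws:SecureTransport") == "false":
            return True
    return False

policy = {"Version": "2012-10-17",
          "Statement": [deny_http_statement("example-bucket")]}
print(policy_denies_http(policy))             # True
print(policy_denies_http({"Statement": []}))  # False
```

    A policy that merely allows HTTPS (without the explicit Deny) would return False here, matching the rationale's warning.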
    Establish, implement, and maintain authenticators. CC ID 15305
    [{not used} Ensure credentials unused for 45 days or greater are disabled (Automated) Description: AWS IAM users can access AWS resources using different types of credentials, such as passwords or access keys. It is recommended that all credentials that have been unused for 45 or more days be deactivated or removed. Rationale: Disabling or removing unnecessary credentials will reduce the window of opportunity for credentials associated with a compromised or abandoned account to be used. Audit: Perform the following to determine if unused credentials exist: From Console: 1. Login to the AWS Management Console 2. Click Services 3. Click IAM 4. Click on Users 5. Click the Settings (gear) icon. 6. Select Console last sign-in, Access key last used, and Access Key Id 7. Click on Close 8. Check and ensure that Console last sign-in is less than 45 days ago. Note - Never means the user has never logged in. 9. Check and ensure that Access key age is less than 45 days and that Access key last used does not say None If the user hasn't signed into the Console in the last 45 days or Access keys are over 45 days old, refer to the remediation. From Command Line: Download Credential Report: 1. Run the following commands: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,5,6,9,10,11,14,15,16 | grep -v '^<root_account>' Ensure unused credentials do not exist: 2. For each user having password_enabled set to TRUE, ensure password_last_used_date is less than 45 days ago. • When password_enabled is set to TRUE and password_last_used is set to No_Information, ensure password_last_changed is less than 45 days ago. 3. For each user having access_key_1_active or access_key_2_active set to TRUE, ensure the corresponding access_key_n_last_used_date is less than 45 days ago. • When a user has access_key_x_active (where x is 1 or 2) set to TRUE and the corresponding access_key_x_last_used_date is set to N/A, ensure access_key_x_last_rotated is less than 45 days ago. Remediation: From Console: Perform the following to manage Unused Password (IAM user console access) 1. Login to the AWS Management Console: 2. Click Services 3. Click IAM 4. Click on Users 5. Click on Security Credentials 6. Select user whose Console last sign-in is greater than 45 days 7. Click Security credentials 8. In section Sign-in credentials, Console password, click Manage 9. Under Console Access select Disable 10. Click Apply Perform the following to deactivate Access Keys: 1. Login to the AWS Management Console: 2. Click Services 3. Click IAM 4. Click on Users 5. Click on Security Credentials 6. Select any access keys that are over 45 days old and that have been used and • Click on Make Inactive 7. Select any access keys that are over 45 days old and that have not been used and • Click the X to Delete 1.12]
    Technical Security Preventive
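    Audit step 2 above (stale console passwords) can be approximated against a downloaded credential report. The sketch below is an assumption-laden illustration: it parses a credential-report CSV using the real report's field names, but the sample rows and the helper name are invented.

```python
import csv
import io
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=45)

def stale_console_users(report_csv, now=None):
    """Return users whose console password is enabled but unused
    (or, when never used, unchanged) for 45+ days."""
    now = now or datetime.now(timezone.utc)
    stale = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        if row["password_enabled"] != "true":
            continue
        last = row["password_last_used"]
        if last.lower() in ("no_information", "n/a"):
            # never signed in: fall back to when the password was set
            last = row["password_last_changed"]
        if now - datetime.fromisoformat(last) >= MAX_AGE:
            stale.append(row["user"])
    return stale

report = """user,password_enabled,password_last_used,password_last_changed
alice,true,2024-01-01T00:00:00+00:00,2023-12-01T00:00:00+00:00
bob,true,no_information,2024-05-20T00:00:00+00:00
carol,false,N/A,N/A
"""
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(stale_console_users(report, now))  # ['alice']
```

    The same pattern extends to the access-key columns (step 3) by swapping in the access_key_x_last_used_date and access_key_x_last_rotated fields.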
    Establish, implement, and maintain an authenticator standard. CC ID 01702 Establish/Maintain Documentation Preventive
    Disallow personal data in authenticators. CC ID 13864 Technical Security Preventive
    Establish, implement, and maintain an authenticator management system. CC ID 12031 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a repository of authenticators. CC ID 16372 Data and Information Management Preventive
    Establish, implement, and maintain authenticator procedures. CC ID 12002
    [Ensure security questions are registered in the AWS account (Manual) Description: The AWS support portal allows account owners to establish security questions that can be used to authenticate individuals calling AWS customer service for support. It is recommended that security questions be established. Rationale: When creating a new AWS account, a default super user is automatically created. This account is referred to as the 'root user' or 'root' account. It is recommended that the use of this account be limited and highly controlled. During events in which the 'root' password is no longer accessible or the MFA token associated with 'root' is lost/destroyed, it is possible, through authentication using secret questions and associated answers, to recover 'root' user login access. Audit: From Console: 1. Login to the AWS account as the 'root' user 2. On the top right you will see the <root account name> 3. Click on the <root account name> 4. From the drop-down menu click My Account 5. In the Configure Security Challenge Questions section on the Personal Information page, ensure three security challenge questions are configured. 6. Click Save questions. Remediation: From Console: 1. Login to the AWS Account as the 'root' user 2. Click on the <root account name> from the top right of the console 3. From the drop-down menu click My Account 4. Scroll down to the Configure Security Questions section 5. Click on Edit 6. Click on each Question • From the drop-down select an appropriate question • Click on the Answer section • Enter an appropriate answer • Follow this process for all 3 questions 7. Click Update when complete 8. Save the questions and answers and place them in a secure physical location 1.3
    Do not setup access keys during initial user setup for all IAM users that have a console password (Manual) Description: AWS console defaults to no check boxes selected when creating a new IAM user. When creating the IAM User credentials you have to determine what type of access they require. Programmatic access: The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user. AWS Management Console access: If the user needs to access the AWS Management Console, create a password for the user. Rationale: Requiring the additional steps be taken by the user for programmatic access after their profile has been created will give a stronger indication of intent that access keys are [a] necessary for their work and [b] once the access key is established on an account that the keys may be in use somewhere in the organization. Note: Even if it is known the user will need access keys, require them to create the keys themselves or put in a support ticket to have them created as a separate step from user creation. Audit: Perform the following to determine if access keys were created upon user creation and are being used and rotated as prescribed: From Console: 1. Login to the AWS Management Console 2. Click Services 3. Click IAM 4. Click on a User where column Password age and Access key age is not set to None 5. Click on Security credentials Tab 6. Compare the user Creation time to the Access Key Created date. 7. For any that match, the key was created during initial user setup. • Keys that were created at the same time as the user profile and do not have a last used date should be deleted. Refer to the remediation below. From Command Line: 1. 
    Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their access keys utilization: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,9,11,14,16 2. The output of this command will produce a table similar to the following: user,password_enabled,access_key_1_active,access_key_1_last_used_date,access_key_2_active,access_key_2_last_used_date elise,false,true,2015-04-16T15:14:00+00:00,false,N/A brandon,true,true,N/A,false,N/A rakesh,false,false,N/A,false,N/A helene,false,true,2015-11-18T17:47:00+00:00,false,N/A paras,true,true,2016-08-28T12:04:00+00:00,true,2016-03-04T10:11:00+00:00 anitha,true,true,2016-06-08T11:43:00+00:00,true,N/A 3. For any user having password_enabled set to true AND access_key_last_used_date set to N/A, refer to the remediation below. Remediation: Perform the following to delete access keys that do not pass the audit: From Console: 1. Login to the AWS Management Console: 2. Click Services 3. Click IAM 4. Click on Users 5. Click on Security Credentials 6. As an Administrator • Click on the X (Delete) for keys that were created at the same time as the user profile but have not been used. 7. As an IAM User • Click on the X (Delete) for keys that were created at the same time as the user profile but have not been used. From Command Line: aws iam delete-access-key --access-key-id <access-key-id> --user-name <user-name> 1.11
    {be active} Ensure there is only one active access key available for any single IAM user (Automated) Description: Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). Rationale: Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API. One of the best ways to protect your account is to not allow users to have multiple access keys. Audit: From Console: 1. Sign in to the AWS Management Console and navigate to the IAM dashboard at https://console.aws.amazon.com/iam/. 2. In the left navigation panel, choose Users. 3. Click on the IAM user name that you want to examine. 4. On the IAM user configuration page, select the Security Credentials tab. 5. Under the Access Keys section, in the Status column, check the current status for each access key associated with the IAM user. If the selected IAM user has more than one access key activated, then the user's access configuration does not adhere to security best practices and the risk of accidental exposures increases. • Repeat steps no. 3 – 5 for each IAM user in your AWS account. From Command Line: 1. Run the list-users command to list all IAM users within your account: aws iam list-users --query "Users[*].UserName" The command output should return an array that contains all your IAM user names. 2. Run the list-access-keys command using the IAM user name list to return the current status of each access key associated with the selected IAM user: aws iam list-access-keys --user-name <user-name> The command output should expose the metadata ("UserName", "AccessKeyId", "Status", "CreateDate") for each access key on that user account. 3. Check the Status property value for each key returned to determine each key's current state.
    If the Status property value for more than one IAM access key is set to Active, the user access configuration does not adhere to this recommendation; refer to the remediation below. • Repeat steps no. 2 and 3 for each IAM user in your AWS account. Remediation: From Console: 1. Sign in to the AWS Management Console and navigate to the IAM dashboard at https://console.aws.amazon.com/iam/. 2. In the left navigation panel, choose Users. 3. Click on the IAM user name that you want to examine. 4. On the IAM user configuration page, select the Security Credentials tab. 5. In the Access Keys section, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working. 6. In the same Access Keys section, identify your non-operational access keys (other than the chosen one) and deactivate each by clicking the Make Inactive link. 7. If you receive the Change Key Status confirmation box, click Deactivate to switch off the selected key. 8. Repeat steps no. 3 – 7 for each IAM user in your AWS account. From Command Line: 1. Using the IAM user and access key information provided in the Audit CLI, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working. 2. Run the update-access-key command below using the IAM user name and the non-operational access key IDs to deactivate the unnecessary key(s). Refer to the Audit section to identify the unnecessary access key ID for the selected IAM user. Note - the command does not return any output: aws iam update-access-key --access-key-id <access-key-id> --status Inactive --user-name <user-name> 3.
    To confirm that the selected access key pair has been successfully deactivated, run the list-access-keys audit command again for that IAM user: aws iam list-access-keys --user-name <user-name> • The command output should expose the metadata for each access key associated with the IAM user. If the non-operational key pair(s) Status is set to Inactive, the key has been successfully deactivated and the IAM user access configuration now adheres to this recommendation. 4. Repeat steps no. 1 – 3 for each IAM user in your AWS account. 1.13]
    Establish/Maintain Documentation Preventive
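    The one-active-key audit above (steps 2-3 of the command-line procedure) reduces to counting Active entries per user. This sketch assumes you have already collected list-access-keys output per user; the structure mimics the AccessKeyMetadata entries that command returns, but the user names and key IDs are invented.

```python
def users_with_multiple_active_keys(key_listings):
    """key_listings maps user name -> list of access-key metadata dicts
    (as from `aws iam list-access-keys`). Flags users with >1 Active key."""
    flagged = []
    for user, keys in key_listings.items():
        active = [k for k in keys if k["Status"] == "Active"]
        if len(active) > 1:
            flagged.append(user)
    return flagged

listings = {
    "dev-user": [{"AccessKeyId": "AKIAEXAMPLE1", "Status": "Active"},
                 {"AccessKeyId": "AKIAEXAMPLE2", "Status": "Active"}],
    "ops-user": [{"AccessKeyId": "AKIAEXAMPLE3", "Status": "Inactive"},
                 {"AccessKeyId": "AKIAEXAMPLE4", "Status": "Active"}],
}
print(users_with_multiple_active_keys(listings))  # ['dev-user']
```

    Each flagged user then goes through the remediation steps: keep one working key, make the others Inactive.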
    Restrict access to authentication files to authorized personnel, as necessary. CC ID 12127 Technical Security Preventive
    Configure authenticator activation codes in accordance with organizational standards. CC ID 17032 Configuration Preventive
    Configure authenticators to comply with organizational standards. CC ID 06412 Configuration Preventive
    Configure the system to require new users to change their authenticator on first use. CC ID 05268 Configuration Preventive
    Configure authenticators so that group authenticators or shared authenticators are prohibited. CC ID 00519 Configuration Preventive
    Change the authenticator for shared accounts when the group membership changes. CC ID 14249 Business Processes Corrective
    Configure the system to prevent unencrypted authenticator use. CC ID 04457 Configuration Preventive
    Disable store passwords using reversible encryption. CC ID 01708 Configuration Preventive
    Configure the system to encrypt authenticators. CC ID 06735 Configuration Preventive
    Configure the system to mask authenticators. CC ID 02037 Configuration Preventive
    Configure the authenticator policy to ban the use of usernames or user identifiers in authenticators. CC ID 05992 Configuration Preventive
    Configure the "minimum number of digits required for new passwords" setting to organizational standards. CC ID 08717 Establish/Maintain Documentation Preventive
    Configure the "minimum number of upper case characters required for new passwords" setting to organizational standards. CC ID 08718 Establish/Maintain Documentation Preventive
    Configure the system to refrain from specifying the type of information used as password hints. CC ID 13783 Configuration Preventive
    Configure the "minimum number of lower case characters required for new passwords" setting to organizational standards. CC ID 08719 Establish/Maintain Documentation Preventive
    Disable machine account password changes. CC ID 01737 Configuration Preventive
    Configure the "minimum number of special characters required for new passwords" setting to organizational standards. CC ID 08720 Establish/Maintain Documentation Preventive
    Configure the "require new passwords to differ from old ones by the appropriate minimum number of characters" setting to organizational standards. CC ID 08722 Establish/Maintain Documentation Preventive
    Configure the "password reuse" setting to organizational standards. CC ID 08724
    [Ensure IAM password policy prevents password reuse (Automated) Description: IAM password policies can prevent the reuse of a given password by the same user. It is recommended that the password policy prevent the reuse of passwords. Rationale: Preventing password reuse increases account resiliency against brute force login attempts. Audit: Perform the following to ensure the password policy is configured as prescribed: From Console: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Ensure "Prevent password reuse" is checked 5. Ensure "Number of passwords to remember" is set to 24 From Command Line: aws iam get-account-password-policy Ensure the output of the above command includes "PasswordReusePrevention": 24 Remediation: Perform the following to set the password policy as prescribed: From Console: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Check "Prevent password reuse" 5. Set "Number of passwords to remember" to 24 From Command Line: aws iam update-account-password-policy --password-reuse-prevention 24 Note: All commands starting with "aws iam update-account-password-policy" can be combined into a single command. 1.9]
    Establish/Maintain Documentation Preventive
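    The command-line audit above can be checked programmatically against the JSON that get-account-password-policy returns. The snippet below is a minimal sketch using that command's documented PasswordPolicy shape; the sample documents themselves are invented.

```python
import json

def reuse_policy_compliant(policy_json):
    """True if PasswordReusePrevention meets the benchmark's
    'remember 24 passwords' requirement."""
    policy = json.loads(policy_json).get("PasswordPolicy", {})
    return policy.get("PasswordReusePrevention", 0) >= 24

good = '{"PasswordPolicy": {"MinimumPasswordLength": 14, "PasswordReusePrevention": 24}}'
bad = '{"PasswordPolicy": {"MinimumPasswordLength": 14}}'
print(reuse_policy_compliant(good))  # True
print(reuse_policy_compliant(bad))   # False
```

    Note the default of 0 when the field is absent: an account with no reuse-prevention setting fails the check, which matches the console audit (the "Prevent password reuse" box would be unchecked).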
    Configure the "Disable Remember Password" setting. CC ID 05270 Configuration Preventive
    Configure the "Minimum password age" to organizational standards. CC ID 01703 Configuration Preventive
    Configure the LILO/GRUB password. CC ID 01576 Configuration Preventive
    Configure the system to use Apple's Keychain Access to store passwords and certificates. CC ID 04481 Configuration Preventive
    Change the default password to Apple's Keychain. CC ID 04482 Configuration Preventive
    Configure Apple's Keychain items to ask for the Keychain password. CC ID 04483 Configuration Preventive
    Configure the Syskey Encryption Key and associated password. CC ID 05978 Configuration Preventive
    Configure the "Accounts: Limit local account use of blank passwords to console logon only" setting. CC ID 04505 Configuration Preventive
    Configure the "System cryptography: Force strong key protection for user keys stored in the computer" setting. CC ID 04534 Configuration Preventive
    Configure interactive logon for accounts that do not have assigned authenticators in accordance with organizational standards. CC ID 05267 Configuration Preventive
    Enable or disable remote connections from accounts with empty authenticators, as appropriate. CC ID 05269 Configuration Preventive
    Configure the "Send LanMan compatible password" setting. CC ID 05271 Configuration Preventive
    Configure the authenticator policy to ban or allow authenticators as words found in dictionaries, as appropriate. CC ID 05993 Configuration Preventive
    Configure the authenticator policy to ban or allow authenticators as proper names, as necessary. CC ID 17030 Configuration Preventive
    Set the most number of characters required for the BitLocker Startup PIN correctly. CC ID 06054 Configuration Preventive
    Set the default folder for BitLocker recovery passwords correctly. CC ID 06055 Configuration Preventive
    Notify affected parties to keep authenticators confidential. CC ID 06787 Behavior Preventive
    Discourage affected parties from recording authenticators. CC ID 06788 Behavior Preventive
    Ensure the root account is the first entry in password files. CC ID 16323 Data and Information Management Detective
    Configure the "shadow password for all accounts in /etc/passwd" setting to organizational standards. CC ID 08721 Establish/Maintain Documentation Preventive
    Configure the "password hashing algorithm" setting to organizational standards. CC ID 08723 Establish/Maintain Documentation Preventive
    Configure the "Disable password strength validation for Peer Grouping" setting to organizational standards. CC ID 10866 Configuration Preventive
    Configure the "Set the interval between synchronization retries for Password Synchronization" setting to organizational standards. CC ID 11185 Configuration Preventive
    Configure the "Set the number of synchronization retries for servers running Password Synchronization" setting to organizational standards. CC ID 11187 Configuration Preventive
    Configure the "Turn off password security in Input Panel" setting to organizational standards. CC ID 11296 Configuration Preventive
    Configure the "Turn on the Windows to NIS password synchronization for users that have been migrated to Active Directory" setting to organizational standards. CC ID 11355 Configuration Preventive
    Configure the authenticator display screen to organizational standards. CC ID 13794 Configuration Preventive
    Configure the authenticator field to disallow memorized secrets found in the memorized secret list. CC ID 13808 Configuration Preventive
    Configure the authenticator display screen to display the memorized secret as an option. CC ID 13806 Configuration Preventive
    Disseminate and communicate with the end user when a memorized secret entered into an authenticator field matches one found in the memorized secret list. CC ID 13807 Communicate Preventive
    Configure the look-up secret authenticator to dispose of memorized secrets after their use. CC ID 13817 Configuration Corrective
    Configure the memorized secret verifiers to refrain from allowing anonymous users to access memorized secret hints. CC ID 13823 Configuration Preventive
    Configure the system to allow paste functionality for the authenticator field. CC ID 13819 Configuration Preventive
    Configure the system to require successful authentication before an authenticator for a user account is changed. CC ID 13821 Configuration Preventive
    Protect authenticators or authentication factors from unauthorized modification and disclosure. CC ID 15317 Technical Security Preventive
    Obscure authentication information during the login process. CC ID 15316 Configuration Preventive
    Issue temporary authenticators, as necessary. CC ID 17062 Process or Activity Preventive
    Renew temporary authenticators, as necessary. CC ID 17061 Process or Activity Preventive
    Disable authenticators, as necessary. CC ID 17060 Process or Activity Preventive
    Change authenticators, as necessary. CC ID 15315
    [Ensure access keys are rotated every 90 days or less (Automated) Description: Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests that you make to AWS. AWS users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. It is recommended that all access keys be regularly rotated. Rationale: Rotating access keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used. Access keys should be rotated to ensure that data cannot be accessed with an old key which might have been lost, cracked, or stolen. Audit: Perform the following to determine if access keys are rotated as prescribed: From Console: 1. Go to Management Console (https://console.aws.amazon.com/iam) 2. Click on Users 3. Click the settings icon 4. Select Console last sign-in 5. Click Close 6. Ensure that Access key age is less than 90 days ago. Note: None in the Access key age means the user has not used the access key. From Command Line: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d The access_key_1_last_rotated and access_key_2_last_rotated fields in this file note the date and time, in ISO 8601 date-time format, when the user's access key was created or last changed. If the user does not have an active access key, the value in this field is N/A (not applicable). Remediation: Perform the following to rotate access keys: From Console: 1. Go to Management Console (https://console.aws.amazon.com/iam) 2. Click on Users 3. Click on Security Credentials 4. As an Administrator • Click on Make Inactive for keys that have not been rotated in 90 days 5. As an IAM User • Click on Make Inactive or Delete for keys which have not been rotated or used in 90 days 6. Click on Create Access Key 7. Update programmatic calls with the new Access Key credentials From Command Line: 1. While the first access key is still active, create a second access key, which is active by default. Run the following command: aws iam create-access-key At this point, the user has two active access keys. 2. Update all applications and tools to use the new access key. 3. Determine whether the first access key is still in use by using this command: aws iam get-access-key-last-used --access-key-id <access-key-id> 4. One approach is to wait several days and then check the old access key for any use before proceeding. Even if Step 3 indicates no use of the old key, it is recommended that you do not immediately delete the first access key. Instead, change the state of the first access key to Inactive using this command: aws iam update-access-key --access-key-id <access-key-id> --status Inactive 5. Use only the new access key to confirm that your applications are working. Any applications and tools that still use the original access key will stop working at this point because they no longer have access to AWS resources. If you find such an application or tool, you can switch its state back to Active to re-enable the first access key. Then return to Step 2 and update this application to use the new key. 6. After you wait some period of time to ensure that all applications and tools have been updated, you can delete the first access key with this command: aws iam delete-access-key --access-key-id <access-key-id> 1.14]
    Configuration Preventive
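    The 90-day check from the credential-report audit above can be sketched as a small age calculation over the access_key_n_last_rotated fields. The field names match the real report; the rows, timestamps, and helper name below are invented for illustration.

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)

def keys_needing_rotation(rows, now=None):
    """Return (user, field) pairs for keys last rotated more than
    90 days ago; N/A means no active key in that slot."""
    now = now or datetime.now(timezone.utc)
    overdue = []
    for row in rows:
        for slot in ("access_key_1_last_rotated", "access_key_2_last_rotated"):
            stamp = row.get(slot, "N/A")
            if stamp == "N/A":
                continue  # no active access key in this slot
            if now - datetime.fromisoformat(stamp) > MAX_KEY_AGE:
                overdue.append((row["user"], slot))
    return overdue

rows = [
    {"user": "alice", "access_key_1_last_rotated": "2024-01-01T00:00:00+00:00",
     "access_key_2_last_rotated": "N/A"},
    {"user": "bob", "access_key_1_last_rotated": "2024-05-01T00:00:00+00:00"},
]
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(keys_needing_rotation(rows, now))  # [('alice', 'access_key_1_last_rotated')]
```

    Each flagged key then follows the rotation sequence: create a replacement key, migrate callers, deactivate, verify, delete.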
    Implement safeguards to protect authenticators from unauthorized access. CC ID 15310 Technical Security Preventive
    Change all default authenticators. CC ID 15309 Configuration Preventive
    Configure user accounts. CC ID 07036 Configuration Preventive
    Configure accounts with administrative privilege. CC ID 07033
    [{does not exist} Ensure no 'root' user account access key exists (Automated) Description: The 'root' user account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account. It is recommended that all access keys associated with the 'root' user account be deleted. Rationale: Deleting access keys associated with the 'root' user account limits vectors by which the account can be compromised. Additionally, deleting the 'root' access keys encourages the creation and use of role-based accounts that are least privileged. Audit: Perform the following to determine if the 'root' user account has access keys: From Console: 1. Login to the AWS Management Console. 2. Click Services. 3. Click IAM. 4. Click on Credential Report. 5. This will download a .csv file which contains credential usage for all IAM users within an AWS Account - open this file. 6. For the <root_account> user, ensure the access_key_1_active and access_key_2_active fields are set to FALSE. From Command Line: Run the following command: aws iam get-account-summary | grep "AccountAccessKeysPresent" If no 'root' access keys exist, the output will show "AccountAccessKeysPresent": 0,. If the output shows a "1", then 'root' keys exist and should be deleted. Remediation: Perform the following to delete active 'root' user access keys. From Console: 1. Sign in to the AWS Management Console as 'root' and open the IAM console at https://console.aws.amazon.com/iam/. 2. Click on the <root account name> at the top right and select My Security Credentials from the drop-down list. 3. On the pop-out screen, click on Continue to Security Credentials. 4. Click on Access Keys (Access Key ID and Secret Access Key). 5. Under the Status column (if there are any keys which are active). 6. Click Delete (Note: Deleted keys cannot be recovered). Note: While a key can be made inactive, this inactive key will still show up in the CLI command from the audit procedure, and may lead to a key being falsely flagged as being non-compliant. 1.4]
    Configuration Preventive
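    The command-line audit above boils down to reading one counter from get-account-summary. The sketch below uses that command's documented SummaryMap output shape; the sample documents are invented.

```python
import json

def root_access_keys_present(summary_json):
    """True if AccountAccessKeysPresent is non-zero, i.e. the 'root'
    user still has access keys and the account fails this check."""
    summary = json.loads(summary_json)["SummaryMap"]
    return summary.get("AccountAccessKeysPresent", 0) != 0

compliant = '{"SummaryMap": {"AccountAccessKeysPresent": 0, "AccountMFAEnabled": 1}}'
violating = '{"SummaryMap": {"AccountAccessKeysPresent": 1, "AccountMFAEnabled": 1}}'
print(root_access_keys_present(compliant))  # False
print(root_access_keys_present(violating))  # True
```

    Per the note in the remediation, an Inactive root key still counts toward this counter, so deletion, not deactivation, is what clears the finding.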
    Employ multifactor authentication for accounts with administrative privilege. CC ID 12496
    [Ensure MFA is enabled for the 'root' user account (Automated) Description: The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device. Note: When virtual MFA is used for 'root' accounts, it is recommended that the device used is NOT a personal device, but rather a dedicated mobile device (tablet or phone) that is managed to be kept charged and secured independent of any individual personal devices. ("non-personal virtual MFA") This lessens the risks of losing access to the MFA due to device loss, device trade-in or if the individual owning the device is no longer employed at the company. Rationale: Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that emits a time-sensitive key and have knowledge of a credential. Audit: Perform the following to determine if the 'root' user account has MFA setup: From Console: 1. Login to the AWS Management Console 2. Click Services 3. Click IAM 4. Click on Credential Report 5. This will download a .csv file which contains credential usage for all IAM users within an AWS Account - open this file 6. For the user, ensure the mfa_active field is set to TRUE . From Command Line: 1. Run the following command: aws iam get-account-summary | grep "AccountMFAEnabled" 2. Ensure the AccountMFAEnabled property is set to 1 Remediation: Perform the following to establish MFA for the 'root' user account: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. Note: to manage MFA devices for the 'root' AWS account, you must use your 'root' account credentials to sign in to AWS. 
You cannot manage MFA devices for the 'root' account using other credentials. 2. Choose Dashboard , and under Security Status , expand Activate MFA on your root account. 3. Choose Activate MFA 4. In the wizard, choose A virtual MFA device and then choose Next Step. 5. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes. 6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications.) If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device). 7. Determine whether the MFA app supports QR codes, and then do one of the following: o Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code. o In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application. When you are finished, the virtual MFA device starts generating one-time passwords. In the Manage MFA Device wizard, in the Authentication Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the Authentication Code 2 box. Choose Assign Virtual MFA. 1.5
    Ensure hardware MFA is enabled for the 'root' user account (Manual) Description: The 'root' user account is the most privileged user in an AWS account. MFA adds an extra layer of protection on top of a user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password as well as for an authentication code from their AWS MFA device. For Level 2, it is recommended that the 'root' user account be protected with a hardware MFA. Rationale: A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA does not suffer the attack surface introduced by the mobile smartphone on which a virtual MFA resides. Note: Using hardware MFA for many AWS accounts may create a logistical device-management issue. If this is the case, consider applying this Level 2 recommendation selectively to the highest-security AWS accounts, with the Level 1 recommendation applied to the remaining accounts. Audit: Perform the following to determine if the 'root' user account has a hardware MFA setup: 1. Run the following command to determine if the 'root' account has MFA setup: aws iam get-account-summary | grep "AccountMFAEnabled" If the AccountMFAEnabled property is set to 1, the 'root' user account has MFA (virtual or hardware) enabled. If the AccountMFAEnabled property is set to 0, the account is not compliant with this recommendation. 2. If the AccountMFAEnabled property is set to 1, determine whether the 'root' account has hardware MFA enabled. Run the following command to list all virtual MFA devices: aws iam list-virtual-mfa-devices If the output contains one MFA with the following Serial Number, the MFA is virtual, not hardware, and the account is not compliant with this recommendation: "SerialNumber": "arn:aws:iam::<aws_account_number>:mfa/root-account-mfa-device" Remediation: Perform the following to establish a hardware MFA for the 'root' user account: 1. 
Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. Note: to manage MFA devices for the AWS 'root' user account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials. 2. Choose Dashboard , and under Security Status , expand Activate MFA on your root account. 3. Choose Activate MFA 4. In the wizard, choose A hardware MFA device and then choose Next Step. 5. In the Serial Number box, enter the serial number that is found on the back of the MFA device. 6. In the Authentication Code 1 box, enter the six-digit number displayed by the MFA device. You might need to press the button on the front of the device to display the number. 7. Wait 30 seconds while the device refreshes the code, and then enter the next six-digit number into the Authentication Code 2 box. You might need to press the button on the front of the device again to display the second number. 8. Choose Next Step. The MFA device is now associated with the AWS account. The next time you use your AWS account credentials to sign in, you must type a code from the hardware MFA device. Remediation for this recommendation is not available through AWS CLI. 1.6]
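The two root-MFA audits above (1.5 and 1.6) reduce to a small decision over the output of `aws iam get-account-summary` and `aws iam list-virtual-mfa-devices`. A minimal offline sketch of that logic, run against sample JSON (the function name and account number are illustrative, not part of the benchmark):

```python
import json

def root_mfa_status(account_summary: str, virtual_mfa_devices: str) -> str:
    """Classify root MFA posture from the two CLI outputs (JSON strings).

    Follows the benchmark's audit logic: AccountMFAEnabled == 1 means some
    MFA is active; a virtual device whose serial ends in
    "mfa/root-account-mfa-device" means the MFA is virtual, not hardware.
    Returns "none", "virtual", or "hardware".
    """
    summary = json.loads(account_summary)["SummaryMap"]
    if summary.get("AccountMFAEnabled", 0) != 1:
        return "none"
    devices = json.loads(virtual_mfa_devices).get("VirtualMFADevices", [])
    for device in devices:
        if device.get("SerialNumber", "").endswith("mfa/root-account-mfa-device"):
            return "virtual"
    return "hardware"

# Hypothetical sample outputs for illustration:
summary_json = '{"SummaryMap": {"AccountMFAEnabled": 1}}'
virtual_json = ('{"VirtualMFADevices": [{"SerialNumber": '
                '"arn:aws:iam::111122223333:mfa/root-account-mfa-device"}]}')
print(root_mfa_status(summary_json, virtual_json))  # virtual
```

"virtual" satisfies recommendation 1.5 but not the Level 2 hardware recommendation 1.6, which is why the two checks are distinct.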
    Technical Security Preventive
    Establish, implement, and maintain network parameter modification procedures. CC ID 01517 Establish/Maintain Documentation Preventive
    Configure routing tables to organizational standards. CC ID 15438
    [Ensure routing tables for VPC peering are "least access" (Manual) Description: Once a VPC peering connection is established, routing tables must be updated to establish any connections between the peered VPCs. These routes can be as specific as desired - even peering a VPC to only a single host on the other side of the connection. Rationale: Being highly selective in peering routing tables is a very effective way of minimizing the impact of a breach, as resources outside of these routes are inaccessible to the peered VPC. Audit: Review routing tables of peered VPCs for whether they route all subnets of each VPC and whether that is necessary to accomplish the intended purposes for peering the VPCs. From Command Line: 1. List all the route tables from a VPC and check if "GatewayId" is pointing to a VPC peering connection (e.g. pcx-1a2b3c4d) and if "DestinationCidrBlock" is as specific as desired. aws ec2 describe-route-tables --filter "Name=vpc-id,Values=<vpc_id>" --query "RouteTables[*].{RouteTableId:RouteTableId, VpcId:VpcId, Routes:Routes, AssociatedSubnets:Associations[*].SubnetId}" Remediation: Remove and add route table entries to ensure that the least number of subnets or hosts as is required to accomplish the purpose for peering are routable. From Command Line: 1. For each route table containing routes not compliant with your routing policy (which grant more than the desired "least access"), delete the non-compliant route: aws ec2 delete-route --route-table-id <route_table_id> --destination-cidr-block <non_compliant_destination_cidr> 2. Create a new compliant route: aws ec2 create-route --route-table-id <route_table_id> --destination-cidr-block <compliant_destination_cidr> --vpc-peering-connection-id <peering_connection_id> 5.5]
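The review above is manual, but the "as specific as desired" test can be sketched against `describe-route-tables` output: flag any route targeting a peering connection (pcx-*) whose destination block is broader than a locally chosen prefix. The /24 threshold, function name, and sample IDs below are assumptions for illustration, not CIS values:

```python
import ipaddress
import json

def broad_peering_routes(route_tables_json: str, min_prefix: int = 24):
    """Return (RouteTableId, cidr) pairs where a route points at a VPC
    peering connection and its destination CIDR is broader than
    /min_prefix. min_prefix encodes a local policy, not a CIS number."""
    findings = []
    for table in json.loads(route_tables_json)["RouteTables"]:
        for route in table.get("Routes", []):
            # The peering target may appear as VpcPeeringConnectionId or GatewayId.
            target = route.get("VpcPeeringConnectionId") or route.get("GatewayId", "")
            cidr = route.get("DestinationCidrBlock")
            if target.startswith("pcx-") and cidr:
                if ipaddress.ip_network(cidr).prefixlen < min_prefix:
                    findings.append((table["RouteTableId"], cidr))
    return findings

# Hypothetical sample: one broad /16 route and one tight /24 route.
sample = json.dumps({"RouteTables": [{
    "RouteTableId": "rtb-0abc",
    "Routes": [
        {"DestinationCidrBlock": "10.0.0.0/16", "VpcPeeringConnectionId": "pcx-1a2b3c4d"},
        {"DestinationCidrBlock": "10.1.2.0/24", "VpcPeeringConnectionId": "pcx-1a2b3c4d"},
    ]}]})
print(broad_peering_routes(sample))  # [('rtb-0abc', '10.0.0.0/16')]
```

Each flagged route is a candidate for the delete-route/create-route remediation pair above.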
    Configuration Preventive
    Configure Services settings to organizational standards. CC ID 07434 Configuration Preventive
    Configure AWS Config to organizational standards. CC ID 15440
    [Ensure AWS Config is enabled in all regions (Automated) Description: AWS Config is a web service that performs configuration management of supported AWS resources within your account and delivers log files to you. The recorded information includes the configuration item (AWS resource), relationships between configuration items (AWS resources), any configuration changes between resources. It is recommended AWS Config be enabled in all regions. Rationale: The AWS configuration item history captured by AWS Config enables security analysis, resource change tracking, and compliance auditing. Impact: It is recommended AWS Config be enabled in all regions. Audit: Process to evaluate AWS Config configuration per region From Console: 1. Sign in to the AWS Management Console and open the AWS Config console at https://console.aws.amazon.com/config/. 2. On the top right of the console select target Region. 3. If a Config recorder is enabled in this region, you should navigate to the Settings page from the navigation menu on the left hand side. If a Config recorder is not yet enabled in this region then you should select "Get Started". 4. Ensure "Record all resources supported in this region" is checked. 5. Ensure "Include global resources (e.g., AWS IAM resources)" is checked, unless it is enabled in another region (this is only required in one region) 6. Ensure the correct S3 bucket has been defined. 7. Ensure the correct SNS topic has been defined. 8. Repeat steps 2 to 7 for each region. From Command Line: 1. Run this command to show all AWS Config recorders and their properties: aws configservice describe-configuration-recorders 2. Evaluate the output to ensure that all recorders have a recordingGroup object which includes "allSupported": true. Additionally, ensure that at least one recorder has "includeGlobalResourceTypes": true Note: There is one more parameter "ResourceTypes" in recordingGroup object. 
We do not need to check this, because whenever we set "allSupported": true, AWS enforces resource types to be empty ("ResourceTypes":[]) Sample Output: { "ConfigurationRecorders": [ { "recordingGroup": { "allSupported": true, "resourceTypes": [], "includeGlobalResourceTypes": true }, "roleARN": "arn:aws:iam:::role/servicerole/", "name": "default" } ] } 3. Run this command to show the status for all AWS Config recorders: aws configservice describe-configuration-recorder-status 4. In the output, find recorders with name key matching the recorders that were evaluated in step 2. Ensure that they include "recording": true and "lastStatus": "SUCCESS" Remediation: To implement AWS Config configuration: From Console: 1. Select the region you want to focus on in the top right of the console 2. Click Services 3. Click Config 4. If a Config recorder is enabled in this region, you should navigate to the Settings page from the navigation menu on the left hand side. If a Config recorder is not yet enabled in this region then you should select "Get Started". 5. Select "Record all resources supported in this region" 6. Choose to include global resources (IAM resources) 7. Specify an S3 bucket in the same account or in another managed AWS account 8. Create an SNS Topic from the same AWS account or another managed AWS account From Command Line: 1. Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the AWS Config Service prerequisites. 2. Run this command to create a new configuration recorder: aws configservice put-configuration-recorder --configuration-recorder name=default,roleARN=arn:aws:iam::012345678912:role/myConfigRole --recording-group allSupported=true,includeGlobalResourceTypes=true 3. 
Create a delivery channel configuration file locally which specifies the channel attributes, populated from the prerequisites set up previously: { "name": "default", "s3BucketName": "my-config-bucket", "snsTopicARN": "arn:aws:sns:us-east-1:012345678912:my-config-notice", "configSnapshotDeliveryProperties": { "deliveryFrequency": "Twelve_Hours" } } 4. Run this command to create a new delivery channel, referencing the json configuration file made in the previous step: aws configservice put-delivery-channel --delivery-channel file://deliveryChannel.json 5. Start the configuration recorder by running the following command: aws configservice start-configuration-recorder --configuration-recorder-name default 3.3]
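The audit steps above boil down to three conditions on the two `configservice` outputs: every recorder records all supported types, at least one recorder includes global resources, and each recorder is actively recording with a SUCCESS status. A sketch over sample JSON (function name and sample values are illustrative):

```python
import json

def config_recorder_findings(recorders_json: str, status_json: str):
    """Evaluate describe-configuration-recorders and
    describe-configuration-recorder-status output (JSON strings) against
    the benchmark's checks. Empty list means the region looks compliant."""
    findings = []
    recorders = json.loads(recorders_json).get("ConfigurationRecorders", [])
    if not recorders:
        findings.append("no Config recorder in this region")
    for rec in recorders:
        if not rec.get("recordingGroup", {}).get("allSupported"):
            findings.append(f"{rec['name']}: allSupported is not true")
    # Global resources only need to be captured by one recorder (one region).
    if recorders and not any(r.get("recordingGroup", {}).get("includeGlobalResourceTypes")
                             for r in recorders):
        findings.append("no recorder includes global resource types")
    for status in json.loads(status_json).get("ConfigurationRecordersStatus", []):
        if not (status.get("recording") and status.get("lastStatus") == "SUCCESS"):
            findings.append(f"{status['name']}: not recording successfully")
    return findings

recorders = json.dumps({"ConfigurationRecorders": [{
    "name": "default",
    "recordingGroup": {"allSupported": True, "resourceTypes": [],
                       "includeGlobalResourceTypes": True}}]})
statuses = json.dumps({"ConfigurationRecordersStatus": [
    {"name": "default", "recording": True, "lastStatus": "SUCCESS"}]})
print(config_recorder_findings(recorders, statuses))  # []
```

Running this per region mirrors the "repeat steps 2 to 7 for each region" console loop.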
    Configuration Preventive
    Configure Logging settings in accordance with organizational standards. CC ID 07611 Configuration Preventive
    Configure "CloudTrail" to organizational standards. CC ID 15443
    [Ensure CloudTrail is enabled in all regions (Automated) Description: AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail provides a history of AWS API calls for an account, including API calls made via the Management Console, SDKs, command line tools, and higher-level AWS services (such as CloudFormation). Rationale: The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. Additionally, • ensuring that a multi-region trail exists will ensure that unexpected activity occurring in otherwise unused regions is detected • ensuring that a multi-region trail exists will ensure that Global Service Logging is enabled for a trail by default to capture recording of events generated on AWS global services • for a multi-region trail, ensuring that management events are configured for all types of Read/Writes ensures recording of management operations that are performed on all resources in an AWS account Impact: S3 lifecycle features can be used to manage the accumulation and management of logs over time. See the following AWS resource for more information on these features: 1. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html Audit: Perform the following to determine if CloudTrail is enabled for all regions: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. Click on Trails on the left navigation pane • You will be presented with a list of trails across all regions 3. Ensure at least one Trail has Yes specified in the Multi-region trail column 4. Click on a trail via the link in the Name column 5. Ensure Logging is set to ON 6. 
Ensure Multi-region trail is set to Yes 7. In the Management Events section, ensure API activity is set to ALL From Command Line: aws cloudtrail describe-trails Ensure IsMultiRegionTrail is set to true aws cloudtrail get-trail-status --name <trail_name> Ensure IsLogging is set to true aws cloudtrail get-event-selectors --trail-name <trail_name> Ensure there is at least one FieldSelector for a Trail that equals Management. This should NOT output any results for Field: "readOnly"; if either true or false is returned, one of the Read or Write checkboxes is not selected. Example of correct output: "TrailARN": "", "AdvancedEventSelectors": [ { "Name": "Management events selector", "FieldSelectors": [ { "Field": "eventCategory", "Equals": [ "Management" ] Remediation: Perform the following to enable global (Multi-region) CloudTrail logging: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. Click on Trails on the left navigation pane 3. Click Get Started Now, if presented • Click Add new trail • Enter a trail name in the Trail name box • A trail created in the console is a multi-region trail by default • Specify an S3 bucket name in the S3 bucket box • Specify the AWS KMS alias under the Log file SSE-KMS encryption section or create a new key • Click Next 4. Ensure the Management events check box is selected. 5. Ensure both Read and Write are checked under API activity 6. Click Next 7. Review your trail settings and click Create trail From Command Line: aws cloudtrail create-trail --name <trail_name> --bucket-name <s3_bucket_name> --is-multi-region-trail aws cloudtrail update-trail --name <trail_name> --is-multi-region-trail Note: Creating CloudTrail via CLI without providing any overriding options configures Management Events to record all types of Read/Writes by default. Default Value: Not Enabled 3.1]
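The core of the CLI audit above is a join between `describe-trails` and per-trail `get-trail-status`: at least one trail must be multi-region and actively logging. A minimal offline sketch (event-selector checks are omitted for brevity; names and data are illustrative):

```python
import json

def multi_region_trail_ok(trails_json: str, statuses: dict) -> bool:
    """Return True if at least one trail has IsMultiRegionTrail true and
    its get-trail-status output shows IsLogging true. `statuses` maps
    trail name -> parsed get-trail-status output."""
    for trail in json.loads(trails_json).get("trailList", []):
        if trail.get("IsMultiRegionTrail") and \
           statuses.get(trail["Name"], {}).get("IsLogging"):
            return True
    return False

# Hypothetical sample output from `aws cloudtrail describe-trails`:
trails = json.dumps({"trailList": [
    {"Name": "management-events", "IsMultiRegionTrail": True}]})
print(multi_region_trail_ok(trails, {"management-events": {"IsLogging": True}}))  # True
```

A complete check would also confirm an event selector (or advanced field selector) covering all Management read/write events, as the audit text prescribes.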
    Configuration Preventive
    Configure "CloudTrail log file validation" to organizational standards. CC ID 15437
    [Ensure CloudTrail log file validation is enabled (Automated) Description: CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3. These digest files can be used to determine whether a log file was changed, deleted, or unchanged after CloudTrail delivered the log. It is recommended that file validation be enabled on all CloudTrails. Rationale: Enabling log file validation will provide additional integrity checking of CloudTrail logs. Audit: Perform the following on each trail to determine if log file validation is enabled: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. Click on Trails on the left navigation pane 3. For Every Trail: • Click on a trail via the link in the Name column • Under the General details section, ensure Log file validation is set to Enabled From Command Line: aws cloudtrail describe-trails Ensure LogFileValidationEnabled is set to true for each trail Remediation: Perform the following to enable log file validation on a given trail: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. Click on Trails on the left navigation pane 3. Click on target trail 4. Within the General details section click edit 5. Under the Advanced settings section 6. Check the enable box under Log file validation 7. Click Save changes From Command Line: aws cloudtrail update-trail --name <trail_name> --enable-log-file-validation Note that periodic validation of logs using these digests can be performed by running the following command: aws cloudtrail validate-logs --trail-arn <trail_arn> --start-time <start_time> --end-time <end_time> Default Value: Not Enabled 3.2]
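The CLI audit above is a single per-trail flag check on `describe-trails` output. A one-function sketch that lists non-compliant trails (names and sample data are illustrative):

```python
import json

def trails_without_validation(trails_json: str):
    """Return names of trails whose LogFileValidationEnabled is not true,
    from `aws cloudtrail describe-trails` output."""
    return [t["Name"] for t in json.loads(trails_json).get("trailList", [])
            if not t.get("LogFileValidationEnabled")]

# Hypothetical sample output: one compliant trail, one not.
sample = json.dumps({"trailList": [
    {"Name": "audited", "LogFileValidationEnabled": True},
    {"Name": "legacy", "LogFileValidationEnabled": False}]})
print(trails_without_validation(sample))  # ['legacy']
```

Each name returned is a candidate for the `update-trail --enable-log-file-validation` remediation.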
    Configuration Preventive
    Configure "VPC flow logging" to organizational standards. CC ID 15436
    [Ensure VPC flow logging is enabled in all VPCs (Automated) Description: VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. It is recommended that VPC Flow Logs be enabled for packet "Rejects" for VPCs. Rationale: VPC Flow Logs provide visibility into network traffic that traverses the VPC and can be used to detect anomalous traffic or insight during security workflows. Impact: By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind that the average number of days it takes an organization to realize it has been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods: 1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html Audit: Perform the following to determine if VPC Flow logs are enabled: From Console: 1. Sign in to the management console 2. Select Services then VPC 3. In the left navigation pane, select Your VPCs 4. Select a VPC 5. In the right pane, select the Flow Logs tab. 6. Ensure a Flow Log exists that has Active in the Status column. From Command Line: 1. Run describe-vpcs command (OSX/Linux/UNIX) to list the VPC networks available in the current AWS region: aws ec2 describe-vpcs --region <region> --query Vpcs[].VpcId 2. The command output returns the VpcId available in the selected region. 3. 
Run describe-flow-logs command (OSX/Linux/UNIX) using the VPC ID to determine if the selected virtual network has the Flow Logs feature enabled: aws ec2 describe-flow-logs --filter "Name=resource-id,Values=<vpc_id>" 4. If there are no Flow Logs created for the selected VPC, the command output will return an empty list []. 5. Repeat step 3 for other VPCs available in the same region. 6. Change the region by updating --region and repeat steps 1 - 5 for all the VPCs. Remediation: Perform the following to enable VPC Flow Logs: From Console: 1. Sign in to the management console 2. Select Services then VPC 3. In the left navigation pane, select Your VPCs 4. Select a VPC 5. In the right pane, select the Flow Logs tab. 6. If no Flow Log exists, click Create Flow Log 7. For Filter, select Reject 8. Enter in a Role and Destination Log Group 9. Click Create Flow Log 10. Click on CloudWatch Logs Group Note: Setting the filter to "Reject" will dramatically reduce the logging data accumulation for this recommendation and provide sufficient information for the purposes of breach detection, research and remediation. However, during periods of least privilege security group engineering, setting the filter to "All" can be very helpful in discovering existing traffic flows required for proper operation of an already running environment. From Command Line: 1. Create a policy document and name it as role_policy_document.json and paste the following content: { "Version": "2012-10-17", "Statement": [ { "Sid": "test", "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } 2. Create another policy document and name it as iam_policy.json and paste the following content: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action":[ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:DescribeLogGroups", "logs:DescribeLogStreams", "logs:PutLogEvents", "logs:GetLogEvents", "logs:FilterLogEvents" ], "Resource": "*" } ] } 3. 
Run the below command to create an IAM role: aws iam create-role --role-name <role_name> --assume-role-policy-document file://role_policy_document.json 4. Run the below command to create an IAM policy: aws iam create-policy --policy-name <policy_name> --policy-document file://iam_policy.json 5. Run attach-role-policy command using the IAM policy ARN returned at the previous step to attach the policy to the IAM role (if the command succeeds, no output is returned): aws iam attach-role-policy --policy-arn arn:aws:iam::<account_id>:policy/<policy_name> --role-name <role_name> 6. Run describe-vpcs to get the VpcId available in the selected region: aws ec2 describe-vpcs --region <region> 7. The command output should return the VPC Id available in the selected region. 8. Run create-flow-logs to create a flow log for the VPC: aws ec2 create-flow-logs --resource-type VPC --resource-ids <vpc_id> --traffic-type REJECT --log-group-name <log_group_name> --deliver-logs-permission-arn <role_arn> 9. Repeat step 8 for other VPCs available in the selected region. 10. Change the region by updating --region and repeat the remediation procedure for the other VPCs. 3.7]
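The per-VPC audit loop above (describe-vpcs, then describe-flow-logs per VPC, flagging empty results) can be sketched offline. The function name, VPC IDs, and the check that a flow log's FlowLogStatus is ACTIVE are illustrative assumptions about the CLI's JSON shape:

```python
def vpcs_without_active_flow_logs(vpc_ids, flow_logs_by_vpc):
    """Given VPC IDs from `describe-vpcs` and a mapping of VPC ID to
    parsed `describe-flow-logs` output, return the VPCs that lack an
    ACTIVE flow log (the benchmark's non-compliant set)."""
    missing = []
    for vpc_id in vpc_ids:
        logs = flow_logs_by_vpc.get(vpc_id, {}).get("FlowLogs", [])
        if not any(log.get("FlowLogStatus") == "ACTIVE" for log in logs):
            missing.append(vpc_id)
    return missing

# Hypothetical sample: vpc-0a has an active flow log, vpc-0b returns [].
flow_logs = {
    "vpc-0a": {"FlowLogs": [{"FlowLogId": "fl-1", "FlowLogStatus": "ACTIVE"}]},
    "vpc-0b": {"FlowLogs": []},
}
print(vpcs_without_active_flow_logs(["vpc-0a", "vpc-0b"], flow_logs))  # ['vpc-0b']
```

Each VPC returned maps to one `create-flow-logs` remediation call; the region loop from the audit text wraps the whole check.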
    Configuration Preventive
    Configure "object-level logging" to organizational standards. CC ID 15433
    [Ensure that Object-level logging for write events is enabled for S3 bucket (Automated) Description: S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don't log data events and so it is recommended to enable Object-level logging for S3 buckets. Rationale: Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity within your S3 Buckets using Amazon CloudWatch Events. Impact: Enabling logging for these object level events may significantly increase the number of events logged and may incur additional cost. Audit: From Console: 1. Login to the AWS Management Console and navigate to CloudTrail dashboard at https://console.aws.amazon.com/cloudtrail/ 2. In the left panel, click Trails and then click on the CloudTrail Name that you want to examine. 3. Review General details 4. Confirm that Multi-region trail is set to Yes 5. Scroll down to Data events 6. Confirm that it reads: Data Events: S3, Log selector template: Log all events. If 'basic event selectors' is being used it should read: Data events: S3, Bucket Name: All current and future S3 buckets, Write: Enabled 7. Repeat steps 2 to 6 to verify Multi-region trail and Data events logging of S3 buckets for each CloudTrail trail. If the trails do not have multi-region and data events configured for S3, refer to the remediation below. From Command Line: 1. Run list-trails command to list the names of all Amazon CloudTrail trails currently available in all AWS regions: aws cloudtrail list-trails 2. The command output will be a list of all the trails, including: "TrailARN": "arn:aws:cloudtrail:<region>:<account_id>:trail/<trail_name>", "Name": "<trail_name>", "HomeRegion": "<region>" 3. Next run the get-trail command to determine Multi-region: aws cloudtrail get-trail --name <trail_name> --region <region> 4. 
The command output should include: "IsMultiRegionTrail": true, 5. Next run get-event-selectors command using the Name of the trail and the region returned in step 2 to determine if the Data events logging feature is enabled within the selected CloudTrail trail for all S3 buckets: aws cloudtrail get-event-selectors --region <region> --trail-name <trail_name> --query EventSelectors[*].DataResources[] 6. The command output should be an array that contains the configuration of the AWS resource (S3 bucket) defined for the Data events selector. "Type": "AWS::S3::Object", "Values": [ "arn:aws:s3" 7. If the get-event-selectors command returns an empty array '[]', the Data events are not included in the selected AWS CloudTrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded. 8. Repeat steps 1 to 5 for auditing each CloudTrail to determine if Data events for S3 are covered. If Multi-region is not set to true and the Data events do not show S3 defined as shown, refer to the remediation procedure below. Remediation: From Console: 1. Login to the AWS Management Console and navigate to the S3 dashboard at https://console.aws.amazon.com/s3/ 2. In the left navigation panel, click buckets and then click on the S3 Bucket Name that you want to examine. 3. Click the Properties tab to see the bucket configuration in detail. 4. In the AWS CloudTrail data events section, select the CloudTrail name for the recording activity. You can choose an existing trail or create a new one by clicking the Configure in CloudTrail button or navigating to the CloudTrail console at https://console.aws.amazon.com/cloudtrail/ 5. Once the CloudTrail is selected, select the Data Events check box. 6. Select S3 from the Data event type drop-down. 7. Select Log all events from the Log selector template drop-down. 8. Repeat steps 2 to 5 to enable object-level logging of write events for other S3 buckets. From Command Line: 1. 
To enable object-level data events logging for S3 buckets within your AWS account, run put-event-selectors command using the name of the trail that you want to reconfigure as identifier: aws cloudtrail put-event-selectors --region --trail-name --event-selectors '[{ "ReadWriteType": "WriteOnly", "IncludeManagementEvents":true, "DataResources": [{ "Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::/"] }] }]' 2. The command output will be object-level event trail configuration. 3. If you want to enable it for all buckets at once then change Values parameter to ["arn:aws:s3"] in command given above. 4. Repeat step 1 for each s3 bucket to update object-level logging of write events. 5. Change the AWS region by updating the --region command parameter and perform the process for other regions. 3.8
    Ensure that Object-level logging for read events is enabled for S3 bucket (Automated) Description: S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don't log data events and so it is recommended to enable Object-level logging for S3 buckets. Rationale: Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity using Amazon CloudWatch Events. Impact: Enabling logging for these object level events may significantly increase the number of events logged and may incur additional cost. Audit: From Console: 1. Login to the AWS Management Console and navigate to CloudTrail dashboard at https://console.aws.amazon.com/cloudtrail/ 2. In the left panel, click Trails and then click on the CloudTrail Name that you want to examine. 3. Review General details 4. Confirm that Multi-region trail is set to Yes 5. Scroll down to Data events 6. Confirm that it reads: Data Events:S3 Log selector template Log all events If 'basic events selectors' is being used it should read: Data events: S3 Bucket Name: All current and future S3 buckets Write: Enabled 7. Repeat steps 2 to 6 to verify that Multi-region trail and Data events logging of S3 buckets in CloudTrail. If the CloudTrails do not have multi-region and data events configured for S3 refer to the remediation below. From Command Line: 1. Run describe-trails command to list the names of all Amazon CloudTrail trails currently available in the selected AWS region: aws cloudtrail describe-trails --region --output table --query trailList[*].Name 2. The command output will be table of the requested trail names. 3. 
Run get-event-selectors command using the name of the trail returned at the previous step and custom query filters to determine if the Data events logging feature is enabled within the selected CloudTrail trail configuration for S3 bucket resources: aws cloudtrail get-event-selectors --region <region> --trail-name <trail_name> --query EventSelectors[*].DataResources[] 4. The command output should be an array that contains the configuration of the AWS resource (S3 bucket) defined for the Data events selector. 5. If the get-event-selectors command returns an empty array, the Data events are not included in the selected AWS CloudTrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded. 6. Repeat steps 1 to 5 for auditing each S3 bucket to identify other trails that are missing the capability to log Data events. 7. Change the AWS region by updating the --region command parameter and perform the audit process for other regions. Remediation: From Console: 1. Login to the AWS Management Console and navigate to the S3 dashboard at https://console.aws.amazon.com/s3/ 2. In the left navigation panel, click buckets and then click on the S3 Bucket Name that you want to examine. 3. Click the Properties tab to see the bucket configuration in detail. 4. In the AWS CloudTrail data events section, select the CloudTrail name for the recording activity. You can choose an existing trail or create a new one by clicking the Configure in CloudTrail button or navigating to the CloudTrail console at https://console.aws.amazon.com/cloudtrail/ 5. Once the CloudTrail is selected, select the Data Events check box. 6. Select S3 from the Data event type drop-down. 7. Select Log all events from the Log selector template drop-down. 8. Repeat steps 2 to 5 to enable object-level logging of read events for other S3 buckets. From Command Line: 1. 
To enable object-level data events logging for S3 buckets within your AWS account, run put-event-selectors command using the name of the trail that you want to reconfigure as identifier: aws cloudtrail put-event-selectors --region --trail-name --event-selectors '[{ "ReadWriteType": "ReadOnly", "IncludeManagementEvents":true, "DataResources": [{ "Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::/"] }] }]' 2. The command output will be object-level event trail configuration. 3. If you want to enable it for all buckets at once then change Values parameter to ["arn:aws:s3"] in command given above. 4. Repeat step 1 for each s3 bucket to update object-level logging of read events. 5. Change the AWS region by updating the --region command parameter and perform the process for other regions. 3.9]
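The paired audits above (3.8 write events, 3.9 read events) reduce to the same decision over classic event selectors: is there a selector covering the wanted ReadWriteType with an AWS::S3::Object data resource? A sketch over sample `get-event-selectors` output (advanced selectors are out of scope here; names and data are illustrative):

```python
def s3_data_events_covered(event_selectors, want="ReadOnly"):
    """Check classic event selectors for an S3 object data-resource entry
    whose ReadWriteType covers `want` ("ReadOnly" or "WriteOnly").
    "All" covers both; "arn:aws:s3" alone means all buckets."""
    for selector in event_selectors:
        if selector.get("ReadWriteType") not in ("All", want):
            continue
        for resource in selector.get("DataResources", []):
            if resource.get("Type") == "AWS::S3::Object" and \
               any(v.startswith("arn:aws:s3") for v in resource.get("Values", [])):
                return True
    return False

# Hypothetical sample selector logging all S3 data events for all buckets.
selectors = [{"ReadWriteType": "All", "IncludeManagementEvents": True,
              "DataResources": [{"Type": "AWS::S3::Object",
                                 "Values": ["arn:aws:s3"]}]}]
print(s3_data_events_covered(selectors, want="ReadOnly"))  # True
```

Running the check twice, once with want="ReadOnly" and once with want="WriteOnly", covers both recommendations at once.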
    Configuration Preventive
    Configure all logs to capture auditable events or actionable events. CC ID 06332 Configuration Preventive
    Configure the log to capture AWS Organizations changes. CC ID 15445
    [Ensure AWS Organizations changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for AWS Organizations changes made in the master AWS Account. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring AWS Organizations changes can help you prevent any unwanted, accidental or intentional modifications that may lead to unauthorized access or other security breaches. This monitoring technique helps you to ensure that any unexpected changes performed within your AWS Organizations can be investigated and any unwanted changes can be rolled back. Audit: If you are using CloudTrails and CloudWatch, perform the following: 1. Ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: • Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name>. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<account_id>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> Ensure IsLogging is set to TRUE • Ensure the identified multi-region CloudTrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <trail_name> • Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All. 2. 
Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = "AcceptHandshake") || ($.eventName = "AttachPolicy") || ($.eventName = "CreateAccount") || ($.eventName = "CreateOrganizationalUnit") || ($.eventName = "CreatePolicy") || ($.eventName = "DeclineHandshake") || ($.eventName = "DeleteOrganization") || ($.eventName = "DeleteOrganizationalUnit") || ($.eventName = "DeletePolicy") || ($.eventName = "DetachPolicy") || ($.eventName = "DisablePolicyType") || ($.eventName = "EnablePolicyType") || ($.eventName = "InviteAccountToOrganization") || ($.eventName = "LeaveOrganization") || ($.eventName = "MoveAccount") || ($.eventName = "RemoveAccountFromOrganization") || ($.eventName = "UpdatePolicy") || ($.eventName = "UpdateOrganizationalUnit")) }" 4. Note the value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn at least one subscription should have "SubscriptionArn" with valid aws ARN. Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. 
Create a metric filter based on filter pattern provided which checks for AWS Organizations changes and the taken from audit step 1: aws logs put-metric-filter --log-group-name -filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 -filter-pattern '{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = "AcceptHandshake") || ($.eventName = "AttachPolicy") || ($.eventName = "CreateAccount") || ($.eventName = "CreateOrganizationalUnit") || ($.eventName = "CreatePolicy") || ($.eventName = "DeclineHandshake") || ($.eventName = "DeleteOrganization") || ($.eventName = "DeleteOrganizationalUnit") || ($.eventName = "DeletePolicy") || ($.eventName = "DetachPolicy") || ($.eventName = "DisablePolicyType") || ($.eventName = "EnablePolicyType") || ($.eventName = "InviteAccountToOrganization") || ($.eventName = "LeaveOrganization") || ($.eventName = "MoveAccount") || ($.eventName = "RemoveAccountFromOrganization") || ($.eventName = "UpdatePolicy") || ($.eventName = "UpdateOrganizationalUnit")) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify: aws sns create-topic --name Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn --protocol --notification-endpoint Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name `` -metric-name `` --statistic Sum --period 300 -threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluationperiods 1 --namespace 'CISBenchmark' --alarm-actions 4.15]
    Configuration Preventive
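The audit step above checks that the metric filter's pattern covers every AWS Organizations event name. As a minimal sketch (not part of the benchmark), the expected pattern string can be assembled programmatically from the listed event names; the function name here is my own.

```python
# Event names the benchmark's Organizations filter pattern must match.
ORG_EVENTS = [
    "AcceptHandshake", "AttachPolicy", "CreateAccount", "CreateOrganizationalUnit",
    "CreatePolicy", "DeclineHandshake", "DeleteOrganization", "DeleteOrganizationalUnit",
    "DeletePolicy", "DetachPolicy", "DisablePolicyType", "EnablePolicyType",
    "InviteAccountToOrganization", "LeaveOrganization", "MoveAccount",
    "RemoveAccountFromOrganization", "UpdatePolicy", "UpdateOrganizationalUnit",
]

def org_changes_filter_pattern(events=ORG_EVENTS):
    """Assemble the CloudWatch Logs filter pattern shown in audit step 3."""
    clauses = " || ".join(f'($.eventName = "{e}")' for e in events)
    return f'{{ ($.eventSource = organizations.amazonaws.com) && ({clauses}) }}'

print(org_changes_filter_pattern())
```

Generating the pattern from one list keeps the `put-metric-filter` argument and any compliance check using the same event names.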
    Configure the log to capture Identity and Access Management policy changes. CC ID 15442
    [Ensure IAM policy changes are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for changes made to Identity and Access Management (IAM) policies.
Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring changes to IAM policies will help ensure authentication and authorization controls remain intact.
Impact: Monitoring these changes may cause a number of "false positives", more so in larger environments. This alert may need more tuning than others to eliminate some of those erroneous alerts.
Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail:
• List all CloudTrails: aws cloudtrail describe-trails
• Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
• From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name>. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup
• Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name>; ensure IsLogging is set to TRUE
• Ensure the identified multi-region CloudTrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <trail_name>; ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}"
4. Note the <metric_name> value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the <metric_name> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<metric_name>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>"
Remediation: If you are using CloudTrails and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for IAM policy changes, using the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name <filter_name> --metric-transformations metricName=<metric_name>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>
Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>
Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <alarm_name> --metric-name <metric_name> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.4]
    Configuration Preventive
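Audit step 3 above amounts to verifying that some metric filter on the log group mentions all sixteen IAM policy events. A hedged sketch of that check, operating on the JSON shape returned by `aws logs describe-metric-filters` (the function name and sample data are hypothetical):

```python
# IAM policy events the benchmark's filter pattern must cover.
IAM_EVENTS = [
    "DeleteGroupPolicy", "DeleteRolePolicy", "DeleteUserPolicy",
    "PutGroupPolicy", "PutRolePolicy", "PutUserPolicy",
    "CreatePolicy", "DeletePolicy", "CreatePolicyVersion", "DeletePolicyVersion",
    "AttachRolePolicy", "DetachRolePolicy", "AttachUserPolicy", "DetachUserPolicy",
    "AttachGroupPolicy", "DetachGroupPolicy",
]

def covers_iam_policy_changes(describe_output: dict) -> bool:
    """True if any filterPattern mentions every required IAM event name.

    A simple substring check; a stricter audit would parse the pattern grammar.
    """
    patterns = [f.get("filterPattern", "") for f in describe_output.get("metricFilters", [])]
    return any(all(e in p for e in IAM_EVENTS) for p in patterns)

# Hypothetical describe-metric-filters output for illustration:
sample = {"metricFilters": [{
    "filterName": "iam_changes",
    "filterPattern": "{" + "||".join(f"($.eventName={e})" for e in IAM_EVENTS) + "}",
}]}
print(covers_iam_policy_changes(sample))  # → True
```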
    Configure the log to capture management console sign-in without multi-factor authentication. CC ID 15441
    [Ensure management console sign-in without MFA is monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for console logins that are not protected by multi-factor authentication (MFA).
Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring for single-factor console logins will increase visibility into accounts that are not protected by MFA. These types of accounts are more susceptible to compromise and unauthorized access.
Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail:
• List all CloudTrails: aws cloudtrail describe-trails
• Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
• From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name>. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup
• Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name>; ensure in the output that IsLogging is set to TRUE
• Ensure the identified multi-region CloudTrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <trail_name>; ensure in the output there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") }" Or (to reduce false positives in case Single Sign-On (SSO) is used in the organization): "filterPattern": "{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") && ($.userIdentity.type = "IAMUser") && ($.responseElements.ConsoleLogin = "Success") }"
4. Note the <metric_name> value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the <metric_name> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<metric_name>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>"
Remediation: If you are using CloudTrails and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for AWS Management Console sign-in without MFA, using the <cloudtrail_log_group_name> taken from audit step 1. Use the command: aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name <filter_name> --metric-transformations metricName=<metric_name>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") }' Or (to reduce false positives in case Single Sign-On (SSO) is used in the organization): aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name <filter_name> --metric-transformations metricName=<metric_name>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") && ($.userIdentity.type = "IAMUser") && ($.responseElements.ConsoleLogin = "Success") }'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>
Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>
Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <alarm_name> --metric-name <metric_name> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.2]
    Configuration Preventive
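The SSO-aware filter pattern above combines four conditions. A minimal sketch of the same logic applied to a single CloudTrail event record, useful for reasoning about which logins the alarm would fire on (the sample events are hypothetical):

```python
def flags_console_login_without_mfa(event: dict) -> bool:
    """Mirror the SSO-aware pattern: successful IAM-user console logins without MFA."""
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("additionalEventData", {}).get("MFAUsed") != "Yes"
        and event.get("userIdentity", {}).get("type") == "IAMUser"
        and event.get("responseElements", {}).get("ConsoleLogin") == "Success"
    )

# Hypothetical CloudTrail events for illustration:
no_mfa = {
    "eventName": "ConsoleLogin",
    "additionalEventData": {"MFAUsed": "No"},
    "userIdentity": {"type": "IAMUser"},
    "responseElements": {"ConsoleLogin": "Success"},
}
with_mfa = {**no_mfa, "additionalEventData": {"MFAUsed": "Yes"}}
print(flags_console_login_without_mfa(no_mfa), flags_console_login_without_mfa(with_mfa))  # → True False
```

The extra `userIdentity.type` and `responseElements.ConsoleLogin` clauses are what suppress federated (SSO) sign-ins and failed attempts.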
    Configure the log to capture route table changes. CC ID 15439
    [Ensure route table changes are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. Routing tables are used to route network traffic between subnets and to network gateways. It is recommended that a metric filter and alarm be established for changes to route tables.
Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring changes to route tables will help ensure that all VPC traffic flows through an expected path and prevent any accidental or intentional modifications that may lead to uncontrolled network traffic. An alarm should be triggered every time an AWS API call is performed to create, replace, delete, or disassociate a Route Table.
Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail:
• List all CloudTrails: aws cloudtrail describe-trails
• Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
• From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name>. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup
• Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name>; ensure IsLogging is set to TRUE
• Ensure the identified multi-region CloudTrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <trail_name>; ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{($.eventSource = ec2.amazonaws.com) && ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }"
4. Note the <metric_name> value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the <metric_name> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<metric_name>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>"
Remediation: If you are using CloudTrails and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for route table changes, using the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name <filter_name> --metric-transformations metricName=<metric_name>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>
Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>
Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <alarm_name> --metric-name <metric_name> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.13]
    Configuration Preventive
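Audit step 1 above (shared by all of these controls) reduces to: keep only trails that are multi-region and actively logging. A hedged sketch of that selection over the JSON shapes returned by `aws cloudtrail describe-trails` and `get-trail-status` (function name and sample data are my own):

```python
def active_multi_region_trails(trails: list, statuses: dict) -> list:
    """Audit step 1: names of trails that are multi-region and currently logging.

    `trails` mimics describe-trails output entries; `statuses` maps trail name
    to its get-trail-status result.
    """
    return [
        t["Name"]
        for t in trails
        if t.get("IsMultiRegionTrail") and statuses.get(t["Name"], {}).get("IsLogging")
    ]

# Hypothetical CLI outputs for illustration:
trails = [
    {"Name": "mgmt-trail", "IsMultiRegionTrail": True},
    {"Name": "regional-trail", "IsMultiRegionTrail": False},
]
statuses = {"mgmt-trail": {"IsLogging": True}, "regional-trail": {"IsLogging": True}}
print(active_multi_region_trails(trails, statuses))  # → ['mgmt-trail']
```

A fuller check would also confirm, per trail, an event selector with IncludeManagementEvents true and ReadWriteType All.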
    Configure the log to capture virtual private cloud changes. CC ID 15435
    [Ensure VPC changes are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is possible to have more than one VPC within an account; in addition, it is also possible to create a peer connection between two VPCs, enabling network traffic to route between VPCs. It is recommended that a metric filter and alarm be established for changes made to VPCs.
Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. VPCs in AWS are logically isolated virtual networks that can be used to launch AWS resources. Monitoring changes to VPC configuration will help ensure VPC traffic flow is not getting impacted. Changes to VPCs can impact network accessibility from the public internet and additionally impact VPC traffic flow to and from resources launched in the VPC.
Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail:
• List all CloudTrails: aws cloudtrail describe-trails
• Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
• From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name>. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup
• Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name>; ensure IsLogging is set to TRUE
• Ensure the identified multi-region CloudTrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <trail_name>; ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }"
4. Note the <metric_name> value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the <metric_name> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<metric_name>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>"
Remediation: If you are using CloudTrails and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for VPC changes, using the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name <filter_name> --metric-transformations metricName=<metric_name>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>
Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>
Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <alarm_name> --metric-name <metric_name> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.14]
    Configuration Preventive
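The audit steps above repeatedly derive the log group name from the trail's CloudWatchLogsLogGroupArn (the "NewGroup" example). That extraction is a one-liner worth making explicit; this helper is a sketch of my own, not part of the benchmark:

```python
def log_group_from_arn(arn: str) -> str:
    """Extract the log group name from a CloudWatchLogsLogGroupArn.

    For arn:aws:logs:<region>:<account>:log-group:NewGroup:* this returns 'NewGroup'.
    """
    parts = arn.split(":")
    # The log group name is the token immediately after the 'log-group' segment.
    return parts[parts.index("log-group") + 1]

# Illustrative ARN with made-up region/account values:
print(log_group_from_arn("arn:aws:logs:us-east-1:111122223333:log-group:NewGroup:*"))  # → NewGroup
```

The result is what the audit and remediation commands pass as `--log-group-name`.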
    Configure the log to capture changes to encryption keys. CC ID 15432
    [Ensure disabling or scheduled deletion of customer created CMKs is monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for customer created CMKs which have changed state to disabled or scheduled deletion.
Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Data encrypted with disabled or deleted keys will no longer be accessible. Changes in the state of a CMK should be monitored to make sure the change is intentional.
Impact: Creation, storage, and management of CMKs may create additional labor requirements compared to the use of provider managed keys.
Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail:
• List all CloudTrails: aws cloudtrail describe-trails
• Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
• From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name>. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup
• Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name>; ensure IsLogging is set to TRUE
• Ensure the identified multi-region CloudTrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <trail_name>; ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }"
4. Note the <metric_name> value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the <metric_name> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<metric_name>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>"
Remediation: If you are using CloudTrails and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for disabled or scheduled-for-deletion CMKs, using the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name <filter_name> --metric-transformations metricName=<metric_name>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>
Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>
Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <alarm_name> --metric-name <metric_name> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.7]
    Configuration Preventive
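Audit steps 5 through 7 above trace the alarm to its SNS topic and then require at least one confirmed subscriber. A hedged sketch of that chain check over the JSON shapes the CLI returns (function name and sample data are hypothetical; a pending subscription shows "PendingConfirmation" instead of an ARN):

```python
def alarms_have_active_subscriber(alarms: list, subs_by_topic: dict) -> bool:
    """Audit steps 6-7: every AlarmActions topic has a confirmed subscription.

    `alarms` mimics describe-alarms MetricAlarms entries; `subs_by_topic` maps a
    topic ARN to its list-subscriptions-by-topic Subscriptions list.
    """
    for alarm in alarms:
        for topic_arn in alarm.get("AlarmActions", []):
            subs = subs_by_topic.get(topic_arn, [])
            # A confirmed subscriber carries a real SNS ARN, not "PendingConfirmation".
            if not any(s.get("SubscriptionArn", "").startswith("arn:aws:sns:") for s in subs):
                return False
    return bool(alarms)

# Hypothetical data with made-up account/topic names:
alarms = [{"AlarmName": "cmk_changes_alarm",
           "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:alerts"]}]
subs = {"arn:aws:sns:us-east-1:111122223333:alerts":
        [{"SubscriptionArn": "arn:aws:sns:us-east-1:111122223333:alerts:abcd-1234"}]}
print(alarms_have_active_subscriber(alarms, subs))  # → True
```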
    Configure the log to capture unauthorized API calls. CC ID 15429
    [Ensure unauthorized API calls are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for unauthorized API calls.
Rationale: Monitoring unauthorized API calls will help reduce time to detect malicious activity and can alert you to a potential security incident. CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting.
Impact: This alert may be triggered by normal read-only console activities that attempt to opportunistically gather optional information, but gracefully fail if they don't have permissions. If an excessive number of alerts are being generated, then an organization may wish to consider adding read access to the limited IAM user permissions simply to quiet the alerts. In some cases doing this may allow the users to actually view some areas of the system; any additional access given should be reviewed for alignment with the original limited IAM user intent.
Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail:
• List all CloudTrails: aws cloudtrail describe-trails
• Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
• From the value associated with "Name", note <trail_name>
• From the value associated with "CloudWatchLogsLogGroupArn", note <cloudtrail_log_group_name>. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup
• Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name>; ensure IsLogging is set to TRUE
• Ensure the identified multi-region CloudTrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <"Name" as shown in describe-trails>; ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all associated metric filters for this <cloudtrail_log_group_name> that you captured in step 1: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.errorCode ="*UnauthorizedOperation") || ($.errorCode ="AccessDenied*") && ($.sourceIPAddress!="delivery.logs.amazonaws.com") && ($.eventName!="HeadBucket") }"
4. Note the "filterName" value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4: aws cloudwatch describe-alarms --query "MetricAlarms[?MetricName == `unauthorized_api_calls_metric`]"
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>"
Remediation: If you are using CloudTrails and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for unauthorized API calls, using the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name "<cloudtrail_log_group_name>" --filter-name "<filter_name>" --metric-transformations metricName=unauthorized_api_calls_metric,metricNamespace=CISBenchmark,metricValue=1 --filter-pattern "{ ($.errorCode ="*UnauthorizedOperation") || ($.errorCode ="AccessDenied*") && ($.sourceIPAddress!="delivery.logs.amazonaws.com") && ($.eventName!="HeadBucket") }"
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>
Note: you can execute this command once and then re-use the same topic for all monitoring alarms. Capture the TopicArn displayed when creating the SNS topic in this step.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>
Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name "unauthorized_api_calls_alarm" --metric-name "unauthorized_api_calls_metric" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace "CISBenchmark" --alarm-actions <sns_topic_arn> 4.1]
    Configuration Preventive
    Configure the log to capture changes to network gateways. CC ID 15421
[Ensure changes to network gateways are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. Network gateways are required to send/receive traffic to a destination outside of a VPC. It is recommended that a metric filter and alarm be established for changes to network gateways.
Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring changes to network gateways will help ensure that all ingress/egress traffic traverses the VPC border via a controlled path.
Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name> Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified Multi region CloudTrail is active: aws cloudtrail get-trail-status --name <cloudtrail_name> and ensure IsLogging is set to TRUE • Ensure the identified Multi-region Cloudtrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <cloudtrail_name> Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all associated metric filters for the <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }"
4. Note the <network_gw_changes_metric> value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the <network_gw_changes_metric> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== `<network_gw_changes_metric>`]'
6. Note the AlarmActions value - this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> At least one subscription should have a "SubscriptionArn" with a valid AWS ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>"
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for network gateway changes, using the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name `<network_gw_changes_metric>` --metric-transformations metricName= `<network_gw_changes_metric>` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name `<network_gw_changes_alarm>` --metric-name `<network_gw_changes_metric>` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.12]
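The metric filter pattern in this control is a disjunction over six gateway-related event names. As an illustrative sketch only (not the CloudWatch filter engine itself), the same test can be expressed in Python against a parsed CloudTrail event:

```python
# The six gateway-change event names from the filter pattern above.
NETWORK_GW_EVENTS = {
    "CreateCustomerGateway", "DeleteCustomerGateway",
    "AttachInternetGateway", "CreateInternetGateway",
    "DeleteInternetGateway", "DetachInternetGateway",
}

def matches_network_gw_filter(event: dict) -> bool:
    """Mirror of the filter pattern: true when the event is one of the
    six network-gateway change events."""
    return event.get("eventName") in NETWORK_GW_EVENTS

# Hypothetical sample events:
print(matches_network_gw_filter({"eventName": "CreateInternetGateway"}))  # True
print(matches_network_gw_filter({"eventName": "RunInstances"}))           # False
```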
    Configuration Preventive
    Configure the log to capture hardware and software access attempts. CC ID 01220
[Ensure AWS Management Console authentication failures are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for failed console authentication attempts.
Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring failed console logins may decrease lead time to detect an attempt to brute force a credential, which may provide an indicator, such as source IP address, that can be used in other event correlation.
Impact: Monitoring for these failures may create a large number of alerts, more so in larger environments.
Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name> Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified Multi region CloudTrail is active: aws cloudtrail get-trail-status --name <cloudtrail_name> and ensure IsLogging is set to TRUE • Ensure the identified Multi-region Cloudtrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <cloudtrail_name> Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all associated metric filters for the <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }"
4. Note the <console_signin_failure_metric> value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the <console_signin_failure_metric> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== `<console_signin_failure_metric>`]'
6. Note the AlarmActions value - this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> At least one subscription should have a "SubscriptionArn" with a valid AWS ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>"
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for AWS Management Console login failures, using the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name `<console_signin_failure_metric>` --metric-transformations metricName= `<console_signin_failure_metric>` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name `<console_signin_failure_alarm>` --metric-name `<console_signin_failure_metric>` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.6]
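The filter pattern for this control conjoins two conditions on each CloudTrail event. A minimal Python sketch of that test, assuming a parsed event dict (sample events are hypothetical):

```python
def is_failed_console_login(event: dict) -> bool:
    """Mirror of the filter pattern: ConsoleLogin events whose
    errorMessage reports a failed authentication."""
    return (event.get("eventName") == "ConsoleLogin"
            and event.get("errorMessage") == "Failed authentication")

# A failed login carries the errorMessage field; a successful one does not.
failed  = {"eventName": "ConsoleLogin", "errorMessage": "Failed authentication"}
success = {"eventName": "ConsoleLogin"}
print(is_failed_console_login(failed))   # True
print(is_failed_console_login(success))  # False
```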
    Log Management Detective
    Configure the log to capture access to restricted data or restricted information. CC ID 00644
[Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket (Automated)
Description: S3 Bucket Access Logging generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed. It is recommended that bucket access logging be enabled on the CloudTrail S3 bucket.
Rationale: By enabling S3 bucket logging on target S3 buckets, it is possible to capture all events which may affect objects within any target buckets. Configuring logs to be placed in a separate bucket allows access to log information which can be useful in security and incident response workflows.
Audit: Perform the following to ensure the CloudTrail S3 bucket has access logging enabled:
From Console: 1. Go to the Amazon CloudTrail console at https://console.aws.amazon.com/cloudtrail/home 2. In the API activity history pane on the left, click Trails 3. In the Trails pane, note the bucket names in the S3 bucket column 4. Sign in to the AWS Management Console and open the S3 console at https://console.aws.amazon.com/s3. 5. Under All Buckets click on a target S3 bucket 6. Click on Properties in the top right of the console 7. Under Bucket: <bucket_name> click on Logging 8. Ensure Enabled is checked.
From Command Line: 1. Get the name of the S3 bucket that CloudTrail is logging to: aws cloudtrail describe-trails --query 'trailList[*].S3BucketName' 2. Ensure Bucket Logging is enabled: aws s3api get-bucket-logging --bucket <s3_bucket_for_cloudtrail> Ensure the command does not return empty output. Sample output for a bucket with logging enabled: { "LoggingEnabled": { "TargetPrefix": "<Prefix>", "TargetBucket": "<TargetBucketName>" } }
Remediation: Perform the following to enable S3 bucket logging:
From Console: 1. Sign in to the AWS Management Console and open the S3 console at https://console.aws.amazon.com/s3. 2. Under All Buckets click on the target S3 bucket 3. Click on Properties in the top right of the console 4. Under Bucket: <bucket_name> click on Logging 5. Configure bucket logging: o Click on the Enabled checkbox o Select a Target Bucket from the list o Enter a Target Prefix 6. Click Save.
From Command Line: 1. Get the name of the S3 bucket that CloudTrail is logging to: aws cloudtrail describe-trails --region <region> --query trailList[*].S3BucketName 2. Copy and add the target bucket name at <Logging_BucketName>, the prefix for the log file at <LogFilePrefix>, and optionally add an email address in the following template and save it as <FileName>.json: { "LoggingEnabled": { "TargetBucket": "<Logging_BucketName>", "TargetPrefix": "<LogFilePrefix>", "TargetGrants": [ { "Grantee": { "Type": "AmazonCustomerByEmail", "EmailAddress": "<EmailID>" }, "Permission": "FULL_CONTROL" } ] } } 3. Run the put-bucket-logging command with the bucket name and <FileName>.json as input; for more information refer to put-bucket-logging: aws s3api put-bucket-logging --bucket <BucketName> --bucket-logging-status file://<FileName>.json
Default Value: Logging is disabled. 3.4]
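The command-line audit above passes when `get-bucket-logging` returns a non-empty `LoggingEnabled` block. A small Python sketch of that check, assuming the CLI's JSON output has already been parsed:

```python
def bucket_logging_enabled(response: dict) -> bool:
    """True when `aws s3api get-bucket-logging` output names a target
    bucket; an empty response ({}) means logging is disabled."""
    cfg = response.get("LoggingEnabled") or {}
    return bool(cfg.get("TargetBucket"))

# Hypothetical sample outputs:
enabled  = {"LoggingEnabled": {"TargetPrefix": "cloudtrail-logs/",
                               "TargetBucket": "my-access-log-bucket"}}
disabled = {}  # the CLI returns nothing for a bucket without logging
print(bucket_logging_enabled(enabled))   # True
print(bucket_logging_enabled(disabled))  # False
```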
    Log Management Detective
    Configure the log to capture actions taken by individuals with root privileges or administrative privileges and add logging option to the root file system. CC ID 00645
[{root user} Ensure usage of 'root' account is monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for 'root' login attempts to detect unauthorized use of, or attempts to use, the root account.
Rationale: Monitoring for 'root' account logins will provide visibility into the use of a fully privileged account and an opportunity to reduce the use of it. CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting.
Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name> Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified Multi region CloudTrail is active: aws cloudtrail get-trail-status --name <cloudtrail_name> and ensure IsLogging is set to TRUE • Ensure the identified Multi-region Cloudtrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <cloudtrail_name> Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all associated metric filters for the <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }"
4. Note the <root_usage_metric> value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the <root_usage_metric> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== `<root_usage_metric>`]'
6. Note the AlarmActions value - this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> At least one subscription should have a "SubscriptionArn" with a valid AWS ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>"
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for 'Root' account usage, using the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name `<cloudtrail_log_group_name>` --filter-name `<root_usage_metric>` --metric-transformations metricName= `<root_usage_metric>` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name `<root_usage_alarm>` --metric-name `<root_usage_metric>` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.3]
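The root-usage pattern combines three conditions, including a NOT EXISTS test on userIdentity.invokedBy. As an illustrative Python sketch of that logic against a parsed CloudTrail event (sample events are hypothetical):

```python
def is_root_usage(event: dict) -> bool:
    """Mirror of the filter pattern: root identity, not invoked by an
    AWS service on the account's behalf, and not an AWS service event."""
    ident = event.get("userIdentity", {})
    return (ident.get("type") == "Root"
            and "invokedBy" not in ident           # NOT EXISTS
            and event.get("eventType") != "AwsServiceEvent")

direct_root  = {"userIdentity": {"type": "Root"}, "eventType": "AwsApiCall"}
service_call = {"userIdentity": {"type": "Root",
                                 "invokedBy": "config.amazonaws.com"},
                "eventType": "AwsApiCall"}
print(is_root_usage(direct_root))   # True
print(is_root_usage(service_call))  # False
```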
    Log Management Detective
    Configure the log to capture configuration changes. CC ID 06881
[Ensure AWS Config configuration changes are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for detecting changes to AWS Config's configurations.
Rationale: Monitoring changes to AWS Config configuration will help ensure sustained visibility of configuration items within the AWS account. CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting.
Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name> Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified Multi region CloudTrail is active: aws cloudtrail get-trail-status --name <cloudtrail_name> and ensure IsLogging is set to TRUE • Ensure the identified Multi-region Cloudtrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <cloudtrail_name> Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all associated metric filters for the <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel) ||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }"
4. Note the <aws_config_changes_metric> value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the <aws_config_changes_metric> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== `<aws_config_changes_metric>`]'
6. Note the AlarmActions value - this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> At least one subscription should have a "SubscriptionArn" with a valid AWS ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>"
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for AWS Config configuration changes, using the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name `<aws_config_changes_metric>` --metric-transformations metricName= `<aws_config_changes_metric>` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel) ||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name `<aws_config_changes_alarm>` --metric-name `<aws_config_changes_metric>` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.9
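This control's filter pattern pairs a source check with a disjunction of four event names. As an illustrative Python sketch of the same predicate against a parsed CloudTrail event (sample events are hypothetical):

```python
# The four AWS Config change events from the filter pattern above.
CONFIG_CHANGE_EVENTS = {
    "StopConfigurationRecorder", "DeleteDeliveryChannel",
    "PutDeliveryChannel", "PutConfigurationRecorder",
}

def is_config_change(event: dict) -> bool:
    """Mirror of the filter pattern: an AWS Config API call that alters
    the recorder or delivery channel."""
    return (event.get("eventSource") == "config.amazonaws.com"
            and event.get("eventName") in CONFIG_CHANGE_EVENTS)

print(is_config_change({"eventSource": "config.amazonaws.com",
                        "eventName": "StopConfigurationRecorder"}))  # True
print(is_config_change({"eventSource": "ec2.amazonaws.com",
                        "eventName": "StopConfigurationRecorder"}))  # False
```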
Ensure CloudTrail configuration changes are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, where metric filters and alarms can be established. It is recommended that a metric filter and alarm be utilized for detecting changes to CloudTrail's configurations.
Rationale: Monitoring changes to CloudTrail's configuration will help ensure sustained visibility to activities performed in the AWS account.
Impact: These steps can be performed manually in a company's existing SIEM platform in cases where CloudTrail logs are monitored outside of the AWS monitoring tools within CloudWatch.
Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured, or that the filters are configured in the appropriate SIEM alerts:
1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name> Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified Multi region CloudTrail is active: aws cloudtrail get-trail-status --name <cloudtrail_name> and ensure IsLogging is set to TRUE • Ensure the identified Multi-region Cloudtrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <cloudtrail_name> Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all associated metric filters for the <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>"
3. Ensure the filterPattern output from the above command contains the following: "filterPattern": "{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }"
4. Note the <cloudtrail_cfg_changes_metric> value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the <cloudtrail_cfg_changes_metric> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== `<cloudtrail_cfg_changes_metric>`]'
6. Note the AlarmActions value - this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> At least one subscription should have a "SubscriptionArn" with a valid AWS ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>"
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for CloudTrail configuration changes, using the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name `<cloudtrail_cfg_changes_metric>` --metric-transformations metricName= `<cloudtrail_cfg_changes_metric>` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name `<cloudtrail_cfg_changes_alarm>` --metric-name `<cloudtrail_cfg_changes_metric>` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.5]
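Audit step 7, which recurs in every monitoring control, checks that at least one SNS subscription is confirmed (a pending subscription reports "PendingConfirmation" rather than a full ARN). A Python sketch of that check, assuming a parsed `list-subscriptions-by-topic` response; the ARN shape used by the regex is the `arn:aws:sns:<region>:<account>:<topic>:<subscription-id>` form shown in the audit text:

```python
import re

# Full subscription ARN: arn:aws:sns:<region>:<12-digit account>:<topic>:<uuid>
SUB_ARN = re.compile(r"^arn:aws:sns:[^:]+:\d{12}:[^:]+:[0-9a-f-]+$")

def has_active_subscriber(response: dict) -> bool:
    """True when at least one subscription carries a full SNS ARN."""
    return any(SUB_ARN.match(s.get("SubscriptionArn", ""))
               for s in response.get("Subscriptions", []))

# Hypothetical sample response:
sample = {"Subscriptions": [
    {"SubscriptionArn": "PendingConfirmation"},
    {"SubscriptionArn": "arn:aws:sns:us-east-1:111122223333:"
                        "cis-alarms:2bcfbf39-05c3-41de-beaa-fcfcc21c8f55"},
]}
print(has_active_subscriber(sample))  # True
```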
    Configuration Preventive
    Configure the log to capture changes to User privileges, audit policies, and trust policies by enabling audit policy changes. CC ID 01698
[Ensure S3 bucket policy changes are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for changes to S3 bucket policies.
Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring changes to S3 bucket policies may reduce time to detect and correct permissive policies on sensitive S3 buckets.
Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name> Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified Multi region CloudTrail is active: aws cloudtrail get-trail-status --name <cloudtrail_name> and ensure IsLogging is set to TRUE • Ensure the identified Multi-region Cloudtrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <cloudtrail_name> Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all associated metric filters for the <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }"
4. Note the <s3_bucket_policy_changes_metric> value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the <s3_bucket_policy_changes_metric> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== `<s3_bucket_policy_changes_metric>`]'
6. Note the AlarmActions value - this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> At least one subscription should have a "SubscriptionArn" with a valid AWS ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>"
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for S3 bucket policy changes, using the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name `<s3_bucket_policy_changes_metric>` --metric-transformations metricName= `<s3_bucket_policy_changes_metric>` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name `<s3_bucket_policy_changes_alarm>` --metric-name `<s3_bucket_policy_changes_metric>` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.8]
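The S3 policy-change pattern gates on the event source and nine bucket-configuration event names. An illustrative Python sketch of the same predicate against a parsed CloudTrail event (sample events are hypothetical):

```python
# The nine bucket-configuration events from the filter pattern above.
S3_POLICY_EVENTS = {
    "PutBucketAcl", "PutBucketPolicy", "PutBucketCors",
    "PutBucketLifecycle", "PutBucketReplication",
    "DeleteBucketPolicy", "DeleteBucketCors",
    "DeleteBucketLifecycle", "DeleteBucketReplication",
}

def is_s3_policy_change(event: dict) -> bool:
    """Mirror of the filter pattern: an S3 API call that alters a
    bucket's policy, ACL, CORS, lifecycle, or replication settings."""
    return (event.get("eventSource") == "s3.amazonaws.com"
            and event.get("eventName") in S3_POLICY_EVENTS)

print(is_s3_policy_change({"eventSource": "s3.amazonaws.com",
                           "eventName": "PutBucketPolicy"}))  # True
print(is_s3_policy_change({"eventSource": "s3.amazonaws.com",
                           "eventName": "GetObject"}))        # False
```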
    Log Management Detective
    Configure the log to capture user account additions, modifications, and deletions. CC ID 16482 Log Management Preventive
    Configure Key, Certificate, Password, Authentication and Identity Management settings in accordance with organizational standards. CC ID 07621 Configuration Preventive
    Configure "MFA Delete" to organizational standards. CC ID 15430
[Ensure MFA Delete is enabled on S3 buckets (Manual)
Description: Once MFA Delete is enabled on your sensitive and classified S3 bucket, it requires the user to have two forms of authentication.
Rationale: Adding MFA delete to an S3 bucket requires additional authentication when you change the version state of your bucket or delete an object version, adding another layer of security in the event your security credentials are compromised or unauthorized access is granted.
Impact: Enabling MFA delete on an S3 bucket could require additional administrator oversight. Enabling MFA delete may impact other services that automate the creation and/or deletion of S3 buckets.
Audit: Perform the steps below to confirm MFA delete is configured on an S3 bucket.
From Console: 1. Login to the S3 console at https://console.aws.amazon.com/s3/ 2. Click the check box next to the bucket name you want to confirm 3. In the window under Properties 4. Confirm that Versioning is Enabled 5. Confirm that MFA Delete is Enabled
From Command Line: 1. Run the get-bucket-versioning command: aws s3api get-bucket-versioning --bucket my-bucket Output example: { "Status": "Enabled", "MFADelete": "Enabled" } If the Console or the CLI output does not show Versioning and MFA Delete enabled, refer to the remediation below.
Remediation: Perform the steps below to enable MFA delete on an S3 bucket. Note: - You cannot enable MFA Delete using the AWS Management Console. You must use the AWS CLI or API. - You must use your 'root' account to enable MFA Delete on S3 buckets.
From Command Line: 1. Run the s3api put-bucket-versioning command: aws s3api put-bucket-versioning --profile my-root-profile --bucket Bucket_Name --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::aws_account_id:mfa/root-account-mfa-device passcode" 2.1.2]
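The command-line audit passes only when both versioning and MFA Delete report Enabled. A Python sketch of that check, assuming a parsed `get-bucket-versioning` response (sample data is hypothetical):

```python
def mfa_delete_enabled(versioning_response: dict) -> bool:
    """True when both versioning and MFA Delete are enabled on the
    bucket, per `aws s3api get-bucket-versioning` output."""
    return (versioning_response.get("Status") == "Enabled"
            and versioning_response.get("MFADelete") == "Enabled")

compliant     = {"Status": "Enabled", "MFADelete": "Enabled"}
versioning_on = {"Status": "Enabled"}  # versioning only, MFA Delete absent
print(mfa_delete_enabled(compliant))      # True
print(mfa_delete_enabled(versioning_on))  # False
```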
    Configuration Preventive
    Configure Identity and Access Management policies to organizational standards. CC ID 15422
    [Ensure IAM policies that allow full "*:*" administrative privileges are not attached (Automated) Description: IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended, and considered standard security advice, to grant least privilege - that is, granting only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform only those tasks, instead of allowing full administrative privileges. Rationale: It's more secure to start with a minimum set of permissions and grant additional permissions as necessary, rather than starting with permissions that are too lenient and then trying to tighten them later. Providing full administrative privileges instead of restricting to the minimum set of permissions that the user requires exposes the resources to potentially unwanted actions. IAM policies that have a statement with "Effect": "Allow" with "Action": "*" over "Resource": "*" should be removed. Audit: Perform the following to determine what policies are created: From Command Line: 1. Run the following to get a list of IAM policies: aws iam list-policies --only-attached --output text 2. For each policy returned, run the following command to determine if any policy allows full administrative privileges on the account: aws iam get-policy-version --policy-arn <policy_arn> --version-id <version> 3. In the output, ensure the policy does not have any Statement block with "Effect": "Allow" and Action set to "*" and Resource set to "*" Remediation: From Console: Perform the following to detach the policy that has full administrative privileges: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, click Policies and then search for the policy name found in the audit step. 3. Select the policy that needs to be deleted. 4. In the policy action menu, select first Detach 5. Select all Users, Groups, Roles that have this policy attached 6. Click Detach Policy 7. In the policy action menu, select Detach 8. Select the newly detached policy and select Delete From Command Line: Perform the following to detach the policy that has full administrative privileges as found in the audit step: 1. List all IAM users, groups, and roles that the specified managed policy is attached to: aws iam list-entities-for-policy --policy-arn <policy_arn> 2. Detach the policy from all IAM users: aws iam detach-user-policy --user-name <iam_user> --policy-arn <policy_arn> 3. Detach the policy from all IAM groups: aws iam detach-group-policy --group-name <iam_group> --policy-arn <policy_arn> 4. Detach the policy from all IAM roles: aws iam detach-role-policy --role-name <iam_role> --policy-arn <policy_arn> 1.16]
    Configuration Preventive
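Audit step 3 above - rejecting any Statement block with "Effect": "Allow", Action "*", and Resource "*" - can be expressed as a small check over a parsed policy document. A minimal sketch (the function name and sample documents are illustrative; IAM allows Action and Resource to be either a string or a list, which the sketch handles):

```python
def has_full_admin(policy_document: dict) -> bool:
    """Flag a policy document containing a statement with Effect "Allow",
    Action "*", and Resource "*", per audit step 3 above."""
    statements = policy_document.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be unwrapped
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions and "*" in resources:
            return True
    return False

# Illustrative policy documents:
admin = {"Version": "2012-10-17",
         "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}
scoped = {"Version": "2012-10-17",
          "Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                         "Resource": "arn:aws:s3:::example-bucket/*"}]}
print(has_full_admin(admin))   # True
print(has_full_admin(scoped))  # False
```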
    Configure the Identity and Access Management Access analyzer to organizational standards. CC ID 15420
    [Ensure that IAM Access analyzer is enabled for all regions (Automated) Description: Enable IAM Access Analyzer for IAM policies about all resources in each active AWS region. IAM Access Analyzer is a technology introduced at AWS re:Invent 2019. After the analyzer is enabled in IAM, scan results are displayed on the console showing the accessible resources. Scans show resources that other accounts and federated users can access, such as KMS keys and IAM roles. The results allow you to determine if an unintended user is allowed, making it easier for administrators to monitor least-privilege access. Access Analyzer analyzes only policies that are applied to resources in the same AWS Region. Rationale: AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data. Access Analyzer identifies resources that are shared with external principals by using logic-based reasoning to analyze the resource-based policies in your AWS environment. IAM Access Analyzer continuously monitors all policies for S3 buckets, IAM roles, KMS (Key Management Service) keys, AWS Lambda functions, and Amazon SQS (Simple Queue Service) queues. Audit: From Console: 1. Open the IAM console at https://console.aws.amazon.com/iam/ 2. Choose Access analyzer 3. Click 'Analyzers' 4. Ensure that at least one analyzer is present 5. Ensure that the STATUS is set to Active 6. Repeat these steps for each active region From Command Line: 1. Run the following command: aws accessanalyzer list-analyzers | grep status 2. Ensure that the status of at least one analyzer is set to ACTIVE 3. Repeat the steps above for each active region. If an Access analyzer is not listed for each region or the status is not set to active, refer to the remediation procedure below. Remediation: From Console: Perform the following to enable IAM Access Analyzer for IAM policies: 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Access analyzer. 3. Choose Create analyzer. 4. On the Create analyzer page, confirm that the Region displayed is the Region where you want to enable Access Analyzer. 5. Enter a name for the analyzer. Optional, as it will generate a name for you automatically. 6. Add any tags that you want to apply to the analyzer. Optional. 7. Choose Create Analyzer. 8. Repeat these steps for each active region From Command Line: Run the following command: aws accessanalyzer create-analyzer --analyzer-name <NAME> --type <TYPE> Repeat the command above for each active region. Note: The IAM Access Analyzer is successfully configured only when the account you use has the necessary permissions. 1.20]
    Configuration Preventive
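The command-line audit above - at least one analyzer present with status ACTIVE in each region - can be sketched as a check over the list-analyzers response. The function name and the sample response are illustrative; the field names follow the list-analyzers output the audit step inspects.

```python
def region_has_active_analyzer(list_analyzers_response: dict) -> bool:
    """True when at least one analyzer in the response has status ACTIVE,
    per audit step 2 above. Run once per active region."""
    analyzers = list_analyzers_response.get("analyzers", [])
    return any(a.get("status") == "ACTIVE" for a in analyzers)

# Illustrative responses, one per region:
us_east_1 = {"analyzers": [{"name": "example-analyzer", "status": "ACTIVE"}]}
eu_west_1 = {"analyzers": []}
print(region_has_active_analyzer(us_east_1))  # True
print(region_has_active_analyzer(eu_west_1))  # False
```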
    Configure the "Minimum password length" to organizational standards. CC ID 07711
    [Ensure IAM password policy requires minimum length of 14 or greater (Automated) Description: Password policies are, in part, used to enforce password complexity requirements. IAM password policies can be used to ensure passwords are at least a given length. It is recommended that the password policy require a minimum password length of 14. Rationale: Setting a password complexity policy increases account resiliency against brute force login attempts. Audit: Perform the following to ensure the password policy is configured as prescribed: From Console: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Ensure "Minimum password length" is set to 14 or greater. From Command Line: aws iam get-account-password-policy Ensure the output of the above command includes "MinimumPasswordLength": 14 (or higher) Remediation: Perform the following to set the password policy as prescribed: From Console: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Set "Minimum password length" to 14 or greater. 5. Click "Apply password policy" From Command Line: aws iam update-account-password-policy --minimum-password-length 14 Note: All commands starting with "aws iam update-account-password-policy" can be combined into a single command. 1.8]
    Configuration Preventive
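The command-line audit above can be sketched as a check over the get-account-password-policy response, which wraps its fields in a PasswordPolicy object. A minimal illustration (the function name and sample data are ours):

```python
def password_length_compliant(get_policy_response: dict, minimum: int = 14) -> bool:
    """True when the account password policy requires a minimum length of
    at least `minimum` characters, per the audit step above."""
    policy = get_policy_response.get("PasswordPolicy", {})
    return policy.get("MinimumPasswordLength", 0) >= minimum

# Illustrative responses:
print(password_length_compliant({"PasswordPolicy": {"MinimumPasswordLength": 14}}))  # True
print(password_length_compliant({"PasswordPolicy": {"MinimumPasswordLength": 8}}))   # False
```

Treating a missing field as length 0 makes an absent or incomplete policy fail the check, which matches the conservative intent of the control.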
    Configure Encryption settings in accordance with organizational standards. CC ID 07625
    [Ensure that encryption-at-rest is enabled for RDS Instances (Automated) Description: Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances. After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance. Rationale: Databases are likely to hold sensitive and critical data; it is highly recommended to implement encryption in order to protect your data from unauthorized access or disclosure. With RDS encryption enabled, the data stored on the instance's underlying storage, the automated backups, read replicas, and snapshots are all encrypted. Audit: From Console: 1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/ 2. In the navigation pane, under RDS dashboard, click Databases. 3. Select the RDS instance that you want to examine 4. Click the instance name to see details, then click on the Configuration tab. 5. Under the Configuration Details section, in the Storage pane, search for the Encryption Enabled status. 6. If the current status is set to Disabled, encryption is not enabled for the selected RDS database instance. 7. Repeat steps 3 to 6 to verify the encryption status of other RDS instances in the same region. 8. Change region from the top of the navigation bar and repeat the audit for other regions. From Command Line: 1. Run the describe-db-instances command to list all RDS instance database names available in the selected AWS region; the output will return each instance database identifier name. aws rds describe-db-instances --region <region> --query 'DBInstances[*].DBInstanceIdentifier' 2. Run the describe-db-instances command again using the RDS instance identifier returned earlier to determine if the selected database instance is encrypted; the command output should return the encryption status True or False. aws rds describe-db-instances --region <region> --db-instance-identifier <db_instance_identifier> --query 'DBInstances[*].StorageEncrypted' 3. If the StorageEncrypted parameter value is False, encryption is not enabled for the selected RDS database instance. 4. Repeat steps 1 to 3 to audit each RDS instance, and change the region to verify other regions. Remediation: From Console: 1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on Databases 3. Select the database instance that needs to be encrypted. 4. Click on the Actions button placed at the top right and select Take Snapshot. 5. On the Take Snapshot page, enter a name for the snapshot in the Snapshot Name field and click on Take Snapshot. 6. Select the newly created snapshot, click on the Actions button placed at the top right, and select Copy snapshot from the Actions menu. 7. On the Make Copy of DB Snapshot page, perform the following: • In the New DB Snapshot Identifier field, enter a name for the new snapshot. • Check Copy Tags; the new snapshot must have the same tags as the source snapshot. • Select Yes from the Enable Encryption dropdown list to enable encryption. You can choose to use the AWS default encryption key or a custom key from the Master Key dropdown list. 8. Click Copy Snapshot to create an encrypted copy of the selected instance snapshot. 9. Select the new encrypted snapshot copy, click on the Actions button placed at the top right, and select the Restore Snapshot button from the Actions menu. This will restore the encrypted snapshot to a new database instance. 10. On the Restore DB Instance page, enter a unique name for the new database instance in the DB Instance Identifier field. 11. Review the instance configuration details and click Restore DB Instance. 12. Once the new instance provisioning process is complete, update the application configuration to refer to the endpoint of the new encrypted database instance. Once the database endpoint is changed at the application level, the unencrypted instance can be removed. From Command Line: 1. Run the describe-db-instances command to list all RDS database names available in the selected AWS region; the command output should return the database instance identifier. aws rds describe-db-instances --region <region> --query 'DBInstances[*].DBInstanceIdentifier' 2. Run the create-db-snapshot command to create a snapshot for the selected database instance; the command output will return the new snapshot name. aws rds create-db-snapshot --region <region> --db-snapshot-identifier <snapshot_name> --db-instance-identifier <db_instance_identifier> 3. Now run the list-aliases command to list the KMS key aliases available in the specified region; the command output should return each key alias currently available. For the RDS encryption activation process, locate the ID of the AWS default KMS key. aws kms list-aliases --region <region> 4. Run the copy-db-snapshot command using the default KMS key ID for RDS instances returned earlier to create an encrypted copy of the database instance snapshot; the command output will return the encrypted instance snapshot configuration. aws rds copy-db-snapshot --region <region> --source-db-snapshot-identifier <snapshot_name> --target-db-snapshot-identifier <snapshot_name_encrypted> --copy-tags --kms-key-id <kms_key_id> 5. Run the restore-db-instance-from-db-snapshot command to restore the encrypted snapshot created at the previous step to a new database instance; if successful, the command output should return the new encrypted database instance configuration. aws rds restore-db-instance-from-db-snapshot --region <region> --db-instance-identifier <db_instance_identifier_encrypted> --db-snapshot-identifier <snapshot_name_encrypted> 6. Run the describe-db-instances command to list all RDS database names available in the selected AWS region; the output will return each database instance identifier name. Select the encrypted database name that was just created (DB-Name-Encrypted). aws rds describe-db-instances --region <region> --query 'DBInstances[*].DBInstanceIdentifier' 7. Run the describe-db-instances command again using the RDS instance identifier returned earlier to determine if the selected database instance is encrypted; the command output should return the encryption status True. aws rds describe-db-instances --region <region> --db-instance-identifier <db_instance_identifier_encrypted> --query 'DBInstances[*].StorageEncrypted' 2.3.1]
    Configuration Preventive
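The command-line audit above boils down to filtering a describe-db-instances response for instances whose StorageEncrypted flag is not True. A minimal sketch (function name and sample data are illustrative; field names follow the output the audit step queries):

```python
def unencrypted_rds_instances(describe_response: dict) -> list:
    """Return the identifiers of RDS instances in a describe-db-instances
    response that do not have storage encryption enabled."""
    return [db["DBInstanceIdentifier"]
            for db in describe_response.get("DBInstances", [])
            if not db.get("StorageEncrypted", False)]

# Illustrative response for one region:
sample = {"DBInstances": [
    {"DBInstanceIdentifier": "db-a", "StorageEncrypted": True},
    {"DBInstanceIdentifier": "db-b", "StorageEncrypted": False},
]}
print(unencrypted_rds_instances(sample))  # ['db-b']
```

As the audit notes, this check is per region, so it would be repeated against the response for each active region.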
    Configure "Elastic Block Store volume encryption" to organizational standards. CC ID 15434
    [Ensure EBS Volume Encryption is Enabled in all Regions (Automated) Description: Elastic Compute Cloud (EC2) supports encryption at rest when using the Elastic Block Store (EBS) service. While disabled by default, forcing encryption at EBS volume creation is supported. Rationale: Encrypting data at rest reduces the likelihood that it is unintentionally exposed and can nullify the impact of disclosure if the encryption remains unbroken. Impact: Losing access to or removing the KMS key in use by the EBS volumes will result in no longer being able to access the volumes. Audit: From Console: 1. Login to the AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ 2. Under Account attributes, click EBS encryption. 3. Verify Always encrypt new EBS volumes displays Enabled. 4. Review every region in use. Note: EBS volume encryption is configured per region. From Command Line: 1. Run aws --region <region> ec2 get-ebs-encryption-by-default 2. Verify that "EbsEncryptionByDefault": true is displayed. 3. Review every region in use. Note: EBS volume encryption is configured per region. Remediation: From Console: 1. Login to the AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ 2. Under Account attributes, click EBS encryption. 3. Click Manage. 4. Click the Enable checkbox. 5. Click Update EBS encryption 6. Repeat for every region requiring the change. Note: EBS volume encryption is configured per region. From Command Line: 1. Run aws --region <region> ec2 enable-ebs-encryption-by-default 2. Verify that "EbsEncryptionByDefault": true is displayed. 3. Repeat for every region requiring the change. Note: EBS volume encryption is configured per region. 2.2.1]
    Configuration Preventive
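Because this setting is per region, the audit above amounts to collecting the get-ebs-encryption-by-default response from each region and flagging those where the default is off. A minimal sketch; the function name and the region-to-response mapping are our own construct for illustration:

```python
def regions_missing_default_ebs_encryption(results_by_region: dict) -> list:
    """Given {region_name: get-ebs-encryption-by-default response},
    return the regions where default EBS encryption is not enabled."""
    return sorted(region for region, resp in results_by_region.items()
                  if not resp.get("EbsEncryptionByDefault", False))

# Illustrative per-region responses:
results = {
    "us-east-1": {"EbsEncryptionByDefault": True},
    "eu-west-1": {"EbsEncryptionByDefault": False},
}
print(regions_missing_default_ebs_encryption(results))  # ['eu-west-1']
```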
    Configure "Encryption Oracle Remediation" to organizational standards. CC ID 15366 Configuration Preventive
    Configure the "encryption provider" to organizational standards. CC ID 14591 Configuration Preventive
    Configure the "Microsoft network server: Digitally sign communications (always)" to organizational standards. CC ID 07626 Configuration Preventive
    Configure the "Domain member: Digitally encrypt or sign secure channel data (always)" to organizational standards. CC ID 07657 Configuration Preventive
    Configure the "Domain member: Digitally sign secure channel data (when possible)" to organizational standards. CC ID 07678 Configuration Preventive
    Configure the "Network Security: Configure encryption types allowed for Kerberos" to organizational standards. CC ID 07799 Configuration Preventive
    Configure the "System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing" to organizational standards. CC ID 07822 Configuration Preventive
    Configure the "Configure use of smart cards on fixed data drives" to organizational standards. CC ID 08361 Configuration Preventive
    Configure the "Enforce drive encryption type on removable data drives" to organizational standards. CC ID 08363 Configuration Preventive
    Configure the "Configure TPM platform validation profile for BIOS-based firmware configurations" to organizational standards. CC ID 08370 Configuration Preventive
    Configure the "Configure use of passwords for removable data drives" to organizational standards. CC ID 08394 Configuration Preventive
    Configure the "Configure use of hardware-based encryption for removable data drives" to organizational standards. CC ID 08401 Configuration Preventive
    Configure the "Require additional authentication at startup" to organizational standards. CC ID 08422 Configuration Preventive
    Configure the "Deny write access to fixed drives not protected by BitLocker" to organizational standards. CC ID 08429 Configuration Preventive
    Configure the "Configure startup mode" to organizational standards. CC ID 08430 Configuration Preventive
    Configure the "Require client MAPI encryption" to organizational standards. CC ID 08446 Configuration Preventive
    Configure the "Configure dial plan security" to organizational standards. CC ID 08453 Configuration Preventive
    Configure the "Allow access to BitLocker-protected removable data drives from earlier versions of Windows" to organizational standards. CC ID 08457 Configuration Preventive
    Configure the "Enforce drive encryption type on fixed data drives" to organizational standards. CC ID 08460 Configuration Preventive
    Configure the "Allow Secure Boot for integrity validation" to organizational standards. CC ID 08461 Configuration Preventive
    Configure the "Configure use of passwords for operating system drives" to organizational standards. CC ID 08478 Configuration Preventive
    Configure the "Choose how BitLocker-protected removable drives can be recovered" to organizational standards. CC ID 08484 Configuration Preventive
    Configure the "Validate smart card certificate usage rule compliance" to organizational standards. CC ID 08492 Configuration Preventive
    Configure the "Allow enhanced PINs for startup" to organizational standards. CC ID 08495 Configuration Preventive
    Configure the "Choose how BitLocker-protected operating system drives can be recovered" to organizational standards. CC ID 08499 Configuration Preventive
    Configure the "Allow access to BitLocker-protected fixed data drives from earlier versions of Windows" to organizational standards. CC ID 08505 Configuration Preventive
    Configure the "Choose how BitLocker-protected fixed drives can be recovered" to organizational standards. CC ID 08509 Configuration Preventive
    Configure the "Configure use of passwords for fixed data drives" to organizational standards. CC ID 08513 Configuration Preventive
    Configure the "Choose drive encryption method and cipher strength" to organizational standards. CC ID 08537 Configuration Preventive
    Configure the "Choose default folder for recovery password" to organizational standards. CC ID 08541 Configuration Preventive
    Configure the "Prevent memory overwrite on restart" to organizational standards. CC ID 08542 Configuration Preventive
    Configure the "Deny write access to removable drives not protected by BitLocker" to organizational standards. CC ID 08549 Configuration Preventive
    Configure the "opt encrypted" flag to organizational standards. CC ID 14534 Configuration Preventive
    Configure the "Provide the unique identifiers for your organization" to organizational standards. CC ID 08552 Configuration Preventive
    Configure the "Enable use of BitLocker authentication requiring preboot keyboard input on slates" to organizational standards. CC ID 08556 Configuration Preventive
    Configure the "Require encryption on device" to organizational standards. CC ID 08563 Configuration Preventive
    Configure the "Enable S/MIME for OWA 2007" to organizational standards. CC ID 08564 Configuration Preventive
    Configure the "Control use of BitLocker on removable drives" to organizational standards. CC ID 08566 Configuration Preventive
    Configure the "Configure use of hardware-based encryption for fixed data drives" to organizational standards. CC ID 08568 Configuration Preventive
    Configure the "Configure use of smart cards on removable data drives" to organizational standards. CC ID 08570 Configuration Preventive
    Configure the "Enforce drive encryption type on operating system drives" to organizational standards. CC ID 08573 Configuration Preventive
    Configure the "Disallow standard users from changing the PIN or password" to organizational standards. CC ID 08574 Configuration Preventive
    Configure the "Use enhanced Boot Configuration Data validation profile" to organizational standards. CC ID 08578 Configuration Preventive
    Configure the "Allow network unlock at startup" to organizational standards. CC ID 08588 Configuration Preventive
    Configure the "Enable S/MIME for OWA 2010" to organizational standards. CC ID 08592 Configuration Preventive
    Configure the "Configure minimum PIN length for startup" to organizational standards. CC ID 08594 Configuration Preventive
    Configure the "Configure TPM platform validation profile" to organizational standards. CC ID 08598 Configuration Preventive
    Configure the "Configure use of hardware-based encryption for operating system drives" to organizational standards. CC ID 08601 Configuration Preventive
    Configure the "Reset platform validation data after BitLocker recovery" to organizational standards. CC ID 08607 Configuration Preventive
    Configure the "Configure TPM platform validation profile for native UEFI firmware configurations" to organizational standards. CC ID 08614 Configuration Preventive
    Configure the "Do not enable BitLocker until recovery information is stored to AD DS for fixed data drives" setting to organizational standards. CC ID 10039 Configuration Preventive
    Configure the "Save BitLocker recovery information to AD DS for fixed data drives" setting to organizational standards. CC ID 10040 Configuration Preventive
    Configure the "Omit recovery options from the BitLocker setup wizard" setting to organizational standards. CC ID 10041 Configuration Preventive
    Configure the "Do not enable BitLocker until recovery information is stored to AD DS for operating system drives" setting to organizational standards. CC ID 10042 Configuration Preventive
    Configure the "Save BitLocker recovery information to AD DS for operating system drives" setting to organizational standards. CC ID 10043 Configuration Preventive
    Configure the "Allow BitLocker without a compatible TPM" setting to organizational standards. CC ID 10044 Configuration Preventive
    Configure the "Do not enable BitLocker until recovery information is stored to AD DS for removable data drives" setting to organizational standards. CC ID 10045 Configuration Preventive
    Configure the "Save BitLocker recovery information to AD DS for removable data drives" setting to organizational standards. CC ID 10046 Configuration Preventive
    Configure Security settings in accordance with organizational standards. CC ID 08469 Configuration Preventive
    Configure AWS Security Hub to organizational standards. CC ID 17166
    [Ensure AWS Security Hub is enabled (Automated) Description: Security Hub collects security data from across AWS accounts, services, and supported third-party partner products and helps you analyze your security trends and identify the highest priority security issues. When you enable Security Hub, it begins to consume, aggregate, organize, and prioritize findings from AWS services that you have enabled, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie. You can also enable integrations with AWS partner security products. Rationale: AWS Security Hub provides you with a comprehensive view of your security state in AWS and helps you check your environment against security industry standards and best practices - enabling you to quickly assess the security posture across your AWS accounts. Impact: It is recommended AWS Security Hub be enabled in all regions. AWS Security Hub requires AWS Config to be enabled. Audit: The process to evaluate AWS Security Hub configuration per region From Console: 1. Sign in to the AWS Management Console and open the AWS Security Hub console at https://console.aws.amazon.com/securityhub/. 2. On the top right of the console, select the target Region. 3. If presented with the Security Hub > Summary page then Security Hub is set up for the selected region. 4. If presented with Setup Security Hub or Get Started With Security Hub - follow the online instructions. 5. Repeat steps 2 to 4 for each region. From Command Line: Run the following to list the Security Hub status: aws securityhub describe-hub This will list the Security Hub status by region. Audit for the presence of a 'SubscribedAt' value Example output: { "HubArn": "", "SubscribedAt": "2022-08-19T17:06:42.398Z", "AutoEnableControls": true } An error will be returned if Security Hub is not enabled. 
Example error: An error occurred (InvalidAccessException) when calling the DescribeHub operation: Account is not subscribed to AWS Security Hub Remediation: To grant the permissions required to enable Security Hub, attach the Security Hub managed policy AWSSecurityHubFullAccess to an IAM user, group, or role. Enabling Security Hub From Console: 1. Use the credentials of the IAM identity to sign in to the Security Hub console. 2. When you open the Security Hub console for the first time, choose Enable AWS Security Hub. 3. On the welcome page, Security standards lists the security standards that Security Hub supports. 4. Choose Enable Security Hub. From Command Line: 1. Run the enable-security-hub command. To enable the default standards, include --enable-default-standards. aws securityhub enable-security-hub --enable-default-standards 2. To enable Security Hub without the default standards, include --no-enable-default-standards. aws securityhub enable-security-hub --no-enable-default-standards 4.16]
    Configuration Preventive
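The command-line audit above distinguishes two outcomes: a describe-hub response containing a SubscribedAt value (enabled) versus an InvalidAccessException (not subscribed). A minimal sketch of that decision; the function name is ours, and we model the exception case as None rather than raising, which is an assumption for illustration:

```python
def security_hub_enabled(describe_hub_response) -> bool:
    """Interpret a describe-hub result per the audit step above.
    describe-hub raises InvalidAccessException when the account is not
    subscribed; that case is modeled here as None."""
    if describe_hub_response is None:
        return False
    return "SubscribedAt" in describe_hub_response

# Illustrative responses:
subscribed = {"HubArn": "", "SubscribedAt": "2022-08-19T17:06:42.398Z",
              "AutoEnableControls": True}
print(security_hub_enabled(subscribed))  # True
print(security_hub_enabled(None))        # False
```

With an SDK such as boto3, the None branch would instead catch the service's InvalidAccessException.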
    Configure Patch Management settings in accordance with organizational standards. CC ID 08519
    [Ensure Auto Minor Version Upgrade feature is Enabled for RDS Instances (Automated) Description: Ensure that RDS database instances have the Auto Minor Version Upgrade flag enabled in order to automatically receive minor engine upgrades during the specified maintenance window. This way, RDS instances can get the new features, bug fixes, and security patches for their database engines. Rationale: AWS RDS will occasionally deprecate minor engine versions and provide new ones for an upgrade. When the last version number within the release is replaced, the version change is considered minor. With the Auto Minor Version Upgrade feature enabled, the version upgrades will occur automatically during the specified maintenance window so your RDS instances can get the new features, bug fixes, and security patches for their database engines. Audit: From Console: 1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on Databases. 3. Select the RDS instance that you want to examine. 4. Click on the Maintenance and backups panel. 5. Under the Maintenance section, search for the Auto Minor Version Upgrade status. • If the current status is set to Disabled, the feature is not set and the minor engine upgrades released will not be applied to the selected RDS instance From Command Line: 1. Run the describe-db-instances command to list all RDS database names available in the selected AWS region: aws rds describe-db-instances --region <region> --query 'DBInstances[*].DBInstanceIdentifier' 2. The command output should return each database instance identifier. 3. Run the describe-db-instances command again using the RDS instance identifier returned earlier to determine the Auto Minor Version Upgrade status for the selected instance: aws rds describe-db-instances --region <region> --db-instance-identifier <db_instance_identifier> --query 'DBInstances[*].AutoMinorVersionUpgrade' 4. The command output should return the feature's current status. If the current status is set to true, the feature is enabled and the minor engine upgrades will be applied to the selected RDS instance. Remediation: From Console: 1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on Databases. 3. Select the RDS instance that you want to update. 4. Click on the Modify button placed at the top right side. 5. On the Modify DB Instance page, in the Maintenance section, select Auto minor version upgrade and click on the Yes radio button. 6. At the bottom of the page click on Continue, then check Apply Immediately to apply the changes immediately, or select Apply during the next scheduled maintenance window to avoid any downtime. 7. Review the changes and click on Modify DB Instance. The instance status should change from available to modifying and back to available. Once the feature is enabled, the Auto Minor Version Upgrade status should change to Yes. From Command Line: 1. Run the describe-db-instances command to list all RDS database instance names available in the selected AWS region: aws rds describe-db-instances --region <region> --query 'DBInstances[*].DBInstanceIdentifier' 2. The command output should return each database instance identifier. 3. Run the modify-db-instance command to modify the selected RDS instance configuration. This command will apply the changes immediately; remove --apply-immediately to apply changes during the next scheduled maintenance window and avoid any downtime: aws rds modify-db-instance --region <region> --db-instance-identifier <db_instance_identifier> --auto-minor-version-upgrade --apply-immediately 4. The command output should reveal the new configuration metadata for the RDS instance; check the AutoMinorVersionUpgrade parameter value. 5. Run the describe-db-instances command to check if the Auto Minor Version Upgrade feature has been successfully enabled: aws rds describe-db-instances --region <region> --db-instance-identifier <db_instance_identifier> --query 'DBInstances[*].AutoMinorVersionUpgrade' 6. The command output should return the feature's current status set to true, meaning the feature is enabled and minor engine upgrades will be applied to the selected RDS instance. 2.3.2]
    Configuration Preventive
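Like the RDS encryption audit, this check reduces to filtering a describe-db-instances response, here on the AutoMinorVersionUpgrade flag. A minimal sketch (function name and sample data are illustrative; field names follow the output the audit step queries):

```python
def instances_missing_auto_upgrade(describe_response: dict) -> list:
    """Return the identifiers of RDS instances in a describe-db-instances
    response that do not have Auto Minor Version Upgrade enabled."""
    return [db["DBInstanceIdentifier"]
            for db in describe_response.get("DBInstances", [])
            if not db.get("AutoMinorVersionUpgrade", False)]

# Illustrative response for one region:
sample = {"DBInstances": [
    {"DBInstanceIdentifier": "db-a", "AutoMinorVersionUpgrade": True},
    {"DBInstanceIdentifier": "db-b", "AutoMinorVersionUpgrade": False},
]}
print(instances_missing_auto_upgrade(sample))  # ['db-b']
```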
    Configure "Select when Preview Builds and Feature Updates are received" to organizational standards. CC ID 15399 Configuration Preventive
    Configure "Select when Quality Updates are received" to organizational standards. CC ID 15355 Configuration Preventive
    Configure the "Check for missing Windows Updates" to organizational standards. CC ID 08520 Configuration Preventive
  • Technical security
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular TYPE CLASS
    Technical security CC ID 00508 IT Impact Zone IT Impact Zone
    Establish, implement, and maintain an access control program. CC ID 11702 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain an access rights management plan. CC ID 00513 Establish/Maintain Documentation Preventive
    Identify information system users. CC ID 12081 Technical Security Detective
    Establish and maintain contact information for user accounts, as necessary. CC ID 15418
    [Maintain current contact details (Manual) Description: Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization. An AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy or indicative of likely security compromise is observed by the AWS Abuse team. Contact details should not be for a single individual, as circumstances may arise where that individual is unavailable. Email contact details should point to a mail alias which forwards email to multiple individuals within the organization; where feasible, phone contact details should point to a PABX hunt group or other call-forwarding system. Rationale: If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation, proactive measures may be taken, including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question, so it is in both the customers' and AWS' best interests that prompt contact can be established. This is best achieved by setting AWS account contact details to point to resources which have multiple individuals as recipients, such as email aliases and PABX hunt groups. Audit: This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:*Billing ) 1. Sign in to the AWS Management Console and open the Billing and Cost Management console at https://console.aws.amazon.com/billing/home#/. 2. On the navigation bar, choose your account name, and then choose Account. 3. On the Account Settings page, review and verify the current details. 4. 
Under Contact Information, review and verify the current details. Remediation: This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:*Billing ). 1. Sign in to the AWS Management Console and open the Billing and Cost Management console at https://console.aws.amazon.com/billing/home#/. 2. On the navigation bar, choose your account name, and then choose Account. 3. On the Account Settings page, next to Account Settings, choose Edit. 4. Next to the field that you need to update, choose Edit. 5. After you have entered your changes, choose Save changes. 6. After you have made your changes, choose Done. 7. To edit your contact information, under Contact Information, choose Edit. 8. For the fields that you want to change, type your updated information, and then choose Update. 1.1
    Ensure security contact information is registered (Manual) Description: AWS provides customers with the option of specifying the contact information for account's security team. It is recommended that this information be provided. Rationale: Specifying security-specific contact information will help ensure that security advisories sent by AWS reach the team in your organization that is best equipped to respond to them. Audit: Perform the following to determine if security contact information is present: From Console: 1. Click on your account name at the top right corner of the console 2. From the drop-down menu Click My Account 3. Scroll down to the Alternate Contacts section 4. Ensure contact information is specified in the Security section From Command Line: 1. Run the following command: aws account get-alternate-contact --alternate-contact-type SECURITY 2. Ensure proper contact information is specified for the Security contact. Remediation: Perform the following to establish security contact information: From Console: 1. Click on your account name at the top right corner of the console. 2. From the drop-down menu Click My Account 3. Scroll down to the Alternate Contacts section 4. Enter contact information in the Security section From Command Line: Run the following command with the following input parameters: --email-address, --name, and --phone-number. aws account put-alternate-contact --alternate-contact-type SECURITY 1.2]
    Data and Information Management Preventive
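The command-line audit for the security-contact citation above checks the output of `aws account get-alternate-contact --alternate-contact-type SECURITY`. A minimal sketch of that check, assuming the response carries an AlternateContact object (the function name and sample values are invented for illustration):

```python
# Hedged sketch: given the parsed output of
# `aws account get-alternate-contact --alternate-contact-type SECURITY`,
# decide whether a usable security contact is registered.
def security_contact_registered(response):
    contact = response.get("AlternateContact") or {}
    # A contact is only actionable if both an email address and phone number are set.
    return bool(contact.get("EmailAddress")) and bool(contact.get("PhoneNumber"))

ok = {"AlternateContact": {"AlternateContactType": "SECURITY",
                           "Name": "Security Team",              # invented values
                           "EmailAddress": "security@example.com",
                           "PhoneNumber": "+1-555-0100"}}
missing = {}
print(security_contact_registered(ok), security_contact_registered(missing))  # True False
```

Per the guidance above, the email address should be an alias forwarding to multiple people rather than one individual's mailbox.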
    Control access rights to organizational assets. CC ID 00004 Technical Security Preventive
    Configure access control lists in accordance with organizational standards. CC ID 16465
[Ensure that public access is not given to RDS Instance (Automated) Description: Ensure and verify that RDS database instances provisioned in your AWS account restrict unauthorized access in order to minimize security risks. To restrict access to any publicly accessible RDS database instance, you must disable the database Publicly Accessible flag and update the VPC security group associated with the instance. Rationale: Ensure that no public-facing RDS database instances are provisioned in your AWS account and restrict unauthorized access in order to minimize security risks. When the RDS instance allows unrestricted access (0.0.0.0/0), everyone and everything on the Internet can establish a connection to your database, and this can increase the opportunity for malicious activities such as brute force attacks, PostgreSQL injections, or DoS/DDoS attacks. Audit: From Console: 1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. Under the navigation panel, on the RDS Dashboard, click Databases. 3. Select the RDS instance that you want to examine. 4. Click the instance name from the dashboard and look under Connectivity and Security. 5. In the Security section, check whether the Publicly Accessible flag status is set to Yes; if so, follow the below-mentioned steps to check database subnet access. • In the networking section, click the subnet link available under Subnets. • The link will redirect you to the VPC Subnets page. • Select the subnet listed on the page and click the Route Table tab from the dashboard bottom panel. • If the route table contains any entries with the destination CIDR block set to 0.0.0.0/0 and an Internet Gateway attached, the selected RDS database instance was provisioned inside a public subnet, therefore is not running within a logically isolated environment and can be accessible from the Internet. 6. Repeat steps no.
4 and 5 to determine the type (public or private) and subnet for other RDS database instances provisioned in the current region. 7. Change the AWS region from the navigation bar and repeat the audit process for other regions. From Command Line: 1. Run the describe-db-instances command to list all RDS database names available in the selected AWS region: aws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier' 2. The command output should return each database instance identifier. 3. Run the describe-db-instances command again using the PubliclyAccessible parameter as a query filter to reveal the database instance Publicly Accessible flag status: aws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].PubliclyAccessible' 4. Check the Publicly Accessible parameter status. If the Publicly Accessible flag is set to true, the selected RDS database instance is publicly accessible and insecure; follow the below-mentioned steps to check database subnet access. 5. Run the describe-db-instances command again using the RDS database instance identifier that you want to check and appropriate filtering to describe the VPC subnet(s) associated with the selected instance: aws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].DBSubnetGroup.Subnets[]' • The command output should list the subnets available in the selected database subnet group. 6. Run the describe-route-tables command using the ID of the subnet returned at the previous step to describe the routes of the VPC route table associated with the selected subnet: aws ec2 describe-route-tables --region --filters "Name=association.subnet-id,Values=" --query 'RouteTables[*].Routes[]' • If the command returns the route table associated with the database instance subnet ID, check the GatewayId and DestinationCidrBlock attribute values returned in the output.
If the route table contains any entries with the GatewayId value set to igw-xxxxxxxx and the DestinationCidrBlock value set to 0.0.0.0/0, the selected RDS database instance was provisioned inside a public subnet. • Or • If the command returns empty results, the route table is implicitly associated with the subnet, therefore the audit process continues with the next step. 7. Run the describe-db-instances command again using the RDS database instance identifier that you want to check and appropriate filtering to describe the VPC ID associated with the selected instance: aws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].DBSubnetGroup.VpcId' • The command output should show the VPC ID in the selected database subnet group. 8. Now run the describe-route-tables command using the ID of the VPC returned at the previous step to describe the routes of the VPC main route table implicitly associated with the selected subnet: aws ec2 describe-route-tables --region --filters "Name=vpc-id,Values=" "Name=association.main,Values=true" --query 'RouteTables[*].Routes[]' • The command output returns the VPC main route table implicitly associated with the database instance subnet ID. Check the GatewayId and DestinationCidrBlock attribute values returned in the output. If the route table contains any entries with the GatewayId value set to igw-xxxxxxxx and the DestinationCidrBlock value set to 0.0.0.0/0, the selected RDS database instance was provisioned inside a public subnet, therefore is not running within a logically isolated environment and does not adhere to AWS security best practices. Remediation: From Console: 1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. Under the navigation panel, on the RDS Dashboard, click Databases. 3. Select the RDS instance that you want to update. 4. Click Modify from the dashboard top menu. 5.
On the Modify DB Instance panel, under the Connectivity section, click on Additional connectivity configuration and update the value for Publicly Accessible to Not publicly accessible to restrict public access. Follow the below steps to update subnet configurations: • Select the Connectivity and security tab, and click on the VPC attribute value inside the Networking section. • Select the Details tab from the VPC dashboard bottom panel and click on the Route table configuration attribute value. • On the Route table details page, select the Routes tab from the dashboard bottom panel and click on Edit routes. • On the Edit routes page, update the route whose Target is set to igw-xxxxx and click on Save routes. 6. On the Modify DB Instance panel, click on Continue, and in the Scheduling of modifications section, perform one of the following actions based on your requirements: • Select Apply during the next scheduled maintenance window to apply the changes automatically during the next scheduled maintenance window. • Select Apply immediately to apply the changes right away. With this option, any pending modifications will be asynchronously applied as soon as possible, regardless of the maintenance window setting for this RDS database instance. Note that any changes available in the pending modifications queue are also applied. If any of the pending modifications require downtime, choosing this option can cause unexpected downtime for the application. 7. Repeat steps 3 to 6 for each RDS instance available in the current region. 8. Change the AWS region from the navigation bar to repeat the process for other regions. 2.3.3]
    Configuration Preventive
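The two-part command-line audit above combines the instance's PubliclyAccessible flag with whether its subnet's route table sends 0.0.0.0/0 to an internet gateway. A hedged, illustrative sketch over the parsed CLI output (function names and sample IDs are invented):

```python
# A subnet is public when its route table routes 0.0.0.0/0 to an internet gateway.
def route_is_public(route):
    return (route.get("DestinationCidrBlock") == "0.0.0.0/0"
            and str(route.get("GatewayId", "")).startswith("igw-"))

# Summarize exposure for one DB instance entry (from describe-db-instances)
# and the routes of its subnet's route table (from describe-route-tables).
def rds_exposure(db_instance, routes):
    return {
        "publicly_accessible": bool(db_instance.get("PubliclyAccessible")),
        "in_public_subnet": any(route_is_public(r) for r in routes),
    }

db = {"DBInstanceIdentifier": "prod-db", "PubliclyAccessible": True}
routes = [{"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
          {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc1234"}]
print(rds_exposure(db, routes))  # {'publicly_accessible': True, 'in_public_subnet': True}
```

An instance flagged on either count would be remediated per the console steps above (disable Publicly Accessible, fix the subnet routing).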
    Define roles for information systems. CC ID 12454
[Ensure a support role has been created to manage incidents with AWS Support (Automated) Description: AWS provides a support center that can be used for incident notification and response, as well as technical support and customer services. Create an IAM Role, with the appropriate policy assigned, to allow authorized users to manage incidents with AWS Support. Rationale: By implementing least privilege for access control, an IAM Role will require an appropriate IAM Policy to allow Support Center Access in order to manage incidents with AWS Support. Audit: From Command Line: 1. List IAM policies, filter for the 'AWSSupportAccess' managed policy, and note the "Arn" element value: aws iam list-policies --query "Policies[?PolicyName == 'AWSSupportAccess']" 2. Check if the 'AWSSupportAccess' policy is attached to any role: aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess 3. In the output, ensure PolicyRoles does not return empty. Example: PolicyRoles: [ ] If it returns empty, refer to the remediation below. Remediation: From Command Line: 1. Create an IAM role for managing incidents with AWS: • Create a trust relationship policy document that allows managing AWS incidents, and save it locally as /tmp/TrustPolicy.json: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "" }, "Action": "sts:AssumeRole" } ] } 2. Create the IAM role using the above trust policy: aws iam create-role --role-name --assume-role-policy-document file:///tmp/TrustPolicy.json 3. Attach the 'AWSSupportAccess' managed policy to the created IAM role: aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess --role-name 1.17]
    Human Resources Management Preventive
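The audit step above passes only when list-entities-for-policy shows at least one role attached to AWSSupportAccess. A minimal sketch of that decision over the parsed output (the role name in the sample is hypothetical):

```python
# Given the parsed output of
# `aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess`,
# the control passes only if PolicyRoles is non-empty.
def support_role_exists(list_entities_output):
    return len(list_entities_output.get("PolicyRoles", [])) > 0

attached = {"PolicyRoles": [{"RoleName": "incident-support-role"}]}  # hypothetical role name
unattached = {"PolicyRoles": []}
print(support_role_exists(attached), support_role_exists(unattached))  # True False
```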
    Enable role-based access control for objects and users on information systems. CC ID 12458
[Ensure IAM instance roles are used for AWS resource access from instances (Automated) Description: AWS access from within AWS instances can be done by either encoding AWS keys into AWS API calls or by assigning the instance to a role which has an appropriate permissions policy for the required access. "AWS Access" means accessing the APIs of AWS in order to access AWS resources or manage AWS account resources. Rationale: AWS IAM roles reduce the risks associated with sharing and rotating credentials that can be used outside of AWS itself. If credentials are compromised, they can be used from outside of the AWS account they give access to. In contrast, in order to leverage role permissions an attacker would need to gain and maintain access to a specific instance to use the privileges associated with it. Additionally, if credentials are encoded into compiled applications or other hard to change mechanisms, then they are even more unlikely to be properly rotated due to service disruption risks. As time goes on, credentials that cannot be rotated are more likely to be known by an increasing number of individuals who no longer work for the organization owning the credentials. Audit: From Console: 1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at https://console.aws.amazon.com/ec2/. 2. In the left navigation panel, choose Instances. 3. Select the EC2 instance you want to examine. 4. Select Actions. 5. Select View details. 6. Select Security in the lower panel. • If the value for Instance profile arn is an instance profile ARN, then an instance profile (that contains an IAM role) is attached. • If the value for IAM Role is blank, no role is attached. • If the value for IAM Role contains a role name, an IAM role is attached to the instance. • If the value for IAM Role is "No roles attached to instance profile: ", then an instance profile is attached to the instance, but it does not contain an IAM role. 7. Repeat steps 3 to 6 for each EC2 instance in your AWS account. From Command Line: 1.
Run the describe-instances command to list all EC2 instance IDs, available in the selected AWS region. The command output will return each instance ID: aws ec2 describe-instances --region --query 'Reservations[*].Instances[*].InstanceId' 2. Run the describe-instances command again for each EC2 instance using the IamInstanceProfile identifier in the query filter to check if an IAM role is attached: aws ec2 describe-instances --region --instance-id --query 'Reservations[*].Instances[*].IamInstanceProfile' 3. If an IAM role is attached, the command output will show the IAM instance profile ARN and ID. 4. Repeat steps 1 to 3 for each EC2 instance in your AWS account. Remediation: From Console: 1. Sign in to the AWS Management Console and navigate to EC2 dashboard at https://console.aws.amazon.com/ec2/. 2. In the left navigation panel, choose Instances. 3. Select the EC2 instance you want to modify. 4. Click Actions. 5. Click Security. 6. Click Modify IAM role. 7. Click Create new IAM role if a new IAM role is required. 8. Select the IAM role you want to attach to your instance in the IAM role dropdown. 9. Click Update IAM role. 10. Repeat steps 3 to 9 for each EC2 instance in your AWS account that requires an IAM role to be attached. From Command Line: 1. Run the describe-instances command to list all EC2 instance IDs, available in the selected AWS region: aws ec2 describe-instances --region --query 'Reservations[*].Instances[*].InstanceId' 2. Run the associate-iam-instance-profile command to attach an instance profile (which is attached to an IAM role) to the EC2 instance: aws ec2 associate-iam-instance-profile --region --instance-id --iam-instance-profile Name="Instance-Profile-Name" 3. Run the describe-instances command again for the recently modified EC2 instance. The command output should return the instance profile ARN and ID: aws ec2 describe-instances --region --instance-id --query 'Reservations[*].Instances[*].IamInstanceProfile' 4. 
Repeat steps 1 to 3 for each EC2 instance in your AWS account that requires an IAM role to be attached. 1.18]
    Technical Security Preventive
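The command-line audit above boils down to walking the Reservations structure from describe-instances and collecting instances with no IamInstanceProfile. An illustrative sketch (instance IDs and ARN are invented sample data):

```python
# Hedged sketch: collect EC2 instances missing an instance profile, given the
# parsed 'Reservations' list from `aws ec2 describe-instances`.
def instances_without_role(reservations):
    missing = []
    for reservation in reservations:
        for instance in reservation.get("Instances", []):
            if not instance.get("IamInstanceProfile"):
                missing.append(instance["InstanceId"])
    return missing

sample = [{"Instances": [
    {"InstanceId": "i-0aaa",
     "IamInstanceProfile": {"Arn": "arn:aws:iam::111122223333:instance-profile/app"}},
    {"InstanceId": "i-0bbb"},  # no profile attached
]}]
print(instances_without_role(sample))  # ['i-0bbb']
```

Each flagged instance would then get a profile attached via associate-iam-instance-profile as shown in the remediation.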
    Control user privileges. CC ID 11665
    [Ensure IAM Users Receive Permissions Only Through Groups (Automated) Description: IAM users are granted access to services, functions, and data through IAM policies. There are four ways to define policies for a user: 1) Edit the user policy directly, aka an inline, or user, policy; 2) attach a policy directly to a user; 3) add the user to an IAM group that has an attached policy; 4) add the user to an IAM group that has an inline policy. Only the third implementation is recommended. Rationale: Assigning IAM policy only through groups unifies permissions management to a single, flexible layer consistent with organizational functional roles. By unifying permissions management, the likelihood of excessive permissions is reduced. Audit: Perform the following to determine if an inline policy is set or a policy is directly attached to users: 1. Run the following to get a list of IAM users: aws iam list-users --query 'Users[*].UserName' --output text 2. For each user returned, run the following command to determine if any policies are attached to them: aws iam list-attached-user-policies --user-name aws iam list-user-policies --user-name 3. If any policies are returned, the user has an inline policy or direct policy attachment. Remediation: Perform the following to create an IAM group and assign a policy to it: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, click Groups and then click Create New Group. 3. In the Group Name box, type the name of the group and then click Next Step . 4. In the list of policies, select the check box for each policy that you want to apply to all members of the group. Then click Next Step . 5. Click Create Group Perform the following to add a user to a given group: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, click Groups 3. Select the group to add a user to 4. 
Click Add Users To Group 5. Select the users to be added to the group 6. Click Add Users Perform the following to remove a direct association between a user and policy: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. 2. In the left navigation pane, click on Users 3. For each user: o Select the user o Click on the Permissions tab o Expand Permissions policies o Click X for each policy; then click Detach or Remove (depending on policy type) 1.15]
    Technical Security Preventive
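Per the audit above, a user is non-compliant if either list-attached-user-policies or list-user-policies returns anything for them. A small illustrative check over per-user results gathered from those two commands (user and policy names are invented):

```python
# Sketch mirroring the audit: flag users with any directly attached or inline
# policy. Inputs map user name -> list of policy names, as collected from
# `aws iam list-attached-user-policies` / `aws iam list-user-policies`.
def users_with_direct_policies(attached_by_user, inline_by_user):
    flagged = set()
    for user, policies in attached_by_user.items():
        if policies:
            flagged.add(user)
    for user, policies in inline_by_user.items():
        if policies:
            flagged.add(user)
    return sorted(flagged)

attached = {"alice": [], "bob": ["ReadOnlyAccess"]}   # invented sample data
inline = {"alice": ["s3-inline"], "bob": []}
print(users_with_direct_policies(attached, inline))  # ['alice', 'bob']
```

Both flagged users would need their policies moved to an IAM group per the remediation steps.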
    Establish and maintain a list of individuals authorized to perform privileged functions. CC ID 17005 Establish/Maintain Documentation Preventive
    Enforce usage restrictions for superuser accounts. CC ID 07064
[{administrative tasks} Eliminate use of the 'root' user for administrative and daily tasks (Manual) Description: With the creation of an AWS account, a 'root user' is created that cannot be disabled or deleted. That user has unrestricted access to and control over all resources in the AWS account. It is highly recommended that the use of this account be avoided for everyday tasks. Rationale: The 'root user' has unrestricted access to and control over all account resources. Use of it is inconsistent with the principles of least privilege and separation of duties, and can lead to unnecessary harm due to error or account compromise. Audit: From Console: 1. Login to the AWS Management Console at https://console.aws.amazon.com/iam/ 2. In the left pane, click Credential Report 3. Click on Download Report 4. Open or Save the file locally 5. Locate the <root_account> entry under the user column 6. Review password_last_used, access_key_1_last_used_date, access_key_2_last_used_date to determine when the 'root user' was last used. From Command Line: Run the following CLI commands to provide a credential report for determining the last time the 'root user' was used: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,5,11,16 | grep -B1 '<root_account>' Review password_last_used, access_key_1_last_used_date, access_key_2_last_used_date to determine when the root user was last used. Note: There are a few conditions under which the use of the 'root' user account is required. Please see the reference links for all of the tasks that require use of the 'root' user. Remediation: If you find that the 'root' user account is being used for daily activity to include administrative tasks that do not require the 'root' user: 1. Change the 'root' user password. 2. Deactivate or delete any access keys associated with the 'root' user.
Remember, anyone who has 'root' user credentials for your AWS account has unrestricted access to and control of all the resources in your account, including billing information. 1.7]
    Technical Security Preventive
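The credential report is CSV, and the root user appears in it under the literal user name `<root_account>`. As a sketch of the command-line audit's parsing step (the report sample below is trimmed and invented; the real report has many more columns):

```python
import csv
import io

# Extract the root user's last-used fields from a credential report CSV.
def root_last_activity(report_text):
    for row in csv.DictReader(io.StringIO(report_text)):
        if row["user"] == "<root_account>":
            return (row["password_last_used"],
                    row["access_key_1_last_used_date"],
                    row["access_key_2_last_used_date"])
    return None  # root row not found

report = ("user,password_last_used,access_key_1_last_used_date,access_key_2_last_used_date\n"
          "<root_account>,2024-01-05T10:00:00+00:00,N/A,N/A\n"
          "alice,2024-06-01T09:00:00+00:00,2024-06-02T09:00:00+00:00,N/A\n")
print(root_last_activity(report))  # ('2024-01-05T10:00:00+00:00', 'N/A', 'N/A')
```

Recent timestamps in any of the three fields indicate the root user is being used for day-to-day work and the remediation above applies.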
    Establish, implement, and maintain user accounts in accordance with the organizational Governance, Risk, and Compliance framework. CC ID 00526
[Ensure IAM users are managed centrally via identity federation or AWS Organizations for multi-account environments (Manual) Description: In multi-account environments, IAM user centralization facilitates greater user control. User access beyond the initial account is then provided via role assumption. Centralization of users can be accomplished through federation with an external identity provider or through the use of AWS Organizations. Rationale: Centralizing IAM user management to a single identity store reduces complexity and thus the likelihood of access management errors. Audit: For multi-account AWS environments with an external identity provider: 1. Determine the master account for identity federation or IAM user management 2. Login to that account through the AWS Management Console 3. Click Services 4. Click IAM 5. Click Identity providers 6. Verify the configuration Then, determine all accounts that should not have local users present, and for each of those accounts: 1. Log into the AWS Management Console 2. Switch role into the identified account 3. Click Services 4. Click IAM 5. Click Users 6. Confirm that no IAM users representing individuals are present For multi-account AWS environments implementing AWS Organizations without an external identity provider: 1. Determine all accounts that should not have local users present 2. Log into the AWS Management Console 3. Switch role into each identified account 4. Click Services 5. Click IAM 6. Click Users 7. Confirm that no IAM users representing individuals are present Remediation: The remediation procedure will vary based on the individual organization's implementation of identity federation and/or AWS Organizations, with the acceptance criteria that no non-service IAM users, and non-root accounts, are present outside the account providing centralized IAM user management. 1.21]
    Technical Security Preventive
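The per-account audit step above amounts to confirming that list-users returns nothing beyond known service users. A hedged sketch of that comparison (the allowlist idea and all names are invented for illustration; the benchmark itself only asks that no users representing individuals be present):

```python
# Given the parsed output of `aws iam list-users`, flag any user name that is
# not on an allowlist of expected service users (allowlist is hypothetical).
def unexpected_iam_users(list_users_output, allowed_service_users=frozenset()):
    return [user["UserName"]
            for user in list_users_output.get("Users", [])
            if user["UserName"] not in allowed_service_users]

sample = {"Users": [{"UserName": "backup-svc"}, {"UserName": "jdoe"}]}  # invented
print(unexpected_iam_users(sample, allowed_service_users={"backup-svc"}))  # ['jdoe']
```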
    Configure firewalls to deny all traffic by default, except explicitly designated traffic. CC ID 00547
[{do not allow} Ensure no Network ACLs allow ingress from 0.0.0.0/0 to remote server administration ports (Automated) Description: The Network Access Control List (NACL) function provides stateless filtering of ingress and egress network traffic to AWS resources. It is recommended that no NACL allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389, using either the TCP (6), UDP (17) or ALL (-1) protocols. Rationale: Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise. Audit: From Console: Perform the following to determine if the account is configured as prescribed: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Network ACLs 3. For each network ACL, perform the following: o Select the network ACL o Click the Inbound Rules tab o Ensure no rule exists that has a port range that includes port 22, 3389, using the protocols TCP (6), UDP (17) or ALL (-1), or other remote server administration ports for your environment, and has a Source of 0.0.0.0/0 and shows ALLOW Note: A Port value of ALL or a port range such as 0-1024 are inclusive of port 22, 3389, and other remote server administration ports Remediation: From Console: Perform the following: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Network ACLs 3. For each network ACL to remediate, perform the following: o Select the network ACL o Click the Inbound Rules tab o Click Edit inbound rules o Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click Delete to remove the offending inbound rule o Click Save 5.1
{do not allow} Ensure no security groups allow ingress from 0.0.0.0/0 to remote server administration ports (Automated) Description: Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389, using either the TCP (6), UDP (17) or ALL (-1) protocols. Rationale: Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise. Impact: When updating an existing environment, ensure that administrators have access to remote server administration ports through another mechanism before removing access by deleting the 0.0.0.0/0 inbound rule. Audit: Perform the following to determine if the account is configured as prescribed: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Security Groups 3. For each security group, perform the following: 4. Select the security group 5. Click the Inbound Rules tab 6. Ensure no rule exists that has a port range that includes port 22, 3389, using the protocols TCP (6), UDP (17) or ALL (-1), or other remote server administration ports for your environment, and has a Source of 0.0.0.0/0 Note: A Port value of ALL or a port range such as 0-1024 are inclusive of port 22, 3389, and other remote server administration ports. Remediation: Perform the following to implement the prescribed state: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Security Groups 3. For each security group, perform the following: 4. Select the security group 5. Click the Inbound Rules tab 6. Click the Edit inbound rules button 7. Identify the rules to be edited or removed 8.
Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click Delete to remove the offending inbound rule 9. Click Save rules 5.2
    {do not allow} Ensure no security groups allow ingress from ::/0 to remote server administration ports (Automated) Description: Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389. Rationale: Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise. Impact: When updating an existing environment, ensure that administrators have access to remote server administration ports through another mechanism before removing access by deleting the ::/0 inbound rule. Audit: Perform the following to determine if the account is configured as prescribed: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Security Groups 3. For each security group, perform the following: 4. Select the security group 5. Click the Inbound Rules tab 6. Ensure no rule exists that has a port range that includes port 22, 3389, or other remote server administration ports for your environment and has a Source of ::/0 Note: A Port value of ALL or a port range such as 0-1024 are inclusive of port 22, 3389, and other remote server administration ports. Remediation: Perform the following to implement the prescribed state: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Security Groups 3. For each security group, perform the following: 4. Select the security group 5. Click the Inbound Rules tab 6. Click the Edit inbound rules button 7. Identify the rules to be edited or removed 8. Either A) update the Source field to a range other than ::/0, or, B) Click Delete to remove the offending inbound rule 9. Click Save rules 5.3
    Ensure the default security group of every VPC restricts all traffic (Automated) Description: A VPC comes with a default security group whose initial settings deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances assigned to the security group. If you don't specify a security group when you launch an instance, the instance is automatically assigned to this default security group. Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It is recommended that the default security group restrict all traffic. The default VPC in every region should have its default security group updated to comply. Any newly created VPCs will automatically contain a default security group that will need remediation to comply with this recommendation. NOTE: When implementing this recommendation, VPC flow logging is invaluable in determining the least privilege port access required by systems to work properly because it can log all packet acceptances and rejections occurring under the current security groups. This dramatically reduces the primary barrier to least privilege engineering - discovering the minimum ports required by systems in the environment. Even if the VPC flow logging recommendation in this benchmark is not adopted as a permanent security measure, it should be used during any period of discovery and engineering for least privileged security groups. Rationale: Configuring all VPC default security groups to restrict all traffic will encourage least privilege security group development and mindful placement of AWS resources into security groups which will in-turn reduce the exposure of those resources. Impact: Implementing this recommendation in an existing VPC containing operating resources requires extremely careful migration planning as the default security groups are likely to be enabling many ports that are unknown. 
Enabling VPC flow logging (of accepts) in an existing environment that is known to be breach free will reveal the current pattern of ports being used for each instance to communicate successfully. Audit: Perform the following to determine if the account is configured as prescribed: Security Group State 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. Repeat the next steps for all VPCs - including the default VPC in each AWS region: 3. In the left pane, click Security Groups 4. For each default security group, perform the following: 5. Select the default security group 6. Click the Inbound Rules tab 7. Ensure no rules exist 8. Click the Outbound Rules tab 9. Ensure no rules exist Security Group Members 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. Repeat the next steps for all default groups in all VPCs - including the default VPC in each AWS region: 3. In the left pane, click Security Groups 4. Copy the id of the default security group. 5. Change to the EC2 Management Console at https://console.aws.amazon.com/ec2/v2/home 6. In the filter column type 'Security Group ID : < security group id from #4 >' Remediation: Security Group Members Perform the following to implement the prescribed state: 1. Identify AWS resources that exist within the default security group 2. Create a set of least privilege security groups for those resources 3. Place the resources in those security groups 4. Remove the resources noted in #1 from the default security group Security Group State 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. Repeat the next steps for all VPCs - including the default VPC in each AWS region: 3. In the left pane, click Security Groups 4. For each default security group, perform the following: 5. Select the default security group 6. Click the Inbound Rules tab 7. Remove any inbound rules 8. Click the Outbound Rules tab 9. 
Remove any Outbound rules. Recommended: IAM groups allow you to edit the "name" field. After remediating default group rules for all VPCs in all regions, edit this field to add text similar to "DO NOT USE. DO NOT ADD RULES" 5.4]
    Configuration Preventive
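The default-security-group audit above reduces to a simple predicate: both rule lists must be empty. As an illustrative sketch (not part of the benchmark), using data shaped like `aws ec2 describe-security-groups` output:

```python
# Illustrative sketch: a default security group complies with the audit above
# when it has no inbound and no outbound rules. Input is shaped like the
# output of `aws ec2 describe-security-groups`; sample data is hypothetical.
def default_group_restricts_all(group):
    """True when the group has no inbound and no outbound rules."""
    return not group.get("IpPermissions") and not group.get("IpPermissionsEgress")

fresh_default = {"GroupName": "default",
                 "IpPermissions": [{"IpProtocol": "-1"}],        # allow-all inbound (self)
                 "IpPermissionsEgress": [{"IpProtocol": "-1"}]}  # allow-all outbound
remediated = {"GroupName": "default", "IpPermissions": [], "IpPermissionsEgress": []}

print(default_group_restricts_all(fresh_default))  # False
print(default_group_restricts_all(remediated))     # True
```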
    Configure firewalls to generate an audit log. CC ID 12038
[Ensure security group changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. Security Groups are a stateful packet filter that controls ingress and egress traffic within a VPC. It is recommended that a metric filter and alarm be established for detecting changes to Security Groups. Rationale: Monitoring changes to security groups will help ensure that resources and services are not unintentionally exposed. CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Impact: This may require additional 'tuning' to eliminate false positives and filter out expected activity so anomalies are easier to detect. Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify multi-region CloudTrails: Trails with "IsMultiRegionTrail" set to true • From value associated with CloudWatchLogsLogGroupArn note Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure the identified multi-region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure the identified multi-region CloudTrail captures all Management Events aws cloudtrail get-event-selectors --trail-name Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. 
Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }" 4. Note the value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4. aws cloudwatch describe-alarms --query "MetricAlarms[?MetricName== '']" 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn at least one subscription should have "SubscriptionArn" with valid AWS ARN. Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided which checks for security group changes and the taken from audit step 1. aws logs put-metric-filter --log-group-name "" --filter-name "" --metric-transformations metricName= "" ,metricNamespace="CISBenchmark",metricValue=1 --filter-pattern "{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }" Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. 
Create an SNS topic that the alarm will notify aws sns create-topic --name "" Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn "" --protocol --notification-endpoint "" Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name "" --metric-name "" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace "CISBenchmark" --alarm-actions "" 4.10]
    Audits and Risk Management Preventive
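Audit step 3 above checks that a metric filter pattern names all six security-group API calls. As an illustrative sketch (not part of the benchmark), that check can be expressed over data shaped like `aws logs describe-metric-filters` output:

```python
# Illustrative sketch: verify that some metric filter pattern mentions every
# security-group API call listed in audit step 3. Input is shaped like the
# output of `aws logs describe-metric-filters`; sample data is hypothetical.
REQUIRED_EVENTS = [
    "AuthorizeSecurityGroupIngress", "AuthorizeSecurityGroupEgress",
    "RevokeSecurityGroupIngress", "RevokeSecurityGroupEgress",
    "CreateSecurityGroup", "DeleteSecurityGroup",
]

def covers_security_group_changes(response):
    """True if any filter pattern names all six required event names."""
    return any(all(e in f.get("filterPattern", "") for e in REQUIRED_EVENTS)
               for f in response.get("metricFilters", []))

complete = {"metricFilters": [{"filterPattern":
    "{ ($.eventName = AuthorizeSecurityGroupIngress) || "
    "($.eventName = AuthorizeSecurityGroupEgress) || "
    "($.eventName = RevokeSecurityGroupIngress) || "
    "($.eventName = RevokeSecurityGroupEgress) || "
    "($.eventName = CreateSecurityGroup) || "
    "($.eventName = DeleteSecurityGroup) }"}]}
partial = {"metricFilters": [{"filterPattern": "{ ($.eventName = CreateSecurityGroup) }"}]}

print(covers_security_group_changes(complete))  # True
print(covers_security_group_changes(partial))   # False
```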
    Restrict access to restricted data and restricted information on a need to know basis. CC ID 12453
[Ensure access to AWSCloudShellFullAccess is restricted (Manual) Description: AWS CloudShell is a convenient way of running CLI commands against AWS services; a managed IAM policy ('AWSCloudShellFullAccess') provides full access to CloudShell, which allows file upload and download capability between a user's local system and the CloudShell environment. Within the CloudShell environment a user has sudo permissions, and can access the internet. So it is feasible to install file transfer software (for example) and move data from CloudShell to external internet servers. Rationale: Access to this policy should be restricted as it presents a potential channel for data exfiltration by malicious cloud admins that are given full permissions to the service. AWS documentation describes how to create a more restrictive IAM policy which denies file transfer permissions. Audit: From Console 1. Open the IAM console at https://console.aws.amazon.com/iam/ 2. In the left pane, select Policies 3. Search for and select AWSCloudShellFullAccess 4. On the Entities attached tab, ensure that there are no entities using this policy From Command Line 1. List IAM policies, filter for the 'AWSCloudShellFullAccess' managed policy, and note the "Arn" element value: aws iam list-policies --query "Policies[?PolicyName == 'AWSCloudShellFullAccess']" 2. Check if the 'AWSCloudShellFullAccess' policy is attached to any role: aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSCloudShellFullAccess 3. In the output, ensure PolicyRoles returns empty. Example: "PolicyRoles": [ ] If it does not return empty, refer to the remediation below. Note: Keep in mind that other policies may grant access. Remediation: From Console 1. Open the IAM console at https://console.aws.amazon.com/iam/ 2. In the left pane, select Policies 3. Search for and select AWSCloudShellFullAccess 4. On the Entities attached tab, for each item, check the box and select Detach 1.22]
    Data and Information Management Preventive
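The command-line audit above passes when `list-entities-for-policy` returns no attached users, groups, or roles. As an illustrative sketch (not part of the benchmark), over data shaped like that response:

```python
# Illustrative sketch: check data shaped like `aws iam list-entities-for-policy`
# output for any attachment of the AWSCloudShellFullAccess managed policy.
# The sample role name below is hypothetical.
def policy_unattached(entities):
    """True when no users, groups, or roles have the policy attached."""
    return not (entities.get("PolicyUsers") or entities.get("PolicyGroups")
                or entities.get("PolicyRoles"))

clean = {"PolicyUsers": [], "PolicyGroups": [], "PolicyRoles": []}
attached = {"PolicyUsers": [], "PolicyGroups": [],
            "PolicyRoles": [{"RoleName": "ops-admin-role"}]}

print(policy_unattached(clean))     # True
print(policy_unattached(attached))  # False
```

As the audit note says, a clean result here does not prove CloudShell access is restricted; other policies may still grant it.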
    Implement multifactor authentication techniques. CC ID 00561
    [Ensure multi-factor authentication (MFA) is enabled for all IAM users that have a console password (Automated) Description: Multi-Factor Authentication (MFA) adds an extra layer of authentication assurance beyond traditional credentials. With MFA enabled, when a user signs in to the AWS Console, they will be prompted for their user name and password as well as for an authentication code from their physical or virtual MFA token. It is recommended that MFA be enabled for all accounts that have a console password. Rationale: Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that displays a time-sensitive key and have knowledge of a credential. Audit: Perform the following to determine if a MFA device is enabled for all IAM users having a console password: From Console: 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the left pane, select Users 3. If the MFA or Password age columns are not visible in the table, click the gear icon at the upper right corner of the table and ensure a checkmark is next to both, then click Close. 4. Ensure that for each user where the Password age column shows a password age, the MFA column shows Virtual, U2F Security Key, or Hardware. From Command Line: 1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their password and MFA status: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,8 2. The output of this command will produce a table similar to the following: user,password_enabled,mfa_active elise,false,false brandon,true,true rakesh,false,false helene,false,false paras,true,true anitha,false,false 3. For any column having password_enabled set to true , ensure mfa_active is also set to true. Remediation: Perform the following to enable MFA: From Console: 1. 
Sign in to the AWS Management Console and open the IAM console at 'https://console.aws.amazon.com/iam/' 2. In the left pane, select Users. 3. In the User Name list, choose the name of the intended MFA user. 4. Choose the Security Credentials tab, and then choose Manage MFA Device. 5. In the Manage MFA Device wizard, choose Virtual MFA device, and then choose Continue. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes. 6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications at https://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications). If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device). 7. Determine whether the MFA app supports QR codes, and then do one of the following: • Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code. • In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application. When you are finished, the virtual MFA device starts generating one-time passwords. 8. In the Manage MFA Device wizard, in the MFA Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the MFA Code 2 box. 9. Click Assign MFA. 1.10]
    Configuration Preventive
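The command-line audit above pipes the decoded credential report through `cut` and then inspects the `password_enabled` and `mfa_active` columns by eye. As an illustrative sketch (not part of the benchmark), the same check can be done over the decoded CSV; the sample rows below are hypothetical, modeled on the audit step's example output.

```python
import csv
import io

# Illustrative sketch: parse the decoded IAM credential report columns shown
# in audit step 2 and list console users (password_enabled = true) that do
# not have MFA active. Sample rows are hypothetical.
def users_missing_mfa(report_text):
    """Names of users with a console password but no MFA device."""
    reader = csv.DictReader(io.StringIO(report_text))
    return [row["user"] for row in reader
            if row["password_enabled"] == "true" and row["mfa_active"] != "true"]

report = """user,password_enabled,mfa_active
elise,false,false
brandon,true,true
helene,true,false"""

print(users_missing_mfa(report))  # ['helene']
```

Users without a console password (like elise above) are out of scope for this recommendation, so only helene is flagged.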
    Refrain from allowing individuals to self-enroll into multifactor authentication from untrusted devices. CC ID 17173 Technical Security Preventive
    Implement phishing-resistant multifactor authentication techniques. CC ID 16541 Technical Security Preventive
    Document and approve requests to bypass multifactor authentication. CC ID 15464 Establish/Maintain Documentation Preventive
    Encrypt in scope data or in scope information, as necessary. CC ID 04824
[Ensure that encryption is enabled for EFS file systems (Automated) Description: EFS data should be encrypted at rest using AWS KMS (Key Management Service). Rationale: Data should be encrypted at rest to reduce the risk of a data breach via direct access to the storage device. Audit: From Console: 1. Login to the AWS Management Console and navigate to the Elastic File System (EFS) dashboard. 2. Select File Systems from the left navigation panel. 3. Each item on the list has a visible Encrypted field that displays data at rest encryption status. 4. Validate that this field reads Encrypted for all EFS file systems in all AWS regions. From CLI: 1. Run describe-file-systems command using custom query filters to list the identifiers of all AWS EFS file systems currently available within the selected region: aws efs describe-file-systems --region --output table --query 'FileSystems[*].FileSystemId' 2. The command output should return a table with the requested file system IDs. 3. Run describe-file-systems command using the ID of the file system that you want to examine as identifier and the necessary query filters: aws efs describe-file-systems --region --file-system-id --query 'FileSystems[*].Encrypted' 4. The command output should return the file system encryption status true or false. If the returned value is false, the selected AWS EFS file system is not encrypted and if the returned value is true, the selected AWS EFS file system is encrypted. Remediation: It is important to note that EFS file system data at rest encryption must be turned on when creating the file system. If an EFS file system has been created without data at rest encryption enabled then you must create another EFS file system with the correct configuration and transfer the data. Steps to create an EFS file system with data encrypted at rest: From Console: 1. Login to the AWS Management Console and navigate to the Elastic File System (EFS) dashboard. 2. 
Select File Systems from the left navigation panel. 3. Click Create File System button from the dashboard top menu to start the file system setup process. 4. On the Configure file system access configuration page, perform the following actions. • Choose the right VPC from the VPC dropdown list. • Within Create mount targets section, select the checkboxes for all of the Availability Zones (AZs) within the selected VPC. These will be your mount targets. • Click Next step to continue. 5. Perform the following on the Configure optional settings page. • Create tags to describe your new file system. • Choose performance mode based on your requirements. • Check Enable encryption checkbox and choose aws/elasticfilesystem from Select KMS master key dropdown list to enable encryption for the new file system using the default master key provided and managed by AWS KMS. • Click Next step to continue. 6. Review the file system configuration details on the review and create page and then click Create File System to create your new AWS EFS file system. 7. Copy the data from the old unencrypted EFS file system onto the newly created encrypted file system. 8. Remove the unencrypted file system as soon as your data migration to the newly created encrypted file system is completed. 9. Change the AWS region from the navigation bar and repeat the entire process for other AWS regions. From CLI: 1. Run describe-file-systems command to describe the configuration information available for the selected (unencrypted) file system (see Audit section to identify the right resource): aws efs describe-file-systems --region --file-system-id 2. The command output should return the requested configuration information. 3. To provision a new AWS EFS file system, you need to generate a universally unique identifier (UUID) in order to create the token required by the create-file-system command. To create the required token, you can use a randomly generated UUID from "https://www.uuidgenerator.net". 4. 
Run create-file-system command using the unique token created at the previous step. aws efs create-file-system --region --creation-token --performance-mode generalPurpose --encrypted 5. The command output should return the new file system configuration metadata. 6. Run create-mount-target command using the newly created EFS file system ID returned at the previous step as identifier and the ID of the Availability Zone (AZ) that will represent the mount target: aws efs create-mount-target --region --file-system-id --subnet-id 7. The command output should return the new mount target metadata. 8. Now you can mount your file system from an EC2 instance. 9. Copy the data from the old unencrypted EFS file system onto the newly created encrypted file system. 10. Remove the unencrypted file system as soon as your data migration to the newly created encrypted file system is completed. aws efs delete-file-system --region --file-system-id 11. Change the AWS region by updating the --region and repeat the entire process for other AWS regions. Default Value: EFS file system data is encrypted at rest by default when creating a file system via the Console. Encryption at rest is not enabled by default when creating a new file system using the AWS CLI, API, and SDKs. 2.4.1
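The CLI audit above inspects the Encrypted flag per file system. As an illustrative sketch (not part of the benchmark), a batch version of that check over data shaped like `aws efs describe-file-systems` output:

```python
# Illustrative sketch: list EFS file systems whose Encrypted flag is false,
# using data shaped like `aws efs describe-file-systems` output. The sample
# file system IDs below are hypothetical.
def unencrypted_file_systems(response):
    """IDs of file systems that are not encrypted at rest."""
    return [fs["FileSystemId"] for fs in response.get("FileSystems", [])
            if not fs.get("Encrypted", False)]

response = {"FileSystems": [
    {"FileSystemId": "fs-11111111", "Encrypted": True},
    {"FileSystemId": "fs-22222222", "Encrypted": False},  # needs migration
]}

print(unencrypted_file_systems(response))  # ['fs-22222222']
```

Any ID returned would need the migrate-and-delete remediation described above, since encryption at rest cannot be enabled on an existing EFS file system.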
Ensure CloudTrail logs are encrypted at rest using KMS CMKs (Automated) Description: AWS CloudTrail is a web service that records AWS API calls for an account and makes those logs available to users and resources in accordance with IAM policies. AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data, and uses Hardware Security Modules (HSMs) to protect the security of encryption keys. CloudTrail logs can be configured to leverage server side encryption (SSE) and KMS customer-created master keys (CMK) to further protect CloudTrail logs. It is recommended that CloudTrail be configured to use SSE-KMS. Rationale: Configuring CloudTrail to use SSE-KMS provides additional confidentiality controls on log data as a given user must have S3 read permission on the corresponding log bucket and must be granted decrypt permission by the CMK policy. Impact: Customer created keys incur an additional cost. See https://aws.amazon.com/kms/pricing/ for more information. Audit: Perform the following to determine if CloudTrail is configured to use SSE-KMS: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. In the left navigation pane, choose Trails. 3. Select a Trail 4. Under the S3 section, ensure Encrypt log files is set to Yes and a KMS key ID is specified in the KMS Key Id field. From Command Line: 1. Run the following command: aws cloudtrail describe-trails 2. For each trail listed, SSE-KMS is enabled if the trail has a KmsKeyId property defined. Remediation: Perform the following to configure CloudTrail to use SSE-KMS: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. In the left navigation pane, choose Trails. 3. Click on a Trail 4. Under the S3 section click on the edit button (pencil icon) 5. Click Advanced 6. 
Select an existing CMK from the KMS key Id drop-down menu • Note: Ensure the CMK is located in the same region as the S3 bucket • Note: You will need to apply a KMS Key policy on the selected CMK in order for CloudTrail as a service to encrypt and decrypt log files using the CMK provided. Steps are provided here for editing the selected CMK Key policy 7. Click Save 8. You will see a notification message stating that you need to have decrypt permissions on the specified KMS key to decrypt log files. 9. Click Yes From Command Line: aws cloudtrail update-trail --name --kms-key-id aws kms put-key-policy --key-id --policy 3.5]
    Data and Information Management Preventive
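Per audit step 2 of the CloudTrail SSE-KMS recommendation above, a trail is compliant when it has a KmsKeyId property. As an illustrative sketch (not part of the benchmark), over data shaped like `aws cloudtrail describe-trails` output; the trail names and key ARN below are hypothetical:

```python
# Illustrative sketch: list trails lacking a KmsKeyId (i.e., not using
# SSE-KMS), using data shaped like `aws cloudtrail describe-trails` output.
# Trail names and the key ARN are hypothetical sample values.
def trails_without_sse_kms(response):
    """Names of trails whose logs are not encrypted with a KMS CMK."""
    return [t.get("Name") for t in response.get("trailList", [])
            if not t.get("KmsKeyId")]

response = {"trailList": [
    {"Name": "encrypted-trail",
     "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id"},
    {"Name": "plain-trail"},  # no KmsKeyId property defined
]}

print(trails_without_sse_kms(response))  # ['plain-trail']
```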
    Change cryptographic keys in accordance with organizational standards. CC ID 01302
[Ensure rotation for customer-created symmetric CMKs is enabled (Automated) Description: AWS Key Management Service (KMS) allows customers to rotate the backing key which is key material stored within the KMS which is tied to the key ID of the customer-created customer master key (CMK). It is the backing key that is used to perform cryptographic operations such as encryption and decryption. Automated key rotation currently retains all prior backing keys so that decryption of encrypted data can take place transparently. It is recommended that CMK key rotation be enabled for symmetric keys. Key rotation cannot be enabled for any asymmetric CMK. Rationale: Rotating encryption keys helps reduce the potential impact of a compromised key as data encrypted with a new key cannot be accessed with a previous key that may have been exposed. Keys should be rotated every year, or upon event that would result in the compromise of that key. Impact: Creation, management, and storage of CMKs may require additional time from an administrator. Audit: From Console: 1. Sign in to the AWS Management Console and open the KMS console at: https://console.aws.amazon.com/kms. 2. In the left navigation pane, click Customer-managed keys. 3. Select a customer managed CMK where Key spec = SYMMETRIC_DEFAULT. 4. Select the Key rotation tab. 5. Ensure the Automatically rotate this KMS key every year checkbox is checked. 6. Repeat steps 3–5 for all customer-managed CMKs where "Key spec = SYMMETRIC_DEFAULT". From Command Line: 1. Run the following command to get a list of all keys and their associated KeyIds: aws kms list-keys 2. For each key, note the KeyId and run the following command: describe-key --key-id 3. If the response contains "KeySpec = SYMMETRIC_DEFAULT", run the following command: aws kms get-key-rotation-status --key-id 4. Ensure KeyRotationEnabled is set to true. 5. Repeat steps 2–4 for all remaining CMKs. Remediation: From Console: 1. 
Sign in to the AWS Management Console and open the KMS console at: https://console.aws.amazon.com/kms. 2. In the left navigation pane, click Customer-managed keys. 3. Select a key where Key spec = SYMMETRIC_DEFAULT that does not have automatic rotation enabled. 4. Select the Key rotation tab. 5. Check the Automatically rotate this KMS key every year checkbox. 6. Click Save. 7. Repeat steps 3–6 for all customer-managed CMKs that do not have automatic rotation enabled. From Command Line: 1. Run the following command to enable key rotation: aws kms enable-key-rotation --key-id 3.6]
    Data and Information Management Preventive
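The command-line audit above combines `describe-key` (to scope to symmetric, customer-managed keys) with `get-key-rotation-status`. As an illustrative sketch (not part of the benchmark), over data shaped like those two responses:

```python
# Illustrative sketch: decide whether a key is in scope for this
# recommendation and missing automatic rotation, combining data shaped like
# `aws kms describe-key` and `aws kms get-key-rotation-status` responses.
def rotation_missing(describe_key, rotation_status):
    """True for a symmetric customer-managed key without rotation enabled."""
    meta = describe_key.get("KeyMetadata", {})
    if meta.get("KeySpec") != "SYMMETRIC_DEFAULT":
        return False  # rotation cannot be enabled for asymmetric CMKs
    if meta.get("KeyManager") != "CUSTOMER":
        return False  # AWS-managed keys are out of scope here
    return not rotation_status.get("KeyRotationEnabled", False)

key = {"KeyMetadata": {"KeySpec": "SYMMETRIC_DEFAULT", "KeyManager": "CUSTOMER"}}

print(rotation_missing(key, {"KeyRotationEnabled": False}))  # True
print(rotation_missing(key, {"KeyRotationEnabled": True}))   # False
```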
    Establish, implement, and maintain Public Key certificate procedures. CC ID 07085
[Ensure that all the expired SSL/TLS certificates stored in AWS IAM are removed (Automated) Description: To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates. Use IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM. Additionally, you cannot manage your certificates from the IAM Console. Rationale: Removing expired SSL/TLS certificates eliminates the risk that an invalid certificate will be deployed accidentally to a resource such as AWS Elastic Load Balancer (ELB), which can damage the credibility of the application/website behind the ELB. As a best practice, it is recommended to delete expired certificates. Audit: From Console: Getting the certificates' expiration information via AWS Management Console is not currently supported. To request information about the SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI). 
From Command Line: Run list-server-certificates command to list all the IAM-stored server certificates: aws iam list-server-certificates The command output should return an array that contains all the SSL/TLS certificates currently stored in IAM and their metadata (name, ID, expiration date, etc): { "ServerCertificateMetadataList": [ { "ServerCertificateId": "EHDGFRW7EJFYTE88D", "ServerCertificateName": "MyServerCertificate", "Expiration": "2018-07-10T23:59:59Z", "Path": "/", "Arn": "arn:aws:iam::012345678910:servercertificate/MySSLCertificate", "UploadDate": "2018-06-10T11:56:08Z" } ] } Verify the ServerCertificateName and Expiration parameter value (expiration date) for each SSL/TLS certificate returned by the list-server-certificates command and determine if there are any expired server certificates currently stored in AWS IAM. If so, use the AWS API to remove them. If this command returns: { "ServerCertificateMetadataList": [] } This means that there are no expired certificates. It DOES NOT mean that no certificates exist. Remediation: From Console: Removing expired certificates via AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI). From Command Line: To delete an expired certificate run the following command by replacing with the name of the certificate to delete: aws iam delete-server-certificate --server-certificate-name When the preceding command is successful, it does not return any output. Default Value: By default, expired certificates won't get deleted. 1.19]
    Establish/Maintain Documentation Preventive
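The audit above asks you to compare each certificate's Expiration value against the current date by hand. As an illustrative sketch (not part of the benchmark), the comparison can be automated over data shaped like `aws iam list-server-certificates` output; the sample entry reuses the metadata shown in the audit step.

```python
from datetime import datetime, timezone

# Illustrative sketch: flag IAM server certificates whose Expiration
# timestamp is in the past, using data shaped like the output of
# `aws iam list-server-certificates`. Sample entry from the audit step above.
def expired_certificates(response, now=None):
    """Names of certificates whose Expiration is before `now` (UTC)."""
    now = now or datetime.now(timezone.utc)
    expired = []
    for cert in response.get("ServerCertificateMetadataList", []):
        expires = datetime.strptime(cert["Expiration"], "%Y-%m-%dT%H:%M:%SZ")
        if expires.replace(tzinfo=timezone.utc) < now:
            expired.append(cert["ServerCertificateName"])
    return expired

response = {"ServerCertificateMetadataList": [
    {"ServerCertificateName": "MyServerCertificate",
     "Expiration": "2018-07-10T23:59:59Z"},
]}

print(expired_certificates(response))  # ['MyServerCertificate']
```

As the audit note stresses, an empty result means no expired certificates were found, not that no certificates exist.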
Common Controls and mandates by Type

54 Mandated Controls - bold    15 Implied Controls - italic    135 Implementation Controls - regular

Each Common Control is assigned a meta-data type to help you determine the objective of the Control and associated Authority Document mandates aligned with it. These types include behavioral controls, process controls, records management, technical security, configuration management, etc. They are provided as another tool to dissect the Authority Document’s mandates and assign them effectively within your organization.

Number of Controls
204 Total
  • Audits and Risk Management
    1
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE CLASS
    Configure firewalls to generate an audit log. CC ID 12038
[Ensure security group changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. Security Groups are a stateful packet filter that controls ingress and egress traffic within a VPC. It is recommended that a metric filter and alarm be established for detecting changes to Security Groups. Rationale: Monitoring changes to security groups will help ensure that resources and services are not unintentionally exposed. CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Impact: This may require additional 'tuning' to eliminate false positives and filter out expected activity so anomalies are easier to detect. Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify multi-region CloudTrails: Trails with "IsMultiRegionTrail" set to true • From value associated with CloudWatchLogsLogGroupArn note Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure the identified multi-region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure the identified multi-region CloudTrail captures all Management Events aws cloudtrail get-event-selectors --trail-name Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. 
Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }" 4. Note the value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4. aws cloudwatch describe-alarms --query "MetricAlarms[?MetricName== '']" 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn at least one subscription should have "SubscriptionArn" with valid AWS ARN. Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided which checks for security group changes and the taken from audit step 1. aws logs put-metric-filter --log-group-name "" --filter-name "" --metric-transformations metricName= "" ,metricNamespace="CISBenchmark",metricValue=1 --filter-pattern "{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }" Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. 
Create an SNS topic that the alarm will notify aws sns create-topic --name "" Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn "" --protocol --notification-endpoint "" Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name "" --metric-name "" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace "CISBenchmark" --alarm-actions "" 4.10]
    Technical security Preventive
  • Behavior
    2
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE CLASS
    Notify affected parties to keep authenticators confidential. CC ID 06787 System hardening through configuration management Preventive
    Discourage affected parties from recording authenticators. CC ID 06788 System hardening through configuration management Preventive
  • Business Processes
    3
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE CLASS
    Identify roles, tasks, information, systems, and assets that fall under the organization's mandated Authority Documents. CC ID 00688
    [Ensure all data in Amazon S3 has been discovered, classified and secured when required. (Manual) Description: Amazon S3 buckets can contain sensitive data that, for security purposes, should be discovered, monitored, classified and protected. Macie, along with other third-party tools, can automatically provide an inventory of Amazon S3 buckets. Rationale: Using a cloud service or third-party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Impact: There is a cost associated with using Amazon Macie. There is also typically a cost associated with third-party tools that perform similar processes and protection. Audit: Perform the following steps to determine if Macie is running: From Console: 1. Log in to the Macie console at https://console.aws.amazon.com/macie/ 2. In the left hand pane click on By job under findings. 3. Confirm that you have a job set up for your S3 buckets. If, when you log into the Macie console, you aren't taken to the summary page and you don't have a job set up and running, refer to the remediation procedure below. If you are using a third-party tool to manage and protect your S3 data, you meet this recommendation. Remediation: Perform the steps below to enable and configure Amazon Macie From Console: 1. Log on to the Macie console at https://console.aws.amazon.com/macie/ 2. Click Get started. 3. Click Enable Macie. Set up a repository for sensitive data discovery results 1. In the left pane, under Settings, click Discovery results. 2. Make sure Create bucket is selected. 3. Create a bucket; enter a name for the bucket. The name must be unique across all S3 buckets. In addition, the name must start with a lowercase letter or a number. 4. Click on Advanced. 5. Under Block all public access, make sure Yes is selected. 6. Under KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric customer master key (CMK) that's in the same Region as the S3 bucket. 7. Click on Save. Create a job to discover sensitive data 1. In the left pane, click S3 buckets. Macie displays a list of all the S3 buckets for your account. 2. Select the check box for each bucket that you want Macie to analyze as part of the job. 3. Click Create job. 4. Click Quick create. 5. For the Name and description step, enter a name and, optionally, a description of the job. 6. Then click Next. 7. For the Review and create step, click Submit. Review your findings 1. In the left pane, click Findings. 2. To view the details of a specific finding, choose any field other than the check box for the finding. If you are using a third-party tool to manage and protect your S3 data, follow the vendor documentation for implementing and configuring that tool. 2.1.3]
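The console steps above also have CLI counterparts in the `aws macie2` command group. The sketch below assumes the `get-macie-session`, `list-classification-jobs`, and `enable-macie` operations of the Macie v2 API; verify names and output shapes against current AWS CLI documentation before relying on it.

```shell
#!/bin/sh
# Sketch: check whether Macie is enabled and has at least one discovery job.
audit_macie() {
  # Reports ENABLED/PAUSED when Macie is on; the fallback string is our own.
  aws macie2 get-macie-session --query 'status' --output text 2>/dev/null \
    || echo "MACIE_NOT_ENABLED"
  # List sensitive-data discovery job IDs; empty output means no jobs exist.
  aws macie2 list-classification-jobs --query 'items[*].jobId' --output text
}

# Remediation counterpart of "Click Enable Macie" in the console.
enable_macie() {
  aws macie2 enable-macie
}
```

If a third-party discovery tool is in use instead, these checks do not apply; follow that vendor's documentation as the text notes.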
    Leadership and high level objectives Preventive
    Establish, implement, and maintain information security procedures. CC ID 12006
    [Ensure all data in Amazon S3 has been discovered, classified and secured when required. (Manual) Description: Amazon S3 buckets can contain sensitive data that, for security purposes, should be discovered, monitored, classified and protected. Macie, along with other third-party tools, can automatically provide an inventory of Amazon S3 buckets. Rationale: Using a cloud service or third-party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Impact: There is a cost associated with using Amazon Macie. There is also typically a cost associated with third-party tools that perform similar processes and protection. Audit: Perform the following steps to determine if Macie is running: From Console: 1. Log in to the Macie console at https://console.aws.amazon.com/macie/ 2. In the left hand pane click on By job under findings. 3. Confirm that you have a job set up for your S3 buckets. If, when you log into the Macie console, you aren't taken to the summary page and you don't have a job set up and running, refer to the remediation procedure below. If you are using a third-party tool to manage and protect your S3 data, you meet this recommendation. Remediation: Perform the steps below to enable and configure Amazon Macie From Console: 1. Log on to the Macie console at https://console.aws.amazon.com/macie/ 2. Click Get started. 3. Click Enable Macie. Set up a repository for sensitive data discovery results 1. In the left pane, under Settings, click Discovery results. 2. Make sure Create bucket is selected. 3. Create a bucket; enter a name for the bucket. The name must be unique across all S3 buckets. In addition, the name must start with a lowercase letter or a number. 4. Click on Advanced. 5. Under Block all public access, make sure Yes is selected. 6. Under KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric customer master key (CMK) that's in the same Region as the S3 bucket. 7. Click on Save. Create a job to discover sensitive data 1. In the left pane, click S3 buckets. Macie displays a list of all the S3 buckets for your account. 2. Select the check box for each bucket that you want Macie to analyze as part of the job. 3. Click Create job. 4. Click Quick create. 5. For the Name and description step, enter a name and, optionally, a description of the job. 6. Then click Next. 7. For the Review and create step, click Submit. Review your findings 1. In the left pane, click Findings. 2. To view the details of a specific finding, choose any field other than the check box for the finding. If you are using a third-party tool to manage and protect your S3 data, follow the vendor documentation for implementing and configuring that tool. 2.1.3]
    Operational management Preventive
    Change the authenticator for shared accounts when the group membership changes. CC ID 14249 System hardening through configuration management Corrective
  • Communicate
    4
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE CLASS
    Disseminate and communicate the data classification scheme to interested personnel and affected parties. CC ID 16804 Leadership and high level objectives Preventive
    Disseminate and communicate the information security procedures to all interested personnel and affected parties. CC ID 16303 Operational management Preventive
    Include risk information when communicating critical security updates. CC ID 14948 System hardening through configuration management Preventive
    Disseminate and communicate with the end user when a memorized secret entered into an authenticator field matches one found in the memorized secret list. CC ID 13807 System hardening through configuration management Preventive
  • Configuration
    140
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE CLASS
    Enable and configure logging on network access controls in accordance with organizational standards. CC ID 01963
    [Ensure Network Access Control Lists (NACL) changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. NACLs are used as a stateless packet filter to control ingress and egress traffic for subnets within a VPC. It is recommended that a metric filter and alarm be established for changes made to NACLs. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring changes to NACLs will help ensure that AWS resources and services are not unintentionally exposed. Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn, note . Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure the identified multi-region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure the identified multi-region CloudTrail captures all management events aws cloudtrail get-event-selectors --trail-name Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }" 4. Note the value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn At least one subscription should have "SubscriptionArn" with a valid AWS ARN. Example of a valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided, which checks for NACL changes, and the taken from audit step 1. aws logs put-metric-filter --log-group-name --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn --protocol --notification-endpoint Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions 4.11]
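Audit step 3 above (checking that the metric filter covers every NACL event) can be sketched as a small text check over the `describe-metric-filters` output. The function and variable names below are our own; this is plain substring matching, not a full filter-pattern parser.

```shell
#!/bin/sh
# Events the benchmark's NACL filter pattern must cover.
REQUIRED_NACL_EVENTS="CreateNetworkAcl CreateNetworkAclEntry DeleteNetworkAcl DeleteNetworkAclEntry ReplaceNetworkAclEntry ReplaceNetworkAclAssociation"

check_nacl_filter() {
  # $1 = JSON output of: aws logs describe-metric-filters --log-group-name <group>
  for ev in $REQUIRED_NACL_EVENTS; do
    printf '%s' "$1" | grep -q "$ev" || { echo "missing $ev"; return 1; }
  done
  echo "all NACL events covered"
}
```

Typical use: `check_nacl_filter "$(aws logs describe-metric-filters --log-group-name "$LOG_GROUP")"`, where `$LOG_GROUP` is the log group identified in audit step 1.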
    Monitoring and measurement Preventive
    Configure access control lists in accordance with organizational standards. CC ID 16465
    [Ensure that public access is not given to RDS Instance (Automated) Description: Ensure and verify that RDS database instances provisioned in your AWS account restrict unauthorized access in order to minimize security risks. To restrict access to any publicly accessible RDS database instance, you must disable the database Publicly Accessible flag and update the VPC security group associated with the instance. Rationale: Ensure that no public-facing RDS database instances are provisioned in your AWS account and restrict unauthorized access in order to minimize security risks. When the RDS instance allows unrestricted access (0.0.0.0/0), everyone and everything on the Internet can establish a connection to your database, and this can increase the opportunity for malicious activities such as brute force attacks, PostgreSQL injections, or DoS/DDoS attacks. Audit: From Console: 1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. Under the navigation panel, on the RDS Dashboard, click Databases. 3. Select the RDS instance that you want to examine. 4. Click the instance name from the dashboard, under Connectivity and Security. 5. On the Security tab, check if the Publicly Accessible flag status is set to Yes; if so, follow the below-mentioned steps to check database subnet access. • In the networking section, click the subnet link available under Subnets • The link will redirect you to the VPC Subnets page. • Select the subnet listed on the page and click the Route Table tab from the dashboard bottom panel. If the route table contains any entries with the destination CIDR block set to 0.0.0.0/0 and with an Internet Gateway attached, the selected RDS database instance was provisioned inside a public subnet, therefore is not running within a logically isolated environment and can be accessible from the Internet. 6. Repeat steps 4 and 5 to determine the type (public or private) and subnet for other RDS database instances provisioned in the current region. 7. Change the AWS region from the navigation bar and repeat the audit process for other regions. From Command Line: 1. Run the describe-db-instances command to list all RDS database names available in the selected AWS region: aws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier' 2. The command output should return each database instance identifier. 3. Run the describe-db-instances command again using the PubliclyAccessible parameter as a query filter to reveal the database instance Publicly Accessible flag status: aws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].PubliclyAccessible' 4. Check the Publicly Accessible parameter status. If the Publicly Accessible flag is set to true, the selected RDS database instance is publicly accessible and insecure; follow the below-mentioned steps to check database subnet access 5. Run the describe-db-instances command again using the RDS database instance identifier that you want to check and appropriate filtering to describe the VPC subnet(s) associated with the selected instance: aws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].DBSubnetGroup.Subnets[]' • The command output should list the subnets available in the selected database subnet group. 6. Run the describe-route-tables command using the ID of the subnet returned at the previous step to describe the routes of the VPC route table associated with the selected subnet: aws ec2 describe-route-tables --region --filters "Name=association.subnet-id,Values=" --query 'RouteTables[*].Routes[]' • If the command returns the route table associated with the database instance subnet ID, check the GatewayId and DestinationCidrBlock attribute values returned in the output. If the route table contains any entries with the GatewayId value set to igw-xxxxxxxx and the DestinationCidrBlock value set to 0.0.0.0/0, the selected RDS database instance was provisioned inside a public subnet. • Or • If the command returns empty results, the route table is implicitly associated with the subnet, therefore the audit process continues with the next step 7. Run the describe-db-instances command again using the RDS database instance identifier that you want to check and appropriate filtering to describe the VPC ID associated with the selected instance: aws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].DBSubnetGroup.VpcId' • The command output should show the VPC ID in the selected database subnet group 8. Now run the describe-route-tables command using the ID of the VPC returned at the previous step to describe the routes of the VPC main route table implicitly associated with the selected subnet: aws ec2 describe-route-tables --region --filters "Name=vpc-id,Values=" "Name=association.main,Values=true" --query 'RouteTables[*].Routes[]' • The command output returns the VPC main route table implicitly associated with the database instance subnet ID. Check the GatewayId and DestinationCidrBlock attribute values returned in the output. If the route table contains any entries with the GatewayId value set to igw-xxxxxxxx and the DestinationCidrBlock value set to 0.0.0.0/0, the selected RDS database instance was provisioned inside a public subnet, therefore is not running within a logically isolated environment and does not adhere to AWS security best practices. Remediation: From Console: 1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. Under the navigation panel, on the RDS Dashboard, click Databases. 3. Select the RDS instance that you want to update. 4. Click Modify from the dashboard top menu. 5. On the Modify DB Instance panel, under the Connectivity section, click on Additional connectivity configuration and update the value for Publicly Accessible to Not publicly accessible to restrict public access. Follow the below steps to update subnet configurations: • Select the Connectivity and security tab, and click on the VPC attribute value inside the Networking section. • Select the Details tab from the VPC dashboard bottom panel and click on the Route table configuration attribute value. • On the Route table details page, select the Routes tab from the dashboard bottom panel and click on Edit routes. • On the Edit routes page, update the Destination of the Target which is set to igw-xxxxx and click on Save routes. 6. On the Modify DB Instance panel, click on Continue and, in the Scheduling of modifications section, perform one of the following actions based on your requirements: • Select Apply during the next scheduled maintenance window to apply the changes automatically during the next scheduled maintenance window. • Select Apply immediately to apply the changes right away. With this option, any pending modifications will be asynchronously applied as soon as possible, regardless of the maintenance window setting for this RDS database instance. Note that any changes available in the pending modifications queue are also applied. If any of the pending modifications require downtime, choosing this option can cause unexpected downtime for the application. 7. Repeat steps 3 to 6 for each RDS instance available in the current region. 8. Change the AWS region from the navigation bar to repeat the process for other regions. 2.3.3]
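Command-line audit step 4 above boils down to reading the `PubliclyAccessible` flag. The helper below is a sketch with a name of our own choosing; it only classifies the query output, leaving the subnet/route-table checks to the manual steps.

```shell
#!/bin/sh
# Classify the output of:
#   aws rds describe-db-instances --db-instance-identifier <id> \
#     --query 'DBInstances[*].PubliclyAccessible'
check_rds_public() {
  # $1 = query output, e.g. "[ true ]" or "[ false ]"
  if printf '%s' "$1" | grep -q 'true'; then
    echo "publicly accessible - remediation required"
  else
    echo "not publicly accessible"
  fi
}
```

Typical use (instance identifier `mydb` is a placeholder): `check_rds_public "$(aws rds describe-db-instances --db-instance-identifier mydb --query 'DBInstances[*].PubliclyAccessible')"`.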
    Technical security Preventive
    Configure firewalls to deny all traffic by default, except explicitly designated traffic. CC ID 00547
    [{do not allow} Ensure no Network ACLs allow ingress from 0.0.0.0/0 to remote server administration ports (Automated) Description: The Network Access Control List (NACL) function provides stateless filtering of ingress and egress network traffic to AWS resources. It is recommended that no NACL allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389, using either the TCP (6), UDP (17) or ALL (-1) protocols Rationale: Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise. Audit: From Console: Perform the following to determine if the account is configured as prescribed: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Network ACLs 3. For each network ACL, perform the following: o Select the network ACL o Click the Inbound Rules tab o Ensure no rule exists that has a port range that includes port 22, 3389, using the protocols TCP (6), UDP (17) or ALL (-1), or other remote server administration ports for your environment and has a Source of 0.0.0.0/0 and shows ALLOW Note: A Port value of ALL or a port range such as 0-1024 are inclusive of port 22, 3389, and other remote server administration ports Remediation: From Console: Perform the following: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Network ACLs 3. For each network ACL to remediate, perform the following: o Select the network ACL o Click the Inbound Rules tab o Click Edit inbound rules o Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click Delete to remove the offending inbound rule o Click Save 5.1
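The console walk over each NACL's inbound rules can be approximated with a JMESPath query. A sketch, assuming the standard `describe-network-acls` output schema (entries carry `Egress`, `CidrBlock`, and `RuleAction`); the function name is ours, and port ranges in the matched entries still need manual review since a range such as 0-1024 includes 22 and 3389.

```shell
#!/bin/sh
# List inbound (Egress=false) NACL entries that allow traffic from 0.0.0.0/0
# in the current region; inspect PortRange in the output for admin ports.
list_world_open_nacl_entries() {
  aws ec2 describe-network-acls \
    --query 'NetworkAcls[*].Entries[?Egress==`false` && CidrBlock==`"0.0.0.0/0"` && RuleAction==`"allow"`]' \
    --output json
}
```

Any non-empty result should be compared against the rule in this recommendation before editing or deleting the entry.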
    {do not allow} Ensure no security groups allow ingress from 0.0.0.0/0 to remote server administration ports (Automated) Description: Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389, using either the TCP (6), UDP (17) or ALL (-1) protocols Rationale: Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise. Impact: When updating an existing environment, ensure that administrators have access to remote server administration ports through another mechanism before removing access by deleting the 0.0.0.0/0 inbound rule. Audit: Perform the following to determine if the account is configured as prescribed: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Security Groups 3. For each security group, perform the following: 4. Select the security group 5. Click the Inbound Rules tab 6. Ensure no rule exists that has a port range that includes port 22, 3389, using the protocols TCP (6), UDP (17) or ALL (-1), or other remote server administration ports for your environment and has a Source of 0.0.0.0/0 Note: A Port value of ALL or a port range such as 0-1024 are inclusive of port 22, 3389, and other remote server administration ports. Remediation: Perform the following to implement the prescribed state: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Security Groups 3. For each security group, perform the following: 4. Select the security group 5. Click the Inbound Rules tab 6. Click the Edit inbound rules button 7. Identify the rules to be edited or removed 8. 
Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click Delete to remove the offending inbound rule 9. Click Save rules 5.2
    {do not allow} Ensure no security groups allow ingress from ::/0 to remote server administration ports (Automated) Description: Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389. Rationale: Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise. Impact: When updating an existing environment, ensure that administrators have access to remote server administration ports through another mechanism before removing access by deleting the ::/0 inbound rule. Audit: Perform the following to determine if the account is configured as prescribed: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Security Groups 3. For each security group, perform the following: 4. Select the security group 5. Click the Inbound Rules tab 6. Ensure no rule exists that has a port range that includes port 22, 3389, or other remote server administration ports for your environment and has a Source of ::/0 Note: A Port value of ALL or a port range such as 0-1024 are inclusive of port 22, 3389, and other remote server administration ports. Remediation: Perform the following to implement the prescribed state: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Security Groups 3. For each security group, perform the following: 4. Select the security group 5. Click the Inbound Rules tab 6. Click the Edit inbound rules button 7. Identify the rules to be edited or removed 8. Either A) update the Source field to a range other than ::/0, or, B) Click Delete to remove the offending inbound rule 9. Click Save rules 5.3
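Both of the security group audits above (0.0.0.0/0 and ::/0 sources) can be approximated from the command line using `describe-security-groups` filters. A sketch, assuming the documented `ip-permission.cidr`, `ip-permission.ipv6-cidr`, and `ip-permission.from-port` filter names; the function name is ours. Exact-port filters will not catch wider ranges (e.g. 0-1024), so review ranged rules separately as the Note in each recommendation warns.

```shell
#!/bin/sh
# Print IDs of security groups with an inbound rule from the world
# whose from-port matches the given admin port.
find_world_open_sgs() {
  # $1 = port, e.g. 22 or 3389
  # IPv4 any-source rules
  aws ec2 describe-security-groups \
    --filters Name=ip-permission.cidr,Values=0.0.0.0/0 \
              Name=ip-permission.from-port,Values="$1" \
    --query 'SecurityGroups[*].GroupId' --output text
  # IPv6 any-source rules
  aws ec2 describe-security-groups \
    --filters Name=ip-permission.ipv6-cidr,Values=::/0 \
              Name=ip-permission.from-port,Values="$1" \
    --query 'SecurityGroups[*].GroupId' --output text
}
# Example: find_world_open_sgs 22; find_world_open_sgs 3389
```

Each reported group should then be remediated per steps 6-9 above (edit the Source or delete the rule).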
    Ensure the default security group of every VPC restricts all traffic (Automated) Description: A VPC comes with a default security group whose initial settings deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances assigned to the security group. If you don't specify a security group when you launch an instance, the instance is automatically assigned to this default security group. Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It is recommended that the default security group restrict all traffic. The default VPC in every region should have its default security group updated to comply. Any newly created VPCs will automatically contain a default security group that will need remediation to comply with this recommendation. NOTE: When implementing this recommendation, VPC flow logging is invaluable in determining the least privilege port access required by systems to work properly because it can log all packet acceptances and rejections occurring under the current security groups. This dramatically reduces the primary barrier to least privilege engineering - discovering the minimum ports required by systems in the environment. Even if the VPC flow logging recommendation in this benchmark is not adopted as a permanent security measure, it should be used during any period of discovery and engineering for least privileged security groups. Rationale: Configuring all VPC default security groups to restrict all traffic will encourage least privilege security group development and mindful placement of AWS resources into security groups which will in-turn reduce the exposure of those resources. Impact: Implementing this recommendation in an existing VPC containing operating resources requires extremely careful migration planning as the default security groups are likely to be enabling many ports that are unknown. 
Enabling VPC flow logging (of accepts) in an existing environment that is known to be breach free will reveal the current pattern of ports being used for each instance to communicate successfully. Audit: Perform the following to determine if the account is configured as prescribed: Security Group State 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. Repeat the next steps for all VPCs - including the default VPC in each AWS region: 3. In the left pane, click Security Groups 4. For each default security group, perform the following: 5. Select the default security group 6. Click the Inbound Rules tab 7. Ensure no rules exist 8. Click the Outbound Rules tab 9. Ensure no rules exist Security Group Members 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. Repeat the next steps for all default groups in all VPCs - including the default VPC in each AWS region: 3. In the left pane, click Security Groups 4. Copy the id of the default security group. 5. Change to the EC2 Management Console at https://console.aws.amazon.com/ec2/v2/home 6. In the filter column type 'Security Group ID : < security group id from #4 >' Remediation: Security Group Members Perform the following to implement the prescribed state: 1. Identify AWS resources that exist within the default security group 2. Create a set of least privilege security groups for those resources 3. Place the resources in those security groups 4. Remove the resources noted in #1 from the default security group Security Group State 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. Repeat the next steps for all VPCs - including the default VPC in each AWS region: 3. In the left pane, click Security Groups 4. For each default security group, perform the following: 5. Select the default security group 6. Click the Inbound Rules tab 7. Remove any inbound rules 8. Click the Outbound Rules tab 9. 
Remove any Outbound rules Recommended: IAM groups allow you to edit the "name" field. After remediating default groups rules for all VPCs in all regions, edit this field to add text similar to "DO NOT USE. DO NOT ADD RULES" 5.4]
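The "Security Group State" remediation (removing every inbound and outbound rule from each default group) can be sketched from the CLI. This is an illustrative sketch with names of our own; it is destructive, so heed the migration caution in the Impact section above and run it only after moving resources into least-privilege groups.

```shell
#!/bin/sh
# Strip all inbound and outbound rules from every default security group
# in the current region. DESTRUCTIVE - confirm nothing still relies on
# the default group first.
strip_default_sgs() {
  for sg in $(aws ec2 describe-security-groups \
      --filters Name=group-name,Values=default \
      --query 'SecurityGroups[*].GroupId' --output text); do
    # Revoke inbound rules, if any, by replaying the group's own permissions
    perms=$(aws ec2 describe-security-groups --group-ids "$sg" \
      --query 'SecurityGroups[0].IpPermissions')
    [ "$perms" != "[]" ] && aws ec2 revoke-security-group-ingress \
      --group-id "$sg" --ip-permissions "$perms"
    # Same for outbound rules
    eperms=$(aws ec2 describe-security-groups --group-ids "$sg" \
      --query 'SecurityGroups[0].IpPermissionsEgress')
    [ "$eperms" != "[]" ] && aws ec2 revoke-security-group-egress \
      --group-id "$sg" --ip-permissions "$eperms"
  done
}
```

Repeat per region, since security groups are regional resources.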
    Technical security Preventive
    Implement multifactor authentication techniques. CC ID 00561
    [Ensure multi-factor authentication (MFA) is enabled for all IAM users that have a console password (Automated) Description: Multi-Factor Authentication (MFA) adds an extra layer of authentication assurance beyond traditional credentials. With MFA enabled, when a user signs in to the AWS Console, they will be prompted for their user name and password as well as for an authentication code from their physical or virtual MFA token. It is recommended that MFA be enabled for all accounts that have a console password. Rationale: Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that displays a time-sensitive key and have knowledge of a credential. Audit: Perform the following to determine if an MFA device is enabled for all IAM users having a console password: From Console: 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the left pane, select Users 3. If the MFA or Password age columns are not visible in the table, click the gear icon at the upper right corner of the table and ensure a checkmark is next to both, then click Close. 4. Ensure that for each user where the Password age column shows a password age, the MFA column shows Virtual, U2F Security Key, or Hardware. From Command Line: 1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their password and MFA status: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,8 2. The output of this command will produce a table similar to the following: user,password_enabled,mfa_active elise,false,false brandon,true,true rakesh,false,false helene,false,false paras,true,true anitha,false,false 3. For any row having password_enabled set to true, ensure mfa_active is also set to true. Remediation: Perform the following to enable MFA: From Console: 1. 
Sign in to the AWS Management Console and open the IAM console at 'https://console.aws.amazon.com/iam/' 2. In the left pane, select Users. 3. In the User Name list, choose the name of the intended MFA user. 4. Choose the Security Credentials tab, and then choose Manage MFA Device. 5. In the Manage MFA Device wizard, choose Virtual MFA device, and then choose Continue. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes. 6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications at https://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications). If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device). 7. Determine whether the MFA app supports QR codes, and then do one of the following: • Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code. • In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application. When you are finished, the virtual MFA device starts generating one-time passwords. 8. In the Manage MFA Device wizard, in the MFA Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the MFA Code 2 box. 9. Click Assign MFA. 1.10]
    Technical security Preventive
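The credential-report pipeline in the audit step above can be wrapped in a small script. This is a sketch, not CIS-provided tooling: it assumes the report CSV has already been reduced to the three columns shown (user, password_enabled, mfa_active) via `cut -d, -f1,4,8`, and the sample users are hypothetical.

```shell
# Flag IAM users that have a console password but no MFA device.
# Input format: the `cut -d, -f1,4,8` output from the audit step
# (columns: user, password_enabled, mfa_active).
users_missing_mfa() {
  awk -F, 'NR > 1 && $2 == "true" && $3 == "false" { print $1 }'
}

# Hypothetical sample report; in practice pipe in the real one from
#   aws iam get-credential-report --query 'Content' --output text | base64 -d
sample='user,password_enabled,mfa_active
brandon,true,true
rakesh,true,false
helene,false,false'

echo "$sample" | users_missing_mfa    # prints: rakesh
```

Any user name this prints corresponds to a row where password_enabled is true but mfa_active is false, i.e. a finding under this recommendation.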
    Configure Least Functionality and Least Privilege settings to organizational standards. CC ID 07599 System hardening through configuration management Preventive
    Configure "Block public access (bucket settings)" to organizational standards. CC ID 15444
    [Ensure that S3 Buckets are configured with 'Block public access (bucket settings)' (Automated) Description: Amazon S3 provides Block public access (bucket settings) and Block public access (account settings) to help you manage public access to Amazon S3 resources. By default, S3 buckets and objects are created with public access disabled. However, an IAM principal with sufficient S3 permissions can enable public access at the bucket and/or object level. While enabled, Block public access (bucket settings) prevents an individual bucket, and its contained objects, from becoming publicly accessible. Similarly, Block public access (account settings) prevents all buckets, and contained objects, from becoming publicly accessible across the entire account. Rationale: Amazon S3 Block public access (bucket settings) prevents the accidental or malicious public exposure of data contained within the respective bucket(s). Amazon S3 Block public access (account settings) prevents the accidental or malicious public exposure of data contained within all buckets of the respective AWS account. Whether blocking public access to all or some buckets is an organizational decision that should be based on data sensitivity, least privilege, and use case. Impact: When you apply Block Public Access settings to an account, the settings apply to all AWS Regions globally. The settings might not take effect in all Regions immediately or simultaneously, but they eventually propagate to all Regions. Audit: If utilizing Block Public Access (bucket settings) From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Edit public access settings'. 4. Ensure that block public access settings are set appropriately for this bucket 5. Repeat for all the buckets in your AWS account. From Command Line: 1. List all of the S3 Buckets aws s3 ls 2. 
Find the public access setting on that bucket aws s3api get-public-access-block --bucket Output if Block Public access is enabled: { "PublicAccessBlockConfiguration": { "BlockPublicAcls": true, "IgnorePublicAcls": true, "BlockPublicPolicy": true, "RestrictPublicBuckets": true } } If the output reads false for the separate configuration settings then proceed to the remediation. If utilizing Block Public Access (account settings) From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Choose Block public access (account settings) 3. Ensure that block public access settings are set appropriately for your AWS account. From Command Line: To check Public access settings for this account status, run the following command, aws s3control get-public-access-block --account-id --region Output if Block Public access is enabled: { "PublicAccessBlockConfiguration": { "IgnorePublicAcls": true, "BlockPublicPolicy": true, "BlockPublicAcls": true, "RestrictPublicBuckets": true } } If the output reads false for the separate configuration settings then proceed to the remediation. Remediation: If utilizing Block Public Access (bucket settings) From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Edit public access settings'. 4. Click 'Block all public access' 5. Repeat for all the buckets in your AWS account that contain sensitive data. From Command Line: 1. List all of the S3 Buckets aws s3 ls 2. Set the Block Public Access to true on that bucket aws s3api put-public-access-block --bucket --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true" If utilizing Block Public Access (account settings) From Console: If the output reads true for the separate configuration settings then it is set on the account. 1. 
Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Choose Block Public Access (account settings) 3. Choose Edit to change the block public access settings for all the buckets in your AWS account 4. Choose the settings you want to change, and then choose Save. For details about each setting, pause on the i icons. 5. When you're asked for confirmation, enter confirm. Then Click Confirm to save your changes. From Command Line: To set Block Public access settings for this account, run the following command: aws s3control put-public-access-block --public-access-block-configuration BlockPublicAcls=true, IgnorePublicAcls=true, BlockPublicPolicy=true, RestrictPublicBuckets=true --account-id 2.1.4]
    System hardening through configuration management Preventive
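The JSON returned by the audit command above can be checked mechanically. The sketch below assumes the pretty-printed `get-public-access-block` output shown in the audit step (one setting per line) and uses grep so it needs no jq; a production check would parse the JSON properly.

```shell
# Verify all four Block Public Access flags are true in the JSON returned by
#   aws s3api get-public-access-block --bucket <bucket>
# Counts lines containing ": true"; the compliant output has exactly four.
all_public_access_blocked() {
  [ "$(grep -c ': true' "$1")" -eq 4 ]
}

# Sample output as shown in the audit step above.
cat > pab.json <<'EOF'
{
  "PublicAccessBlockConfiguration": {
    "BlockPublicAcls": true,
    "IgnorePublicAcls": true,
    "BlockPublicPolicy": true,
    "RestrictPublicBuckets": true
  }
}
EOF

all_public_access_blocked pab.json && echo "bucket is compliant"
```

If any of the four settings reads false, the count falls short and the check fails, which maps to "proceed to the remediation" in the audit text.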
    Configure S3 Bucket Policies to organizational standards. CC ID 15431
    [Ensure S3 Bucket Policy is set to deny HTTP requests (Automated) Description: At the Amazon S3 bucket level, you can configure permissions through a bucket policy making the objects accessible only through HTTPS. Rationale: By default, Amazon S3 allows both HTTP and HTTPS requests. To achieve only allowing access to Amazon S3 objects through HTTPS you also have to explicitly deny access to HTTP requests. Bucket policies that allow HTTPS requests without explicitly denying HTTP requests will not comply with this recommendation. Audit: To allow access to HTTPS you can use a condition that checks for the key "aws:SecureTransport: true". This means that the request is sent through HTTPS but that HTTP can still be used. So to make sure you do not allow HTTP access confirm that there is a bucket policy that explicitly denies access for HTTP requests and that it contains the key "aws:SecureTransport": "false". From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Permissions', then Click on Bucket Policy. 4. Ensure that a policy is listed that matches: '{ "Sid": , "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::/*", "Condition": { "Bool": { "aws:SecureTransport": "false" }' and will be specific to your account 5. Repeat for all the buckets in your AWS account. From Command Line: 1. List all of the S3 Buckets aws s3 ls 2. Using the list of buckets run this command on each of them: aws s3api get-bucket-policy --bucket | grep aws:SecureTransport NOTE : If Error being thrown by CLI, it means no Policy has been configured for specified S3 bucket and by default it's allowing both HTTP and HTTPS requests. 3. Confirm that aws:SecureTransport is set to false aws:SecureTransport:false 4. Confirm that the policy line has Effect set to Deny 'Effect:Deny' Remediation: From Console: 1. 
Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Permissions'. 4. Click 'Bucket Policy' 5. Add this to the existing policy filling in the required information { "Sid": "", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::/*", "Condition": { "Bool": { "aws:SecureTransport": "false" } } } 6. Save 7. Repeat for all the buckets in your AWS account that contain sensitive data. From Console using AWS Policy Generator: 1. Repeat steps 1-4 above. 2. Click on Policy Generator at the bottom of the Bucket Policy Editor 3. Select Policy Type S3 Bucket Policy 4. Add Statements • Effect = Deny • Principal = * • AWS Service = Amazon S3 • Actions = * • Amazon Resource Name = 5. Generate Policy 6. Copy the text and add it to the Bucket Policy. From Command Line: 1. Export the bucket policy to a json file. aws s3api get-bucket-policy --bucket --query Policy --output text > policy.json 2. Modify the policy.json file by adding in this statement: { "Sid": "", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::/*", "Condition": { "Bool": { "aws:SecureTransport": "false" } } } 3. Apply this modified policy back to the S3 bucket: aws s3api put-bucket-policy --bucket --policy file://policy.json Default Value: Both HTTP and HTTPS Request are allowed 2.1.1]
    System hardening through configuration management Preventive
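The audit step above greps an exported policy for the deny statement; that can be sketched as a small check. The bucket name, Sid, and file name below are hypothetical, and the grep approach assumes a pretty-printed policy like the one shown; a real evaluation would parse the JSON statements properly.

```shell
# Check a bucket policy (as exported by `aws s3api get-bucket-policy
# --bucket <bucket> --query Policy --output text > policy.json`) for a
# statement that denies requests when aws:SecureTransport is false.
denies_http() {
  grep -q '"Effect": "Deny"' "$1" && grep -q '"aws:SecureTransport": "false"' "$1"
}

# Hypothetical example policy with the deny statement from the remediation.
cat > policy.json <<'EOF'
{
  "Statement": [{
    "Sid": "DenyInsecureTransport",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::example-bucket/*",
    "Condition": { "Bool": { "aws:SecureTransport": "false" } }
  }]
}
EOF

denies_http policy.json && echo "policy denies plain HTTP"
```

Note the logic mirrors the audit text: the policy must explicitly deny when SecureTransport is "false"; merely allowing HTTPS is not sufficient.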
    Configure authenticator activation codes in accordance with organizational standards. CC ID 17032 System hardening through configuration management Preventive
    Configure authenticators to comply with organizational standards. CC ID 06412 System hardening through configuration management Preventive
    Configure the system to require new users to change their authenticator on first use. CC ID 05268 System hardening through configuration management Preventive
    Configure authenticators so that group authenticators or shared authenticators are prohibited. CC ID 00519 System hardening through configuration management Preventive
    Configure the system to prevent unencrypted authenticator use. CC ID 04457 System hardening through configuration management Preventive
    Disable store passwords using reversible encryption. CC ID 01708 System hardening through configuration management Preventive
    Configure the system to encrypt authenticators. CC ID 06735 System hardening through configuration management Preventive
    Configure the system to mask authenticators. CC ID 02037 System hardening through configuration management Preventive
    Configure the authenticator policy to ban the use of usernames or user identifiers in authenticators. CC ID 05992 System hardening through configuration management Preventive
    Configure the system to refrain from specifying the type of information used as password hints. CC ID 13783 System hardening through configuration management Preventive
    Disable machine account password changes. CC ID 01737 System hardening through configuration management Preventive
    Configure the "Disable Remember Password" setting. CC ID 05270 System hardening through configuration management Preventive
    Configure the "Minimum password age" to organizational standards. CC ID 01703 System hardening through configuration management Preventive
    Configure the LILO/GRUB password. CC ID 01576 System hardening through configuration management Preventive
    Configure the system to use Apple's Keychain Access to store passwords and certificates. CC ID 04481 System hardening through configuration management Preventive
    Change the default password to Apple's Keychain. CC ID 04482 System hardening through configuration management Preventive
    Configure Apple's Keychain items to ask for the Keychain password. CC ID 04483 System hardening through configuration management Preventive
    Configure the Syskey Encryption Key and associated password. CC ID 05978 System hardening through configuration management Preventive
    Configure the "Accounts: Limit local account use of blank passwords to console logon only" setting. CC ID 04505 System hardening through configuration management Preventive
    Configure the "System cryptography: Force strong key protection for user keys stored in the computer" setting. CC ID 04534 System hardening through configuration management Preventive
    Configure interactive logon for accounts that do not have assigned authenticators in accordance with organizational standards. CC ID 05267 System hardening through configuration management Preventive
    Enable or disable remote connections from accounts with empty authenticators, as appropriate. CC ID 05269 System hardening through configuration management Preventive
    Configure the "Send LanMan compatible password" setting. CC ID 05271 System hardening through configuration management Preventive
    Configure the authenticator policy to ban or allow authenticators as words found in dictionaries, as appropriate. CC ID 05993 System hardening through configuration management Preventive
    Configure the authenticator policy to ban or allow authenticators as proper names, as necessary. CC ID 17030 System hardening through configuration management Preventive
    Set the most number of characters required for the BitLocker Startup PIN correctly. CC ID 06054 System hardening through configuration management Preventive
    Set the default folder for BitLocker recovery passwords correctly. CC ID 06055 System hardening through configuration management Preventive
    Configure the "Disable password strength validation for Peer Grouping" setting to organizational standards. CC ID 10866 System hardening through configuration management Preventive
    Configure the "Set the interval between synchronization retries for Password Synchronization" setting to organizational standards. CC ID 11185 System hardening through configuration management Preventive
    Configure the "Set the number of synchronization retries for servers running Password Synchronization" setting to organizational standards. CC ID 11187 System hardening through configuration management Preventive
    Configure the "Turn off password security in Input Panel" setting to organizational standards. CC ID 11296 System hardening through configuration management Preventive
    Configure the "Turn on the Windows to NIS password synchronization for users that have been migrated to Active Directory" setting to organizational standards. CC ID 11355 System hardening through configuration management Preventive
    Configure the authenticator display screen to organizational standards. CC ID 13794 System hardening through configuration management Preventive
    Configure the authenticator field to disallow memorized secrets found in the memorized secret list. CC ID 13808 System hardening through configuration management Preventive
    Configure the authenticator display screen to display the memorized secret as an option. CC ID 13806 System hardening through configuration management Preventive
    Configure the look-up secret authenticator to dispose of memorized secrets after their use. CC ID 13817 System hardening through configuration management Corrective
    Configure the memorized secret verifiers to refrain from allowing anonymous users to access memorized secret hints. CC ID 13823 System hardening through configuration management Preventive
    Configure the system to allow paste functionality for the authenticator field. CC ID 13819 System hardening through configuration management Preventive
    Configure the system to require successful authentication before an authenticator for a user account is changed. CC ID 13821 System hardening through configuration management Preventive
    Obscure authentication information during the login process. CC ID 15316 System hardening through configuration management Preventive
    Change authenticators, as necessary. CC ID 15315
    [Ensure access keys are rotated every 90 days or less (Automated) Description: Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests that you make to AWS. AWS users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. It is recommended that all access keys be regularly rotated. Rationale: Rotating access keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used. Access keys should be rotated to ensure that data cannot be accessed with an old key which might have been lost, cracked, or stolen. Audit: Perform the following to determine if access keys are rotated as prescribed: From Console: 1. Go to Management Console (https://console.aws.amazon.com/iam) 2. Click on Users 3. Click setting icon 4. Select Console last sign-in 5. Click Close 6. Ensure that Access key age is less than 90 days ago. note) None in the Access key age means the user has not used the access key. From Command Line: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d The access_key_1_last_rotated and the access_key_2_last_rotated fields in this file notes The date and time, in ISO 8601 date-time format, when the user's access key was created or last changed. If the user does not have an active access key, the value in this field is N/A (not applicable). Remediation: Perform the following to rotate access keys: From Console: 1. Go to Management Console (https://console.aws.amazon.com/iam) 2. Click on Users 3. Click on Security Credentials 4. As an Administrator o Click on Make Inactive for keys that have not been rotated in 90 Days 5. As an IAM User o Click on Make Inactive or Delete for keys which have not been rotated or used in 90 Days 6. 
Click on Create Access Key 7. Update programmatic call with new Access Key credentials From Command Line: 1. While the first access key is still active, create a second access key, which is active by default. Run the following command: aws iam create-access-key At this point, the user has two active access keys. 2. Update all applications and tools to use the new access key. 3. Determine whether the first access key is still in use by using this command: aws iam get-access-key-last-used 4. One approach is to wait several days and then check the old access key for any use before proceeding. Even if Step 3 indicates no use of the old key, it is recommended that you do not immediately delete the first access key. Instead, change the state of the first access key to Inactive using this command: aws iam update-access-key 5. Use only the new access key to confirm that your applications are working. Any applications and tools that still use the original access key will stop working at this point because they no longer have access to AWS resources. If you find such an application or tool, you can switch its state back to Active to reenable the first access key. Then return to Step 2 and update this application to use the new key. 6. After you wait some period of time to ensure that all applications and tools have been updated, you can delete the first access key with this command: aws iam delete-access-key 1.14]
    System hardening through configuration management Preventive
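The 90-day test in the audit step can be expressed as simple date arithmetic on the credential report's access_key_1_last_rotated / access_key_2_last_rotated fields. This sketch assumes GNU date (as on most Linux hosts) for ISO 8601 parsing; the sample timestamp is hypothetical.

```shell
# Age in days of an access key, given its last-rotated timestamp in the
# ISO 8601 format used by the credential report. Assumes GNU `date -d`.
key_age_days() {
  rotated=$(date -d "$1" +%s)
  now=$(date +%s)
  echo $(( (now - rotated) / 86400 ))
}

# Hypothetical timestamp; in practice read it from the credential report.
age=$(key_age_days "2023-01-15T10:34:56+00:00")
if [ "$age" -gt 90 ]; then echo "rotate: key is $age days old"; fi
```

Fields showing N/A in the report (no active key) should be skipped before applying this check.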
    Change all default authenticators. CC ID 15309 System hardening through configuration management Preventive
    Configure user accounts. CC ID 07036 System hardening through configuration management Preventive
    Configure accounts with administrative privilege. CC ID 07033
    [{does not exist} Ensure no 'root' user account access key exists (Automated) Description: The 'root' user account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account. It is recommended that all access keys associated with the 'root' user account be deleted. Rationale: Deleting access keys associated with the 'root' user account limits vectors by which the account can be compromised. Additionally, deleting the 'root' access keys encourages the creation and use of role based accounts that are least privileged. Audit: Perform the following to determine if the 'root' user account has access keys: From Console: 1. Login to the AWS Management Console. 2. Click Services. 3. Click IAM. 4. Click on Credential Report. 5. This will download a .csv file which contains credential usage for all IAM users within an AWS Account - open this file. 6. For the user, ensure the access_key_1_active and access_key_2_active fields are set to FALSE. From Command Line: Run the following command: aws iam get-account-summary | grep "AccountAccessKeysPresent" If no 'root' access keys exist the output will show "AccountAccessKeysPresent": 0,. If the output shows a "1", then 'root' keys exist and should be deleted. Remediation: Perform the following to delete active 'root' user access keys. From Console: 1. Sign in to the AWS Management Console as 'root' and open the IAM console at https://console.aws.amazon.com/iam/. 2. Click on at the top right and select My Security Credentials from the drop down list. 3. On the pop out screen Click on Continue to Security Credentials. 4. Click on Access Keys (Access Key ID and Secret Access Key). 5. Under the Status column (if there are any Keys which are active). 6. Click Delete (Note: Deleted keys cannot be recovered). 
Note: While a key can be made inactive, this inactive key will still show up in the CLI command from the audit procedure, and may lead to a key being falsely flagged as being non-compliant. 1.4]
    System hardening through configuration management Preventive
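The CLI audit above pipes `get-account-summary` through grep; the interpretation of the result can be sketched as follows. The sample JSON is a hypothetical fragment of the real summary output.

```shell
# Extract AccountAccessKeysPresent from `aws iam get-account-summary`
# output: 0 means the 'root' user has no access keys; 1 means keys exist
# and should be deleted per the remediation.
root_access_keys_present() {
  grep -o '"AccountAccessKeysPresent": [0-9]*' "$1" | grep -o '[0-9]*$'
}

# Hypothetical sample of the summary output.
cat > summary.json <<'EOF'
{ "SummaryMap": { "AccountAccessKeysPresent": 0, "AccountMFAEnabled": 1 } }
EOF

[ "$(root_access_keys_present summary.json)" -eq 0 ] && echo "no root access keys"
```

As the benchmark notes, an inactive root key still counts here, so a "1" warrants investigation even if the key is not currently usable.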
    Configure routing tables to organizational standards. CC ID 15438
[Ensure routing tables for VPC peering are "least access" (Manual) Description: Once a VPC peering connection is established, routing tables must be updated to establish any connections between the peered VPCs. These routes can be as specific as desired - even peering a VPC to only a single host on the other side of the connection. Rationale: Being highly selective in peering routing tables is a very effective way of minimizing the impact of breach as resources outside of these routes are inaccessible to the peered VPC. Audit: Review routing tables of peered VPCs for whether they route all subnets of each VPC and whether that is necessary to accomplish the intended purposes for peering the VPCs. From Command Line: 1. List all the route tables from a VPC and check if "GatewayId" is pointing to a VPC peering connection (e.g. pcx-1a2b3c4d) and if "DestinationCidrBlock" is as specific as desired. aws ec2 describe-route-tables --filter "Name=vpc-id,Values=" --query "RouteTables[*].{RouteTableId:RouteTableId, VpcId:VpcId, Routes:Routes, AssociatedSubnets:Associations[*].SubnetId}" Remediation: Remove and add route table entries to ensure that the least number of subnets or hosts as is required to accomplish the purpose for peering are routable. From Command Line: 1. For each route table containing routes non-compliant with your routing policy (which grants more than desired "least access"), delete the non-compliant route: aws ec2 delete-route --route-table-id --destination-cidr-block 2. Create a new compliant route: aws ec2 create-route --route-table-id --destination-cidr-block --vpc-peering-connection-id 5.5]
    System hardening through configuration management Preventive
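"As specific as desired" in the audit step can be turned into a concrete screen once an organization picks a threshold. The sketch below flags peering routes broader than /24; that threshold is a hypothetical policy choice, and the input is assumed to be the describe-route-tables output flattened to "CIDR gateway" pairs (the pcx- ID is the document's own example).

```shell
# Flag peering routes whose destination CIDR is broader than a /24
# (hypothetical organizational threshold). Input lines: "<cidr> <gateway-id>".
too_broad() {
  awk '$2 ~ /^pcx-/ { split($1, a, "/"); if (a[2] < 24) print $1 " via " $2 }'
}

printf '10.0.0.0/16 pcx-1a2b3c4d\n10.1.2.0/24 pcx-1a2b3c4d\n' | too_broad
# prints: 10.0.0.0/16 via pcx-1a2b3c4d
```

Routes this prints are candidates for the delete-route / create-route remediation above, replacing the broad CIDR with the narrowest block that still serves the peering's purpose.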
    Configure Services settings to organizational standards. CC ID 07434 System hardening through configuration management Preventive
    Configure AWS Config to organizational standards. CC ID 15440
    [Ensure AWS Config is enabled in all regions (Automated) Description: AWS Config is a web service that performs configuration management of supported AWS resources within your account and delivers log files to you. The recorded information includes the configuration item (AWS resource), relationships between configuration items (AWS resources), any configuration changes between resources. It is recommended AWS Config be enabled in all regions. Rationale: The AWS configuration item history captured by AWS Config enables security analysis, resource change tracking, and compliance auditing. Impact: It is recommended AWS Config be enabled in all regions. Audit: Process to evaluate AWS Config configuration per region From Console: 1. Sign in to the AWS Management Console and open the AWS Config console at https://console.aws.amazon.com/config/. 2. On the top right of the console select target Region. 3. If a Config recorder is enabled in this region, you should navigate to the Settings page from the navigation menu on the left hand side. If a Config recorder is not yet enabled in this region then you should select "Get Started". 4. Ensure "Record all resources supported in this region" is checked. 5. Ensure "Include global resources (e.g., AWS IAM resources)" is checked, unless it is enabled in another region (this is only required in one region) 6. Ensure the correct S3 bucket has been defined. 7. Ensure the correct SNS topic has been defined. 8. Repeat steps 2 to 7 for each region. From Command Line: 1. Run this command to show all AWS Config recorders and their properties: aws configservice describe-configuration-recorders 2. Evaluate the output to ensure that all recorders have a recordingGroup object which includes "allSupported": true. Additionally, ensure that at least one recorder has "includeGlobalResourceTypes": true Note: There is one more parameter "ResourceTypes" in recordingGroup object. 
We don't need to check the same as whenever we set "allSupported": true, AWS enforces resource types to be empty ("ResourceTypes":[]) Sample Output: { "ConfigurationRecorders": [ { "recordingGroup": { "allSupported": true, "resourceTypes": [], "includeGlobalResourceTypes": true }, "roleARN": "arn:aws:iam:::role/servicerole/", "name": "default" } ] } 3. Run this command to show the status for all AWS Config recorders: aws configservice describe-configuration-recorder-status 4. In the output, find recorders with name key matching the recorders that were evaluated in step 2. Ensure that they include "recording": true and "lastStatus": "SUCCESS" Remediation: To implement AWS Config configuration: From Console: 1. Select the region you want to focus on in the top right of the console 2. Click Services 3. Click Config 4. If a Config recorder is enabled in this region, you should navigate to the Settings page from the navigation menu on the left hand side. If a Config recorder is not yet enabled in this region then you should select "Get Started". 5. Select "Record all resources supported in this region" 6. Choose to include global resources (IAM resources) 7. Specify an S3 bucket in the same account or in another managed AWS account 8. Create an SNS Topic from the same AWS account or another managed AWS account From Command Line: 1. Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the AWS Config Service prerequisites. 2. Run this command to create a new configuration recorder: aws configservice put-configuration-recorder --configuration-recorder name=default,roleARN=arn:aws:iam::012345678912:role/myConfigRole --recordinggroup allSupported=true,includeGlobalResourceTypes=true 3. 
Create a delivery channel configuration file locally which specifies the channel attributes, populated from the prerequisites set up previously: { "name": "default", "s3BucketName": "my-config-bucket", "snsTopicARN": "arn:aws:sns:us-east-1:012345678912:my-config-notice", "configSnapshotDeliveryProperties": { "deliveryFrequency": "Twelve_Hours" } } 4. Run this command to create a new delivery channel, referencing the json configuration file made in the previous step: aws configservice put-delivery-channel --delivery-channel file://deliveryChannel.json 5. Start the configuration recorder by running the following command: aws configservice start-configuration-recorder --configuration-recorder-name default 3.3]
    System hardening through configuration management Preventive
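Step 2 of the CLI audit above (checking the recordingGroup object) can be mechanized. This grep-based sketch assumes pretty-printed `describe-configuration-recorders` output like the sample in the excerpt; a production check would parse the JSON and also confirm recorder status per step 4.

```shell
# Check `aws configservice describe-configuration-recorders` output for a
# recorder with allSupported and includeGlobalResourceTypes both true.
recorder_compliant() {
  grep -q '"allSupported": true' "$1" && grep -q '"includeGlobalResourceTypes": true' "$1"
}

# Sample output, abbreviated from the audit step above.
cat > recorders.json <<'EOF'
{
  "ConfigurationRecorders": [ {
      "recordingGroup": {
        "allSupported": true,
        "resourceTypes": [],
        "includeGlobalResourceTypes": true
      },
      "name": "default"
  } ]
}
EOF

recorder_compliant recorders.json && echo "recorder settings look compliant"
```

Remember the excerpt's note: includeGlobalResourceTypes only needs to be true on one recorder across regions, so apply that part of the check account-wide rather than per region.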
    Configure Logging settings in accordance with organizational standards. CC ID 07611 System hardening through configuration management Preventive
    Configure "CloudTrail" to organizational standards. CC ID 15443
    [Ensure CloudTrail is enabled in all regions (Automated) Description: AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail provides a history of AWS API calls for an account, including API calls made via the Management Console, SDKs, command line tools, and higher-level AWS services (such as CloudFormation). Rationale: The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. Additionally, • ensuring that a multi-regions trail exists will ensure that unexpected activity occurring in otherwise unused regions is detected • ensuring that a multi-regions trail exists will ensure that Global Service Logging is enabled for a trail by default to capture recording of events generated on AWS global services • for a multi-regions trail, ensuring that management events configured for all type of Read/Writes ensures recording of management operations that are performed on all resources in an AWS account Impact: S3 lifecycle features can be used to manage the accumulation and management of logs over time. See the following AWS resource for more information on these features: 1. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html Audit: Perform the following to determine if CloudTrail is enabled for all regions: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. Click on Trails on the left navigation pane • You will be presented with a list of trails across all regions 3. Ensure at least one Trail has Yes specified in the Multi-region trail column 4. Click on a trail via the link in the Name column 5. Ensure Logging is set to ON 6. 
Ensure Multi-region trail is set to Yes 7. In section Management Events ensure API activity set to ALL From Command Line: aws cloudtrail describe-trails Ensure IsMultiRegionTrail is set to true aws cloudtrail get-trail-status --name Ensure IsLogging is set to true aws cloudtrail get-event-selectors --trail-name Ensure there is at least one fieldSelector for a Trail that equals Management This should NOT output any results for Field: "readOnly" if either true or false is returned one of the checkboxes is not selected for read or write Example of correct output: "TrailARN": "", "AdvancedEventSelectors": [ { "Name": "Management events selector", "FieldSelectors": [ { "Field": "eventCategory", "Equals": [ "Management" ] Remediation: Perform the following to enable global (Multi-region) CloudTrail logging: From Console: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/cloudtrail 2. Click on Trails on the left navigation pane 3. Click Get Started Now , if presented • Click Add new trail • Enter a trail name in the Trail name box • A trail created in the console is a multi-region trail by default • Specify an S3 bucket name in the S3 bucket box • Specify the AWS KMS alias under the Log file SSE-KMS encryption section or create a new key • Click Next 4. Ensure Management events check box is selected. 5. Ensure both Read and Write are check under API activity 6. Click Next 7. review your trail settings and click Create trail From Command Line: aws cloudtrail create-trail --name --bucket-name --is-multi-region-trail aws cloudtrail update-trail --name --is-multi-region-trail Note: Creating CloudTrail via CLI without providing any overriding options configures Management Events to set All type of Read/Writes by default. Default Value: Not Enabled 3.1]
    System hardening through configuration management Preventive
    Configure "CloudTrail log file validation" to organizational standards. CC ID 15437
    [Ensure CloudTrail log file validation is enabled (Automated) Description: CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3. These digest files can be used to determine whether a log file was changed, deleted, or unchanged after CloudTrail delivered the log. It is recommended that file validation be enabled on all CloudTrails. Rationale: Enabling log file validation will provide additional integrity checking of CloudTrail logs. Audit: Perform the following on each trail to determine if log file validation is enabled: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. Click on Trails on the left navigation pane 3. For Every Trail: • Click on a trail via the link in the Name column • Under the General details section, ensure Log file validation is set to Enabled From Command Line: aws cloudtrail describe-trails Ensure LogFileValidationEnabled is set to true for each trail Remediation: Perform the following to enable log file validation on a given trail: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. Click on Trails on the left navigation pane 3. Click on target trail 4. Within the General details section click edit 5. Under the Advanced settings section 6. Check the enable box under Log file validation 7. Click Save changes From Command Line: aws cloudtrail update-trail --name <trail_name> --enable-log-file-validation Note that periodic validation of logs using these digests can be performed by running the following command: aws cloudtrail validate-logs --trail-arn <trail_arn> --start-time <start_time> --end-time <end_time> Default Value: Not Enabled 3.2]
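The per-trail check in the audit step can be expressed as a small filter over the parsed `describe-trails` output. A minimal sketch; the function name is an assumption, and trails with the flag absent are treated as non-compliant:

```python
def trails_missing_log_file_validation(describe_trails_output):
    """Names of trails where LogFileValidationEnabled is not true.

    describe_trails_output: parsed JSON from `aws cloudtrail describe-trails`.
    A missing LogFileValidationEnabled key counts as disabled.
    """
    return [t.get("Name") for t in describe_trails_output.get("trailList", [])
            if not t.get("LogFileValidationEnabled")]
```

An empty return value means every listed trail passes this recommendation.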
    System hardening through configuration management Preventive
    Configure "VPC flow logging" to organizational standards. CC ID 15436
    [Ensure VPC flow logging is enabled in all VPCs (Automated) Description: VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. It is recommended that VPC Flow Logs be enabled for packet "Rejects" for VPCs. Rationale: VPC Flow Logs provide visibility into network traffic that traverses the VPC and can be used to detect anomalous traffic or insight during security workflows. Impact: By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365-day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods: 1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html Audit: Perform the following to determine if VPC Flow logs are enabled: From Console: 1. Sign into the management console 2. Select Services then VPC 3. In the left navigation pane, select Your VPCs 4. Select a VPC 5. In the right pane, select the Flow Logs tab. 6. Ensure a Log Flow exists that has Active in the Status column. From Command Line: 1. Run describe-vpcs command (OSX/Linux/UNIX) to list the VPC networks available in the current AWS region: aws ec2 describe-vpcs --region <region> --query Vpcs[].VpcId 2. The command output returns the VpcId available in the selected region. 3. 
Run describe-flow-logs command (OSX/Linux/UNIX) using the VPC ID to determine if the selected virtual network has the Flow Logs feature enabled: aws ec2 describe-flow-logs --filter "Name=resource-id,Values=<vpc_id>" 4. If there are no Flow Logs created for the selected VPC, the command output will return an empty list []. 5. Repeat step 3 for other VPCs available in the same region. 6. Change the region by updating --region and repeat steps 1 - 5 for all the VPCs. Remediation: Perform the following to enable VPC Flow Logs: From Console: 1. Sign into the management console 2. Select Services then VPC 3. In the left navigation pane, select Your VPCs 4. Select a VPC 5. In the right pane, select the Flow Logs tab. 6. If no Flow Log exists, click Create Flow Log 7. For Filter, select Reject 8. Enter in a Role and Destination Log Group 9. Click Create Log Flow 10. Click on CloudWatch Logs Group Note: Setting the filter to "Reject" will dramatically reduce the logging data accumulation for this recommendation and provide sufficient information for the purposes of breach detection, research and remediation. However, during periods of least privilege security group engineering, setting the filter to "All" can be very helpful in discovering existing traffic flows required for proper operation of an already running environment. From Command Line: 1. Create a policy document and name it as role_policy_document.json and paste the following content: { "Version": "2012-10-17", "Statement": [ { "Sid": "test", "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } 2. Create another policy document and name it as iam_policy.json and paste the following content: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action":[ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:DescribeLogGroups", "logs:DescribeLogStreams", "logs:PutLogEvents", "logs:GetLogEvents", "logs:FilterLogEvents" ], "Resource": "*" } ] } 3. 
Run the below command to create an IAM role: aws iam create-role --role-name <aws_support_iam_role> --assume-role-policy-document file://role_policy_document.json 4. Run the below command to create an IAM policy: aws iam create-policy --policy-name <iam_policy_name> --policy-document file://iam_policy.json 5. Run the attach-role-policy command using the IAM policy ARN returned at the previous step to attach the policy to the IAM role (if the command succeeds, no output is returned): aws iam attach-role-policy --policy-arn arn:aws:iam::<aws_account_number>:policy/<iam_policy_name> --role-name <aws_support_iam_role> 6. Run describe-vpcs to get the VpcId available in the selected region: aws ec2 describe-vpcs --region <region> 7. The command output should return the VPC Id available in the selected region. 8. Run create-flow-logs to create a flow log for the vpc: aws ec2 create-flow-logs --resource-type VPC --resource-ids <vpc_id> --traffic-type REJECT --log-group-name <log_group_name> --deliver-logs-permission-arn <iam_role_arn> 9. Repeat step 8 for other vpcs available in the selected region. 10. Change the region by updating --region and repeat the remediation procedure for other vpcs. 3.7]
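The region-by-region audit loop above reduces to checking, for each VPC, that `describe-flow-logs` returns at least one active flow log. A minimal sketch over already-parsed CLI output; the function name and the `flow_logs_by_vpc` mapping (one parsed `describe-flow-logs` result per VPC ID) are assumptions:

```python
def vpcs_without_active_flow_logs(vpc_ids, flow_logs_by_vpc):
    """VPC IDs lacking at least one flow log whose FlowLogStatus is ACTIVE.

    vpc_ids: list of VpcId strings from `aws ec2 describe-vpcs`.
    flow_logs_by_vpc: dict mapping each VpcId to the parsed output of
    `aws ec2 describe-flow-logs --filter "Name=resource-id,Values=<vpc_id>"`.
    """
    missing = []
    for vpc_id in vpc_ids:
        flow_logs = flow_logs_by_vpc.get(vpc_id, {}).get("FlowLogs", [])
        if not any(f.get("FlowLogStatus") == "ACTIVE" for f in flow_logs):
            missing.append(vpc_id)
    return missing
```

VPCs that `describe-flow-logs` returned nothing for are reported as non-compliant, matching the empty-list case in audit step 4.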
    System hardening through configuration management Preventive
    Configure "object-level logging" to organizational standards. CC ID 15433
    [Ensure that Object-level logging for write events is enabled for S3 bucket (Automated) Description: S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don't log data events and so it is recommended to enable Object-level logging for S3 buckets. Rationale: Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity within your S3 Buckets using Amazon CloudWatch Events. Impact: Enabling logging for these object level events may significantly increase the number of events logged and may incur additional cost. Audit: From Console: 1. Login to the AWS Management Console and navigate to CloudTrail dashboard at https://console.aws.amazon.com/cloudtrail/ 2. In the left panel, click Trails and then click on the CloudTrail Name that you want to examine. 3. Review General details 4. Confirm that Multi-region trail is set to Yes 5. Scroll down to Data events 6. Confirm that it reads: Data Events: S3 Log selector template: Log all events If 'basic event selectors' is being used it should read: Data events: S3 Bucket Name: All current and future S3 buckets Write: Enabled 7. Repeat steps 2 to 6 to verify Multi-region trail and Data events logging of S3 buckets in each CloudTrail trail. If the CloudTrail trails do not have multi-region and data events configured for S3, refer to the remediation below. From Command Line: 1. Run list-trails command to list the names of all Amazon CloudTrail trails currently available in all AWS regions: aws cloudtrail list-trails 2. The command output will be a list of all the trail names, including: "TrailARN": "arn:aws:cloudtrail:<region>:<account_id>:trail/<trail_name>", "Name": "<trail_name>", "HomeRegion": "<region>" 3. Next run the get-trail command to determine Multi-region: aws cloudtrail get-trail --name <trail_name> --region <region> 4. 
The command output should include: "IsMultiRegionTrail": true, 5. Next run get-event-selectors command using the Name of the trail and the region returned in step 2 to determine if the Data events logging feature is enabled within the selected CloudTrail trail for all S3 buckets: aws cloudtrail get-event-selectors --region <region> --trail-name <trail_name> --query EventSelectors[*].DataResources[] 6. The command output should be an array that contains the configuration of the AWS resource (S3 bucket) defined for the Data events selector. "Type": "AWS::S3::Object", "Values": [ "arn:aws:s3" 7. If the get-event-selectors command returns an empty array '[]', the Data events are not included in the selected AWS CloudTrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded. 8. Repeat steps 1 to 5 for auditing each CloudTrail trail to determine if Data events for S3 are covered. If Multi-region is not set to true and the Data events do not show S3 defined as shown, refer to the remediation procedure below. Remediation: From Console: 1. Login to the AWS Management Console and navigate to S3 dashboard at https://console.aws.amazon.com/s3/ 2. In the left navigation panel, click buckets and then click on the S3 Bucket Name that you want to examine. 3. Click the Properties tab to see the bucket configuration in detail. 4. In the AWS CloudTrail data events section, select the CloudTrail name for the recording activity. You can choose an existing CloudTrail or create a new one by clicking the Configure in CloudTrail button or navigating to the CloudTrail console at https://console.aws.amazon.com/cloudtrail/ 5. Once the CloudTrail is selected, select the Data Events check box. 6. Select S3 from the Data event type drop-down. 7. Select Log all events from the Log selector template drop-down. 8. Repeat steps 2 to 7 to enable object-level logging of write events for other S3 buckets. From Command Line: 1. 
To enable object-level data events logging for S3 buckets within your AWS account, run put-event-selectors command using the name of the trail that you want to reconfigure as identifier: aws cloudtrail put-event-selectors --region <region> --trail-name <trail_name> --event-selectors '[{ "ReadWriteType": "WriteOnly", "IncludeManagementEvents":true, "DataResources": [{ "Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::<s3_bucket_name>/"] }] }]' 2. The command output will be the object-level event trail configuration. 3. If you want to enable it for all buckets at once then change the Values parameter to ["arn:aws:s3"] in the command given above. 4. Repeat step 1 for each s3 bucket to update object-level logging of write events. 5. Change the AWS region by updating the --region command parameter and perform the process for other regions. 3.8
    Ensure that Object-level logging for read events is enabled for S3 bucket (Automated) Description: S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don't log data events and so it is recommended to enable Object-level logging for S3 buckets. Rationale: Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity using Amazon CloudWatch Events. Impact: Enabling logging for these object level events may significantly increase the number of events logged and may incur additional cost. Audit: From Console: 1. Login to the AWS Management Console and navigate to CloudTrail dashboard at https://console.aws.amazon.com/cloudtrail/ 2. In the left panel, click Trails and then click on the CloudTrail Name that you want to examine. 3. Review General details 4. Confirm that Multi-region trail is set to Yes 5. Scroll down to Data events 6. Confirm that it reads: Data Events: S3 Log selector template: Log all events If 'basic event selectors' is being used it should read: Data events: S3 Bucket Name: All current and future S3 buckets Read: Enabled 7. Repeat steps 2 to 6 to verify Multi-region trail and Data events logging of S3 buckets in each CloudTrail trail. If the CloudTrail trails do not have multi-region and data events configured for S3, refer to the remediation below. From Command Line: 1. Run describe-trails command to list the names of all Amazon CloudTrail trails currently available in the selected AWS region: aws cloudtrail describe-trails --region <region> --output table --query trailList[*].Name 2. The command output will be a table of the requested trail names. 3. 
Run get-event-selectors command using the name of the trail returned at the previous step and custom query filters to determine if the Data events logging feature is enabled within the selected CloudTrail trail configuration for s3 bucket resources: aws cloudtrail get-event-selectors --region <region> --trail-name <trail_name> --query EventSelectors[*].DataResources[] 4. The command output should be an array that contains the configuration of the AWS resource (S3 bucket) defined for the Data events selector. 5. If the get-event-selectors command returns an empty array, the Data events are not included in the selected AWS CloudTrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded. 6. Repeat steps 1 to 5 for auditing each s3 bucket to identify other trails that are missing the capability to log Data events. 7. Change the AWS region by updating the --region command parameter and perform the audit process for other regions. Remediation: From Console: 1. Login to the AWS Management Console and navigate to S3 dashboard at https://console.aws.amazon.com/s3/ 2. In the left navigation panel, click buckets and then click on the S3 Bucket Name that you want to examine. 3. Click the Properties tab to see the bucket configuration in detail. 4. In the AWS CloudTrail data events section, select the CloudTrail name for the recording activity. You can choose an existing CloudTrail or create a new one by clicking the Configure in CloudTrail button or navigating to the CloudTrail console at https://console.aws.amazon.com/cloudtrail/ 5. Once the CloudTrail is selected, select the Data Events check box. 6. Select S3 from the Data event type drop-down. 7. Select Log all events from the Log selector template drop-down. 8. Repeat steps 2 to 7 to enable object-level logging of read events for other S3 buckets. From Command Line: 1. 
To enable object-level data events logging for S3 buckets within your AWS account, run put-event-selectors command using the name of the trail that you want to reconfigure as identifier: aws cloudtrail put-event-selectors --region <region> --trail-name <trail_name> --event-selectors '[{ "ReadWriteType": "ReadOnly", "IncludeManagementEvents":true, "DataResources": [{ "Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::<s3_bucket_name>/"] }] }]' 2. The command output will be the object-level event trail configuration. 3. If you want to enable it for all buckets at once then change the Values parameter to ["arn:aws:s3"] in the command given above. 4. Repeat step 1 for each s3 bucket to update object-level logging of read events. 5. Change the AWS region by updating the --region command parameter and perform the process for other regions. 3.9]
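The audit for both data-event controls comes down to inspecting the `get-event-selectors` output for an S3 object resource. A minimal sketch over parsed JSON; the function name is an assumption, and it checks both the classic `EventSelectors` shape and the `AdvancedEventSelectors` shape shown earlier in this section:

```python
def s3_object_data_events_enabled(event_selectors_output):
    """True if a selector covers S3 object data events (Type AWS::S3::Object).

    event_selectors_output: parsed JSON from `aws cloudtrail get-event-selectors`.
    """
    # Classic (basic) event selectors.
    for sel in event_selectors_output.get("EventSelectors", []):
        for res in sel.get("DataResources", []):
            if res.get("Type") == "AWS::S3::Object":
                return True
    # Advanced event selectors: look for a resources.type field selector.
    for sel in event_selectors_output.get("AdvancedEventSelectors", []):
        for fs in sel.get("FieldSelectors", []):
            if fs.get("Field") == "resources.type" and "AWS::S3::Object" in fs.get("Equals", []):
                return True
    return False
```

This does not distinguish ReadOnly from WriteOnly selectors; a fuller check would also compare `ReadWriteType` against the control being audited.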
    System hardening through configuration management Preventive
    Configure all logs to capture auditable events or actionable events. CC ID 06332
    System hardening through configuration management Preventive
    Configure the log to capture AWS Organizations changes. CC ID 15445
    [Ensure AWS Organizations changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for AWS Organizations changes made in the master AWS Account. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring AWS Organizations changes can help you prevent any unwanted, accidental or intentional modifications that may lead to unauthorized access or other security breaches. This monitoring technique helps you to ensure that any unexpected changes performed within your AWS Organizations can be investigated and any unwanted changes can be rolled back. Audit: If you are using CloudTrails and CloudWatch, perform the following: 1. Ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: • Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify multi-region CloudTrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn note <cloudtrail_log_group_name> Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<account_id>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> Ensure IsLogging is set to TRUE • Ensure the identified multi-region CloudTrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <trail_name> • Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All. 2. 
Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>" 3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = "AcceptHandshake") || ($.eventName = "AttachPolicy") || ($.eventName = "CreateAccount") || ($.eventName = "CreateOrganizationalUnit") || ($.eventName = "CreatePolicy") || ($.eventName = "DeclineHandshake") || ($.eventName = "DeleteOrganization") || ($.eventName = "DeleteOrganizationalUnit") || ($.eventName = "DeletePolicy") || ($.eventName = "DetachPolicy") || ($.eventName = "DisablePolicyType") || ($.eventName = "EnablePolicyType") || ($.eventName = "InviteAccountToOrganization") || ($.eventName = "LeaveOrganization") || ($.eventName = "MoveAccount") || ($.eventName = "RemoveAccountFromOrganization") || ($.eventName = "UpdatePolicy") || ($.eventName = "UpdateOrganizationalUnit")) }" 4. Note the <organizations_changes_metric> value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the <organizations_changes_metric> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<organizations_changes_metric>`]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> at least one subscription should have "SubscriptionArn" with a valid AWS ARN. Example of valid "SubscriptionArn": "arn:aws:sns:<region>:<account_id>:<topic_name>:<subscription_id>" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. 
Create a metric filter based on the filter pattern provided, which checks for AWS Organizations changes, and the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name <organizations_changes_filter> --metric-transformations metricName=<organizations_changes_metric>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = "AcceptHandshake") || ($.eventName = "AttachPolicy") || ($.eventName = "CreateAccount") || ($.eventName = "CreateOrganizationalUnit") || ($.eventName = "CreatePolicy") || ($.eventName = "DeclineHandshake") || ($.eventName = "DeleteOrganization") || ($.eventName = "DeleteOrganizationalUnit") || ($.eventName = "DeletePolicy") || ($.eventName = "DetachPolicy") || ($.eventName = "DisablePolicyType") || ($.eventName = "EnablePolicyType") || ($.eventName = "InviteAccountToOrganization") || ($.eventName = "LeaveOrganization") || ($.eventName = "MoveAccount") || ($.eventName = "RemoveAccountFromOrganization") || ($.eventName = "UpdatePolicy") || ($.eventName = "UpdateOrganizationalUnit")) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <organizations_changes_alarm> --metric-name <organizations_changes_metric> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.15]
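Audit step 3 above is a coverage check: the metric filter pattern must mention every Organizations API call the benchmark lists. The sketch below (an illustration, not part of the benchmark; the names are assumptions) does a coarse substring check over a pattern string — note that a substring test cannot distinguish, say, DeleteOrganization from DeleteOrganizationalUnit, so it is a first-pass screen only:

```python
# Event names the CIS filter pattern for AWS Organizations changes must cover.
ORG_EVENTS = ("AcceptHandshake", "AttachPolicy", "CreateAccount",
              "CreateOrganizationalUnit", "CreatePolicy", "DeclineHandshake",
              "DeleteOrganization", "DeleteOrganizationalUnit", "DeletePolicy",
              "DetachPolicy", "DisablePolicyType", "EnablePolicyType",
              "InviteAccountToOrganization", "LeaveOrganization", "MoveAccount",
              "RemoveAccountFromOrganization", "UpdatePolicy",
              "UpdateOrganizationalUnit")

def missing_event_names(filter_pattern, required=ORG_EVENTS):
    """Event names the metric filter pattern fails to mention (substring check)."""
    return [e for e in required if e not in filter_pattern]
```

An empty result means every required event name appears somewhere in the pattern; a deeper check would parse the pattern syntax.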
    System hardening through configuration management Preventive
    Configure the log to capture Identity and Access Management policy changes. CC ID 15442
    [Ensure IAM policy changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for changes made to Identity and Access Management (IAM) policies. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring changes to IAM policies will help ensure authentication and authorization controls remain intact. Impact: Monitoring these changes may cause a number of "false positives", more so in larger environments. This alert may need more tuning than others to eliminate some of those erroneous alerts. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify multi-region CloudTrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn note <cloudtrail_log_group_name> Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<account_id>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> ensure IsLogging is set to TRUE • Ensure the identified multi-region CloudTrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <trail_name> Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. 
Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>" 3. Ensure the output from the above command contains the following: "filterPattern": "{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}" 4. Note the <iam_changes_metric> value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the <iam_changes_metric> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<iam_changes_metric>`]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> at least one subscription should have "SubscriptionArn" with a valid AWS ARN. Example of valid "SubscriptionArn": "arn:aws:sns:<region>:<account_id>:<topic_name>:<subscription_id>" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided, which checks for IAM policy changes, and the <cloudtrail_log_group_name> taken from audit step 1. 
aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name <iam_changes_filter> --metric-transformations metricName=<iam_changes_metric>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <iam_changes_alarm> --metric-name <iam_changes_metric> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.4]
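Audit step 7 in each of these monitoring controls checks that the SNS topic has at least one confirmed subscriber. The sketch below (an illustration; the function name is an assumption) inspects the parsed output of `aws sns list-subscriptions-by-topic`, where unconfirmed subscriptions report "PendingConfirmation" instead of a full ARN:

```python
def has_confirmed_subscriber(list_subscriptions_output):
    """True if at least one subscription carries a real SNS subscription ARN.

    list_subscriptions_output: parsed JSON from
    `aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>`.
    """
    for sub in list_subscriptions_output.get("Subscriptions", []):
        if sub.get("SubscriptionArn", "").startswith("arn:aws:sns:"):
            return True
    return False
```

The same helper applies to every alarm in this section, since they all share the one SNS topic recommended in remediation step 2.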
    System hardening through configuration management Preventive
    Configure the log to capture management console sign-in without multi-factor authentication. CC ID 15441
    [Ensure management console sign-in without MFA is monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for console logins that are not protected by multi-factor authentication (MFA). Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring for single-factor console logins will increase visibility into accounts that are not protected by MFA. These types of accounts are more susceptible to compromise and unauthorized access. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify multi-region CloudTrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn note <cloudtrail_log_group_name> Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<account_id>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> Ensure in the output that IsLogging is set to TRUE • Ensure the identified multi-region CloudTrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <trail_name> Ensure in the output there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. 
Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>" 3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") }" Or (to reduce false positives in case Single Sign-On (SSO) is used in the organization): "filterPattern": "{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") && ($.userIdentity.type = "IAMUser") && ($.responseElements.ConsoleLogin = "Success") }" 4. Note the <no_mfa_console_signin_metric> value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the <no_mfa_console_signin_metric> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<no_mfa_console_signin_metric>`]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> at least one subscription should have "SubscriptionArn" with a valid AWS ARN. Example of valid "SubscriptionArn": "arn:aws:sns:<region>:<account_id>:<topic_name>:<subscription_id>" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided, which checks for AWS Management Console sign-in without MFA, and the <cloudtrail_log_group_name> taken from audit step 1. 
Use Command: aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name <no_mfa_console_signin_filter> --metric-transformations metricName=<no_mfa_console_signin_metric>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") }' Or (to reduce false positives in case Single Sign-On (SSO) is used in the organization): aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name <no_mfa_console_signin_filter> --metric-transformations metricName=<no_mfa_console_signin_metric>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") && ($.userIdentity.type = "IAMUser") && ($.responseElements.ConsoleLogin = "Success") }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <no_mfa_console_signin_alarm> --metric-name <no_mfa_console_signin_metric> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.2]
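The stricter (SSO-friendly) filter pattern above can be mirrored in ordinary code to sanity-check it against sample CloudTrail events before deploying the metric filter. A minimal sketch; the function name is an assumption, and the event dict follows the CloudTrail record shape the pattern references ($.eventName, $.additionalEventData.MFAUsed, etc.):

```python
def matches_no_mfa_login_filter(event):
    """True if a CloudTrail record matches the stricter no-MFA sign-in pattern:
    an IAM user's successful console login where MFAUsed is not "Yes"."""
    return (event.get("eventName") == "ConsoleLogin"
            and event.get("additionalEventData", {}).get("MFAUsed") != "Yes"
            and event.get("userIdentity", {}).get("type") == "IAMUser"
            and event.get("responseElements", {}).get("ConsoleLogin") == "Success")
```

Feeding recent sign-in records through this predicate is a quick way to estimate how noisy the resulting alarm will be in a given account.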
    System hardening through configuration management Preventive
    Configure the log to capture route table changes. CC ID 15439
    [Ensure route table changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. Routing tables are used to route network traffic between subnets and to network gateways. It is recommended that a metric filter and alarm be established for changes to route tables. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring changes to route tables will help ensure that all VPC traffic flows through an expected path and prevent any accidental or intentional modifications that may lead to uncontrolled network traffic. An alarm should be triggered every time an AWS API call is performed to create, replace, delete, or disassociate a Route Table. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. 
Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From value associated with CloudWatchLogsLogGroupArn note Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure Identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. Ensure the output from the above command contains the following: "filterPattern": "{($.eventSource = ec2.amazonaws.com) && ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }" 4. Note the value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn at least one subscription should have "SubscriptionArn" with valid aws ARN. Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. 
Create a metric filter based on filter pattern provided which checks for route table changes and the taken from audit step 1. aws logs put-metric-filter --log-group-name --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn --protocol --notification-endpoint Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions 4.12]
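Because the flattened text above drops the bracketed placeholder values, it may help to see remediation step 1 assembled with the elided arguments made explicit. This is only a sketch: the log group, filter, and metric names passed in below are hypothetical stand-ins you would substitute from your own environment; the flags and filter pattern are the ones the benchmark prescribes.

```python
# Hypothetical assembly of remediation step 1 for route table changes.
# log_group_name / filter_name / metric_name are stand-ins for the values
# the benchmark text leaves as placeholders.

ROUTE_TABLE_PATTERN = (
    "{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || "
    "($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || "
    "($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || "
    "($.eventName = DisassociateRouteTable) }"
)

def put_metric_filter_cmd(log_group_name: str, filter_name: str, metric_name: str) -> list:
    """Return the aws CLI argument vector for remediation step 1."""
    return [
        "aws", "logs", "put-metric-filter",
        "--log-group-name", log_group_name,
        "--filter-name", filter_name,
        "--metric-transformations",
        f"metricName={metric_name},metricNamespace=CISBenchmark,metricValue=1",
        "--filter-pattern", ROUTE_TABLE_PATTERN,
    ]
```

Passing the pattern as a single argument avoids shell-quoting mistakes, which is where the single quotes in the benchmark's command line usually go wrong.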
    System hardening through configuration management Preventive
    Configure the log to capture virtual private cloud changes. CC ID 15435
    [Ensure VPC changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is possible to have more than 1 VPC within an account, in addition it is also possible to create a peer connection between 2 VPCs enabling network traffic to route between VPCs. It is recommended that a metric filter and alarm be established for changes made to VPCs. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. VPCs in AWS are logically isolated virtual networks that can be used to launch AWS resources. Monitoring changes to VPC configuration will help ensure VPC traffic flow is not getting impacted. Changes to VPCs can impact network accessibility from the public internet and additionally impact VPC traffic flow to and from resources launched in the VPC. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. 
Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From value associated with CloudWatchLogsLogGroupArn note Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure Identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }" 4. Note the value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn at least one subscription should have "SubscriptionArn" with valid aws ARN. 
Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on filter pattern provided which checks for VPC changes and the taken from audit step 1. aws logs put-metric-filter --log-group-name --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn --protocol --notification-endpoint Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions 4.14]
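Audit step 3 can be spot-checked programmatically. A minimal sketch follows, assuming the JSON shape that `aws logs describe-metric-filters` returns (a top-level `metricFilters` array whose entries carry a `filterPattern` string); the substring check is deliberately loose and does not parse the pattern grammar.

```python
import json

# Hypothetical audit helper for step 3: given the JSON printed by
# `aws logs describe-metric-filters`, check that some filter's pattern
# names every VPC-change API call the benchmark lists.
REQUIRED_VPC_EVENTS = [
    "CreateVpc", "DeleteVpc", "ModifyVpcAttribute",
    "AcceptVpcPeeringConnection", "CreateVpcPeeringConnection",
    "DeleteVpcPeeringConnection", "RejectVpcPeeringConnection",
    "AttachClassicLinkVpc", "DetachClassicLinkVpc",
    "DisableVpcClassicLink", "EnableVpcClassicLink",
]

def has_vpc_change_filter(describe_output: str) -> bool:
    for mf in json.loads(describe_output).get("metricFilters", []):
        pattern = mf.get("filterPattern", "")
        if all(name in pattern for name in REQUIRED_VPC_EVENTS):
            return True
    return False
```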
    System hardening through configuration management Preventive
    Configure the log to capture changes to encryption keys. CC ID 15432
[Ensure disabling or scheduled deletion of customer created CMKs is monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for customer created CMKs which have changed state to disabled or scheduled deletion. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Data encrypted with disabled or deleted keys will no longer be accessible. Changes in the state of a CMK should be monitored to make sure the change is intentional. Impact: Creation, storage, and management of CMK may create additional labor requirements compared to the use of Provider Managed Keys. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. 
Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From value associated with CloudWatchLogsLogGroupArn note Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure Identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. Ensure the output from the above command contains the following: "filterPattern": "{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }" 4. Note the value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn at least one subscription should have "SubscriptionArn" with valid aws ARN. Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on filter pattern provided which checks for disabled or scheduled for deletion CMK's and the taken from audit step 1. 
aws logs put-metric-filter --log-group-name --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn --protocol --notification-endpoint Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions 4.7]
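In plain terms, the KMS pattern fires only when both conditions hold: the event came from kms.amazonaws.com and names one of the two state-changing calls. A sketch of that logic in plain Python (illustrative only, not the CloudWatch pattern engine):

```python
# Sketch of the CMK state-change filter logic: eventSource must be KMS
# AND the call must be one of the two that disable or schedule deletion.
CMK_STATE_EVENTS = {"DisableKey", "ScheduleKeyDeletion"}

def matches_cmk_state_change(event: dict) -> bool:
    return (
        event.get("eventSource") == "kms.amazonaws.com"
        and event.get("eventName") in CMK_STATE_EVENTS
    )
```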
    System hardening through configuration management Preventive
    Configure the log to capture unauthorized API calls. CC ID 15429
    [Ensure unauthorized API calls are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for unauthorized API calls. Rationale: Monitoring unauthorized API calls will help reduce time to detect malicious activity and can alert you to a potential security incident. CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Impact: This alert may be triggered by normal read-only console activities that attempt to opportunistically gather optional information, but gracefully fail if they don't have permissions. If an excessive number of alerts are being generated then an organization may wish to consider adding read access to the limited IAM user permissions simply to quiet the alerts. In some cases doing this may allow the users to actually view some areas of the system - any additional access given should be reviewed for alignment with the original limited IAM user intent. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. 
Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From value associated with "Name": note • From value associated with "CloudWatchLogsLogGroupArn" note Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure Identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name <"Name" as shown in describe-trails> Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for this that you captured in step 1: aws logs describe-metric-filters --log-group-name "" 3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.errorCode ="*UnauthorizedOperation") || ($.errorCode ="AccessDenied*") && ($.sourceIPAddress!="delivery.logs.amazonaws.com") && ($.eventName!="HeadBucket") }", 4. Note the "filterName" value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4. aws cloudwatch describe-alarms --query "MetricAlarms[?MetricName == `unauthorized_api_calls_metric`]" 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn at least one subscription should have "SubscriptionArn" with valid aws ARN. Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. 
Create a metric filter based on filter pattern provided which checks for unauthorized API calls and the taken from audit step 1. aws logs put-metric-filter --log-group-name "cloudtrail_log_group_name" --filter-name "" --metric-transformations metricName=unauthorized_api_calls_metric,metricNamespace=CISBenchmark,metricValue=1 --filter-pattern "{ ($.errorCode ="*UnauthorizedOperation") || ($.errorCode ="AccessDenied*") && ($.sourceIPAddress!="delivery.logs.amazonaws.com") && ($.eventName!="HeadBucket") }" Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name Note: you can execute this command once and then re-use the same topic for all monitoring alarms. Note: Capture the TopicArn displayed when creating the SNS Topic in Step 2. 3. Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn --protocol --notification-endpoint Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name "unauthorized_api_calls_alarm" --metric-name "unauthorized_api_calls_metric" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace "CISBenchmark" --alarm-actions 4.1]
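Note the wildcard semantics in this pattern: "*UnauthorizedOperation" matches error codes ending in UnauthorizedOperation (e.g. Client.UnauthorizedOperation), while "AccessDenied*" matches codes beginning with AccessDenied. The sketch below (plain Python, not the CloudWatch engine) implements the apparent intent of the pattern, namely that the log-delivery and HeadBucket exclusions suppress both error-code branches:

```python
from fnmatch import fnmatch

# Illustrative reading of the unauthorized-API-call pattern:
# (errorCode matches either wildcard) AND NOT (known-noisy source or event).
def matches_unauthorized_call(event: dict) -> bool:
    code = event.get("errorCode", "")
    denied = fnmatch(code, "*UnauthorizedOperation") or fnmatch(code, "AccessDenied*")
    noisy = (
        event.get("sourceIPAddress") == "delivery.logs.amazonaws.com"
        or event.get("eventName") == "HeadBucket"
    )
    return denied and not noisy
```

Events with no errorCode at all (successful calls) never match, which keeps the metric focused on denied activity.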
    System hardening through configuration management Preventive
    Configure the log to capture changes to network gateways. CC ID 15421
    [Ensure changes to network gateways are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. Network gateways are required to send/receive traffic to a destination outside of a VPC. It is recommended that a metric filter and alarm be established for changes to network gateways. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring changes to network gateways will help ensure that all ingress/egress traffic traverses the VPC border via a controlled path. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From value associated with CloudWatchLogsLogGroupArn note Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure Identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. 
Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }" 4. Note the value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn at least one subscription should have "SubscriptionArn" with valid aws ARN. Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on filter pattern provided which checks for network gateways changes and the taken from audit step 1. aws logs put-metric-filter --log-group-name --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. 
Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn --protocol --notification-endpoint Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions 4.12]
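Audit step 7 recurs in every control in this section. A small sketch of that check, assuming the JSON shape returned by `aws sns list-subscriptions-by-topic` (a `Subscriptions` array; unconfirmed endpoints report the literal string PendingConfirmation in place of an ARN):

```python
import json

# Hypothetical audit helper for step 7: at least one subscription on the
# alarm's SNS topic must carry a real SNS ARN, not "PendingConfirmation".
def has_active_subscriber(list_subs_json: str) -> bool:
    subs = json.loads(list_subs_json).get("Subscriptions", [])
    return any(
        s.get("SubscriptionArn", "").startswith("arn:aws:sns:") for s in subs
    )
```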
    System hardening through configuration management Preventive
    Configure the log to capture configuration changes. CC ID 06881
    [Ensure AWS Config configuration changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for detecting changes to AWS Config's configurations. Rationale: Monitoring changes to AWS Config configuration will help ensure sustained visibility of configuration items within the AWS account. CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From value associated with CloudWatchLogsLogGroupArn note Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure Identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. 
Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel) ||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }" 4. Note the value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn at least one subscription should have "SubscriptionArn" with valid aws ARN. Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on filter pattern provided which checks for AWS Configuration changes and the taken from audit step 1. aws logs put-metric-filter --log-group-name --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel) ||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. 
Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn --protocol --notification-endpoint Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions 4.9
    Ensure CloudTrail configuration changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, where metric filters and alarms can be established. It is recommended that a metric filter and alarm be utilized for detecting changes to CloudTrail's configurations. Rationale: Monitoring changes to CloudTrail's configuration will help ensure sustained visibility to activities performed in the AWS account. Impact: These steps can be performed manually in a company's existing SIEM platform in cases where CloudTrail logs are monitored outside of the AWS monitoring tools within CloudWatch. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured, or that the filters are configured in the appropriate SIEM alerts: 1. Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From value associated with CloudWatchLogsLogGroupArn note Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure Identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. 
Ensure the filterPattern output from the above command contains the following: "filterPattern": "{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }" 4. Note the value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn at least one subscription should have "SubscriptionArn" with valid aws ARN. Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on filter pattern provided which checks for CloudTrail configuration changes and the taken from audit step 1. aws logs put-metric-filter --log-group-name --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn --protocol --notification-endpoint Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. 
Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 -namespace 'CISBenchmark' --alarm-actions 4.5]
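The step-3 check - that the metric filter pattern covers all five CloudTrail configuration-change API calls - can be scripted. The sketch below runs against a hypothetical sample of `aws logs describe-metric-filters` output (the filter name, metric name, and JSON layout are illustrative assumptions); point it at the real CLI output for your own log group.

```shell
#!/bin/sh
# Assumed sample of `aws logs describe-metric-filters --log-group-name <group>`
# output for a compliant log group (names are made up for illustration).
sample_filters='{
  "metricFilters": [
    {
      "filterName": "CloudTrailChanges",
      "filterPattern": "{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }",
      "metricTransformations": [
        { "metricName": "CloudTrailEventCount", "metricNamespace": "CISBenchmark", "metricValue": "1" }
      ]
    }
  ]
}'

# Returns COMPLIANT only if every required eventName appears in a filter pattern.
check_cloudtrail_filter() {
  json="$1"
  for event in CreateTrail UpdateTrail DeleteTrail StartLogging StopLogging; do
    if ! printf '%s' "$json" | grep -q "\$.eventName = $event"; then
      echo "NON_COMPLIANT: missing $event"
      return 1
    fi
  done
  echo "COMPLIANT"
}

check_cloudtrail_filter "$sample_filters"
```

In practice, feed it the live output, e.g. `check_cloudtrail_filter "$(aws logs describe-metric-filters --log-group-name YOUR_LOG_GROUP)"`.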
    System hardening through configuration management Preventive
    Configure Key, Certificate, Password, Authentication and Identity Management settings in accordance with organizational standards. CC ID 07621 System hardening through configuration management Preventive
    Configure "MFA Delete" to organizational standards. CC ID 15430
[Ensure MFA Delete is enabled on S3 buckets (Manual)
Description: Once MFA Delete is enabled on your sensitive and classified S3 bucket, it requires the user to have two forms of authentication.
Rationale: Adding MFA Delete to an S3 bucket requires additional authentication when you change the version state of your bucket or delete an object version, adding another layer of security in the event your security credentials are compromised or unauthorized access is granted.
Impact: Enabling MFA Delete on an S3 bucket could require additional administrator oversight. Enabling MFA Delete may impact other services that automate the creation and/or deletion of S3 buckets.
Audit: Perform the steps below to confirm MFA Delete is configured on an S3 bucket. From Console: 1. Login to the S3 console at https://console.aws.amazon.com/s3/ 2. Click the check box next to the bucket name you want to confirm 3. In the window under Properties 4. Confirm that Versioning is Enabled 5. Confirm that MFA Delete is Enabled From Command Line: 1. Run the get-bucket-versioning command: aws s3api get-bucket-versioning --bucket my-bucket Output example: Enabled Enabled If the Console or the CLI output does not show Versioning and MFA Delete enabled, refer to the remediation below.
Remediation: Perform the steps below to enable MFA Delete on an S3 bucket. Note: -You cannot enable MFA Delete using the AWS Management Console. You must use the AWS CLI or API. -You must use your 'root' account to enable MFA Delete on S3 buckets. From Command Line: 1. Run the s3api put-bucket-versioning command: aws s3api put-bucket-versioning --profile my-root-profile --bucket Bucket_Name --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::aws_account_id:mfa/root-account-mfa-device passcode" 2.1.2]
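The CLI audit above can be scripted against the JSON form of `aws s3api get-bucket-versioning` output, which exposes Status and MFADelete fields. The sample JSON below is assumed output for a compliant bucket; swap in the real CLI call.

```shell
#!/bin/sh
# Sketch: decide bucket compliance from get-bucket-versioning JSON output.
# The sample is assumed output for a compliant bucket; replace it with
#   aws s3api get-bucket-versioning --bucket my-bucket
check_mfa_delete() {
  json="$1"
  if printf '%s' "$json" | grep -q '"Status": "Enabled"' &&
     printf '%s' "$json" | grep -q '"MFADelete": "Enabled"'; then
    echo "COMPLIANT"
  else
    echo "NON_COMPLIANT"
  fi
}

sample='{ "Status": "Enabled", "MFADelete": "Enabled" }'
check_mfa_delete "$sample"
```

A bucket with versioning on but MFA Delete absent (or Disabled) reports NON_COMPLIANT, matching the remediation trigger in the control.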
    System hardening through configuration management Preventive
    Configure Identity and Access Management policies to organizational standards. CC ID 15422
[Ensure IAM policies that allow full "*:*" administrative privileges are not attached (Automated)
Description: IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended, and considered standard security advice, to grant least privilege - that is, granting only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform only those tasks, instead of allowing full administrative privileges.
Rationale: It's more secure to start with a minimum set of permissions and grant additional permissions as necessary, rather than starting with permissions that are too lenient and then trying to tighten them later. Providing full administrative privileges, instead of restricting to the minimum set of permissions the user requires, exposes the resources to potentially unwanted actions. IAM policies that have a statement with "Effect": "Allow" with "Action": "*" over "Resource": "*" should be removed.
Audit: Perform the following to determine what policies are created: From Command Line: 1. Run the following to get a list of IAM policies: aws iam list-policies --only-attached --output text 2. For each policy returned, run the following command to determine if the policy allows full administrative privileges on the account: aws iam get-policy-version --policy-arn --version-id 3. In the output, ensure the policy does not have any Statement block with "Effect": "Allow" and Action set to "*" and Resource set to "*"
Remediation: From Console: Perform the following to detach the policy that has full administrative privileges: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, click Policies and then search for the policy name found in the audit step. 3. Select the policy that needs to be deleted. 4. In the policy action menu, select first Detach 5. Select all Users, Groups, Roles that have this policy attached 6. Click Detach Policy 7. In the policy action menu, select Detach 8. Select the newly detached policy and select Delete From Command Line: Perform the following to detach the policy that has full administrative privileges as found in the audit step: 1. List all IAM users, groups, and roles that the specified managed policy is attached to: aws iam list-entities-for-policy --policy-arn 2. Detach the policy from all IAM users: aws iam detach-user-policy --user-name --policy-arn 3. Detach the policy from all IAM groups: aws iam detach-group-policy --group-name --policy-arn 4. Detach the policy from all IAM roles: aws iam detach-role-policy --role-name --policy-arn 1.16]
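Audit step 3 - spotting an Allow statement with Action "*" and Resource "*" in a policy version document - can be sketched as below. This is a crude grep-based check that assumes the flattened one-line JSON layout shown (a real audit should parse statements individually, e.g. with jq); both policy documents are hypothetical.

```shell
#!/bin/sh
# Flag a policy document that appears to grant full "*:*" administrative
# privileges. Grep-based; assumes the one-line JSON layout of the samples.
is_full_admin() {
  doc="$1"
  if printf '%s' "$doc" | grep -q '"Effect": "Allow"' &&
     printf '%s' "$doc" | grep -q '"Action": "\*"' &&
     printf '%s' "$doc" | grep -q '"Resource": "\*"'; then
    echo "FULL_ADMIN"
  else
    echo "OK"
  fi
}

# Hypothetical policy documents for illustration:
admin_policy='{ "Statement": [ { "Effect": "Allow", "Action": "*", "Resource": "*" } ] }'
scoped_policy='{ "Statement": [ { "Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example-bucket/*" } ] }'

is_full_admin "$admin_policy"
is_full_admin "$scoped_policy"
```

In practice the document would come from `aws iam get-policy-version` for each ARN returned by `aws iam list-policies --only-attached`.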
    System hardening through configuration management Preventive
    Configure the Identity and Access Management Access analyzer to organizational standards. CC ID 15420
[Ensure that IAM Access Analyzer is enabled for all regions (Automated)
Description: Enable IAM Access Analyzer for IAM policies about all resources in each active AWS region. IAM Access Analyzer is a technology introduced at AWS re:Invent 2019. After the analyzer is enabled in IAM, scan results are displayed on the console showing the accessible resources. Scans show resources that other accounts and federated users can access, such as KMS keys and IAM roles, so the results allow you to determine if an unintended user is allowed, making it easier for administrators to monitor least-privilege access. Access Analyzer analyzes only policies that are applied to resources in the same AWS Region.
Rationale: AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data. Access Analyzer identifies resources that are shared with external principals by using logic-based reasoning to analyze the resource-based policies in your AWS environment. IAM Access Analyzer continuously monitors all policies for S3 buckets, IAM roles, KMS (Key Management Service) keys, AWS Lambda functions, and Amazon SQS (Simple Queue Service) queues.
Audit: From Console: 1. Open the IAM console at https://console.aws.amazon.com/iam/ 2. Choose Access analyzer 3. Click 'Analyzers' 4. Ensure that at least one analyzer is present 5. Ensure that the STATUS is set to Active 6. Repeat these steps for each active region From Command Line: 1. Run the following command: aws accessanalyzer list-analyzers | grep status 2. Ensure that at least one analyzer has its status set to ACTIVE 3. Repeat the steps above for each active region. If an Access Analyzer is not listed for each region, or the status is not set to active, refer to the remediation procedure below.
Remediation: From Console: Perform the following to enable IAM Access Analyzer for IAM policies: 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Access analyzer. 3. Choose Create analyzer. 4. On the Create analyzer page, confirm that the Region displayed is the Region where you want to enable Access Analyzer. 5. Enter a name for the analyzer. Optional as it will generate a name for you automatically. 6. Add any tags that you want to apply to the analyzer. Optional. 7. Choose Create Analyzer. 8. Repeat these steps for each active region From Command Line: Run the following command: aws accessanalyzer create-analyzer --analyzer-name --type Repeat the command above for each active region. Note: The IAM Access Analyzer is successfully configured only when the account you use has the necessary permissions. 1.20]
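Because the check is per-region, the CLI audit is naturally a loop. In this sketch, analyzers_for_region fakes `aws accessanalyzer list-analyzers` output with sample data (region names and analyzer name are illustrative); replace it with the real call, e.g. `aws accessanalyzer list-analyzers --region "$1"`.

```shell
#!/bin/sh
# Assumed per-region sample of list-analyzers output; swap in the real CLI.
analyzers_for_region() {
  case "$1" in
    us-east-1) echo '{ "analyzers": [ { "name": "account-analyzer", "status": "ACTIVE" } ] }' ;;
    *)         echo '{ "analyzers": [] }' ;;
  esac
}

# A region is compliant when at least one analyzer reports status ACTIVE.
audit_analyzer_region() {
  if analyzers_for_region "$1" | grep -q '"status": "ACTIVE"'; then
    echo "$1: COMPLIANT"
  else
    echo "$1: NON_COMPLIANT - no active analyzer"
  fi
}

for region in us-east-1 eu-west-1; do
  audit_analyzer_region "$region"
done
```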
    System hardening through configuration management Preventive
    Configure the "Minimum password length" to organizational standards. CC ID 07711
[Ensure IAM password policy requires minimum length of 14 or greater (Automated)
Description: Password policies are, in part, used to enforce password complexity requirements. IAM password policies can be used to ensure passwords are at least a given length. It is recommended that the password policy require a minimum password length of 14.
Rationale: Setting a password complexity policy increases account resiliency against brute force login attempts.
Audit: Perform the following to ensure the password policy is configured as prescribed: From Console: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Ensure "Minimum password length" is set to 14 or greater. From Command Line: aws iam get-account-password-policy Ensure the output of the above command includes "MinimumPasswordLength": 14 (or higher)
Remediation: Perform the following to set the password policy as prescribed: From Console: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Set "Minimum password length" to 14 or greater. 5. Click "Apply password policy" From Command Line: aws iam update-account-password-policy --minimum-password-length 14 Note: All commands starting with "aws iam update-account-password-policy" can be combined into a single command. 1.8]
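The "14 or higher" comparison in the CLI audit can be sketched as below. The JSON is assumed sample output of `aws iam get-account-password-policy`; only the MinimumPasswordLength field matters here.

```shell
#!/bin/sh
# Extract MinimumPasswordLength from get-account-password-policy JSON and
# compare it against the benchmark floor of 14. Sample JSON is assumed
# output; replace it with: aws iam get-account-password-policy
min_length_ok() {
  len=$(printf '%s' "$1" | sed -n 's/.*"MinimumPasswordLength": *\([0-9][0-9]*\).*/\1/p')
  if [ -n "$len" ] && [ "$len" -ge 14 ]; then
    echo "COMPLIANT ($len)"
  else
    echo "NON_COMPLIANT ($len)"
  fi
}

sample='{ "PasswordPolicy": { "MinimumPasswordLength": 14, "RequireSymbols": true } }'
min_length_ok "$sample"
```

A policy returning, say, 8 reports NON_COMPLIANT, which maps to the update-account-password-policy remediation above.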
    System hardening through configuration management Preventive
    Configure Encryption settings in accordance with organizational standards. CC ID 07625
[Ensure that encryption-at-rest is enabled for RDS Instances (Automated)
Description: Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances. After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance.
Rationale: Databases are likely to hold sensitive and critical data; it is highly recommended to implement encryption in order to protect your data from unauthorized access or disclosure. With RDS encryption enabled, the data stored on the instance's underlying storage, the automated backups, read replicas, and snapshots are all encrypted.
Audit: From Console: 1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/ 2. In the navigation pane, under RDS dashboard, click Databases. 3. Select the RDS instance that you want to examine 4. Click the instance name to see details, then click on the Configuration tab. 5. Under the Configuration Details section, in the Storage pane, search for the Encryption Enabled status. 6. If the current status is set to Disabled, encryption is not enabled for the selected RDS database instance. 7. Repeat steps 3 to 7 to verify the encryption status of other RDS instances in the same region. 8. Change region from the top of the navigation bar and repeat the audit for other regions. From Command Line: 1. Run the describe-db-instances command to list all RDS instance database names available in the selected AWS region; the output will return each instance's database identifier name. aws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier' 2. Run the describe-db-instances command again using the RDS instance identifier returned earlier to determine if the selected database instance is encrypted; the command output should return the encryption status True or False. aws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].StorageEncrypted' 3. If the StorageEncrypted parameter value is False, encryption is not enabled for the selected RDS database instance. 4. Repeat steps 1 to 3 for auditing each RDS instance, and change region to verify for other regions.
Remediation: From Console: 1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on Databases 3. Select the database instance that needs to be encrypted. 4. Click on the Actions button placed at the top right and select Take Snapshot. 5. On the Take Snapshot page, enter a database name of which you want to take a snapshot in the Snapshot Name field and click on Take Snapshot. 6. Select the newly created snapshot, click on the Action button placed at the top right, and select Copy snapshot from the Action menu. 7. On the Make Copy of DB Snapshot page, perform the following: • In the New DB Snapshot Identifier field, enter a name for the new snapshot. • Check Copy Tags; the new snapshot must have the same tags as the source snapshot. • Select Yes from the Enable Encryption dropdown list to enable encryption. You can choose to use the AWS default encryption key or a custom key from the Master Key dropdown list. 8. Click Copy Snapshot to create an encrypted copy of the selected instance snapshot. 9. Select the new encrypted snapshot copy, click on the Action button placed at the top right, and select the Restore Snapshot button from the Action menu. This will restore the encrypted snapshot to a new database instance. 10. On the Restore DB Instance page, enter a unique name for the new database instance in the DB Instance Identifier field. 11. Review the instance configuration details and click Restore DB Instance. 12. Once the new instance provisioning process is completed, update the application configuration to refer to the endpoint of the new encrypted database instance. Once the database endpoint is changed at the application level, the unencrypted instance can be removed. From Command Line: 1. Run the describe-db-instances command to list all RDS database names available in the selected AWS region; the command output should return the database instance identifier. aws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier' 2. Run the create-db-snapshot command to create a snapshot for the selected database instance; the command output will return the new snapshot's name. aws rds create-db-snapshot --region --db-snapshot-identifier --db-instance-identifier 3. Now run the list-aliases command to list the KMS key aliases available in a specified region; the command output should return each key alias currently available. For our RDS encryption activation process, locate the ID of the AWS default KMS key. aws kms list-aliases --region 4. Run the copy-db-snapshot command using the default KMS key ID for RDS instances returned earlier to create an encrypted copy of the database instance snapshot; the command output will return the encrypted instance snapshot configuration. aws rds copy-db-snapshot --region --source-db-snapshot-identifier --target-db-snapshot-identifier --copy-tags --kms-key-id 5. Run the restore-db-instance-from-db-snapshot command to restore the encrypted snapshot created at the previous step to a new database instance. If successful, the command output should return the new encrypted database instance configuration. aws rds restore-db-instance-from-db-snapshot --region --db-instance-identifier --db-snapshot-identifier 6. Run the describe-db-instances command to list all RDS database names available in the selected AWS region; the output will return each database instance identifier name. Select the encrypted database name that we just created, DB-Name-Encrypted. aws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier' 7. Run the describe-db-instances command again using the RDS instance identifier returned earlier to determine if the selected database instance is encrypted; the command output should return the encryption status True. aws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].StorageEncrypted' 2.3.1]
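CLI audit steps 1-3 above reduce to checking StorageEncrypted for each instance identifier. In this sketch, storage_encrypted fakes the `--query 'DBInstances[*].StorageEncrypted'` output with sample values (the instance names are hypothetical); swap in the real describe-db-instances call.

```shell
#!/bin/sh
# Assumed per-instance sample of:
#   aws rds describe-db-instances --db-instance-identifier "$1" \
#     --query 'DBInstances[*].StorageEncrypted'
storage_encrypted() {
  case "$1" in
    prod-db) echo '[ true ]' ;;
    *)       echo '[ false ]' ;;
  esac
}

# An instance is compliant when StorageEncrypted is true.
audit_rds_encryption() {
  if storage_encrypted "$1" | grep -q 'true'; then
    echo "$1: ENCRYPTED"
  else
    echo "$1: NOT_ENCRYPTED - see remediation"
  fi
}

for db in prod-db staging-db; do
  audit_rds_encryption "$db"
done
```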
    System hardening through configuration management Preventive
    Configure "Elastic Block Store volume encryption" to organizational standards. CC ID 15434
[Ensure EBS Volume Encryption is Enabled in all Regions (Automated)
Description: Elastic Compute Cloud (EC2) supports encryption at rest when using the Elastic Block Store (EBS) service. While disabled by default, forcing encryption at EBS volume creation is supported.
Rationale: Encrypting data at rest reduces the likelihood that it is unintentionally exposed and can nullify the impact of disclosure if the encryption remains unbroken.
Impact: Losing access to or removing the KMS key in use by the EBS volumes will result in no longer being able to access the volumes.
Audit: From Console: 1. Login to the AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ 2. Under Account attributes, click EBS encryption. 3. Verify Always encrypt new EBS volumes displays Enabled. 4. Review every region in use. Note: EBS volume encryption is configured per region. From Command Line: 1. Run aws --region ec2 get-ebs-encryption-by-default 2. Verify that "EbsEncryptionByDefault": true is displayed. 3. Review every region in use. Note: EBS volume encryption is configured per region.
Remediation: From Console: 1. Login to the AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ 2. Under Account attributes, click EBS encryption. 3. Click Manage. 4. Click the Enable checkbox. 5. Click Update EBS encryption 6. Repeat for every region requiring the change. Note: EBS volume encryption is configured per region. From Command Line: 1. Run aws --region ec2 enable-ebs-encryption-by-default 2. Verify that "EbsEncryptionByDefault": true is displayed. 3. Repeat for every region requiring the change. Note: EBS volume encryption is configured per region. 2.2.1]
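Since EBS default encryption is a per-region setting, the audit loops over regions. In this sketch, ebs_default_for_region fakes `aws --region <region> ec2 get-ebs-encryption-by-default` with sample values (the region list is illustrative); replace it with the real call.

```shell
#!/bin/sh
# Assumed per-region sample of get-ebs-encryption-by-default output.
ebs_default_for_region() {
  case "$1" in
    us-east-1) echo '{ "EbsEncryptionByDefault": true }' ;;
    *)         echo '{ "EbsEncryptionByDefault": false }' ;;
  esac
}

# A region is compliant when EbsEncryptionByDefault is true; the failure
# message echoes the documented remediation command for that region.
audit_ebs_region() {
  if ebs_default_for_region "$1" | grep -q '"EbsEncryptionByDefault": true'; then
    echo "$1: COMPLIANT"
  else
    echo "$1: NON_COMPLIANT - run: aws --region $1 ec2 enable-ebs-encryption-by-default"
  fi
}

for region in us-east-1 eu-west-1; do
  audit_ebs_region "$region"
done
```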
    System hardening through configuration management Preventive
    Configure "Encryption Oracle Remediation" to organizational standards. CC ID 15366 System hardening through configuration management Preventive
    Configure the "encryption provider" to organizational standards. CC ID 14591 System hardening through configuration management Preventive
    Configure the "Microsoft network server: Digitally sign communications (always)" to organizational standards. CC ID 07626 System hardening through configuration management Preventive
    Configure the "Domain member: Digitally encrypt or sign secure channel data (always)" to organizational standards. CC ID 07657 System hardening through configuration management Preventive
    Configure the "Domain member: Digitally sign secure channel data (when possible)" to organizational standards. CC ID 07678 System hardening through configuration management Preventive
    Configure the "Network Security: Configure encryption types allowed for Kerberos" to organizational standards. CC ID 07799 System hardening through configuration management Preventive
    Configure the "System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing" to organizational standards. CC ID 07822 System hardening through configuration management Preventive
    Configure the "Configure use of smart cards on fixed data drives" to organizational standards. CC ID 08361 System hardening through configuration management Preventive
    Configure the "Enforce drive encryption type on removable data drives" to organizational standards. CC ID 08363 System hardening through configuration management Preventive
    Configure the "Configure TPM platform validation profile for BIOS-based firmware configurations" to organizational standards. CC ID 08370 System hardening through configuration management Preventive
    Configure the "Configure use of passwords for removable data drives" to organizational standards. CC ID 08394 System hardening through configuration management Preventive
    Configure the "Configure use of hardware-based encryption for removable data drives" to organizational standards. CC ID 08401 System hardening through configuration management Preventive
    Configure the "Require additional authentication at startup" to organizational standards. CC ID 08422 System hardening through configuration management Preventive
    Configure the "Deny write access to fixed drives not protected by BitLocker" to organizational standards. CC ID 08429 System hardening through configuration management Preventive
    Configure the "Configure startup mode" to organizational standards. CC ID 08430 System hardening through configuration management Preventive
    Configure the "Require client MAPI encryption" to organizational standards. CC ID 08446 System hardening through configuration management Preventive
    Configure the "Configure dial plan security" to organizational standards. CC ID 08453 System hardening through configuration management Preventive
    Configure the "Allow access to BitLocker-protected removable data drives from earlier versions of Windows" to organizational standards. CC ID 08457 System hardening through configuration management Preventive
    Configure the "Enforce drive encryption type on fixed data drives" to organizational standards. CC ID 08460 System hardening through configuration management Preventive
    Configure the "Allow Secure Boot for integrity validation" to organizational standards. CC ID 08461 System hardening through configuration management Preventive
    Configure the "Configure use of passwords for operating system drives" to organizational standards. CC ID 08478 System hardening through configuration management Preventive
    Configure the "Choose how BitLocker-protected removable drives can be recovered" to organizational standards. CC ID 08484 System hardening through configuration management Preventive
    Configure the "Validate smart card certificate usage rule compliance" to organizational standards. CC ID 08492 System hardening through configuration management Preventive
    Configure the "Allow enhanced PINs for startup" to organizational standards. CC ID 08495 System hardening through configuration management Preventive
    Configure the "Choose how BitLocker-protected operating system drives can be recovered" to organizational standards. CC ID 08499 System hardening through configuration management Preventive
    Configure the "Allow access to BitLocker-protected fixed data drives from earlier versions of Windows" to organizational standards. CC ID 08505 System hardening through configuration management Preventive
    Configure the "Choose how BitLocker-protected fixed drives can be recovered" to organizational standards. CC ID 08509 System hardening through configuration management Preventive
    Configure the "Configure use of passwords for fixed data drives" to organizational standards. CC ID 08513 System hardening through configuration management Preventive
    Configure the "Choose drive encryption method and cipher strength" to organizational standards. CC ID 08537 System hardening through configuration management Preventive
    Configure the "Choose default folder for recovery password" to organizational standards. CC ID 08541 System hardening through configuration management Preventive
    Configure the "Prevent memory overwrite on restart" to organizational standards. CC ID 08542 System hardening through configuration management Preventive
    Configure the "Deny write access to removable drives not protected by BitLocker" to organizational standards. CC ID 08549 System hardening through configuration management Preventive
    Configure the "opt encrypted" flag to organizational standards. CC ID 14534 System hardening through configuration management Preventive
    Configure the "Provide the unique identifiers for your organization" to organizational standards. CC ID 08552 System hardening through configuration management Preventive
    Configure the "Enable use of BitLocker authentication requiring preboot keyboard input on slates" to organizational standards. CC ID 08556 System hardening through configuration management Preventive
    Configure the "Require encryption on device" to organizational standards. CC ID 08563 System hardening through configuration management Preventive
    Configure the "Enable S/MIME for OWA 2007" to organizational standards. CC ID 08564 System hardening through configuration management Preventive
    Configure the "Control use of BitLocker on removable drives" to organizational standards. CC ID 08566 System hardening through configuration management Preventive
    Configure the "Configure use of hardware-based encryption for fixed data drives" to organizational standards. CC ID 08568 System hardening through configuration management Preventive
    Configure the "Configure use of smart cards on removable data drives" to organizational standards. CC ID 08570 System hardening through configuration management Preventive
    Configure the "Enforce drive encryption type on operating system drives" to organizational standards. CC ID 08573 System hardening through configuration management Preventive
    Configure the "Disallow standard users from changing the PIN or password" to organizational standards. CC ID 08574 System hardening through configuration management Preventive
    Configure the "Use enhanced Boot Configuration Data validation profile" to organizational standards. CC ID 08578 System hardening through configuration management Preventive
    Configure the "Allow network unlock at startup" to organizational standards. CC ID 08588 System hardening through configuration management Preventive
    Configure the "Enable S/MIME for OWA 2010" to organizational standards. CC ID 08592 System hardening through configuration management Preventive
    Configure the "Configure minimum PIN length for startup" to organizational standards. CC ID 08594 System hardening through configuration management Preventive
    Configure the "Configure TPM platform validation profile" to organizational standards. CC ID 08598 System hardening through configuration management Preventive
    Configure the "Configure use of hardware-based encryption for operating system drives" to organizational standards. CC ID 08601 System hardening through configuration management Preventive
    Configure the "Reset platform validation data after BitLocker recovery" to organizational standards. CC ID 08607 System hardening through configuration management Preventive
    Configure the "Configure TPM platform validation profile for native UEFI firmware configurations" to organizational standards. CC ID 08614 System hardening through configuration management Preventive
    Configure the "Do not enable BitLocker until recovery information is stored to AD DS for fixed data drives" setting to organizational standards. CC ID 10039 System hardening through configuration management Preventive
    Configure the "Save BitLocker recovery information to AD DS for fixed data drives" setting to organizational standards. CC ID 10040 System hardening through configuration management Preventive
    Configure the "Omit recovery options from the BitLocker setup wizard" setting to organizational standards. CC ID 10041 System hardening through configuration management Preventive
    Configure the "Do not enable BitLocker until recovery information is stored to AD DS for operating system drives" setting to organizational standards. CC ID 10042 System hardening through configuration management Preventive
    Configure the "Save BitLocker recovery information to AD DS for operating system drives" setting to organizational standards. CC ID 10043 System hardening through configuration management Preventive
    Configure the "Allow BitLocker without a compatible TPM" setting to organizational standards. CC ID 10044 System hardening through configuration management Preventive
    Configure the "Do not enable BitLocker until recovery information is stored to AD DS for removable data drives" setting to organizational standards. CC ID 10045 System hardening through configuration management Preventive
    Configure the "Save BitLocker recovery information to AD DS for removable data drives" setting to organizational standards. CC ID 10046 System hardening through configuration management Preventive
    Configure Security settings in accordance with organizational standards. CC ID 08469 System hardening through configuration management Preventive
    Configure AWS Security Hub to organizational standards. CC ID 17166
[Ensure AWS Security Hub is enabled (Automated)
Description: Security Hub collects security data from across AWS accounts, services, and supported third-party partner products and helps you analyze your security trends and identify the highest priority security issues. When you enable Security Hub, it begins to consume, aggregate, organize, and prioritize findings from AWS services that you have enabled, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie. You can also enable integrations with AWS partner security products.
Rationale: AWS Security Hub provides you with a comprehensive view of your security state in AWS and helps you check your environment against security industry standards and best practices - enabling you to quickly assess the security posture across your AWS accounts.
Impact: It is recommended AWS Security Hub be enabled in all regions. AWS Security Hub requires AWS Config to be enabled.
Audit: The process to evaluate the AWS Security Hub configuration per region. From Console: 1. Sign in to the AWS Management Console and open the AWS Security Hub console at https://console.aws.amazon.com/securityhub/. 2. On the top right of the console, select the target Region. 3. If presented with the Security Hub > Summary page, then Security Hub is set up for the selected region. 4. If presented with Setup Security Hub or Get Started With Security Hub, follow the online instructions. 5. Repeat steps 2 to 4 for each region. From Command Line: Run the following to list the Security Hub status: aws securityhub describe-hub This will list the Security Hub status by region. Audit for the presence of a 'SubscribedAt' value. Example output: { "HubArn": "", "SubscribedAt": "2022-08-19T17:06:42.398Z", "AutoEnableControls": true } An error will be returned if Security Hub is not enabled. Example error: An error occurred (InvalidAccessException) when calling the DescribeHub operation: Account is not subscribed to AWS Security Hub
Remediation: To grant the permissions required to enable Security Hub, attach the Security Hub managed policy AWSSecurityHubFullAccess to an IAM user, group, or role. Enabling Security Hub From Console: 1. Use the credentials of the IAM identity to sign in to the Security Hub console. 2. When you open the Security Hub console for the first time, choose Enable AWS Security Hub. 3. On the welcome page, Security standards lists the security standards that Security Hub supports. 4. Choose Enable Security Hub. From Command Line: 1. Run the enable-security-hub command. To enable the default standards, include --enable-default-standards. aws securityhub enable-security-hub --enable-default-standards 2. To enable Security Hub without the default standards, include --no-enable-default-standards. aws securityhub enable-security-hub --no-enable-default-standards 4.16]
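The SubscribedAt check in the CLI audit can be sketched as follows. The two samples mirror the example output and example error shown in the control (the account ID in the ARN is a documentation placeholder, not a real account).

```shell
#!/bin/sh
# Decide Security Hub status from `aws securityhub describe-hub` output:
# enabled accounts return JSON with a SubscribedAt field, others return an
# InvalidAccessException error message.
check_security_hub() {
  if printf '%s' "$1" | grep -q '"SubscribedAt"'; then
    echo "ENABLED"
  else
    echo "NOT_ENABLED"
  fi
}

# Assumed sample outputs (placeholder account ID in the ARN):
enabled='{ "HubArn": "arn:aws:securityhub:us-east-1:111122223333:hub/default", "SubscribedAt": "2022-08-19T17:06:42.398Z", "AutoEnableControls": true }'
error_msg='An error occurred (InvalidAccessException) when calling the DescribeHub operation: Account is not subscribed to AWS Security Hub'

check_security_hub "$enabled"
check_security_hub "$error_msg"
```

Like the EBS check, this would be run once per active region, e.g. by capturing `aws --region "$region" securityhub describe-hub 2>&1`.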
    System hardening through configuration management Preventive
    Configure Patch Management settings in accordance with organizational standards. CC ID 08519
    [Ensure Auto Minor Version Upgrade feature is Enabled for RDS Instances (Automated) Description: Ensure that RDS database instances have the Auto Minor Version Upgrade flag enabled in order to receive minor engine upgrades automatically during the specified maintenance window. This way, RDS instances can get the new features, bug fixes, and security patches for their database engines. Rationale: AWS RDS will occasionally deprecate minor engine versions and provide new ones for an upgrade. When the last version number within the release is replaced, the version change is considered minor. With the Auto Minor Version Upgrade feature enabled, the version upgrades will occur automatically during the specified maintenance window so your RDS instances can get the new features, bug fixes, and security patches for their database engines. Audit: From Console: 1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on Databases. 3. Select the RDS instance that you want to examine. 4. Click on the Maintenance and backups panel. 5. Under the Maintenance section, search for the Auto Minor Version Upgrade status. • If the current status is set to Disabled, it means the feature is not set and the minor engine upgrades released will not be applied to the selected RDS instance. From Command Line: 1. Run the describe-db-instances command to list all RDS database instance names available in the selected AWS region: aws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier' 2. The command output should return each database instance identifier. 3. Run the describe-db-instances command again, using the RDS instance identifier returned earlier, to determine the Auto Minor Version Upgrade status for the selected instance: aws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].AutoMinorVersionUpgrade' 4. The command output should return the feature's current status. 
If the current status is set to true, the feature is enabled and the minor engine upgrades will be applied to the selected RDS instance. Remediation: From Console: 1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on Databases. 3. Select the RDS instance that you want to update. 4. Click on the Modify button placed on the top right side. 5. On the Modify DB Instance page, in the Maintenance section, select Auto minor version upgrade and click on the Yes radio button. 6. At the bottom of the page click on Continue, then check Apply Immediately to apply the changes immediately, or select Apply during the next scheduled maintenance window to avoid any downtime. 7. Review the changes and click on Modify DB Instance. The instance status should change from available to modifying and back to available. Once the feature is enabled, the Auto Minor Version Upgrade status should change to Yes. From Command Line: 1. Run the describe-db-instances command to list all RDS database instance names available in the selected AWS region: aws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier' 2. The command output should return each database instance identifier. 3. Run the modify-db-instance command to modify the selected RDS instance configuration. This command will apply the changes immediately; remove --apply-immediately to apply the changes during the next scheduled maintenance window and avoid any downtime: aws rds modify-db-instance --region --db-instance-identifier --auto-minor-version-upgrade --apply-immediately 4. The command output should reveal the new configuration metadata for the RDS instance; check the AutoMinorVersionUpgrade parameter value. 5. 
Run the describe-db-instances command to check if the Auto Minor Version Upgrade feature has been successfully enabled: aws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].AutoMinorVersionUpgrade' 6. If the command output returns the feature's current status set to true, the feature is enabled and the minor engine upgrades will be applied to the selected RDS instance. 2.3.2]
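The command-line audit above can be scripted end to end. Below is a minimal Python sketch that evaluates sample JSON shaped like `aws rds describe-db-instances` output; the instance identifiers and flag values are hypothetical, and in practice the JSON would come from the CLI call shown above.

```python
import json

# Sample shaped like `aws rds describe-db-instances` output;
# the instance identifiers and flag values are hypothetical.
sample = json.loads("""
{
  "DBInstances": [
    {"DBInstanceIdentifier": "orders-db",  "AutoMinorVersionUpgrade": true},
    {"DBInstanceIdentifier": "reports-db", "AutoMinorVersionUpgrade": false}
  ]
}
""")

def non_compliant_instances(response):
    """Identifiers of RDS instances without Auto Minor Version Upgrade enabled."""
    return [db["DBInstanceIdentifier"]
            for db in response["DBInstances"]
            if not db.get("AutoMinorVersionUpgrade", False)]
```

Any identifier returned here would be a candidate for the modify-db-instance remediation described above.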
    System hardening through configuration management Preventive
    Configure "Select when Preview Builds and Feature Updates are received" to organizational standards. CC ID 15399 System hardening through configuration management Preventive
    Configure "Select when Quality Updates are received" to organizational standards. CC ID 15355 System hardening through configuration management Preventive
    Configure the "Check for missing Windows Updates" to organizational standards. CC ID 08520 System hardening through configuration management Preventive
  • Data and Information Management
    7
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE CLASS
    Take into account the characteristics of the geographical, behavioral and functional setting for all datasets. CC ID 15046 Leadership and high level objectives Preventive
    Establish and maintain contact information for user accounts, as necessary. CC ID 15418
    [Maintain current contact details (Manual) Description: Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization. An AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy or indicative of likely security compromise is observed by the AWS Abuse team. Contact details should not be for a single individual, as circumstances may arise where that individual is unavailable. Email contact details should point to a mail alias which forwards email to multiple individuals within the organization; where feasible, phone contact details should point to a PABX hunt group or other call-forwarding system. Rationale: If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation, proactive measures may be taken, including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question, so it is in both the customers' and AWS' best interests that prompt contact can be established. This is best achieved by setting AWS account contact details to point to resources which have multiple individuals as recipients, such as email aliases and PABX hunt groups. Audit: This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:*Billing ) 1. Sign in to the AWS Management Console and open the Billing and Cost Management console at https://console.aws.amazon.com/billing/home#/. 2. On the navigation bar, choose your account name, and then choose Account. 3. On the Account Settings page, review and verify the current details. 4. 
Under Contact Information, review and verify the current details. Remediation: This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:*Billing ). 1. Sign in to the AWS Management Console and open the Billing and Cost Management console at https://console.aws.amazon.com/billing/home#/. 2. On the navigation bar, choose your account name, and then choose Account. 3. On the Account Settings page, next to Account Settings, choose Edit. 4. Next to the field that you need to update, choose Edit. 5. After you have entered your changes, choose Save changes. 6. After you have made your changes, choose Done. 7. To edit your contact information, under Contact Information, choose Edit. 8. For the fields that you want to change, type your updated information, and then choose Update. 1.1
    Ensure security contact information is registered (Manual) Description: AWS provides customers with the option of specifying the contact information for an account's security team. It is recommended that this information be provided. Rationale: Specifying security-specific contact information will help ensure that security advisories sent by AWS reach the team in your organization that is best equipped to respond to them. Audit: Perform the following to determine if security contact information is present: From Console: 1. Click on your account name at the top right corner of the console 2. From the drop-down menu Click My Account 3. Scroll down to the Alternate Contacts section 4. Ensure contact information is specified in the Security section From Command Line: 1. Run the following command: aws account get-alternate-contact --alternate-contact-type SECURITY 2. Ensure proper contact information is specified for the Security contact. Remediation: Perform the following to establish security contact information: From Console: 1. Click on your account name at the top right corner of the console. 2. From the drop-down menu Click My Account 3. Scroll down to the Alternate Contacts section 4. Enter contact information in the Security section From Command Line: Run the following command with the following input parameters: --email-address, --name, and --phone-number. aws account put-alternate-contact --alternate-contact-type SECURITY 1.2]
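The command-line audit for the security contact can be reduced to a completeness check on the response. Here is a sketch that evaluates sample JSON shaped like `aws account get-alternate-contact` output; the contact values are hypothetical.

```python
import json

# Sample shaped like `aws account get-alternate-contact
# --alternate-contact-type SECURITY` output; values are hypothetical.
sample = json.loads("""
{
  "AlternateContact": {
    "AlternateContactType": "SECURITY",
    "EmailAddress": "security-team@example.com",
    "Name": "Security Operations",
    "PhoneNumber": "+1-555-0100"
  }
}
""")

REQUIRED_FIELDS = ("EmailAddress", "Name", "PhoneNumber")

def security_contact_registered(response):
    """True only when every required security-contact field is populated."""
    contact = response.get("AlternateContact", {})
    return all(contact.get(field) for field in REQUIRED_FIELDS)
```

Per the rationale above, the email address used here should ideally be a mail alias that forwards to multiple individuals rather than a single person's mailbox.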
    Technical security Preventive
    Restrict access to restricted data and restricted information on a need to know basis. CC ID 12453
    [Ensure access to AWSCloudShellFullAccess is restricted (Manual) Description: AWS CloudShell is a convenient way of running CLI commands against AWS services; a managed IAM policy ('AWSCloudShellFullAccess') provides full access to CloudShell, which allows file upload and download capability between a user's local system and the CloudShell environment. Within the CloudShell environment a user has sudo permissions, and can access the internet. So it is feasible to install file transfer software (for example) and move data from CloudShell to external internet servers. Rationale: Access to this policy should be restricted as it presents a potential channel for data exfiltration by malicious cloud admins who are given full permissions to the service. AWS documentation describes how to create a more restrictive IAM policy which denies file transfer permissions. Audit: From Console 1. Open the IAM console at https://console.aws.amazon.com/iam/ 2. In the left pane, select Policies 3. Search for and select AWSCloudShellFullAccess 4. On the Entities attached tab, ensure that there are no entities using this policy From Command Line 1. List IAM policies, filter for the 'AWSCloudShellFullAccess' managed policy, and note the "Arn" element value: aws iam list-policies --query "Policies[?PolicyName == 'AWSCloudShellFullAccess']" 2. Check if the 'AWSCloudShellFullAccess' policy is attached to any role: aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSCloudShellFullAccess 3. In the output, ensure PolicyRoles returns empty, for example: PolicyRoles: [ ]. If it does not return empty, refer to the remediation below. Note: Keep in mind that other policies may grant access. Remediation: From Console 1. Open the IAM console at https://console.aws.amazon.com/iam/ 2. In the left pane, select Policies 3. Search for and select AWSCloudShellFullAccess 4. On the Entities attached tab, for each item, check the box and select Detach 1.22]
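The console audit asks for no attached entities at all, while the CLI step above checks roles; a script can cover groups, users, and roles together. This sketch evaluates sample JSON shaped like `aws iam list-entities-for-policy` output; the user name shown is hypothetical.

```python
import json

# Sample shaped like `aws iam list-entities-for-policy` output for
# AWSCloudShellFullAccess; the attached user is hypothetical.
sample = json.loads("""
{
  "PolicyGroups": [],
  "PolicyUsers": [{"UserName": "dev-admin", "UserId": "AIDAEXAMPLEID"}],
  "PolicyRoles": []
}
""")

def policy_unattached(response):
    """True only when no group, user, or role has the policy attached."""
    return not (response.get("PolicyGroups")
                or response.get("PolicyUsers")
                or response.get("PolicyRoles"))
```

A non-empty result in any of the three lists would point to an entity to detach per the remediation above; as the note says, other policies may still grant equivalent access.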
    Technical security Preventive
    Encrypt in scope data or in scope information, as necessary. CC ID 04824
    [Ensure that encryption is enabled for EFS file systems (Automated) Description: EFS data should be encrypted at rest using AWS KMS (Key Management Service). Rationale: Data should be encrypted at rest to reduce the risk of a data breach via direct access to the storage device. Audit: From Console: 1. Login to the AWS Management Console and Navigate to the Elastic File System (EFS) dashboard. 2. Select File Systems from the left navigation panel. 3. Each item on the list has a visible Encrypted field that displays data at rest encryption status. 4. Validate that this field reads Encrypted for all EFS file systems in all AWS regions. From CLI: 1. Run describe-file-systems command using custom query filters to list the identifiers of all AWS EFS file systems currently available within the selected region: aws efs describe-file-systems --region --output table --query 'FileSystems[*].FileSystemId' 2. The command output should return a table with the requested file system IDs. 3. Run describe-file-systems command using the ID of the file system that you want to examine as identifier and the necessary query filters: aws efs describe-file-systems --region --file-system-id --query 'FileSystems[*].Encrypted' 4. The command output should return the file system encryption status, true or false. If the returned value is false, the selected AWS EFS file system is not encrypted; if the returned value is true, the selected AWS EFS file system is encrypted. Remediation: It is important to note that EFS file system data at rest encryption must be turned on when creating the file system. If an EFS file system has been created without data at rest encryption enabled then you must create another EFS file system with the correct configuration and transfer the data. Steps to create an EFS file system with data encrypted at rest: From Console: 1. Login to the AWS Management Console and Navigate to the Elastic File System (EFS) dashboard. 2. 
Select File Systems from the left navigation panel. 3. Click the Create File System button from the dashboard top menu to start the file system setup process. 4. On the Configure file system access configuration page, perform the following actions. • Choose the right VPC from the VPC dropdown list. • Within the Create mount targets section, select the checkboxes for all of the Availability Zones (AZs) within the selected VPC. These will be your mount targets. • Click Next step to continue. 5. Perform the following on the Configure optional settings page. • Create tags to describe your new file system. • Choose performance mode based on your requirements. • Check the Enable encryption checkbox and choose aws/elasticfilesystem from the Select KMS master key dropdown list to enable encryption for the new file system using the default master key provided and managed by AWS KMS. • Click Next step to continue. 6. Review the file system configuration details on the review and create page and then click Create File System to create your new AWS EFS file system. 7. Copy the data from the old unencrypted EFS file system onto the newly created encrypted file system. 8. Remove the unencrypted file system as soon as your data migration to the newly created encrypted file system is completed. 9. Change the AWS region from the navigation bar and repeat the entire process for other AWS regions. From CLI: 1. Run describe-file-systems command to describe the configuration information available for the selected (unencrypted) file system (see Audit section to identify the right resource): aws efs describe-file-systems --region --file-system-id 2. The command output should return the requested configuration information. 3. To provision a new AWS EFS file system, you need to generate a universally unique identifier (UUID) in order to create the token required by the create-file-system command. To create the required token, you can use a randomly generated UUID from "https://www.uuidgenerator.net". 4. 
Run create-file-system command using the unique token created at the previous step: aws efs create-file-system --region --creation-token --performance-mode generalPurpose --encrypted 5. The command output should return the new file system configuration metadata. 6. Run create-mount-target command using the newly created EFS file system ID returned at the previous step as identifier and the ID of the Availability Zone (AZ) that will represent the mount target: aws efs create-mount-target --region --file-system-id --subnet-id 7. The command output should return the new mount target metadata. 8. Now you can mount your file system from an EC2 instance. 9. Copy the data from the old unencrypted EFS file system onto the newly created encrypted file system. 10. Remove the unencrypted file system as soon as your data migration to the newly created encrypted file system is completed. aws efs delete-file-system --region --file-system-id 11. Change the AWS region by updating the --region and repeat the entire process for other AWS regions. Default Value: EFS file system data is encrypted at rest by default when creating a file system via the Console. Encryption at rest is not enabled by default when creating a new file system using the AWS CLI, API, and SDKs. 2.4.1
    Ensure CloudTrail logs are encrypted at rest using KMS CMKs (Automated) Description: AWS CloudTrail is a web service that records AWS API calls for an account and makes those logs available to users and resources in accordance with IAM policies. AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data, and uses Hardware Security Modules (HSMs) to protect the security of encryption keys. CloudTrail logs can be configured to leverage server side encryption (SSE) and KMS customer created master keys (CMK) to further protect CloudTrail logs. It is recommended that CloudTrail be configured to use SSE-KMS. Rationale: Configuring CloudTrail to use SSE-KMS provides additional confidentiality controls on log data as a given user must have S3 read permission on the corresponding log bucket and must be granted decrypt permission by the CMK policy. Impact: Customer created keys incur an additional cost. See https://aws.amazon.com/kms/pricing/ for more information. Audit: Perform the following to determine if CloudTrail is configured to use SSE-KMS: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. In the left navigation pane, choose Trails . 3. Select a Trail 4. Under the S3 section, ensure Encrypt log files is set to Yes and a KMS key ID is specified in the KMS Key Id field. From Command Line: 1. Run the following command: aws cloudtrail describe-trails 2. For each trail listed, SSE-KMS is enabled if the trail has a KmsKeyId property defined. Remediation: Perform the following to configure CloudTrail to use SSE-KMS: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. In the left navigation pane, choose Trails . 3. Click on a Trail 4. Under the S3 section click on the edit button (pencil icon) 5. Click Advanced 6. 
Select an existing CMK from the KMS key Id drop-down menu • Note: Ensure the CMK is located in the same region as the S3 bucket • Note: You will need to apply a KMS Key policy on the selected CMK in order for CloudTrail as a service to encrypt and decrypt log files using the CMK provided. Steps are provided here for editing the selected CMK Key policy 7. Click Save 8. You will see a notification message stating that you need to have decrypt permissions on the specified KMS key to decrypt log files. 9. Click Yes From Command Line: aws cloudtrail update-trail --name --kms-id aws kms put-key-policy --key-id --policy 3.5]
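The CLI audit for this recommendation (each trail must have a KmsKeyId property) is easy to automate. This sketch evaluates sample JSON shaped like `aws cloudtrail describe-trails` output; the trail names and key ARN are hypothetical.

```python
import json

# Sample shaped like `aws cloudtrail describe-trails` output;
# trail names and the key ARN are hypothetical.
sample = json.loads("""
{
  "trailList": [
    {"Name": "management-events",
     "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id"},
    {"Name": "legacy-trail"}
  ]
}
""")

def trails_without_sse_kms(response):
    """Names of trails that have no KmsKeyId, i.e. SSE-KMS is not enabled."""
    return [trail["Name"]
            for trail in response.get("trailList", [])
            if not trail.get("KmsKeyId")]
```

Each trail returned here would need the update-trail remediation shown above, along with a CMK key policy that lets CloudTrail encrypt and decrypt the log files.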
    Technical security Preventive
    Change cryptographic keys in accordance with organizational standards. CC ID 01302
    [Ensure rotation for customer-created symmetric CMKs is enabled (Automated) Description: AWS Key Management Service (KMS) allows customers to rotate the backing key, which is the key material stored within KMS that is tied to the key ID of the customer-created customer master key (CMK). It is the backing key that is used to perform cryptographic operations such as encryption and decryption. Automated key rotation currently retains all prior backing keys so that decryption of encrypted data can take place transparently. It is recommended that CMK key rotation be enabled for symmetric keys. Key rotation cannot be enabled for any asymmetric CMK. Rationale: Rotating encryption keys helps reduce the potential impact of a compromised key as data encrypted with a new key cannot be accessed with a previous key that may have been exposed. Keys should be rotated every year, or upon an event that would result in the compromise of that key. Impact: Creation, management, and storage of CMKs may require additional time from an administrator. Audit: From Console: 1. Sign in to the AWS Management Console and open the KMS console at: https://console.aws.amazon.com/kms. 2. In the left navigation pane, click Customer-managed keys. 3. Select a customer managed CMK where Key spec = SYMMETRIC_DEFAULT. 4. Select the Key rotation tab. 5. Ensure the Automatically rotate this KMS key every year checkbox is checked. 6. Repeat steps 3–5 for all customer-managed CMKs where "Key spec = SYMMETRIC_DEFAULT". From Command Line: 1. Run the following command to get a list of all keys and their associated KeyIds: aws kms list-keys 2. For each key, note the KeyId and run the following command: describe-key --key-id 3. If the response contains "KeySpec = SYMMETRIC_DEFAULT", run the following command: aws kms get-key-rotation-status --key-id 4. Ensure KeyRotationEnabled is set to true. 5. Repeat steps 2–4 for all remaining CMKs. Remediation: From Console: 1. 
Sign in to the AWS Management Console and open the KMS console at: https://console.aws.amazon.com/kms. 2. In the left navigation pane, click Customer-managed keys. 3. Select a key where Key spec = SYMMETRIC_DEFAULT that does not have automatic rotation enabled. 4. Select the Key rotation tab. 5. Check the Automatically rotate this KMS key every year checkbox. 6. Click Save. 7. Repeat steps 3–6 for all customer-managed CMKs that do not have automatic rotation enabled. From Command Line: 1. Run the following command to enable key rotation: aws kms enable-key-rotation --key-id 3.6]
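The per-key loop in the CLI audit can be expressed as a single filter once the describe-key and get-key-rotation-status results are combined. This sketch assumes a merged record per key holding the KeySpec and KeyRotationEnabled fields from those two calls; the key IDs are hypothetical.

```python
import json

# Hypothetical merged view of `aws kms describe-key` (KeySpec) and
# `aws kms get-key-rotation-status` (KeyRotationEnabled) per key.
keys = json.loads("""
[
  {"KeyId": "1111-aaaa", "KeySpec": "SYMMETRIC_DEFAULT", "KeyRotationEnabled": true},
  {"KeyId": "2222-bbbb", "KeySpec": "RSA_2048",          "KeyRotationEnabled": false},
  {"KeyId": "3333-cccc", "KeySpec": "SYMMETRIC_DEFAULT", "KeyRotationEnabled": false}
]
""")

def symmetric_keys_without_rotation(keys):
    """KeyIds of symmetric CMKs that do not have automatic rotation enabled.

    Asymmetric keys are skipped, since rotation cannot be enabled for them.
    """
    return [k["KeyId"] for k in keys
            if k["KeySpec"] == "SYMMETRIC_DEFAULT"
            and not k["KeyRotationEnabled"]]
```

Each KeyId returned here would be a target for the enable-key-rotation remediation shown above.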
    Technical security Preventive
    Establish, implement, and maintain a repository of authenticators. CC ID 16372 System hardening through configuration management Preventive
    Ensure the root account is the first entry in password files. CC ID 16323 System hardening through configuration management Detective
  • Establish/Maintain Documentation
    21
    Establish, implement, and maintain a data classification scheme. CC ID 11628
    [Ensure all data in Amazon S3 has been discovered, classified and secured when required. (Manual) Description: Amazon S3 buckets can contain sensitive data, that for security purposes should be discovered, monitored, classified and protected. Macie along with other 3rd party tools can automatically provide an inventory of Amazon S3 buckets. Rationale: Using a Cloud service or 3rd Party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Impact: There is a cost associated with using Amazon Macie. There is also typically a cost associated with 3rd Party tools that perform similar processes and protection. Audit: Perform the following steps to determine if Macie is running: From Console: 1. Login to the Macie console at https://console.aws.amazon.com/macie/ 2. In the left hand pane click on By job under findings. 3. Confirm that you have a Job setup for your S3 Buckets When you log into the Macie console if you aren't taken to the summary page and you don't have a job setup and running then refer to the remediation procedure below. If you are using a 3rd Party tool to manage and protect your s3 data you meet this recommendation. Remediation: Perform the steps below to enable and configure Amazon Macie From Console: 1. Log on to the Macie console at https://console.aws.amazon.com/macie/ 2. Click Get started. 3. Click Enable Macie. Setup a repository for sensitive data discovery results 1. In the Left pane, under Settings, click Discovery results. 2. Make sure Create bucket is selected. 3. Create a bucket, enter a name for the bucket. The name must be unique across all S3 buckets. 
In addition, the name must start with a lowercase letter or a number. 4. Click on Advanced. 5. Block all public access, make sure Yes is selected. 6. KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric, customer master key (CMK) that's in the same Region as the S3 bucket. 7. Click on Save Create a job to discover sensitive data 1. In the left pane, click S3 buckets. Macie displays a list of all the S3 buckets for your account. 2. Select the check box for each bucket that you want Macie to analyze as part of the job 3. Click Create job. 4. Click Quick create. 5. For the Name and description step, enter a name and, optionally, a description of the job. 6. Then click Next. 7. For the Review and create step, click Submit. Review your findings 1. In the left pane, click Findings. 2. To view the details of a specific finding, choose any field other than the check box for the finding. If you are using a 3rd Party tool to manage and protect your s3 data, follow the Vendor documentation for implementing and configuring that tool. 2.1.3]
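The audit above is console-based, but the presence of a Macie discovery job can also be checked programmatically. This sketch assumes the `aws macie2 list-classification-jobs` CLI (not shown in the audit above) and evaluates sample JSON shaped like its output; the job name and ID are hypothetical.

```python
import json

# Sample shaped like `aws macie2 list-classification-jobs` output;
# the job name and ID are hypothetical.
sample = json.loads("""
{
  "items": [
    {"jobId": "abc123", "name": "s3-sensitive-data-scan", "jobStatus": "RUNNING"}
  ]
}
""")

def has_discovery_job(response):
    """True when at least one classification job exists and is not cancelled."""
    return any(job.get("jobStatus") != "CANCELLED"
               for job in response.get("items", []))
```

An empty result would mean no sensitive data discovery job is set up; if a 3rd party tool is used instead of Macie, that tool's own inventory meets the recommendation.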
    Leadership and high level objectives Preventive
    Approve the data classification scheme. CC ID 13858 Leadership and high level objectives Detective
    Establish, implement, and maintain an access control program. CC ID 11702 Technical security Preventive
    Establish, implement, and maintain an access rights management plan. CC ID 00513 Technical security Preventive
    Establish and maintain a list of individuals authorized to perform privileged functions. CC ID 17005 Technical security Preventive
    Document and approve requests to bypass multifactor authentication. CC ID 15464 Technical security Preventive
    Establish, implement, and maintain Public Key certificate procedures. CC ID 07085
    [Ensure that all the expired SSL/TLS certificates stored in AWS IAM are removed (Automated) Description: To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates. Use IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM. Additionally, you cannot manage your certificates from the IAM Console. Rationale: Removing expired SSL/TLS certificates eliminates the risk that an invalid certificate will be deployed accidentally to a resource such as AWS Elastic Load Balancer (ELB), which can damage the credibility of the application/website behind the ELB. As a best practice, it is recommended to delete expired certificates. Audit: From Console: Getting the certificates' expiration information via the AWS Management Console is not currently supported. To request information about the SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI). 
From Command Line: Run list-server-certificates command to list all the IAM-stored server certificates: aws iam list-server-certificates The command output should return an array that contains all the SSL/TLS certificates currently stored in IAM and their metadata (name, ID, expiration date, etc): { "ServerCertificateMetadataList": [ { "ServerCertificateId": "EHDGFRW7EJFYTE88D", "ServerCertificateName": "MyServerCertificate", "Expiration": "2018-07-10T23:59:59Z", "Path": "/", "Arn": "arn:aws:iam::012345678910:servercertificate/MySSLCertificate", "UploadDate": "2018-06-10T11:56:08Z" } ] } Verify the ServerCertificateName and Expiration parameter value (expiration date) for each SSL/TLS certificate returned by the list-server-certificates command and determine if there are any expired server certificates currently stored in AWS IAM. If so, use the AWS API to remove them. If this command returns: { "ServerCertificateMetadataList": [] } this means that there are no certificates stored in IAM at all, and therefore none that are expired. Remediation: From Console: Removing expired certificates via the AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI). From Command Line: To delete an expired certificate, run the following command, replacing with the name of the certificate to delete: aws iam delete-server-certificate --server-certificate-name When the preceding command is successful, it does not return any output. Default Value: By default, expired certificates won't get deleted. 1.19]
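The expiration check the audit describes by hand can be automated against the same metadata. This sketch compares each Expiration value from sample `aws iam list-server-certificates` output against a reference time; the second certificate entry and its far-future date are hypothetical additions for illustration.

```python
import json
from datetime import datetime, timezone

# Sample shaped like `aws iam list-server-certificates` output; the
# second (non-expired) entry is a hypothetical addition.
sample = json.loads("""
{
  "ServerCertificateMetadataList": [
    {"ServerCertificateName": "MyServerCertificate",
     "Expiration": "2018-07-10T23:59:59Z"},
    {"ServerCertificateName": "CurrentCertificate",
     "Expiration": "2099-01-01T00:00:00Z"}
  ]
}
""")

def expired_certificates(response, now=None):
    """Names of IAM server certificates whose Expiration is in the past."""
    now = now or datetime.now(timezone.utc)
    expired = []
    for cert in response.get("ServerCertificateMetadataList", []):
        # Convert the trailing "Z" so fromisoformat() can parse the timestamp.
        expiration = datetime.fromisoformat(cert["Expiration"].replace("Z", "+00:00"))
        if expiration < now:
            expired.append(cert["ServerCertificateName"])
    return expired
```

Each name returned here would be passed to the delete-server-certificate remediation shown above.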
    Technical security Preventive
    Document the roles and responsibilities for all activities that protect restricted data in the information security procedures. CC ID 12304 Operational management Preventive
    Establish, implement, and maintain system hardening procedures. CC ID 12001 System hardening through configuration management Preventive
    Establish, implement, and maintain an authenticator standard. CC ID 01702 System hardening through configuration management Preventive
    Establish, implement, and maintain an authenticator management system. CC ID 12031 System hardening through configuration management Preventive
    Establish, implement, and maintain authenticator procedures. CC ID 12002
    [Ensure security questions are registered in the AWS account (Manual) Description: The AWS support portal allows account owners to establish security questions that can be used to authenticate individuals calling AWS customer service for support. It is recommended that security questions be established. Rationale: When creating a new AWS account, a default super user is automatically created. This account is referred to as the 'root user' or 'root' account. It is recommended that the use of this account be limited and highly controlled. During events in which the 'root' password is no longer accessible or the MFA token associated with 'root' is lost/destroyed it is possible, through authentication using secret questions and associated answers, to recover 'root' user login access. Audit: From Console: 1. Login to the AWS account as the 'root' user 2. On the top right you will see the 3. Click on the 4. From the drop-down menu Click My Account 5. In the Configure Security Challenge Questions section on the Personal Information page, configure three security challenge questions. 6. Click Save questions . Remediation: From Console: 1. Login to the AWS Account as the 'root' user 2. Click on the from the top right of the console 3. From the drop-down menu Click My Account 4. Scroll down to the Configure Security Questions section 5. Click on Edit 6. Click on each Question • From the drop-down select an appropriate question • Click on the Answer section • Enter an appropriate answer o Follow process for all 3 questions 7. Click Update when complete 8. Save Questions and Answers and place in a secure physical location 1.3
    Do not setup access keys during initial user setup for all IAM users that have a console password (Manual) Description: AWS console defaults to no check boxes selected when creating a new IAM user. When creating the IAM User credentials you have to determine what type of access they require. Programmatic access: The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user. AWS Management Console access: If the user needs to access the AWS Management Console, create a password for the user. Rationale: Requiring the additional steps be taken by the user for programmatic access after their profile has been created will give a stronger indication of intent that access keys are [a] necessary for their work and [b] once the access key is established on an account that the keys may be in use somewhere in the organization. Note: Even if it is known the user will need access keys, require them to create the keys themselves or put in a support ticket to have them created as a separate step from user creation. Audit: Perform the following to determine if access keys were created upon user creation and are being used and rotated as prescribed: From Console: 1. Login to the AWS Management Console 2. Click Services 3. Click IAM 4. Click on a User where column Password age and Access key age is not set to None 5. Click on Security credentials Tab 6. Compare the user Creation time to the Access Key Created date. 7. For any that match, the key was created during initial user setup. • Keys that were created at the same time as the user profile and do not have a last used date should be deleted. Refer to the remediation below. From Command Line: 1. 
Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their access key utilization: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,9,11,14,16 2. The output of this command will produce a table similar to the following: user,password_enabled,access_key_1_active,access_key_1_last_used_date,access_key_2_active,access_key_2_last_used_date elise,false,true,2015-04-16T15:14:00+00:00,false,N/A brandon,true,true,N/A,false,N/A rakesh,false,false,N/A,false,N/A helene,false,true,2015-11-18T17:47:00+00:00,false,N/A paras,true,true,2016-08-28T12:04:00+00:00,true,2016-03-04T10:11:00+00:00 anitha,true,true,2016-06-08T11:43:00+00:00,true,N/A 3. For any user having password_enabled set to true AND access_key_last_used_date set to N/A, refer to the remediation below. Remediation: Perform the following to delete access keys that do not pass the audit: From Console: 1. Login to the AWS Management Console: 2. Click Services 3. Click IAM 4. Click on Users 5. Click on Security Credentials 6. As an Administrator • Click on the X (Delete) for keys that were created at the same time as the user profile but have not been used. 7. As an IAM User • Click on the X (Delete) for keys that were created at the same time as the user profile but have not been used. From Command Line: aws iam delete-access-key --access-key-id --user-name 1.11
{be active} Ensure there is only one active access key available for any single IAM user (Automated) Description: Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK) Rationale: Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API. One of the best ways to protect your account is to not allow users to have multiple access keys. Audit: From Console: 1. Sign in to the AWS Management Console and navigate to IAM dashboard at https://console.aws.amazon.com/iam/. 2. In the left navigation panel, choose Users. 3. Click on the IAM user name that you want to examine. 4. On the IAM user configuration page, select Security Credentials tab. 5. Under Access Keys section, in the Status column, check the current status for each access key associated with the IAM user. If the selected IAM user has more than one access key activated then the user's access configuration does not adhere to security best practices and the risk of accidental exposures increases. • Repeat steps no. 3 – 5 for each IAM user in your AWS account. From Command Line: 1. Run list-users command to list all IAM users within your account: aws iam list-users --query "Users[*].UserName" The command output should return an array that contains all your IAM user names. 2. Run list-access-keys command using the IAM user name list to return the current status of each access key associated with the selected IAM user: aws iam list-access-keys --user-name <user-name> The command output should expose the metadata ("Username", "AccessKeyId", "Status", "CreateDate") for each access key on that user account. 3. Check the Status property value for each key returned to determine each key's current state.
If the Status property value for more than one IAM access key is set to Active, the user access configuration does not adhere to this recommendation, refer to the remediation below. • Repeat steps no. 2 and 3 for each IAM user in your AWS account. Remediation: From Console: 1. Sign in to the AWS Management Console and navigate to IAM dashboard at https://console.aws.amazon.com/iam/. 2. In the left navigation panel, choose Users. 3. Click on the IAM user name that you want to examine. 4. On the IAM user configuration page, select Security Credentials tab. 5. In Access Keys section, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working. 6. In the same Access Keys section, identify your non-operational access keys (other than the chosen one) and deactivate them by clicking the Make Inactive link. 7. If you receive the Change Key Status confirmation box, click Deactivate to switch off the selected key. 8. Repeat steps no. 3 – 7 for each IAM user in your AWS account. From Command Line: 1. Using the IAM user and access key information provided in the Audit CLI, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working. 2. Run the update-access-key command below using the IAM user name and the non-operational access key IDs to deactivate the unnecessary key(s). Refer to the Audit section to identify the unnecessary access key ID for the selected IAM user Note - the command does not return any output: aws iam update-access-key --access-key-id <access-key-id> --status Inactive --user-name <user-name> 3.
To confirm that the selected access key pair has been successfully deactivated run the list-access-keys audit command again for that IAM User: aws iam list-access-keys --user-name <user-name> • The command output should expose the metadata for each access key associated with the IAM user. If the non-operational key pair(s) Status is set to Inactive, the key has been successfully deactivated and the IAM user access configuration now adheres to this recommendation. 4. Repeat steps no. 1 – 3 for each IAM user in your AWS account. 1.13]
    System hardening through configuration management Preventive
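The single-active-key audit above reduces to counting `"Status": "Active"` entries per user. A minimal sketch, with sample JSON standing in for a live `aws iam list-access-keys --user-name <user-name>` call (the user and key IDs are illustrative assumptions):

```shell
#!/bin/sh
# Sketch of the audit's step 3: count Active keys for one user and flag the
# account when more than one is active. The sample JSON stands in for:
#   aws iam list-access-keys --user-name <user-name>
keys_json='{"AccessKeyMetadata":[
 {"UserName":"paras","AccessKeyId":"AKIAEXAMPLEKEY00001","Status":"Active"},
 {"UserName":"paras","AccessKeyId":"AKIAEXAMPLEKEY00002","Status":"Active"}]}'

active=$(printf '%s' "$keys_json" | grep -o '"Status":"Active"' | wc -l | tr -d ' ')
if [ "$active" -gt 1 ]; then
  echo "NON-COMPLIANT: $active active access keys"
else
  echo "OK: $active active access key(s)"
fi
```

With two active keys in the sample, the check reports non-compliance, which maps to the remediation's `update-access-key --status Inactive` step.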
    Configure the "minimum number of digits required for new passwords" setting to organizational standards. CC ID 08717 System hardening through configuration management Preventive
    Configure the "minimum number of upper case characters required for new passwords" setting to organizational standards. CC ID 08718 System hardening through configuration management Preventive
    Configure the "minimum number of lower case characters required for new passwords" setting to organizational standards. CC ID 08719 System hardening through configuration management Preventive
    Configure the "minimum number of special characters required for new passwords" setting to organizational standards. CC ID 08720 System hardening through configuration management Preventive
    Configure the "require new passwords to differ from old ones by the appropriate minimum number of characters" setting to organizational standards. CC ID 08722 System hardening through configuration management Preventive
    Configure the "password reuse" setting to organizational standards. CC ID 08724
[Ensure IAM password policy prevents password reuse (Automated) Description: IAM password policies can prevent the reuse of a given password by the same user. It is recommended that the password policy prevent the reuse of passwords. Rationale: Preventing password reuse increases account resiliency against brute force login attempts. Audit: Perform the following to ensure the password policy is configured as prescribed: From Console: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Ensure "Prevent password reuse" is checked 5. Ensure "Number of passwords to remember" is set to 24 From Command Line: aws iam get-account-password-policy Ensure the output of the above command includes "PasswordReusePrevention": 24 Remediation: Perform the following to set the password policy as prescribed: From Console: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Check "Prevent password reuse" 5. Set "Number of passwords to remember" to 24 From Command Line: aws iam update-account-password-policy --password-reuse-prevention 24 Note: All commands starting with "aws iam update-account-password-policy" can be combined into a single command. 1.9]
    System hardening through configuration management Preventive
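The command-line audit above is a one-field check on the account password policy. A minimal sketch, with sample JSON standing in for a live `aws iam get-account-password-policy` call:

```shell
#!/bin/sh
# Sketch of the audit check: confirm PasswordReusePrevention is 24.
# The sample JSON stands in for: aws iam get-account-password-policy
policy='{"PasswordPolicy":{"MinimumPasswordLength":14,"PasswordReusePrevention":24}}'
if printf '%s' "$policy" | grep -q '"PasswordReusePrevention":24'; then
  echo "compliant"
else
  # Remediation per the text above:
  echo "run: aws iam update-account-password-policy --password-reuse-prevention 24"
fi
```

In a live account, a missing `PasswordReusePrevention` key (the policy field is absent when reuse prevention is unset) also fails this check, which is the desired behavior.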
    Configure the "shadow password for all accounts in /etc/passwd" setting to organizational standards. CC ID 08721 System hardening through configuration management Preventive
    Configure the "password hashing algorithm" setting to organizational standards. CC ID 08723 System hardening through configuration management Preventive
    Establish, implement, and maintain network parameter modification procedures. CC ID 01517 System hardening through configuration management Preventive
  • Human Resources Management
    1
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE CLASS
    Define roles for information systems. CC ID 12454
[Ensure a support role has been created to manage incidents with AWS Support (Automated) Description: AWS provides a support center that can be used for incident notification and response, as well as technical support and customer services. Create an IAM Role, with the appropriate policy assigned, to allow authorized users to manage incidents with AWS Support. Rationale: By implementing least privilege for access control, an IAM Role will require an appropriate IAM Policy to allow Support Center Access in order to manage Incidents with AWS Support. Audit: From Command Line: 1. List IAM policies, filter for the 'AWSSupportAccess' managed policy, and note the "Arn" element value: aws iam list-policies --query "Policies[?PolicyName == 'AWSSupportAccess']" 2. Check if the 'AWSSupportAccess' policy is attached to any role: aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess 3. In the output, ensure PolicyRoles does not return empty. Example of an empty (failing) result: PolicyRoles: [ ] If it returns empty refer to the remediation below. Remediation: From Command Line: 1. Create an IAM role for managing incidents with AWS: • Create a trust relationship policy document that allows <iam_user> to manage AWS incidents, and save it locally as /tmp/TrustPolicy.json: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "<iam_user_arn>" }, "Action": "sts:AssumeRole" } ] } 2. Create the IAM role using the above trust policy: aws iam create-role --role-name <aws_support_iam_role> --assume-role-policy-document file:///tmp/TrustPolicy.json 3. Attach 'AWSSupportAccess' managed policy to the created IAM role: aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess --role-name <aws_support_iam_role> 1.17]
    Technical security Preventive
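Audit step 3 above hinges on whether `PolicyRoles` comes back empty. A minimal sketch, with sample JSON (an empty, failing result) standing in for the live `list-entities-for-policy` call:

```shell
#!/bin/sh
# Sketch of audit step 3: fail when PolicyRoles is empty, i.e. no role
# carries the AWSSupportAccess managed policy. The sample JSON stands in for:
#   aws iam list-entities-for-policy \
#     --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess
entities='{"PolicyGroups":[],"PolicyUsers":[],"PolicyRoles":[]}'
if printf '%s' "$entities" | grep -q '"PolicyRoles":\[\]'; then
  echo "NON-COMPLIANT: no role attached to AWSSupportAccess"
else
  echo "OK"
fi
```

A live account that has completed the remediation would return a non-empty `PolicyRoles` array, so the grep fails and the check prints OK.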
  • IT Impact Zone
    2
    Technical security CC ID 00508 Technical security IT Impact Zone
    System hardening through configuration management CC ID 00860 System hardening through configuration management IT Impact Zone
  • Log Management
    5
    Configure the log to capture hardware and software access attempts. CC ID 01220
[Ensure AWS Management Console authentication failures are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for failed console authentication attempts. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring failed console logins may decrease lead time to detect an attempt to brute force a credential, which may provide an indicator, such as source IP address, that can be used in other event correlation. Impact: Monitoring for these failures may create a large number of alerts, more so in larger environments. Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name>. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> and ensure IsLogging is set to TRUE • Ensure the identified multi-region CloudTrail captures all management events: aws cloudtrail get-event-selectors --trail-name <trail_name> and ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2.
Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>" 3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }" 4. Note the <metric_name> value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the <metric_name> captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<metric_name>`]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> At least one subscription should have "SubscriptionArn" with a valid AWS ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>" Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided, which checks for AWS Management Console login failures, and the <cloudtrail_log_group_name> taken from audit step 1. aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name `<console_signin_failure_metric>` --metric-transformations metricName=`<console_signin_failure_metric>`,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoints> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4.
Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name `<console_signin_failure_alarm>` --metric-name `<console_signin_failure_metric>` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.6]
    System hardening through configuration management Detective
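The four remediation steps above (filter, topic, subscription, alarm) can be collected into one parameterized sketch. All variable values here are illustrative assumptions, not values from the benchmark, and the `run` wrapper only prints each command so the sketch is safe to execute without an AWS account; swapping `echo` out for direct execution applies it for real.

```shell
#!/bin/sh
# Parameterized sketch of the metric-filter/alarm remediation pipeline.
# LOG_GROUP, METRIC, TOPIC, region and account number are assumptions.
LOG_GROUP="CloudTrail/DefaultLogGroup"   # <cloudtrail_log_group_name> from audit step 1
METRIC="console-signin-failure-metric"
TOPIC="cis-benchmark-alarms"
TOPIC_ARN="arn:aws:sns:us-east-1:111122223333:$TOPIC"
PATTERN='{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }'

run() { echo "+ $*"; }   # prints the command; replace with direct execution to apply

run aws logs put-metric-filter --log-group-name "$LOG_GROUP" \
    --filter-name "$METRIC" \
    --metric-transformations "metricName=$METRIC,metricNamespace=CISBenchmark,metricValue=1" \
    --filter-pattern "$PATTERN"
run aws sns create-topic --name "$TOPIC"
run aws sns subscribe --topic-arn "$TOPIC_ARN" \
    --protocol email --notification-endpoint "security@example.com"
run aws cloudwatch put-metric-alarm --alarm-name "$METRIC-alarm" \
    --metric-name "$METRIC" --namespace CISBenchmark --statistic Sum \
    --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
    --evaluation-periods 1 --alarm-actions "$TOPIC_ARN"
```

The same skeleton serves the other CloudWatch monitoring recommendations in this benchmark; only the filter pattern and names change.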
    Configure the log to capture access to restricted data or restricted information. CC ID 00644
[Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket (Automated) Description: S3 Bucket Access Logging generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resources specified in the request worked, and the time and date the request was processed. It is recommended that bucket access logging be enabled on the CloudTrail S3 bucket. Rationale: By enabling S3 bucket logging on target S3 buckets, it is possible to capture all events which may affect objects within any target buckets. Configuring logs to be placed in a separate bucket allows access to log information which can be useful in security and incident response workflows. Audit: Perform the following to ensure the CloudTrail S3 bucket has access logging enabled: From Console: 1. Go to the Amazon CloudTrail console at https://console.aws.amazon.com/cloudtrail/home 2. In the API activity history pane on the left, click Trails 3. In the Trails pane, note the bucket names in the S3 bucket column 4. Sign in to the AWS Management Console and open the S3 console at https://console.aws.amazon.com/s3. 5. Under All Buckets click on a target S3 bucket 6. Click on Properties in the top right of the console 7. Under Bucket: <bucket_name> click on Logging 8. Ensure Enabled is checked. From Command Line: 1. Get the name of the S3 bucket that CloudTrail is logging to: aws cloudtrail describe-trails --query 'trailList[*].S3BucketName' 2. Ensure Bucket Logging is enabled: aws s3api get-bucket-logging --bucket <s3_bucket_for_cloudtrail> Ensure the command does not return empty output. Sample output for a bucket with logging enabled: { "LoggingEnabled": { "TargetPrefix": "<logfile_prefix>", "TargetBucket": "<bucket_name_for_storing_logs>" } } Remediation: Perform the following to enable S3 bucket logging: From Console: 1. Sign in to the AWS Management Console and open the S3 console at https://console.aws.amazon.com/s3. 2. Under All Buckets click on the target S3 bucket 3.
Click on Properties in the top right of the console 4. Under Bucket: <bucket_name> click on Logging 5. Configure bucket logging o Click on the Enabled checkbox o Select Target Bucket from list o Enter a Target Prefix 6. Click Save. From Command Line: 1. Get the name of the S3 bucket that CloudTrail is logging to: aws cloudtrail describe-trails --region <region> --query trailList[*].S3BucketName 2. Copy and add the target bucket name at <Logging_BucketName>, the prefix for the log file at <LogFilePrefix>, and optionally add an email address in the following template, then save it as <FileName.json>: { "LoggingEnabled": { "TargetBucket": "<Logging_BucketName>", "TargetPrefix": "<LogFilePrefix>", "TargetGrants": [ { "Grantee": { "Type": "AmazonCustomerByEmail", "EmailAddress": "<EmailID>" }, "Permission": "FULL_CONTROL" } ] } } 3. Run the put-bucket-logging command with the bucket name and <FileName.json> as input; for more information refer to put-bucket-logging: aws s3api put-bucket-logging --bucket <bucket_name> --bucket-logging-status file://<FileName.json> Default Value: Logging is disabled. 3.4]
    System hardening through configuration management Detective
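Remediation step 2 above builds a bucket-logging status document. A minimal sketch of generating that document; the bucket and prefix names are illustrative assumptions, and the live `put-bucket-logging` call is shown as a comment rather than executed:

```shell
#!/bin/sh
# Sketch of remediation step 2: generate the bucket-logging status document.
# TARGET_BUCKET and TARGET_PREFIX are assumed example names.
TARGET_BUCKET="my-access-log-bucket"
TARGET_PREFIX="cloudtrail-bucket-logs/"
cat > /tmp/logging.json <<EOF
{
  "LoggingEnabled": {
    "TargetBucket": "$TARGET_BUCKET",
    "TargetPrefix": "$TARGET_PREFIX"
  }
}
EOF
# Live call (shown, not executed here):
#   aws s3api put-bucket-logging --bucket <s3_bucket_for_cloudtrail> \
#       --bucket-logging-status file:///tmp/logging.json
cat /tmp/logging.json
```

Note the optional `TargetGrants` block from the benchmark template is omitted here; it is only needed when granting log access to another account by email.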
    Configure the log to capture actions taken by individuals with root privileges or administrative privileges and add logging option to the root file system. CC ID 00645
[{root user} Ensure usage of 'root' account is monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for 'root' login attempts to detect the unauthorized use, or attempts to use the root account. Rationale: Monitoring for 'root' account logins will provide visibility into the use of a fully privileged account and an opportunity to reduce the use of it. CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name>. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> and ensure IsLogging is set to TRUE • Ensure the identified multi-region CloudTrail captures all management events: aws cloudtrail get-event-selectors --trail-name <trail_name> and ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>" 3.
Ensure the output from the above command contains the following: "filterPattern": "{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }" 4. Note the <metric_name> value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the <metric_name> captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<metric_name>`]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> At least one subscription should have "SubscriptionArn" with a valid AWS ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>" Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided, which checks for 'Root' account usage, and the <cloudtrail_log_group_name> taken from audit step 1. aws logs put-metric-filter --log-group-name `<cloudtrail_log_group_name>` --filter-name `<root_usage_metric>` --metric-transformations metricName=`<root_usage_metric>`,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoints> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4.
Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name `<root_usage_alarm>` --metric-name `<root_usage_metric>` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.3]
    System hardening through configuration management Detective
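Audit step 3 above checks that the log group carries a metric filter with the exact 'root' usage pattern. A minimal sketch of that check; the sample `describe-metric-filters` JSON (filter name included) is an assumption standing in for the live call:

```shell
#!/bin/sh
# Sketch of audit step 3: confirm a metric filter carrying the 'root' usage
# pattern exists in the log group. The sample JSON stands in for:
#   aws logs describe-metric-filters --log-group-name <cloudtrail_log_group_name>
filters='{"metricFilters":[{"filterName":"RootUsage","filterPattern":"{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }"}]}'
if printf '%s' "$filters" | grep -qF '$.userIdentity.type = \"Root\"'; then
  echo "root-usage metric filter present"
else
  echo "NON-COMPLIANT: no root-usage metric filter"
fi
```

Matching on the literal pattern text (with `grep -F`) avoids accidental matches against unrelated filters that merely mention "Root" elsewhere.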
    Configure the log to capture changes to User privileges, audit policies, and trust policies by enabling audit policy changes. CC ID 01698
[Ensure S3 bucket policy changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for changes to S3 bucket policies. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring changes to S3 bucket policies may reduce time to detect and correct permissive policies on sensitive S3 buckets. Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn, note <cloudtrail_log_group_name>. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> and ensure IsLogging is set to TRUE • Ensure the identified multi-region CloudTrail captures all management events: aws cloudtrail get-event-selectors --trail-name <trail_name> and ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>" 3.
Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }" 4. Note the <metric_name> value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the <metric_name> captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<metric_name>`]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> At least one subscription should have "SubscriptionArn" with a valid AWS ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>" Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided, which checks for S3 bucket policy changes, and the <cloudtrail_log_group_name> taken from audit step 1. aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name `<s3_bucket_policy_changes_metric>` --metric-transformations metricName=`<s3_bucket_policy_changes_metric>`,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2.
Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoints> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name `<s3_bucket_policy_changes_alarm>` --metric-name `<s3_bucket_policy_changes_metric>` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.8]
    System hardening through configuration management Detective
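The S3 filter pattern above enumerates nine bucket-policy API calls, and a missing event name silently blinds the alarm. A minimal sketch that cross-checks the deployed pattern against the full event list from the audit text:

```shell
#!/bin/sh
# Sketch: verify the filter pattern covers all nine S3 bucket-policy API
# calls named in the audit text before relying on the alarm.
pattern='{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }'
missing=0
for ev in PutBucketAcl PutBucketPolicy PutBucketCors PutBucketLifecycle \
          PutBucketReplication DeleteBucketPolicy DeleteBucketCors \
          DeleteBucketLifecycle DeleteBucketReplication; do
  printf '%s' "$pattern" | grep -q "eventName = $ev)" || { echo "missing: $ev"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all nine S3 bucket events covered"
```

Matching `eventName = <name>)` with the trailing parenthesis keeps `PutBucketPolicy` from being satisfied by a partial match inside another event name.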
    Configure the log to capture user account additions, modifications, and deletions. CC ID 16482 System hardening through configuration management Preventive
  • Process or Activity
    3
    Issue temporary authenticators, as necessary. CC ID 17062 System hardening through configuration management Preventive
    Renew temporary authenticators, as necessary. CC ID 17061 System hardening through configuration management Preventive
    Disable authenticators, as necessary. CC ID 17060 System hardening through configuration management Preventive
  • Technical Security
    15
    Identify information system users. CC ID 12081 Technical security Detective
    Control access rights to organizational assets. CC ID 00004 Technical security Preventive
    Enable role-based access control for objects and users on information systems. CC ID 12458
[Ensure IAM instance roles are used for AWS resource access from instances (Automated) Description: AWS access from within AWS instances can be done by either encoding AWS keys into AWS API calls or by assigning the instance to a role which has an appropriate permissions policy for the required access. "AWS Access" means accessing the APIs of AWS in order to access AWS resources or manage AWS account resources. Rationale: AWS IAM roles reduce the risks associated with sharing and rotating credentials that can be used outside of AWS itself. If credentials are compromised, they can be used from outside of the AWS account they give access to. In contrast, in order to leverage role permissions an attacker would need to gain and maintain access to a specific instance to use the privileges associated with it. Additionally, if credentials are encoded into compiled applications or other hard to change mechanisms, then they are even more unlikely to be properly rotated due to service disruption risks. As time goes on, credentials that cannot be rotated are more likely to be known by an increasing number of individuals who no longer work for the organization owning the credentials. Audit: From Console: 1. Sign in to the AWS Management Console and navigate to EC2 dashboard at https://console.aws.amazon.com/ec2/. 2. In the left navigation panel, choose Instances. 3. Select the EC2 instance you want to examine. 4. Select Actions. 5. Select View details. 6. Select Security in the lower panel. • If the value for Instance profile arn is an instance profile ARN, then an instance profile (that contains an IAM role) is attached. • If the value for IAM Role is blank, no role is attached. • If the value for IAM Role contains a role name, an IAM role is attached to the instance. • If the value for IAM Role is "No roles attached to instance profile: <instance_profile_name>", then an instance profile is attached to the instance, but it does not contain an IAM role. 7. Repeat steps 3 to 6 for each EC2 instance in your AWS account. From Command Line: 1.
Run the describe-instances command to list all EC2 instance IDs, available in the selected AWS region. The command output will return each instance ID: aws ec2 describe-instances --region <region> --query 'Reservations[*].Instances[*].InstanceId' 2. Run the describe-instances command again for each EC2 instance using the IamInstanceProfile identifier in the query filter to check if an IAM role is attached: aws ec2 describe-instances --region <region> --instance-id <instance-id> --query 'Reservations[*].Instances[*].IamInstanceProfile' 3. If an IAM role is attached, the command output will show the IAM instance profile ARN and ID. 4. Repeat steps 1 to 3 for each EC2 instance in your AWS account. Remediation: From Console: 1. Sign in to the AWS Management Console and navigate to EC2 dashboard at https://console.aws.amazon.com/ec2/. 2. In the left navigation panel, choose Instances. 3. Select the EC2 instance you want to modify. 4. Click Actions. 5. Click Security. 6. Click Modify IAM role. 7. Click Create new IAM role if a new IAM role is required. 8. Select the IAM role you want to attach to your instance in the IAM role dropdown. 9. Click Update IAM role. 10. Repeat steps 3 to 9 for each EC2 instance in your AWS account that requires an IAM role to be attached. From Command Line: 1. Run the describe-instances command to list all EC2 instance IDs, available in the selected AWS region: aws ec2 describe-instances --region <region> --query 'Reservations[*].Instances[*].InstanceId' 2. Run the associate-iam-instance-profile command to attach an instance profile (which is attached to an IAM role) to the EC2 instance: aws ec2 associate-iam-instance-profile --region <region> --instance-id <instance-id> --iam-instance-profile Name="Instance-Profile-Name" 3. Run the describe-instances command again for the recently modified EC2 instance. The command output should return the instance profile ARN and ID: aws ec2 describe-instances --region <region> --instance-id <instance-id> --query 'Reservations[*].Instances[*].IamInstanceProfile' 4.
Repeat steps 1 to 3 for each EC2 instance in your AWS account that requires an IAM role to be attached. 1.18]
    Technical security Preventive
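The audit logic above can be rehearsed offline against captured `describe-instances` output. The following is an illustrative sketch only — the sample JSON and the helper name are mine, not from the benchmark; the real check runs the documented `aws ec2 describe-instances` commands per region:

```python
import json

def instances_missing_iam_role(describe_instances_output: dict) -> list:
    """Return IDs of EC2 instances with no IamInstanceProfile attached,
    mirroring audit steps 2-3 (an attached role shows up as an
    IamInstanceProfile block with an ARN and ID)."""
    missing = []
    for reservation in describe_instances_output.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            if "IamInstanceProfile" not in instance:
                missing.append(instance["InstanceId"])
    return missing

# Trimmed sample of what `aws ec2 describe-instances` returns (hypothetical IDs).
sample = json.loads("""
{"Reservations": [{"Instances": [
  {"InstanceId": "i-0aaa",
   "IamInstanceProfile": {"Arn": "arn:aws:iam::111122223333:instance-profile/demo",
                          "Id": "AIPAEXAMPLE"}},
  {"InstanceId": "i-0bbb"}
]}]}
""")
print(instances_missing_iam_role(sample))  # instances needing remediation
```

Instances returned by the helper would then be remediated with `associate-iam-instance-profile` as described above.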
    Control user privileges. CC ID 11665
    [Ensure IAM Users Receive Permissions Only Through Groups (Automated) Description: IAM users are granted access to services, functions, and data through IAM policies. There are four ways to define policies for a user: 1) Edit the user policy directly, aka an inline, or user, policy; 2) attach a policy directly to a user; 3) add the user to an IAM group that has an attached policy; 4) add the user to an IAM group that has an inline policy. Only the third implementation is recommended. Rationale: Assigning IAM policy only through groups unifies permissions management to a single, flexible layer consistent with organizational functional roles. By unifying permissions management, the likelihood of excessive permissions is reduced. Audit: Perform the following to determine if an inline policy is set or a policy is directly attached to users: 1. Run the following to get a list of IAM users: aws iam list-users --query 'Users[*].UserName' --output text 2. For each user returned, run the following command to determine if any policies are attached to them: aws iam list-attached-user-policies --user-name aws iam list-user-policies --user-name 3. If any policies are returned, the user has an inline policy or direct policy attachment. Remediation: Perform the following to create an IAM group and assign a policy to it: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, click Groups and then click Create New Group. 3. In the Group Name box, type the name of the group and then click Next Step . 4. In the list of policies, select the check box for each policy that you want to apply to all members of the group. Then click Next Step . 5. Click Create Group Perform the following to add a user to a given group: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, click Groups 3. Select the group to add a user to 4. 
Click Add Users To Group 5. Select the users to be added to the group 6. Click Add Users Perform the following to remove a direct association between a user and policy: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. 2. In the left navigation pane, click on Users 3. For each user: o Select the user o Click on the Permissions tab o Expand Permissions policies o Click X for each policy; then click Detach or Remove (depending on policy type) 1.15]
    Technical security Preventive
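The audit intent above — flag any user with a directly attached or inline policy — can be sketched against the output of `list-attached-user-policies` and `list-user-policies`. The helper and sample data below are illustrative assumptions, not part of the benchmark:

```python
def users_with_direct_policies(attached: dict, inline: dict) -> list:
    """Given per-user results from `aws iam list-attached-user-policies`
    (attached) and `aws iam list-user-policies` (inline), return the users
    that violate the groups-only recommendation."""
    flagged = {user for user, policies in list(attached.items()) + list(inline.items())
               if policies}
    return sorted(flagged)

# Hypothetical sample: alice has a directly attached policy, bob an inline one.
attached = {"alice": ["AdministratorAccess"], "bob": []}
inline = {"alice": [], "bob": ["s3-inline"], "carol": []}
print(users_with_direct_policies(attached, inline))
```

Flagged users would be moved into an IAM group carrying the needed policy, then detached from their direct policies per the remediation steps.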
    Enforce usage restrictions for superuser accounts. CC ID 07064
    [{administrative tasks} Eliminate use of the 'root' user for administrative and daily tasks (Manual) Description: With the creation of an AWS account, a 'root user' is created that cannot be disabled or deleted. That user has unrestricted access to and control over all resources in the AWS account. It is highly recommended that the use of this account be avoided for everyday tasks. Rationale: The 'root user' has unrestricted access to and control over all account resources. Use of it is inconsistent with the principles of least privilege and separation of duties, and can lead to unnecessary harm due to error or account compromise. Audit: From Console: 1. Login to the AWS Management Console at https://console.aws.amazon.com/iam/ 2. In the left pane, click Credential Report 3. Click on Download Report 4. Open or Save the file locally 5. Locate the under the user column 6. Review password_last_used, access_key_1_last_used_date, access_key_2_last_used_date to determine when the 'root user' was last used. From Command Line: Run the following CLI commands to provide a credential report for determining the last time the 'root user' was used: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,5,11,16 | grep -B1 '' Review password_last_used, access_key_1_last_used_date, access_key_2_last_used_date to determine when the root user was last used. Note: There are a few conditions under which the use of the 'root' user account is required. Please see the reference links for all of the tasks that require use of the 'root' user. Remediation: If you find that the 'root' user account is being used for daily activity to include administrative tasks that do not require the 'root' user: 1. Change the 'root' user password. 2. Deactivate or delete any access keys associated with the 'root' user. 
Remember, anyone who has 'root' user credentials for your AWS account has unrestricted access to and control of all the resources in your account, including billing information. 1.7]
    Technical security Preventive
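The credential-report parsing in the command-line audit above can be demonstrated locally. This is a sketch under stated assumptions: the sample CSV is trimmed to four of the report's columns, and the helper name is mine; the real report comes from `aws iam get-credential-report` as base64-encoded CSV, with the root row named `<root_account>`:

```python
import base64
import csv
import io

def root_last_used(report_b64: str) -> dict:
    """Decode a credential report (the base64 Content field from
    `aws iam get-credential-report`) and pull the root user's
    last-use columns, as in audit step 6."""
    text = base64.b64decode(report_b64).decode()
    for row in csv.DictReader(io.StringIO(text)):
        if row["user"] == "<root_account>":
            return {key: row[key] for key in ("password_last_used",
                                              "access_key_1_last_used_date",
                                              "access_key_2_last_used_date")}
    return {}

# Hypothetical, trimmed credential report (real reports have more columns).
sample_csv = ("user,password_last_used,access_key_1_last_used_date,access_key_2_last_used_date\n"
              "<root_account>,2024-01-02T03:04:05+00:00,N/A,N/A\n"
              "alice,2024-05-06T07:08:09+00:00,N/A,N/A\n")
report = base64.b64encode(sample_csv.encode()).decode()
print(root_last_used(report))
```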
    Establish, implement, and maintain user accounts in accordance with the organizational Governance, Risk, and Compliance framework. CC ID 00526
    [Ensure IAM users are managed centrally via identity federation or AWS Organizations for multi-account environments (Manual) Description: In multi-account environments, IAM user centralization facilitates greater user control. User access beyond the initial account is then provided via role assumption. Centralization of users can be accomplished through federation with an external identity provider or through the use of AWS Organizations. Rationale: Centralizing IAM user management to a single identity store reduces complexity and thus the likelihood of access management errors. Audit: For multi-account AWS environments with an external identity provider: 1. Determine the master account for identity federation or IAM user management 2. Login to that account through the AWS Management Console 3. Click Services 4. Click IAM 5. Click Identity providers 6. Verify the configuration Then, determine all accounts that should not have local users present. For each account: 1. Determine all accounts that should not have local users present 2. Log into the AWS Management Console 3. Switch role into each identified account 4. Click Services 5. Click IAM 6. Click Users 7. Confirm that no IAM users representing individuals are present For multi-account AWS environments implementing AWS Organizations without an external identity provider: 1. Determine all accounts that should not have local users present 2. Log into the AWS Management Console 3. Switch role into each identified account 4. Click Services 5. Click IAM 6. Click Users 7. Confirm that no IAM users representing individuals are present Remediation: The remediation procedure will vary based on the individual organization's implementation of identity federation and/or AWS Organizations with the acceptance criteria that no non-service IAM users, and non-root accounts, are present outside the account providing centralized IAM user management. 1.21]
    Technical security Preventive
    Refrain from allowing individuals to self-enroll into multifactor authentication from untrusted devices. CC ID 17173 Technical security Preventive
    Implement phishing-resistant multifactor authentication techniques. CC ID 16541 Technical security Preventive
    Use the latest approved version of all assets. CC ID 00897
    [{Instance Metadata Service} Ensure that EC2 Metadata Service only allows IMDSv2 (Automated) Description: When enabling the Metadata Service on AWS EC2 instances, users have the option of using either Instance Metadata Service Version 1 (IMDSv1; a request/response method) or Instance Metadata Service Version 2 (IMDSv2; a session-oriented method). Rationale: Instance metadata is data about your instance that you can use to configure or manage the running instance. Instance metadata is divided into categories, for example, host name, events, and security groups. When enabling the Metadata Service on AWS EC2 instances, users have the option of using either Instance Metadata Service Version 1 (IMDSv1; a request/response method) or Instance Metadata Service Version 2 (IMDSv2; a session-oriented method). With IMDSv2, every request is now protected by session authentication. A session begins and ends a series of requests that software running on an EC2 instance uses to access the locally-stored EC2 instance metadata and credentials. Allowing Version 1 of the service may open EC2 instances to Server-Side Request Forgery (SSRF) attacks, so Amazon recommends utilizing Version 2 for better instance security. Audit: From Console: 1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at https://console.aws.amazon.com/ec2/. 2. In the left navigation panel, under the INSTANCES section, choose Instances. 3. Select the EC2 instance that you want to examine. 4. Check for the IMDSv2 status, and ensure that it is set to Required. From Command Line: 1. Run the describe-instances command using appropriate filtering to list the IDs of all the existing EC2 instances currently available in the selected region: aws ec2 describe-instances --region --output table --query "Reservations[*].Instances[*].InstanceId" 2. The command output should return a table with the requested instance IDs. 3. 
Now run the describe-instances command using an instance ID returned at the previous step and custom filtering to determine whether the selected instance has IMDSv2: aws ec2 describe-instances --region --instance-ids --query "Reservations[*].Instances[*].MetadataOptions" --output table 4. Ensure that for all EC2 instances HttpTokens is set to required and State is set to applied. 5. Repeat steps no. 3 and 4 to verify other EC2 instances provisioned within the current region. 6. Repeat steps no. 1 – 5 to perform the audit process for other AWS regions. Remediation: From Console: 1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at https://console.aws.amazon.com/ec2/. 2. In the left navigation panel, under the INSTANCES section, choose Instances. 3. Select the EC2 instance that you want to examine. 4. Choose Actions > Instance Settings > Modify instance metadata options. 5. Ensure Instance metadata service is set to Enable and set IMDSv2 to Required. 6. Repeat steps no. 1 – 5 to perform the remediation process for other EC2 instances in all applicable AWS region(s). From Command Line: 1. Run the describe-instances command using appropriate filtering to list the IDs of all the existing EC2 instances currently available in the selected region: aws ec2 describe-instances --region --output table --query "Reservations[*].Instances[*].InstanceId" 2. The command output should return a table with the requested instance IDs. 3. Now run the modify-instance-metadata-options command using an instance ID returned at the previous step to update the Instance Metadata Version: aws ec2 modify-instance-metadata-options --instance-id --http-tokens required --region 4. Repeat steps no. 1 – 3 to perform the remediation process for other EC2 instances in the same AWS region. 5. Change the region by updating --region and repeat the entire process for other regions. 5.6]
    System hardening through configuration management Preventive
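The compliance condition in audit step 4 — `HttpTokens` set to `required` and `State` set to `applied` — can be expressed as a small predicate over the `MetadataOptions` block returned by `describe-instances`. The helper name and sample values below are illustrative assumptions:

```python
def imdsv1_allowed(metadata_options: dict) -> bool:
    """True when an instance still accepts IMDSv1: either HttpTokens is
    not 'required', or the pending metadata-option change has not yet
    reached the 'applied' state."""
    return not (metadata_options.get("HttpTokens") == "required"
                and metadata_options.get("State") == "applied")

# Hypothetical MetadataOptions blocks as returned per instance.
compliant = {"HttpTokens": "required", "State": "applied", "HttpEndpoint": "enabled"}
legacy = {"HttpTokens": "optional", "State": "applied", "HttpEndpoint": "enabled"}
print(imdsv1_allowed(compliant), imdsv1_allowed(legacy))
```

Any instance for which the predicate returns True would be updated with `modify-instance-metadata-options --http-tokens required` per the remediation.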
    Establish, implement, and maintain authenticators. CC ID 15305
    [{not used} Ensure credentials unused for 45 days or greater are disabled (Automated) Description: AWS IAM users can access AWS resources using different types of credentials, such as passwords or access keys. It is recommended that all credentials that have been unused in 45 or greater days be deactivated or removed. Rationale: Disabling or removing unnecessary credentials will reduce the window of opportunity for credentials associated with a compromised or abandoned account to be used. Audit: Perform the following to determine if unused credentials exist: From Console: 1. Login to the AWS Management Console 2. Click Services 3. Click IAM 4. Click on Users 5. Click the Settings (gear) icon. 6. Select Console last sign-in, Access key last used, and Access Key Id 7. Click on Close 8. Check and ensure that Console last sign-in is less than 45 days ago. Note - Never means the user has never logged in. 9. Check and ensure that Access key age is less than 45 days and that Access key last used does not say None If the user hasn't signed into the Console in the last 45 days or Access keys are over 45 days old refer to the remediation. From Command Line: Download Credential Report: 1. Run the following commands: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,5,6,9,10,11,14,15,16 | grep -v '^' Ensure unused credentials do not exist: 2. For each user having password_enabled set to TRUE , ensure password_last_used_date is less than 45 days ago. • When password_enabled is set to TRUE and password_last_used is set to No_Information , ensure password_last_changed is less than 45 days ago. 3. For each user having an access_key_1_active or access_key_2_active to TRUE , ensure the corresponding access_key_n_last_used_date is less than 45 days ago. 
• When a user has access_key_x_active (where x is 1 or 2) set to TRUE and the corresponding access_key_x_last_used_date is set to N/A, ensure access_key_x_last_rotated is less than 45 days ago. Remediation: From Console: Perform the following to manage Unused Password (IAM user console access) 1. Login to the AWS Management Console: 2. Click Services 3. Click IAM 4. Click on Users 5. Click on Security Credentials 6. Select user whose Console last sign-in is greater than 45 days 7. Click Security credentials 8. In section Sign-in credentials, Console password click Manage 9. Under Console Access select Disable 10. Click Apply Perform the following to deactivate Access Keys: 1. Login to the AWS Management Console: 2. Click Services 3. Click IAM 4. Click on Users 5. Click on Security Credentials 6. Select any access keys that are over 45 days old and that have been used and • Click on Make Inactive 7. Select any access keys that are over 45 days old and that have not been used and • Click the X to Delete 1.12]
    System hardening through configuration management Preventive
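The 45-day staleness test applied throughout the audit above is simple date arithmetic over the credential report's ISO-8601 timestamps. A minimal sketch (the helper name and sample dates are mine, not from the benchmark):

```python
from datetime import datetime, timedelta

def is_stale(last_used_iso: str, now: datetime, days: int = 45) -> bool:
    """True when a credential was last used (or, for never-used keys,
    last rotated) more than `days` ago, per the benchmark's threshold."""
    last_used = datetime.fromisoformat(last_used_iso)
    return now - last_used > timedelta(days=days)

# Hypothetical evaluation date and credential-report timestamps.
from datetime import timezone
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(is_stale("2024-01-02T03:04:05+00:00", now))  # long-unused -> disable
print(is_stale("2024-05-20T00:00:00+00:00", now))  # recently used -> keep
```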
    Disallow personal data in authenticators. CC ID 13864 System hardening through configuration management Preventive
    Restrict access to authentication files to authorized personnel, as necessary. CC ID 12127 System hardening through configuration management Preventive
    Protect authenticators or authentication factors from unauthorized modification and disclosure. CC ID 15317 System hardening through configuration management Preventive
    Implement safeguards to protect authenticators from unauthorized access. CC ID 15310 System hardening through configuration management Preventive
    Employ multifactor authentication for accounts with administrative privilege. CC ID 12496
    [Ensure MFA is enabled for the 'root' user account (Automated) Description: The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device. Note: When virtual MFA is used for 'root' accounts, it is recommended that the device used is NOT a personal device, but rather a dedicated mobile device (tablet or phone) that is managed to be kept charged and secured independent of any individual personal devices. ("non-personal virtual MFA") This lessens the risks of losing access to the MFA due to device loss, device trade-in or if the individual owning the device is no longer employed at the company. Rationale: Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that emits a time-sensitive key and have knowledge of a credential. Audit: Perform the following to determine if the 'root' user account has MFA setup: From Console: 1. Login to the AWS Management Console 2. Click Services 3. Click IAM 4. Click on Credential Report 5. This will download a .csv file which contains credential usage for all IAM users within an AWS Account - open this file 6. For the user, ensure the mfa_active field is set to TRUE . From Command Line: 1. Run the following command: aws iam get-account-summary | grep "AccountMFAEnabled" 2. Ensure the AccountMFAEnabled property is set to 1 Remediation: Perform the following to establish MFA for the 'root' user account: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. Note: to manage MFA devices for the 'root' AWS account, you must use your 'root' account credentials to sign in to AWS. 
You cannot manage MFA devices for the 'root' account using other credentials. 2. Choose Dashboard , and under Security Status , expand Activate MFA on your root account. 3. Choose Activate MFA 4. In the wizard, choose A virtual MFA device and then choose Next Step. 5. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes. 6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications.) If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device). 7. Determine whether the MFA app supports QR codes, and then do one of the following: o Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code. o In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application. When you are finished, the virtual MFA device starts generating one-time passwords. In the Manage MFA Device wizard, in the Authentication Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the Authentication Code 2 box. Choose Assign Virtual MFA. 1.5
    Ensure hardware MFA is enabled for the 'root' user account (Manual) Description: The 'root' user account is the most privileged user in an AWS account. MFA adds an extra layer of protection on top of a user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password as well as for an authentication code from their AWS MFA device. For Level 2, it is recommended that the 'root' user account be protected with a hardware MFA. Rationale: A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA does not suffer the attack surface introduced by the mobile smartphone on which a virtual MFA resides. Note: Using hardware MFA for many, many AWS accounts may create a logistical device management issue. If this is the case, consider implementing this Level 2 recommendation selectively to the highest security AWS accounts and the Level 1 recommendation applied to the remaining accounts. Audit: Perform the following to determine if the 'root' user account has a hardware MFA setup: 1. Run the following command to determine if the 'root' account has MFA setup: aws iam get-account-summary | grep "AccountMFAEnabled" If the AccountMFAEnabled property is set to 1, the 'root' user account has MFA (Virtual or Hardware) enabled. If the AccountMFAEnabled property is set to 0, the account is not compliant with this recommendation. 2. If the AccountMFAEnabled property is set to 1, determine whether the 'root' account has Hardware MFA enabled. Run the following command to list all virtual MFA devices: aws iam list-virtual-mfa-devices If the output contains one MFA with the following Serial Number, it means the MFA is virtual, not hardware, and the account is not compliant with this recommendation: "SerialNumber": "arn:aws:iam::__:mfa/root-account-mfa-device" Remediation: Perform the following to establish a hardware MFA for the 'root' user account: 1. 
Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. Note: to manage MFA devices for the AWS 'root' user account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials. 2. Choose Dashboard , and under Security Status , expand Activate MFA on your root account. 3. Choose Activate MFA 4. In the wizard, choose A hardware MFA device and then choose Next Step. 5. In the Serial Number box, enter the serial number that is found on the back of the MFA device. 6. In the Authentication Code 1 box, enter the six-digit number displayed by the MFA device. You might need to press the button on the front of the device to display the number. 7. Wait 30 seconds while the device refreshes the code, and then enter the next six-digit number into the Authentication Code 2 box. You might need to press the button on the front of the device again to display the second number. 8. Choose Next Step. The MFA device is now associated with the AWS account. The next time you use your AWS account credentials to sign in, you must type a code from the hardware MFA device. Remediation for this recommendation is not available through AWS CLI. 1.6]
    System hardening through configuration management Preventive
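The two root-MFA audits above combine into a three-way classification: MFA off, virtual MFA, or (by elimination) hardware MFA. An illustrative sketch — the function name and sample account ID are assumptions; the inputs mirror `aws iam get-account-summary` and `aws iam list-virtual-mfa-devices` output:

```python
import re

def root_mfa_status(account_summary: dict, virtual_mfa_devices: list) -> str:
    """Classify root MFA as 'none', 'virtual', or 'hardware'.
    A virtual root MFA is identified by its serial number ending in
    mfa/root-account-mfa-device, per the audit above."""
    if account_summary.get("SummaryMap", {}).get("AccountMFAEnabled") != 1:
        return "none"
    for device in virtual_mfa_devices:
        if re.search(r":mfa/root-account-mfa-device$", device.get("SerialNumber", "")):
            return "virtual"
    return "hardware"

summary = {"SummaryMap": {"AccountMFAEnabled": 1}}
devices = [{"SerialNumber": "arn:aws:iam::111122223333:mfa/root-account-mfa-device"}]
print(root_mfa_status(summary, devices))
```

"virtual" satisfies the Level 1 recommendation (1.5) but not the Level 2 hardware recommendation (1.6).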
Common Controls and mandates by Classification
54 Mandated Controls - bold
15 Implied Controls - italic
135 Implementation Controls - regular

There are three types of Common Control classifications: corrective, detective, and preventive. Common Controls at the top level have the default assignment of Impact Zone.

Number of Controls
204 Total
  • Corrective
    2
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE TYPE
    Change the authenticator for shared accounts when the group membership changes. CC ID 14249 System hardening through configuration management Business Processes
    Configure the look-up secret authenticator to dispose of memorized secrets after their use. CC ID 13817 System hardening through configuration management Configuration
  • Detective
    7
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE TYPE
    Approve the data classification scheme. CC ID 13858 Leadership and high level objectives Establish/Maintain Documentation
    Identify information system users. CC ID 12081 Technical security Technical Security
    Ensure the root account is the first entry in password files. CC ID 16323 System hardening through configuration management Data and Information Management
    Configure the log to capture hardware and software access attempts. CC ID 01220
    [Ensure AWS Management Console authentication failures are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for failed console authentication attempts. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring failed console logins may decrease lead time to detect an attempt to brute force a credential, which may provide an indicator, such as source IP address, that can be used in other event correlation. Impact: Monitoring for these failures may create a large number of alerts, more so in larger environments. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From value associated with CloudWatchLogsLogGroupArn note Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure Identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. 
Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }" 4. Note the value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn at least one subscription should have "SubscriptionArn" with valid aws ARN. Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on filter pattern provided which checks for AWS management Console Login Failures and the taken from audit step 1. aws logs put-metric-filter --log-group-name --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn --protocol --notification-endpoint Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. 
Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions 4.6]
    System hardening through configuration management Log Management
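The metric filter pattern verified in audit step 3 can be mirrored as a predicate over a CloudTrail event, which makes its matching behavior easy to reason about offline. The helper name and sample events below are illustrative assumptions:

```python
def is_console_login_failure(event: dict) -> bool:
    """Mirrors the metric filter pattern
    { ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }
    over a parsed CloudTrail event."""
    return (event.get("eventName") == "ConsoleLogin"
            and event.get("errorMessage") == "Failed authentication")

# Hypothetical CloudTrail events, trimmed to the fields the filter inspects.
failed = {"eventName": "ConsoleLogin", "errorMessage": "Failed authentication"}
succeeded = {"eventName": "ConsoleLogin",
             "responseElements": {"ConsoleLogin": "Success"}}
print(is_console_login_failure(failed), is_console_login_failure(succeeded))
```

In production the matching is done by CloudWatch Logs itself; each match increments the metric that the alarm watches.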
    Configure the log to capture access to restricted data or restricted information. CC ID 00644
    [Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket (Automated) Description: S3 Bucket Access Logging generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed. It is recommended that bucket access logging be enabled on the CloudTrail S3 bucket. Rationale: By enabling S3 bucket logging on target S3 buckets, it is possible to capture all events which may affect objects within any target buckets. Configuring logs to be placed in a separate bucket allows access to log information which can be useful in security and incident response workflows. Audit: Perform the following to ensure the CloudTrail S3 bucket has access logging enabled: From Console: 1. Go to the Amazon CloudTrail console at https://console.aws.amazon.com/cloudtrail/home 2. In the API activity history pane on the left, click Trails 3. In the Trails pane, note the bucket names in the S3 bucket column 4. Sign in to the AWS Management Console and open the S3 console at https://console.aws.amazon.com/s3. 5. Under All Buckets click on a target S3 bucket 6. Click on Properties in the top right of the console 7. Under Bucket: _ _ click on Logging 8. Ensure Enabled is checked. From Command Line: 1. Get the name of the S3 bucket that CloudTrail is logging to: aws cloudtrail describe-trails --query 'trailList[*].S3BucketName' 2. Ensure Bucket Logging is enabled: aws s3api get-bucket-logging --bucket Ensure command does not return empty output. Sample Output for a bucket with logging enabled: { "LoggingEnabled": { "TargetPrefix": "", "TargetBucket": "" } } Remediation: Perform the following to enable S3 bucket logging: From Console: 1. Sign in to the AWS Management Console and open the S3 console at https://console.aws.amazon.com/s3. 2. Under All Buckets click on the target S3 bucket 3. 
Click on Properties in the top right of the console 4. Under Bucket: click on Logging 5. Configure bucket logging o Click on the Enabled checkbox o Select Target Bucket from list o Enter a Target Prefix 6. Click Save. From Command Line: 1. Get the name of the S3 bucket that CloudTrail is logging to: aws cloudtrail describe-trails --region --query trailList[*].S3BucketName 2. Copy and add target bucket name at , Prefix for logfile at and optionally add an email address in the following template and save it as : { "LoggingEnabled": { "TargetBucket": "", "TargetPrefix": "", "TargetGrants": [ { "Grantee": { "Type": "AmazonCustomerByEmail", "EmailAddress": "" }, "Permission": "FULL_CONTROL" } ] } } 3. Run the put-bucket-logging command with bucket name and as input: for more information refer to put-bucket-logging: aws s3api put-bucket-logging --bucket --bucket-logging-status file:// Default Value: Logging is disabled. 3.4]
    System hardening through configuration management Log Management
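The command-line audit above reduces to checking whether `get-bucket-logging` returns a `LoggingEnabled` block (an empty response means logging is off, the default). A minimal sketch with assumed sample values:

```python
def bucket_logging_enabled(get_bucket_logging_output: dict) -> bool:
    """True when `aws s3api get-bucket-logging` output names a logging
    target; an empty dict corresponds to the command's empty output
    when logging is disabled."""
    return "LoggingEnabled" in get_bucket_logging_output

# Hypothetical outputs for a logging-enabled bucket and a default bucket.
enabled = {"LoggingEnabled": {"TargetBucket": "my-log-bucket",
                              "TargetPrefix": "cloudtrail-access/"}}
disabled = {}
print(bucket_logging_enabled(enabled), bucket_logging_enabled(disabled))
```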
    Configure the log to capture actions taken by individuals with root privileges or administrative privileges and add logging option to the root file system. CC ID 00645
    [{root user} Ensure usage of 'root' account is monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for 'root' login attempts to detect the unauthorized use, or attempts to use the root account. Rationale: Monitoring for 'root' account logins will provide visibility into the use of a fully privileged account and an opportunity to reduce the use of it. Cloud Watch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From value associated with CloudWatchLogsLogGroupArn note Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*, would be NewGroup • Ensure Identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name ensure IsLogging is set to TRUE • Ensure identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for this : aws logs describe-metric-filters --log-group-name "" 3. 
Ensure the output from the above command contains the following: "filterPattern": "{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }"
4. Note the metric name (<root_usage_metric>) associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<root_usage_metric>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<account_id>:<topic_name>:<subscription_id>".
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for 'Root' account usage, using the log group name taken from audit step 1: aws logs put-metric-filter --log-group-name <log_group_name> --filter-name <root_usage_metric_filter> --metric-transformations metricName=<root_usage_metric>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>
Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>
Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. 
Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <root_usage_alarm> --metric-name <root_usage_metric> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.3]
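The root-usage filter pattern quoted above can be sanity-checked offline before deployment. The sketch below is illustrative only (the function name and the trimmed-down sample event are hypothetical, not part of the benchmark); it applies the same three conditions the CloudWatch Logs filter pattern expresses to a parsed CloudTrail record:

```python
import json

def matches_root_usage_filter(event: dict) -> bool:
    """Mimic the CIS 4.3 filter pattern:
    { $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS
      && $.eventType != "AwsServiceEvent" }"""
    identity = event.get("userIdentity", {})
    return (
        identity.get("type") == "Root"
        and "invokedBy" not in identity            # NOT EXISTS clause
        and event.get("eventType") != "AwsServiceEvent"
    )

# Trimmed-down CloudTrail record with hypothetical values, for illustration:
sample = json.loads("""{
  "userIdentity": {"type": "Root", "arn": "arn:aws:iam::123456789012:root"},
  "eventType": "AwsConsoleSignIn",
  "eventName": "ConsoleLogin"
}""")
print(matches_root_usage_filter(sample))  # True: direct root usage
```

A record matches only when the caller identity type is Root, no invokedBy field is present (the call was not made on the account's behalf by an AWS service), and the event is not an AwsServiceEvent.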
    System hardening through configuration management Log Management
    Configure the log to capture changes to User privileges, audit policies, and trust policies by enabling audit policy changes. CC ID 01698
    [Ensure S3 bucket policy changes are monitored (Manual)
    Description: Real-time monitoring of API calls can be achieved by directing CloudTrail logs to CloudWatch Logs, or to an external Security Information and Event Management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for changes to S3 bucket policies.
    Rationale: CloudWatch is an AWS-native service that allows you to observe and monitor resources and applications. CloudTrail logs can also be sent to an external SIEM environment for monitoring and alerting. Monitoring changes to S3 bucket policies may reduce the time to detect and correct permissive policies on sensitive S3 buckets.
    Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
    1. Identify the log group name configured for use with the active multi-region CloudTrail:
    • List all CloudTrails: aws cloudtrail describe-trails
    • Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
    • From the value associated with CloudWatchLogsLogGroupArn, note the log group name. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<account_id>:log-group:NewGroup:*, the log group name would be NewGroup
    • Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name>, and ensure IsLogging is set to TRUE
    • Ensure the identified multi-region CloudTrail captures all management events: aws cloudtrail get-event-selectors --trail-name <trail_name>, and ensure there is at least one event selector for the trail with IncludeManagementEvents set to true and ReadWriteType set to All
    2. Get a list of all metric filters associated with this log group: aws logs describe-metric-filters --log-group-name "<log_group_name>"
    3. 
Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }"
4. Note the metric name (<s3_bucket_policy_changes_metric>) associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<s3_bucket_policy_changes_metric>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<account_id>:<topic_name>:<subscription_id>".
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for S3 bucket policy changes, using the log group name taken from audit step 1: aws logs put-metric-filter --log-group-name <log_group_name> --filter-name <s3_bucket_policy_changes_metric_filter> --metric-transformations metricName=<s3_bucket_policy_changes_metric>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. 
Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>
Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>
Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <s3_bucket_policy_changes_alarm> --metric-name <s3_bucket_policy_changes_metric> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.8]
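The S3 filter pattern above combines an eventSource check with a disjunction over nine event names. A minimal Python sketch of the same predicate (function name and sample events are hypothetical, not part of the benchmark):

```python
# Event names the CIS 4.8 filter pattern watches for, taken from the
# filter pattern quoted in the audit step above.
S3_POLICY_EVENTS = {
    "PutBucketAcl", "PutBucketPolicy", "PutBucketCors", "PutBucketLifecycle",
    "PutBucketReplication", "DeleteBucketPolicy", "DeleteBucketCors",
    "DeleteBucketLifecycle", "DeleteBucketReplication",
}

def matches_s3_policy_filter(event: dict) -> bool:
    """Mirror the pattern: eventSource must be s3.amazonaws.com AND the
    eventName must be one of the policy-affecting operations."""
    return (event.get("eventSource") == "s3.amazonaws.com"
            and event.get("eventName") in S3_POLICY_EVENTS)

print(matches_s3_policy_filter(
    {"eventSource": "s3.amazonaws.com", "eventName": "PutBucketPolicy"}))  # True
```

Read-only operations such as GetBucketPolicy do not match, which keeps the alarm focused on changes rather than inspections.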
    System hardening through configuration management Log Management
  • IT Impact Zone
    2
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE TYPE
    Technical security CC ID 00508 Technical security IT Impact Zone
    System hardening through configuration management CC ID 00860 System hardening through configuration management IT Impact Zone
  • Preventive
    193
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE TYPE
    Establish, implement, and maintain a data classification scheme. CC ID 11628
    [Ensure all data in Amazon S3 has been discovered, classified and secured when required. (Manual)
    Description: Amazon S3 buckets can contain sensitive data that, for security purposes, should be discovered, monitored, classified and protected. Macie, along with other third-party tools, can automatically provide an inventory of Amazon S3 buckets.
    Rationale: Using a cloud service or third-party software to continuously monitor and automate the process of data discovery and classification for S3 buckets, using machine learning and pattern matching, is a strong defense in protecting that information. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.
    Impact: There is a cost associated with using Amazon Macie. There is also typically a cost associated with third-party tools that perform similar processes and protection.
    Audit: Perform the following steps to determine if Macie is running:
    From Console:
    1. Log in to the Macie console at https://console.aws.amazon.com/macie/
    2. In the left-hand pane, click By job under Findings.
    3. Confirm that you have a job set up for your S3 buckets.
    If, when you log in to the Macie console, you are not taken to the summary page and you do not have a job set up and running, refer to the remediation procedure below. If you are using a third-party tool to manage and protect your S3 data, you meet this recommendation.
    Remediation: Perform the steps below to enable and configure Amazon Macie.
    From Console:
    1. Log on to the Macie console at https://console.aws.amazon.com/macie/
    2. Click Get started.
    3. Click Enable Macie.
    Set up a repository for sensitive data discovery results:
    1. In the left pane, under Settings, click Discovery results.
    2. Make sure Create bucket is selected.
    3. Create a bucket; enter a name for the bucket. The name must be unique across all S3 buckets.
In addition, the name must start with a lowercase letter or a number.
4. Click on Advanced.
5. For Block all public access, make sure Yes is selected.
6. For KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric customer master key (CMK) in the same Region as the S3 bucket.
7. Click on Save.
Create a job to discover sensitive data:
1. In the left pane, click S3 buckets. Macie displays a list of all the S3 buckets for your account.
2. Select the check box for each bucket that you want Macie to analyze as part of the job.
3. Click Create job.
4. Click Quick create.
5. For the Name and description step, enter a name and, optionally, a description of the job.
6. Then click Next.
7. For the Review and create step, click Submit.
Review your findings:
1. In the left pane, click Findings.
2. To view the details of a specific finding, choose any field other than the check box for the finding.
If you are using a third-party tool to manage and protect your S3 data, follow the vendor documentation for implementing and configuring that tool. 2.1.3]
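The bucket-naming constraint quoted above (the results bucket name must start with a lowercase letter or a number) can be expressed as a one-line check. This is only a sketch of that single stated rule; the function name is hypothetical, and general S3 bucket-naming rules also apply beyond what is checked here:

```python
import re

def starts_validly(name: str) -> bool:
    """Check only the rule quoted in the remediation text: the discovery
    results bucket name must start with a lowercase letter or a number."""
    return re.match(r"[a-z0-9]", name) is not None

print(starts_validly("macie-results-2024"))  # True
print(starts_validly("Macie-Results"))       # False: starts with uppercase
```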
    Leadership and high level objectives Establish/Maintain Documentation
    Take into account the characteristics of the geographical, behavioral and functional setting for all datasets. CC ID 15046 Leadership and high level objectives Data and Information Management
    Disseminate and communicate the data classification scheme to interested personnel and affected parties. CC ID 16804 Leadership and high level objectives Communicate
    Identify roles, tasks, information, systems, and assets that fall under the organization's mandated Authority Documents. CC ID 00688
    [Ensure all data in Amazon S3 has been discovered, classified and secured when required. (Manual)
    Description: Amazon S3 buckets can contain sensitive data that, for security purposes, should be discovered, monitored, classified and protected. Macie, along with other third-party tools, can automatically provide an inventory of Amazon S3 buckets.
    Rationale: Using a cloud service or third-party software to continuously monitor and automate the process of data discovery and classification for S3 buckets, using machine learning and pattern matching, is a strong defense in protecting that information. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.
    Impact: There is a cost associated with using Amazon Macie. There is also typically a cost associated with third-party tools that perform similar processes and protection.
    Audit: Perform the following steps to determine if Macie is running:
    From Console:
    1. Log in to the Macie console at https://console.aws.amazon.com/macie/
    2. In the left-hand pane, click By job under Findings.
    3. Confirm that you have a job set up for your S3 buckets.
    If, when you log in to the Macie console, you are not taken to the summary page and you do not have a job set up and running, refer to the remediation procedure below. If you are using a third-party tool to manage and protect your S3 data, you meet this recommendation.
    Remediation: Perform the steps below to enable and configure Amazon Macie.
    From Console:
    1. Log on to the Macie console at https://console.aws.amazon.com/macie/
    2. Click Get started.
    3. Click Enable Macie.
    Set up a repository for sensitive data discovery results:
    1. In the left pane, under Settings, click Discovery results.
    2. Make sure Create bucket is selected.
    3. Create a bucket; enter a name for the bucket. The name must be unique across all S3 buckets.
In addition, the name must start with a lowercase letter or a number.
4. Click on Advanced.
5. For Block all public access, make sure Yes is selected.
6. For KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric customer master key (CMK) in the same Region as the S3 bucket.
7. Click on Save.
Create a job to discover sensitive data:
1. In the left pane, click S3 buckets. Macie displays a list of all the S3 buckets for your account.
2. Select the check box for each bucket that you want Macie to analyze as part of the job.
3. Click Create job.
4. Click Quick create.
5. For the Name and description step, enter a name and, optionally, a description of the job.
6. Then click Next.
7. For the Review and create step, click Submit.
Review your findings:
1. In the left pane, click Findings.
2. To view the details of a specific finding, choose any field other than the check box for the finding.
If you are using a third-party tool to manage and protect your S3 data, follow the vendor documentation for implementing and configuring that tool. 2.1.3]
    Leadership and high level objectives Business Processes
    Enable and configure logging on network access controls in accordance with organizational standards. CC ID 01963
    [Ensure Network Access Control Lists (NACL) changes are monitored (Manual)
    Description: Real-time monitoring of API calls can be achieved by directing CloudTrail logs to CloudWatch Logs, or to an external Security Information and Event Management (SIEM) environment, and establishing corresponding metric filters and alarms. NACLs are used as a stateless packet filter to control ingress and egress traffic for subnets within a VPC. It is recommended that a metric filter and alarm be established for changes made to NACLs.
    Rationale: CloudWatch is an AWS-native service that allows you to observe and monitor resources and applications. CloudTrail logs can also be sent to an external SIEM environment for monitoring and alerting. Monitoring changes to NACLs will help ensure that AWS resources and services are not unintentionally exposed.
    Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
    1. Identify the log group name configured for use with the active multi-region CloudTrail:
    • List all CloudTrails: aws cloudtrail describe-trails
    • Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
    • From the value associated with CloudWatchLogsLogGroupArn, note the log group name. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<account_id>:log-group:NewGroup:*, the log group name would be NewGroup
    • Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name>, and ensure IsLogging is set to TRUE
    • Ensure the identified multi-region CloudTrail captures all management events: aws cloudtrail get-event-selectors --trail-name <trail_name>, and ensure there is at least one event selector for the trail with IncludeManagementEvents set to true and ReadWriteType set to All
    2. Get a list of all metric filters associated with this log group: aws logs describe-metric-filters --log-group-name "<log_group_name>"
    3. 
Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }"
4. Note the metric name (<nacl_changes_metric>) associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<nacl_changes_metric>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<account_id>:<topic_name>:<subscription_id>".
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, which checks for NACL changes, using the log group name taken from audit step 1: aws logs put-metric-filter --log-group-name <log_group_name> --filter-name <nacl_changes_metric_filter> --metric-transformations metricName=<nacl_changes_metric>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>
Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. 
Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>
Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <nacl_changes_alarm> --metric-name <nacl_changes_metric> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.11]
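The NACL filter pattern above is a pure disjunction over six event names, with no eventSource clause. A minimal sketch of the same membership test (function name and sample events are hypothetical, not part of the benchmark):

```python
# Event names from the CIS 4.11 filter pattern quoted in the audit step above.
NACL_EVENTS = {
    "CreateNetworkAcl", "CreateNetworkAclEntry", "DeleteNetworkAcl",
    "DeleteNetworkAclEntry", "ReplaceNetworkAclEntry",
    "ReplaceNetworkAclAssociation",
}

def matches_nacl_filter(event: dict) -> bool:
    """A record matches whenever its eventName is one of the six
    NACL-modifying operations, regardless of the calling service."""
    return event.get("eventName") in NACL_EVENTS

print(matches_nacl_filter({"eventName": "ReplaceNetworkAclEntry"}))  # True
```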
    Monitoring and measurement Configuration
    Establish, implement, and maintain an access control program. CC ID 11702 Technical security Establish/Maintain Documentation
    Establish, implement, and maintain an access rights management plan. CC ID 00513 Technical security Establish/Maintain Documentation
    Establish and maintain contact information for user accounts, as necessary. CC ID 15418
    [Maintain current contact details (Manual)
    Description: Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization. An AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of the Acceptable Use Policy, or indicative of a likely security compromise, is observed by the AWS Abuse team. Contact details should not be for a single individual, as circumstances may arise where that individual is unavailable. Email contact details should point to a mail alias which forwards email to multiple individuals within the organization; where feasible, phone contact details should point to a PABX hunt group or other call-forwarding system.
    Rationale: If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation, proactive measures may be taken, including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question, so it is in both the customers' and AWS' best interests that prompt contact can be established. This is best achieved by setting AWS account contact details to point to resources which have multiple individuals as recipients, such as email aliases and PABX hunt groups.
    Audit: This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:*Billing).
    1. Sign in to the AWS Management Console and open the Billing and Cost Management console at https://console.aws.amazon.com/billing/home#/.
    2. On the navigation bar, choose your account name, and then choose Account.
    3. On the Account Settings page, review and verify the current details.
    4. 
Under Contact Information, review and verify the current details.
Remediation: This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:*Billing).
1. Sign in to the AWS Management Console and open the Billing and Cost Management console at https://console.aws.amazon.com/billing/home#/.
2. On the navigation bar, choose your account name, and then choose Account.
3. On the Account Settings page, next to Account Settings, choose Edit.
4. Next to the field that you need to update, choose Edit.
5. After you have entered your changes, choose Save changes.
6. After you have made your changes, choose Done.
7. To edit your contact information, under Contact Information, choose Edit.
8. For the fields that you want to change, type your updated information, and then choose Update. 1.1
    Ensure security contact information is registered (Manual)
    Description: AWS provides customers with the option of specifying the contact information for the account's security team. It is recommended that this information be provided.
    Rationale: Specifying security-specific contact information will help ensure that security advisories sent by AWS reach the team in your organization that is best equipped to respond to them.
    Audit: Perform the following to determine if security contact information is present:
    From Console:
    1. Click on your account name at the top right corner of the console.
    2. From the drop-down menu, click My Account.
    3. Scroll down to the Alternate Contacts section.
    4. Ensure contact information is specified in the Security section.
    From Command Line:
    1. Run the following command: aws account get-alternate-contact --alternate-contact-type SECURITY
    2. Ensure proper contact information is specified for the Security contact.
    Remediation: Perform the following to establish security contact information:
    From Console:
    1. Click on your account name at the top right corner of the console.
    2. From the drop-down menu, click My Account.
    3. Scroll down to the Alternate Contacts section.
    4. Enter contact information in the Security section.
    From Command Line:
    Run the following command with the following input parameters: --email-address, --name, and --phone-number. aws account put-alternate-contact --alternate-contact-type SECURITY 1.2]
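The command-line audit above returns a JSON document. A minimal sketch of checking that response for a fully populated security contact; the JSON below is a hypothetical example shaped like the AWS Account Management API's AlternateContact structure, not real account data:

```python
import json

# Hypothetical output shaped like `aws account get-alternate-contact
# --alternate-contact-type SECURITY` (field names from the Account
# Management API's AlternateContact structure).
raw = """{
  "AlternateContact": {
    "AlternateContactType": "SECURITY",
    "EmailAddress": "security@example.com",
    "Name": "Security Team",
    "PhoneNumber": "+1-555-0100"
  }
}"""

contact = json.loads(raw).get("AlternateContact", {})
required = ("EmailAddress", "Name", "PhoneNumber")
# Registered only when every required contact field is non-empty.
registered = all(contact.get(k) for k in required)
print(registered)  # True when all contact fields are populated
```

If the command instead fails with a not-found error, no security contact is registered and the remediation step applies.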
    Technical security Data and Information Management
    Control access rights to organizational assets. CC ID 00004 Technical security Technical Security
    Configure access control lists in accordance with organizational standards. CC ID 16465
    [Ensure that public access is not given to RDS Instance (Automated)
    Description: Ensure and verify that RDS database instances provisioned in your AWS account restrict unauthorized access in order to minimize security risks. To restrict access to any publicly accessible RDS database instance, you must disable the database's Publicly Accessible flag and update the VPC security group associated with the instance.
    Rationale: Ensure that no public-facing RDS database instances are provisioned in your AWS account, and restrict unauthorized access in order to minimize security risks. When an RDS instance allows unrestricted access (0.0.0.0/0), everyone and everything on the Internet can establish a connection to your database, and this can increase the opportunity for malicious activities such as brute force attacks, PostgreSQL injections, or DoS/DDoS attacks.
    Audit: From Console:
    1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.
    2. In the navigation panel, on the RDS Dashboard, click Databases.
    3. Select the RDS instance that you want to examine.
    4. Click the instance name from the dashboard, under Connectivity and Security.
    5. In the Security section, check whether the Publicly Accessible flag is set to Yes. If it is, follow the steps below to check database subnet access:
    • In the networking section, click the subnet link available under Subnets.
    • The link will redirect you to the VPC Subnets page.
    • Select the subnet listed on the page and click the Route Table tab from the dashboard bottom panel. If the route table contains any entry with the destination CIDR block set to 0.0.0.0/0 and an Internet Gateway attached, the selected RDS database instance was provisioned inside a public subnet, and is therefore not running within a logically isolated environment and can be accessible from the Internet.
    6. Repeat steps no.
4 and 5 to determine the type (public or private) and subnet for other RDS database instances provisioned in the current region.
7. Change the AWS region from the navigation bar and repeat the audit process for other regions.
From Command Line:
1. Run the describe-db-instances command to list all RDS database names available in the selected AWS region: aws rds describe-db-instances --region <region-name> --query 'DBInstances[*].DBInstanceIdentifier'
2. The command output should return each database instance identifier.
3. Run the describe-db-instances command again, using the PubliclyAccessible parameter as a query filter, to reveal the database instance's Publicly Accessible flag status: aws rds describe-db-instances --region <region-name> --db-instance-identifier <db-instance-name> --query 'DBInstances[*].PubliclyAccessible'
4. Check the Publicly Accessible parameter status. If the Publicly Accessible flag is set to Yes, the selected RDS database instance is publicly accessible and insecure; follow the steps below to check database subnet access.
5. Run the describe-db-instances command again, using the RDS database instance identifier that you want to check and appropriate filtering, to describe the VPC subnet(s) associated with the selected instance: aws rds describe-db-instances --region <region-name> --db-instance-identifier <db-instance-name> --query 'DBInstances[*].DBSubnetGroup.Subnets[]'
• The command output should list the subnets available in the selected database subnet group.
6. Run the describe-route-tables command, using the ID of the subnet returned at the previous step, to describe the routes of the VPC route table associated with the selected subnet: aws ec2 describe-route-tables --region <region-name> --filters "Name=association.subnet-id,Values=<subnet-id>" --query 'RouteTables[*].Routes[]'
• If the command returns the route table associated with the database instance subnet ID, check the GatewayId and DestinationCidrBlock attribute values returned in the output.
If the route table contains any entry with the GatewayId value set to igw-xxxxxxxx and the DestinationCidrBlock value set to 0.0.0.0/0, the selected RDS database instance was provisioned inside a public subnet.
• Or
• If the command returns empty results, the route table is implicitly associated with the subnet, and the audit process continues with the next step.
7. Run the describe-db-instances command again, using the RDS database instance identifier that you want to check and appropriate filtering, to describe the VPC ID associated with the selected instance: aws rds describe-db-instances --region <region-name> --db-instance-identifier <db-instance-name> --query 'DBInstances[*].DBSubnetGroup.VpcId'
• The command output should show the VPC ID in the selected database subnet group.
8. Now run the describe-route-tables command, using the ID of the VPC returned at the previous step, to describe the routes of the VPC main route table implicitly associated with the selected subnet: aws ec2 describe-route-tables --region <region-name> --filters "Name=vpc-id,Values=<vpc-id>" "Name=association.main,Values=true" --query 'RouteTables[*].Routes[]'
• The command output returns the VPC main route table implicitly associated with the database instance subnet ID. Check the GatewayId and DestinationCidrBlock attribute values returned in the output. If the route table contains any entry with the GatewayId value set to igw-xxxxxxxx and the DestinationCidrBlock value set to 0.0.0.0/0, the selected RDS database instance was provisioned inside a public subnet, and therefore is not running within a logically isolated environment and does not adhere to AWS security best practices.
Remediation: From Console:
1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.
2. In the navigation panel, on the RDS Dashboard, click Databases.
3. Select the RDS instance that you want to update.
4. Click Modify from the dashboard top menu.
5. 
On the Modify DB Instance panel, under the Connectivity section, click on Additional connectivity configuration and update the value for Publicly Accessible to Not publicly accessible to restrict public access. Follow the steps below to update subnet configurations:
• Select the Connectivity and security tab, and click on the VPC attribute value inside the Networking section.
• Select the Details tab from the VPC dashboard bottom panel and click on the Route table configuration attribute value.
• On the Route table details page, select the Routes tab from the dashboard bottom panel and click on Edit routes.
• On the Edit routes page, update the destination of the target which is set to igw-xxxxx, and click on Save routes.
6. On the Modify DB Instance panel, click on Continue, and in the Scheduling of modifications section, perform one of the following actions based on your requirements:
• Select Apply during the next scheduled maintenance window to apply the changes automatically during the next scheduled maintenance window.
• Select Apply immediately to apply the changes right away. With this option, any pending modifications will be asynchronously applied as soon as possible, regardless of the maintenance window setting for this RDS database instance. Note that any changes available in the pending modifications queue are also applied. If any of the pending modifications require downtime, choosing this option can cause unexpected downtime for the application.
7. Repeat steps 3 to 6 for each RDS instance available in the current region.
8. Change the AWS region from the navigation bar to repeat the process for other regions. 2.3.3]
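The subnet decision at the heart of the audit above reduces to one test on route-table entries: a subnet is effectively public when any route pairs a 0.0.0.0/0 destination with an Internet Gateway. A minimal sketch (function name and sample route data are hypothetical; the route dictionaries are shaped like the Routes[] output of aws ec2 describe-route-tables):

```python
def subnet_is_public(routes) -> bool:
    """Per the audit text: any route with DestinationCidrBlock 0.0.0.0/0
    whose GatewayId is an internet gateway (igw-...) makes the subnet public."""
    return any(
        r.get("DestinationCidrBlock") == "0.0.0.0/0"
        and str(r.get("GatewayId", "")).startswith("igw-")
        for r in routes
    )

# Hypothetical route entries shaped like `describe-route-tables` output:
routes = [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc123"},
]
print(subnet_is_public(routes))  # True: default route points at an IGW
```

A route through a NAT gateway or only local routes would not trigger the check, matching the audit's distinction between public and private subnets.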
    Technical security Configuration
    Define roles for information systems. CC ID 12454
    [Ensure a support role has been created to manage incidents with AWS Support (Automated)
    Description: AWS provides a support center that can be used for incident notification and response, as well as technical support and customer services. Create an IAM Role, with the appropriate policy assigned, to allow authorized users to manage incidents with AWS Support.
    Rationale: By implementing least privilege for access control, an IAM Role will require an appropriate IAM Policy to allow Support Center access in order to manage incidents with AWS Support.
    Audit: From Command Line:
    1. List IAM policies, filter for the 'AWSSupportAccess' managed policy, and note the "Arn" element value: aws iam list-policies --query "Policies[?PolicyName == 'AWSSupportAccess']"
    2. Check if the 'AWSSupportAccess' policy is attached to any role: aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess
    3. In the output, ensure PolicyRoles does not return empty. Example of an empty result: 'PolicyRoles: [ ]'. If it returns empty, refer to the remediation below.
    Remediation: From Command Line:
    1. Create an IAM role for managing incidents with AWS:
    • Create a trust relationship policy document that allows managing AWS incidents, and save it locally as /tmp/TrustPolicy.json: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "<iam_user_or_account_arn>" }, "Action": "sts:AssumeRole" } ] }
    2. Create the IAM role using the above trust policy: aws iam create-role --role-name <aws_support_iam_role> --assume-role-policy-document file:///tmp/TrustPolicy.json
    3. Attach the 'AWSSupportAccess' managed policy to the created IAM role: aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess --role-name <aws_support_iam_role> 1.17]
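Audit step 3 above amounts to checking that PolicyRoles in the list-entities-for-policy response is non-empty. A minimal sketch of that check; the JSON below is a hypothetical example shaped like the command's output, with a made-up role name:

```python
import json

# Hypothetical output shaped like `aws iam list-entities-for-policy
# --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess`.
raw = """{
  "PolicyGroups": [],
  "PolicyUsers": [],
  "PolicyRoles": [{"RoleName": "aws-support-role", "RoleId": "AROAEXAMPLE"}]
}"""

roles = json.loads(raw).get("PolicyRoles", [])
# Non-empty PolicyRoles means at least one role can manage AWS Support incidents.
print(bool(roles))  # True: the control passes; empty would mean remediate
```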
    Technical security Human Resources Management
    Enable role-based access control for objects and users on information systems. CC ID 12458
    [Ensure IAM instance roles are used for AWS resource access from instances (Automated) Description: AWS access from within AWS instances can be done by either encoding AWS keys into AWS API calls or by assigning the instance to a role which has an appropriate permissions policy for the required access. "AWS Access" means accessing the APIs of AWS in order to access AWS resources or manage AWS account resources. Rationale: AWS IAM roles reduce the risks associated with sharing and rotating credentials that can be used outside of AWS itself. If credentials are compromised, they can be used from outside of the AWS account they give access to. In contrast, in order to leverage role permissions an attacker would need to gain and maintain access to a specific instance to use the privileges associated with it. Additionally, if credentials are encoded into compiled applications or other hard to change mechanisms, then they are even more unlikely to be properly rotated due to service disruption risks. As time goes on, credentials that cannot be rotated are more likely to be known by an increasing number of individuals who no longer work for the organization owning the credentials. Audit: From Console: 1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at https://console.aws.amazon.com/ec2/. 2. In the left navigation panel, choose Instances. 3. Select the EC2 instance you want to examine. 4. Select Actions. 5. Select View details. 6. Select Security in the lower panel. • If the value for Instance profile arn is an instance profile ARN, then an instance profile (that contains an IAM role) is attached. • If the value for IAM Role is blank, no role is attached. • If the value for IAM Role contains a role name, an IAM role is attached. • If the value for IAM Role is "No roles attached to instance profile: ", then an instance profile is attached to the instance, but it does not contain an IAM role. 7. Repeat steps 3 to 6 for each EC2 instance in your AWS account. From Command Line: 1. 
Run the describe-instances command to list all EC2 instance IDs, available in the selected AWS region. The command output will return each instance ID: aws ec2 describe-instances --region <region> --query 'Reservations[*].Instances[*].InstanceId' 2. Run the describe-instances command again for each EC2 instance using the IamInstanceProfile identifier in the query filter to check if an IAM role is attached: aws ec2 describe-instances --region <region> --instance-id <instance_id> --query 'Reservations[*].Instances[*].IamInstanceProfile' 3. If an IAM role is attached, the command output will show the IAM instance profile ARN and ID. 4. Repeat steps 1 to 3 for each EC2 instance in your AWS account. Remediation: From Console: 1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at https://console.aws.amazon.com/ec2/. 2. In the left navigation panel, choose Instances. 3. Select the EC2 instance you want to modify. 4. Click Actions. 5. Click Security. 6. Click Modify IAM role. 7. Click Create new IAM role if a new IAM role is required. 8. Select the IAM role you want to attach to your instance in the IAM role dropdown. 9. Click Update IAM role. 10. Repeat steps 3 to 9 for each EC2 instance in your AWS account that requires an IAM role to be attached. From Command Line: 1. Run the describe-instances command to list all EC2 instance IDs, available in the selected AWS region: aws ec2 describe-instances --region <region> --query 'Reservations[*].Instances[*].InstanceId' 2. Run the associate-iam-instance-profile command to attach an instance profile (which is attached to an IAM role) to the EC2 instance: aws ec2 associate-iam-instance-profile --region <region> --instance-id <instance_id> --iam-instance-profile Name="Instance-Profile-Name" 3. Run the describe-instances command again for the recently modified EC2 instance. The command output should return the instance profile ARN and ID: aws ec2 describe-instances --region <region> --instance-id <instance_id> --query 'Reservations[*].Instances[*].IamInstanceProfile' 4. 
Repeat steps 1 to 3 for each EC2 instance in your AWS account that requires an IAM role to be attached. 1.18]
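Audit steps 1 and 2 above can be combined into a single pass by asking for the instance ID and profile ARN together. A sketch, assuming a configured AWS CLI — the function name is hypothetical and <region> is a placeholder; when IamInstanceProfile is absent, text output renders it as the literal string None, which the filter keys on:

```shell
# Emit the ID of every EC2 instance in the region that has no instance
# profile (and therefore no IAM role) attached.
instances_without_role() {
  aws ec2 describe-instances --region "$1" \
    --query 'Reservations[].Instances[].[InstanceId,IamInstanceProfile.Arn]' \
    --output text |
  awk '$2 == "None" {print $1}'   # null ARN prints as "None" in text output
}
```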
    Technical security Technical Security
    Control user privileges. CC ID 11665
    [Ensure IAM Users Receive Permissions Only Through Groups (Automated) Description: IAM users are granted access to services, functions, and data through IAM policies. There are four ways to define policies for a user: 1) Edit the user policy directly, aka an inline, or user, policy; 2) attach a policy directly to a user; 3) add the user to an IAM group that has an attached policy; 4) add the user to an IAM group that has an inline policy. Only the third implementation is recommended. Rationale: Assigning IAM policy only through groups unifies permissions management to a single, flexible layer consistent with organizational functional roles. By unifying permissions management, the likelihood of excessive permissions is reduced. Audit: Perform the following to determine if an inline policy is set or a policy is directly attached to users: 1. Run the following to get a list of IAM users: aws iam list-users --query 'Users[*].UserName' --output text 2. For each user returned, run the following commands to determine if any policies are attached to them: aws iam list-attached-user-policies --user-name <user_name> aws iam list-user-policies --user-name <user_name> 3. If any policies are returned, the user has an inline policy or direct policy attachment. Remediation: Perform the following to create an IAM group and assign a policy to it: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, click Groups and then click Create New Group. 3. In the Group Name box, type the name of the group and then click Next Step. 4. In the list of policies, select the check box for each policy that you want to apply to all members of the group. Then click Next Step. 5. Click Create Group. Perform the following to add a user to a given group: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, click Groups 3. Select the group to add a user to 4. 
Click Add Users To Group 5. Select the users to be added to the group 6. Click Add Users Perform the following to remove a direct association between a user and policy: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. 2. In the left navigation pane, click on Users 3. For each user: o Select the user o Click on the Permissions tab o Expand Permissions policies o Click X for each policy; then click Detach or Remove (depending on policy type) 1.15]
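The audit loop above (list users, then query each user's attached and inline policies) can be scripted directly. A sketch, assuming a configured AWS CLI; the function name and the NONCOMPLIANT output label are illustrative conventions, not benchmark text:

```shell
# Flag every IAM user that has a directly attached or inline policy,
# i.e., users receiving permissions outside of group membership.
users_with_direct_policies() {
  for user in $(aws iam list-users --query 'Users[*].UserName' --output text); do
    attached=$(aws iam list-attached-user-policies --user-name "$user" \
      --query 'AttachedPolicies[*].PolicyName' --output text)
    inline=$(aws iam list-user-policies --user-name "$user" \
      --query 'PolicyNames[*]' --output text)
    # Empty lists render as "" (or "None") in text output; anything else
    # means a policy is attached to the user directly.
    if { [ -n "$attached" ] && [ "$attached" != "None" ]; } ||
       { [ -n "$inline" ] && [ "$inline" != "None" ]; }; then
      echo "NONCOMPLIANT: $user"
    fi
  done
}
```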
    Technical security Technical Security
    Establish and maintain a list of individuals authorized to perform privileged functions. CC ID 17005 Technical security Establish/Maintain Documentation
    Enforce usage restrictions for superuser accounts. CC ID 07064
    [{administrative tasks} Eliminate use of the 'root' user for administrative and daily tasks (Manual) Description: With the creation of an AWS account, a 'root user' is created that cannot be disabled or deleted. That user has unrestricted access to and control over all resources in the AWS account. It is highly recommended that the use of this account be avoided for everyday tasks. Rationale: The 'root user' has unrestricted access to and control over all account resources. Use of it is inconsistent with the principles of least privilege and separation of duties, and can lead to unnecessary harm due to error or account compromise. Audit: From Console: 1. Login to the AWS Management Console at https://console.aws.amazon.com/iam/ 2. In the left pane, click Credential Report 3. Click on Download Report 4. Open or Save the file locally 5. Locate the <root_account> under the user column 6. Review password_last_used, access_key_1_last_used_date, access_key_2_last_used_date to determine when the 'root user' was last used. From Command Line: Run the following CLI commands to provide a credential report for determining the last time the 'root user' was used: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,5,11,16 | grep -B1 '<root_account>' Review password_last_used, access_key_1_last_used_date, access_key_2_last_used_date to determine when the root user was last used. Note: There are a few conditions under which the use of the 'root' user account is required. Please see the reference links for all of the tasks that require use of the 'root' user. Remediation: If you find that the 'root' user account is being used for daily activity to include administrative tasks that do not require the 'root' user: 1. Change the 'root' user password. 2. Deactivate or delete any access keys associated with the 'root' user. 
Remember, anyone who has 'root' user credentials for your AWS account has unrestricted access to and control of all the resources in your account, including billing information. 1.7]
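The CSV-parsing tail of the command-line audit above can be isolated into a reusable filter. This sketch operates on an already-decoded credential report fed on stdin; the function name is hypothetical, but the column positions (1, 5, 11, 16) and the literal <root_account> user name match the credential report format used in the audit step:

```shell
# Print the root user's row from a decoded IAM credential report:
# user, password_last_used, access_key_1_last_used_date,
# access_key_2_last_used_date. Root appears as "<root_account>" in the
# report's user column.
root_last_used() {
  cut -d, -f1,5,11,16 | grep '^<root_account>'
}
```

Usage: `aws iam get-credential-report --query 'Content' --output text | base64 -d | root_last_used`.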
    Technical security Technical Security
    Establish, implement, and maintain user accounts in accordance with the organizational Governance, Risk, and Compliance framework. CC ID 00526
    [Ensure IAM users are managed centrally via identity federation or AWS Organizations for multi-account environments (Manual) Description: In multi-account environments, IAM user centralization facilitates greater user control. User access beyond the initial account is then provided via role assumption. Centralization of users can be accomplished through federation with an external identity provider or through the use of AWS Organizations. Rationale: Centralizing IAM user management to a single identity store reduces complexity and thus the likelihood of access management errors. Audit: For multi-account AWS environments with an external identity provider: 1. Determine the master account for identity federation or IAM user management 2. Login to that account through the AWS Management Console 3. Click Services 4. Click IAM 5. Click Identity providers 6. Verify the configuration Then, determine all accounts that should not have local users present. For each account: 1. Determine all accounts that should not have local users present 2. Log into the AWS Management Console 3. Switch role into each identified account 4. Click Services 5. Click IAM 6. Click Users 7. Confirm that no IAM users representing individuals are present For multi-account AWS environments implementing AWS Organizations without an external identity provider: 1. Determine all accounts that should not have local users present 2. Log into the AWS Management Console 3. Switch role into each identified account 4. Click Services 5. Click IAM 6. Click Users 7. Confirm that no IAM users representing individuals are present Remediation: The remediation procedure will vary based on the individual organization's implementation of identity federation and/or AWS Organizations with the acceptance criteria that no non-service IAM users, and non-root accounts, are present outside the account providing centralized IAM user management. 1.21]
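Where CLI profiles exist for the member accounts, the "confirm that no IAM users representing individuals are present" step above can be approximated with a per-account count. A sketch under stated assumptions: profile names are whatever your organization configured, and a nonzero count still needs human review to distinguish service users from individuals:

```shell
# Report the number of local IAM users per CLI profile. Member accounts
# under centralized identity management should report 0 (excluding any
# approved service users).
local_user_counts() {
  for profile in "$@"; do
    n=$(aws iam list-users --profile "$profile" \
      --query 'length(Users)' --output text)
    echo "$profile $n"
  done
}
```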
    Technical security Technical Security
    Configure firewalls to deny all traffic by default, except explicitly designated traffic. CC ID 00547
    [{do not allow} Ensure no Network ACLs allow ingress from 0.0.0.0/0 to remote server administration ports (Automated) Description: The Network Access Control List (NACL) function provides stateless filtering of ingress and egress network traffic to AWS resources. It is recommended that no NACL allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389, using either the TCP (6), UDP (17) or ALL (-1) protocols. Rationale: Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise. Audit: From Console: Perform the following to determine if the account is configured as prescribed: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Network ACLs 3. For each network ACL, perform the following: o Select the network ACL o Click the Inbound Rules tab o Ensure no rule exists that has a port range that includes port 22, 3389, using the protocols TCP (6), UDP (17) or ALL (-1) or other remote server administration ports for your environment and has a Source of 0.0.0.0/0 and shows ALLOW Note: A Port value of ALL or a port range such as 0-1024 are inclusive of port 22, 3389, and other remote server administration ports Remediation: From Console: Perform the following: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Network ACLs 3. For each network ACL to remediate, perform the following: o Select the network ACL o Click the Inbound Rules tab o Click Edit inbound rules o Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click Delete to remove the offending inbound rule o Click Save 5.1
    {do not allow} Ensure no security groups allow ingress from 0.0.0.0/0 to remote server administration ports (Automated) Description: Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389, using either the TCP (6), UDP (17) or ALL (-1) protocols. Rationale: Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise. Impact: When updating an existing environment, ensure that administrators have access to remote server administration ports through another mechanism before removing access by deleting the 0.0.0.0/0 inbound rule. Audit: Perform the following to determine if the account is configured as prescribed: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Security Groups 3. For each security group, perform the following: 4. Select the security group 5. Click the Inbound Rules tab 6. Ensure no rule exists that has a port range that includes port 22, 3389, using the protocols TCP (6), UDP (17) or ALL (-1) or other remote server administration ports for your environment and has a Source of 0.0.0.0/0 Note: A Port value of ALL or a port range such as 0-1024 are inclusive of port 22, 3389, and other remote server administration ports. Remediation: Perform the following to implement the prescribed state: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Security Groups 3. For each security group, perform the following: 4. Select the security group 5. Click the Inbound Rules tab 6. Click the Edit inbound rules button 7. Identify the rules to be edited or removed 8. 
Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click Delete to remove the offending inbound rule 9. Click Save rules 5.2
    {do not allow} Ensure no security groups allow ingress from ::/0 to remote server administration ports (Automated) Description: Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port 22 and RDP to port 3389. Rationale: Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise. Impact: When updating an existing environment, ensure that administrators have access to remote server administration ports through another mechanism before removing access by deleting the ::/0 inbound rule. Audit: Perform the following to determine if the account is configured as prescribed: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Security Groups 3. For each security group, perform the following: 4. Select the security group 5. Click the Inbound Rules tab 6. Ensure no rule exists that has a port range that includes port 22, 3389, or other remote server administration ports for your environment and has a Source of ::/0 Note: A Port value of ALL or a port range such as 0-1024 are inclusive of port 22, 3389, and other remote server administration ports. Remediation: Perform the following to implement the prescribed state: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click Security Groups 3. For each security group, perform the following: 4. Select the security group 5. Click the Inbound Rules tab 6. Click the Edit inbound rules button 7. Identify the rules to be edited or removed 8. Either A) update the Source field to a range other than ::/0, or, B) Click Delete to remove the offending inbound rule 9. Click Save rules 5.3
    Ensure the default security group of every VPC restricts all traffic (Automated) Description: A VPC comes with a default security group whose initial settings deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances assigned to the security group. If you don't specify a security group when you launch an instance, the instance is automatically assigned to this default security group. Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It is recommended that the default security group restrict all traffic. The default VPC in every region should have its default security group updated to comply. Any newly created VPCs will automatically contain a default security group that will need remediation to comply with this recommendation. NOTE: When implementing this recommendation, VPC flow logging is invaluable in determining the least privilege port access required by systems to work properly because it can log all packet acceptances and rejections occurring under the current security groups. This dramatically reduces the primary barrier to least privilege engineering - discovering the minimum ports required by systems in the environment. Even if the VPC flow logging recommendation in this benchmark is not adopted as a permanent security measure, it should be used during any period of discovery and engineering for least privileged security groups. Rationale: Configuring all VPC default security groups to restrict all traffic will encourage least privilege security group development and mindful placement of AWS resources into security groups which will in-turn reduce the exposure of those resources. Impact: Implementing this recommendation in an existing VPC containing operating resources requires extremely careful migration planning as the default security groups are likely to be enabling many ports that are unknown. 
Enabling VPC flow logging (of accepts) in an existing environment that is known to be breach free will reveal the current pattern of ports being used for each instance to communicate successfully. Audit: Perform the following to determine if the account is configured as prescribed: Security Group State 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. Repeat the next steps for all VPCs - including the default VPC in each AWS region: 3. In the left pane, click Security Groups 4. For each default security group, perform the following: 5. Select the default security group 6. Click the Inbound Rules tab 7. Ensure no rules exist 8. Click the Outbound Rules tab 9. Ensure no rules exist Security Group Members 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. Repeat the next steps for all default groups in all VPCs - including the default VPC in each AWS region: 3. In the left pane, click Security Groups 4. Copy the id of the default security group. 5. Change to the EC2 Management Console at https://console.aws.amazon.com/ec2/v2/home 6. In the filter column type 'Security Group ID : < security group id from #4 >' Remediation: Security Group Members Perform the following to implement the prescribed state: 1. Identify AWS resources that exist within the default security group 2. Create a set of least privilege security groups for those resources 3. Place the resources in those security groups 4. Remove the resources noted in #1 from the default security group Security Group State 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. Repeat the next steps for all VPCs - including the default VPC in each AWS region: 3. In the left pane, click Security Groups 4. For each default security group, perform the following: 5. Select the default security group 6. Click the Inbound Rules tab 7. Remove any inbound rules 8. Click the Outbound Rules tab 9. 
Remove any Outbound rules Recommended: IAM groups allow you to edit the "name" field. After remediating default groups rules for all VPCs in all regions, edit this field to add text similar to "DO NOT USE. DO NOT ADD RULES" 5.4]
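The per-security-group console audits above (5.2 for 0.0.0.0/0 and 5.3 for ::/0) can be approximated from the CLI in one sweep. This is a sketch, not the benchmark's prescribed procedure: the function name is hypothetical, <region> is an argument, and it checks only ports 22 and 3389 — extend the awk condition for any other remote administration ports in your environment:

```shell
# Flag security groups whose inbound rules reach port 22 or 3389 from
# 0.0.0.0/0 or ::/0 (including ALL-protocol rules and covering ranges).
open_admin_ingress() {
  for gid in $(aws ec2 describe-security-groups --region "$1" \
      --query 'SecurityGroups[].GroupId' --output text); do
    aws ec2 describe-security-groups --region "$1" --group-ids "$gid" \
      --query 'SecurityGroups[0].IpPermissions[].[IpProtocol,FromPort,ToPort,join(`,`,IpRanges[].CidrIp),join(`,`,Ipv6Ranges[].CidrIpv6)]' \
      --output text |
    awk -F'\t' -v g="$gid" '
      ($4 ~ /0\.0\.0\.0\/0/ || $5 ~ /::\/0/) &&
      ($1 == "-1" || ($2+0 <= 22 && 22 <= $3+0) || ($2+0 <= 3389 && 3389 <= $3+0)) {
        print "open: " g; exit
      }'
  done
}
```

The port test uses range containment rather than equality, mirroring the note above that a range such as 0-1024 is inclusive of port 22.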
    Technical security Configuration
    Configure firewalls to generate an audit log. CC ID 12038
    [Ensure security group changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. Security Groups are a stateful packet filter that controls ingress and egress traffic within a VPC. It is recommended that a metric filter and alarm be established for detecting changes to Security Groups. Rationale: Monitoring changes to security groups will help ensure that resources and services are not unintentionally exposed. CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Impact: This may require additional 'tuning' to eliminate false positives and filter out expected activity so anomalies are easier to detect. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with the active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify multi-region CloudTrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn note <cloudtrail_log_group_name>. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<account_id>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> and ensure IsLogging is set to TRUE • Ensure the identified multi-region CloudTrail captures all management events: aws cloudtrail get-event-selectors --trail-name <trail_name> and ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. 
Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>" 3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }" 4. Note the <security_group_changes_metric> value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the <security_group_changes_metric> captured in step 4: aws cloudwatch describe-alarms --query "MetricAlarms[?MetricName == '<security_group_changes_metric>']" 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> At least one subscription should have "SubscriptionArn" with a valid AWS ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<account_id>:<topic_name>:<subscription_id>" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided, which checks for security group changes, and the <cloudtrail_log_group_name> taken from audit step 1: aws logs put-metric-filter --log-group-name "<cloudtrail_log_group_name>" --filter-name "<security_group_changes_metric>" --metric-transformations metricName="<security_group_changes_metric>",metricNamespace="CISBenchmark",metricValue=1 --filter-pattern "{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }" Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. 
Create an SNS topic that the alarm will notify: aws sns create-topic --name "<sns_topic_name>" Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn "<sns_topic_arn>" --protocol <protocol_for_sns> --notification-endpoint "<sns_subscription_endpoint>" Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name "<security_group_changes_alarm>" --metric-name "<security_group_changes_metric>" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace "CISBenchmark" --alarm-actions "<sns_topic_arn>" 4.10]
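Audit steps 2-3 above boil down to one question: does the CloudTrail log group carry a metric filter covering the security group API calls? A minimal spot-check sketch, assuming a configured AWS CLI — the function name and output strings are illustrative, and grepping for one event name is a heuristic, not a full pattern comparison:

```shell
# Check whether the given log group (placeholder argument) has a metric
# filter mentioning the security-group-change events from the benchmark.
has_sg_change_filter() {
  aws logs describe-metric-filters --log-group-name "$1" \
    --query 'metricFilters[].filterPattern' --output text |
  grep -q 'AuthorizeSecurityGroupIngress' \
    && echo "filter present" || echo "filter missing"
}
```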
    Technical security Audits and Risk Management
    Restrict access to restricted data and restricted information on a need to know basis. CC ID 12453
    [Ensure access to AWSCloudShellFullAccess is restricted (Manual) Description: AWS CloudShell is a convenient way of running CLI commands against AWS services; a managed IAM policy ('AWSCloudShellFullAccess') provides full access to CloudShell, which allows file upload and download capability between a user's local system and the CloudShell environment. Within the CloudShell environment a user has sudo permissions, and can access the internet. So it is feasible to install file transfer software (for example) and move data from CloudShell to external internet servers. Rationale: Access to this policy should be restricted as it presents a potential channel for data exfiltration by malicious cloud admins that are given full permissions to the service. AWS documentation describes how to create a more restrictive IAM policy which denies file transfer permissions. Audit: From Console 1. Open the IAM console at https://console.aws.amazon.com/iam/ 2. In the left pane, select Policies 3. Search for and select AWSCloudShellFullAccess 4. On the Entities attached tab, ensure that there are no entities using this policy From Command Line 1. List IAM policies, filter for the 'AWSCloudShellFullAccess' managed policy, and note the "Arn" element value: aws iam list-policies --query "Policies[?PolicyName == 'AWSCloudShellFullAccess']" 2. Check if the 'AWSCloudShellFullAccess' policy is attached to any role: aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSCloudShellFullAccess 3. In the output, ensure PolicyRoles returns empty. Example: PolicyRoles: [ ] If it does not return empty refer to the remediation below. Note: Keep in mind that other policies may grant access. Remediation: From Console 1. Open the IAM console at https://console.aws.amazon.com/iam/ 2. In the left pane, select Policies 3. Search for and select AWSCloudShellFullAccess 4. On the Entities attached tab, for each item, check the box and select Detach 1.22]
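The command-line audit above can be wrapped into a pass/fail check. A sketch assuming a configured AWS CLI; the function name and COMPLIANT/NONCOMPLIANT labels are illustrative, and — as the note above says — this only checks role attachments of this one managed policy, so other policies granting equivalent access are out of scope:

```shell
# Report whether any IAM role has the AWSCloudShellFullAccess managed
# policy attached.
check_cloudshell_access() {
  arn='arn:aws:iam::aws:policy/AWSCloudShellFullAccess'
  attached=$(aws iam list-entities-for-policy --policy-arn "$arn" \
    --query 'PolicyRoles[].RoleName' --output text)
  # An empty PolicyRoles list renders as "" (or "None") in text output.
  if [ -z "$attached" ] || [ "$attached" = "None" ]; then
    echo "COMPLIANT: no roles attached"
  else
    echo "NONCOMPLIANT: $attached"
  fi
}
```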
    Technical security Data and Information Management
    Implement multifactor authentication techniques. CC ID 00561
    [Ensure multi-factor authentication (MFA) is enabled for all IAM users that have a console password (Automated) Description: Multi-Factor Authentication (MFA) adds an extra layer of authentication assurance beyond traditional credentials. With MFA enabled, when a user signs in to the AWS Console, they will be prompted for their user name and password as well as for an authentication code from their physical or virtual MFA token. It is recommended that MFA be enabled for all accounts that have a console password. Rationale: Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that displays a time-sensitive key and have knowledge of a credential. Audit: Perform the following to determine if a MFA device is enabled for all IAM users having a console password: From Console: 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. In the left pane, select Users 3. If the MFA or Password age columns are not visible in the table, click the gear icon at the upper right corner of the table and ensure a checkmark is next to both, then click Close. 4. Ensure that for each user where the Password age column shows a password age, the MFA column shows Virtual, U2F Security Key, or Hardware. From Command Line: 1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their password and MFA status: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,8 2. The output of this command will produce a table similar to the following: user,password_enabled,mfa_active elise,false,false brandon,true,true rakesh,false,false helene,false,false paras,true,true anitha,false,false 3. For any column having password_enabled set to true , ensure mfa_active is also set to true. Remediation: Perform the following to enable MFA: From Console: 1. 
Sign in to the AWS Management Console and open the IAM console at 'https://console.aws.amazon.com/iam/' 2. In the left pane, select Users. 3. In the User Name list, choose the name of the intended MFA user. 4. Choose the Security Credentials tab, and then choose Manage MFA Device. 5. In the Manage MFA Device wizard, choose Virtual MFA device, and then choose Continue. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes. 6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications at https://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications). If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device). 7. Determine whether the MFA app supports QR codes, and then do one of the following: • Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code. • In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application. When you are finished, the virtual MFA device starts generating one-time passwords. 8. In the Manage MFA Device wizard, in the MFA Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the MFA Code 2 box. 9. Click Assign MFA. 1.10]
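The command-line audit above ends with a manual comparison of the password_enabled and mfa_active columns; that comparison can be automated as a filter over the decoded credential report. The function name is a hypothetical convenience; the column positions (1, 4, 8) match the cut command in the audit step:

```shell
# From a decoded IAM credential report on stdin, print each user that has
# a console password (password_enabled=true) but no MFA (mfa_active=false).
users_missing_mfa() {
  awk -F, 'NR > 1 && $4 == "true" && $8 == "false" {print $1}'
}
```

Usage: `aws iam get-credential-report --query 'Content' --output text | base64 -d | users_missing_mfa`.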
    Technical security Configuration
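    The command-line audit above can be scripted end to end. Below is a minimal sketch of the parsing step only: it assumes the three-column CSV (user, password_enabled, mfa_active) produced by the `cut` pipeline shown in the audit, and prints every user who has a console password but no MFA. The function name and sample rows are illustrative, not real account data.

```shell
# Flag console users without MFA from a credential-report slice.
# Expects CSV columns: user,password_enabled,mfa_active (header included),
# i.e. the output of the base64/cut pipeline shown in the audit step.
flag_users_without_mfa() {
  tail -n +2 | awk -F, '$2 == "true" && $3 != "true" { print $1 }'
}

# Illustrative sample, not real account data:
flag_users_without_mfa <<'EOF'
user,password_enabled,mfa_active
elise,false,false
brandon,true,true
helene,true,false
EOF
# → helene
```

In practice the input would come from `aws iam generate-credential-report` followed by `aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,8`; any user the function prints fails the check.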
    Refrain from allowing individuals to self-enroll into multifactor authentication from untrusted devices. CC ID 17173 Technical security Technical Security
    Implement phishing-resistant multifactor authentication techniques. CC ID 16541 Technical security Technical Security
    Document and approve requests to bypass multifactor authentication. CC ID 15464 Technical security Establish/Maintain Documentation
    Encrypt in scope data or in scope information, as necessary. CC ID 04824
    [Ensure that encryption is enabled for EFS file systems (Automated) Description: EFS data should be encrypted at rest using AWS KMS (Key Management Service). Rationale: Data should be encrypted at rest to reduce the risk of a data breach via direct access to the storage device. Audit: From Console: 1. Login to the AWS Management Console and Navigate to the Elastic File System (EFS) dashboard. 2. Select File Systems from the left navigation panel. 3. Each item on the list has a visible Encrypted field that displays data at rest encryption status. 4. Validate that this field reads Encrypted for all EFS file systems in all AWS regions. From CLI: 1. Run describe-file-systems command using custom query filters to list the identifiers of all AWS EFS file systems currently available within the selected region: aws efs describe-file-systems --region --output table --query 'FileSystems[*].FileSystemId' 2. The command output should return a table with the requested file system IDs. 3. Run describe-file-systems command using the ID of the file system that you want to examine as identifier and the necessary query filters: aws efs describe-file-systems --region --file-system-id --query 'FileSystems[*].Encrypted' 4. The command output should return the file system encryption status true or false. If the returned value is false, the selected AWS EFS file system is not encrypted; if the returned value is true, it is encrypted. Remediation: It is important to note that EFS file system data at rest encryption must be turned on when creating the file system. If an EFS file system has been created without data at rest encryption enabled then you must create another EFS file system with the correct configuration and transfer the data. Steps to create an EFS file system with data encrypted at rest: From Console: 1. Login to the AWS Management Console and Navigate to the Elastic File System (EFS) dashboard. 2. 
Select File Systems from the left navigation panel. 3. Click Create File System button from the dashboard top menu to start the file system setup process. 4. On the Configure file system access configuration page, perform the following actions. • Choose the right VPC from the VPC dropdown list. • Within Create mount targets section, select the checkboxes for all of the Availability Zones (AZs) within the selected VPC. These will be your mount targets. • Click Next step to continue. 5. Perform the following on the Configure optional settings page. • Create tags to describe your new file system. • Choose performance mode based on your requirements. • Check Enable encryption checkbox and choose aws/elasticfilesystem from Select KMS master key dropdown list to enable encryption for the new file system using the default master key provided and managed by AWS KMS. • Click Next step to continue. 6. Review the file system configuration details on the review and create page and then click Create File System to create your new AWS EFS file system. 7. Copy the data from the old unencrypted EFS file system onto the newly created encrypted file system. 8. Remove the unencrypted file system as soon as your data migration to the newly created encrypted file system is completed. 9. Change the AWS region from the navigation bar and repeat the entire process for other AWS regions. From CLI: 1. Run describe-file-systems command to describe the configuration information available for the selected (unencrypted) file system (see Audit section to identify the right resource): aws efs describe-file-systems --region --file-system-id 2. The command output should return the requested configuration information. 3. To provision a new AWS EFS file system, you need to generate a universally unique identifier (UUID) in order to create the token required by the create-file-system command. To create the required token, you can use a randomly generated UUID from "https://www.uuidgenerator.net". 4. 
Run create-file-system command using the unique token created at the previous step. aws efs create-file-system --region --creation-token --performance-mode generalPurpose --encrypted 5. The command output should return the new file system configuration metadata. 6. Run create-mount-target command using the newly created EFS file system ID returned at the previous step as identifier and the ID of the Availability Zone (AZ) that will represent the mount target: aws efs create-mount-target --region --file-system-id --subnet-id 7. The command output should return the new mount target metadata. 8. Now you can mount your file system from an EC2 instance. 9. Copy the data from the old unencrypted EFS file system onto the newly created encrypted file system. 10. Remove the unencrypted file system as soon as your data migration to the newly created encrypted file system is completed. aws efs delete-file-system --region --file-system-id 11. Change the AWS region by updating the --region and repeat the entire process for other AWS regions. Default Value: EFS file system data is encrypted at rest by default when creating a file system via the Console. Encryption at rest is not enabled by default when creating a new file system using the AWS CLI, API, and SDKs. 2.4.1
    Ensure CloudTrail logs are encrypted at rest using KMS CMKs (Automated) Description: AWS CloudTrail is a web service that records AWS API calls for an account and makes those logs available to users and resources in accordance with IAM policies. AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data, and uses Hardware Security Modules (HSMs) to protect the security of encryption keys. CloudTrail logs can be configured to leverage server side encryption (SSE) and KMS customer-created master keys (CMKs) to further protect CloudTrail logs. It is recommended that CloudTrail be configured to use SSE-KMS. Rationale: Configuring CloudTrail to use SSE-KMS provides additional confidentiality controls on log data as a given user must have S3 read permission on the corresponding log bucket and must be granted decrypt permission by the CMK policy. Impact: Customer created keys incur an additional cost. See https://aws.amazon.com/kms/pricing/ for more information. Audit: Perform the following to determine if CloudTrail is configured to use SSE-KMS: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. In the left navigation pane, choose Trails . 3. Select a Trail 4. Under the S3 section, ensure Encrypt log files is set to Yes and a KMS key ID is specified in the KMS Key Id field. From Command Line: 1. Run the following command: aws cloudtrail describe-trails 2. For each trail listed, SSE-KMS is enabled if the trail has a KmsKeyId property defined. Remediation: Perform the following to configure CloudTrail to use SSE-KMS: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. In the left navigation pane, choose Trails . 3. Click on a Trail 4. Under the S3 section click on the edit button (pencil icon) 5. Click Advanced 6. 
Select an existing CMK from the KMS key Id drop-down menu • Note: Ensure the CMK is located in the same region as the S3 bucket • Note: You will need to apply a KMS Key policy on the selected CMK in order for CloudTrail as a service to encrypt and decrypt log files using the CMK provided. Steps are provided here for editing the selected CMK Key policy 7. Click Save 8. You will see a notification message stating that you need to have decrypt permissions on the specified KMS key to decrypt log files. 9. Click Yes From Command Line: aws cloudtrail update-trail --name --kms-key-id aws kms put-key-policy --key-id --policy 3.5]
    Technical security Data and Information Management
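    The remediation notes that the chosen CMK needs a key policy statement allowing CloudTrail to encrypt log files, but the referenced steps did not survive extraction. Below is a hedged sketch of such a statement; the Sid, function name, and account ID are placeholders, and the wording should be confirmed against the current AWS CloudTrail documentation before applying it with `aws kms put-key-policy`.

```shell
# Emit an example key-policy statement granting the CloudTrail service
# principal permission to generate data keys for trails in one account.
# $1: AWS account ID (placeholder in the example call below).
emit_cloudtrail_kms_statement() {
  cat <<EOF
{
  "Sid": "Allow CloudTrail to encrypt logs",
  "Effect": "Allow",
  "Principal": { "Service": "cloudtrail.amazonaws.com" },
  "Action": "kms:GenerateDataKey*",
  "Resource": "*",
  "Condition": {
    "StringLike": {
      "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:$1:trail/*"
    }
  }
}
EOF
}

emit_cloudtrail_kms_statement "111122223333"   # illustrative account ID
```

This statement would be merged into the CMK's full key policy (which must also retain the account's administrative statements) before being applied with `aws kms put-key-policy`.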
    Change cryptographic keys in accordance with organizational standards. CC ID 01302
    [Ensure rotation for customer-created symmetric CMKs is enabled (Automated) Description: AWS Key Management Service (KMS) allows customers to rotate the backing key, which is key material stored within the KMS and tied to the key ID of the customer-created customer master key (CMK). It is the backing key that is used to perform cryptographic operations such as encryption and decryption. Automated key rotation currently retains all prior backing keys so that decryption of encrypted data can take place transparently. It is recommended that CMK key rotation be enabled for symmetric keys. Key rotation cannot be enabled for any asymmetric CMK. Rationale: Rotating encryption keys helps reduce the potential impact of a compromised key as data encrypted with a new key cannot be accessed with a previous key that may have been exposed. Keys should be rotated every year, or upon event that would result in the compromise of that key. Impact: Creation, management, and storage of CMKs may require additional time from an administrator. Audit: From Console: 1. Sign in to the AWS Management Console and open the KMS console at: https://console.aws.amazon.com/kms. 2. In the left navigation pane, click Customer-managed keys. 3. Select a customer managed CMK where Key spec = SYMMETRIC_DEFAULT. 4. Select the Key rotation tab. 5. Ensure the Automatically rotate this KMS key every year checkbox is checked. 6. Repeat steps 3–5 for all customer-managed CMKs where "Key spec = SYMMETRIC_DEFAULT". From Command Line: 1. Run the following command to get a list of all keys and their associated KeyIds: aws kms list-keys 2. For each key, note the KeyId and run the following command: aws kms describe-key --key-id 3. If the response contains "KeySpec = SYMMETRIC_DEFAULT", run the following command: aws kms get-key-rotation-status --key-id 4. Ensure KeyRotationEnabled is set to true. 5. Repeat steps 2–4 for all remaining CMKs. Remediation: From Console: 1. 
Sign in to the AWS Management Console and open the KMS console at: https://console.aws.amazon.com/kms. 2. In the left navigation pane, click Customer-managed keys. 3. Select a key where Key spec = SYMMETRIC_DEFAULT that does not have automatic rotation enabled. 4. Select the Key rotation tab. 5. Check the Automatically rotate this KMS key every year checkbox. 6. Click Save. 7. Repeat steps 3–6 for all customer-managed CMKs that do not have automatic rotation enabled. From Command Line: 1. Run the following command to enable key rotation: aws kms enable-key-rotation --key-id 3.6]
    Technical security Data and Information Management
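    The audit loop above (list-keys, then describe-key, then get-key-rotation-status) reduces to one decision per key: a symmetric key with rotation disabled fails the check, while asymmetric keys are out of scope because rotation cannot be enabled for them. That decision can be isolated as in this sketch; the function name and sample rows are illustrative, and the True/False values assume the AWS CLI's `--output text` rendering of booleans.

```shell
# stdin: one key per line as "KeyId<TAB>KeySpec<TAB>KeyRotationEnabled",
# e.g. assembled from `aws kms describe-key` and
# `aws kms get-key-rotation-status` text output.
# Prints symmetric keys that still need rotation enabled; asymmetric
# keys are skipped because rotation cannot be enabled for them.
flag_rotation_disabled() {
  awk -F'\t' '$2 == "SYMMETRIC_DEFAULT" && $3 != "True" { print $1 }'
}

printf 'key-a\tSYMMETRIC_DEFAULT\tTrue\nkey-b\tSYMMETRIC_DEFAULT\tFalse\nkey-c\tRSA_2048\tFalse\n' |
  flag_rotation_disabled
# → key-b
```

Each flagged key ID would then be passed to `aws kms enable-key-rotation --key-id` per the remediation step.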
    Establish, implement, and maintain Public Key certificate procedures. CC ID 07085
    [Ensure that all the expired SSL/TLS certificates stored in AWS IAM are removed (Automated) Description: To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates. Use IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM. Additionally, you cannot manage your certificates from the IAM Console. Rationale: Removing expired SSL/TLS certificates eliminates the risk that an invalid certificate will be deployed accidentally to a resource such as AWS Elastic Load Balancer (ELB), which can damage the credibility of the application/website behind the ELB. As a best practice, it is recommended to delete expired certificates. Audit: From Console: Getting the certificates expiration information via AWS Management Console is not currently supported. To request information about the SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI). 
From Command Line: Run list-server-certificates command to list all the IAM-stored server certificates: aws iam list-server-certificates The command output should return an array that contains all the SSL/TLS certificates currently stored in IAM and their metadata (name, ID, expiration date, etc): { "ServerCertificateMetadataList": [ { "ServerCertificateId": "EHDGFRW7EJFYTE88D", "ServerCertificateName": "MyServerCertificate", "Expiration": "2018-07-10T23:59:59Z", "Path": "/", "Arn": "arn:aws:iam::012345678910:servercertificate/MySSLCertificate", "UploadDate": "2018-06-10T11:56:08Z" } ] } Verify the ServerCertificateName and Expiration parameter value (expiration date) for each SSL/TLS certificate returned by the list-server-certificates command and determine if there are any expired server certificates currently stored in AWS IAM. If so, use the AWS API to remove them. If this command returns: { "ServerCertificateMetadataList": [] } This means that there are no expired certificates; it does NOT mean that no certificates exist. Remediation: From Console: Removing expired certificates via AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI). From Command Line: To delete an expired certificate, run the following command, replacing with the name of the certificate to delete: aws iam delete-server-certificate --server-certificate-name When the preceding command is successful, it does not return any output. Default Value: By default, expired certificates won't get deleted. 1.19]
    Technical security Establish/Maintain Documentation
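    Because the Expiration field is an ISO 8601 UTC timestamp, expired certificates can be found with a plain string comparison against the current time (Zulu-suffixed ISO 8601 timestamps sort lexicographically in chronological order). A sketch, with an illustrative function name and sample rows extracted from list-server-certificates output:

```shell
# stdin: "ServerCertificateName<TAB>Expiration" rows, e.g. extracted from
# `aws iam list-server-certificates` metadata; $1: current UTC time.
# ISO 8601 "Z" timestamps compare correctly as plain strings.
list_expired_certs() {
  awk -v now="$1" -F'\t' '$2 < now { print $1 }'
}

now=$(date -u +%Y-%m-%dT%H:%M:%SZ)
printf 'MyServerCertificate\t2018-07-10T23:59:59Z\nFreshCert\t2099-01-01T00:00:00Z\n' |
  list_expired_certs "$now"
# → MyServerCertificate
```

Each name the function prints can then be removed with `aws iam delete-server-certificate --server-certificate-name` as shown in the remediation.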
    Establish, implement, and maintain information security procedures. CC ID 12006
    [Ensure all data in Amazon S3 has been discovered, classified and secured when required. (Manual) Description: Amazon S3 buckets can contain sensitive data that, for security purposes, should be discovered, monitored, classified, and protected. Macie, along with other 3rd-party tools, can automatically provide an inventory of Amazon S3 buckets. Rationale: Using a Cloud service or 3rd-party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Impact: There is a cost associated with using Amazon Macie. There is also typically a cost associated with 3rd-party tools that perform similar processes and protection. Audit: Perform the following steps to determine if Macie is running: From Console: 1. Login to the Macie console at https://console.aws.amazon.com/macie/ 2. In the left-hand pane, click By job under Findings. 3. Confirm that you have a Job setup for your S3 Buckets When you log into the Macie console, if you are not taken to the summary page and do not have a job set up and running, refer to the remediation procedure below. If you are using a 3rd-party tool to manage and protect your S3 data, you meet this recommendation. Remediation: Perform the steps below to enable and configure Amazon Macie From Console: 1. Log on to the Macie console at https://console.aws.amazon.com/macie/ 2. Click Get started. 3. Click Enable Macie. Setup a repository for sensitive data discovery results 1. In the Left pane, under Settings, click Discovery results. 2. Make sure Create bucket is selected. 3. Create a bucket, enter a name for the bucket. The name must be unique across all S3 buckets. 
In addition, the name must start with a lowercase letter or a number. 4. Click on Advanced. 5. Block all public access, make sure Yes is selected. 6. KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric, customer master key (CMK) that's in the same Region as the S3 bucket. 7. Click on Save Create a job to discover sensitive data 1. In the left pane, click S3 buckets. Macie displays a list of all the S3 buckets for your account. 2. Select the check box for each bucket that you want Macie to analyze as part of the job 3. Click Create job. 4. Click Quick create. 5. For the Name and description step, enter a name and, optionally, a description of the job. 6. Then click Next. 7. For the Review and create step, click Submit. Review your findings 1. In the left pane, click Findings. 2. To view the details of a specific finding, choose any field other than the check box for the finding. If you are using a 3rd Party tool to manage and protect your s3 data, follow the Vendor documentation for implementing and configuring that tool. 2.1.3]
    Operational management Business Processes
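    Although the audit above is console-based, Macie's enablement status can also be read from the CLI with `aws macie2 get-macie-session`, which returns a JSON document containing a status field. A sketch of checking that output (the sample JSON is illustrative, and the grep assumes the AWS CLI's default pretty-printed formatting; a JSON-aware tool such as jq would be more robust):

```shell
# Succeeds when the get-macie-session response reports Macie enabled.
# Assumes the AWS CLI's default "key": "value" JSON formatting.
macie_enabled() {
  grep -q '"status": "ENABLED"'
}

printf '{\n  "createdAt": "2024-01-31T00:00:00Z",\n  "status": "ENABLED"\n}\n' |
  macie_enabled && echo "Macie is enabled"
# → Macie is enabled
```

A passing check still requires confirming that a classification job covers the relevant buckets, per the console steps above.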
    Disseminate and communicate the information security procedures to all interested personnel and affected parties. CC ID 16303 Operational management Communicate
    Document the roles and responsibilities for all activities that protect restricted data in the information security procedures. CC ID 12304 Operational management Establish/Maintain Documentation
    Establish, implement, and maintain system hardening procedures. CC ID 12001 System hardening through configuration management Establish/Maintain Documentation
    Use the latest approved version of all assets. CC ID 00897
    [{Instance Metadata Service} Ensure that EC2 Metadata Service only allows IMDSv2 (Automated) Description: When enabling the Metadata Service on AWS EC2 instances, users have the option of using either Instance Metadata Service Version 1 (IMDSv1; a request/response method) or Instance Metadata Service Version 2 (IMDSv2; a session-oriented method). Rationale: Instance metadata is data about your instance that you can use to configure or manage the running instance. Instance metadata is divided into categories, for example, host name, events, and security groups. When enabling the Metadata Service on AWS EC2 instances, users have the option of using either Instance Metadata Service Version 1 (IMDSv1; a request/response method) or Instance Metadata Service Version 2 (IMDSv2; a session-oriented method). With IMDSv2, every request is now protected by session authentication. A session begins and ends a series of requests that software running on an EC2 instance uses to access the locally-stored EC2 instance metadata and credentials. Allowing Version 1 of the service may open EC2 instances to Server-Side Request Forgery (SSRF) attacks, so Amazon recommends utilizing Version 2 for better instance security. Audit: From Console: 1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at https://console.aws.amazon.com/ec2/. 2. In the left navigation panel, under the INSTANCES section, choose Instances. 3. Select the EC2 instance that you want to examine. 4. Check for the IMDSv2 status, and ensure that it is set to Required. From Command Line: 1. Run the describe-instances command using appropriate filtering to list the IDs of all the existing EC2 instances currently available in the selected region: aws ec2 describe-instances --region --output table --query "Reservations[*].Instances[*].InstanceId" 2. The command output should return a table with the requested instance IDs. 3. 
Now run the describe-instances command using an instance ID returned at the previous step and custom filtering to determine whether the selected instance has IMDSv2: aws ec2 describe-instances --region --instance-ids --query "Reservations[*].Instances[*].MetadataOptions" --output table 4. Ensure that for all EC2 instances HttpTokens is set to required and State is set to applied. 5. Repeat steps no. 3 and 4 to verify other EC2 instances provisioned within the current region. 6. Repeat steps no. 1 – 5 to perform the audit process for other AWS regions. Remediation: From Console: 1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at https://console.aws.amazon.com/ec2/. 2. In the left navigation panel, under the INSTANCES section, choose Instances. 3. Select the EC2 instance that you want to examine. 4. Choose Actions > Instance Settings > Modify instance metadata options. 5. Ensure Instance metadata service is set to Enable and set IMDSv2 to Required. 6. Repeat steps no. 1 – 5 to perform the remediation process for other EC2 Instances in all applicable AWS regions. From Command Line: 1. Run the describe-instances command using appropriate filtering to list the IDs of all the existing EC2 instances currently available in the selected region: aws ec2 describe-instances --region --output table --query "Reservations[*].Instances[*].InstanceId" 2. The command output should return a table with the requested instance IDs. 3. Now run the modify-instance-metadata-options command using an instance ID returned at the previous step to update the Instance Metadata Version: aws ec2 modify-instance-metadata-options --instance-id --http-tokens required --region 4. Repeat steps no. 1 – 3 to perform the remediation process for other EC2 Instances in the same AWS region. 5. Change the region by updating --region and repeat the entire process for other regions. 5.6]
    System hardening through configuration management Technical Security
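    The per-instance decision in the audit (HttpTokens must be required and State must be applied) can be applied to the CLI's text output in bulk. A sketch, assuming rows produced by a query such as `aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId,MetadataOptions.HttpTokens,MetadataOptions.State]' --output text` (the query string, function name, and sample instance IDs are illustrative):

```shell
# stdin: "InstanceId<TAB>HttpTokens<TAB>State" rows; prints instances that
# still accept IMDSv1, i.e. are not locked to required tokens with the
# setting applied.
flag_imdsv1_instances() {
  awk -F'\t' '!($2 == "required" && $3 == "applied") { print $1 }'
}

printf 'i-0abc\trequired\tapplied\ni-0def\toptional\tapplied\n' | flag_imdsv1_instances
# → i-0def
```

Each flagged instance would then be remediated with `aws ec2 modify-instance-metadata-options --instance-id ... --http-tokens required` as shown above.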
    Include risk information when communicating critical security updates. CC ID 14948 System hardening through configuration management Communicate
    Configure Least Functionality and Least Privilege settings to organizational standards. CC ID 07599 System hardening through configuration management Configuration
    Configure "Block public access (bucket settings)" to organizational standards. CC ID 15444
    [Ensure that S3 Buckets are configured with 'Block public access (bucket settings)' (Automated) Description: Amazon S3 provides Block public access (bucket settings) and Block public access (account settings) to help you manage public access to Amazon S3 resources. By default, S3 buckets and objects are created with public access disabled. However, an IAM principal with sufficient S3 permissions can enable public access at the bucket and/or object level. While enabled, Block public access (bucket settings) prevents an individual bucket, and its contained objects, from becoming publicly accessible. Similarly, Block public access (account settings) prevents all buckets, and contained objects, from becoming publicly accessible across the entire account. Rationale: Amazon S3 Block public access (bucket settings) prevents the accidental or malicious public exposure of data contained within the respective bucket(s). Amazon S3 Block public access (account settings) prevents the accidental or malicious public exposure of data contained within all buckets of the respective AWS account. Whether blocking public access to all or some buckets is an organizational decision that should be based on data sensitivity, least privilege, and use case. Impact: When you apply Block Public Access settings to an account, the settings apply to all AWS Regions globally. The settings might not take effect in all Regions immediately or simultaneously, but they eventually propagate to all Regions. Audit: If utilizing Block Public Access (bucket settings) From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Edit public access settings'. 4. Ensure that block public access settings are set appropriately for this bucket 5. Repeat for all the buckets in your AWS account. From Command Line: 1. List all of the S3 Buckets aws s3 ls 2. 
Find the public access setting on that bucket aws s3api get-public-access-block --bucket Output if Block Public access is enabled: { "PublicAccessBlockConfiguration": { "BlockPublicAcls": true, "IgnorePublicAcls": true, "BlockPublicPolicy": true, "RestrictPublicBuckets": true } } If the output reads false for the separate configuration settings then proceed to the remediation. If utilizing Block Public Access (account settings) From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Choose Block public access (account settings) 3. Ensure that block public access settings are set appropriately for your AWS account. From Command Line: To check the Public Access settings for this account, run the following command: aws s3control get-public-access-block --account-id --region Output if Block Public access is enabled: { "PublicAccessBlockConfiguration": { "IgnorePublicAcls": true, "BlockPublicPolicy": true, "BlockPublicAcls": true, "RestrictPublicBuckets": true } } If the output reads false for the separate configuration settings then proceed to the remediation. Remediation: If utilizing Block Public Access (bucket settings) From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Edit public access settings'. 4. Click 'Block all public access' 5. Repeat for all the buckets in your AWS account that contain sensitive data. From Command Line: 1. List all of the S3 Buckets aws s3 ls 2. Set the Block Public Access to true on that bucket aws s3api put-public-access-block --bucket --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true" If utilizing Block Public Access (account settings) From Console: If the output reads true for the separate configuration settings then it is set on the account. 1. 
Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Choose Block Public Access (account settings) 3. Choose Edit to change the block public access settings for all the buckets in your AWS account 4. Choose the settings you want to change, and then choose Save. For details about each setting, pause on the i icons. 5. When you're asked for confirmation, enter confirm. Then Click Confirm to save your changes. From Command Line: To set Block Public access settings for this account, run the following command: aws s3control put-public-access-block --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true" --account-id 2.1.4]
    System hardening through configuration management Configuration
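    The four flags returned by get-public-access-block can be verified in one step. The sketch below assumes the AWS CLI's default pretty-printed JSON (one "key": value pair per line); a JSON-aware tool such as jq would be more robust against formatting changes.

```shell
# Succeeds only when all four Block Public Access flags are true in the
# JSON from `aws s3api get-public-access-block` (or the s3control
# account-level equivalent).
bpa_fully_enabled() {
  [ "$(grep -c ': true')" -eq 4 ]
}

bpa_fully_enabled <<'EOF' && echo "all four flags enabled"
{
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    }
}
EOF
# → all four flags enabled
```

Any bucket or account for which the function fails would be remediated with `put-public-access-block` as shown in the control.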
    Configure S3 Bucket Policies to organizational standards. CC ID 15431
    [Ensure S3 Bucket Policy is set to deny HTTP requests (Automated) Description: At the Amazon S3 bucket level, you can configure permissions through a bucket policy making the objects accessible only through HTTPS. Rationale: By default, Amazon S3 allows both HTTP and HTTPS requests. To allow access to Amazon S3 objects through HTTPS only, you also have to explicitly deny access to HTTP requests. Bucket policies that allow HTTPS requests without explicitly denying HTTP requests will not comply with this recommendation. Audit: To allow access over HTTPS you can use a condition that checks the key "aws:SecureTransport": "true". This means that the request is sent through HTTPS but that HTTP can still be used. So to make sure you do not allow HTTP access, confirm that there is a bucket policy that explicitly denies access for HTTP requests and that it contains the key "aws:SecureTransport": "false". From Console: 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Permissions', then Click on Bucket Policy. 4. Ensure that a policy is listed that matches: '{ "Sid": , "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::/*", "Condition": { "Bool": { "aws:SecureTransport": "false" }' and will be specific to your account 5. Repeat for all the buckets in your AWS account. From Command Line: 1. List all of the S3 Buckets aws s3 ls 2. Using the list of buckets run this command on each of them: aws s3api get-bucket-policy --bucket | grep aws:SecureTransport NOTE: If the CLI throws an error, no policy has been configured for the specified S3 bucket, and by default the bucket allows both HTTP and HTTPS requests. 3. Confirm that aws:SecureTransport is set to false aws:SecureTransport:false 4. Confirm that the policy line has Effect set to Deny 'Effect:Deny' Remediation: From Console: 1. 
Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Permissions'. 4. Click 'Bucket Policy' 5. Add this to the existing policy filling in the required information { "Sid": "", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::/*", "Condition": { "Bool": { "aws:SecureTransport": "false" } } } 6. Save 7. Repeat for all the buckets in your AWS account that contain sensitive data. From Console using AWS Policy Generator: 1. Repeat steps 1-4 above. 2. Click on Policy Generator at the bottom of the Bucket Policy Editor 3. Select Policy Type S3 Bucket Policy 4. Add Statements • Effect = Deny • Principal = * • AWS Service = Amazon S3 • Actions = * • Amazon Resource Name = 5. Generate Policy 6. Copy the text and add it to the Bucket Policy. From Command Line: 1. Export the bucket policy to a json file. aws s3api get-bucket-policy --bucket --query Policy --output text > policy.json 2. Modify the policy.json file by adding in this statement: { "Sid": "", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::/*", "Condition": { "Bool": { "aws:SecureTransport": "false" } } } 3. Apply this modified policy back to the S3 bucket: aws s3api put-bucket-policy --bucket --policy file://policy.json Default Value: Both HTTP and HTTPS Request are allowed 2.1.1]
    System hardening through configuration management Configuration
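    The policy fragments above lost their placeholders in extraction; a complete, well-formed version of the deny-HTTP statement follows as a sketch. The Sid, function name, and bucket name are placeholders; note that the Resource list covers both the bucket itself and its objects, which matches AWS's documented pattern for this control.

```shell
# Emit a complete example deny-HTTP bucket policy.
# $1: bucket name (placeholder in the example call below).
emit_deny_http_policy() {
  cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::$1",
        "arn:aws:s3:::$1/*"
      ],
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    }
  ]
}
EOF
}

emit_deny_http_policy "example-bucket"
```

The emitted document could be saved to a file and applied with `aws s3api put-bucket-policy --bucket example-bucket --policy file://policy.json`, merging it with any existing statements first.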
    Establish, implement, and maintain authenticators. CC ID 15305
    [{not used} Ensure credentials unused for 45 days or greater are disabled (Automated) Description: AWS IAM users can access AWS resources using different types of credentials, such as passwords or access keys. It is recommended that all credentials that have been unused in 45 or greater days be deactivated or removed. Rationale: Disabling or removing unnecessary credentials will reduce the window of opportunity for credentials associated with a compromised or abandoned account to be used. Audit: Perform the following to determine if unused credentials exist: From Console: 1. Login to the AWS Management Console 2. Click Services 3. Click IAM 4. Click on Users 5. Click the Settings (gear) icon. 6. Select Console last sign-in, Access key last used, and Access Key Id 7. Click on Close 8. Check and ensure that Console last sign-in is less than 45 days ago. Note - Never means the user has never logged in. 9. Check and ensure that Access key age is less than 45 days and that Access key last used does not say None If the user hasn't signed into the Console in the last 45 days or Access keys are over 45 days old refer to the remediation. From Command Line: Download Credential Report: 1. Run the following commands: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,5,6,9,10,11,14,15,16 | grep -v '^' Ensure unused credentials do not exist: 2. For each user having password_enabled set to TRUE , ensure password_last_used_date is less than 45 days ago. • When password_enabled is set to TRUE and password_last_used is set to No_Information , ensure password_last_changed is less than 45 days ago. 3. For each user having an access_key_1_active or access_key_2_active to TRUE , ensure the corresponding access_key_n_last_used_date is less than 45 days ago. 
• When a user having an access_key_x_active (where x is 1 or 2) set to TRUE has the corresponding access_key_x_last_used_date set to N/A, ensure access_key_x_last_rotated is less than 45 days ago. Remediation: From Console: Perform the following to manage Unused Password (IAM user console access) 1. Login to the AWS Management Console: 2. Click Services 3. Click IAM 4. Click on Users 5. Click on Security Credentials 6. Select user whose Console last sign-in is greater than 45 days 7. Click Security credentials 8. In section Sign-in credentials, Console password click Manage 9. Under Console Access select Disable 10. Click Apply Perform the following to deactivate Access Keys: 1. Login to the AWS Management Console: 2. Click Services 3. Click IAM 4. Click on Users 5. Click on Security Credentials 6. Select any access keys that are over 45 days old and that have been used and • Click on Make Inactive 7. Select any access keys that are over 45 days old and that have not been used and • Click the X to Delete 1.12]
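The CLI audit above reduces to comparing ISO-8601 timestamps in the decoded credential report against a cutoff 45 days in the past. A minimal sketch of that comparison, run against hypothetical report rows rather than live `aws iam get-credential-report` output (ISO-8601 timestamps compare correctly as plain strings, and the fixed cutoff stands in for "today minus 45 days"):

```shell
# Fixed cutoff for illustration; normally computed as today minus 45 days.
cutoff="2024-01-01T00:00:00+00:00"
# Hypothetical rows: user, password_enabled, password_last_used.
report='user,password_enabled,password_last_used
alice,true,2024-02-10T12:00:00+00:00
bob,true,2023-11-03T09:00:00+00:00
carol,false,N/A'
# Flag console users whose last sign-in predates the cutoff.
stale=$(printf '%s\n' "$report" |
  awk -F, -v cutoff="$cutoff" \
    'NR>1 && $2=="true" && $3<cutoff {print $1}')
printf '%s\n' "$stale"
```

The same pattern applies to the access_key_n_last_used_date columns by changing the field numbers passed to cut and awk.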
    System hardening through configuration management Technical Security
    Establish, implement, and maintain an authenticator standard. CC ID 01702 System hardening through configuration management Establish/Maintain Documentation
    Disallow personal data in authenticators. CC ID 13864 System hardening through configuration management Technical Security
    Establish, implement, and maintain an authenticator management system. CC ID 12031 System hardening through configuration management Establish/Maintain Documentation
    Establish, implement, and maintain a repository of authenticators. CC ID 16372 System hardening through configuration management Data and Information Management
    Establish, implement, and maintain authenticator procedures. CC ID 12002
    [Ensure security questions are registered in the AWS account (Manual) Description: The AWS support portal allows account owners to establish security questions that can be used to authenticate individuals calling AWS customer service for support. It is recommended that security questions be established. Rationale: When creating a new AWS account, a default super user is automatically created. This account is referred to as the 'root user' or 'root' account. It is recommended that the use of this account be limited and highly controlled. During events in which the 'root' password is no longer accessible or the MFA token associated with 'root' is lost/destroyed it is possible, through authentication using secret questions and associated answers, to recover 'root' user login access. Audit: From Console: 1. Login to the AWS account as the 'root' user 2. On the top right you will see the <root_account_name> 3. Click on the <root_account_name> 4. From the drop-down menu Click My Account 5. In the Configure Security Challenge Questions section on the Personal Information page, confirm three security challenge questions are configured. 6. Click Save questions . Remediation: From Console: 1. Login to the AWS Account as the 'root' user 2. Click on the <root_account_name> from the top right of the console 3. From the drop-down menu Click My Account 4. Scroll down to the Configure Security Questions section 5. Click on Edit 6. Click on each Question • From the drop-down select an appropriate question • Click on the Answer section • Enter an appropriate answer o Follow process for all 3 questions 7. Click Update when complete 8. Save Questions and Answers and place in a secure physical location 1.3
    Do not setup access keys during initial user setup for all IAM users that have a console password (Manual) Description: AWS console defaults to no check boxes selected when creating a new IAM user. When creating the IAM User credentials you have to determine what type of access they require. Programmatic access: The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user. AWS Management Console access: If the user needs to access the AWS Management Console, create a password for the user. Rationale: Requiring the additional steps be taken by the user for programmatic access after their profile has been created will give a stronger indication of intent that access keys are [a] necessary for their work and [b] once the access key is established on an account that the keys may be in use somewhere in the organization. Note: Even if it is known the user will need access keys, require them to create the keys themselves or put in a support ticket to have them created as a separate step from user creation. Audit: Perform the following to determine if access keys were created upon user creation and are being used and rotated as prescribed: From Console: 1. Login to the AWS Management Console 2. Click Services 3. Click IAM 4. Click on a User where column Password age and Access key age is not set to None 5. Click on Security credentials Tab 6. Compare the user Creation time to the Access Key Created date. 7. For any that match, the key was created during initial user setup. • Keys that were created at the same time as the user profile and do not have a last used date should be deleted. Refer to the remediation below. From Command Line: 1. 
Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their access keys utilization: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,9,11,14,16 2. The output of this command will produce a table similar to the following: user,password_enabled,access_key_1_active,access_key_1_last_used_date,access_key_2_active,access_key_2_last_used_date elise,false,true,2015-04-16T15:14:00+00:00,false,N/A brandon,true,true,N/A,false,N/A rakesh,false,false,N/A,false,N/A helene,false,true,2015-11-18T17:47:00+00:00,false,N/A paras,true,true,2016-08-28T12:04:00+00:00,true,2016-03-04T10:11:00+00:00 anitha,true,true,2016-06-08T11:43:00+00:00,true,N/A 3. For any user having password_enabled set to true AND access_key_last_used_date set to N/A refer to the remediation below. Remediation: Perform the following to delete access keys that do not pass the audit: From Console: 1. Login to the AWS Management Console: 2. Click Services 3. Click IAM 4. Click on Users 5. Click on Security Credentials 6. As an Administrator • Click on the X (Delete) for keys that were created at the same time as the user profile but have not been used. 7. As an IAM User • Click on the X (Delete) for keys that were created at the same time as the user profile but have not been used. From Command Line: aws iam delete-access-key --access-key-id <access_key_id> --user-name <user_name> 1.11
    {be active} Ensure there is only one active access key available for any single IAM user (Automated) Description: Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). Rationale: Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API. One of the best ways to protect your account is to not allow users to have multiple access keys. Audit: From Console: 1. Sign in to the AWS Management Console and navigate to the IAM dashboard at https://console.aws.amazon.com/iam/. 2. In the left navigation panel, choose Users. 3. Click on the IAM user name that you want to examine. 4. On the IAM user configuration page, select the Security Credentials tab. 5. Under the Access Keys section, in the Status column, check the current status for each access key associated with the IAM user. If the selected IAM user has more than one access key activated then the user's access configuration does not adhere to security best practices and the risk of accidental exposures increases. • Repeat steps no. 3 – 5 for each IAM user in your AWS account. From Command Line: 1. Run the list-users command to list all IAM users within your account: aws iam list-users --query "Users[*].UserName" The command output should return an array that contains all your IAM user names. 2. Run the list-access-keys command using the IAM user name list to return the current status of each access key associated with the selected IAM user: aws iam list-access-keys --user-name <user_name> The command output should expose the metadata ("Username", "AccessKeyId", "Status", "CreateDate") for each access key on that user account. 3. Check the Status property value for each key returned to determine each key's current state. 
If the Status property value for more than one IAM access key is set to Active, the user access configuration does not adhere to this recommendation; refer to the remediation below. • Repeat steps no. 2 and 3 for each IAM user in your AWS account. Remediation: From Console: 1. Sign in to the AWS Management Console and navigate to the IAM dashboard at https://console.aws.amazon.com/iam/. 2. In the left navigation panel, choose Users. 3. Click on the IAM user name that you want to examine. 4. On the IAM user configuration page, select the Security Credentials tab. 5. In the Access Keys section, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working. 6. In the same Access Keys section, identify your non-operational access keys (other than the chosen one) and deactivate them by clicking the Make Inactive link. 7. If you receive the Change Key Status confirmation box, click Deactivate to switch off the selected key. 8. Repeat steps no. 3 – 7 for each IAM user in your AWS account. From Command Line: 1. Using the IAM user and access key information provided in the Audit CLI, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working. 2. Run the update-access-key command below using the IAM user name and the non-operational access key IDs to deactivate the unnecessary key(s). Refer to the Audit section to identify the unnecessary access key ID for the selected IAM user. Note - the command does not return any output: aws iam update-access-key --access-key-id <access_key_id> --status Inactive --user-name <user_name> 3. 
To confirm that the selected access key pair has been successfully deactivated run the list-access-keys audit command again for that IAM User: aws iam list-access-keys --user-name <user_name> • The command output should expose the metadata for each access key associated with the IAM user. If the non-operational key pair(s) Status is set to Inactive, the key has been successfully deactivated and the IAM user access configuration now adheres to this recommendation. 4. Repeat steps no. 1 – 3 for each IAM user in your AWS account. 1.13]
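The per-user check in the audit above amounts to counting Active keys per user. A sketch of that count over hypothetical (user, key ID, status) triples standing in for per-user `aws iam list-access-keys` output:

```shell
# Hypothetical access-key inventory; key IDs are illustrative only.
keys='alice,AKIAEXAMPLE1,Active
alice,AKIAEXAMPLE2,Active
bob,AKIAEXAMPLE3,Inactive
bob,AKIAEXAMPLE4,Active'
# A user with more than one Active key violates the recommendation.
violators=$(printf '%s\n' "$keys" |
  awk -F, '$3=="Active"{n[$1]++} END{for(u in n) if(n[u]>1) print u}')
printf '%s\n' "$violators"
```

Each user printed here would then go through the remediation's deactivate-all-but-one procedure.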
    System hardening through configuration management Establish/Maintain Documentation
    Restrict access to authentication files to authorized personnel, as necessary. CC ID 12127 System hardening through configuration management Technical Security
    Configure authenticator activation codes in accordance with organizational standards. CC ID 17032 System hardening through configuration management Configuration
    Configure authenticators to comply with organizational standards. CC ID 06412 System hardening through configuration management Configuration
    Configure the system to require new users to change their authenticator on first use. CC ID 05268 System hardening through configuration management Configuration
    Configure authenticators so that group authenticators or shared authenticators are prohibited. CC ID 00519 System hardening through configuration management Configuration
    Configure the system to prevent unencrypted authenticator use. CC ID 04457 System hardening through configuration management Configuration
    Disable store passwords using reversible encryption. CC ID 01708 System hardening through configuration management Configuration
    Configure the system to encrypt authenticators. CC ID 06735 System hardening through configuration management Configuration
    Configure the system to mask authenticators. CC ID 02037 System hardening through configuration management Configuration
    Configure the authenticator policy to ban the use of usernames or user identifiers in authenticators. CC ID 05992 System hardening through configuration management Configuration
    Configure the "minimum number of digits required for new passwords" setting to organizational standards. CC ID 08717 System hardening through configuration management Establish/Maintain Documentation
    Configure the "minimum number of upper case characters required for new passwords" setting to organizational standards. CC ID 08718 System hardening through configuration management Establish/Maintain Documentation
    Configure the system to refrain from specifying the type of information used as password hints. CC ID 13783 System hardening through configuration management Configuration
    Configure the "minimum number of lower case characters required for new passwords" setting to organizational standards. CC ID 08719 System hardening through configuration management Establish/Maintain Documentation
    Disable machine account password changes. CC ID 01737 System hardening through configuration management Configuration
    Configure the "minimum number of special characters required for new passwords" setting to organizational standards. CC ID 08720 System hardening through configuration management Establish/Maintain Documentation
    Configure the "require new passwords to differ from old ones by the appropriate minimum number of characters" setting to organizational standards. CC ID 08722 System hardening through configuration management Establish/Maintain Documentation
    Configure the "password reuse" setting to organizational standards. CC ID 08724
    [Ensure IAM password policy prevents password reuse (Automated) Description: IAM password policies can prevent the reuse of a given password by the same user. It is recommended that the password policy prevent the reuse of passwords. Rationale: Preventing password reuse increases account resiliency against brute force login attempts. Audit: Perform the following to ensure the password policy is configured as prescribed: From Console: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Ensure "Prevent password reuse" is checked 5. Ensure "Number of passwords to remember" is set to 24 From Command Line: aws iam get-account-password-policy Ensure the output of the above command includes "PasswordReusePrevention": 24 Remediation: Perform the following to set the password policy as prescribed: From Console: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Check "Prevent password reuse" 5. Set "Number of passwords to remember" to 24 From Command Line: aws iam update-account-password-policy --password-reuse-prevention 24 Note: All commands starting with "aws iam update-account-password-policy" can be combined into a single command. 1.9]
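The CLI audit's pass condition can be sketched as a text check against the JSON returned by get-account-password-policy; the policy document below is a hypothetical sample of that output, and the end-of-line anchor keeps values like 240 from matching:

```shell
# Hypothetical output of `aws iam get-account-password-policy`.
policy='{
  "PasswordPolicy": {
    "MinimumPasswordLength": 14,
    "PasswordReusePrevention": 24
  }
}'
# The audit passes only if PasswordReusePrevention is exactly 24;
# the $ anchor is a rough guard against partial numeric matches.
if printf '%s\n' "$policy" | grep -q '"PasswordReusePrevention": 24$'; then
  result=compliant
else
  result=non-compliant
fi
printf '%s\n' "$result"
```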
    System hardening through configuration management Establish/Maintain Documentation
    Configure the "Disable Remember Password" setting. CC ID 05270 System hardening through configuration management Configuration
    Configure the "Minimum password age" to organizational standards. CC ID 01703 System hardening through configuration management Configuration
    Configure the LILO/GRUB password. CC ID 01576 System hardening through configuration management Configuration
    Configure the system to use Apple's Keychain Access to store passwords and certificates. CC ID 04481 System hardening through configuration management Configuration
    Change the default password to Apple's Keychain. CC ID 04482 System hardening through configuration management Configuration
    Configure Apple's Keychain items to ask for the Keychain password. CC ID 04483 System hardening through configuration management Configuration
    Configure the Syskey Encryption Key and associated password. CC ID 05978 System hardening through configuration management Configuration
    Configure the "Accounts: Limit local account use of blank passwords to console logon only" setting. CC ID 04505 System hardening through configuration management Configuration
    Configure the "System cryptography: Force strong key protection for user keys stored in the computer" setting. CC ID 04534 System hardening through configuration management Configuration
    Configure interactive logon for accounts that do not have assigned authenticators in accordance with organizational standards. CC ID 05267 System hardening through configuration management Configuration
    Enable or disable remote connections from accounts with empty authenticators, as appropriate. CC ID 05269 System hardening through configuration management Configuration
    Configure the "Send LanMan compatible password" setting. CC ID 05271 System hardening through configuration management Configuration
    Configure the authenticator policy to ban or allow authenticators as words found in dictionaries, as appropriate. CC ID 05993 System hardening through configuration management Configuration
    Configure the authenticator policy to ban or allow authenticators as proper names, as necessary. CC ID 17030 System hardening through configuration management Configuration
    Set the most number of characters required for the BitLocker Startup PIN correctly. CC ID 06054 System hardening through configuration management Configuration
    Set the default folder for BitLocker recovery passwords correctly. CC ID 06055 System hardening through configuration management Configuration
    Notify affected parties to keep authenticators confidential. CC ID 06787 System hardening through configuration management Behavior
    Discourage affected parties from recording authenticators. CC ID 06788 System hardening through configuration management Behavior
    Configure the "shadow password for all accounts in /etc/passwd" setting to organizational standards. CC ID 08721 System hardening through configuration management Establish/Maintain Documentation
    Configure the "password hashing algorithm" setting to organizational standards. CC ID 08723 System hardening through configuration management Establish/Maintain Documentation
    Configure the "Disable password strength validation for Peer Grouping" setting to organizational standards. CC ID 10866 System hardening through configuration management Configuration
    Configure the "Set the interval between synchronization retries for Password Synchronization" setting to organizational standards. CC ID 11185 System hardening through configuration management Configuration
    Configure the "Set the number of synchronization retries for servers running Password Synchronization" setting to organizational standards. CC ID 11187 System hardening through configuration management Configuration
    Configure the "Turn off password security in Input Panel" setting to organizational standards. CC ID 11296 System hardening through configuration management Configuration
    Configure the "Turn on the Windows to NIS password synchronization for users that have been migrated to Active Directory" setting to organizational standards. CC ID 11355 System hardening through configuration management Configuration
    Configure the authenticator display screen to organizational standards. CC ID 13794 System hardening through configuration management Configuration
    Configure the authenticator field to disallow memorized secrets found in the memorized secret list. CC ID 13808 System hardening through configuration management Configuration
    Configure the authenticator display screen to display the memorized secret as an option. CC ID 13806 System hardening through configuration management Configuration
    Disseminate and communicate with the end user when a memorized secret entered into an authenticator field matches one found in the memorized secret list. CC ID 13807 System hardening through configuration management Communicate
    Configure the memorized secret verifiers to refrain from allowing anonymous users to access memorized secret hints. CC ID 13823 System hardening through configuration management Configuration
    Configure the system to allow paste functionality for the authenticator field. CC ID 13819 System hardening through configuration management Configuration
    Configure the system to require successful authentication before an authenticator for a user account is changed. CC ID 13821 System hardening through configuration management Configuration
    Protect authenticators or authentication factors from unauthorized modification and disclosure. CC ID 15317 System hardening through configuration management Technical Security
    Obscure authentication information during the login process. CC ID 15316 System hardening through configuration management Configuration
    Issue temporary authenticators, as necessary. CC ID 17062 System hardening through configuration management Process or Activity
    Renew temporary authenticators, as necessary. CC ID 17061 System hardening through configuration management Process or Activity
    Disable authenticators, as necessary. CC ID 17060 System hardening through configuration management Process or Activity
    Change authenticators, as necessary. CC ID 15315
    [Ensure access keys are rotated every 90 days or less (Automated) Description: Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests that you make to AWS. AWS users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. It is recommended that all access keys be regularly rotated. Rationale: Rotating access keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used. Access keys should be rotated to ensure that data cannot be accessed with an old key which might have been lost, cracked, or stolen. Audit: Perform the following to determine if access keys are rotated as prescribed: From Console: 1. Go to Management Console (https://console.aws.amazon.com/iam) 2. Click on Users 3. Click the settings icon 4. Select Console last sign-in 5. Click Close 6. Ensure that Access key age is less than 90 days. Note: None in the Access key age column means the user has not used the access key. From Command Line: aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d The access_key_1_last_rotated and the access_key_2_last_rotated fields in this file note the date and time, in ISO 8601 date-time format, when the user's access key was created or last changed. If the user does not have an active access key, the value in this field is N/A (not applicable). Remediation: Perform the following to rotate access keys: From Console: 1. Go to Management Console (https://console.aws.amazon.com/iam) 2. Click on Users 3. Click on Security Credentials 4. As an Administrator o Click on Make Inactive for keys that have not been rotated in 90 Days 5. As an IAM User o Click on Make Inactive or Delete for keys which have not been rotated or used in 90 Days 6. 
Click on Create Access Key 7. Update programmatic call with new Access Key credentials From Command Line: 1. While the first access key is still active, create a second access key, which is active by default. Run the following command: aws iam create-access-key At this point, the user has two active access keys. 2. Update all applications and tools to use the new access key. 3. Determine whether the first access key is still in use by using this command: aws iam get-access-key-last-used --access-key-id <access_key_id> 4. One approach is to wait several days and then check the old access key for any use before proceeding. Even if Step 3 indicates no use of the old key, it is recommended that you do not immediately delete the first access key. Instead, change the state of the first access key to Inactive using this command: aws iam update-access-key --access-key-id <access_key_id> --status Inactive 5. Use only the new access key to confirm that your applications are working. Any applications and tools that still use the original access key will stop working at this point because they no longer have access to AWS resources. If you find such an application or tool, you can switch its state back to Active to reenable the first access key. Then return to Step 2 and update this application to use the new key. 6. After you wait some period of time to ensure that all applications and tools have been updated, you can delete the first access key with this command: aws iam delete-access-key --access-key-id <access_key_id> 1.14]
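The 90-day check in the audit can be sketched the same way as the other credential-report checks: compare the last_rotated timestamps against a cutoff 90 days in the past. The rows and the fixed cutoff below are hypothetical stand-ins for decoded `aws iam get-credential-report` output:

```shell
# Fixed cutoff for illustration; normally computed as today minus 90 days.
cutoff="2023-12-01T00:00:00+00:00"
# Hypothetical rows: user, access_key_1_active, access_key_1_last_rotated.
report='user,access_key_1_active,access_key_1_last_rotated
alice,true,2023-10-15T00:00:00+00:00
bob,true,2024-01-20T00:00:00+00:00'
# Active keys whose last rotation predates the cutoff need rotating.
overdue=$(printf '%s\n' "$report" |
  awk -F, -v cutoff="$cutoff" \
    'NR>1 && $2=="true" && $3<cutoff {print $1}')
printf '%s\n' "$overdue"
```

Users printed here would then follow the create-new-key, update-applications, deactivate-old-key, delete-old-key sequence above.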
    System hardening through configuration management Configuration
    Implement safeguards to protect authenticators from unauthorized access. CC ID 15310 System hardening through configuration management Technical Security
    Change all default authenticators. CC ID 15309 System hardening through configuration management Configuration
    Configure user accounts. CC ID 07036 System hardening through configuration management Configuration
    Configure accounts with administrative privilege. CC ID 07033
    [{does not exist} Ensure no 'root' user account access key exists (Automated) Description: The 'root' user account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account. It is recommended that all access keys associated with the 'root' user account be deleted. Rationale: Deleting access keys associated with the 'root' user account limits vectors by which the account can be compromised. Additionally, deleting the 'root' access keys encourages the creation and use of role based accounts that are least privileged. Audit: Perform the following to determine if the 'root' user account has access keys: From Console: 1. Login to the AWS Management Console. 2. Click Services. 3. Click IAM. 4. Click on Credential Report. 5. This will download a .csv file which contains credential usage for all IAM users within an AWS Account - open this file. 6. For the user, ensure the access_key_1_active and access_key_2_active fields are set to FALSE. From Command Line: Run the following command: aws iam get-account-summary | grep "AccountAccessKeysPresent" If no 'root' access keys exist the output will show "AccountAccessKeysPresent": 0,. If the output shows a "1", then 'root' keys exist and should be deleted. Remediation: Perform the following to delete active 'root' user access keys. From Console: 1. Sign in to the AWS Management Console as 'root' and open the IAM console at https://console.aws.amazon.com/iam/. 2. Click on <root_account_name> at the top right and select My Security Credentials from the drop-down list. 3. On the pop out screen Click on Continue to Security Credentials. 4. Click on Access Keys (Access Key ID and Secret Access Key). 5. Under the Status column (if there are any Keys which are active). 6. Click Delete (Note: Deleted keys cannot be recovered). 
Note: While a key can be made inactive, this inactive key will still show up in the CLI command from the audit procedure, and may lead to a key being falsely flagged as being non-compliant. 1.4]
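The CLI audit greps a single counter out of the account summary. A sketch of that extraction against a hypothetical, abridged sample of `aws iam get-account-summary` output (0 means no root access keys exist):

```shell
# Hypothetical output of `aws iam get-account-summary` (abridged).
summary='{
  "SummaryMap": {
    "AccountAccessKeysPresent": 0,
    "AccountMFAEnabled": 1
  }
}'
# Pull out the root-access-key indicator the audit checks.
line=$(printf '%s\n' "$summary" | grep -o '"AccountAccessKeysPresent": [01]')
printf '%s\n' "$line"
```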
    System hardening through configuration management Configuration
    Employ multifactor authentication for accounts with administrative privilege. CC ID 12496
    [Ensure MFA is enabled for the 'root' user account (Automated) Description: The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device. Note: When virtual MFA is used for 'root' accounts, it is recommended that the device used is NOT a personal device, but rather a dedicated mobile device (tablet or phone) that is managed to be kept charged and secured independent of any individual personal devices. ("non-personal virtual MFA") This lessens the risks of losing access to the MFA due to device loss, device trade-in or if the individual owning the device is no longer employed at the company. Rationale: Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that emits a time-sensitive key and have knowledge of a credential. Audit: Perform the following to determine if the 'root' user account has MFA setup: From Console: 1. Login to the AWS Management Console 2. Click Services 3. Click IAM 4. Click on Credential Report 5. This will download a .csv file which contains credential usage for all IAM users within an AWS Account - open this file 6. For the user, ensure the mfa_active field is set to TRUE . From Command Line: 1. Run the following command: aws iam get-account-summary | grep "AccountMFAEnabled" 2. Ensure the AccountMFAEnabled property is set to 1 Remediation: Perform the following to establish MFA for the 'root' user account: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. Note: to manage MFA devices for the 'root' AWS account, you must use your 'root' account credentials to sign in to AWS. 
You cannot manage MFA devices for the 'root' account using other credentials. 2. Choose Dashboard , and under Security Status , expand Activate MFA on your root account. 3. Choose Activate MFA 4. In the wizard, choose A virtual MFA device and then choose Next Step. 5. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes. 6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications.) If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device). 7. Determine whether the MFA app supports QR codes, and then do one of the following: o Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code. o In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application. When you are finished, the virtual MFA device starts generating one-time passwords. In the Manage MFA Device wizard, in the Authentication Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the Authentication Code 2 box. Choose Assign Virtual MFA. 1.5
    Ensure hardware MFA is enabled for the 'root' user account (Manual) Description: The 'root' user account is the most privileged user in an AWS account. MFA adds an extra layer of protection on top of a user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password as well as for an authentication code from their AWS MFA device. For Level 2, it is recommended that the 'root' user account be protected with a hardware MFA. Rationale: A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA does not suffer the attack surface introduced by the mobile smartphone on which a virtual MFA resides. Note: Using hardware MFA for many, many AWS accounts may create a logistical device management issue. If this is the case, consider implementing this Level 2 recommendation selectively to the highest security AWS accounts and the Level 1 recommendation applied to the remaining accounts. Audit: Perform the following to determine if the 'root' user account has a hardware MFA setup: 1. Run the following command to determine if the 'root' account has MFA setup: aws iam get-account-summary | grep "AccountMFAEnabled" The AccountMFAEnabled property set to 1 will ensure that the 'root' user account has MFA (Virtual or Hardware) Enabled. If the AccountMFAEnabled property is set to 0 the account is not compliant with this recommendation. 2. If the AccountMFAEnabled property is set to 1, determine whether the 'root' account has hardware MFA enabled. Run the following command to list all virtual MFA devices: aws iam list-virtual-mfa-devices If the output contains one MFA with the following Serial Number, it means the MFA is virtual, not hardware, and the account is not compliant with this recommendation: "SerialNumber": "arn:aws:iam::<aws_account_number>:mfa/root-account-mfa-device" Remediation: Perform the following to establish a hardware MFA for the 'root' user account: 1. 
Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. Note: to manage MFA devices for the AWS 'root' user account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials. 2. Choose Dashboard , and under Security Status , expand Activate MFA on your root account. 3. Choose Activate MFA 4. In the wizard, choose A hardware MFA device and then choose Next Step. 5. In the Serial Number box, enter the serial number that is found on the back of the MFA device. 6. In the Authentication Code 1 box, enter the six-digit number displayed by the MFA device. You might need to press the button on the front of the device to display the number. 7. Wait 30 seconds while the device refreshes the code, and then enter the next six-digit number into the Authentication Code 2 box. You might need to press the button on the front of the device again to display the second number. 8. Choose Next Step. The MFA device is now associated with the AWS account. The next time you use your AWS account credentials to sign in, you must type a code from the hardware MFA device. Remediation for this recommendation is not available through AWS CLI. 1.6]
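The command-line audit above boils down to checking whether the root user's MFA serial number appears in the virtual device list. A hypothetical helper (the function name is ours; the serial format follows the benchmark's audit step, with the conventional `root-account-mfa-device` hyphenation) operating on parsed `aws iam list-virtual-mfa-devices` output:

```python
def root_mfa_is_virtual(list_virtual_mfa_output: dict, account_id: str) -> bool:
    """Return True when the root user's MFA is a *virtual* device, i.e. the
    account is non-compliant with the hardware-MFA recommendation (1.6).
    `list_virtual_mfa_output` is the parsed JSON from
    `aws iam list-virtual-mfa-devices`."""
    root_serial = f"arn:aws:iam::{account_id}:mfa/root-account-mfa-device"
    return any(device.get("SerialNumber") == root_serial
               for device in list_virtual_mfa_output.get("VirtualMFADevices", []))
```

Note this only distinguishes virtual from hardware MFA; AccountMFAEnabled from `get-account-summary` must still confirm that some MFA is enabled at all.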
    System hardening through configuration management Technical Security
    Establish, implement, and maintain network parameter modification procedures. CC ID 01517 System hardening through configuration management Establish/Maintain Documentation
    Configure routing tables to organizational standards. CC ID 15438
    [Ensure routing tables for VPC peering are "least access" (Manual) Description: Once a VPC peering connection is established, routing tables must be updated to establish any connections between the peered VPCs. These routes can be as specific as desired - even peering a VPC to only a single host on the other side of the connection. Rationale: Being highly selective in peering routing tables is a very effective way of minimizing the impact of a breach, as resources outside of these routes are inaccessible to the peered VPC. Audit: Review routing tables of peered VPCs for whether they route all subnets of each VPC and whether that is necessary to accomplish the intended purposes for peering the VPCs. From Command Line: 1. List all the route tables from a VPC and check if "GatewayId" is pointing to a VPC peering connection (e.g. pcx-1a2b3c4d) and if "DestinationCidrBlock" is as specific as desired. aws ec2 describe-route-tables --filter "Name=vpc-id,Values=<vpc_id>" --query "RouteTables[*].{RouteTableId:RouteTableId, VpcId:VpcId, Routes:Routes, AssociatedSubnets:Associations[*].SubnetId}" Remediation: Remove and add route table entries to ensure that only the least number of subnets or hosts required to accomplish the purpose for peering are routable. From Command Line: 1. For each route table containing routes non-compliant with your routing policy (which grants more than desired "least access"), delete the non-compliant route: aws ec2 delete-route --route-table-id <route_table_id> --destination-cidr-block <non_compliant_destination_cidr> 2. Create a new compliant route: aws ec2 create-route --route-table-id <route_table_id> --destination-cidr-block <compliant_destination_cidr> --vpc-peering-connection-id <peering_connection_id> 5.5]
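To support the audit step, the `describe-route-tables` output can be post-processed to list every route that targets a peering connection together with its CIDR prefix length (a longer prefix means a more specific, more "least access" route). A sketch under the documented output shape; the function name and the flagging threshold are ours:

```python
import ipaddress

def peering_routes(describe_route_tables_output: dict):
    """Extract (RouteTableId, DestinationCidrBlock, prefix_length) for every
    route whose target is a VPC peering connection (pcx-*), from parsed
    `aws ec2 describe-route-tables` output."""
    found = []
    for table in describe_route_tables_output.get("RouteTables", []):
        for route in table.get("Routes", []):
            target = route.get("VpcPeeringConnectionId") or route.get("GatewayId", "")
            if target.startswith("pcx-"):
                cidr = route["DestinationCidrBlock"]
                prefix = ipaddress.ip_network(cidr, strict=False).prefixlen
                found.append((table["RouteTableId"], cidr, prefix))
    return found
```

Reviewers can then sort by prefix length and question any broad route (e.g. an entire VPC CIDR) that the peering purpose does not require.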
    System hardening through configuration management Configuration
    Configure Services settings to organizational standards. CC ID 07434 System hardening through configuration management Configuration
    Configure AWS Config to organizational standards. CC ID 15440
    [Ensure AWS Config is enabled in all regions (Automated) Description: AWS Config is a web service that performs configuration management of supported AWS resources within your account and delivers log files to you. The recorded information includes the configuration item (AWS resource), relationships between configuration items (AWS resources), any configuration changes between resources. It is recommended AWS Config be enabled in all regions. Rationale: The AWS configuration item history captured by AWS Config enables security analysis, resource change tracking, and compliance auditing. Impact: It is recommended AWS Config be enabled in all regions. Audit: Process to evaluate AWS Config configuration per region From Console: 1. Sign in to the AWS Management Console and open the AWS Config console at https://console.aws.amazon.com/config/. 2. On the top right of the console select target Region. 3. If a Config recorder is enabled in this region, you should navigate to the Settings page from the navigation menu on the left hand side. If a Config recorder is not yet enabled in this region then you should select "Get Started". 4. Ensure "Record all resources supported in this region" is checked. 5. Ensure "Include global resources (e.g., AWS IAM resources)" is checked, unless it is enabled in another region (this is only required in one region) 6. Ensure the correct S3 bucket has been defined. 7. Ensure the correct SNS topic has been defined. 8. Repeat steps 2 to 7 for each region. From Command Line: 1. Run this command to show all AWS Config recorders and their properties: aws configservice describe-configuration-recorders 2. Evaluate the output to ensure that all recorders have a recordingGroup object which includes "allSupported": true. Additionally, ensure that at least one recorder has "includeGlobalResourceTypes": true Note: There is one more parameter "ResourceTypes" in recordingGroup object. 
We do not need to check this separately: whenever "allSupported" is set to true, AWS enforces the resource types list to be empty ("ResourceTypes":[]) Sample Output: { "ConfigurationRecorders": [ { "recordingGroup": { "allSupported": true, "resourceTypes": [], "includeGlobalResourceTypes": true }, "roleARN": "arn:aws:iam:::role/servicerole/", "name": "default" } ] } 3. Run this command to show the status for all AWS Config recorders: aws configservice describe-configuration-recorder-status 4. In the output, find recorders with name key matching the recorders that were evaluated in step 2. Ensure that they include "recording": true and "lastStatus": "SUCCESS" Remediation: To implement AWS Config configuration: From Console: 1. Select the region you want to focus on in the top right of the console 2. Click Services 3. Click Config 4. If a Config recorder is enabled in this region, you should navigate to the Settings page from the navigation menu on the left hand side. If a Config recorder is not yet enabled in this region then you should select "Get Started". 5. Select "Record all resources supported in this region" 6. Choose to include global resources (IAM resources) 7. Specify an S3 bucket in the same account or in another managed AWS account 8. Create an SNS Topic from the same AWS account or another managed AWS account From Command Line: 1. Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the AWS Config Service prerequisites. 2. Run this command to create a new configuration recorder: aws configservice put-configuration-recorder --configuration-recorder name=default,roleARN=arn:aws:iam::012345678912:role/myConfigRole --recording-group allSupported=true,includeGlobalResourceTypes=true 3. 
Create a delivery channel configuration file locally which specifies the channel attributes, populated from the prerequisites set up previously: { "name": "default", "s3BucketName": "my-config-bucket", "snsTopicARN": "arn:aws:sns:us-east-1:012345678912:my-config-notice", "configSnapshotDeliveryProperties": { "deliveryFrequency": "Twelve_Hours" } } 4. Run this command to create a new delivery channel, referencing the json configuration file made in the previous step: aws configservice put-delivery-channel --delivery-channel file://deliveryChannel.json 5. Start the configuration recorder by running the following command: aws configservice start-configuration-recorder --configuration-recorder-name default 3.3]
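The audit steps above reduce to a few checks on the parsed JSON from `describe-configuration-recorders` and `describe-configuration-recorder-status`: every recorder records all supported resources, at least one includes global resource types, and every recorder is actively recording with a SUCCESS status. A sketch (the function name is ours; field names follow the sample output shown above):

```python
def config_recorder_compliant(recorders: dict, statuses: dict) -> bool:
    """Evaluate parsed `aws configservice describe-configuration-recorders`
    output (key ConfigurationRecorders) together with parsed
    `describe-configuration-recorder-status` output
    (key ConfigurationRecordersStatus) per the audit steps."""
    recs = recorders.get("ConfigurationRecorders", [])
    status_by_name = {s["name"]: s for s in statuses.get("ConfigurationRecordersStatus", [])}
    if not recs:
        return False  # no recorder in this region at all
    all_supported = all(r.get("recordingGroup", {}).get("allSupported") for r in recs)
    has_global = any(r.get("recordingGroup", {}).get("includeGlobalResourceTypes") for r in recs)
    recording = all(status_by_name.get(r["name"], {}).get("recording")
                    and status_by_name.get(r["name"], {}).get("lastStatus") == "SUCCESS"
                    for r in recs)
    return bool(all_supported and has_global and recording)
```

The global-resource check only needs to pass in one region, so a multi-region sweep would relax `has_global` once any region satisfies it.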
    System hardening through configuration management Configuration
    Configure Logging settings in accordance with organizational standards. CC ID 07611 System hardening through configuration management Configuration
    Configure "CloudTrail" to organizational standards. CC ID 15443
    [Ensure CloudTrail is enabled in all regions (Automated) Description: AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail provides a history of AWS API calls for an account, including API calls made via the Management Console, SDKs, command line tools, and higher-level AWS services (such as CloudFormation). Rationale: The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. Additionally, • ensuring that a multi-region trail exists will ensure that unexpected activity occurring in otherwise unused regions is detected • ensuring that a multi-region trail exists will ensure that Global Service Logging is enabled for a trail by default to capture recording of events generated on AWS global services • for a multi-region trail, ensuring that management events are configured for all types of Read/Writes ensures recording of management operations that are performed on all resources in an AWS account Impact: S3 lifecycle features can be used to manage the accumulation and management of logs over time. See the following AWS resource for more information on these features: 1. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html Audit: Perform the following to determine if CloudTrail is enabled for all regions: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. Click on Trails on the left navigation pane • You will be presented with a list of trails across all regions 3. Ensure at least one Trail has Yes specified in the Multi-region trail column 4. Click on a trail via the link in the Name column 5. Ensure Logging is set to ON 6. 
Ensure Multi-region trail is set to Yes 7. In the Management Events section ensure API activity is set to ALL From Command Line: aws cloudtrail describe-trails Ensure IsMultiRegionTrail is set to true aws cloudtrail get-trail-status --name <trail_name> Ensure IsLogging is set to true aws cloudtrail get-event-selectors --trail-name <trail_name> Ensure there is at least one fieldSelector for a Trail that equals Management. This should NOT output any results for Field: "readOnly"; if either true or false is returned, one of the checkboxes is not selected for read or write. Example of correct output: "TrailARN": "", "AdvancedEventSelectors": [ { "Name": "Management events selector", "FieldSelectors": [ { "Field": "eventCategory", "Equals": [ "Management" ] Remediation: Perform the following to enable global (Multi-region) CloudTrail logging: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. Click on Trails on the left navigation pane 3. Click Get Started Now , if presented • Click Add new trail • Enter a trail name in the Trail name box • A trail created in the console is a multi-region trail by default • Specify an S3 bucket name in the S3 bucket box • Specify the AWS KMS alias under the Log file SSE-KMS encryption section or create a new key • Click Next 4. Ensure the Management events check box is selected. 5. Ensure both Read and Write are checked under API activity 6. Click Next 7. Review your trail settings and click Create trail From Command Line: aws cloudtrail create-trail --name <trail_name> --bucket-name <s3_bucket_name> --is-multi-region-trail aws cloudtrail update-trail --name <trail_name> --is-multi-region-trail Note: Creating CloudTrail via CLI without providing any overriding options configures Management Events to all types of Read/Writes by default. Default Value: Not Enabled 3.1]
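The command-line audit above (at least one multi-region trail that is actively logging) can be expressed as a small check over the parsed outputs of `describe-trails` and per-trail `get-trail-status`. Illustrative only; the function name and the name-keyed status mapping are ours:

```python
def multi_region_trail_active(describe_trails_output: dict,
                              statuses_by_name: dict) -> bool:
    """True when at least one trail in parsed `aws cloudtrail describe-trails`
    output has IsMultiRegionTrail set and its parsed `get-trail-status`
    result (looked up by trail name) shows IsLogging."""
    return any(trail.get("IsMultiRegionTrail")
               and statuses_by_name.get(trail.get("Name"), {}).get("IsLogging")
               for trail in describe_trails_output.get("trailList", []))
```

A full check would also confirm, via `get-event-selectors`, that the trail records all Management events for both reads and writes, as the audit text describes.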
    System hardening through configuration management Configuration
    Configure "CloudTrail log file validation" to organizational standards. CC ID 15437
    [Ensure CloudTrail log file validation is enabled (Automated) Description: CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3. These digest files can be used to determine whether a log file was changed, deleted, or unchanged after CloudTrail delivered the log. It is recommended that file validation be enabled on all CloudTrails. Rationale: Enabling log file validation will provide additional integrity checking of CloudTrail logs. Audit: Perform the following on each trail to determine if log file validation is enabled: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. Click on Trails on the left navigation pane 3. For every trail: • Click on a trail via the link in the Name column • Under the General details section, ensure Log file validation is set to Enabled From Command Line: aws cloudtrail describe-trails Ensure LogFileValidationEnabled is set to true for each trail Remediation: Perform the following to enable log file validation on a given trail: From Console: 1. Sign in to the AWS Management Console and open the CloudTrail console at https://console.aws.amazon.com/cloudtrail 2. Click on Trails on the left navigation pane 3. Click on target trail 4. Within the General details section click edit 5. Under the Advanced settings section 6. Check the enable box under Log file validation 7. Click Save changes From Command Line: aws cloudtrail update-trail --name <trail_name> --enable-log-file-validation Note that periodic validation of logs using these digests can be performed by running the following command: aws cloudtrail validate-logs --trail-arn <trail_arn> --start-time <start_time> --end-time <end_time> Default Value: Not Enabled 3.2]
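The per-trail command-line check above is a one-line filter over parsed `describe-trails` output; a sketch (function name is ours) that lists the trails still needing remediation:

```python
def trails_missing_validation(describe_trails_output: dict):
    """Return the names of trails, from parsed `aws cloudtrail describe-trails`
    output, whose LogFileValidationEnabled flag is absent or false."""
    return [trail["Name"]
            for trail in describe_trails_output.get("trailList", [])
            if not trail.get("LogFileValidationEnabled")]
```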
    System hardening through configuration management Configuration
    Configure "VPC flow logging" to organizational standards. CC ID 15436
    [Ensure VPC flow logging is enabled in all VPCs (Automated) Description: VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. It is recommended that VPC Flow Logs be enabled for packet "Rejects" for VPCs. Rationale: VPC Flow Logs provide visibility into network traffic that traverses the VPC and can be used to detect anomalous traffic or insight during security workflows. Impact: By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods: 1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html Audit: Perform the following to determine if VPC Flow logs are enabled: From Console: 1. Sign into the management console 2. Select Services then VPC 3. In the left navigation pane, select Your VPCs 4. Select a VPC 5. In the right pane, select the Flow Logs tab. 6. Ensure a Log Flow exists that has Active in the Status column. From Command Line: 1. Run describe-vpcs command (OSX/Linux/UNIX) to list the VPC networks available in the current AWS region: aws ec2 describe-vpcs --region <region> --query Vpcs[].VpcId 2. The command output returns the VpcId available in the selected region. 3. 
Run describe-flow-logs command (OSX/Linux/UNIX) using the VPC ID to determine if the selected virtual network has the Flow Logs feature enabled: aws ec2 describe-flow-logs --filter "Name=resource-id,Values=<vpc_id>" 4. If there are no Flow Logs created for the selected VPC, the command output will return an empty list []. 5. Repeat step 3 for other VPCs available in the same region. 6. Change the region by updating --region and repeat steps 1 - 5 for all the VPCs. Remediation: Perform the following to enable VPC Flow Logs: From Console: 1. Sign into the management console 2. Select Services then VPC 3. In the left navigation pane, select Your VPCs 4. Select a VPC 5. In the right pane, select the Flow Logs tab. 6. If no Flow Log exists, click Create Flow Log 7. For Filter, select Reject 8. Enter in a Role and Destination Log Group 9. Click Create Log Flow 10. Click on CloudWatch Logs Group Note: Setting the filter to "Reject" will dramatically reduce the logging data accumulation for this recommendation and provide sufficient information for the purposes of breach detection, research and remediation. However, during periods of least privilege security group engineering, setting the filter to "All" can be very helpful in discovering existing traffic flows required for proper operation of an already running environment. From Command Line: 1. Create a policy document and name it as role_policy_document.json and paste the following content: { "Version": "2012-10-17", "Statement": [ { "Sid": "test", "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } 2. Create another policy document and name it as iam_policy.json and paste the following content: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action":[ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:DescribeLogGroups", "logs:DescribeLogStreams", "logs:PutLogEvents", "logs:GetLogEvents", "logs:FilterLogEvents" ], "Resource": "*" } ] } 3. 
Run the below command to create an IAM role: aws iam create-role --role-name <role_name> --assume-role-policy-document file://role_policy_document.json 4. Run the below command to create an IAM policy: aws iam create-policy --policy-name <policy_name> --policy-document file://iam_policy.json 5. Run attach-role-policy command using the IAM policy ARN returned at the previous step to attach the policy to the IAM role (if the command succeeds, no output is returned): aws iam attach-role-policy --policy-arn arn:aws:iam::<account_id>:policy/<policy_name> --role-name <role_name> 6. Run describe-vpcs to get the VpcId available in the selected region: aws ec2 describe-vpcs --region <region> 7. The command output should return the VPC Id available in the selected region. 8. Run create-flow-logs to create a flow log for the vpc: aws ec2 create-flow-logs --resource-type VPC --resource-ids <vpc_id> --traffic-type REJECT --log-group-name <log_group_name> --deliver-logs-permission-arn <role_arn> 9. Repeat step 8 for other vpcs available in the selected region. 10. Change the region by updating --region and repeat remediation procedure for other vpcs. 3.7]
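The audit loop above (per VPC, an empty `describe-flow-logs` result means non-compliance) can be sketched over parsed CLI output. Function and parameter names are ours:

```python
def vpcs_without_active_flow_logs(vpc_ids, flow_logs_by_vpc: dict):
    """Given the VPC IDs from `aws ec2 describe-vpcs` and, per VPC, the
    FlowLogs list from `aws ec2 describe-flow-logs`, return the VPCs that
    have no flow log in the ACTIVE state (including the audit's
    'empty list []' case)."""
    return [vpc for vpc in vpc_ids
            if not any(flow_log.get("FlowLogStatus") == "ACTIVE"
                       for flow_log in flow_logs_by_vpc.get(vpc, []))]
```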
    System hardening through configuration management Configuration
    Configure "object-level logging" to organizational standards. CC ID 15433
    [Ensure that Object-level logging for write events is enabled for S3 bucket (Automated) Description: S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don't log data events and so it is recommended to enable Object-level logging for S3 buckets. Rationale: Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity within your S3 Buckets using Amazon CloudWatch Events. Impact: Enabling logging for these object level events may significantly increase the number of events logged and may incur additional cost. Audit: From Console: 1. Login to the AWS Management Console and navigate to CloudTrail dashboard at https://console.aws.amazon.com/cloudtrail/ 2. In the left panel, click Trails and then click on the CloudTrail Name that you want to examine. 3. Review General details 4. Confirm that Multi-region trail is set to Yes 5. Scroll down to Data events 6. Confirm that it reads: Data Events:S3 Log selector template Log all events If 'basic events selectors' is being used it should read: Data events: S3 Bucket Name: All current and future S3 buckets Write: Enabled 7. Repeat steps 2 to 6 to verify that Multi-region trail and Data events logging of S3 buckets are configured in CloudTrail. If the CloudTrails do not have multi-region and data events configured for S3 refer to the remediation below. From Command Line: 1. Run list-trails command to list the names of all Amazon CloudTrail trails currently available in all AWS regions: aws cloudtrail list-trails 2. The command output will be a list of all the trail names to include. "TrailARN": "arn:aws:cloudtrail:::trail/", "Name": "", "HomeRegion": "" 3. Next run the get-trail command to determine Multi-region: aws cloudtrail get-trail --name <trail_name> --region <region_name> 4. 
The command output should include: "IsMultiRegionTrail": true, 5. Next run get-event-selectors command using the Name of the trail and the region returned in step 2 to determine if Data events logging feature is enabled within the selected CloudTrail trail for all S3 buckets: aws cloudtrail get-event-selectors --region <region_name> --trail-name <trail_name> --query EventSelectors[*].DataResources[] 6. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector. "Type": "AWS::S3::Object", "Values": [ "arn:aws:s3" 7. If the get-event-selectors command returns an empty array '[]', the Data events are not included in the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded. 8. Repeat steps 1 to 5 for auditing each CloudTrail to determine if Data events for S3 are covered. If Multi-region is not set to true and the Data events does not show S3 defined as shown refer to the remediation procedure below. Remediation: From Console: 1. Login to the AWS Management Console and navigate to S3 dashboard at https://console.aws.amazon.com/s3/ 2. In the left navigation panel, click buckets and then click on the S3 Bucket Name that you want to examine. 3. Click Properties tab to see in detail bucket configuration. 4. In the 'AWS CloudTrail data events' section select the CloudTrail name for the recording activity. You can choose an existing CloudTrail or create a new one by clicking the Configure in CloudTrail button or navigating to the CloudTrail console link https://console.aws.amazon.com/cloudtrail/ 5. Once the CloudTrail is selected, select the Data Events check box. 6. Select S3 from the Data event type drop down. 7. Select Log all events from the Log selector template drop down. 8. Repeat steps 2 to 5 to enable object-level logging of write events for other S3 buckets. From Command Line: 1. 
To enable object-level data events logging for S3 buckets within your AWS account, run put-event-selectors command using the name of the trail that you want to reconfigure as identifier: aws cloudtrail put-event-selectors --region --trail-name --event-selectors '[{ "ReadWriteType": "WriteOnly", "IncludeManagementEvents":true, "DataResources": [{ "Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::/"] }] }]' 2. The command output will be object-level event trail configuration. 3. If you want to enable it for all buckets at once then change Values parameter to ["arn:aws:s3"] in command given above. 4. Repeat step 1 for each s3 bucket to update object-level logging of write events. 5. Change the AWS region by updating the --region command parameter and perform the process for other regions. 3.8
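The command-line audit for this recommendation comes down to finding an S3 data-event selector of the right read/write type in parsed `get-event-selectors` output (classic EventSelectors form, as shown in the audit steps). A hypothetical checker, with the function name and default parameter ours:

```python
def s3_data_events_enabled(event_selectors: dict, read_write: str = "WriteOnly") -> bool:
    """Check parsed `aws cloudtrail get-event-selectors` output for S3
    object-level data event logging of the given type ('WriteOnly' for
    recommendation 3.8, 'ReadOnly' for 3.9); 'All' always qualifies."""
    for selector in event_selectors.get("EventSelectors", []):
        if selector.get("ReadWriteType") not in ("All", read_write):
            continue
        for resource in selector.get("DataResources", []):
            # "arn:aws:s3" alone covers all current and future buckets
            if resource.get("Type") == "AWS::S3::Object" and any(
                value.startswith("arn:aws:s3") for value in resource.get("Values", [])
            ):
                return True
    return False
```

Trails configured with advanced event selectors would need the AdvancedEventSelectors form checked instead, as in the multi-region trail audit earlier.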
    Ensure that Object-level logging for read events is enabled for S3 bucket (Automated) Description: S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don't log data events and so it is recommended to enable Object-level logging for S3 buckets. Rationale: Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity using Amazon CloudWatch Events. Impact: Enabling logging for these object level events may significantly increase the number of events logged and may incur additional cost. Audit: From Console: 1. Login to the AWS Management Console and navigate to CloudTrail dashboard at https://console.aws.amazon.com/cloudtrail/ 2. In the left panel, click Trails and then click on the CloudTrail Name that you want to examine. 3. Review General details 4. Confirm that Multi-region trail is set to Yes 5. Scroll down to Data events 6. Confirm that it reads: Data Events:S3 Log selector template Log all events If 'basic events selectors' is being used it should read: Data events: S3 Bucket Name: All current and future S3 buckets Read: Enabled 7. Repeat steps 2 to 6 to verify that Multi-region trail and Data events logging of S3 buckets are configured in CloudTrail. If the CloudTrails do not have multi-region and data events configured for S3 refer to the remediation below. From Command Line: 1. Run describe-trails command to list the names of all Amazon CloudTrail trails currently available in the selected AWS region: aws cloudtrail describe-trails --region <region> --output table --query trailList[*].Name 2. The command output will be a table of the requested trail names. 3. 
Run get-event-selectors command using the name of the trail returned at the previous step and custom query filters to determine if Data events logging feature is enabled within the selected CloudTrail trail configuration for s3 bucket resources: aws cloudtrail get-event-selectors --region <region> --trail-name <trail_name> --query EventSelectors[*].DataResources[] 4. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector. 5. If the get-event-selectors command returns an empty array, the Data events are not included into the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded. 6. Repeat steps 1 to 5 for auditing each s3 bucket to identify other trails that are missing the capability to log Data events. 7. Change the AWS region by updating the --region command parameter and perform the audit process for other regions. Remediation: From Console: 1. Login to the AWS Management Console and navigate to S3 dashboard at https://console.aws.amazon.com/s3/ 2. In the left navigation panel, click buckets and then click on the S3 Bucket Name that you want to examine. 3. Click Properties tab to see in detail bucket configuration. 4. In the 'AWS CloudTrail data events' section select the CloudTrail name for the recording activity. You can choose an existing CloudTrail or create a new one by clicking the Configure in CloudTrail button or navigating to the CloudTrail console link https://console.aws.amazon.com/cloudtrail/ 5. Once the CloudTrail is selected, select the Data Events check box. 6. Select S3 from the Data event type drop down. 7. Select Log all events from the Log selector template drop down. 8. Repeat steps 2 to 5 to enable object-level logging of read events for other S3 buckets. From Command Line: 1. 
To enable object-level data events logging for S3 buckets within your AWS account, run put-event-selectors command using the name of the trail that you want to reconfigure as identifier: aws cloudtrail put-event-selectors --region --trail-name --event-selectors '[{ "ReadWriteType": "ReadOnly", "IncludeManagementEvents":true, "DataResources": [{ "Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::/"] }] }]' 2. The command output will be object-level event trail configuration. 3. If you want to enable it for all buckets at once then change Values parameter to ["arn:aws:s3"] in command given above. 4. Repeat step 1 for each s3 bucket to update object-level logging of read events. 5. Change the AWS region by updating the --region command parameter and perform the process for other regions. 3.9]
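The write-event (3.8) and read-event (3.9) remediations above use the same `put-event-selectors` payload, differing only in ReadWriteType. A small helper to build that JSON; the function and parameter names are ours, and the structure mirrors the `--event-selectors` argument shown in the commands:

```python
import json

def build_s3_event_selectors(read_write_type: str,
                             bucket_arns=("arn:aws:s3",)) -> str:
    """Build the --event-selectors JSON for `aws cloudtrail put-event-selectors`.
    read_write_type: 'ReadOnly' (recommendation 3.9) or 'WriteOnly' (3.8).
    The default bucket ARN "arn:aws:s3" covers all current and future buckets;
    pass specific ARNs like "arn:aws:s3:::my-bucket/" to scope it down."""
    return json.dumps([{
        "ReadWriteType": read_write_type,
        "IncludeManagementEvents": True,
        "DataResources": [{"Type": "AWS::S3::Object", "Values": list(bucket_arns)}],
    }])
```

The returned string can be passed directly as the value of `--event-selectors`, repeated per region and trail as the remediation steps describe.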
    System hardening through configuration management Configuration
    Configure all logs to capture auditable events or actionable events. CC ID 06332 System hardening through configuration management Configuration
    Configure the log to capture AWS Organizations changes. CC ID 15445
    [Ensure AWS Organizations changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for AWS Organizations changes made in the master AWS Account. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring AWS Organizations changes can help you prevent any unwanted, accidental or intentional modifications that may lead to unauthorized access or other security breaches. This monitoring technique helps you to ensure that any unexpected changes performed within your AWS Organizations can be investigated and any unwanted changes can be rolled back. Audit: If you are using CloudTrails and CloudWatch, perform the following: 1. Ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: • Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true • From the value associated with CloudWatchLogsLogGroupArn, note the <cloudtrail_log_group_name>. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<account_id>:log-group:NewGroup:*, <cloudtrail_log_group_name> would be NewGroup • Ensure the identified Multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> Ensure IsLogging is set to TRUE • Ensure the identified Multi-region CloudTrail captures all Management Events: aws cloudtrail get-event-selectors --trail-name <trail_name> • Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All. 2. 
Get a list of all associated metric filters for this <cloudtrail_log_group_name>: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>" 3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = "AcceptHandshake") || ($.eventName = "AttachPolicy") || ($.eventName = "CreateAccount") || ($.eventName = "CreateOrganizationalUnit") || ($.eventName = "CreatePolicy") || ($.eventName = "DeclineHandshake") || ($.eventName = "DeleteOrganization") || ($.eventName = "DeleteOrganizationalUnit") || ($.eventName = "DeletePolicy") || ($.eventName = "DetachPolicy") || ($.eventName = "DisablePolicyType") || ($.eventName = "EnablePolicyType") || ($.eventName = "InviteAccountToOrganization") || ($.eventName = "LeaveOrganization") || ($.eventName = "MoveAccount") || ($.eventName = "RemoveAccountFromOrganization") || ($.eventName = "UpdatePolicy") || ($.eventName = "UpdateOrganizationalUnit")) }" 4. Note the <organizations_changes> metric name associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the <organizations_changes> captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<organizations_changes>`]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> At least one subscription should have "SubscriptionArn" with valid aws ARN. Example of valid "SubscriptionArn": "arn:aws:sns::::" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. 
Create a metric filter based on filter pattern provided which checks for AWS Organizations changes and the taken from audit step 1: aws logs put-metric-filter --log-group-name -filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 -filter-pattern '{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = "AcceptHandshake") || ($.eventName = "AttachPolicy") || ($.eventName = "CreateAccount") || ($.eventName = "CreateOrganizationalUnit") || ($.eventName = "CreatePolicy") || ($.eventName = "DeclineHandshake") || ($.eventName = "DeleteOrganization") || ($.eventName = "DeleteOrganizationalUnit") || ($.eventName = "DeletePolicy") || ($.eventName = "DetachPolicy") || ($.eventName = "DisablePolicyType") || ($.eventName = "EnablePolicyType") || ($.eventName = "InviteAccountToOrganization") || ($.eventName = "LeaveOrganization") || ($.eventName = "MoveAccount") || ($.eventName = "RemoveAccountFromOrganization") || ($.eventName = "UpdatePolicy") || ($.eventName = "UpdateOrganizationalUnit")) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify: aws sns create-topic --name Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn --protocol --notification-endpoint Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name `` -metric-name `` --statistic Sum --period 300 -threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluationperiods 1 --namespace 'CISBenchmark' --alarm-actions 4.15]
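The four remediation steps above can be sketched as a single dry-run script. Every concrete value below (log group, filter, metric, topic and alarm names, region, account number, and subscription endpoint) is an illustrative assumption, not a value prescribed by the benchmark, and the echo prefixes print each AWS CLI command instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of remediation steps 1-4. All names are assumed placeholders.
set -eu

LOG_GROUP="CloudTrail/DefaultLogGroup"      # <trail_log_group_name>, assumed
FILTER_NAME="organizations_changes_filter"  # assumed
METRIC_NAME="organizations_changes_metric"  # assumed
TOPIC_ARN="arn:aws:sns:us-east-1:111122223333:cis-alarms"  # assumed

# Abridged filter pattern for readability; substitute the full pattern
# from the benchmark text before use.
FILTER_PATTERN='{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = "CreateAccount") || ($.eventName = "LeaveOrganization")) }'

# Step 1: metric filter on the CloudTrail log group (drop "echo" to execute).
echo aws logs put-metric-filter --log-group-name "$LOG_GROUP" \
  --filter-name "$FILTER_NAME" \
  --metric-transformations "metricName=$METRIC_NAME,metricNamespace=CISBenchmark,metricValue=1" \
  --filter-pattern "$FILTER_PATTERN"

# Steps 2-3: SNS topic and subscriber, re-usable across all monitoring alarms.
echo aws sns create-topic --name cis-alarms
echo aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol email \
  --notification-endpoint security-team@example.com

# Step 4: alarm that fires on one or more matching events per 5-minute period.
echo aws cloudwatch put-metric-alarm --alarm-name "${METRIC_NAME}_alarm" \
  --metric-name "$METRIC_NAME" --namespace CISBenchmark \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 --alarm-actions "$TOPIC_ARN"
```

The --period 300 and --threshold 1 pairing alarms on the first matching event within any five-minute window, matching the benchmark's prescribed values.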
    System hardening through configuration management Configuration
    Configure the log to capture Identity and Access Management policy changes. CC ID 15442
    [Ensure IAM policy changes are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or to an external Security Information and Event Management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for changes made to Identity and Access Management (IAM) policies.
Rationale: CloudWatch is an AWS-native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external SIEM environment for monitoring and alerting. Monitoring changes to IAM policies will help ensure authentication and authorization controls remain intact.
Impact: Monitoring these changes may cause a number of "false positives", more so in larger environments. This alert may need more tuning than others to eliminate some of those erroneous alerts.
Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail:
• List all CloudTrails: aws cloudtrail describe-trails
• Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
• From the value associated with CloudWatchLogsLogGroupArn, note the log group name. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, the log group name would be NewGroup
• Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> and ensure IsLogging is set to TRUE
• Ensure the identified multi-region CloudTrail captures all management events: aws cloudtrail get-event-selectors --trail-name <trail_name> and ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all metric filters associated with this log group: aws logs describe-metric-filters --log-group-name "<trail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}"
4. Note the metric name associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== `<metric_name>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>".
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, that checks for IAM policy changes, using the log group name taken from audit step 1: aws logs put-metric-filter --log-group-name <trail_log_group_name> --filter-name <metric_filter_name> --metric-transformations metricName=<metric_name>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>. Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>. Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <alarm_name> --metric-name <metric_name> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.4]
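Because the IAM filter pattern chains sixteen event names with ||, it is easy to drop or mistype one when copying it. A small sketch can assemble the pattern from the benchmark's event list instead; the log group, filter, and metric names below are assumed placeholders, and the echo prefix prints the command rather than executing it.

```shell
#!/bin/sh
# Sketch: assemble the IAM-policy filter pattern from the benchmark's event
# list, then print the put-metric-filter call. Names are assumed placeholders.
set -eu

EVENTS="DeleteGroupPolicy DeleteRolePolicy DeleteUserPolicy PutGroupPolicy \
PutRolePolicy PutUserPolicy CreatePolicy DeletePolicy CreatePolicyVersion \
DeletePolicyVersion AttachRolePolicy DetachRolePolicy AttachUserPolicy \
DetachUserPolicy AttachGroupPolicy DetachGroupPolicy"

# Join each event as ($.eventName=<event>) with || between terms.
PATTERN=""
for e in $EVENTS; do
  PATTERN="${PATTERN:+$PATTERN||}(\$.eventName=$e)"
done
PATTERN="{$PATTERN}"

echo aws logs put-metric-filter --log-group-name "CloudTrail/DefaultLogGroup" \
  --filter-name iam_changes_filter \
  --metric-transformations metricName=iam_changes_metric,metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern "$PATTERN"
```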
    System hardening through configuration management Configuration
    Configure the log to capture management console sign-in without multi-factor authentication. CC ID 15441
    [Ensure management console sign-in without MFA is monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or to an external Security Information and Event Management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for console logins that are not protected by multi-factor authentication (MFA).
Rationale: CloudWatch is an AWS-native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external SIEM environment for monitoring and alerting. Monitoring for single-factor console logins will increase visibility into accounts that are not protected by MFA. These types of accounts are more susceptible to compromise and unauthorized access.
Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail:
• List all CloudTrails: aws cloudtrail describe-trails
• Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
• From the value associated with CloudWatchLogsLogGroupArn, note the log group name. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, the log group name would be NewGroup
• Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> and ensure in the output that IsLogging is set to TRUE
• Ensure the identified multi-region CloudTrail captures all management events: aws cloudtrail get-event-selectors --trail-name <trail_name> and ensure in the output there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all metric filters associated with this log group: aws logs describe-metric-filters --log-group-name "<trail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") }" or, to reduce false positives in case Single Sign-On (SSO) is used in the organization: "filterPattern": "{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") && ($.userIdentity.type = "IAMUser") && ($.responseElements.ConsoleLogin = "Success") }"
4. Note the metric name associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== `<metric_name>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>".
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, that checks for AWS Management Console sign-in without MFA, using the log group name taken from audit step 1. Use the command: aws logs put-metric-filter --log-group-name <trail_log_group_name> --filter-name <metric_filter_name> --metric-transformations metricName=<metric_name>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") }' or, to reduce false positives in case Single Sign-On (SSO) is used in the organization: aws logs put-metric-filter --log-group-name <trail_log_group_name> --filter-name <metric_filter_name> --metric-transformations metricName=<metric_name>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") && ($.userIdentity.type = "IAMUser") && ($.responseElements.ConsoleLogin = "Success") }'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>. Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>. Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <alarm_name> --metric-name <metric_name> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.2]
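The remediation offers two filter patterns, and the choice between them can be made explicit in a script. In this sketch the USE_SSO flag, log group name, and filter/metric names are assumptions for illustration; the echo prefix prints the command instead of running it.

```shell
#!/bin/sh
# Sketch: pick the basic no-MFA pattern or the SSO-aware variant the
# benchmark offers, driven by an assumed USE_SSO flag.
set -eu

USE_SSO=1   # assumed: set to 1 if Single Sign-On is used in the organization

BASIC='{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") }'
SSO_AWARE='{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") && ($.userIdentity.type = "IAMUser") && ($.responseElements.ConsoleLogin = "Success") }'

if [ "$USE_SSO" -eq 1 ]; then
  PATTERN="$SSO_AWARE"   # ignores federated/SSO logins to cut false positives
else
  PATTERN="$BASIC"
fi

echo aws logs put-metric-filter --log-group-name "CloudTrail/DefaultLogGroup" \
  --filter-name no_mfa_console_signin_filter \
  --metric-transformations metricName=no_mfa_console_signin_metric,metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern "$PATTERN"
```

The SSO-aware variant additionally requires userIdentity.type to be "IAMUser" and the login to have succeeded, which is why it generates fewer alerts in federated environments.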
    System hardening through configuration management Configuration
    Configure the log to capture route table changes. CC ID 15439
    [Ensure route table changes are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or to an external Security Information and Event Management (SIEM) environment, and establishing corresponding metric filters and alarms. Routing tables are used to route network traffic between subnets and to network gateways. It is recommended that a metric filter and alarm be established for changes to route tables.
Rationale: CloudWatch is an AWS-native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external SIEM environment for monitoring and alerting. Monitoring changes to route tables will help ensure that all VPC traffic flows through an expected path and prevent any accidental or intentional modifications that may lead to uncontrolled network traffic. An alarm should be triggered every time an AWS API call is performed to create, replace, delete, or disassociate a route table.
Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail:
• List all CloudTrails: aws cloudtrail describe-trails
• Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
• From the value associated with CloudWatchLogsLogGroupArn, note the log group name. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, the log group name would be NewGroup
• Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> and ensure IsLogging is set to TRUE
• Ensure the identified multi-region CloudTrail captures all management events: aws cloudtrail get-event-selectors --trail-name <trail_name> and ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all metric filters associated with this log group: aws logs describe-metric-filters --log-group-name "<trail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{($.eventSource = ec2.amazonaws.com) && ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }"
4. Note the metric name associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== `<metric_name>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>".
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, that checks for route table changes, using the log group name taken from audit step 1: aws logs put-metric-filter --log-group-name <trail_log_group_name> --filter-name <metric_filter_name> --metric-transformations metricName=<metric_name>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>. Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>. Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <alarm_name> --metric-name <metric_name> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.13]
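Audit step 3 above can be automated by scanning the describe-metric-filters output for every route-table event name. The JSON below is a fabricated sample shaped like `aws logs describe-metric-filters` output, used only so the check is illustrated without AWS credentials; in practice the sample would be replaced by the live command's output.

```shell
#!/bin/sh
# Sketch of audit step 3: verify the route-table filter pattern is present.
set -eu

# Fabricated sample of describe-metric-filters output (assumption).
SAMPLE_OUTPUT='{
  "metricFilters": [
    {
      "filterName": "route_table_changes_filter",
      "filterPattern": "{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }"
    }
  ]
}'

# Pass only if every prescribed event name appears in some filter pattern.
for e in CreateRoute CreateRouteTable ReplaceRoute ReplaceRouteTableAssociation \
         DeleteRouteTable DeleteRoute DisassociateRouteTable; do
  echo "$SAMPLE_OUTPUT" | grep -qF "(\$.eventName = $e)" || {
    echo "missing event: $e"; exit 1; }
done
echo "route table filter pattern present"
```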
    System hardening through configuration management Configuration
    Configure the log to capture virtual private cloud changes. CC ID 15435
    [Ensure VPC changes are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or to an external Security Information and Event Management (SIEM) environment, and establishing corresponding metric filters and alarms. It is possible to have more than one VPC within an account; it is also possible to create a peering connection between two VPCs, enabling network traffic to route between them. It is recommended that a metric filter and alarm be established for changes made to VPCs.
Rationale: CloudWatch is an AWS-native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external SIEM environment for monitoring and alerting. VPCs in AWS are logically isolated virtual networks that can be used to launch AWS resources. Monitoring changes to VPC configuration will help ensure VPC traffic flow is not impacted. Changes to VPCs can affect network accessibility from the public internet and additionally affect VPC traffic flow to and from resources launched in the VPC.
Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail:
• List all CloudTrails: aws cloudtrail describe-trails
• Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
• From the value associated with CloudWatchLogsLogGroupArn, note the log group name. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, the log group name would be NewGroup
• Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> and ensure IsLogging is set to TRUE
• Ensure the identified multi-region CloudTrail captures all management events: aws cloudtrail get-event-selectors --trail-name <trail_name> and ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all metric filters associated with this log group: aws logs describe-metric-filters --log-group-name "<trail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }"
4. Note the metric name associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== `<metric_name>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>".
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, that checks for VPC changes, using the log group name taken from audit step 1: aws logs put-metric-filter --log-group-name <trail_log_group_name> --filter-name <metric_filter_name> --metric-transformations metricName=<metric_name>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>. Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>. Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <alarm_name> --metric-name <metric_name> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.14]
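As with the IAM pattern, the eleven-term VPC pattern is easier to keep correct when assembled from the event list in the benchmark. The resource names below are assumed placeholders and the echo prefix keeps the sketch side-effect free.

```shell
#!/bin/sh
# Sketch: build the VPC-changes filter pattern from the benchmark's event
# list, then print the put-metric-filter call. Names are assumed placeholders.
set -eu

PATTERN=""
for e in CreateVpc DeleteVpc ModifyVpcAttribute AcceptVpcPeeringConnection \
         CreateVpcPeeringConnection DeleteVpcPeeringConnection \
         RejectVpcPeeringConnection AttachClassicLinkVpc DetachClassicLinkVpc \
         DisableVpcClassicLink EnableVpcClassicLink; do
  # Join terms as ($.eventName = <event>) separated by ||.
  PATTERN="${PATTERN:+$PATTERN || }(\$.eventName = $e)"
done
PATTERN="{ $PATTERN }"

echo aws logs put-metric-filter --log-group-name "CloudTrail/DefaultLogGroup" \
  --filter-name vpc_changes_filter \
  --metric-transformations metricName=vpc_changes_metric,metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern "$PATTERN"
```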
    System hardening through configuration management Configuration
    Configure the log to capture changes to encryption keys. CC ID 15432
    [Ensure disabling or scheduled deletion of customer created CMKs is monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or to an external Security Information and Event Management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for customer created CMKs that have changed state to disabled or scheduled deletion.
Rationale: CloudWatch is an AWS-native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external SIEM environment for monitoring and alerting. Data encrypted with disabled or deleted keys will no longer be accessible. Changes in the state of a CMK should be monitored to make sure the change is intentional.
Impact: Creation, storage, and management of CMKs may create additional labor requirements compared to the use of provider-managed keys.
Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail:
• List all CloudTrails: aws cloudtrail describe-trails
• Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
• From the value associated with CloudWatchLogsLogGroupArn, note the log group name. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, the log group name would be NewGroup
• Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> and ensure IsLogging is set to TRUE
• Ensure the identified multi-region CloudTrail captures all management events: aws cloudtrail get-event-selectors --trail-name <trail_name> and ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all metric filters associated with this log group: aws logs describe-metric-filters --log-group-name "<trail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }"
4. Note the metric name associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4: aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== `<metric_name>`]'
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>".
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, that checks for disabled or scheduled-for-deletion CMKs, using the log group name taken from audit step 1: aws logs put-metric-filter --log-group-name <trail_log_group_name> --filter-name <metric_filter_name> --metric-transformations metricName=<metric_name>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }'
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>. Note: you can execute this command once and then re-use the same topic for all monitoring alarms.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>. Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name <alarm_name> --metric-name <metric_name> --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.7]
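Audit step 7 (confirming an active SNS subscriber) can also be scripted. The JSON below is a fabricated sample shaped like `aws sns list-subscriptions-by-topic` output, with invented ARN values, so the check runs without credentials; a pending subscriber would show "PendingConfirmation" in place of a full ARN.

```shell
#!/bin/sh
# Sketch of audit step 7: verify at least one confirmed SNS subscriber.
set -eu

# Fabricated sample of list-subscriptions-by-topic output (assumption).
SAMPLE='{
  "Subscriptions": [
    { "SubscriptionArn": "arn:aws:sns:us-east-1:111122223333:cis-alarms:1a2b3c4d",
      "Protocol": "email",
      "Endpoint": "security-team@example.com" }
  ]
}'

# Accept only entries whose SubscriptionArn is a real ARN; unconfirmed
# subscriptions report the literal string "PendingConfirmation" instead.
if echo "$SAMPLE" | grep -q '"SubscriptionArn": "arn:aws:sns'; then
  SUBSCRIBED=yes
else
  SUBSCRIBED=no
fi
echo "active subscriber: $SUBSCRIBED"
```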
    System hardening through configuration management Configuration
    Configure the log to capture unauthorized API calls. CC ID 15429
    [Ensure unauthorized API calls are monitored (Manual)
Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or to an external Security Information and Event Management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for unauthorized API calls.
Rationale: Monitoring unauthorized API calls will help reduce time to detect malicious activity and can alert you to a potential security incident. CloudWatch is an AWS-native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external SIEM environment for monitoring and alerting.
Impact: This alert may be triggered by normal read-only console activities that attempt to opportunistically gather optional information but gracefully fail if they don't have permissions. If an excessive number of alerts are being generated, an organization may wish to consider adding read access to the limited IAM user permissions simply to quiet the alerts. In some cases doing this may allow the users to actually view some areas of the system; any additional access given should be reviewed for alignment with the original limited IAM user intent.
Audit: If you are using CloudTrail and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with the prescribed metric filters and alarms configured:
1. Identify the log group name configured for use with the active multi-region CloudTrail:
• List all CloudTrails: aws cloudtrail describe-trails
• Identify multi-region CloudTrails: trails with "IsMultiRegionTrail" set to true
• From the value associated with "Name", note the trail name
• From the value associated with "CloudWatchLogsLogGroupArn", note the log group name. Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, the log group name would be NewGroup
• Ensure the identified multi-region CloudTrail is active: aws cloudtrail get-trail-status --name <trail_name> and ensure IsLogging is set to TRUE
• Ensure the identified multi-region CloudTrail captures all management events: aws cloudtrail get-event-selectors --trail-name <"Name" as shown in describe-trails> and ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All
2. Get a list of all metric filters associated with the log group that you captured in step 1: aws logs describe-metric-filters --log-group-name "<trail_log_group_name>"
3. Ensure the output from the above command contains the following: "filterPattern": "{ ($.errorCode ="*UnauthorizedOperation") || ($.errorCode ="AccessDenied*") && ($.sourceIPAddress!="delivery.logs.amazonaws.com") && ($.eventName!="HeadBucket") }"
4. Note the "filterName" value associated with the filterPattern found in step 3.
5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4: aws cloudwatch describe-alarms --query "MetricAlarms[?MetricName == `unauthorized_api_calls_metric`]"
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic: aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>. At least one subscription should have a "SubscriptionArn" with a valid AWS ARN, for example "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>".
Remediation: If you are using CloudTrail and CloudWatch, perform the following to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter, based on the filter pattern provided, that checks for unauthorized API calls, using the log group name taken from audit step 1: aws logs put-metric-filter --log-group-name "cloudtrail_log_group_name" --filter-name "<metric_filter_name>" --metric-transformations metricName=unauthorized_api_calls_metric,metricNamespace=CISBenchmark,metricValue=1 --filter-pattern "{ ($.errorCode ="*UnauthorizedOperation") || ($.errorCode ="AccessDenied*") && ($.sourceIPAddress!="delivery.logs.amazonaws.com") && ($.eventName!="HeadBucket") }"
Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.
2. Create an SNS topic that the alarm will notify: aws sns create-topic --name <sns_topic_name>. Note: you can execute this command once and then re-use the same topic for all monitoring alarms. Capture the TopicArn displayed when creating the SNS topic in this step.
3. Create an SNS subscription to the topic created in step 2: aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint>. Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created in step 1 and the SNS topic created in step 2: aws cloudwatch put-metric-alarm --alarm-name "unauthorized_api_calls_alarm" --metric-name "unauthorized_api_calls_metric" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace "CISBenchmark" --alarm-actions <sns_topic_arn> 4.1]
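The unauthorized-API-calls pattern embeds double quotes around error codes, which makes shell quoting fragile when the whole pattern is itself wrapped in double quotes. A sketch that holds the pattern in a single-quoted variable sidesteps that; the log group and filter/metric names are assumed placeholders, and the echo prefix prints the command rather than executing it.

```shell
#!/bin/sh
# Sketch: single-quote the pattern so its embedded double quotes survive
# the shell unescaped. Names are assumed placeholders.
set -eu

PATTERN='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") && ($.sourceIPAddress != "delivery.logs.amazonaws.com") && ($.eventName != "HeadBucket") }'

echo aws logs put-metric-filter --log-group-name "CloudTrail/DefaultLogGroup" \
  --filter-name unauthorized_api_calls_filter \
  --metric-transformations metricName=unauthorized_api_calls_metric,metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern "$PATTERN"
```

The sourceIPAddress and eventName exclusions suppress two known benign sources of AccessDenied noise (log delivery probes and HeadBucket checks), which is why they appear alongside the error-code terms.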
    System hardening through configuration management Configuration
    Configure the log to capture changes to network gateways. CC ID 15421
    [Ensure changes to network gateways are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. Network gateways are required to send/receive traffic to a destination outside of a VPC. It is recommended that a metric filter and alarm be established for changes to network gateways. Rationale: CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Monitoring changes to network gateways will help ensure that all ingress/egress traffic traverses the VPC border via a controlled path. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with "CloudWatchLogsLogGroupArn", note the log group name Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, the log group name would be NewGroup • Ensure the identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name <trail_name> ensure IsLogging is set to TRUE • Ensure the identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name <trail_name> Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for the log group captured in step 1: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>" 3. 
Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }" 4. Note the metric name value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<network_gw_changes_metric>`]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> at least one subscription should have "SubscriptionArn" with a valid aws ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided which checks for network gateway changes, using the log group name taken from audit step 1. aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name '<network_gw_changes_filter>' --metric-transformations metricName='<network_gw_changes_metric>',metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. 
Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name '<network_gw_changes_alarm>' --metric-name '<network_gw_changes_metric>' --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.12]
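The audit side of the monitoring recommendations above follows the same shape each time. A minimal sketch with hypothetical function and variable names; the log group and metric name are whatever you noted in audit steps 1 and 4.

```shell
# Minimal sketch of audit steps 2 and 5 above. The function and variable
# names are ours; the filter pattern is taken from the benchmark text.

GATEWAY_FILTER='{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }'

audit_gateway_monitoring() {
  log_group="$1"    # from the trail's CloudWatchLogsLogGroupArn (audit step 1)
  metric_name="$2"  # metric name of the matching filter (audit step 4)

  # Step 2: list the metric filters on the trail's log group; compare their
  # filterPattern values against GATEWAY_FILTER.
  aws logs describe-metric-filters --log-group-name "$log_group"

  # Step 5: find the alarm wired to that metric
  aws cloudwatch describe-alarms \
    --query "MetricAlarms[?MetricName=='$metric_name']"
}

# Usage:
# audit_gateway_monitoring "<cloudtrail_log_group_name>" "<network_gw_changes_metric>"
```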
    System hardening through configuration management Configuration
    Configure the log to capture configuration changes. CC ID 06881
    [Ensure AWS Config configuration changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for detecting changes to AWS Config's configurations. Rationale: Monitoring changes to AWS Config configuration will help ensure sustained visibility of configuration items within the AWS account. CloudWatch is an AWS native service that allows you to observe and monitor resources and applications. CloudTrail Logs can also be sent to an external Security information and event management (SIEM) environment for monitoring and alerting. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: 1. Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with "CloudWatchLogsLogGroupArn", note the log group name Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, the log group name would be NewGroup • Ensure the identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name <trail_name> ensure IsLogging is set to TRUE • Ensure the identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name <trail_name> Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for the log group captured in step 1: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>" 3. 
Ensure the output from the above command contains the following: "filterPattern": "{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel) ||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }" 4. Note the metric name value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<aws_config_changes_metric>`]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> at least one subscription should have "SubscriptionArn" with a valid aws ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided which checks for AWS Configuration changes, using the log group name taken from audit step 1. aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name '<aws_config_changes_filter>' --metric-transformations metricName='<aws_config_changes_metric>',metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel) ||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. 
Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name '<aws_config_changes_alarm>' --metric-name '<aws_config_changes_metric>' --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.9
    Ensure CloudTrail configuration changes are monitored (Manual) Description: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs, or an external Security information and event management (SIEM) environment, where metric filters and alarms can be established. It is recommended that a metric filter and alarm be utilized for detecting changes to CloudTrail's configurations. Rationale: Monitoring changes to CloudTrail's configuration will help ensure sustained visibility to activities performed in the AWS account. Impact: These steps can be performed manually in a company's existing SIEM platform in cases where CloudTrail logs are monitored outside of the AWS monitoring tools within CloudWatch. Audit: If you are using CloudTrails and CloudWatch, perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured, or that the filters are configured in the appropriate SIEM alerts: 1. Identify the log group name configured for use with active multi-region CloudTrail: • List all CloudTrails: aws cloudtrail describe-trails • Identify Multi region Cloudtrails: Trails with "IsMultiRegionTrail" set to true • From the value associated with "CloudWatchLogsLogGroupArn", note the log group name Example: for a CloudWatchLogsLogGroupArn that looks like arn:aws:logs:<region>:<aws_account_number>:log-group:NewGroup:*, the log group name would be NewGroup • Ensure the identified Multi region CloudTrail is active aws cloudtrail get-trail-status --name <trail_name> ensure IsLogging is set to TRUE • Ensure the identified Multi-region Cloudtrail captures all Management Events aws cloudtrail get-event-selectors --trail-name <trail_name> Ensure there is at least one Event Selector for a Trail with IncludeManagementEvents set to true and ReadWriteType set to All 2. Get a list of all associated metric filters for the log group captured in step 1: aws logs describe-metric-filters --log-group-name "<cloudtrail_log_group_name>" 3. 
Ensure the filterPattern output from the above command contains the following: "filterPattern": "{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }" 4. Note the metric name value associated with the filterPattern found in step 3. 5. Get a list of CloudWatch alarms and filter on the metric name captured in step 4. aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`<cloudtrail_cfg_changes_metric>`]' 6. Note the AlarmActions value - this will provide the SNS topic ARN value. 7. Ensure there is at least one active subscriber to the SNS topic aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn> at least one subscription should have "SubscriptionArn" with a valid aws ARN. Example of a valid "SubscriptionArn": "arn:aws:sns:<region>:<aws_account_number>:<SnsTopicName>:<SubscriptionID>" Remediation: If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on the filter pattern provided which checks for cloudtrail configuration changes, using the log group name taken from audit step 1. aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name '<cloudtrail_cfg_changes_filter>' --metric-transformations metricName='<cloudtrail_cfg_changes_metric>',metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> --notification-endpoint <sns_subscription_endpoint> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. 
Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and the SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name '<cloudtrail_cfg_changes_alarm>' --metric-name '<cloudtrail_cfg_changes_metric>' --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn> 4.5]
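Audit step 7 of each monitoring recommendation checks for a confirmed SNS subscriber. A small sketch of that check; the helper name is ours, and it keys off the fact that an unconfirmed subscription reports "PendingConfirmation" in place of a full ARN.

```shell
# Sketch of audit step 7: confirm the SNS topic has at least one confirmed
# subscriber. Reads `aws sns list-subscriptions-by-topic` JSON on stdin.
has_active_subscriber() {
  grep -q '"SubscriptionArn": "arn:aws:sns:'
}

# Usage:
# aws sns list-subscriptions-by-topic --topic-arn "<sns_topic_arn>" | has_active_subscriber \
#   && echo "active subscriber found" || echo "no confirmed subscriber"
```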
    System hardening through configuration management Configuration
    Configure the log to capture user account additions, modifications, and deletions. CC ID 16482 System hardening through configuration management Log Management
    Configure Key, Certificate, Password, Authentication and Identity Management settings in accordance with organizational standards. CC ID 07621 System hardening through configuration management Configuration
    Configure "MFA Delete" to organizational standards. CC ID 15430
    [Ensure MFA Delete is enabled on S3 buckets (Manual) Description: Once MFA Delete is enabled on your sensitive and classified S3 bucket, it requires the user to have two forms of authentication. Rationale: Adding MFA delete to an S3 bucket requires additional authentication when you change the version state of your bucket or delete an object version, adding another layer of security in the event your security credentials are compromised or unauthorized access is granted. Impact: Enabling MFA delete on an S3 bucket could require additional administrator oversight. Enabling MFA delete may impact other services that automate the creation and/or deletion of S3 buckets. Audit: Perform the steps below to confirm MFA delete is configured on an S3 Bucket From Console: 1. Login to the S3 console at https://console.aws.amazon.com/s3/ 2. Click the Check box next to the Bucket name you want to confirm 3. In the window under Properties 4. Confirm that Versioning is Enabled 5. Confirm that MFA Delete is Enabled From Command Line: 1. Run the get-bucket-versioning command aws s3api get-bucket-versioning --bucket my-bucket Output example: "Status": "Enabled", "MFADelete": "Enabled" If the Console or the CLI output does not show Versioning and MFA Delete enabled refer to the remediation below. Remediation: Perform the steps below to enable MFA delete on an S3 bucket. Note: - You cannot enable MFA Delete using the AWS Management Console. You must use the AWS CLI or API. - You must use your 'root' account to enable MFA Delete on S3 buckets. From Command line: 1. Run the s3api put-bucket-versioning command aws s3api put-bucket-versioning --profile my-root-profile --bucket Bucket_Name --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::aws_account_id:mfa/root-account-mfa-device passcode" 2.1.2]
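The command-line audit above can be wrapped in a small check. A sketch with a hypothetical helper name; it assumes the default JSON output of get-bucket-versioning, which reports "Status" and "MFADelete" keys once versioning has been configured.

```shell
# Sketch of the command-line audit: both Versioning and MFA Delete must
# report Enabled. Reads `aws s3api get-bucket-versioning` JSON on stdin.
mfa_delete_enabled() {
  out=$(cat)
  printf '%s' "$out" | grep -q '"Status": "Enabled"' &&
    printf '%s' "$out" | grep -q '"MFADelete": "Enabled"'
}

# Usage:
# aws s3api get-bucket-versioning --bucket my-bucket | mfa_delete_enabled \
#   && echo "compliant" || echo "see remediation"
```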
    System hardening through configuration management Configuration
    Configure Identity and Access Management policies to organizational standards. CC ID 15422
    [Ensure IAM policies that allow full "*:*" administrative privileges are not attached (Automated) Description: IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended, and considered standard security advice, to grant least privilege - that is, granting only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform only those tasks, instead of allowing full administrative privileges. Rationale: It's more secure to start with a minimum set of permissions and grant additional permissions as necessary, rather than starting with permissions that are too lenient and then trying to tighten them later. Providing full administrative privileges instead of restricting to the minimum set of permissions that the user requires exposes the resources to potentially unwanted actions. IAM policies that have a statement with "Effect": "Allow" with "Action": "*" over "Resource": "*" should be removed. Audit: Perform the following to determine what policies are created: From Command Line: 1. Run the following to get a list of IAM policies: aws iam list-policies --only-attached --output text 2. For each policy returned, run the following command to determine if any policy allows full administrative privileges on the account: aws iam get-policy-version --policy-arn <policy_arn> --version-id <version> 3. In the output, ensure the policy does not have any Statement block with "Effect": "Allow" and Action set to "*" and Resource set to "*" Remediation: From Console: Perform the following to detach the policy that has full administrative privileges: 1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. 2. In the navigation pane, click Policies and then search for the policy name found in the audit step. 3. Select the policy that needs to be deleted. 4. In the policy action menu, first select Detach 5. 
Select all Users, Groups, Roles that have this policy attached 6. Click Detach Policy 7. In the policy action menu, select Detach 8. Select the newly detached policy and select Delete From Command Line: Perform the following to detach the policy that has full administrative privileges as found in the audit step: 1. List all IAM users, groups, and roles that the specified managed policy is attached to. aws iam list-entities-for-policy --policy-arn <policy_arn> 2. Detach the policy from all IAM Users: aws iam detach-user-policy --user-name <iam_user> --policy-arn <policy_arn> 3. Detach the policy from all IAM Groups: aws iam detach-group-policy --group-name <iam_group> --policy-arn <policy_arn> 4. Detach the policy from all IAM Roles: aws iam detach-role-policy --role-name <iam_role> --policy-arn <policy_arn> 1.16]
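Audit step 3 above can be approximated from the shell. A rough sketch with a hypothetical helper name; it assumes the AWS CLI's default pretty-printed JSON, so it is a heuristic rather than a parser - a stricter check would walk the policy document with a real JSON parser.

```shell
# Heuristic sketch of audit step 3: flag a policy version document that
# allows Action "*" on Resource "*" with Effect "Allow". Assumes the AWS
# CLI's default pretty-printed JSON; differently formatted documents could
# slip past this grep-based check.
looks_like_full_admin() {
  doc=$(cat)  # `aws iam get-policy-version` output on stdin
  printf '%s' "$doc" | grep -q '"Effect": "Allow"' &&
    printf '%s' "$doc" | grep -q '"Action": "\*"' &&
    printf '%s' "$doc" | grep -q '"Resource": "\*"'
}

# Usage:
# aws iam get-policy-version --policy-arn <policy_arn> --version-id <version> \
#   | looks_like_full_admin && echo "full-admin policy - detach it"
```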
    System hardening through configuration management Configuration
    Configure the Identity and Access Management Access analyzer to organizational standards. CC ID 15420
    [Ensure that IAM Access analyzer is enabled for all regions (Automated) Description: Enable IAM Access analyzer for IAM policies about all resources in each active AWS region. IAM Access Analyzer is a technology introduced at AWS re:Invent 2019. After the Analyzer is enabled in IAM, scan results are displayed on the console showing the accessible resources. Scans show resources that other accounts and federated users can access, such as KMS keys and IAM roles. The results allow you to determine if an unintended user is allowed, making it easier for administrators to monitor least-privilege access. Access Analyzer analyzes only policies that are applied to resources in the same AWS Region. Rationale: AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data. Access Analyzer identifies resources that are shared with external principals by using logic-based reasoning to analyze the resource-based policies in your AWS environment. IAM Access Analyzer continuously monitors all policies for S3 buckets, IAM roles, KMS (Key Management Service) keys, AWS Lambda functions, and Amazon SQS (Simple Queue Service) queues. Audit: From Console: 1. Open the IAM console at https://console.aws.amazon.com/iam/ 2. Choose Access analyzer 3. Click 'Analyzers' 4. Ensure that at least one analyzer is present 5. Ensure that the STATUS is set to Active 6. Repeat these steps for each active region From Command Line: 1. Run the following command: aws accessanalyzer list-analyzers | grep status 2. Ensure that at least one Analyzer has its status set to ACTIVE 3. Repeat the steps above for each active region. If an Access analyzer is not listed for each region or the status is not set to active refer to the remediation procedure below. 
Remediation: From Console: Perform the following to enable IAM Access analyzer for IAM policies: 1. Open the IAM console at https://console.aws.amazon.com/iam/. 2. Choose Access analyzer. 3. Choose Create analyzer. 4. On the Create analyzer page, confirm that the Region displayed is the Region where you want to enable Access Analyzer. 5. Enter a name for the analyzer. This is optional, as it will generate a name for you automatically. 6. Add any tags that you want to apply to the analyzer. Optional. 7. Choose Create Analyzer. 8. Repeat these steps for each active region From Command Line: Run the following command: aws accessanalyzer create-analyzer --analyzer-name <analyzer_name> --type <analyzer_type> Repeat the command above for each active region. Note: The IAM Access Analyzer is successfully configured only when the account you use has the necessary permissions. 1.20]
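The per-region audit loop above can be sketched as follows. Helper names are ours, and the loop assumes the caller's credentials can run list-analyzers in every active region.

```shell
# Sketch of the per-region audit. Helper names are ours.
analyzer_active() {
  # reads `aws accessanalyzer list-analyzers` JSON on stdin; succeeds if
  # any analyzer reports status ACTIVE
  grep -q '"status": "ACTIVE"'
}

check_analyzers_all_regions() {
  # Access Analyzer is per-region, so loop over every active region.
  for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
    if aws accessanalyzer list-analyzers --region "$region" | analyzer_active; then
      echo "$region: active analyzer present"
    else
      echo "$region: no active analyzer - see remediation"
    fi
  done
}
```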
    System hardening through configuration management Configuration
    Configure the "Minimum password length" to organizational standards. CC ID 07711
    [Ensure IAM password policy requires minimum length of 14 or greater (Automated) Description: Password policies are, in part, used to enforce password complexity requirements. IAM password policies can be used to ensure passwords are at least a given length. It is recommended that the password policy require a minimum password length of 14. Rationale: Setting a password complexity policy increases account resiliency against brute force login attempts. Audit: Perform the following to ensure the password policy is configured as prescribed: From Console: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Ensure "Minimum password length" is set to 14 or greater. From Command Line: aws iam get-account-password-policy Ensure the output of the above command includes "MinimumPasswordLength": 14 (or higher) Remediation: Perform the following to set the password policy as prescribed: From Console: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Set "Minimum password length" to 14 or greater. 5. Click "Apply password policy" From Command Line: aws iam update-account-password-policy --minimum-password-length 14 Note: All commands starting with "aws iam update-account-password-policy" can be combined into a single command. 1.8]
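The command-line audit above can be expressed as a small check. The helper name is hypothetical; it assumes the default JSON output of get-account-password-policy.

```shell
# Sketch of the command-line audit: extract MinimumPasswordLength from
# `aws iam get-account-password-policy` JSON (stdin) and require >= 14.
password_length_ok() {
  len=$(grep -o '"MinimumPasswordLength": [0-9]*' | grep -o '[0-9]*$')
  [ -n "$len" ] && [ "$len" -ge 14 ]
}

# Usage:
# aws iam get-account-password-policy | password_length_ok \
#   && echo "compliant" || echo "raise the minimum length to 14"
```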
    System hardening through configuration management Configuration
    Configure Encryption settings in accordance with organizational standards. CC ID 07625
    [Ensure that encryption-at-rest is enabled for RDS Instances (Automated) Description: Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances. After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance. Rationale: Databases are likely to hold sensitive and critical data; it is highly recommended to implement encryption in order to protect your data from unauthorized access or disclosure. With RDS encryption enabled, the data stored on the instance's underlying storage, the automated backups, read replicas, and snapshots are all encrypted. Audit: From Console: 1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/ 2. In the navigation pane, under RDS dashboard, click Databases. 3. Select the RDS Instance that you want to examine 4. Click the Instance Name to see details, then click on the Configuration tab. 5. Under the Configuration Details section, in the Storage pane search for the Encryption Enabled Status. 6. If the current status is set to Disabled, Encryption is not enabled for the selected RDS Instance database instance. 7. Repeat steps 3 to 7 to verify the encryption status of other RDS Instances in the same region. 8. Change region from the top of the navigation bar and repeat the audit for other regions. From Command Line: 1. Run the describe-db-instances command to list all RDS Instance database names available in the selected AWS region. The output will return each Instance database identifier name. aws rds describe-db-instances --region <region-name> --query 'DBInstances[*].DBInstanceIdentifier' 2. Run the describe-db-instances command again using the RDS Instance identifier returned earlier to determine if the selected database instance is encrypted. The command output should return the encryption status True or False. 
aws rds describe-db-instances --region <region-name> --db-instance-identifier <db_instance_identifier> --query 'DBInstances[*].StorageEncrypted' 3. If the StorageEncrypted parameter value is False, Encryption is not enabled for the selected RDS database instance. 4. Repeat steps 1 to 3 for auditing each RDS Instance, and change the Region to verify other regions Remediation: From Console: 1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on Databases 3. Select the Database instance that needs to be encrypted. 4. Click on the Actions button placed at the top right and select Take Snapshot. 5. On the Take Snapshot page, enter a database name of which you want to take a snapshot in the Snapshot Name field and click on Take Snapshot. 6. Select the newly created snapshot and click on the Action button placed at the top right and select Copy snapshot from the Action menu. 7. On the Make Copy of DB Snapshot page, perform the following: • In the New DB Snapshot Identifier field, enter a name for the new snapshot. • Check Copy Tags; the new snapshot must have the same tags as the source snapshot. • Select Yes from the Enable Encryption dropdown list to enable encryption. You can choose to use the AWS default encryption key or a custom key from the Master Key dropdown list. 8. Click Copy Snapshot to create an encrypted copy of the selected instance snapshot. 9. Select the new Snapshot Encrypted Copy and click on the Action button placed at the top right and select the Restore Snapshot button from the Action menu. This will restore the encrypted snapshot to a new database instance. 10. On the Restore DB Instance page, enter a unique name for the new database instance in the DB Instance Identifier field. 11. Review the instance configuration details and click Restore DB Instance. 12. 
Once the new instance provisioning process is complete, update the application configuration to refer to the endpoint of the new encrypted database instance. Once the database endpoint is changed at the application level, you can remove the unencrypted instance. From Command Line: 1. Run the describe-db-instances command to list all RDS database names available in the selected AWS region. The command output should return the database instance identifier. aws rds describe-db-instances --region <region-name> --query 'DBInstances[*].DBInstanceIdentifier' 2. Run the create-db-snapshot command to create a snapshot for the selected database instance. The command output will return the new snapshot with name DB Snapshot Name. aws rds create-db-snapshot --region <region-name> --db-snapshot-identifier <DB-Snapshot-Name> --db-instance-identifier <db_instance_identifier> 3. Now run the list-aliases command to list the KMS key aliases available in a specified region. The command output should return each key alias currently available. For our RDS encryption activation process, locate the ID of the AWS default KMS key. aws kms list-aliases --region <region-name> 4. Run the copy-db-snapshot command using the default KMS key ID for RDS instances returned earlier to create an encrypted copy of the database instance snapshot. The command output will return the encrypted instance snapshot configuration. aws rds copy-db-snapshot --region <region-name> --source-db-snapshot-identifier <DB-Snapshot-Name> --target-db-snapshot-identifier <DB-Snapshot-Name-Encrypted> --copy-tags --kms-key-id <kms_key_id> 5. Run the restore-db-instance-from-db-snapshot command to restore the encrypted snapshot created at the previous step to a new database instance. If successful, the command output should return the new encrypted database instance configuration. aws rds restore-db-instance-from-db-snapshot --region <region-name> --db-instance-identifier <DB-Name-Encrypted> --db-snapshot-identifier <DB-Snapshot-Name-Encrypted> 6. Run the describe-db-instances command to list all RDS database names available in the selected AWS region. The output will return the database instance identifier name. Select the encrypted database name that we just created, DB-Name-Encrypted. 
aws rds describe-db-instances --region <region-name> --query 'DBInstances[*].DBInstanceIdentifier' 7. Run the describe-db-instances command again using the RDS instance identifier returned earlier to determine if the selected database instance is encrypted. The command output should return the encryption status True. aws rds describe-db-instances --region <region-name> --db-instance-identifier <DB-Name-Encrypted> --query 'DBInstances[*].StorageEncrypted' 2.3.1]
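The command-line audit above (steps 1-2) can be wrapped in a check over the query output. The helper name is hypothetical; it reads the JSON array of booleans returned by the --query expression and passes only when at least one instance is listed and none reports false.

```shell
# Sketch wrapping the RDS encryption audit: reads the JSON array produced by
# --query 'DBInstances[*].StorageEncrypted' on stdin.
all_rds_storage_encrypted() {
  out=$(cat)
  printf '%s' "$out" | grep -q 'true' &&
    ! printf '%s' "$out" | grep -q 'false'
}

# Usage (region placeholder is yours):
# aws rds describe-db-instances --region <region-name> \
#   --query 'DBInstances[*].StorageEncrypted' | all_rds_storage_encrypted \
#   && echo "all instances encrypted" || echo "see remediation"
```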
    System hardening through configuration management Configuration
    Configure "Elastic Block Store volume encryption" to organizational standards. CC ID 15434
    [Ensure EBS Volume Encryption is Enabled in all Regions (Automated) Description: Elastic Compute Cloud (EC2) supports encryption at rest when using the Elastic Block Store (EBS) service. While disabled by default, forcing encryption at EBS volume creation is supported. Rationale: Encrypting data at rest reduces the likelihood that it is unintentionally exposed and can nullify the impact of disclosure if the encryption remains unbroken. Impact: Losing access to or removing the KMS key in use by the EBS volumes will result in no longer being able to access the volumes. Audit: From Console: 1. Login to the AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ 2. Under Account attributes, click EBS encryption. 3. Verify Always encrypt new EBS volumes displays Enabled. 4. Review every region in-use. Note: EBS volume encryption is configured per region. From Command Line: 1. Run aws --region <region> ec2 get-ebs-encryption-by-default 2. Verify that "EbsEncryptionByDefault": true is displayed. 3. Review every region in-use. Note: EBS volume encryption is configured per region. Remediation: From Console: 1. Login to the AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ 2. Under Account attributes, click EBS encryption. 3. Click Manage. 4. Click the Enable checkbox. 5. Click Update EBS encryption 6. Repeat for every region requiring the change. Note: EBS volume encryption is configured per region. From Command Line: 1. Run aws --region <region> ec2 enable-ebs-encryption-by-default 2. Verify that "EbsEncryptionByDefault": true is displayed. 3. Repeat for every region requiring the change. Note: EBS volume encryption is configured per region. 2.2.1]
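The command-line audit above reduces to one key check. The helper name is hypothetical; it reads the JSON output of get-ebs-encryption-by-default.

```shell
# Sketch of the command-line audit: checks the JSON output of
# `aws ec2 get-ebs-encryption-by-default` (stdin) for the expected flag.
ebs_default_encryption_on() {
  grep -q '"EbsEncryptionByDefault": true'
}

# Usage (run once per region, since the setting is per-region):
# aws --region <region> ec2 get-ebs-encryption-by-default | ebs_default_encryption_on \
#   && echo "default encryption on" || echo "see remediation"
```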
    System hardening through configuration management Configuration
    Configure "Encryption Oracle Remediation" to organizational standards. CC ID 15366 System hardening through configuration management Configuration
    Configure the "encryption provider" to organizational standards. CC ID 14591 System hardening through configuration management Configuration
    Configure the "Microsoft network server: Digitally sign communications (always)" to organizational standards. CC ID 07626 System hardening through configuration management Configuration
    Configure the "Domain member: Digitally encrypt or sign secure channel data (always)" to organizational standards. CC ID 07657 System hardening through configuration management Configuration
    Configure the "Domain member: Digitally sign secure channel data (when possible)" to organizational standards. CC ID 07678 System hardening through configuration management Configuration
    Configure the "Network Security: Configure encryption types allowed for Kerberos" to organizational standards. CC ID 07799 System hardening through configuration management Configuration
    Configure the "System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing" to organizational standards. CC ID 07822 System hardening through configuration management Configuration
    Configure the "Configure use of smart cards on fixed data drives" to organizational standards. CC ID 08361 System hardening through configuration management Configuration
    Configure the "Enforce drive encryption type on removable data drives" to organizational standards. CC ID 08363 System hardening through configuration management Configuration
    Configure the "Configure TPM platform validation profile for BIOS-based firmware configurations" to organizational standards. CC ID 08370 System hardening through configuration management Configuration
    Configure the "Configure use of passwords for removable data drives" to organizational standards. CC ID 08394 System hardening through configuration management Configuration
    Configure the "Configure use of hardware-based encryption for removable data drives" to organizational standards. CC ID 08401 System hardening through configuration management Configuration
    Configure the "Require additional authentication at startup" to organizational standards. CC ID 08422 System hardening through configuration management Configuration
    Configure the "Deny write access to fixed drives not protected by BitLocker" to organizational standards. CC ID 08429 System hardening through configuration management Configuration
    Configure the "Configure startup mode" to organizational standards. CC ID 08430 System hardening through configuration management Configuration
    Configure the "Require client MAPI encryption" to organizational standards. CC ID 08446 System hardening through configuration management Configuration
    Configure the "Configure dial plan security" to organizational standards. CC ID 08453 System hardening through configuration management Configuration
    Configure the "Allow access to BitLocker-protected removable data drives from earlier versions of Windows" to organizational standards. CC ID 08457 System hardening through configuration management Configuration
    Configure the "Enforce drive encryption type on fixed data drives" to organizational standards. CC ID 08460 System hardening through configuration management Configuration
    Configure the "Allow Secure Boot for integrity validation" to organizational standards. CC ID 08461 System hardening through configuration management Configuration
    Configure the "Configure use of passwords for operating system drives" to organizational standards. CC ID 08478 System hardening through configuration management Configuration
    Configure the "Choose how BitLocker-protected removable drives can be recovered" to organizational standards. CC ID 08484 System hardening through configuration management Configuration
    Configure the "Validate smart card certificate usage rule compliance" to organizational standards. CC ID 08492 System hardening through configuration management Configuration
    Configure the "Allow enhanced PINs for startup" to organizational standards. CC ID 08495 System hardening through configuration management Configuration
    Configure the "Choose how BitLocker-protected operating system drives can be recovered" to organizational standards. CC ID 08499 System hardening through configuration management Configuration
    Configure the "Allow access to BitLocker-protected fixed data drives from earlier versions of Windows" to organizational standards. CC ID 08505 System hardening through configuration management Configuration
    Configure the "Choose how BitLocker-protected fixed drives can be recovered" to organizational standards. CC ID 08509 System hardening through configuration management Configuration
    Configure the "Configure use of passwords for fixed data drives" to organizational standards. CC ID 08513 System hardening through configuration management Configuration
    Configure the "Choose drive encryption method and cipher strength" to organizational standards. CC ID 08537 System hardening through configuration management Configuration
    Configure the "Choose default folder for recovery password" to organizational standards. CC ID 08541 System hardening through configuration management Configuration
    Configure the "Prevent memory overwrite on restart" to organizational standards. CC ID 08542 System hardening through configuration management Configuration
    Configure the "Deny write access to removable drives not protected by BitLocker" to organizational standards. CC ID 08549 System hardening through configuration management Configuration
    Configure the "opt encrypted" flag to organizational standards. CC ID 14534 System hardening through configuration management Configuration
    Configure the "Provide the unique identifiers for your organization" to organizational standards. CC ID 08552 System hardening through configuration management Configuration
    Configure the "Enable use of BitLocker authentication requiring preboot keyboard input on slates" to organizational standards. CC ID 08556 System hardening through configuration management Configuration
    Configure the "Require encryption on device" to organizational standards. CC ID 08563 System hardening through configuration management Configuration
    Configure the "Enable S/MIME for OWA 2007" to organizational standards. CC ID 08564 System hardening through configuration management Configuration
    Configure the "Control use of BitLocker on removable drives" to organizational standards. CC ID 08566 System hardening through configuration management Configuration
    Configure the "Configure use of hardware-based encryption for fixed data drives" to organizational standards. CC ID 08568 System hardening through configuration management Configuration
    Configure the "Configure use of smart cards on removable data drives" to organizational standards. CC ID 08570 System hardening through configuration management Configuration
    Configure the "Enforce drive encryption type on operating system drives" to organizational standards. CC ID 08573 System hardening through configuration management Configuration
    Configure the "Disallow standard users from changing the PIN or password" to organizational standards. CC ID 08574 System hardening through configuration management Configuration
    Configure the "Use enhanced Boot Configuration Data validation profile" to organizational standards. CC ID 08578 System hardening through configuration management Configuration
    Configure the "Allow network unlock at startup" to organizational standards. CC ID 08588 System hardening through configuration management Configuration
    Configure the "Enable S/MIME for OWA 2010" to organizational standards. CC ID 08592 System hardening through configuration management Configuration
    Configure the "Configure minimum PIN length for startup" to organizational standards. CC ID 08594 System hardening through configuration management Configuration
    Configure the "Configure TPM platform validation profile" to organizational standards. CC ID 08598 System hardening through configuration management Configuration
    Configure the "Configure use of hardware-based encryption for operating system drives" to organizational standards. CC ID 08601 System hardening through configuration management Configuration
    Configure the "Reset platform validation data after BitLocker recovery" to organizational standards. CC ID 08607 System hardening through configuration management Configuration
    Configure the "Configure TPM platform validation profile for native UEFI firmware configurations" to organizational standards. CC ID 08614 System hardening through configuration management Configuration
    Configure the "Do not enable BitLocker until recovery information is stored to AD DS for fixed data drives" setting to organizational standards. CC ID 10039 System hardening through configuration management Configuration
    Configure the "Save BitLocker recovery information to AD DS for fixed data drives" setting to organizational standards. CC ID 10040 System hardening through configuration management Configuration
    Configure the "Omit recovery options from the BitLocker setup wizard" setting to organizational standards. CC ID 10041 System hardening through configuration management Configuration
    Configure the "Do not enable BitLocker until recovery information is stored to AD DS for operating system drives" setting to organizational standards. CC ID 10042 System hardening through configuration management Configuration
    Configure the "Save BitLocker recovery information to AD DS for operating system drives" setting to organizational standards. CC ID 10043 System hardening through configuration management Configuration
    Configure the "Allow BitLocker without a compatible TPM" setting to organizational standards. CC ID 10044 System hardening through configuration management Configuration
    Configure the "Do not enable BitLocker until recovery information is stored to AD DS for removable data drives" setting to organizational standards. CC ID 10045 System hardening through configuration management Configuration
    Configure the "Save BitLocker recovery information to AD DS for removable data drives" setting to organizational standards. CC ID 10046 System hardening through configuration management Configuration
    Configure Security settings in accordance with organizational standards. CC ID 08469 System hardening through configuration management Configuration
    Configure AWS Security Hub to organizational standards. CC ID 17166
    [Ensure AWS Security Hub is enabled (Automated) Description: Security Hub collects security data from across AWS accounts, services, and supported third-party partner products and helps you analyze your security trends and identify the highest priority security issues. When you enable Security Hub, it begins to consume, aggregate, organize, and prioritize findings from AWS services that you have enabled, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie. You can also enable integrations with AWS partner security products. Rationale: AWS Security Hub provides you with a comprehensive view of your security state in AWS and helps you check your environment against security industry standards and best practices - enabling you to quickly assess the security posture across your AWS accounts. Impact: It is recommended AWS Security Hub be enabled in all regions. AWS Security Hub requires AWS Config to be enabled. Audit: The process to evaluate AWS Security Hub configuration per region From Console: 1. Sign in to the AWS Management Console and open the AWS Security Hub console at https://console.aws.amazon.com/securityhub/. 2. On the top right of the console, select the target Region. 3. If presented with the Security Hub > Summary page then Security Hub is set up for the selected region. 4. If presented with Setup Security Hub or Get Started With Security Hub - follow the online instructions. 5. Repeat steps 2 to 4 for each region. From Command Line: Run the following to list the Security Hub status: aws securityhub describe-hub This will list the Security Hub status by region. Audit for the presence of a 'SubscribedAt' value Example output: { "HubArn": "", "SubscribedAt": "2022-08-19T17:06:42.398Z", "AutoEnableControls": true } An error will be returned if Security Hub is not enabled. 
Example error: An error occurred (InvalidAccessException) when calling the DescribeHub operation: Account is not subscribed to AWS Security Hub Remediation: To grant the permissions required to enable Security Hub, attach the Security Hub managed policy AWSSecurityHubFullAccess to an IAM user, group, or role. Enabling Security Hub From Console: 1. Use the credentials of the IAM identity to sign in to the Security Hub console. 2. When you open the Security Hub console for the first time, choose Enable AWS Security Hub. 3. On the welcome page, the Security standards section lists the security standards that Security Hub supports. 4. Choose Enable Security Hub. From Command Line: 1. Run the enable-security-hub command. To enable the default standards, include --enable-default-standards. aws securityhub enable-security-hub --enable-default-standards 2. To enable Security Hub without the default standards, include --no-enable-default-standards. aws securityhub enable-security-hub --no-enable-default-standards 4.16]
    System hardening through configuration management Configuration
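Since describe-hub raises InvalidAccessException in regions where the account is not subscribed, the per-region audit above can treat the command's exit status as the signal. A minimal sketch, assuming AWS CLI v2 credentials with securityhub:DescribeHub and ec2:DescribeRegions permissions (the function name is illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: report Security Hub subscription status per enabled region.
# A failed describe-hub call (InvalidAccessException) means the
# account is not subscribed in that region.
check_security_hub() {
  for region in $(aws ec2 describe-regions \
      --query 'Regions[].RegionName' --output text); do
    if subscribed_at=$(aws securityhub describe-hub \
        --region "$region" \
        --query 'SubscribedAt' --output text 2>/dev/null); then
      echo "$region ENABLED since $subscribed_at"
    else
      echo "$region NOT ENABLED"
    fi
  done
}
```

Regions reported NOT ENABLED would then be remediated with aws securityhub enable-security-hub as described above.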
    Configure Patch Management settings in accordance with organizational standards. CC ID 08519
    [Ensure Auto Minor Version Upgrade feature is Enabled for RDS Instances (Automated) Description: Ensure that RDS database instances have the Auto Minor Version Upgrade flag enabled so that minor engine upgrades are applied automatically during the specified maintenance window. This way, RDS instances receive the new features, bug fixes, and security patches for their database engines. Rationale: AWS RDS will occasionally deprecate minor engine versions and provide new ones for an upgrade. When only the last version number within the release changes, the version change is considered minor. With the Auto Minor Version Upgrade feature enabled, version upgrades occur automatically during the specified maintenance window so your RDS instances can get the new features, bug fixes, and security patches for their database engines. Audit: From Console: 1. Log in to the AWS Management Console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on Databases. 3. Select the RDS instance you want to examine. 4. Click on the Maintenance and backups panel. 5. Under the Maintenance section, check the Auto Minor Version Upgrade status. • If the current status is set to Disabled, the feature is not set and minor engine upgrades released will not be applied to the selected RDS instance. From Command Line: 1. Run the describe-db-instances command to list all RDS database instance names available in the selected AWS region: aws rds describe-db-instances --region <region> --query 'DBInstances[*].DBInstanceIdentifier' 2. The command output should return each database instance identifier. 3. Run the describe-db-instances command again using an RDS instance identifier returned earlier to determine the Auto Minor Version Upgrade status for the selected instance: aws rds describe-db-instances --region <region> --db-instance-identifier <db-instance-identifier> --query 'DBInstances[*].AutoMinorVersionUpgrade' 4. The command output should return the feature's current status. 
If the current status is set to true, the feature is enabled and minor engine upgrades will be applied to the selected RDS instance. Remediation: From Console: 1. Log in to the AWS Management Console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on Databases. 3. Select the RDS instance you want to update. 4. Click on the Modify button at the top right. 5. On the Modify DB Instance page, in the Maintenance section, for Auto minor version upgrade click the Yes radio button. 6. At the bottom of the page click Continue, then check Apply Immediately to apply the changes immediately, or select Apply during the next scheduled maintenance window to avoid any downtime. 7. Review the changes and click Modify DB Instance. The instance status should change from available to modifying and back to available. Once the feature is enabled, the Auto Minor Version Upgrade status should change to Yes. From Command Line: 1. Run the describe-db-instances command to list all RDS database instance names available in the selected AWS region: aws rds describe-db-instances --region <region> --query 'DBInstances[*].DBInstanceIdentifier' 2. The command output should return each database instance identifier. 3. Run the modify-db-instance command to modify the selected RDS instance configuration. This command applies the changes immediately; remove --apply-immediately to apply the changes during the next scheduled maintenance window and avoid any downtime: aws rds modify-db-instance --region <region> --db-instance-identifier <db-instance-identifier> --auto-minor-version-upgrade --apply-immediately 4. The command output should reveal the new configuration metadata for the RDS instance; check the AutoMinorVersionUpgrade parameter value. 5. 
Run the describe-db-instances command to check whether the Auto Minor Version Upgrade feature has been successfully enabled: aws rds describe-db-instances --region <region> --db-instance-identifier <db-instance-identifier> --query 'DBInstances[*].AutoMinorVersionUpgrade' 6. The command output should return true, indicating the feature is enabled and minor engine upgrades will be applied to the selected RDS instance. 2.3.2]
    System hardening through configuration management Configuration
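The two-step command-line audit above (list instance identifiers, then query each instance's flag) can be combined into one loop. A minimal sketch, assuming AWS CLI v2 credentials with rds:DescribeDBInstances permission; the function name and region argument are illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: print each RDS instance in the given region together with
# its AutoMinorVersionUpgrade flag ("True"/"False"). Instances
# reporting False fail the benchmark check.
check_auto_minor_upgrade() {
  local region="$1"
  for db in $(aws rds describe-db-instances --region "$region" \
      --query 'DBInstances[].DBInstanceIdentifier' --output text); do
    flag=$(aws rds describe-db-instances --region "$region" \
        --db-instance-identifier "$db" \
        --query 'DBInstances[0].AutoMinorVersionUpgrade' --output text)
    echo "$db $flag"
  done
}
```

Each instance that prints False would then be remediated with aws rds modify-db-instance --auto-minor-version-upgrade as described in the remediation steps.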
    Configure "Select when Preview Builds and Feature Updates are received" to organizational standards. CC ID 15399 System hardening through configuration management Configuration
    Configure "Select when Quality Updates are received" to organizational standards. CC ID 15355 System hardening through configuration management Configuration
    Configure the "Check for missing Windows Updates" to organizational standards. CC ID 08520 System hardening through configuration management Configuration