A Review of SCS-C02 Dumps Questions

Your success in the Amazon-Web-Services SCS-C02 exam is our sole target, and we develop all our SCS-C02 braindumps in a way that facilitates the attainment of this target. Not only is our SCS-C02 study material the best you can find, it is also the most detailed and the most updated. Our SCS-C02 practice exams for Amazon-Web-Services SCS-C02 are written to the highest standards of technical accuracy.

Free demo questions for Amazon-Web-Services SCS-C02 Exam Dumps Below:

NEW QUESTION 1
A company has a large fleet of Linux Amazon EC2 instances and Windows EC2 instances that run in private subnets. The company wants all remote administration to be performed as securely as possible in the AWS Cloud.
Which solution will meet these requirements?

  • A. Do not use SSH-RSA private keys during the launch of new instances.
  • B. Implement AWS Systems Manager Session Manager.
  • C. Generate new SSH-RSA private keys for existing instances.
  • D. Implement AWS Systems Manager Session Manager.
  • E. Do not use SSH-RSA private keys during the launch of new instances.
  • F. Configure EC2 Instance Connect.
  • G. Generate new SSH-RSA private keys for existing instances.
  • H. Configure EC2 Instance Connect.

Answer: A

Explanation:
AWS Systems Manager Session Manager is a fully managed service that allows you to securely and remotely administer your EC2 instances without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager provides an interactive browser-based shell or CLI access to your instances, as well as port forwarding and auditing capabilities. Session Manager works with both Linux and Windows instances, and supports hybrid environments and edge devices.
EC2 Instance Connect is a feature that allows you to use SSH to connect to your Linux instances using short-lived keys that are generated on demand and delivered securely through the instance metadata service. EC2 Instance Connect does not require any additional software installation or configuration on the instance, but it does require inbound SSH (port 22) access to the instance and is supported only on Linux instances.
The correct answer is to use Session Manager, as it provides more security and flexibility than EC2 Instance Connect, and does not require SSH-RSA keys or inbound ports. Session Manager also works with Windows instances, while EC2 Instance Connect does not.
Verified References:
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Connect-using-EC2-Instance-Connect.html
https://repost.aws/questions/QUnV4R9EoeSdW0GT3cKBUR7w/what-is-the-difference-between-ec-2-ins
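As an illustrative sketch, the instance side of Session Manager needs an instance profile whose policy grants the SSM Agent its messaging-channel permissions. AWS ships the AmazonSSMManagedInstanceCore managed policy for this purpose; the hand-written Python dict below is a simplified stand-in for it, not the full policy:

```python
import json

# Simplified identity policy for an EC2 instance profile so that SSM Agent
# can register with Systems Manager and accept Session Manager sessions.
session_manager_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSessionManagerCore",
            "Effect": "Allow",
            "Action": [
                "ssm:UpdateInstanceInformation",
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(session_manager_policy, indent=2))
```

Note that no inbound security group rules are needed: the agent opens outbound channels to the Systems Manager endpoints, which is what makes Session Manager work without SSH keys or bastion hosts.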

NEW QUESTION 2
A business stores website images in an Amazon S3 bucket. The business serves the images to end users through Amazon CloudFront. The business recently learned that the images are accessible from countries in which it does not have a distribution license.
Which steps should the business take to safeguard the images and restrict their distribution? (Select two.)

  • A. Update the S3 bucket policy to restrict access to a CloudFront origin access identity (OAI).
  • B. Update the website DNS record to use an Amazon Route 53 geolocation record deny list of countries where the company lacks a license.
  • C. Add a CloudFront geo restriction deny list of countries where the company lacks a license.
  • D. Update the S3 bucket policy with a deny list of countries where the company lacks a license.
  • E. Enable the Restrict Viewer Access option in CloudFront to create a deny list of countries where the company lacks a license.

Answer: AC

Explanation:
For Enable Geo-Restriction, choose Yes. For Restriction Type, choose Whitelist to allow access from certain countries, or choose Blacklist to block access from certain countries. https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-geo-restriction/
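The deny-list behavior described above can be sketched as the GeoRestriction block of a CloudFront DistributionConfig; the country codes below are placeholders for wherever the company lacks a license:

```python
# Sketch of the Restrictions block of a CloudFront DistributionConfig.
blocked_countries = ["RU", "KP"]  # hypothetical unlicensed countries

geo_restriction = {
    "GeoRestriction": {
        "RestrictionType": "blacklist",   # the deny-list mode
        "Quantity": len(blocked_countries),
        "Items": blocked_countries,
    }
}

def is_allowed(country_code: str) -> bool:
    """Mirror CloudFront's geo restriction decision for a viewer's country."""
    gr = geo_restriction["GeoRestriction"]
    if gr["RestrictionType"] == "blacklist":
        return country_code not in gr["Items"]
    if gr["RestrictionType"] == "whitelist":
        return country_code in gr["Items"]
    return True  # "none" means no restriction
```

Pairing this with an S3 bucket policy that only trusts the CloudFront origin access identity is what closes the direct-to-S3 path.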

NEW QUESTION 3
A company uses an external identity provider to allow federation into different AWS accounts. A security engineer for the company needs to identify the federated user that terminated a production Amazon EC2 instance a week ago.
What is the FASTEST way for the security engineer to identify the federated user?

  • A. Review the AWS CloudTrail event history logs in an Amazon S3 bucket and look for the TerminateInstances event to identify the federated user from the role session name.
  • B. Filter the AWS CloudTrail event history for the TerminateInstances event and identify the assumed IAM role.
  • C. Review the AssumeRoleWithSAML event call in CloudTrail to identify the corresponding username.
  • D. Search the AWS CloudTrail logs for the TerminateInstances event and note the event time.
  • E. Review the IAM Access Advisor tab for all federated roles.
  • F. The last accessed time should match the time when the instance was terminated.
  • G. Use Amazon Athena to run a SQL query on the AWS CloudTrail logs stored in an Amazon S3 bucket and filter on the TerminateInstances event.
  • H. Identify the corresponding role and run another query to filter the AssumeRoleWithWebIdentity event for the user name.

Answer: B

Explanation:
The fastest way to identify the federated user who terminated a production Amazon EC2 instance is to filter the AWS CloudTrail event history for the TerminateInstances event and identify the assumed IAM role. Then, review the AssumeRoleWithSAML event call in CloudTrail to identify the corresponding username. This method does not require any additional tools or queries, and it directly links the IAM role with the federated user.
Option A is incorrect because the role session name may not be the same as the federated user name, and it may not be unique or descriptive enough to identify the user.
Option C is incorrect because the IAM Access Advisor tab only shows when a role was last accessed, not by whom or for what purpose. It also does not show the specific time of access, only the date.
Option D is incorrect because using Amazon Athena to run SQL queries on the AWS CloudTrail logs is not the fastest way to identify the federated user, as it requires creating a table schema and running multiple queries. It also assumes that the federation is done using web identity providers, not SAML providers, as indicated by the AssumeRoleWithWebIdentity event.
References:
AWS Identity and Access Management
Logging AWS STS API Calls with AWS CloudTrail
[Using Amazon Athena to Query S3 Data for CloudTrail Analysis]
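The correlation described above can be sketched with two hand-made events in roughly CloudTrail's JSON shape; every name and ARN here is hypothetical, and the point is only to show how the role session name in the TerminateInstances event links back to the SAML login:

```python
# Hand-made sample events loosely modeled on CloudTrail's JSON shape.
terminate_event = {
    "eventName": "TerminateInstances",
    "userIdentity": {
        "type": "AssumedRole",
        # The last path segment of an assumed-role ARN is the session name.
        "arn": "arn:aws:sts::111122223333:assumed-role/OpsRole/jdoe@example.com",
    },
}

assume_role_event = {
    "eventName": "AssumeRoleWithSAML",
    "userIdentity": {"type": "SAMLUser", "userName": "jdoe@example.com"},
    "requestParameters": {"roleArn": "arn:aws:iam::111122223333:role/OpsRole"},
}

def federated_user(term_event: dict, saml_event: dict):
    """Correlate the role session from TerminateInstances with the SAML login."""
    session_name = term_event["userIdentity"]["arn"].rsplit("/", 1)[-1]
    if saml_event["userIdentity"]["userName"] == session_name:
        return saml_event["userIdentity"]["userName"]
    return None
```

In real SAML setups the session name often matches the IdP username, but as the explanation notes, that is a convention, not a guarantee, which is why reviewing the AssumeRoleWithSAML event is the authoritative step.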

NEW QUESTION 4
A company has a batch-processing system that uses Amazon S3, Amazon EC2, and AWS Key Management Service (AWS KMS). The system uses two AWS accounts: Account A and Account B.
Account A hosts an S3 bucket that stores the objects that will be processed. The S3 bucket also stores the results of the processing. All the S3 bucket objects are encrypted by a KMS key that is managed in
Account A.
Account B hosts a VPC that has a fleet of EC2 instances that access the S3 bucket in Account A by using statements in the bucket policy. The VPC was created with DNS hostnames enabled and DNS resolution enabled.
A security engineer needs to update the design of the system without changing any of the system's code. No AWS API calls from the batch-processing EC2 instances can travel over the internet.
Which combination of steps will meet these requirements? (Select TWO.)

  • A. In the Account B VPC, create a gateway VPC endpoint for Amazon S3. For the gateway VPC endpoint, create a resource policy that allows the s3:GetObject, s3:ListBucket, s3:PutObject, and s3:PutObjectAcl actions for the S3 bucket.
  • B. In the Account B VPC, create an interface VPC endpoint for Amazon S3. For the interface VPC endpoint, create a resource policy that allows the s3:GetObject, s3:ListBucket, s3:PutObject, and s3:PutObjectAcl actions for the S3 bucket.
  • C. In the Account B VPC, create an interface VPC endpoint for AWS KMS.
  • D. For the interface VPC endpoint, create a resource policy that allows the kms:Encrypt, kms:Decrypt, and kms:GenerateDataKey actions for the KMS key.
  • E. Ensure that private DNS is turned on for the endpoint.
  • F. In the Account B VPC, create an interface VPC endpoint for AWS KMS.
  • G. For the interface VPC endpoint, create a resource policy that allows the kms:Encrypt, kms:Decrypt, and kms:GenerateDataKey actions for the KMS key.
  • H. Ensure that private DNS is turned off for the endpoint.
  • I. In the Account B VPC, verify that the S3 bucket policy allows the s3:PutObjectAcl action for cross-account use.
  • J. In the Account B VPC, create a gateway VPC endpoint for Amazon S3. For the gateway VPC endpoint, create a resource policy that allows the s3:GetObject, s3:ListBucket, and s3:PutObject actions for the S3 bucket.

Answer: BC
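A minimal sketch of the endpoint resource policy for the S3 endpoint might look like the following; the bucket name is a placeholder, and an analogous policy scoped to the kms:Encrypt, kms:Decrypt, and kms:GenerateDataKey actions would go on the KMS interface endpoint:

```python
# Sketch of a VPC endpoint resource policy for the Account B endpoint for S3.
bucket_arn = "arn:aws:s3:::example-batch-bucket"  # hypothetical bucket name

s3_endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl",
            ],
            # ListBucket applies to the bucket ARN; object actions need /*.
            "Resource": [bucket_arn, f"{bucket_arn}/*"],
        }
    ],
}
```

With both endpoints in place and private DNS on for the KMS interface endpoint, the existing S3 and KMS hostnames that the application code already uses resolve to private addresses, so no code changes are needed and no API calls traverse the internet.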

NEW QUESTION 5
An international company wants to combine AWS Security Hub findings across all the company's AWS Regions and from multiple accounts. In addition, the company wants to create a centralized custom dashboard to correlate these findings with operational data for deeper analysis and insights. The company needs an analytics tool to search and visualize Security Hub findings.
Which combination of steps will meet these requirements? (Select THREE.)

  • A. Designate an AWS account as a delegated administrator for Security Hub.
  • B. Publish events to Amazon CloudWatch from the delegated administrator account, all member accounts, and required Regions that are enabled for Security Hub findings.
  • C. Designate an AWS account in an organization in AWS Organizations as a delegated administrator for Security Hub.
  • D. Publish events to Amazon EventBridge from the delegated administrator account, all member accounts, and required Regions that are enabled for Security Hub findings.
  • E. In each Region, create an Amazon EventBridge rule to deliver findings to an Amazon Kinesis data stream.
  • F. Configure the Kinesis data streams to output the logs to a single Amazon S3 bucket.
  • G. In each Region, create an Amazon EventBridge rule to deliver findings to an Amazon Kinesis Data Firehose delivery stream.
  • H. Configure the Kinesis Data Firehose delivery streams to deliver the logs to a single Amazon S3 bucket.
  • I. Use AWS Glue DataBrew to crawl the Amazon S3 bucket and build the schema.
  • J. Use AWS Glue Data Catalog to query the data and create views to flatten nested attributes.
  • K. Build Amazon QuickSight dashboards by using Amazon Athena.
  • L. Partition the Amazon S3 data.
  • M. Use AWS Glue to crawl the S3 bucket and build the schema.
  • N. Use Amazon Athena to query the data and create views to flatten nested attributes.
  • O. Build Amazon QuickSight dashboards that use the Athena views.

Answer: BDF

Explanation:
The correct answer is B, D, and F. Designate an AWS account in an organization in AWS Organizations as a delegated administrator for Security Hub. Publish events to Amazon EventBridge from the delegated administrator account, all member accounts, and required Regions that are enabled for Security Hub findings. In each Region, create an Amazon EventBridge rule to deliver findings to an Amazon Kinesis Data Firehose delivery stream. Configure the Kinesis Data Firehose delivery streams to deliver the logs to a single Amazon S3 bucket. Partition the Amazon S3 data. Use AWS Glue to crawl the S3 bucket and build the schema. Use Amazon Athena to query the data and create views to flatten nested attributes. Build Amazon QuickSight dashboards that use the Athena views.
According to the AWS documentation, AWS Security Hub is a service that provides you with a comprehensive view of your security state across your AWS accounts, and helps you check your environment against security standards and best practices. You can use Security Hub to aggregate security findings from various sources, such as AWS services, partner products, or your own applications.
To use Security Hub with multiple AWS accounts and Regions, you need to enable AWS Organizations with all features enabled. This allows you to centrally manage your accounts and apply policies across your organization. You can also use Security Hub as a service principal for AWS Organizations, which lets you designate a delegated administrator account for Security Hub. The delegated administrator account can enable Security Hub automatically in all existing and future accounts in your organization, and can view and manage findings from all accounts.
According to the AWS documentation, Amazon EventBridge is a serverless event bus that makes it easy to connect applications using data from your own applications, integrated software as a service (SaaS) applications, and AWS services. You can use EventBridge to create rules that match events from various sources and route them to targets for processing.
To use EventBridge with Security Hub findings, you need to enable Security Hub as an event source in EventBridge. This will allow you to publish events from Security Hub to EventBridge in the same Region. You can then create EventBridge rules that match Security Hub findings based on criteria such as severity, type, or resource. You can also specify targets for your rules, such as Lambda functions, SNS topics, or Kinesis Data Firehose delivery streams.
According to the AWS documentation, Amazon Kinesis Data Firehose is a fully managed service that delivers real-time streaming data to destinations such as Amazon S3, Amazon Redshift, Amazon Elasticsearch Service (Amazon ES), and Splunk. You can use Kinesis Data Firehose to transform and enrich your data before delivering it to your destination.
To use Kinesis Data Firehose with Security Hub findings, you need to create a Kinesis Data Firehose delivery stream in each Region where you have enabled Security Hub. You can then configure the delivery stream to receive events from EventBridge as a source, and deliver the logs to a single S3 bucket as a destination. You can also enable data transformation or compression on the delivery stream if needed.
According to the AWS documentation, Amazon S3 is an object storage service that offers scalability, data availability, security, and performance. You can use S3 to store and retrieve any amount of data from anywhere on the web. You can also use S3 features such as lifecycle management, encryption, versioning, and replication to optimize your storage.
To use S3 with Security Hub findings, you need to create an S3 bucket that will store the logs from Kinesis Data Firehose delivery streams. You can then partition the data in the bucket by using prefixes such as account ID or Region. This will improve the performance and cost-effectiveness of querying the data.
According to the AWS documentation, AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy to prepare and load your data for analytics. You can use Glue to crawl your data sources, identify data formats, and suggest schemas and transformations. You can also use Glue Data Catalog as a central metadata repository for your data assets.
To use Glue with Security Hub findings, you need to create a Glue crawler that will crawl the S3 bucket and build the schema for the data. The crawler will create tables in the Glue Data Catalog that you can query using standard SQL.
According to the AWS documentation, Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. You can use Athena with Glue Data Catalog as a metadata store for your tables.
To use Athena with Security Hub findings, you need to create views in Athena that will flatten nested attributes in the data. For example, you can create views that extract fields such as account ID, Region, resource type, resource ID, finding type, finding title, and finding description from the JSON data. You can then query the views using SQL and join them with other tables if needed.
According to the AWS documentation, Amazon QuickSight is a fast, cloud-powered business intelligence service that makes it easy to deliver insights to everyone in your organization. You can use QuickSight to create and publish interactive dashboards that include machine learning insights. You can also use QuickSight to connect to various data sources, such as Athena, S3, or RDS.
To use QuickSight with Security Hub findings, you need to create QuickSight dashboards that use the Athena views as data sources. You can then visualize and analyze the findings using charts, graphs, maps, or tables. You can also apply filters, calculations, or aggregations to the data. You can then share the dashboards with your users or embed them in your applications.
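The flattening that the Athena views perform can be sketched in plain Python. The finding below is a hand-made fragment in the general shape of a Security Hub finding (one finding can reference several resources), not real data:

```python
# Hand-made fragment in the general shape of a Security Hub finding.
finding = {
    "AwsAccountId": "111122223333",
    "Region": "us-east-1",
    "Title": "S3 bucket allows public read",
    "Resources": [
        {"Type": "AwsS3Bucket", "Id": "arn:aws:s3:::example-bucket"},
    ],
}

def flatten(f: dict) -> list[dict]:
    """Project the nested finding into one flat row per referenced resource,
    the same shape an Athena view would expose to QuickSight."""
    return [
        {
            "account_id": f["AwsAccountId"],
            "region": f["Region"],
            "title": f["Title"],
            "resource_type": r["Type"],
            "resource_id": r["Id"],
        }
        for r in f["Resources"]
    ]
```

In Athena itself this projection would be expressed with UNNEST over the nested array column; the Python version just makes the row shape concrete.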

NEW QUESTION 6
A company is running its workloads in a single AWS Region and uses AWS Organizations. A security engineer must implement a solution to prevent users from launching resources in other Regions.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create an IAM policy that has an aws:RequestedRegion condition that allows actions only in the designated Region. Attach the policy to all users.
  • B. Create an IAM policy that has an aws:RequestedRegion condition that denies actions that are not in the designated Region. Attach the policy to the AWS account in AWS Organizations.
  • C. Create an IAM policy that has an aws:RequestedRegion condition that allows the desired actions. Attach the policy only to the users who are in the designated Region.
  • D. Create an SCP that has an aws:RequestedRegion condition that denies actions that are not in the designated Region.
  • E. Attach the SCP to the AWS account in AWS Organizations.

Answer: D

Explanation:
Although you can use an IAM policy to prevent users from launching resources in other Regions, the best practice is to use an SCP when using AWS Organizations. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_general.htm
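A minimal sketch of such an SCP, assuming us-east-1 as the designated Region. Real deployments usually carve out global services (IAM, STS, CloudFront, and so on) with NotAction, which is omitted here for brevity:

```python
# Sketch of an SCP that denies any action requested outside one Region.
designated_region = "us-east-1"  # placeholder for the company's Region

region_lock_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideDesignatedRegion",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                # aws:RequestedRegion is the Region the API call targets.
                "StringNotEquals": {"aws:RequestedRegion": designated_region}
            },
        }
    ],
}
```

Because an SCP applies to every principal in the attached account, including ones created later, it carries less operational overhead than attaching IAM policies user by user.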

NEW QUESTION 7
A security engineer needs to develop a process to investigate and respond to potential security events on a company's Amazon EC2 instances. All the EC2 instances are backed by Amazon Elastic Block Store (Amazon EBS). The company uses AWS Systems Manager to manage all the EC2 instances and has installed Systems Manager Agent (SSM Agent) on all the EC2 instances.
The process that the security engineer is developing must comply with AWS security best practices and must meet the following requirements:
• A compromised EC2 instance's volatile memory and non-volatile memory must be preserved for forensic purposes.
• A compromised EC2 instance's metadata must be updated with corresponding incident ticket information.
• A compromised EC2 instance must remain online during the investigation but must be isolated to prevent the spread of malware.
• Any investigative activity during the collection of volatile data must be captured as part of the process.
Which combination of steps should the security engineer take to meet these requirements with the LEAST operational overhead? (Select THREE.)

  • A. Gather any relevant metadata for the compromised EC2 instance.
  • B. Enable termination protection.
  • C. Isolate the instance by updating the instance's security groups to restrict access.
  • D. Detach the instance from any Auto Scaling groups that the instance is a member of.
  • E. Deregister the instance from any Elastic Load Balancing (ELB) resources.
  • F. Gather any relevant metadata for the compromised EC2 instance.
  • G. Enable termination protection.
  • H. Move the instance to an isolation subnet that denies all source and destination traffic.
  • I. Associate the instance with the subnet to restrict access.
  • J. Detach the instance from any Auto Scaling groups that the instance is a member of.
  • K. Deregister the instance from any Elastic Load Balancing (ELB) resources.
  • L. Use Systems Manager Run Command to invoke scripts that collect volatile data.
  • M. Establish a Linux SSH or Windows Remote Desktop Protocol (RDP) session to the compromised EC2 instance to invoke scripts that collect volatile data.
  • N. Create a snapshot of the compromised EC2 instance's EBS volume for follow-up investigations.
  • O. Tag the instance with any relevant metadata and incident ticket information.
  • P. Create a Systems Manager State Manager association to generate an EBS volume snapshot of the compromised EC2 instance.
  • Q. Tag the instance with any relevant metadata and incident ticket information.

Answer: ACE

NEW QUESTION 8
A company has developed a new Amazon RDS database application. The company must secure the RDS database credentials for encryption in transit and encryption at rest. The company also must rotate the credentials automatically on a regular basis.
Which solution meets these requirements?

  • A. Use AWS Systems Manager Parameter Store to store the database credentials.
  • B. Configure automatic rotation of the credentials.
  • C. Use AWS Secrets Manager to store the database credentials.
  • D. Configure automatic rotation of the credentials.
  • E. Store the database credentials in an Amazon S3 bucket that is configured with server-side encryption with S3 managed encryption keys (SSE-S3). Rotate the credentials with IAM database authentication.
  • F. Store the database credentials in Amazon S3 Glacier, and use S3 Glacier Vault Lock. Configure an AWS Lambda function to rotate the credentials on a scheduled basis.

Answer: B
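Secrets Manager is the service built for this: it encrypts secrets at rest with KMS, serves them over TLS, and supports scheduled rotation. As a sketch, the setup boils down to two parameter sets; the secret name, credentials, and Lambda ARN below are placeholders, and the dicts are shown as data only rather than sent to AWS:

```python
import json

# Parameters in the shape Secrets Manager's CreateSecret call expects.
create_secret_params = {
    "Name": "prod/app/rds-credentials",  # hypothetical secret name
    "SecretString": json.dumps(
        {"username": "appuser", "password": "example-only"}  # placeholders
    ),
}

# Parameters in the shape of the RotateSecret call: a rotation Lambda plus
# a schedule. The ARN is a placeholder.
rotate_secret_params = {
    "SecretId": "prod/app/rds-credentials",
    "RotationLambdaARN": "arn:aws:lambda:us-east-1:111122223333:function:rds-rotator",
    "RotationRules": {"AutomaticallyAfterDays": 30},
}
```

Parameter Store also encrypts SecureString values, but it has no built-in rotation, which is what rules out that pairing here.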

NEW QUESTION 9
A company's Security Engineer has been tasked with restricting a contractor's IAM account access to the company's Amazon EC2 console without providing access to any other AWS services. The contractor's IAM account must not be able to gain access to any other AWS service, even if the IAM account is assigned additional permissions based on IAM group membership.
What should the Security Engineer do to meet these requirements?

  • A. Create an inline IAM user policy that allows for Amazon EC2 access for the contractor's IAM user.
  • B. Create an IAM permissions boundary policy that allows Amazon EC2 access.
  • C. Associate the contractor's IAM account with the IAM permissions boundary policy.
  • D. Create an IAM group with an attached policy that allows for Amazon EC2 access.
  • E. Associate the contractor's IAM account with the IAM group.
  • F. Create an IAM role that allows for EC2 and explicitly denies all other services.
  • G. Instruct the contractor to always assume this role.

Answer: B
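The effect of a permissions boundary can be sketched as follows: the boundary caps what identity policies can grant, so an action succeeds only when both the boundary and an identity policy allow it, which is exactly why later group memberships cannot widen the contractor's access. The policy and the evaluation helper below are a minimal illustration, not AWS's full evaluation logic:

```python
# Sketch of a permissions boundary that caps the contractor at Amazon EC2.
ec2_boundary = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "EC2Only", "Effect": "Allow", "Action": "ec2:*", "Resource": "*"}
    ],
}

def effective_allow(action: str, identity_policy_allows: bool) -> bool:
    """Simplified evaluation: with a boundary set, an action is allowed only
    if BOTH the identity policies and the boundary allow it."""
    boundary_allows = action.startswith("ec2:")
    return identity_policy_allows and boundary_allows
```

So even if a group policy later grants s3:GetObject, the boundary vetoes it, while ec2:* actions granted to the user keep working.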

NEW QUESTION 10
A company’s public Application Load Balancer (ALB) recently experienced a DDoS attack. To mitigate this issue, the company deployed Amazon CloudFront in front of the ALB so that users would not directly access the Amazon EC2 instances behind the ALB.
The company discovers that some traffic is still coming directly into the ALB and is still being handled by the EC2 instances.
Which combination of steps should the company take to ensure that the EC2 instances will receive traffic only from CloudFront? (Choose two.)

  • A. Configure CloudFront to add a cache key policy to allow a custom HTTP header that CloudFront sends to the ALB.
  • B. Configure CloudFront to add a custom HTTP header to requests that CloudFront sends to the ALB.
  • C. Configure the ALB to forward only requests that contain the custom HTTP header.
  • D. Configure the ALB and CloudFront to use the X-Forwarded-For header to check client IP addresses.
  • E. Configure the ALB and CloudFront to use the same X.509 certificate that is generated by AWS Certificate Manager (ACM).

Answer: BC

Explanation:
To prevent users from directly accessing an Application Load Balancer and allow access only through CloudFront, complete these high-level steps: Configure CloudFront to add a custom HTTP header to requests that it sends to the Application Load Balancer. Configure the Application Load Balancer to only forward requests that contain the custom HTTP header. (Optional) Require HTTPS to improve the security of this solution.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/restrict-access-to-load-balancer.html
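The two steps can be modeled minimally in Python: CloudFront injects a secret custom header into origin requests, and an ALB listener rule forwards only requests that carry it. The header name and value here are placeholders; in practice the value should be a long random secret kept out of source control and rotated periodically:

```python
# Placeholder header name and secret shared between CloudFront and the ALB.
CUSTOM_HEADER = "x-origin-verify"
CUSTOM_VALUE = "replace-with-long-random-secret"

def alb_accepts(headers: dict) -> bool:
    """Model the ALB listener rule: forward only requests whose custom
    header matches; everything else gets a fixed 403 response."""
    return headers.get(CUSTOM_HEADER) == CUSTOM_VALUE

# A request relayed through CloudFront carries the injected header...
cloudfront_request = {CUSTOM_HEADER: CUSTOM_VALUE, "host": "app.example.com"}
# ...while a request sent straight to the ALB's DNS name does not.
direct_request = {"host": "alb-1234.us-east-1.elb.amazonaws.com"}
```

Combining this with TLS on both legs (as the linked guide suggests) keeps the secret header from being observed in transit.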

NEW QUESTION 11
A Security Engineer creates an Amazon S3 bucket policy that denies access to all users. A few days later, the Security Engineer adds an additional statement to the bucket policy to allow read-only access to one other employee. Even after updating the policy, the employee still receives an access denied message.
What is the likely cause of this access denial?

  • A. The ACL in the bucket needs to be updated
  • B. The IAM policy does not allow the user to access the bucket
  • C. It takes a few minutes for a bucket policy to take effect
  • D. The allow permission is being overridden by the deny

Answer: D

NEW QUESTION 12
A company is deploying an Amazon EC2-based application. The application will include a custom health-checking component that produces health status data in JSON format. A Security Engineer must implement a secure solution to monitor application availability in near-real time by analyzing the health status data.
Which approach should the Security Engineer use?

  • A. Use Amazon CloudWatch monitoring to capture Amazon EC2 and networking metrics. Visualize metrics using Amazon CloudWatch dashboards.
  • B. Run the Amazon Kinesis Agent to write the status data to Amazon Kinesis Data Firehose. Store the streaming data from Kinesis Data Firehose in Amazon Redshift.
  • C. Then run a script on the pooled data and analyze the data in Amazon Redshift.
  • D. Write the status data directly to a public Amazon S3 bucket from the health-checking component. Configure S3 events to invoke an AWS Lambda function that analyzes the data.
  • E. Generate events from the health-checking component and send them to Amazon CloudWatch Events. Include the status data as event payload.
  • F. Use CloudWatch Events rules to invoke an AWS Lambda function that analyzes the data.

Answer: A

Explanation:
Amazon CloudWatch monitoring is a service that collects and tracks metrics from AWS resources and applications, and provides visualization tools and alarms to monitor performance and availability. The health status data in JSON format can be sent to CloudWatch as custom metrics, and then displayed in CloudWatch dashboards. The other options are either inefficient or insecure for monitoring application availability in near-real time.
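Turning one JSON health report into a CloudWatch custom metric can be sketched as the parameter dict that the PutMetricData API expects; the namespace and dimension names are placeholders, and nothing is actually sent to AWS here:

```python
import json

# A sample report in the shape the health-checking component might emit.
health_report = json.loads(
    '{"service": "checkout", "healthy": true, "latency_ms": 42}'
)

# Parameters in the shape CloudWatch's PutMetricData call expects.
put_metric_params = {
    "Namespace": "Custom/AppHealth",  # hypothetical namespace
    "MetricData": [
        {
            "MetricName": "Healthy",
            "Dimensions": [{"Name": "Service", "Value": health_report["service"]}],
            # Encode the boolean health flag as 1.0/0.0 so alarms can
            # trigger when the average drops below 1.
            "Value": 1.0 if health_report["healthy"] else 0.0,
            "Unit": "Count",
        }
    ],
}
```

A CloudWatch alarm on this metric then gives the near-real-time availability signal, with the dashboard providing the visualization.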

NEW QUESTION 13
A company has a web-based application using Amazon CloudFront and running on Amazon Elastic Container Service (Amazon ECS) behind an Application Load Balancer (ALB). The ALB is terminating TLS and balancing load across ECS service tasks. A security engineer needs to design a solution to ensure that application content is accessible only through CloudFront and that it is never accessible directly.
How should the security engineer build the MOST secure solution?

  • A. Add an origin custom header. Set the viewer protocol policy to HTTP and HTTPS. Set the origin protocol policy to HTTPS only. Update the application to validate the CloudFront custom header.
  • B. Add an origin custom header. Set the viewer protocol policy to HTTPS only. Set the origin protocol policy to match viewer. Update the application to validate the CloudFront custom header.
  • C. Add an origin custom header. Set the viewer protocol policy to redirect HTTP to HTTPS. Set the origin protocol policy to HTTP only. Update the application to validate the CloudFront custom header.
  • D. Add an origin custom header. Set the viewer protocol policy to redirect HTTP to HTTPS.
  • E. Set the origin protocol policy to HTTPS only. Update the application to validate the CloudFront custom header.

Answer: D

Explanation:
To ensure that application content is accessible only through CloudFront and not directly, the security engineer should do the following:
Add an origin custom header. This is a header that CloudFront adds to the requests that it sends to the origin, but viewers cannot see or modify.
Set the viewer protocol policy to redirect HTTP to HTTPS. This ensures that the viewers always use HTTPS when they access the website through CloudFront.
Set the origin protocol policy to HTTPS only. This ensures that CloudFront always uses HTTPS when it connects to the origin.
Update the application to validate the CloudFront custom header. This means that the application checks if the request has the custom header and only responds if it does. Otherwise, it denies or ignores the request. This prevents users from bypassing CloudFront and accessing the content directly on the origin.

NEW QUESTION 14
A company wants to deploy a distributed web application on a fleet of EC2 instances. The fleet will be fronted by a Classic Load Balancer that will be configured to terminate the TLS connection. The company wants to make sure that all past and current TLS traffic to the Classic Load Balancer stays secure even if the certificate private key is leaked.
To ensure the company meets these requirements, a Security Engineer can configure a Classic Load Balancer with:

  • A. An HTTPS listener that uses a certificate that is managed by AWS Certificate Manager.
  • B. An HTTPS listener that uses a custom security policy that allows only perfect forward secrecy cipher suites.
  • C. An HTTPS listener that uses the latest predefined ELBSecurityPolicy-TLS-1-2-2017-01 security policy.
  • D. A TCP listener that uses a custom security policy that allows only perfect forward secrecy cipher suites.

Answer: B

Explanation:
This configures a Classic Load Balancer with perfect forward secrecy cipher suites. Perfect forward secrecy is a property of encryption protocols that ensures that past and current TLS traffic stays secure even if the certificate private key is leaked. Cipher suites are sets of algorithms that determine how encryption is performed. A custom security policy is a set of cipher suites and protocols that you can select for your load balancer to support. An HTTPS listener is a process that checks for connection requests using the encrypted SSL/TLS protocol. By using an HTTPS listener with a custom security policy that allows only perfect forward secrecy cipher suites, you can ensure that your Classic Load Balancer meets the requirements. The other options are either invalid or insufficient for configuring a Classic Load Balancer with perfect forward secrecy cipher suites.
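A sketch of what "only perfect forward secrecy cipher suites" means in practice: every allowed suite uses ephemeral ECDHE key exchange, so a recorded session cannot be decrypted later with the leaked certificate key. The suite names below are common OpenSSL-style examples; the exact catalog available depends on the load balancer's security policy options:

```python
# Example custom security policy allow list: ECDHE-only cipher suites.
pfs_cipher_suites = [
    "ECDHE-ECDSA-AES128-GCM-SHA256",
    "ECDHE-RSA-AES128-GCM-SHA256",
    "ECDHE-ECDSA-AES256-GCM-SHA384",
    "ECDHE-RSA-AES256-GCM-SHA384",
]

def provides_forward_secrecy(suite: str) -> bool:
    """Ephemeral (EC)DHE key exchange is what gives perfect forward secrecy:
    session keys are negotiated per connection and never derivable from the
    certificate's long-term private key."""
    return suite.startswith(("ECDHE-", "DHE-"))

# Sanity check: the whole allow list is forward secret.
assert all(provides_forward_secrecy(s) for s in pfs_cipher_suites)
```

Static-RSA suites (for example AES128-GCM-SHA256 without an ECDHE prefix) fail this test, which is why a custom policy excluding them is required rather than the predefined policy in option C.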

NEW QUESTION 15
A company's Chief Security Officer has requested that a Security Analyst review and improve the security posture of each company AWS account. The Security Analyst decides to do this by improving AWS account root user security.
Which actions should the Security Analyst take to meet these requirements? (Select THREE.)

  • A. Delete the access keys for the account root user in every account.
  • B. Create an admin IAM user with administrative privileges and delete the account root user in every account.
  • C. Implement a strong password to help protect account-level access to the IAM Management Console by the account root user.
  • D. Enable multi-factor authentication (MFA) on every account root user in all accounts.
  • E. Create a custom IAM policy to limit permissions to required actions for the account root user and attach the policy to the account root user.
  • F. Attach an IAM role to the account root user to make use of the automated credential rotation in AWS STS.

Answer: ACD

Explanation:
These are the actions that improve AWS account root user security. The account root user has complete access to all AWS resources and services in an account, so protecting it from unauthorized or accidental use is critical. Deleting the access keys for the account root user in every account prevents programmatic access by the root user, which reduces the risk of compromise or misuse. Implementing a strong password helps protect account-level console access by the root user. Enabling MFA on every account root user in all accounts adds an extra layer of security for console access by requiring a verification code in addition to a password. The remaining options are not possible: the account root user cannot be deleted, and IAM policies and IAM roles cannot be attached to the account root user.


P.S. Dumps-hub.com now are offering 100% pass ensure SCS-C02 dumps! All SCS-C02 exam questions have been updated with correct answers: https://www.dumps-hub.com/SCS-C02-dumps.html (235 New Questions)