A Review Of Validated SAA-C03 Simulations

Master the AWS Certified Solutions Architect - Associate (SAA-C03) content and be ready for exam-day success with these Testking SAA-C03 exam answers. We guarantee it! We make it a reality by giving you real SAA-C03 questions in our Amazon Web Services SAA-C03 braindumps. The latest 100% valid Amazon Web Services SAA-C03 exam questions are available on the page below. Use our Amazon Web Services SAA-C03 braindumps and pass your exam.

Amazon-Web-Services SAA-C03 Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1

A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received.
Which solution will meet these requirements?

  • A. Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
  • B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
  • C. Use an API Gateway authorizer to block any requests while the application processes an order.
  • D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.

Answer: B

Explanation:
To ensure that orders are processed in the order that they are received, the best solution is to use an Amazon SQS FIFO (First-In-First-Out) queue. This type of queue maintains the exact order in which messages are sent and received. In this case, the application can send information about new orders to an Amazon API Gateway REST API, which can then use an API Gateway integration to send a message to an Amazon SQS FIFO queue for processing. The queue can then be configured to invoke an AWS Lambda function to perform the necessary processing on each order. This ensures that orders are processed in the exact order in which they are received.
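The ordering guarantee comes from the FIFO queue's message group. As a rough sketch of the parameters involved (the queue URL, group ID, and order payload are illustrative assumptions; in the actual architecture the API Gateway service integration, not application code, performs the send):

```python
import json

# Hypothetical FIFO queue URL -- a real one comes from your own account/Region.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"

def build_order_message(order_id: str, payload: dict) -> dict:
    """Build SQS SendMessage parameters for a FIFO queue.

    MessageGroupId keeps all orders in one group so they are delivered in
    the exact order they were sent; MessageDeduplicationId prevents
    accidental duplicates within the 5-minute deduplication window.
    """
    return {
        "QueueUrl": QUEUE_URL,
        "MessageBody": json.dumps(payload),
        "MessageGroupId": "orders",           # ordering is preserved per group
        "MessageDeduplicationId": order_id,   # exactly-once within 5 minutes
    }

params = build_order_message("order-1001", {"item": "book", "qty": 2})
# A real producer would now call: boto3.client("sqs").send_message(**params)
```

Note that a standard queue (option D) offers no such guarantee: it provides best-effort ordering only, which is why it fails the requirement.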

NEW QUESTION 2

A solutions architect is using an AWS CloudFormation template to deploy a three-tier web application. The web application consists of a web tier and an application tier that stores and retrieves user data in Amazon DynamoDB tables. The web and application tiers are hosted on Amazon EC2 instances, and the database tier is not publicly accessible. The application EC2 instances need to access the DynamoDB tables without exposing API credentials in the template.
What should the solutions architect do to meet these requirements?

  • A. Create an IAM role to read the DynamoDB tables. Associate the role with the application instances by referencing an instance profile.
  • B. Create an IAM role that has the required permissions to read and write from the DynamoDB tables. Add the role to the EC2 instance profile, and associate the instance profile with the application instances.
  • C. Use the parameter section in the AWS CloudFormation template to have the user input access and secret keys from an already-created IAM user that has the required permissions to read and write from the DynamoDB tables.
  • D. Create an IAM user in the AWS CloudFormation template that has the required permissions to read and write from the DynamoDB tables. Use the GetAtt function to retrieve the access and secret keys, and pass them to the application instances through the user data.

Answer: B

Explanation:
It allows the application EC2 instances to access the DynamoDB tables without exposing API credentials in the template. By creating an IAM role that has the required permissions to read and write from the DynamoDB tables and adding it to the EC2 instance profile, the application instances can use temporary security credentials that are automatically rotated by AWS. This is a secure and best-practice way to grant access to AWS resources from EC2 instances.
References:
✑ IAM Roles for Amazon EC2
✑ Using Instance Profiles
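As an illustrative sketch of the role-plus-instance-profile pattern, the relevant CloudFormation resources can be built as a plain Python dict. All logical names (`AppRole`, `UserTable`, the AMI ID) are hypothetical, and `UserTable` is assumed to be a DynamoDB table defined elsewhere in the template:

```python
import json

# Minimal sketch of the resources from option B; no credentials ever appear.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                # Let EC2 assume the role on the instances' behalf.
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "ec2.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                },
                "Policies": [{
                    "PolicyName": "DynamoDBReadWrite",
                    "PolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [{
                            "Effect": "Allow",
                            "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                                       "dynamodb:Query", "dynamodb:UpdateItem"],
                            # UserTable is assumed to exist elsewhere in the template.
                            "Resource": {"Fn::GetAtt": ["UserTable", "Arn"]},
                        }],
                    },
                }],
            },
        },
        "AppInstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {"Roles": [{"Ref": "AppRole"}]},
        },
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",  # placeholder
                "IamInstanceProfile": {"Ref": "AppInstanceProfile"},
            },
        },
    },
}

rendered = json.dumps(template, indent=2)
```

On the instance, the AWS SDK then picks up the role's temporary credentials automatically from the instance metadata service; the application code never handles access keys.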

NEW QUESTION 3

A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream data each day.
What should a solutions architect do to transmit and process the clickstream data?

  • A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics.
  • B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis.
  • C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis.
  • D. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis.

Answer: D

Explanation:
https://aws.amazon.com/es/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/

NEW QUESTION 4

A company is building a new dynamic ordering website. The company wants to minimize server maintenance and patching. The website must be highly available and must scale read and write capacity as quickly as possible to meet changes in user demand.
Which solution will meet these requirements?

  • A. Host static content in Amazon S3. Host dynamic content by using Amazon API Gateway and AWS Lambda. Use Amazon DynamoDB with on-demand capacity for the database. Configure Amazon CloudFront to deliver the website content.
  • B. Host static content in Amazon S3. Host dynamic content by using Amazon API Gateway and AWS Lambda. Use Amazon Aurora with Aurora Auto Scaling for the database. Configure Amazon CloudFront to deliver the website content.
  • C. Host all the website content on Amazon EC2 instances. Create an Auto Scaling group to scale the EC2 instances. Use an Application Load Balancer to distribute traffic. Use Amazon DynamoDB with provisioned write capacity for the database.
  • D. Host all the website content on Amazon EC2 instances. Create an Auto Scaling group to scale the EC2 instances. Use an Application Load Balancer to distribute traffic. Use Amazon Aurora with Aurora Auto Scaling for the database.

Answer: A

Explanation:
The key phrase in the question is "must scale read and write capacity." Aurora Auto Scaling adds or removes read replicas only, so it does not scale write capacity. Amazon DynamoDB has two read/write capacity modes for processing reads and writes on your tables: on-demand, and provisioned (the default, free-tier eligible). On-demand mode scales both read and write throughput instantly with demand. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
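A minimal sketch of creating such an on-demand table (the table and key names are illustrative):

```python
# CreateTable parameters for DynamoDB on-demand capacity mode.
create_table_params = {
    "TableName": "Orders",
    "AttributeDefinitions": [
        {"AttributeName": "orderId", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "orderId", "KeyType": "HASH"},
    ],
    # On-demand mode: no ReadCapacityUnits/WriteCapacityUnits to manage;
    # the table scales both read AND write throughput with traffic.
    "BillingMode": "PAY_PER_REQUEST",
}
# A real deployment would call: boto3.client("dynamodb").create_table(**create_table_params)
```

Note the absence of a `ProvisionedThroughput` key, which would be required in provisioned mode.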

NEW QUESTION 5

A company is developing an application that will run on a production Amazon Elastic Kubernetes Service (Amazon EKS) cluster The EKS cluster has managed node groups that are provisioned with On-Demand Instances.
The company needs a dedicated EKS cluster for development work. The company will use the development cluster infrequently to test the resiliency of the application. The EKS cluster must manage all the nodes.
Which solution will meet these requirements MOST cost-effectively?

  • A. Create a managed node group that contains only Spot Instances.
  • B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances.
  • C. Create an Auto Scaling group that has a launch configuration that uses Spot Instances. Configure the user data to add the nodes to the EKS cluster.
  • D. Create a managed node group that contains only On-Demand Instances.

Answer: A

Explanation:
Spot Instances are EC2 instances that are available at up to a 90% discount compared to On-Demand prices. Spot Instances are suitable for stateless, fault-tolerant, and flexible workloads that can tolerate interruptions. Spot Instances can be reclaimed by EC2 when the demand for On-Demand capacity increases, but they provide a two-minute warning before termination. EKS managed node groups automate the provisioning and lifecycle management of nodes for EKS clusters. Managed node groups can use Spot Instances to reduce costs and scale the cluster based on demand. Managed node groups also support features such as Capacity Rebalancing and Capacity Optimized allocation strategy to improve the availability and resilience of Spot Instances. This solution will meet the requirements most cost-effectively, as it leverages the lowest-priced EC2 capacity and does not require any manual intervention.
References:
✑ 1 explains how to create and use managed node groups with EKS.
✑ 2 describes how to use Spot Instances with managed node groups.
✑ 3 provides an overview of Spot Instances and their benefits.
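A sketch of the EKS CreateNodegroup request for an all-Spot managed node group (the cluster name, subnets, and role ARN are illustrative placeholders):

```python
# Parameters for an all-Spot EKS managed node group.
nodegroup_params = {
    "clusterName": "dev-cluster",
    "nodegroupName": "dev-spot-nodes",
    "capacityType": "SPOT",  # up to ~90% cheaper than On-Demand
    # Offering multiple instance types improves the chance of obtaining
    # Spot capacity and reduces the impact of interruptions.
    "instanceTypes": ["m5.large", "m5a.large", "m4.large"],
    "scalingConfig": {"minSize": 0, "maxSize": 4, "desiredSize": 2},
    "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
    "nodeRole": "arn:aws:iam::123456789012:role/eksNodeRole",
}
# A real call would be: boto3.client("eks").create_nodegroup(**nodegroup_params)
```

Because the development cluster is used infrequently for resiliency testing, Spot interruptions are acceptable, and `minSize: 0` lets the group scale to nothing when idle.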

NEW QUESTION 6

A company uses an organization in AWS Organizations to manage AWS accounts that contain applications. The company sets up a dedicated monitoring member account in the organization. The company wants to query and visualize observability data across the accounts by using Amazon CloudWatch.
Which solution will meet these requirements?

  • A. Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS CloudFormation template provided by the monitoring account in each AWS account to share the data with the monitoring account.
  • B. Set up service control policies (SCPs) to provide access to CloudWatch in the monitoring account under the Organizations root organizational unit (OU).
  • C. Configure a new IAM user in the monitoring account. In each AWS account, configure an IAM policy to have access to query and visualize the CloudWatch data in the account. Attach the new IAM policy to the new IAM user.
  • D. Create a new IAM user in the monitoring account. Create cross-account IAM policies in each AWS account. Attach the IAM policies to the new IAM user.

Answer: A

Explanation:
CloudWatch cross-account observability is a feature that allows you to monitor and troubleshoot applications that span multiple accounts within a Region. You can seamlessly search, visualize, and analyze your metrics, logs, traces, and Application Insights applications in any of the linked accounts without account boundaries1.
To enable CloudWatch cross-account observability, you need to set up one or more AWS accounts as monitoring accounts and link them with multiple source accounts. A monitoring account is a central AWS account that can view and interact with observability data shared by other accounts. A source account is an individual AWS account that shares observability data and resources with one or more monitoring accounts1. To create links between monitoring accounts and source accounts, you can use the CloudWatch console, the AWS CLI, or the AWS API. You can also use AWS Organizations to link accounts in an organization or organizational unit to the monitoring account1.
CloudWatch provides a CloudFormation template that you can deploy in each source account to share observability data with the monitoring account. The template creates a sink resource in the monitoring account and an observability link resource in the source account. The template also creates the necessary IAM roles and policies to allow cross-account access to the observability data2. Therefore, the solution that meets the requirements of the question is to enable CloudWatch cross-account observability for the monitoring account and deploy the CloudFormation template provided by the monitoring account in each AWS account to share the data with the monitoring account.
The other options are not valid because:
✑ Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines3. SCPs do not provide access to CloudWatch in the monitoring account, but rather restrict the actions that users and roles can perform in the source accounts. SCPs are not required to enable CloudWatch cross-account observability, as the CloudFormation template creates the necessary IAM roles and policies for cross-account access2.
✑ IAM users are entities that you create in AWS to represent the people or applications that use them to interact with AWS. IAM users can have permissions to access the resources in your AWS account4. Configuring a new IAM user in the monitoring account and an IAM policy in each AWS account to have access to query and visualize the CloudWatch data in the account is not a valid solution, as it does not enable CloudWatch cross-account observability. This solution would require the IAM user to switch between different accounts to view the observability data, which is not seamless and efficient. Moreover, this solution would not allow the IAM user to search, visualize, and analyze metrics, logs, traces, and Application Insights applications across multiple accounts in a single place1.
✑ Cross-account IAM policies are policies that allow you to delegate access to resources that are in different AWS accounts that you own. You attach a cross-account policy to a user or group in one account, and then specify which accounts the user or group can access5. Creating a new IAM user in the monitoring account and cross-account IAM policies in each AWS account is not a valid solution, as it does not enable CloudWatch cross-account observability. This solution would also require the IAM user to switch between different accounts to view the observability data, which is not seamless and efficient. Moreover, this solution would not allow the IAM user to search, visualize, and analyze metrics, logs, traces, and Application Insights applications across multiple accounts in a single place1.
References: CloudWatch cross-account observability, CloudFormation template for CloudWatch cross-account observability, Service control policies, IAM users, Cross-account IAM policies
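Under the hood, the link between accounts is expressed through the CloudWatch Observability Access Manager (the boto3 `oam` client). A sketch of the two request payloads involved (the sink name, account IDs, and sink ARN are illustrative):

```python
# Step 1 -- in the monitoring account: create a sink that receives shared data.
create_sink_params = {"Name": "org-monitoring-sink"}
# Real call: boto3.client("oam").create_sink(**create_sink_params)

# Step 2 -- in each source account: link it to the sink. This is the step the
# CloudFormation template provided by the monitoring account automates.
create_link_params = {
    "LabelTemplate": "$AccountName",  # how the source account appears in the UI
    "ResourceTypes": [
        "AWS::CloudWatch::Metric",
        "AWS::Logs::LogGroup",
        "AWS::XRay::Trace",
    ],
    "SinkIdentifier": "arn:aws:oam:us-east-1:111111111111:sink/EXAMPLE-SINK-ID",
}
# Real call: boto3.client("oam").create_link(**create_link_params)
```

The `ResourceTypes` list controls which telemetry (metrics, logs, traces) the source account shares with the monitoring account.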

NEW QUESTION 7

A development team is collaborating with another company to create an integrated product. The other company needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is contained in the development team's account. The other company wants to poll the queue without giving up its own account permissions to do so.
How should a solutions architect provide access to the SQS queue?

  • A. Create an instance profile that provides the other company access to the SQS queue.
  • B. Create an IAM policy that provides the other company access to the SQS queue.
  • C. Create an SQS access policy that provides the other company access to the SQS queue.
  • D. Create an Amazon Simple Notification Service (Amazon SNS) access policy that provides the other company access to the SQS queue.

Answer: C

Explanation:
To provide access to the SQS queue to the other company without giving up its own account permissions, a solutions architect should create an SQS access policy that provides the other company access to the SQS queue. An SQS access policy is a resource-based policy that defines who can access the queue and what actions they can perform. The policy can specify the AWS account ID of the other company as a principal, and grant permissions for actions such as sqs:ReceiveMessage, sqs:DeleteMessage, and sqs:GetQueueAttributes. This way, the other company can poll the queue using its own credentials, without needing to assume a role or use cross-account access keys.
References:
✑ Using identity-based policies (IAM policies) for Amazon SQS
✑ Using custom policies with the Amazon SQS access policy language
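A sketch of such a resource-based queue policy (both account IDs and the queue name are illustrative; the partner account here is 111122223333):

```python
import json

# Resource-based SQS access policy granting the partner account poll access.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PartnerPollAccess",
        "Effect": "Allow",
        # The other company's account acts as the principal, so its own
        # identities can poll the queue with their own credentials.
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": [
            "sqs:ReceiveMessage",
            "sqs:DeleteMessage",
            "sqs:GetQueueAttributes",
        ],
        "Resource": "arn:aws:sqs:us-east-1:444455556666:integration-queue",
    }],
}

policy_json = json.dumps(queue_policy)
# Applied with: sqs.set_queue_attributes(QueueUrl=..., Attributes={"Policy": policy_json})
```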

NEW QUESTION 8

A company has developed a new video game as a web application. The application is in a three-tier architecture in a VPC with Amazon RDS for MySQL in the database layer. Several players will compete concurrently online. The game's developers want to display a top-10 scoreboard in near-real time and offer the ability to stop and restore the game while preserving the current scores.
What should a solutions architect do to meet these requirements?

  • A. Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web application to display.
  • B. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
  • C. Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard in a section of the application.
  • D. Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and serve the read traffic to the web application.

Answer: B

Explanation:
This answer is correct because it meets the requirements of displaying a top- 10 scoreboard in near-real time and offering the ability to stop and restore the game while preserving the current scores. Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. You can use Amazon ElastiCache for Redis to set up an ElastiCache for Redis cluster to compute and cache the scores for the web application to display. You can use Redis data structures such as sorted sets and hashes to store and rank the scores of the players, and use Redis commands such as ZRANGE and ZADD to retrieve and update the scores efficiently. You can also use Redis persistence features such as snapshots and append-only files (AOF) to enable point-in-time recovery of your data, which can help you stop and restore the game while preserving the current scores.
References:
✑ https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html
✑ https://redis.io/topics/data-types
✑ https://redis.io/topics/persistence
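Since the real sorted-set commands need a live Redis server, here is a pure-Python sketch of the same leaderboard logic; the equivalent redis-py calls are noted in the comments:

```python
# With redis-py you would use r.zadd("scores", {player: score}) and
# r.zrevrange("scores", 0, 9, withscores=True); this simulates both locally.

scores = {}  # player -> score, standing in for the Redis sorted set

def zadd(player, score):
    """Equivalent of ZADD: insert or update a player's score."""
    scores[player] = score

def top10():
    """Equivalent of ZREVRANGE scores 0 9 WITHSCORES: highest score first."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:10]

# Simulate 15 concurrent players posting scores.
for i in range(15):
    zadd(f"player{i}", i * 10.0)

leaders = top10()
```

Redis keeps the sorted set ordered on every write, so reading the top 10 is an O(log N + 10) operation rather than a full table scan, which is what makes the near-real-time scoreboard cheap.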

NEW QUESTION 9

A development team has launched a new application that is hosted on Amazon EC2 instances inside a development VPC. A solutions architect needs to create a new VPC in the same account. The new VPC will be peered with the development VPC. The VPC CIDR block for the development VPC is 192.168.0.0/24. The solutions architect needs to create a CIDR block for the new VPC. The CIDR block must be valid for a VPC peering connection to the development VPC.
What is the SMALLEST CIDR block that meets these requirements?

  • A. 10.0.1.0/32
  • B. 192.168.0.0/24
  • C. 192.168.1.0/32
  • D. 10.0.1.0/24

Answer: D

Explanation:
The allowed block size is between a /28 netmask and a /16 netmask, so the /32 options are not legal VPC CIDR blocks at all. The CIDR block also must not overlap with any existing CIDR block that's associated with the peered VPC, which rules out 192.168.0.0/24. That leaves 10.0.1.0/24 as the smallest valid choice. https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html
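Both constraints can be checked mechanically with the standard-library `ipaddress` module; this sketch validates each answer choice against the development VPC's block:

```python
import ipaddress

DEV_VPC = ipaddress.ip_network("192.168.0.0/24")

def valid_peer_cidr(cidr: str) -> bool:
    """A peer VPC CIDR must be between /16 and /28 and must not
    overlap the development VPC's CIDR block."""
    try:
        net = ipaddress.ip_network(cidr)
    except ValueError:
        return False
    if not (16 <= net.prefixlen <= 28):
        return False  # e.g. /32 is not a legal VPC block
    return not net.overlaps(DEV_VPC)

choices = ["10.0.1.0/32", "192.168.0.0/24", "192.168.1.0/32", "10.0.1.0/24"]
checks = {c: valid_peer_cidr(c) for c in choices}
```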

NEW QUESTION 10

A company wants to share accounting data with an external auditor. The data is stored in an Amazon RDS DB instance that resides in a private subnet. The auditor has its own AWS account and requires its own copy of the database.
What is the MOST secure way for the company to share the database with the auditor?

  • A. Create a read replica of the database. Configure IAM standard database authentication to grant the auditor access.
  • B. Export the database contents to text files. Store the files in an Amazon S3 bucket. Create a new IAM user for the auditor. Grant the user access to the S3 bucket.
  • C. Copy a snapshot of the database to an Amazon S3 bucket. Create an IAM user. Share the user's keys with the auditor to grant access to the object in the S3 bucket.
  • D. Create an encrypted snapshot of the database. Share the snapshot with the auditor. Allow access to the AWS Key Management Service (AWS KMS) encryption key.

Answer: D

Explanation:
This answer is correct because it meets the requirements of sharing the database with the auditor in a secure way. You can create an encrypted snapshot of the database by using AWS Key Management Service (AWS KMS) to encrypt the snapshot
with a customer managed key. You can share the snapshot with the auditor by modifying the permissions of the snapshot and specifying the AWS account ID of the auditor. You can also allow access to the AWS KMS encryption key by adding a key policy statement that grants permissions to the auditor’s account. This way, you can ensure that only the auditor can access and restore the snapshot in their own AWS account.
References:
✑ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html
✑ https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html#key-policy-default-allow-root-enable-iam
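A sketch of the key policy statement that grants the auditor's account (999999999999, illustrative) the permissions it needs on the customer managed KMS key:

```python
import json

# KMS key policy statement allowing the auditor's account to use the key
# when copying/restoring the shared encrypted snapshot.
auditor_statement = {
    "Sid": "AllowAuditorSnapshotRestore",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::999999999999:root"},
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:CreateGrant",  # lets RDS in the auditor's account use the key
    ],
    "Resource": "*",  # in a key policy, "*" means this key
}

policy_json = json.dumps(auditor_statement)
# The snapshot itself is shared with something like:
# rds.modify_db_snapshot_attribute(DBSnapshotIdentifier=...,
#     AttributeName="restore", ValuesToAdd=["999999999999"])
```

Because the snapshot stays encrypted and only the named account can use the key, no data ever transits outside AWS in plaintext.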

NEW QUESTION 11

A company wants to manage Amazon Machine Images (AMIs). The company currently copies AMIs to the same AWS Region where the AMIs were created. The company needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API operation is called within the company’s account.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert when a CreateImage API call is detected.
  • B. Configure AWS CloudTrail with an Amazon Simple Notification Service (Amazon SNS) notification that occurs when updated logs are sent to Amazon S3. Use Amazon Athena to create a new table and to query on CreateImage when an API call is detected.
  • C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call. Configure the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.
  • D. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail logs. Create an AWS Lambda function to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic when a CreateImage API call is detected.

Answer: C

Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-ami-events.html#:~:text=For%20example%2C%20you%20can%20create%20an%20EventBridge%20rule%20that%20detects%20when%20the%20AMI%20creation%20process%20has%20completed%20and%20then%20invokes%20an%20Amazon%20SNS%20topic%20to%20send%20an%20email%20notification%20to%20you.
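A sketch of the EventBridge event pattern such a rule would use to match the CloudTrail record for `CreateImage` (the rule name is illustrative):

```python
import json

# EventBridge pattern matching the CreateImage API call recorded by CloudTrail.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["CreateImage"],
    },
}

pattern_json = json.dumps(event_pattern)
# Created with: events.put_rule(Name="alert-on-createimage", EventPattern=pattern_json)
# and targeted at an SNS topic with events.put_targets(...).
```

Because EventBridge evaluates the pattern as events arrive, no log polling or query infrastructure is needed, which is why this option has the least operational overhead.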

NEW QUESTION 12

A company copies 200 TB of data from a recent ocean survey onto AWS Snowball Edge Storage Optimized devices. The company has a high performance computing (HPC) cluster that is hosted on AWS to look for oil and gas deposits. A solutions architect must provide the cluster with consistent sub-millisecond latency and high-throughput access to the data on the Snowball Edge Storage Optimized devices. The company is sending the devices back to AWS.
Which solution will meet these requirements?

  • A. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an AWS Storage Gateway file gateway to use the S3 bucket. Access the file gateway from the HPC cluster instances.
  • B. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an Amazon FSx for Lustre file system, and integrate it with the S3 bucket. Access the FSx for Lustre file system from the HPC cluster instances.
  • C. Create an Amazon S3 bucket and an Amazon Elastic File System (Amazon EFS) file system. Import the data into the S3 bucket. Copy the data from the S3 bucket to the EFS file system. Access the EFS file system from the HPC cluster instances.
  • D. Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file system. Access the FSx for Lustre file system from the HPC cluster instances.

Answer: B

Explanation:
To provide the HPC cluster with consistent sub-millisecond latency and high-throughput access to the data on the Snowball Edge Storage Optimized devices, a solutions architect should configure an Amazon FSx for Lustre file system, and integrate it with an Amazon S3 bucket. This solution has the following benefits:
✑ It allows the HPC cluster to access the data on the Snowball Edge devices using a POSIX-compliant file system that is optimized for fast processing of large datasets1.
✑ It enables the data to be imported from the Snowball Edge devices into the S3 bucket using the AWS Snow Family Console or the AWS CLI2. The data can then be accessed from the FSx for Lustre file system using the S3 integration feature3.
✑ It supports high availability and durability of the data, as the FSx for Lustre file system can automatically copy the data to and from the S3 bucket3. The data can also be accessed from other AWS services or applications using the S3 API4.
References:
✑ 1: https://aws.amazon.com/fsx/lustre/
✑ 2: https://docs.aws.amazon.com/snowball/latest/developer-guide/using-adapter.html
✑ 3: https://docs.aws.amazon.com/fsx/latest/LustreGuide/create-fs-linked-data-repo.html
✑ 4: https://docs.aws.amazon.com/fsx/latest/LustreGuide/export-data-repo.html

NEW QUESTION 13

A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images. Which method is the MOST cost-effective for hosting the website?

  • A. Containerize the website and host it in AWS Fargate.
  • B. Create an Amazon S3 bucket and host the website there.
  • C. Deploy a web server on an Amazon EC2 instance to host the website.
  • D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.

Answer: B

Explanation:
Static websites serve prebuilt pages written in simple languages such as HTML, CSS, and client-side JavaScript. The server returns the pages without any processing, so static websites are fast, require no interaction with databases, and are less costly to host because the host does not need to support server-side processing in different languages.
Dynamic websites build pages at runtime according to the user's request, using server-side scripting languages such as PHP, Node.js, or ASP.NET. They are slower than static websites, but they support updates and interaction with databases.
Because this website consists only of static content, hosting it in an Amazon S3 bucket is the most cost-effective option.

NEW QUESTION 14

A company runs workloads on AWS. The company needs to connect to a service from an external provider. The service is hosted in the provider's VPC. According to the company’s security team, the connectivity must be private and must be restricted to the target service. The connection must be initiated only from the company’s VPC.
Which solution will meet these requirements?

  • A. Create a VPC peering connection between the company's VPC and the provider's VPC. Update the route table to connect to the target service.
  • B. Ask the provider to create a virtual private gateway in its VPC. Use AWS PrivateLink to connect to the target service.
  • C. Create a NAT gateway in a public subnet of the company's VPC. Update the route table to connect to the target service.
  • D. Ask the provider to create a VPC endpoint for the target service. Use AWS PrivateLink to connect to the target service.

Answer: D

Explanation:
AWS PrivateLink provides private connectivity between VPCs, AWS services, and your on-premises networks, without exposing your traffic to the public internet. AWS PrivateLink makes it easy to connect services across different accounts and VPCs to significantly simplify your network architecture. Interface VPC endpoints, powered by AWS PrivateLink, connect you to services hosted by AWS Partners and supported solutions available in AWS Marketplace. https://aws.amazon.com/privatelink/
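A sketch of the consumer-side CreateVpcEndpoint parameters; the service name is whatever the provider exposes from its endpoint service, and all IDs here are illustrative:

```python
# Interface VPC endpoint (PrivateLink) parameters from the consumer's side.
endpoint_params = {
    "VpcId": "vpc-0abc1234",
    "VpcEndpointType": "Interface",  # interface endpoints are powered by PrivateLink
    # The provider shares this name from its VPC endpoint service.
    "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    "SubnetIds": ["subnet-aaaa1111"],
    "SecurityGroupIds": ["sg-0123abcd"],
    "PrivateDnsEnabled": False,
}
# Real call: boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
```

Traffic over an interface endpoint can only be initiated from the consumer VPC toward the exposed service, which is exactly the "initiated only from the company's VPC, restricted to the target service" requirement.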

NEW QUESTION 15

A company uses a 100 GB Amazon RDS for Microsoft SQL Server Single-AZ DB instance in the us-east-1 Region to store customer transactions. The company needs high availability and automated recovery for the DB instance.
The company must also run reports on the RDS database several times a year. The report process causes transactions to take longer than usual to post to the customers' accounts.
Which combination of steps will meet these requirements? (Select TWO.)

  • A. Modify the DB instance from a Single-AZ DB instance to a Multi-AZ deployment.
  • B. Take a snapshot of the current DB instance. Restore the snapshot to a new RDS deployment in another Availability Zone.
  • C. Create a read replica of the DB instance in a different Availability Zone. Point all requests for reports to the read replica.
  • D. Migrate the database to RDS Custom.
  • E. Use RDS Proxy to limit reporting requests to the maintenance window.

Answer: AC

Explanation:
https://medium.com/awesome-cloud/aws-difference-between-multi-az-and-read-replicas-in-amazon-rds-60fe848ef53a
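As a sketch, the two RDS calls that implement answers A and C might look like this (all identifiers and the AZ are illustrative):

```python
# A: convert the Single-AZ instance to a Multi-AZ deployment, which gives
# synchronous standby replication and automatic failover.
modify_params = {
    "DBInstanceIdentifier": "transactions-db",
    "MultiAZ": True,
    "ApplyImmediately": False,  # apply during the next maintenance window
}
# Real call: boto3.client("rds").modify_db_instance(**modify_params)

# C: add a read replica in another AZ and point reporting queries at it,
# so the yearly reports no longer slow down transaction writes on the primary.
replica_params = {
    "DBInstanceIdentifier": "transactions-db-reports",
    "SourceDBInstanceIdentifier": "transactions-db",
    "AvailabilityZone": "us-east-1b",
}
# Real call: boto3.client("rds").create_db_instance_read_replica(**replica_params)
```

The split matters: the Multi-AZ standby handles availability but cannot serve reads, while the read replica offloads reporting but offers no automatic failover, so both steps are needed.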

NEW QUESTION 16

A company runs an application that receives data from thousands of geographically dispersed remote devices that use UDP. The application processes the data immediately and sends a message back to the device if necessary. No data is stored.
The company needs a solution that minimizes latency for the data transmission from the devices. The solution also must provide rapid failover to another AWS Region.
Which solution will meet these requirements?

  • A. Configure an Amazon Route 53 failover routing policy. Create a Network Load Balancer (NLB) in each of the two Regions. Configure the NLB to invoke an AWS Lambda function to process the data.
  • B. Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the NLB. Process the data in Amazon ECS.
  • C. Use AWS Global Accelerator. Create an Application Load Balancer (ALB) in each of the two Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon ECS.
  • D. Configure an Amazon Route 53 failover routing policy. Create an Application Load Balancer (ALB) in each of the two Regions. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for the ALB. Process the data in Amazon ECS.

Answer: B

Explanation:
To meet the requirements of minimizing latency for data transmission from the devices and providing rapid failover to another AWS Region, the best solution would be to use AWS Global Accelerator in combination with a Network Load Balancer (NLB) and Amazon Elastic Container Service (Amazon ECS). AWS Global Accelerator is a service that improves the availability and performance of applications by using static IP addresses (Anycast) to route traffic to optimal AWS endpoints. With Global Accelerator, you can direct traffic to multiple Regions and endpoints, and provide automatic failover to another AWS Region.

NEW QUESTION 17

A company has an AWS Direct Connect connection from its corporate data center to its VPC in the us-east-1 Region. The company recently acquired a corporation that has several VPCs and a Direct Connect connection between its on-premises data center and the eu-west-2 Region. The CIDR blocks for the VPCs of the company and the corporation do not overlap. The company requires connectivity between two Regions and the data centers. The company needs a solution that is scalable while reducing operational overhead.
What should a solutions architect do to meet these requirements?

  • A. Set up inter-Region VPC peering between the VPC in us-east-1 and the VPCs in eu- west-2.
  • B. Create private virtual interfaces from the Direct Connect connection in us-east-1 to the VPCs in eu-west-2.
  • C. Establish VPN appliances in a fully meshed VPN network hosted by Amazon EC2. Use AWS VPN CloudHub to send and receive data between the data centers and each VPC.
  • D. Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual private gateways of the VPCs in each Region to the Direct Connect gateway.

Answer: D

Explanation:
This solution meets the requirements because it allows the company to use a single Direct Connect connection to connect to multiple VPCs in different Regions using a Direct Connect gateway. A Direct Connect gateway is a globally available resource that enables you to connect your on-premises network to VPCs in any AWS Region, except the AWS China Regions. You can associate a Direct Connect gateway with a transit gateway or a virtual private gateway in each Region. By routing traffic from the virtual private gateways of the VPCs to the Direct Connect gateway, you can enable inter-Region and on-premises connectivity for your VPCs. This solution is scalable because you can add more VPCs in different Regions to the Direct Connect gateway without creating additional connections. This solution also reduces operational overhead because you do not need to manage multiple VPN appliances, VPN connections, or VPC peering connections.
References:
✑ Direct Connect gateways
✑ Inter-Region VPC peering

NEW QUESTION 18

An application runs on Amazon EC2 instances across multiple Availability Zones The instances run in an Amazon EC2 Auto Scaling group behind an Application Load Balancer The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the group?

  • A. Use a simple scaling policy to dynamically scale the Auto Scaling group.
  • B. Use a target tracking policy to dynamically scale the Auto Scaling group.
  • C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.
  • D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.

Answer: B

Explanation:
https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html
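A sketch of the PutScalingPolicy parameters for an EC2 Auto Scaling target tracking policy that holds average CPU at the 40% sweet spot (the group and policy names are illustrative):

```python
# Target tracking policy: Auto Scaling adds/removes instances automatically
# to keep the group's average CPU utilization near the target value.
policy_params = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "keep-cpu-at-40",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 40.0,  # the performance sweet spot from the question
    },
}
# Real call: boto3.client("autoscaling").put_scaling_policy(**policy_params)
```

Unlike a simple scaling policy, there are no thresholds or step adjustments to tune: the service creates and manages the CloudWatch alarms itself and continuously steers toward the target.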

NEW QUESTION 19

What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted?

  • A. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set.
  • B. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private.
  • C. Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true.
  • D. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.

Answer: D

Explanation:
https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/#:~:text=Solution%20overview,console%2C%20CLI%2C%20or%20SDK.&text=To%20encrypt%20an%20object%20at,S3%2C%20or%20SSE%2DKMS.
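A sketch of such a bucket policy, using the `Null` condition operator to deny any PutObject request that arrives without the encryption header (the bucket name is illustrative):

```python
import json

# Bucket policy denying PutObject requests that lack the
# x-amz-server-side-encryption header, so only encrypted uploads succeed.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedPuts",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {
            # "Null": true matches requests where the key is absent.
            "Null": {"s3:x-amz-server-side-encryption": "true"},
        },
    }],
}

policy_json = json.dumps(bucket_policy)
# Applied with: s3.put_bucket_policy(Bucket="example-bucket", Policy=policy_json)
```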

NEW QUESTION 20
......

Thanks for reading the newest SAA-C03 exam dumps! We recommend that you try the PREMIUM Allfreedumps.com SAA-C03 dumps in VCE and PDF here: https://www.allfreedumps.com/SAA-C03-dumps.html (551 Q&As Dumps)