The Secret Of Amazon-Web-Services SAA-C03 Testing Software

Want to know about Certleader SAA-C03 Exam practice test features? Want to learn more about the Amazon-Web-Services AWS Certified Solutions Architect - Associate (SAA-C03) certification experience? Study pinpoint Amazon-Web-Services SAA-C03 answers to leading SAA-C03 questions at Certleader. Get success with an absolute guarantee to pass the Amazon-Web-Services SAA-C03 (AWS Certified Solutions Architect - Associate (SAA-C03)) test on your first attempt.

Here are some free SAA-C03 dump questions for you:

NEW QUESTION 1

A company is designing a containerized application that will use Amazon Elastic Container Service (Amazon ECS). The application needs to access a shared file system that is highly durable and can recover data to another AWS Region with a recovery point objective (RPO) of 8 hours. The file system needs to provide a mount target in each Availability Zone within a Region.
A solutions architect wants to use AWS Backup to manage the replication to another Region.
Which solution will meet these requirements?

  • A. Amazon FSx for Windows File Server with a Multi-AZ deployment
  • B. Amazon FSx for NetApp ONTAP with a Multi-AZ deployment
  • C. Amazon Elastic File System (Amazon EFS) with the Standard storage class
  • D. Amazon FSx for OpenZFS

Answer: B

Explanation:
This answer is correct because it meets the requirements of accessing a shared file system that is highly durable, can recover data to another AWS Region, and provides a mount target in each Availability Zone within a Region. Amazon FSx for NetApp ONTAP is a fully managed service that provides enterprise-grade data management and storage for Windows and Linux applications. You can use Amazon FSx for NetApp ONTAP to create file systems that span multiple Availability Zones within an AWS Region, providing high availability and durability. You can also use AWS Backup to manage the replication of your file systems to another AWS Region with a recovery point objective (RPO) of 8 hours or less. AWS Backup is a fully managed backup service that automates and centralizes backup of data across AWS services, letting you create backup policies and monitor activity for your AWS resources in one place.
References:
✑ https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/what-is.html
✑ https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html
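
For illustration, a minimal boto3 sketch of the kind of AWS Backup plan this implies: a rule that runs every 8 hours and copies each recovery point to a vault in another Region. The vault names, account ID, and Regions are hypothetical.

```python
import boto3

backup = boto3.client("backup")

plan = {
    "BackupPlanName": "fsx-ontap-cross-region",
    "Rules": [
        {
            "RuleName": "every-8-hours",
            "TargetBackupVaultName": "primary-vault",
            # Run every 8 hours so the copy in the other Region
            # stays within the 8-hour RPO.
            "ScheduleExpression": "cron(0 0/8 * * ? *)",
            "CopyActions": [
                {
                    "DestinationBackupVaultArn": (
                        "arn:aws:backup:us-west-2:111122223333:"
                        "backup-vault:dr-vault"
                    )
                }
            ],
        }
    ],
}

response = backup.create_backup_plan(BackupPlan=plan)
print(response["BackupPlanId"])
```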

NEW QUESTION 2

A company runs a shopping application that uses Amazon DynamoDB to store customer information. In case of data corruption, a solutions architect needs to design a solution that meets a recovery point objective (RPO) of 15 minutes and a recovery time objective (RTO) of 1 hour.
What should the solutions architect recommend to meet these requirements?

  • A. Configure DynamoDB global tables. For RPO recovery, point the application to a different AWS Region.
  • B. Configure DynamoDB point-in-time recovery. For RPO recovery, restore to the desired point in time.
  • C. Export the DynamoDB data to Amazon S3 Glacier on a daily basis. For RPO recovery, import the data from S3 Glacier to DynamoDB.
  • D. Schedule Amazon Elastic Block Store (Amazon EBS) snapshots for the DynamoDB table every 15 minutes. For RPO recovery, restore the DynamoDB table by using the EBS snapshot.

Answer: B

Explanation:
DynamoDB point-in-time recovery (PITR) provides continuous backups with per-second granularity, so the table can be restored to any point within the preceding 35 days. This comfortably meets the 15-minute RPO, and restoring to a new table can be completed within the 1-hour RTO.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html
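
A minimal boto3 sketch of this approach; the table names and restore timestamp are hypothetical:

```python
import boto3
from datetime import datetime, timezone

dynamodb = boto3.client("dynamodb")

# Enable continuous backups / point-in-time recovery on the table.
dynamodb.update_continuous_backups(
    TableName="CustomerInfo",  # hypothetical table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# After data corruption, restore to a new table at a chosen second
# within the PITR window (up to 35 days back).
dynamodb.restore_table_to_point_in_time(
    SourceTableName="CustomerInfo",
    TargetTableName="CustomerInfo-restored",
    RestoreDateTime=datetime(2024, 1, 15, 9, 45, tzinfo=timezone.utc),
)
```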

NEW QUESTION 3

A company’s reporting system delivers hundreds of .csv files to an Amazon S3 bucket each day. The company must convert these files to Apache Parquet format and must store the files in a transformed data bucket.
Which solution will meet these requirements with the LEAST development effort?

  • A. Create an Amazon EMR cluster with Apache Spark installed. Write a Spark application to transform the data. Use EMR File System (EMRFS) to write files to the transformed data bucket.
  • B. Create an AWS Glue crawler to discover the data. Create an AWS Glue extract, transform, and load (ETL) job to transform the data. Specify the transformed data bucket in the output step.
  • C. Use AWS Batch to create a job definition with Bash syntax to transform the data and output the data to the transformed data bucket. Use the job definition to submit a job. Specify an array job as the job type.
  • D. Create an AWS Lambda function to transform the data and output the data to the transformed data bucket. Configure an event notification for the S3 bucket. Specify the Lambda function as the destination for the event notification.

Answer: B

Explanation:
AWS Glue can convert CSV files to Apache Parquet with a crawler to catalog the data and a built-in ETL job, with no clusters or custom infrastructure to manage, which makes it the option with the least development effort.
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/three-aws-glue-etl-job-types-for-converting-data-to-apache-parquet.html
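
For reference, a skeleton of the kind of Glue ETL job script this pattern describes, assuming a crawler has already cataloged the .csv files; the database, table, and bucket names are hypothetical:

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the CSV data that the crawler cataloged.
source = glue_context.create_dynamic_frame.from_catalog(
    database="reporting_db", table_name="daily_csv"
)

# Write it back out as Parquet into the transformed data bucket.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://transformed-data-bucket/"},
    format="parquet",
)
job.commit()
```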

NEW QUESTION 4

A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms data as the data is streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
  • B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use AWS Glue to transform the data and to send the data to Amazon S3.
  • C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
  • D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send the data to Amazon S3.

Answer: C

Explanation:
It uses Amazon Kinesis Data Firehose, a fully managed service for delivering real-time streaming data to destinations such as Amazon S3, together with Amazon API Gateway, a fully managed service for creating, deploying, and managing APIs. This combination requires less operational overhead than options A, B, and D because there are no EC2 instances to manage, and it automates the data ingestion process.
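
A sketch of the Lambda transformation piece of this pipeline, following the documented Firehose data-transformation record contract; the transformation itself is a hypothetical placeholder:

```python
import base64

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        # Firehose delivers each record base64-encoded.
        payload = base64.b64decode(record["data"]).decode("utf-8")

        # Hypothetical transformation: normalize the payload and
        # re-append the newline that S3 delivery expects.
        transformed = payload.strip().upper() + "\n"

        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(
                transformed.encode("utf-8")
            ).decode("utf-8"),
        })
    return {"records": output}
```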

NEW QUESTION 5

A company has a multi-tier application deployed on several Amazon EC2 instances in an Auto Scaling group. An Amazon RDS for Oracle instance is the application’s data layer that uses Oracle-specific PL/SQL functions. Traffic to the application has been steadily increasing. This is causing the EC2 instances to become overloaded and the RDS instance to run out of storage. The Auto Scaling group does not have any scaling metrics and defines the minimum healthy instance count only. The company predicts that traffic will continue to increase at a steady but unpredictable rate before levelling off.
What should a solutions architect do to ensure the system can automatically scale for the increased traffic? (Select TWO.)

  • A. Configure storage Auto Scaling on the RDS for Oracle instance.
  • B. Migrate the database to Amazon Aurora to use Auto Scaling storage.
  • C. Configure an alarm on the RDS for Oracle instance for low free storage space.
  • D. Configure the Auto Scaling group to use the average CPU as the scaling metric.
  • E. Configure the Auto Scaling group to use the average free memory as the scaling metric.

Answer: AD

Explanation:
RDS storage Auto Scaling (A) eases the storage issue without a migration, while moving the Oracle-specific PL/SQL code to Aurora (B) would be cumbersome because Aurora does not support Oracle, although Aurora does scale storage automatically by default. Scaling the Auto Scaling group on average CPU (D) addresses the overloaded EC2 instances. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling
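
A minimal boto3 sketch of both changes, with hypothetical instance and group names:

```python
import boto3

# A: turn on storage Auto Scaling by setting a storage ceiling
# on the existing RDS for Oracle instance.
rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="oracle-prod",
    MaxAllocatedStorage=1000,  # GiB ceiling for automatic scaling
    ApplyImmediately=True,
)

# D: add a target-tracking policy so the Auto Scaling group
# scales on average CPU utilization.
autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```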

NEW QUESTION 6

A company has migrated an application to Amazon EC2 Linux instances. One of these EC2 instances runs several 1-hour tasks on a schedule. These tasks were written by different teams and have no common programming language. The company is concerned about performance and scalability while these tasks run on a single instance. A solutions architect needs to implement a solution to resolve these concerns.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use AWS Batch to run the tasks as jobs. Schedule the jobs by using Amazon EventBridge (Amazon CloudWatch Events).
  • B. Convert the EC2 instance to a container. Use AWS App Runner to create the container on demand to run the tasks as jobs.
  • C. Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events).
  • D. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance.

Answer: A

Explanation:
AWS Batch is a fully managed service that enables users to run batch jobs on AWS. It can handle tasks written in different languages and run them on EC2 instances. It also integrates with Amazon EventBridge (Amazon CloudWatch Events) to schedule jobs based on time or event triggers. This solution meets the requirements for performance, scalability, and low operational overhead.
* B. Convert the EC2 instance to a container. Use AWS App Runner to create the container on demand to run the tasks as jobs. This solution will not meet the requirement of low operational overhead, as it involves converting the EC2 instance to a container and using AWS App Runner, a service that automatically builds and deploys web applications and load balances traffic, which is not necessary for running batch jobs.
* C. Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events). This solution will not meet the requirement of performance, as AWS Lambda has a limit of 15 minutes for execution time and 10 GB for memory allocation. These limits are not sufficient for running 1-hour tasks.
* D. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple copies of the instance. This solution will not meet the requirement of low operational overhead, as it involves creating and maintaining AMIs and Auto Scaling groups, which are additional resources that need to be configured and managed.
Reference URL: https://docs.aws.amazon.com/whitepapers/latest/aws-overview/compute-services.html
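
A minimal boto3 sketch of wiring an EventBridge schedule to a Batch job queue; the rule name, ARNs, and job definition are hypothetical:

```python
import boto3

events = boto3.client("events")

# Hourly schedule rule.
events.put_rule(
    Name="run-team-tasks",
    ScheduleExpression="rate(1 hour)",
    State="ENABLED",
)

# Point the rule at an AWS Batch job queue/definition; EventBridge
# submits the job on each tick using the given role.
events.put_targets(
    Rule="run-team-tasks",
    Targets=[
        {
            "Id": "batch-task",
            "Arn": "arn:aws:batch:us-east-1:111122223333:job-queue/tasks",
            "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-batch",
            "BatchParameters": {
                "JobDefinition": "team-task-definition",
                "JobName": "scheduled-team-task",
            },
        }
    ],
)
```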

NEW QUESTION 7

A gaming company is designing a highly available architecture. The application runs on a modified Linux kernel and supports only UDP-based traffic. The company needs the front-end tier to provide the best possible user experience. That tier must have low latency, route traffic to the nearest edge location, and provide static IP addresses for entry into the application endpoints.
What should a solutions architect do to meet these requirements?

  • A. Configure Amazon Route 53 to forward requests to an Application Load Balancer. Use AWS Lambda for the application in AWS Application Auto Scaling.
  • B. Configure Amazon CloudFront to forward requests to a Network Load Balancer. Use AWS Lambda for the application in an AWS Application Auto Scaling group.
  • C. Configure AWS Global Accelerator to forward requests to a Network Load Balancer. Use Amazon EC2 instances for the application in an EC2 Auto Scaling group.
  • D. Configure Amazon API Gateway to forward requests to an Application Load Balancer. Use Amazon EC2 instances for the application in an EC2 Auto Scaling group.

Answer: C

Explanation:
AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.
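
A minimal boto3 sketch of the Global Accelerator setup the answer describes, with a hypothetical UDP port and NLB ARN:

```python
import boto3

# Global Accelerator is a global service; its API lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="game-frontend", IpAddressType="IPV4")
acc_arn = acc["Accelerator"]["AcceleratorArn"]

# UDP listener on the game port.
listener = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol="UDP",
    PortRanges=[{"FromPort": 7777, "ToPort": 7777}],
)

# Endpoint group pointing at the Network Load Balancer in front
# of the EC2 Auto Scaling group.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {
            "EndpointId": (
                "arn:aws:elasticloadbalancing:us-east-1:"
                "111122223333:loadbalancer/net/game-nlb/abc123"
            )
        }
    ],
)
```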

NEW QUESTION 8

A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The company needs to patch the third- party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability.
What should a solutions architect do to meet these requirements?

  • A. Create an AWS Lambda function to apply the patch to all EC2 instances.
  • B. Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.
  • C. Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.
  • D. Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.

Answer: B

Explanation:
AWS Systems Manager Patch Manager automates the process of scanning and patching managed instances at scale, covering both operating systems and applications, which makes it the fastest way to remediate a critical vulnerability across 1,000 instances.
https://docs.aws.amazon.com/systems-manager/latest/userguide/about-windows-app-patching.html
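
Patch Manager operations are executed through the AWS-RunPatchBaseline document; a minimal boto3 sketch, with a hypothetical tag key and value:

```python
import boto3

ssm = boto3.client("ssm")

# Run the Patch Manager document against every instance that
# carries the shared tag.
ssm.send_command(
    Targets=[{"Key": "tag:patch-group", "Values": ["production"]}],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
    MaxConcurrency="10%",  # patch the fleet in waves
    MaxErrors="5%",        # stop if too many instances fail
)
```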

NEW QUESTION 9

A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an Amazon EC2 instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the messages and writes results to a MySQL database running on Amazon EC2. The company wants this application to be highly available with low operational complexity.
Which architecture offers the HIGHEST availability?

  • A. Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
  • B. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
  • C. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
  • D. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.

Answer: D

Explanation:
Option D offers the highest availability because it removes every single point of failure with managed, Multi-AZ services: Amazon MQ with active/standby brokers across two Availability Zones fails over automatically if the active broker or its Availability Zone fails, an Auto Scaling group spanning two Availability Zones automatically replaces failed consumer instances, and Amazon RDS for MySQL with Multi-AZ enabled maintains a synchronous standby database in another Availability Zone. Options A and B still depend on self-managed MySQL replication, and option C has no automatic replacement for a failed consumer instance.
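
A minimal boto3 sketch of the two managed pieces; the names, subnets, and credentials are hypothetical:

```python
import boto3

# Amazon MQ broker with an active/standby pair across two AZs.
mq = boto3.client("mq")
mq.create_broker(
    BrokerName="orders-broker",
    EngineType="ACTIVEMQ",
    EngineVersion="5.17.6",
    HostInstanceType="mq.m5.large",
    DeploymentMode="ACTIVE_STANDBY_MULTI_AZ",
    AutoMinorVersionUpgrade=True,
    PubliclyAccessible=False,
    SubnetIds=["subnet-aaa", "subnet-bbb"],
    Users=[{"Username": "app", "Password": "example-password-123"}],
)

# Multi-AZ RDS for MySQL replaces the self-managed database.
rds = boto3.client("rds")
rds.create_db_instance(
    DBInstanceIdentifier="orders-mysql",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="example-password-456",
    MultiAZ=True,  # synchronous standby in another AZ
)
```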

NEW QUESTION 10

A company runs a stateless web application in production on a group of Amazon EC2 On- Demand Instances behind an Application Load Balancer. The application experiences heavy usage during an 8-hour period each business day. Application usage is moderate and steady overnight Application usage is low during weekends.
The company wants to minimize its EC2 costs without affecting the availability of the application.
Which solution will meet these requirements?

  • A. Use Spot Instances for the entire workload.
  • B. Use Reserved Instances for the baseline level of usage. Use Spot Instances for any additional capacity that the application needs.
  • C. Use On-Demand Instances for the baseline level of usage. Use Spot Instances for any additional capacity that the application needs.
  • D. Use Dedicated Instances for the baseline level of usage. Use On-Demand Instances for any additional capacity that the application needs.

Answer: B

Explanation:
Reserved Instances are cheaper than the On-Demand Instances the company currently runs, and committing to them for the steady baseline does not affect availability, unlike Spot Instances, which can be interrupted at any time and are therefore unsuitable for the entire workload (option A). As a rough pricing guide: On-Demand involves no commitment and costs the most; Reserved Instances save roughly 40-60% in exchange for a 1-year or 3-year commitment; Spot Instances save roughly 50-90% because AWS makes no commitment on its side.
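
Reserved Instances are purchased as a billing commitment rather than configured in the group itself, but the capacity split can be sketched with a mixed-instances Auto Scaling group: a fixed On-Demand baseline (which a matching Reserved Instance purchase then discounts) and Spot for everything above it. The group name, subnets, and launch template are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    MinSize=4,
    MaxSize=20,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "web-template",
                "Version": "$Latest",
            }
        },
        "InstancesDistribution": {
            # Baseline capacity always runs On-Demand (covered by RIs).
            "OnDemandBaseCapacity": 4,
            # Everything above the baseline runs on Spot.
            "OnDemandPercentageAboveBaseCapacity": 0,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```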

NEW QUESTION 11

A company has an AWS Lambda function that needs read access to an Amazon S3 bucket that is located in the same AWS account. Which solution will meet these requirements in the MOST secure manner?

  • A. Apply an S3 bucket policy that grants read access to the S3 bucket.
  • B. Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to the S3 bucket.
  • C. Embed an access key and a secret key in the Lambda function's code to grant the required IAM permissions for read access to the S3 bucket.
  • D. Apply an IAM role to the Lambda function. Apply an IAM policy to the role to grant read access to all S3 buckets in the account.

Answer: B

Explanation:
This option is the most secure because it follows the principle of least privilege and grants only the necessary permissions to the Lambda function without exposing any credentials in the code. The IAM role can be configured as the Lambda function’s execution role, and the IAM policy can specify the S3 bucket ARN and the s3:GetObject action. Option A is less secure because it grants read access to any principal that has access to the S3 bucket, which could be more than the Lambda function. Option C is less secure because it embeds credentials in the code, which could be compromised or exposed. Option D is less secure because it grants read access to all S3 buckets in the account, which is more than what the Lambda function needs.
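
A minimal boto3 sketch of attaching such a least-privilege inline policy to the function's execution role; the role, policy, and bucket names are hypothetical:

```python
import boto3
import json

iam = boto3.client("iam")

# Least-privilege inline policy: read-only access to one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

iam.put_role_policy(
    RoleName="lambda-s3-reader",  # the function's execution role
    PolicyName="read-example-bucket",
    PolicyDocument=json.dumps(policy),
)
```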

NEW QUESTION 12

A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the application tier running on Amazon EC2 instances inside a VPC.
Which combination of steps should a solutions architect take to accomplish this? (Select TWO.)

  • A. Configure a VPC gateway endpoint for Amazon S3 within the VPC
  • B. Create a bucket policy to make the objects in the S3 bucket public
  • C. Create a bucket policy that limits access to only the application tier running in the VPC
  • D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance
  • E. Create a NAT instance and have the EC2 instances use the NAT instance to access the S3 bucket

Answer: AC

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-private-connection-no-authentication/
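
A sketch of the bucket-policy half of this combination: deny any request that does not arrive through the VPC gateway endpoint. The bucket name and endpoint ID are hypothetical.

```python
import boto3
import json

s3 = boto3.client("s3")

# Deny all S3 actions on the bucket unless the request comes
# through the VPC gateway endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::sensitive-user-data",
                "arn:aws:s3:::sensitive-user-data/*",
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:sourceVpce": "vpce-0123456789abcdef0"
                }
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="sensitive-user-data", Policy=json.dumps(policy))
```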

NEW QUESTION 13

A company is building a three-tier application on AWS. The presentation tier will serve a static website. The logic tier is a containerized application. This application will store data in a relational database. The company wants to simplify deployment and to reduce operational costs.
Which solution will meet these requirements?

  • A. Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
  • B. Use Amazon CloudFront to host static content. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 for compute power. Use a managed Amazon RDS cluster for the database.
  • C. Use Amazon S3 to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
  • D. Use Amazon EC2 Reserved Instances to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 for compute power. Use a managed Amazon RDS cluster for the database.

Answer: A

Explanation:
Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. You can use Amazon S3 to host static content for your website, such as HTML files, images, and videos. Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that allows you to run and scale containerized applications on AWS. AWS Fargate is a serverless compute engine for containers that works with both Amazon ECS and Amazon EKS; Fargate removes the need to provision and manage servers, so you can focus on building your applications. You can use Amazon ECS with AWS Fargate for the containerized logic tier. Amazon RDS is a managed relational database service that makes it easy to set up, operate, and scale a relational database in the cloud, and a managed Amazon RDS cluster can serve as the database tier. This solution simplifies deployment and reduces operational costs for the three-tier application.
References:
✑ https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
✑ https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
✑ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html
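
A minimal boto3 sketch of the S3 and ECS/Fargate pieces, with hypothetical bucket, cluster, task, and subnet names:

```python
import boto3

# Presentation tier: serve the static site from S3.
s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="example-site-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Logic tier: run the container on Fargate, with no servers to manage.
ecs = boto3.client("ecs")
ecs.create_service(
    cluster="app-cluster",
    serviceName="logic-tier",
    taskDefinition="logic-task:1",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaa", "subnet-bbb"],
            "securityGroups": ["sg-0123456789abcdef0"],
        }
    },
)
```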

NEW QUESTION 14

A company is preparing to deploy a new serverless workload. A solutions architect must use the principle of least privilege to configure permissions that will be used to run an AWS Lambda function. An Amazon EventBridge (Amazon CloudWatch Events) rule will invoke the function.
Which solution meets these requirements?

  • A. Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal.
  • B. Add an execution role to the function with lambda:InvokeFunction as the action and Service:amazonaws.com as the principal.
  • C. Add a resource-based policy to the function with lambda:* as the action and Service:events.amazonaws.com as the principal.
  • D. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.

Answer: D

Explanation:
https://docs.aws.amazon.com/eventbridge/latest/userguide/resource-based-policies-eventbridge.html#lambda-permissions
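
A minimal boto3 sketch of adding exactly this resource-based policy statement; the function name and rule ARN are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# Allow only the EventBridge rule to invoke the function:
# least privilege on both the action and the principal.
lambda_client.add_permission(
    FunctionName="serverless-workload",
    StatementId="allow-eventbridge-rule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:111122223333:rule/invoke-workload",
)
```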

NEW QUESTION 15

A company's website handles millions of requests each day, and the number of requests continues to increase. A solutions architect needs to improve the response time of the web application. The solutions architect determines that the application needs to decrease latency when retrieving product details from the Amazon DynamoDB table.
Which solution will meet these requirements with the LEAST amount of operational overhead?

  • A. Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.
  • B. Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application. Route all read requests through Redis.
  • C. Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web application. Route all read requests through Memcached.
  • D. Set up Amazon DynamoDB Streams on the table, and have AWS Lambda read from the table and populate Amazon ElastiCache. Route all read requests through ElastiCache.

Answer: A

Explanation:
This solution allows the company to improve the response time of the web application and decrease latency when retrieving product details from the Amazon DynamoDB table.
By setting up a DynamoDB Accelerator (DAX) cluster, the company gets a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x read performance improvement. By routing all read requests through DAX, the company reduces the number of read operations on the DynamoDB table and improves the user experience. References:
✑ Amazon DynamoDB Accelerator (DAX)
✑ Using DAX with DynamoDB
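
A sketch of the read path using the Python DAX client (the amazon-dax-client package); the cluster endpoint and table name are hypothetical:

```python
# Reads go through the DAX cluster endpoint; cache misses pass
# through to DynamoDB automatically.
from amazondax import AmazonDaxClient

dax = AmazonDaxClient.resource(
    endpoint_url="daxs://products-dax.abc123.dax-clusters.us-east-1.amazonaws.com"
)
table = dax.Table("ProductDetails")

# Same interface as a boto3 DynamoDB Table, but served from the
# in-memory cache on a hit.
item = table.get_item(Key={"ProductId": "12345"})
print(item.get("Item"))
```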

NEW QUESTION 16

A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted in the user's on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers to Amazon EC2 instances in multiple Availability Zones that are connected to the storage solution, and configuring an Elastic Load Balancer attached to the instances.
Which replacement to the on-premises file share is MOST resilient and durable?

  • A. Migrate the file share to Amazon RDS
  • B. Migrate the file share to AWS Storage Gateway
  • C. Migrate the file share to Amazon FSx for Windows File Server
  • D. Migrate the file share to Amazon Elastic File System (Amazon EFS)

Answer: C

Explanation:
This answer is correct because it provides a resilient and durable replacement for the on-premises file share that is compatible with Windows IIS web servers. Amazon FSx for Windows File Server is a fully managed service that provides shared file storage built on Windows Server. It supports the SMB protocol and integrates with Microsoft Active Directory, which enables seamless access and authentication for Windows-based applications. Amazon FSx for Windows File Server also offers the following benefits:
✑ Resilience: it can be deployed in multiple Availability Zones, which provides high availability and failover protection. It also supports automatic backups and restores, as well as self-healing features that detect and correct issues.
✑ Durability: it replicates data within and across Availability Zones, and stores data on highly durable storage devices. It also supports encryption at rest and in transit, as well as file access auditing and data deduplication.
✑ Performance: it delivers consistent sub-millisecond latencies and high throughput for file operations. It also supports SSD storage, native Windows features such as Distributed File System (DFS) Namespaces and Replication, and user-driven performance scaling.
References:
✑ Amazon FSx for Windows File Server
✑ Using Microsoft Windows file shares
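
A minimal boto3 sketch of creating such a Multi-AZ file system; the subnet IDs, directory ID, and sizing are hypothetical:

```python
import boto3

fsx = boto3.client("fsx")

# Multi-AZ FSx for Windows File Server file system to replace
# the on-premises NAS share.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,  # GiB
    StorageType="SSD",
    SubnetIds=["subnet-aaa", "subnet-bbb"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 32,  # MB/s
        "PreferredSubnetId": "subnet-aaa",
        # Joined to Active Directory for SMB authentication.
        "ActiveDirectoryId": "d-0123456789",
    },
)
```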

NEW QUESTION 17

A company hosts a multiplayer gaming application on AWS. The company wants the application to read data with sub-millisecond latency and run one-time queries on historical data.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use Amazon RDS for data that is frequently accessed. Run a periodic custom script to export the data to an Amazon S3 bucket.
  • B. Store the data directly in an Amazon S3 bucket. Implement an S3 Lifecycle policy to move older data to S3 Glacier Deep Archive for long-term storage. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
  • C. Use Amazon DynamoDB with DynamoDB Accelerator (DAX) for data that is frequently accessed. Export the data to an Amazon S3 bucket by using DynamoDB table export. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
  • D. Use Amazon DynamoDB for data that is frequently accessed. Turn on streaming to Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to read the data from Kinesis Data Streams. Store the records in an Amazon S3 bucket.

Answer: C

Explanation:
Because the application must read data with sub-millisecond latency, DynamoDB with DAX is the answer: DAX is an in-memory cache that serves reads in microseconds to low milliseconds, while DynamoDB itself supports some of the world's largest-scale applications with consistent, single-digit-millisecond response times at any scale. Exporting the table to Amazon S3 and querying the export with Amazon Athena covers the one-time queries on historical data without any infrastructure to manage. https://aws.amazon.com/dynamodb/dax/?nc1=h_ls
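
A minimal boto3 sketch of the historical-data half of this answer: export the table to S3 (which requires PITR to be enabled), then query the export with Athena. The ARNs, buckets, database, and SQL are hypothetical.

```python
import boto3

# Export the table to S3 without consuming table read capacity.
dynamodb = boto3.client("dynamodb")
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/GameData",
    S3Bucket="game-history-exports",
    ExportFormat="DYNAMODB_JSON",
)

# Run a one-time query against the exported data with Athena.
athena = boto3.client("athena")
athena.start_query_execution(
    QueryString=(
        "SELECT player_id, COUNT(*) AS sessions "
        "FROM game_history GROUP BY player_id"
    ),
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
```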

NEW QUESTION 18

A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks.
Which solution meets these requirements?

  • A. Enable Amazon GuardDuty on the account.
  • B. Enable Amazon Inspector on the EC2 instances.
  • C. Enable AWS Shield and assign Amazon Route 53 to it.
  • D. Enable AWS Shield Advanced and assign the ELB to it.

Answer: D

Explanation:
https://aws.amazon.com/shield/faqs/
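
After the account subscribes to Shield Advanced (an account-level, paid subscription), the ELB is registered for protection; a minimal boto3 sketch with a hypothetical load balancer ARN:

```python
import boto3

shield = boto3.client("shield")

# Register the load balancer as a Shield Advanced protected resource.
shield.create_protection(
    Name="frontend-elb-protection",
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/app/frontend/abc123"
    ),
)
```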

NEW QUESTION 19

An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?

  • A. Create a gateway VPC endpoint to the S3 bucket.
  • B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
  • C. Create an instance profile on Amazon EC2 to allow S3 access.
  • D. Create an Amazon API Gateway API with a private link to access the S3 endpoint.

Answer: A

Explanation:
A gateway VPC endpoint allows the EC2 instance to connect to Amazon S3 over the AWS private network instead of the public internet; no internet gateway or NAT device is required.
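
A minimal boto3 sketch of creating the gateway endpoint; the VPC, route table, and Region are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3: the associated route tables get a route
# to S3 that stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```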

NEW QUESTION 20
......

Thanks for reading the newest SAA-C03 exam dumps! We recommend you try the PREMIUM DumpSolutions.com SAA-C03 dumps in VCE and PDF here: https://www.dumpsolutions.com/SAA-C03-dumps/ (551 Q&As Dumps)