Amazon AWS-Solution-Architect-Associate Dumps 2019

Our AWS-Solution-Architect-Associate dumps are kept up to date, and every AWS-Solution-Architect-Associate question is verified by experts. Once you have fully prepared with our AWS-Solution-Architect-Associate material, you will be ready for the real AWS-Solution-Architect-Associate exam without a problem. One candidate reports: "Passed the AWS-Solution-Architect-Associate exam on my first attempt! Here's what I did."

Amazon AWS-Solution-Architect-Associate Free Dumps Questions Online, Read and Test Now.

A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an off-site backup of this data, while maintaining low-latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer requirements?

  • A. Gateway-Cached volumes with snapshots scheduled to Amazon S3
  • B. Gateway-Stored volumes with snapshots scheduled to Amazon S3
  • C. Gateway-Virtual Tape Library with snapshots to Amazon S3
  • D. Gateway-Virtual Tape Library with snapshots to Amazon Glacier

Answer: A

Explanation: Gateway-Cached volumes present a single growing volume backed by Amazon S3, keep frequently accessed data cached locally for low-latency access, and support scheduled snapshots to Amazon S3 for off-site backup. A Virtual Tape Library does not present a mountable volume, so it cannot meet the single-logical-volume requirement.

Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs, and a Multi-AZ RDS instance for data persistence.
The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache cluster to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%.
Do you need to change anything in the architecture to maintain the high availability of the application with the anticipated additional load? Why?

  • A. Yes, you should deploy two Memcached ElastiCache clusters in different AZs because the RDS instance will not be able to handle the load if the cache node fails.
  • B. No, if the cache node fails you can always get the same data from the DB without having any availability impact.
  • C. No, if the cache node fails the automated ElastiCache node recovery feature will prevent any availability impact.
  • D. Yes, you should deploy the Memcached ElastiCache cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails.

Answer: A

Explanation: ElastiCache for Memcached
The primary goal of caching is typically to offload reads from your database or other primary data source. In most apps, you have hot spots of data that are regularly queried, but only updated periodically. Think of the front page of a blog or news site, or the top 100 leaderboard in an online game. In this type of case, your app can receive dozens, hundreds, or even thousands of requests for the same data before it's updated again. Having your caching layer handle these queries has several advantages. First, it's considerably cheaper to add an in-memory cache than to scale up to a larger database cluster. Second, an in-memory cache is also easier to scale out, because it's easier to distribute an in-memory cache horizontally than a relational database.
Last, a caching layer provides a request buffer in the event of a sudden spike in usage. If your app or game ends up on the front page of Reddit or the App Store, it's not unheard of to see a spike that is 10 to 100 times your normal application load. Even if you autoscale your application instances, a 10x request spike will likely make your database very unhappy.
Let's focus on ElastiCache for Memcached first, because it is the best fit for a caching-focused solution. We'll revisit Redis later in the paper, and weigh its advantages and disadvantages.
Architecture with ElastiCache for Memcached
When you deploy an ElastiCache Memcached cluster, it sits in your application as a separate tier alongside your database. As mentioned previously, Amazon ElastiCache does not directly communicate with your database tier, or indeed have any particular knowledge of your database. A simplified deployment for a web application looks something like this:
[Exhibit: simplified web application architecture with an ElastiCache tier alongside the database]
In this architecture diagram, the Amazon EC2 application instances are in an Auto Scaling group, located behind a load balancer using Elastic Load Balancing, which distributes requests among the instances. As requests come into a given EC2 instance, that EC2 instance is responsible for communicating with ElastiCache and the database tier. For development purposes, you can begin with a single ElastiCache node to test your application, and then scale to additional cluster nodes by modifying the ElastiCache cluster. As you add additional cache nodes, the EC2 application instances are able to distribute cache keys across multiple ElastiCache nodes. The most common practice is to use client-side sharding to distribute keys across cache nodes, which we will discuss later in this paper.
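The client-side sharding just mentioned can be sketched as a simple hash-and-modulo scheme. The node addresses below are made up; real Memcached clients typically use consistent hashing (e.g. ketama) instead, so that adding or removing a node remaps only a fraction of the keys:

```python
import hashlib

# Hypothetical node endpoints; a real deployment would list the
# cache node addresses reported by the ElastiCache cluster.
NODES = ["cache-node-1:11211", "cache-node-2:11211", "cache-node-3:11211"]

def node_for_key(key, nodes=NODES):
    # Modulo sharding: hash the key and map it to one node.
    # md5 (rather than Python's built-in hash()) keeps the mapping
    # stable across processes and restarts.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]
```

Every application instance computes the same mapping, so any instance can find the node holding a given key without coordinating with the others.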
[Exhibit: EC2 application instances distributing cache keys across multiple ElastiCache nodes]
When you launch an ElastiCache cluster, you can choose the Availability Zone(s) that the cluster lives in. For best performance, you should configure your cluster to use the same Availability Zones as your application servers. To launch an ElastiCache cluster in a specific Availability Zone, make sure to specify the Preferred Zone(s) option during cache cluster creation. The Availability Zones that you specify will be where ElastiCache will launch your cache nodes. We recommend that you select Spread Nodes Across Zones, which tells ElastiCache to distribute cache nodes across these zones as evenly as possible. This distribution will mitigate the impact of an Availability Zone disruption on your ElastiCache nodes. The trade-off is that some of the requests from your application to ElastiCache will go to a node in a different Availability Zone, meaning latency will be slightly higher.
For more details, refer to Creating a Cache Cluster in the Amazon ElastiCache User Guide.
As mentioned at the outset, ElastiCache can be coupled with a wide variety of databases. Here is an example architecture that uses Amazon DynamoDB instead of Amazon RDS and MySQL:
[Exhibit: architecture using Amazon DynamoDB together with ElastiCache]
This combination of DynamoDB and ElastiCache is very popular with mobile and game companies, because DynamoDB allows for higher write throughput at lower cost than traditional relational databases. In addition, DynamoDB uses a key-value access pattern similar to ElastiCache, which also simplifies the programming model. Instead of using relational SQL for the primary database but then key-value patterns for the cache, both the primary database and cache can be programmed similarly.
In this architecture pattern, DynamoDB remains the source of truth for data, but application reads are offloaded to ElastiCache for a speed boost.

The common use cases for DynamoDB Fine-Grained Access Control (FGAC) are cases in which the end user wants _______.

  • A. to change the hash keys of the table directly
  • B. to check if an IAM policy requires the hash keys of the tables directly
  • C. to read or modify any codecommit key of the table directly, without a middle-tier service
  • D. to read or modify the table directly, without a middle-tier service

Answer: D

Explanation: FGAC can benefit any application that tracks information in a DynamoDB table, where the end user (or application client acting on behalf of an end user) wants to read or modify the table directly, without a middle-tier service. For instance, a developer of a mobile app named Acme can use FGAC to track the
top score of every Acme user in a DynamoDB table. FGAC allows the application client to modify only the top score for the user that is currently running the application.
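To make the FGAC idea concrete, here is a sketch of such a policy built as a Python dict. The `dynamodb:LeadingKeys` condition key is the documented mechanism that limits access to items whose partition key matches the caller's identity; the table name, account ID, and the web-identity substitution variable are illustrative for the Acme scenario:

```python
# Sketch of an IAM policy for DynamoDB fine-grained access control.
# Only dynamodb:LeadingKeys is the real condition key; the resource ARN
# and the "${www.amazon.com:user_id}" variable are illustrative values.
fgac_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:UpdateItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/AcmeTopScores",
        "Condition": {
            # Restrict reads/writes to items whose partition key equals
            # the identity of the user running the application.
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
            }
        }
    }]
}
```

Attached to the role an application client assumes, a policy shaped like this lets each user touch only their own row, with no middle-tier service enforcing that rule.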

Can we attach an EBS volume to more than one EC2 instance at the same time?

  • A. Yes.
  • B. No
  • C. Only EC2-optimized EBS volumes.
  • D. Only in read mode.

Answer: B

Explanation: An EBS volume can be attached to only one EC2 instance at a time (a single instance can, however, have multiple volumes attached to it).

You have just been given a scope for a new client who has an enormous amount of data (petabytes) that he constantly needs analysed. Currently he is paying a huge amount of money for a data warehousing company to do this for him and is wondering if AWS can provide a cheaper solution. Do you think AWS has a solution for this?

  • A. Yes. Amazon SimpleDB
  • B. No. Not presently
  • C. Yes. Amazon Redshift
  • D. Yes. Your choice of relational AMIs on Amazon EC2 and EBS

Answer: C

Explanation: Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. You can start small for just $0.25 per hour with no commitments or upfront costs and scale to a petabyte or more for $1,000 per terabyte per year, less than a tenth of most other data warehousing solutions. Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and parallelizing queries across multiple nodes. Redshift uses standard PostgreSQL JDBC and ODBC drivers, allowing you to use a wide range of familiar SQL clients. Data load speed scales linearly with cluster size, with integrations to Amazon S3, Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Kinesis or any SSH-enabled host.
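The columnar-storage point can be illustrated in a few lines: when a query aggregates one column, a column-oriented layout only has to touch that column's values, while a row store must read through every row. A toy sketch, with in-memory lists standing in for on-disk blocks:

```python
# The same table in two layouts.
rows = [
    {"order_id": i, "amount": i * 10.0, "region": "us-east"}
    for i in range(1000)
]

# Columnar layout: one contiguous list per column.
columns = {
    "order_id": [r["order_id"] for r in rows],
    "amount": [r["amount"] for r in rows],
    "region": [r["region"] for r in rows],
}

def sum_amount_row_store(rows):
    # Must visit every row (and in a real row store, read every field
    # on disk) just to aggregate one column.
    return sum(r["amount"] for r in rows)

def sum_amount_column_store(columns):
    # Touches only the "amount" column -- the I/O saving that columnar
    # storage exploits for analytic queries.
    return sum(columns["amount"])
```

Both functions return the same total; the difference is how much data each layout forces the scan to read, which is why columnar engines shine on wide tables with selective analytic queries.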

For which of the following use cases are Simple Workflow Service (SWF) and Amazon EC2 an appropriate solution? Choose 2 answers

  • A. Using as an endpoint to collect thousands of data points per hour from a distributed fleet of sensors
  • B. Managing a multi-step and multi-decision checkout process of an e-commerce website
  • C. Orchestrating the execution of distributed and auditable business processes
  • D. Using as an SNS (Simple Notification Service) endpoint to trigger execution of video transcoding jobs
  • E. Using as a distributed session store for your web application

Answer: BC

Explanation: Amazon SWF is built for coordinating multi-step, multi-decision workflows and distributed, auditable business processes. High-volume sensor ingestion, SNS-triggered transcoding, and session storage are better served by other services.

A customer enquires about whether all his data is secure on AWS and is especially concerned about Elastic MapReduce (EMR), so you need to inform him of some of the security features in place for AWS. Which of the below statements would be an incorrect response to your customer's enquiry?

  • A. Amazon EMR customers can choose to send data to Amazon S3 using the HTTPS protocol for secure transmission.
  • B. Amazon S3 provides authentication mechanisms to ensure that stored data is secured against unauthorized access.
  • C. Every packet sent in the AWS network uses Internet Protocol Security (IPsec).
  • D. Customers may encrypt the input data before they upload it to Amazon S3.

Answer: C

Explanation: Amazon S3 provides authentication mechanisms to ensure that stored data is secured against unauthorized access. Unless the customer who is uploading the data specifies otherwise, only that customer can access the data. Amazon EMR customers can also choose to send data to Amazon S3 using the HTTPS protocol for secure transmission. In addition, Amazon EMR always uses HTTPS to send data between Amazon S3 and Amazon EC2. For added security, customers may encrypt the input data before they upload it to Amazon S3 (using any common data encryption tool); they then need to add a decryption step to the beginning of their cluster when Amazon EMR fetches the data from Amazon S3.

In Amazon AWS, which of the following statements is true of key pairs?

  • A. Key pairs are used only for Amazon SDKs.
  • B. Key pairs are used only for Amazon EC2 and Amazon CloudFront.
  • C. Key pairs are used only for Elastic Load Balancing and AWS IAM.
  • D. Key pairs are used for all Amazon services.

Answer: B

Explanation: Key pairs consist of a public and private key, where you use the private key to create a digital signature, and then AWS uses the corresponding public key to validate the signature. Key pairs are used only for Amazon EC2 and Amazon CloudFront.

Which Amazon Storage behaves like raw, unformatted, external block devices that you can attach to your instances?

  • A. None of these.
  • B. Amazon Instance Storage
  • C. Amazon EBS
  • D. All of these

Answer: C

Which of the following features ensures even distribution of traffic to Amazon EC2 instances in multiple Availability Zones registered with a load balancer?

  • A. Elastic Load Balancing request routing
  • B. An Amazon Route 53 weighted routing policy
  • C. Elastic Load Balancing cross-zone load balancing
  • D. An Amazon Route 53 latency routing policy

Answer: C

Explanation: With cross-zone load balancing enabled, the load balancer distributes incoming requests evenly across the registered instances in all enabled Availability Zones, rather than only across the instances in its own Availability Zone.

You've been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier, with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data, and then archiving nightly into S3 for further processing with EMR.
They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access.
Which approach provides a cost-effective, scalable mitigation to this kind of attack?

  • A. Recommend that they lease space at a Direct Connect partner location and establish a 1G Direct Connect connection to their VPC. They would then establish Internet connectivity into their space, filter the traffic in a hardware Web Application Firewall (WAF), and then pass the traffic through the Direct Connect connection into their application running in their VPC.
  • B. Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet.
  • C. Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.
  • D. Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF functionality.

Answer: C

Your team has a Tomcat-based Java application you need to deploy into development, test and production environments. After some research, you opt to use Elastic Beanstalk due to its tight integration with your developer tools and RDS due to its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your org want access to that same restored data via their EC2 instances in your VPC. The optimal setup for persistence and security that meets the above requirements would be the following.

  • A. Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets.
  • B. Create your RDS instance separately and add its IP address to your application's DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC's IP address block.
  • C. Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself.
  • D. Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets.

Answer: C

Explanation: Creating the RDS instance outside of Elastic Beanstalk keeps the data when environments are rebuilt, and a dedicated client security group cleanly grants DB access both to the application and to the other teams' EC2 instances.

You've created your first load balancer and have registered your EC2 instances with the load balancer. Elastic Load Balancing routinely performs health checks on all the registered EC2 instances and automatically distributes all incoming requests to the DNS name of your load balancer across your registered, healthy EC2 instances. By default, the load balancer uses the _ protocol for checking the health of your instances.

  • A. HTTPS
  • B. HTTP
  • C. ICMP
  • D. IPv6

Answer: B

Explanation: In Elastic Load Balancing a health configuration uses information such as protocol, ping port, ping path (URL), response timeout period, and health check interval to determine the health state of the instances registered with the load balancer.
Currently, HTTP on port 80 is the default health check.
Reference: ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html
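The health-check behavior described above can be mimicked locally. The following is a minimal sketch (not the ELB implementation) that treats a target as healthy only if its ping path returns HTTP 200 within the response timeout, exercised against a throwaway local server:

```python
import threading
import urllib.request
import urllib.error
from http.server import BaseHTTPRequestHandler, HTTPServer

def is_healthy(url, timeout=5.0):
    # Mimics an ELB HTTP health check: healthy only if the ping path
    # answers 200 within the response timeout period.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

class _PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve 200 on the health-check ping path, 404 elsewhere.
        if self.path == "/index.html":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the sketch quiet

# Throwaway target server on an ephemeral port.
server = HTTPServer(("127.0.0.1", 0), _PingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
```

A real load balancer runs this probe on the configured interval and only routes requests to instances whose recent checks passed; the ping protocol, port, path, timeout, and interval are exactly the knobs the explanation lists.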

A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office.
An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data? Choose 3 answers

  • A. Use a HTTPS GET to the Amazon S3 bucket where the files are located.
  • B. Restore by implementing a lifecycle policy on the Amazon S3 bucket.
  • C. Make an Amazon Glacier Restore API call to load the files into another Amazon S3 bucket within four to six hours.
  • D. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot.
  • E. Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance.
  • F. Launch an AWS Storage Gateway virtual iSCSI device at the branch office, and restore from a gateway snapshot.

Answer: ADF

What is the maximum response time for a Business level Premium Support case?

  • A. 120 seconds
  • B. 1 hour
  • C. 10 minutes
  • D. 12 hours

Answer: B

A user wants to increase the durability and availability of the EBS volume. Which of the below mentioned actions should he perform?

  • A. Take regular snapshots.
  • B. Create an AMI.
  • C. Create EBS with higher capacity.
  • D. Access EBS regularly.

Answer: A

Explanation: In Amazon Web Services, Amazon EBS volumes that operate with 20 GB or less of modified data since their most recent snapshot can expect an annual failure rate (AFR) between 0.1% and 0.5%. For this reason, to maximize both durability and availability of their Amazon EBS data, the user should frequently create snapshots of the Amazon EBS volumes.

You are very concerned about security on your network because you have multiple programmers testing APIs and SDKs and you have no idea what is happening. You think CloudTrail may help but are not sure what it does. Which of the following statements best describes the AWS service CloudTrail?

  • A. With AWS CloudTrail you can get a history of AWS API calls and related events for your account.
  • B. With AWS CloudTrail you can get a history of IAM users for your account.
  • C. With AWS CloudTrail you can get a history of S3 log files for your account.
  • D. With AWS CloudTrail you can get a history of CloudFormation JSON scripts used for your account.

Answer: A

Explanation: With AWS CloudTrail, you can get a history of AWS API calls for your account, including API calls made via the AWS Management Console, the AWS SDKs, the command line tools, and higher-level AWS services. You can also identify which users and accounts called AWS APIs for services that support CloudTrail, the source IP address the calls were made from, and when the calls occurred. You can integrate CloudTrail into applications using the API, automate trail creation for your organization, check the status of your trails, and control how administrators turn CloudTrail logging on and off.
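As a sketch of what consuming CloudTrail output looks like, the following parses a CloudTrail-style record and pulls out the who/what/where/when fields. The record contents here are made up, though the field names follow CloudTrail's documented event format:

```python
import json

# An illustrative CloudTrail-style record; values are invented.
record_json = """
{
  "Records": [{
    "eventTime": "2019-03-01T12:00:00Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "RunInstances",
    "userIdentity": {"type": "IAMUser", "userName": "dev-tester"},
    "sourceIPAddress": "203.0.113.10"
  }]
}
"""

def summarize(log_text):
    # Answer the "who called what, from where, and when" questions
    # CloudTrail is designed for.
    out = []
    for rec in json.loads(log_text)["Records"]:
        out.append((
            rec["userIdentity"].get("userName", "unknown"),
            rec["eventName"],
            rec["sourceIPAddress"],
            rec["eventTime"],
        ))
    return out
```

Scanning delivered log files with a summary like this is one way to spot which of your programmers made which API calls, and from where.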

After setting up a Virtual Private Cloud (VPC) network, a more experienced cloud engineer suggests that to achieve low network latency and high network throughput you should look into setting up a placement group. You know nothing about this, but begin to do some research about it and are especially curious about its limitations. Which of the below statements is wrong in describing the limitations of a placement group?

  • A. Although launching multiple instance types into a placement group is possible, this reduces the likelihood that the required capacity will be available for your launch to succeed.
  • B. A placement group can span multiple Availability Zones.
  • C. You can't move an existing instance into a placement group.
  • D. A placement group can span peered VPCs

Answer: B

Explanation: A placement group is a logical grouping of instances within a single Availability Zone. Using placement groups enables applications to participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency, and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking.
Placement groups have the following limitations:
The name you specify for a placement group must be unique within your AWS account. A placement group can't span multiple Availability Zones.
Although launching multiple instance types into a placement group is possible, this reduces the likelihood that the required capacity will be available for your launch to succeed. We recommend using the same instance type for all instances in a placement group.
You can't merge placement groups. Instead, you must terminate the instances in one placement group, and then relaunch those instances into the other placement group.
A placement group can span peered VPCs; however, you will not get full-bisection bandwidth between instances in peered VPCs. For more information about VPC peering connections, see VPC Peering in the Amazon VPC User Guide.
You can't move an existing instance into a placement group. You can create an AMI from your existing instance, and then launch a new instance from the AMI into a placement group.

P.S. Certleader is now offering 100% pass-ensured AWS-Solution-Architect-Associate dumps! All AWS-Solution-Architect-Associate exam questions have been updated with correct answers: (672 New Questions)