Amazon AWS-Solution-Architect-Associate Exam Questions 2019

Your success in the AWS Solutions Architect Associate exam is our sole target, and we develop all of our AWS Solutions Architect Associate exam material in a way that facilitates the attainment of this target. Not only is our AWS Solutions Architect Associate material the best you can find, it is also the most detailed and the most up to date. Our questions for the Amazon AWS-Solution-Architect-Associate exam are written to the highest standards of technical accuracy.

Free demo questions for Amazon AWS-Solution-Architect-Associate Exam Dumps Below:

NEW QUESTION 1
You have a load balancer configured for VPC, and all back-end Amazon EC2 instances are in service. However, your web browser times out when connecting to the load balancer's DNS name. Which options are probable causes of this behavior? Choose 2 answers

  • A. The load balancer was not configured to use a public subnet with an Internet gateway configured.
  • B. The Amazon EC2 instances do not have a dynamically allocated private IP address.
  • C. The security groups or network ACLs are not properly configured for web traffic.
  • D. The load balancer is not configured in a private subnet with a NAT instance.
  • E. The VPC does not have a VGW configured.

Answer: AC

NEW QUESTION 2
Which of the following are true regarding AWS CloudTrail? Choose 3 answers

  • A. CloudTrail is enabled globally.
  • B. CloudTrail is enabled by default.
  • C. CloudTrail is enabled on a per-region basis.
  • D. CloudTrail is enabled on a per-service basis.
  • E. Logs can be delivered to a single Amazon S3 bucket for aggregation.
  • F. CloudTrail is enabled for all available services within a region.
  • G. Logs can only be processed and delivered to the region in which they are generated.

Answer: CDE

Explanation: Reference: http://aws.amazon.com/cloudtrail/faqs/
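
For reference, the following minimal boto3 sketch (trail and bucket names are hypothetical) shows a per-region trail delivering its logs to a single Amazon S3 bucket, which is how logs from several regions can be aggregated:

    import boto3

    # Hypothetical names; the bucket must already exist and carry a bucket
    # policy that allows CloudTrail to write to it.
    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

    cloudtrail.create_trail(
        Name="example-trail",
        S3BucketName="example-central-cloudtrail-bucket",
        IsMultiRegionTrail=False,   # trails are enabled on a per-region basis
    )
    cloudtrail.start_logging(Name="example-trail")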

NEW QUESTION 3
Which is the default region in AWS?

  • A. eu-west-1
  • B. us-east-1
  • C. us-east-2
  • D. ap-southeast-1

Answer: B

NEW QUESTION 4
You have set up an Auto Scaling group. The cool down period for the Auto Scaling group is 7 minutes. The first instance is launched after 3 minutes, while the second instance is launched after 4 minutes. How many minutes after the first instance is launched will Auto Scaling accept another scaling activity request?

  • A. 11 minutes
  • B. 7 minutes
  • C. 10 minutes
  • D. 14 minutes

Answer: A

Explanation: If an Auto Scaling group is launching more than one instance, the cool down period for each instance starts after that instance is launched. The group remains locked until the last instance that was launched has completed its cool down period. In this case the cool down period for the first instance starts after 3 minutes and finishes at the 10th minute (3+7 cool down), while for the second instance it starts at the 4th minute and finishes at the 11th minute (4+7 cool down). Thus, the Auto Scaling group will receive another request only after 11 minutes.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html
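
The arithmetic above can be checked with a few lines of plain Python (no AWS calls involved):

    cooldown = 7                 # cool down period in minutes
    launch_times = [3, 4]        # minutes at which each instance was launched
    # The group stays locked until the last-launched instance has finished
    # its cool down period.
    unlock_minute = max(t + cooldown for t in launch_times)
    print(unlock_minute)         # 11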

NEW QUESTION 5
A _ is the concept of allowing (or disallowing) an entity such as a user, group, or role some type of access to one or more resources.

  • A. user
  • B. AWS Account
  • C. resource
  • D. permission

Answer: D

NEW QUESTION 6
A photo-sharing service stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for the Amazon S3 operations?

  • A. SAML-based Identity Federation
  • B. Cross-Account Access
  • C. AWS Identity and Access Management roles
  • D. Web Identity Federation

Answer: D
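
As a rough illustration of the Web Identity Federation flow (the role ARN, token, and bucket below are placeholders), the application exchanges the OpenID Connect token for temporary credentials and then calls Amazon S3 with them:

    import boto3

    sts = boto3.client("sts")

    # Placeholder role ARN and OIDC token for illustration only.
    creds = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/PhotoAppS3Access",
        RoleSessionName="photo-app-user",
        WebIdentityToken="<token returned by the OpenID Connect provider>",
    )["Credentials"]

    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    s3.list_objects_v2(Bucket="example-photo-bucket")  # hypothetical bucket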

NEW QUESTION 7
You have been storing massive amounts of data on Amazon Glacier for the past 2 years and now start to wonder if there are any limitations on this. What is the correct answer to your question?

  • A. The total volume of data is limited but the number of archives you can store is unlimited.
  • B. The total volume of data is unlimited but the number of archives you can store is limited.
  • C. The total volume of data and number of archives you can store are unlimited.
  • D. The total volume of data is limited and the number of archives you can store is limited.

Answer: C

Explanation: An archive is a durably stored block of information. You store your data in Amazon Glacier as archives. You may upload a single file as an archive, but your costs will be lower if you aggregate your data. TAR and ZIP are common formats that customers use to aggregate multiple files into a single file before uploading to Amazon Glacier.
The total volume of data and number of archives you can store are unlimited. Individual Amazon Glacier archives can range in size from 1 byte to 40 terabytes.
The largest archive that can be uploaded in a single upload request is 4 gigabytes.
For items larger than 100 megabytes, customers should consider using the Multipart upload capability. Archives stored in Amazon Glacier are immutable, i.e. archives can be uploaded and deleted but cannot be edited or overwritten.
Reference: https://aws.amazon.com/glacier/faqs/
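
A minimal boto3 sketch of an archive upload (vault name and file are hypothetical); for items larger than about 100 MB, the multipart upload calls would be the better fit:

    import boto3

    glacier = boto3.client("glacier", region_name="us-east-1")

    # Hypothetical vault and file; archives of 1 byte up to 40 TB are allowed,
    # but a single upload request is capped at 4 GB.
    with open("backup-2019-01.tar.gz", "rb") as body:
        response = glacier.upload_archive(
            vaultName="example-vault",
            archiveDescription="Monthly aggregated backup",
            body=body,
        )
    print(response["archiveId"])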

NEW QUESTION 8
True or False: When using IAM to control access to your RDS resources, the key names that can be used are case sensitive. For example, aws:CurrentTime is NOT equivalent to AWS:currenttime.

  • A. TRUE
  • B. FALSE

Answer: B

NEW QUESTION 9
Your company runs a customer-facing event registration site. This site is built with a 3-tier architecture with web and application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three Availability Zones (AZs), which architecture provides high availability?

  • A. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the other AZ.
  • B. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the two other AZs.
  • C. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.
  • D. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.

Answer: D

Explanation: Amazon RDS Multi-AZ Deployments
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Enhanced Durability
Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up-to-date with the primary. Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server-native Mirroring technology. Both approaches safeguard your data in the event of a DB Instance failure or loss of an Availability Zone.
If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. Compare this to a Single-AZ deployment: in case of a Single-AZ database failure, a user-initiated point-in-time-restore operation will be required. This operation can take several hours to complete, and any data updates that occurred after the latest restorable time (typically within the last five minutes) will not be available.
Amazon Aurora employs a highly durable, SSD-backed virtualized storage layer purpose-built for database workloads. Amazon Aurora automatically replicates your volume six ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and replaced automatically.
Increased Availability
You also benefit from enhanced database availability when running Multi-AZ deployments. If an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete: typically under one minute for Amazon Aurora and one to two minutes for other database engines (see the RDS FAQ for details).
The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete.
Unlike Single-AZ deployments, I/O activity is not suspended on your primary during backup for Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines, because the backup is taken from the standby. However, note that you may still experience elevated latencies for a few minutes during backups for Multi-AZ deployments.
On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS Multi-AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure, Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically.
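
For illustration, a Multi-AZ MySQL instance like the one in the correct answer can be requested with a single boto3 call; the identifier, instance class, and credentials below are placeholders:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="event-registration-db",   # hypothetical name
        Engine="mysql",
        DBInstanceClass="db.m5.large",                  # hypothetical class
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="<choose-a-strong-password>",
        MultiAZ=True,   # synchronous standby in another Availability Zone
    )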

NEW QUESTION 10
Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents.
Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? (Choose 3 answers)

  • A. Setting up a federation proxy or identity provider
  • B. Using AWS Security Token Service to generate temporary tokens
  • C. Tagging each folder in the bucket
  • D. Configuring IAM role
  • E. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket

Answer: ABD
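
One common way to combine these pieces is for the federation proxy (which authenticates the user against AD/LDAP) to request temporary credentials scoped to that user's folder. The sketch below uses GetFederationToken with a hypothetical bucket and folder layout:

    import json
    import boto3

    sts = boto3.client("sts")

    def credentials_for(user_name):
        # Hypothetical bucket layout: one folder per corporate user.
        policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::example-corp-docs/%s/*" % user_name,
            }],
        }
        return sts.get_federation_token(
            Name=user_name,
            Policy=json.dumps(policy),
            DurationSeconds=3600,
        )["Credentials"]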

NEW QUESTION 11
An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier that will utilize Amazon DynamoDB for storage. When creating the CloudFormation template, which of the following would allow the application instance access to the DynamoDB tables without exposing API credentials?

  • A. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and associate the Role to the application instances by referencing an instance profile.
  • B. Use the Parameters section in the CloudFormation template to have the user input Access and Secret Keys from an already created IAM user that has the permissions required to read and write from the required DynamoDB table.
  • C. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance.
  • D. Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt function to retrieve the Access and Secret Keys and pass them to the application instance through user-data.

Answer: C
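
A rough boto3 sketch of the same idea outside of a template (role, profile, and AMI names are placeholders): the role reaches the instance through an instance profile, so no access keys ever appear in the template or on the instance:

    import json
    import boto3

    iam = boto3.client("iam")
    ec2 = boto3.client("ec2")

    assume_role_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }
    iam.create_role(RoleName="AppDynamoDBRole",
                    AssumeRolePolicyDocument=json.dumps(assume_role_policy))
    iam.attach_role_policy(RoleName="AppDynamoDBRole",
                           PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess")
    iam.create_instance_profile(InstanceProfileName="AppDynamoDBProfile")
    iam.add_role_to_instance_profile(InstanceProfileName="AppDynamoDBProfile",
                                     RoleName="AppDynamoDBRole")

    # The application instance picks up temporary credentials automatically.
    ec2.run_instances(ImageId="ami-12345678",          # hypothetical AMI
                      InstanceType="t3.micro",
                      MinCount=1, MaxCount=1,
                      IamInstanceProfile={"Name": "AppDynamoDBProfile"})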

NEW QUESTION 12
Is creating a Read Replica of another Read Replica supported?

  • A. Only in certain regions
  • B. Only with MSSQL based RDS
  • C. Only for Oracle RDS types
  • D. No

Answer: D

NEW QUESTION 13
An existing client comes to you and says that he has heard that launching instances into a VPC (virtual private cloud) is a better strategy than launching instances into EC2-Classic, which he knows is what you currently do. You suspect that he is correct, and he has asked you to do some research about this and get back to him. Which of the following statements is true regarding the abilities you gain by launching your instances into a VPC instead of EC2-Classic?

  • A. All of the things listed here.
  • B. Change security group membership for your instances while they're running
  • C. Assign static private IP addresses to your instances that persist across starts and stops
  • D. Define network interfaces, and attach one or more network interfaces to your instances

Answer: A

Explanation: By launching your instances into a VPC instead of EC2-Classic, you gain the ability to:
Assign static private IP addresses to your instances that persist across starts and stops
Assign multiple IP addresses to your instances
Define network interfaces, and attach one or more network interfaces to your instances
Change security group membership for your instances while they're running
Control the outbound traffic from your instances (egress filtering) in addition to controlling the inbound traffic to them (ingress filtering)
Add an additional layer of access control to your instances in the form of network access control lists (ACL)
Run your instances on single-tenant hardware
Reference: http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf
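
For example, one capability from the list above (changing security group membership while an instance is running) is a single call in a VPC; the instance and security group IDs below are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Replace the running VPC instance's security group list in place;
    # in EC2-Classic, security groups could only be assigned at launch time.
    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",      # hypothetical instance
        Groups=["sg-0aaa1111bbb22222c"],       # hypothetical VPC security group
    )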

NEW QUESTION 14
While creating the snapshots using the command line tools, which command should I be using?

  • A. ec2-deploy-snapshot
  • B. ec2-fresh-snapshot
  • C. ec2-create-snapshot
  • D. ec2-new-snapshot

Answer: C
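
The ec2-create-snapshot command belongs to the legacy EC2 API command line tools; with today's boto3 SDK the equivalent call would look roughly like this (the volume ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a point-in-time snapshot of a hypothetical EBS volume.
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Pre-maintenance snapshot",
    )
    print(snapshot["SnapshotId"])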

NEW QUESTION 15
Can I detach the primary (eth0) network interface when the instance is running or stopped?

  • A. Yes, you can.
  • B. No, you cannot.
  • C. Depends on the state of the interface at the time.

Answer: B

NEW QUESTION 16
What is Oracle SQL Developer?

  • A. An AWS developer who is an expert in Amazon RDS using both the Oracle and SQL Server DB engines
  • B. A graphical Java tool distributed without cost by Oracle.
  • C. It is a variant of the SQL Server Management Studio designed by Microsoft to support Oracle DBMS functionalities
  • D. A different DBMS released by Microsoft free of cost

Answer: B

NEW QUESTION 17
In Amazon EC2, while sharing an Amazon EBS snapshot, can the snapshots with AWS Marketplace product codes be public?

  • A. Yes, but only for US-based providers.
  • B. Yes, they can be public.
  • C. No, they cannot be made public.
  • D. Yes, they are automatically made public by the system.

Answer: C

Explanation: Snapshots with AWS Marketplace product codes can't be made public. Reference:
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html
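
Making a snapshot public means adding the "all" group to its create-volume permission, as in the hedged sketch below (the snapshot ID is a placeholder); the same call is rejected for a snapshot that carries an AWS Marketplace product code:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # This call would fail for a snapshot with a Marketplace product code.
    ec2.modify_snapshot_attribute(
        SnapshotId="snap-0123456789abcdef0",   # hypothetical snapshot
        Attribute="createVolumePermission",
        OperationType="add",
        GroupNames=["all"],                    # "all" makes it public
    )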

NEW QUESTION 18
You have a lot of data stored in the AWS Storage Gateway and your manager has come to you asking about how the billing is calculated, specifically the Virtual Tape Shelf usage. What would be a correct response to this?

  • A. You are billed for the virtual tape data you store in Amazon Glacier and are billed for the size of the virtual tape.
  • B. You are billed for the virtual tape data you store in Amazon Glacier and billed for the portion of virtual tape capacity that you use, not for the size of the virtual tape.
  • C. You are billed for the virtual tape data you store in Amazon S3 and billed for the portion of virtual tape capacity that you use, not for the size of the virtual tape.
  • D. You are billed for the virtual tape data you store in Amazon S3 and are billed for the size of the virtual tape.

Answer: B

Explanation: The AWS Storage Gateway is a service connecting an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and AWS’s storage infrastructure.
AWS Storage Gateway billing is as follows.
Volume storage usage (per GB per month):
You are billed for the Cached volume data you store in Amazon S3. You are only billed for volume capacity you use, not for the size of the volume you create.
Snapshot Storage usage (per GB per month): You are billed for the snapshots your gateway stores in Amazon S3. These snapshots are stored and billed as Amazon EBS snapshots. Snapshots are incremental backups, reducing your storage charges. When taking a new snapshot, only the data that has changed since your last snapshot is stored.
Virtual Tape Library usage (per GB per month):
You are billed for the virtual tape data you store in Amazon S3. You are only billed for the portion of virtual tape capacity that you use, not for the size of the virtual tape.
Virtual Tape Shelf usage (per GB per month):
You are billed for the virtual tape data you store in Amazon Glacier. You are only billed for the portion of virtual tape capacity that you use, not for the size of the virtual tape.
Reference: https://aws.amazon.com/storagegateway/faqs/
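
A few lines of plain Python illustrate the difference; the per-GB rate below is made up purely for the arithmetic and is not an actual AWS price:

    tape_size_gb = 2500          # size of the virtual tape that was created
    data_written_gb = 300        # portion of the tape actually used
    rate_per_gb_month = 0.01     # hypothetical rate, for illustration only

    # Billing follows the data written to the tape, not the tape's size.
    monthly_charge = data_written_gb * rate_per_gb_month
    print(monthly_charge)        # 3.0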

100% Valid and Newest Version AWS-Solution-Architect-Associate Questions & Answers shared by Certleader, Get Full Dumps HERE: https://www.certleader.com/AWS-Solution-Architect-Associate-dumps.html (New 672 Q&As)