Amazon AWS-Solution-Architect-Associate Exam Questions 2019

It is faster and easier to pass the AWS Solution Architect Associate exam by using these AWS Solution Architect Associate practice questions. Get immediate access to the AWS Solution Architect Associate dumps, covering the same core areas as the certification with professionally verified answers, and pass your exam with a high score.

Online AWS-Solution-Architect-Associate free questions and answers (new version):

NEW QUESTION 1
When controlling access to Amazon EC2 resources, each Amazon EBS snapshot has a ______ attribute that controls which AWS accounts can use the snapshot.

  • A. createVolumePermission
  • B. LaunchPermission
  • C. SharePermission
  • D. RequestPermission

Answer: A

Explanation: Each Amazon EBS snapshot has a createVolumePermission attribute that you can set to one or more AWS account IDs to share the snapshot with those AWS accounts. To allow several AWS accounts to use a particular EBS snapshot, you can use the snapshot's createVolumePermission attribute to include a list of the accounts that can use it.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/UsingIAM.html
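
As an illustration, here is a minimal boto3 sketch that adds another AWS account to a snapshot's createVolumePermission list and then reads the attribute back; the region, snapshot ID, and account ID are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Grant a second AWS account permission to create volumes from this snapshot.
ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",   # placeholder snapshot ID
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=["111122223333"],              # placeholder account ID
)

# Verify that the attribute now lists the shared account.
resp = ec2.describe_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",
    Attribute="createVolumePermission",
)
print(resp["CreateVolumePermissions"])
```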

NEW QUESTION 2
Can I change the EC2 security groups after an instance is launched in EC2-Classic?

  • A. Yes, you can change security groups after you launch an instance in EC2-Classic.
  • B. No, you cannot change security groups after you launch an instance in EC2-Classic.
  • C. Yes, you can only when you remove rules from a security group.
  • D. Yes, you can only when you add rules to a security group.

Answer: B

Explanation: After you launch an instance in EC2-Classic, you can't change its security groups. However, you can add rules to or remove rules from a security group, and those changes are automatically applied to all instances that are associated with the security group.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-network-security.html
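
To illustrate the point about rule changes, a hedged boto3 sketch that adds and later removes an ingress rule on an existing security group; the change applies immediately to every instance already associated with the group. The region, group ID, and CIDR are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
group_id = "sg-0123456789abcdef0"                    # placeholder security group

# Add an HTTPS ingress rule; instances already using the group pick it up immediately.
ec2.authorize_security_group_ingress(
    GroupId=group_id, IpProtocol="tcp", FromPort=443, ToPort=443, CidrIp="0.0.0.0/0"
)

# Later, remove the same rule; again, no instance relaunch is required.
ec2.revoke_security_group_ingress(
    GroupId=group_id, IpProtocol="tcp", FromPort=443, ToPort=443, CidrIp="0.0.0.0/0"
)
```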

NEW QUESTION 3
What does the AWS Storage Gateway provide?

  • A. It allows you to integrate on-premises IT environments with cloud storage.
  • B. A direct encrypted connection to Amazon S3.
  • C. It's a backup solution that provides on-premises cloud storage.
  • D. It provides an encrypted SSL endpoint for backups in the Cloud.

Answer: A

NEW QUESTION 4
An organization has a statutory requirement to protect data at rest for its S3 objects. Which of the below-mentioned options need not be enabled by the organization to achieve data security?

  • A. MFA delete for S3 objects
  • B. Client side encryption
  • C. Bucket versioning
  • D. Data replication

Answer: D

Explanation: AWS S3 provides multiple options to protect data at rest. The options include permissions (policies), encryption (client-side and server-side), bucket versioning, and MFA-based delete. The user can enable any of these options to achieve data protection. Data replication is an internal facility of AWS, where S3 redundantly stores each object across multiple facilities within the region, and the organization need not enable it in this case.
Reference: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf
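
The options the explanation lists can each be switched on explicitly. A minimal boto3 sketch, assuming a bucket you own (the bucket name is a placeholder); MFA Delete additionally requires the root account's MFA device, so it is only noted in a comment.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-statutory-data-bucket"  # placeholder bucket name

# Bucket versioning (enabling MFA Delete on top of this requires the root
# account's MFA serial and code, passed via the MFA parameter).
s3.put_bucket_versioning(
    Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
)

# Default server-side encryption for all new objects in the bucket.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```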

NEW QUESTION 5
You are planning and configuring some EBS volumes for an application. In order to get the most performance out of your EBS volumes, you should attach them to an instance with enough ______ to support your volumes.

  • A. Redundancy
  • B. Storage
  • C. Bandwidth
  • D. Memory

Answer: C

Explanation: When you plan and configure EBS volumes for your application, it is important to consider the configuration of the instances that you will attach the volumes to. In order to get the most performance out of your EBS volumes, you should attach them to an instance with enough bandwidth to support your volumes, such as an EBS-optimized instance or an instance with 10 Gigabit network connectivity. This is especially important when you use General Purpose (SSD) or Provisioned IOPS (SSD) volumes, or when you stripe multiple volumes together in a RAID configuration.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-ec2-config.html
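
A minimal sketch of launching an EBS-optimized instance so attached volumes get dedicated bandwidth; the region, AMI ID, and instance type are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Launch an instance with dedicated throughput to EBS (EbsOptimized=True),
# so General Purpose / Provisioned IOPS volumes are not starved for bandwidth.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m4.xlarge",          # assumed EBS-optimizable instance type
    EbsOptimized=True,
    MinCount=1,
    MaxCount=1,
)
```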

NEW QUESTION 6
Mike is appointed as Cloud Consultant in Netcrak Inc. Netcrak has the following VPCs set-up in the US East Region:
A VPC with CIDR block 10.10.0.0/16, with a subnet in that VPC with CIDR block 10.10.1.0/24
A VPC with CIDR block 10.40.0.0/16, with a subnet in that VPC with CIDR block 10.40.1.0/24
Netcrak Inc is trying to establish a network connection between two subnets, a subnet with CIDR block 10.10.1.0/24 and another subnet with CIDR block 10.40.1.0/24. Which one of the following solutions should Mike recommend to Netcrak Inc?

  • A. Create 2 Virtual Private Gateways and configure one with each VPC.
  • B. Create one EC2 instance in each subnet, assign Elastic IPs to both instances, and set up a Site-to-Site VPN connection between both EC2 instances.
  • C. Create a VPC Peering connection between both VPCs.
  • D. Create 2 Internet Gateways, and attach one to each VPC.

Answer: C

Explanation: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. EC2 instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single region.
AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does not rely on a separate piece of physical hardware.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html
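
A hedged boto3 sketch of the recommended peering setup between the two VPCs from the question; the VPC IDs and route table IDs are placeholders, and both VPCs are assumed to be in the same account and region.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # US East, per the scenario

# Request and accept a peering connection between the two VPCs.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",       # placeholder: VPC with 10.10.0.0/16
    PeerVpcId="vpc-bbbb2222",   # placeholder: VPC with 10.40.0.0/16
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side needs a route to the other subnet's CIDR via the peering connection.
ec2.create_route(RouteTableId="rtb-aaaa1111", DestinationCidrBlock="10.40.1.0/24",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-bbbb2222", DestinationCidrBlock="10.10.1.0/24",
                 VpcPeeringConnectionId=pcx_id)
```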

NEW QUESTION 7
You have been given a scope to set up an AWS Media Sharing Framework for a new start-up photo-sharing company similar to Flickr. The first thing that comes to mind is that it will obviously need a huge amount of persistent data storage for this framework. Which of the following storage options would be appropriate for persistent storage?

  • A. Amazon Glacier or Amazon S3
  • B. Amazon Glacier or AWS Import/Export
  • C. AWS Import/Export or Amazon CloudFront
  • D. Amazon EBS volumes or Amazon S3

Answer: D

Explanation: Persistent storage: If you need persistent virtual disk storage similar to a physical disk drive for files or other data that must persist longer than the lifetime of a single Amazon EC2 instance, Amazon EBS volumes or Amazon S3 are more appropriate.
Reference: http://media.amazonwebservices.com/AWS_Storage_Options.pdf
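
For illustration, a small boto3 sketch of provisioning the two persistent options mentioned: an EBS volume attached to an instance, and an object written to S3. The region, Availability Zone, instance ID, and bucket name are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3")

# Persistent block storage: an EBS volume that outlives any single instance.
vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(VolumeId=vol["VolumeId"],
                  InstanceId="i-0123456789abcdef0",   # placeholder instance
                  Device="/dev/sdf")

# Persistent object storage: media files written durably to S3.
s3.put_object(Bucket="example-media-bucket", Key="photos/cat.jpg",
              Body=b"...image bytes...")
```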

NEW QUESTION 8
You have a video transcoding application running on Amazon EC2. Each instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system. You have a large backlog of videos which need to be transcoded and would like to reduce this backlog by adding more instances. You will need these instances only until the backlog is reduced. Which type of Amazon EC2 instances should you use to reduce the backlog in the most cost-efficient way?

  • A. Reserved instances
  • B. Spot instances
  • C. Dedicated instances
  • D. On-demand instances

Answer: B

Explanation: Reference: http://aws.amazon.com/ec2/purchasing-options/spot-instances/
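
A hedged sketch of adding Spot capacity for the transcoding backlog; the region, AMI, instance type, and counts are placeholder assumptions. Because interrupted work simply returns to the queue, Spot interruptions are acceptable here.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch extra transcoding workers as one-time Spot instances; they are the
# cheapest option for interruptible, temporary capacity.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder worker AMI
    InstanceType="c5.large",           # assumed compute-oriented type
    MinCount=1,
    MaxCount=10,                       # burst capacity while the backlog drains
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
```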

NEW QUESTION 9
Please select the Amazon EC2 resource which can be tagged.

  • A. key pairs
  • B. Elastic IP addresses
  • C. placement groups
  • D. Amazon EBS snapshots

Answer: D
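
A minimal sketch of tagging an EBS snapshot, the taggable EC2 resource among the choices; the snapshot ID and tag values are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# EBS snapshots accept tags; key pairs, Elastic IP addresses, and placement
# groups were not taggable resources when this question was written.
ec2.create_tags(
    Resources=["snap-0123456789abcdef0"],              # placeholder snapshot ID
    Tags=[{"Key": "Environment", "Value": "Production"},
          {"Key": "CostCenter", "Value": "Media"}],
)
```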

NEW QUESTION 10
Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past two months, resulting in significant financial losses. Your CIO is strongly considering moving the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to help improve business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200GB in size and you have a 20Mbps Internet connection.
How would you do this while minimizing costs?

  • A. Create an EBS-backed private AMI which includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, Auto Scaling, and ELB resources to support deploying the application across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.
  • B. Deploy your application on EC2 instances within an Auto Scaling group across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.
  • C. Create an EBS-backed private AMI which includes a fresh install of your application. Set up a script in your data center to back up the local database every 1 hour and to encrypt and copy the resulting file to an S3 bucket using multi-part upload.
  • D. Install your application on a compute-optimized EC2 instance capable of supporting the application's average load. Synchronously replicate transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection.

Answer: A

Explanation: Overview of Creating Amazon EBS-Backed AMIs
First, launch an instance from an AMI that's similar to the AMI that you'd like to create. You can connect to your instance and customize it. When the instance is configured correctly, ensure data integrity by
stopping the instance before you create an AMI, then create the image. When you create an Amazon EBS-backed AMI, we automatically register it for you.
Amazon EC2 powers down the instance before creating the AMI to ensure that everything on the instance is stopped and in a consistent state during the creation process. If you're confident that your instance is in a consistent state appropriate for AMI creation, you can tell Amazon EC2 not to power down and reboot the instance. Some file systems, such as XFS, can freeze and unfreeze activity, making it safe to create the image without rebooting the instance.
During the AMI-creation process, Amazon EC2 creates snapshots of your instance's root volume and any other EBS volumes attached to your instance. If any volumes attached to the instance are encrypted, the new AMI only launches successfully on instances that support Amazon EBS encryption. For more information, see Amazon EBS Encryption.
Depending on the size of the volumes, it can take several minutes for the AMI-creation process to complete (sometimes up to 24 hours).You may find it more efficient to create snapshots of your volumes prior to creating your AMI. This way, only small, incremental snapshots need to be created when the AMI is created, and the process completes more quickly (the total time for snapshot creation remains the same). For more information, see Creating an Amazon EBS Snapshot.
After the process completes, you have a new AMI and snapshot created from the root volume of the instance. When you launch an instance using the new AMI, we create a new EBS volume for its root volume using the snapshot. Both the AMI and the snapshot incur charges to your account until you delete them. For more information, see Deregistering Your AMI.
If you add instance-store volumes or EBS volumes to your instance in addition to the root device volume, the block device mapping for the new AMI contains information for these volumes, and the block device mappings for instances that you launch from the new AMI automatically contain information for these volumes. The instance-store volumes specified in the block device mapping for the new instance are new and don't contain any data from the instance store volumes of the instance you used to create the AMI. The data on EBS volumes persists. For more information, see Block Device Mapping.
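
A short boto3 sketch of the AMI-creation step described above; the region, source instance ID, and image name are placeholders. NoReboot is left at its default (false) so EC2 stops the instance for a consistent image.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an EBS-backed AMI from a configured instance; EC2 powers the instance
# down first (NoReboot defaults to False) so the backing snapshots are consistent.
resp = ec2.create_image(
    InstanceId="i-0123456789abcdef0",        # placeholder source instance
    Name="webapp-golden-image-2019-01",      # placeholder image name
    Description="Fresh install of the application for the DR CloudFormation stack",
)
image_id = resp["ImageId"]

# Wait until the AMI (and its backing snapshots) are available before use.
ec2.get_waiter("image_available").wait(ImageIds=[image_id])
print(image_id)
```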

NEW QUESTION 11
After deploying a new website for a client on AWS, the client asks if you can set it up so that, if the site fails, traffic is automatically redirected to a backup website that he has stored on a dedicated server elsewhere. You are wondering whether Amazon Route 53 can do this. Which statement below is correct with regard to Amazon Route 53?

  • A. Amazon Route 53 can't help detect an outage. You need to use another service.
  • B. Amazon Route 53 can help detect an outage of your website and redirect your end users to alternate locations.
  • C. Amazon Route 53 can help detect an outage of your website but can't redirect your end users to alternate locations.
  • D. Amazon Route 53 can't help detect an outage of your website, but can redirect your end users to alternate locations.

Answer: B

Explanation: With DNS Failover, Amazon Route 53 can help detect an outage of your website and redirect your end users to alternate locations where your application is operating properly.
Reference:
http://aws.amazon.com/about-aws/whats-new/2013/02/11/announcing-dns-failover-for-route-53/
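
As a hedged sketch of DNS failover, the following creates a health check for the primary site and a primary/secondary pair of failover records, with the secondary pointing at the client's dedicated server elsewhere. The domain, hosted zone ID, and IP addresses are placeholder assumptions.

```python
import boto3

r53 = boto3.client("route53")

# Health check that monitors the primary website.
hc = r53.create_health_check(
    CallerReference="primary-site-check-001",            # must be unique per request
    HealthCheckConfig={
        "Type": "HTTP",
        "FullyQualifiedDomainName": "www.example.com",   # placeholder domain
        "Port": 80,
        "ResourcePath": "/",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
health_check_id = hc["HealthCheck"]["Id"]

# Primary and secondary failover records: Route 53 serves the secondary record
# (the backup server) only when the primary health check fails.
r53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",                # placeholder hosted zone
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com.", "Type": "A", "TTL": 60,
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "HealthCheckId": health_check_id,
            "ResourceRecords": [{"Value": "203.0.113.10"}]}},    # placeholder IP
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com.", "Type": "A", "TTL": 60,
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "ResourceRecords": [{"Value": "198.51.100.20"}]}},   # backup server IP
    ]},
)
```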

NEW QUESTION 12
Just when you thought you knew every possible storage option on AWS, you hear someone mention Reduced Redundancy Storage (RRS) within Amazon S3. What is the ideal scenario in which to use Reduced Redundancy Storage (RRS)?

  • A. Huge volumes of data
  • B. Sensitive data
  • C. Non-critical or reproducible data
  • D. Critical data

Answer: C

Explanation: Reduced Redundancy Storage (RRS) is a new storage option within Amazon S3 that enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage. RRS provides a lower cost, less durable, highly available storage option that is designed to sustain the loss of data in a single facility.
RRS is ideal for non-critical or reproducible data.
For example, RRS is a cost-effective solution for sharing media content that is durably stored elsewhere. RRS also makes sense if you are storing thumbnails and other resized images that can be easily reproduced from an original image.
Reference: https://aws.amazon.com/s3/faqs/
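
A one-call sketch of storing a reproducible object (for example, a thumbnail) under Reduced Redundancy; the bucket and key are placeholders. Note that this mirrors the question's era, since AWS no longer recommends RRS for new workloads.

```python
import boto3

s3 = boto3.client("s3")

# Store a reproducible thumbnail at a lower redundancy level to save cost;
# the original image is assumed to be stored durably elsewhere.
s3.put_object(
    Bucket="example-media-bucket",        # placeholder bucket
    Key="thumbnails/cat-small.jpg",       # placeholder key
    Body=b"...resized image bytes...",
    StorageClass="REDUCED_REDUNDANCY",
)
```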

NEW QUESTION 13
Content and Media Server is the latest requirement that you need to meet for a client.
The client has been very specific about his requirements, such as low latency, high availability, durability, and access control. Potentially there will be millions of views on this server, and because of "spiky" usage patterns, operations teams will need to provision static hardware, network, and management resources to support the maximum expected need. The customer base will initially be low but is expected to grow and become more geographically distributed.
Which of the following would be a good solution for content distribution?

  • A. Amazon S3 as both the origin server and for caching
  • B. AWS Storage Gateway as the origin server and Amazon EC2 for caching
  • C. AWS CloudFront as both the origin server and for caching
  • D. Amazon S3 as the origin server and Amazon CloudFront for caching

Answer: D

Explanation: As your customer base grows and becomes more geographically distributed, using a high-performance edge cache like Amazon CloudFront can provide substantial improvements in latency, fault tolerance, and cost.
By using Amazon S3 as the origin server for the Amazon CloudFront distribution, you gain the advantages of fast in-network data transfer rates, simple publishing/caching workflow, and a unified security framework.
Amazon S3 and Amazon CloudFront can be configured by a web service, the AWS Management Console, or a host of third-party management tools.
Reference: http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_media_02.pdf
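
A trimmed boto3 sketch of the recommended setup: an S3 bucket as the origin behind a CloudFront distribution. The bucket name and caller reference are placeholders, and real deployments usually add an origin access identity and cache policies on top of this minimal configuration.

```python
import boto3

cf = boto3.client("cloudfront")

# Minimal distribution with an S3 bucket as the origin; CloudFront's edge
# locations then cache the media close to the geographically spread audience.
cf.create_distribution(DistributionConfig={
    "CallerReference": "media-distribution-001",               # must be unique
    "Comment": "Media sharing content distribution",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "s3-media-origin",
        "DomainName": "example-media-bucket.s3.amazonaws.com",  # placeholder bucket
        "S3OriginConfig": {"OriginAccessIdentity": ""},
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-media-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "MinTTL": 0,
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
    },
})
```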

NEW QUESTION 14
You need to create an Amazon Machine Image (AMI) for a customer for an application which does not appear to be part of the standard AWS AMI templates that you can see in the AWS console. What are the alternative possibilities for creating an AMI on AWS?

  • A. You can purchase AMIs from a third party but cannot create your own AMI.
  • B. You can purchase AMIs from a third party or can create your own AMI.
  • C. Only AWS can create AMIs and you need to wait till one becomes available.
  • D. Only AWS can create AMIs and you need to request them to create one for you.

Answer: B

Explanation: You can purchase AMIs from a third party, including AMIs that come with service contracts from organizations such as Red Hat. You can also create an AMI and sell it to other Amazon EC2 users. After you create an AMI, you can keep it private so that only you can use it, or you can share it with a specified list of AWS accounts. You can also make your custom AMI public so that the community can use it. Building a safe, secure, usable AMI for public consumption is a fairly straightforward process, if you follow a few simple guidelines.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
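
A minimal sketch of the sharing step the explanation mentions, using the AMI's launchPermission attribute; the region, image ID, and account ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Share a custom AMI with one specific AWS account (keeps it otherwise private).
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",                          # placeholder custom AMI
    LaunchPermission={"Add": [{"UserId": "111122223333"}]},   # placeholder account
)

# Making the AMI public instead would use:
# ec2.modify_image_attribute(ImageId="ami-0123456789abcdef0",
#                            LaunchPermission={"Add": [{"Group": "all"}]})
```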

NEW QUESTION 15
You are designing a multi-platform web application for AWS. The application will run on EC2 instances and will be accessed from PCs, tablets, and smartphones. Supported platforms are Windows, macOS, iOS, and Android. Separate sticky session and SSL certificate setups are required for the different platform types. Which of the following describes the most cost-effective and performance-efficient architecture setup?

  • A. Set up a hybrid architecture to handle session state and SSL certificates on-premises, with separate EC2 instance groups running the web applications for the different platform types in a VPC.
  • B. Set up one ELB for all platforms to distribute load among multiple instances under it. Each EC2 instance implements all functionality for a particular platform.
  • C. Set up two ELBs. The first ELB handles SSL certificates for all platforms and the second ELB handles session stickiness for all platforms. For each ELB, run separate EC2 instance groups to handle the web application for each platform.
  • D. Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs.

Answer: D
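
To make option D concrete, a hedged sketch using the Classic ELB API of the era: one load balancer per platform, each terminating SSL with its own certificate and enforcing its own cookie stickiness. The names, subnets, and certificate ARNs are placeholder assumptions.

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # Classic ELB API

for platform in ["windows", "macos", "ios", "android"]:
    name = f"webapp-{platform}"
    # One ELB per platform, terminating SSL with that platform's certificate.
    elb.create_load_balancer(
        LoadBalancerName=name,
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholder subnets
        Listeners=[{
            "Protocol": "HTTPS", "LoadBalancerPort": 443,
            "InstanceProtocol": "HTTP", "InstancePort": 80,
            "SSLCertificateId": f"arn:aws:iam::111122223333:server-certificate/{platform}",  # placeholder
        }],
    )
    # Per-platform session stickiness, handled at the ELB.
    elb.create_lb_cookie_stickiness_policy(
        LoadBalancerName=name, PolicyName=f"{platform}-sticky",
        CookieExpirationPeriod=3600,
    )
    elb.set_load_balancer_policies_of_listener(
        LoadBalancerName=name, LoadBalancerPort=443,
        PolicyNames=[f"{platform}-sticky"],
    )
```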

NEW QUESTION 16
Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders after 12 months.
Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure.
Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders.
How can you implement the order fulfillment process while making sure that the emails are delivered reliably?

  • A. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
  • B. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.
  • C. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.
  • D. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.

Answer: C
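
The reliable-email half of answer C can be illustrated with a small SES sketch that a decider or activity worker would call once an order reaches a notable state; the region, sender, and recipient are placeholders, and the sending address is assumed to be SES-verified.

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")  # assumed SES region

def notify_customer(email, order_id, status):
    """Send an order-status email; called from the SWF decider/activity workers."""
    ses.send_email(
        Source="orders@example.com",                 # assumed verified sender
        Destination={"ToAddresses": [email]},
        Message={
            "Subject": {"Data": f"Order {order_id}: {status}"},
            "Body": {"Text": {"Data": f"Your order {order_id} is now: {status}."}},
        },
    )

notify_customer("customer@example.com", "ORD-1001", "payment failed")
```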

NEW QUESTION 17
Which of the following strategies can be used to control access to your Amazon EC2 instances?

  • A. DB security groups
  • B. IAM policies
  • C. None of these
  • D. EC2 security groups

Answer: D

Explanation: IAM policies allow you to specify what actions your IAM users are allowed to perform against your EC2 instances. However, when it comes to access control, security groups are what you need to define and control the way you want your instances to be accessed, including whether or not certain kinds of communication are allowed.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/UsingIAM.html
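
A short sketch of the access-control idea: create a security group that only allows SSH from an admin network, then launch an instance with it. The region, group name, CIDR, and AMI are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Security group that only permits SSH from the corporate admin network.
sg = ec2.create_security_group(
    GroupName="admin-ssh-only",
    Description="Allow SSH from the admin network only",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"], IpProtocol="tcp", FromPort=22, ToPort=22,
    CidrIp="203.0.113.0/24",            # placeholder admin CIDR
)

# Instances launched with this group accept port 22 traffic only from that range.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1, MaxCount=1,
    SecurityGroupIds=[sg["GroupId"]],
)
```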

NEW QUESTION 18
You decide that you need to create a number of Auto Scaling groups to try and save some money as you have noticed that at certain times most of your EC2 instances are not being used. By default, what is the maximum number of Auto Scaling groups that AWS will allow you to create?

  • A. 12
  • B. Unlimited
  • C. 20
  • D. 2

Answer: C

Explanation: Auto Scaling is an AWS service that allows you to increase or decrease the number of EC2 instances within your application's architecture. With Auto Scaling, you create collections of EC2 instances, called Auto Scaling groups. You can create these groups from scratch, or from existing EC2 instances that are already in production.
Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_autoscaling
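
A sketch that checks the account's current Auto Scaling limits and creates a small group; it assumes an existing launch configuration named "web-lc" and placeholder subnets, and the default group limit (20 at the time of this question) can be raised via a limit increase request.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# The current per-account limit on Auto Scaling groups is visible here.
limits = autoscaling.describe_account_limits()
print(limits["MaxNumberOfAutoScalingGroups"])

# Create a group that scales the fleet down when instances sit idle.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",          # assumed existing launch configuration
    MinSize=1, MaxSize=4, DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # placeholder subnets
)
```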

P.S. Easily pass AWS-Solution-Architect-Associate Exam with 672 Q&As Surepassexam Dumps & pdf Version, Welcome to Download the Newest Surepassexam AWS-Solution-Architect-Associate Dumps: https://www.surepassexam.com/AWS-Solution-Architect-Associate-exam-dumps.html (672 New Questions)