Accurate AWS-Solution-Architect-Associate Dumps Questions 2019

Because all that matters here is passing the exam with AWS Solutions Architect Associate questions, and all that you need is a high score on the AWS Solutions Architect Associate certification, the only thing you need to do is download the AWS Solutions Architect Associate dumps for free now. We will not let you down, and our money-back guarantee backs that up.

Check AWS-Solution-Architect-Associate free dumps before getting the full version:

NEW QUESTION 1
Your company is in the process of developing a next-generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30 KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health-trend information back to the pet owners and veterinarians via a web portal. Management has tasked you with architecting the collection platform, ensuring the following requirements are met:
Provide the ability for real-time analytics of the inbound biometric data. Ensure processing of the biometric data is highly durable, elastic, and parallel. The results of the analytic processing should be persisted for data mining.
Which architecture outlined below will meet the initial requirements for the collection platform?

  • A. Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline, and save the results to a Redshift cluster.
  • B. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to a Redshift cluster using EMR.
  • C. Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis, and save the results to a Microsoft SQL Server RDS instance.
  • D. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis, and save the results to DynamoDB.

Answer: B
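
For illustration, a minimal boto3 sketch of the ingest side of option B, assuming a Kinesis stream named pet-biometrics already exists (the stream name, region, and payload fields are hypothetical):

    import json
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    def push_biometrics(collar_id, reading):
        # Records sharing a partition key land on the same shard, so
        # keying by collar ID keeps each collar's readings ordered.
        kinesis.put_record(
            StreamName="pet-biometrics",               # hypothetical stream
            Data=json.dumps(reading).encode("utf-8"),  # ~30 KB JSON payload
            PartitionKey=collar_id,
        )

Kinesis client applications (or EMR) would then consume the stream for real-time analytics and persist the results to Redshift for data mining.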

NEW QUESTION 2
You've been brought in as a solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a three-tier VPC. The configuration is as follows:
VPC: vpc-2f8bc447
IGW: igw-2d8bc445
NACL: acl-208bc448
Subnets and Route Tables:
Web servers: subnet-258bc44d
Application servers: subnet-248bc44c
Database servers: subnet-9189c6f9
Route Tables: rtb-218bc449, rtb-238bc44b
Associations:
subnet-258bc44d : rtb-218bc449
subnet-248bc44c : rtb-238bc44b
subnet-9189c6f9 : rtb-238bc44b
You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the internet; application and database servers cannot have direct access to the internet.
Which configuration below will allow you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet?

  • A. Create a bastion and NAT instance in subnet-258bc44d, and add a route from rtb-238bc44b to the NAT instance.
  • B. Add a route from rtb-238bc44b to igw-2d8bc445, and add a bastion and NAT instance within subnet-248bc44c.
  • C. Create a bastion and NAT instance in subnet-248bc44c, and add a route from rtb-238bc44b to subnet-258bc44d.
  • D. Create a bastion and NAT instance in subnet-258bc44d, add a route from rtb-238bc44b to igw-2d8bc445, and a new NACL that allows access between subnet-258bc44d and subnet-248bc44c.

Answer: A
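
A minimal boto3 sketch of the routing change in option A, assuming a NAT instance has already been launched in the public subnet (the NAT instance ID is hypothetical; the route table ID comes from the question):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # NAT instances (unlike NAT gateways) forward traffic on behalf of
    # other hosts, so the source/destination check must be disabled first.
    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",   # hypothetical NAT instance
        SourceDestCheck={"Value": False},
    )

    # Send the private subnets' internet-bound traffic through the NAT instance.
    ec2.create_route(
        RouteTableId="rtb-238bc44b",
        DestinationCidrBlock="0.0.0.0/0",
        InstanceId="i-0123456789abcdef0",
    )

The bastion in the public subnet then provides remote administration, while the NAT instance lets the private servers pull updates.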

NEW QUESTION 3
Multi-AZ deployment _ supported for Microsoft SQL Server DB Instances.

  • A. is not currently
  • B. is as of 2013
  • C. is planned to be in 2014
  • D. will never be

Answer: A

NEW QUESTION 4
Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and uses company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise; if required, you may need to pay for a consultant.
How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?

  • A. A video transcoding pipeline running on EC2, using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue; EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days; CloudFront to serve HLS transcoded videos from EC2.
  • B. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS; EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days; CloudFront to serve HLS transcoded videos from EC2.
  • C. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS; S3 to host videos, with Lifecycle Management to archive original files to Glacier after a few days; CloudFront to serve HLS transcoded videos from S3.
  • D. A video transcoding pipeline running on EC2, using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue; S3 to host videos, with Lifecycle Management to archive all files to Glacier after a few days; CloudFront to serve HLS transcoded videos from Glacier.

Answer: C
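
A minimal boto3 sketch of the transcoding step in option C, assuming a pipeline already exists; the pipeline ID, object keys, and preset ID are placeholders (Elastic Transcoder ships system presets for HLS at several bitrates):

    import boto3

    transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

    transcoder.create_job(
        PipelineId="1111111111111-abcde1",        # hypothetical pipeline ID
        Input={"Key": "originals/training-video.mp4"},
        Outputs=[{
            "Key": "hls/training-video",
            "PresetId": "1351620000001-200010",   # assumed HLS system preset
            "SegmentDuration": "10",              # seconds per HLS segment
        }],
    )

The pipeline writes the HLS output back to S3, where CloudFront serves it; an S3 lifecycle rule archives the originals to Glacier.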

NEW QUESTION 5
A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user.
Which two approaches can satisfy these objectives? (Choose 2 answers)

  • A. Develop an identity broker that authenticates against IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket.
  • B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.
  • C. Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.
  • D. The application authenticates against LDAP; the application then calls the AWS Identity and Access Management (IAM) Security Token Service to log in to IAM using the LDAP credentials; the application can use the IAM temporary credentials to access the appropriate S3 bucket.
  • E. The application authenticates against IAM Security Token Service using the LDAP credentials; the application uses those temporary AWS security credentials to access the appropriate S3 bucket.

Answer: BC
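
A minimal sketch of the identity-broker pattern in option C, run after a successful LDAP check; the bucket name and session duration are hypothetical:

    import json
    import boto3

    sts = boto3.client("sts")

    def broker_credentials(ldap_user):
        # Scope the federated credentials to the user's own S3 keyspace.
        policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::corp-app-bucket/{ldap_user}/*",
            }],
        }
        resp = sts.get_federation_token(
            Name=ldap_user[:32],          # federated user name, max 32 chars
            Policy=json.dumps(policy),
            DurationSeconds=3600,
        )
        return resp["Credentials"]

Option B works the same way, except the application itself calls sts.assume_role with the IAM role name retrieved from LDAP.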

NEW QUESTION 6
MySQL installations default to port _.

  • A. A.3306B.443
  • B. 80
  • C. 1158

Answer: A

NEW QUESTION 7
You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget, you would like to implement a way for administrators in the Master account to have access to stop, delete, and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal.

  • A. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account.
  • B. Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts.
  • C. Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.
  • D. Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts.

Answer: C

Explanation: Bucket Owner Granting Cross-Account Permission to Objects It Does Not Own
In this example scenario, you own a bucket and you have enabled other AWS accounts to upload objects. That is, your bucket can have objects that other AWS accounts own.
Now, suppose as the bucket owner, you need to grant cross-account permission on objects, regardless of who the owner is, to a user in another account. For example, that user could be a billing application that needs to access object metadata. There are two core issues:
The bucket owner has no permissions on objects created by other AWS accounts. So for the bucket owner to grant permissions on objects it does not own, the object owner (the AWS account that created the objects) must first grant permission to the bucket owner. The bucket owner can then delegate those permissions.
The bucket owner account can delegate permissions to users in its own account, but it cannot delegate permissions to other AWS accounts, because cross-account delegation is not supported.
In this scenario, the bucket owner can create an AWS Identity and Access Management (IAM) role with permission to access objects, and grant another AWS account permission to assume the role temporarily, enabling it to access objects in the bucket.
Background: Cross-Account Permissions and Using IAM Roles
IAM roles enable several scenarios to delegate access to your resources, and cross-account access is
one of the key scenarios. In this example, the bucket owner, Account A, uses an IAM role to temporarily delegate object access cross-account to users in another AWS account, Account C. Each IAM role you create has two policies attached to it:
A trust policy identifying another AWS account that can assume the role.
An access policy defining what permissions (for example, s3:GetObject) are allowed when someone assumes the role. For a list of permissions you can specify in a policy, see Specifying Permissions in a Policy.
The AWS account identified in the trust policy then grants its user permission to assume the role. The user can then do the following to access objects:
Assume the role and, in response, get temporary security credentials. Using the temporary security credentials, access the objects in the bucket.
For more information about IAM roles, go to Roles (Delegation and Federation) in IAM User Guide. The following is a summary of the walkthrough steps:
Account A administrator user attaches a bucket policy granting Account B conditional permission to upload objects.
Account A administrator creates an IAM role, establishing trust with Account C, so users in that account can access Account A. The access policy attached to the role limits what users in Account C can do when they access Account A.
Account B administrator uploads an object to the bucket owned by Account A, granting full-control permission to the bucket owner.
Account C administrator creates a user and attaches a user policy that allows the user to assume the role.
The user in Account C first assumes the role, which returns temporary security credentials to the user. Using those temporary credentials, the user then accesses objects in the bucket.
For this example, you need three accounts. The following table shows how we refer to these accounts and the administrator users in these accounts. Per IAM guidelines (see About Using an Administrator User to Create Resources and Grant Permissions), we do not use the account root credentials in this walkthrough. Instead, you create an administrator user in each account and use those credentials in creating resources and granting them permissions.
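
A minimal boto3 sketch of what an administrator in the Master account would run once the cross-account roles exist; the role ARN and instance ID are hypothetical:

    import boto3

    sts = boto3.client("sts")

    # Assume the admin role exposed by the Dev account's trust policy.
    assumed = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/MasterAdminAccess",
        RoleSessionName="master-admin-session",
    )
    creds = assumed["Credentials"]

    # Use the temporary credentials to act on Dev-account resources.
    dev_ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    dev_ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])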

NEW QUESTION 8
Amazon Elastic Load Balancing is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all Availability Zones within a region. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits.
Which of the following is not an advantage of ELB over an on-premises load balancer?

  • A. ELB uses a four-tier, key-based architecture for encryption.
  • B. ELB offers clients a single point of contact, and can also serve as the first line of defense against attacks on your network.
  • C. ELB takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer.
  • D. ELB supports end-to-end traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections.

Answer: A

Explanation: Amazon Elastic Load Balancing is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all Availability Zones within a region. Elastic Load Balancing has all the advantages of an on-premises load balancer, plus several security benefits:
Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer
Offers clients a single point of contact, and can also serve as the first line of defense against attacks on your network
When used in an Amazon VPC, supports creation and management of security groups associated with your Elastic Load Balancing to provide additional networking and security options
Supports end-to-end traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connections. When TLS is used, the TLS server certificate used to terminate client connections can be managed centrally on the load balancer, rather than on every individual instance.
Reference: http://d0.awsstatic.com/whitepapers/Security/AWS%20Security%20Whitepaper.pdf
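
A minimal boto3 sketch of centralized TLS termination on a classic ELB; the load balancer name, zones, and certificate ARN are hypothetical:

    import boto3

    elb = boto3.client("elb", region_name="us-east-1")

    # HTTPS terminates at the load balancer; instances receive plain HTTP,
    # so the certificate is managed in one place rather than per instance.
    elb.create_load_balancer(
        LoadBalancerName="web-elb",
        Listeners=[{
            "Protocol": "HTTPS",
            "LoadBalancerPort": 443,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
            "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/web-cert",
        }],
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )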

NEW QUESTION 9
You are using Amazon SES as an email solution but are unsure of its limitations. Which statement below is correct regarding those limitations?

  • A. New Amazon SES users who have received production access can send up to 1,000 emails per 24-hour period, at a maximum rate of 10 emails per second.
  • B. Every Amazon SES sender has the same set of sending limits
  • C. Sending limits are based on messages rather than on recipients
  • D. Every Amazon SES sender has a unique set of sending limits

Answer: D

Explanation: Amazon Simple Email Service (Amazon SES) is a highly scalable and cost-effective email-sending
service for businesses and developers. Amazon SES eliminates the complexity and expense of building an in-house email solution or licensing, installing, and operating a third-party email service for this type of email communication.
Every Amazon SES sender has a unique set of sending limits, which are calculated by Amazon SES on an ongoing basis:
Sending quota — the maximum number of emails you can send in a 24-hour period. Maximum send rate — the maximum number of emails you can send per second.
New Amazon SES users who have received production access can send up to 10,000 emails per 24-hour period, at a maximum rate of 5 emails per second. Amazon SES automatically adjusts these limits upward, as long as you send high-quality email. If your existing quota is not adequate for your needs and the system has not automatically increased your quota, you can submit an SES Sending Quota Increase case at any time.
Sending limits are based on recipients rather than on messages. You can check your sending limits at any time by using the Amazon SES console.
Note that if your email is detected to be of poor or questionable quality (e.g., high complaint rates, high bounce rates, spam, or abusive content), Amazon SES might temporarily or permanently reduce your permitted send volume, or take other action as AWS deems appropriate.
Reference: https://aws.amazon.com/ses/faqs/
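
Because each sender's limits are unique and adjusted over time, it is safer to read them at runtime than to hard-code them. A minimal boto3 sketch:

    import boto3

    ses = boto3.client("ses", region_name="us-east-1")

    quota = ses.get_send_quota()
    print("Max emails per 24h:", quota["Max24HourSend"])
    print("Max send rate (per second):", quota["MaxSendRate"])
    print("Sent in last 24h:", quota["SentLast24Hours"])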

NEW QUESTION 10
Does Amazon DynamoDB support both increment and decrement atomic operations?

  • A. Only increment, since decrement operations are inherently impossible with DynamoDB's data model.
  • B. No, neither increment nor decrement operations.
  • C. Yes, both increment and decrement operations.
  • D. Only decrement, since increment operations are inherently impossible with DynamoDB's data model.

Answer: C

Explanation: Amazon DynamoDB supports increment and decrement atomic operations.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html
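
A minimal boto3 sketch of an atomic counter, assuming a hypothetical table named PageCounters with partition key page:

    import boto3

    table = boto3.resource("dynamodb").Table("PageCounters")

    # ADD is applied atomically on the server; a negative delta decrements.
    table.update_item(
        Key={"page": "home"},
        UpdateExpression="ADD hits :delta",
        ExpressionAttributeValues={":delta": 1},  # use -1 to decrement
    )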

NEW QUESTION 11
By default, EBS volumes that are created and attached to an instance at launch are deleted when that instance is terminated. You can modify this behavior by changing the value of the flag _ to false when you launch the instance.

  • A. Delete On Termination
  • B. Remove On Deletion
  • C. Remove On Termination
  • D. Terminate On Deletion

Answer: A
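
A minimal boto3 sketch of flipping the flag at launch; the AMI ID and device name are hypothetical:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Keep the root EBS volume around after the instance is terminated.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        BlockDeviceMappings=[{
            "DeviceName": "/dev/xvda",
            "Ebs": {"DeleteOnTermination": False},
        }],
    )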

NEW QUESTION 12
True or False: Automated backups are enabled by default for a new DB Instance.

  • A. TRUE
  • B. FALSE

Answer: A

NEW QUESTION 13
Groups can't _.

  • A. be nested more than 3 levels
  • B. be nested at all
  • C. be nested more than 4 levels
  • D. be nested more than 2 levels

Answer: B

NEW QUESTION 14
While creating an Amazon RDS DB, your first task is to set up a DB _ that controls what IP addresses or EC2 instances have access to your DB Instance.

  • A. Security Pool
  • B. Secure Zone
  • C. Security Token Pool
  • D. Security Group

Answer: D
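
A minimal boto3 sketch of a DB security group, assuming the legacy (EC2-Classic era) model the question describes; inside a VPC you would attach an ordinary VPC security group instead. The group name and CIDR are hypothetical:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_security_group(
        DBSecurityGroupName="app-db-access",
        DBSecurityGroupDescription="Allow the app tier to reach the DB",
    )

    # Open the DB to the application tier's address range only.
    rds.authorize_db_security_group_ingress(
        DBSecurityGroupName="app-db-access",
        CIDRIP="10.0.1.0/24",
    )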

NEW QUESTION 15
Does DynamoDB support in-place atomic updates?

  • A. Yes
  • B. No
  • C. It does support in-place non-atomic updates
  • D. It is not defined

Answer: A

Explanation: DynamoDB supports in-place atomic updates.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.AtomicCounters

NEW QUESTION 16
Amazon RDS creates an SSL certificate and installs the certificate on the DB Instance when Amazon RDS provisions the instance. These certificates are signed by a certificate authority. The _ is stored at https://rds.amazonaws.com/doc/rds-ssl-ca-cert.pem.

  • A. private key
  • B. foreign key
  • C. public key
  • D. protected key

Answer: C

Explanation: The .pem file published at that URL is the certificate authority's public certificate (public key); private keys are never distributed publicly.

NEW QUESTION 17
When should I choose Provisioned IOPS over Standard RDS storage?

  • A. If you have batch-oriented workloads
  • B. If you use production online transaction processing (OLTP) workloads.
  • C. If you have workloads that are not sensitive to consistent performance

Answer: B

Explanation: Provisioned IOPS storage is designed for I/O-intensive workloads, such as production OLTP databases, that require fast and consistent I/O performance; batch-oriented workloads that are not sensitive to consistent performance are better served by standard storage.
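
A minimal boto3 sketch of provisioning an RDS instance with Provisioned IOPS; the identifier, class, and credentials are hypothetical:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # io1 storage pins a consistent I/O rate, suited to OLTP workloads.
    rds.create_db_instance(
        DBInstanceIdentifier="orders-db",
        DBInstanceClass="db.m5.large",
        Engine="mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        AllocatedStorage=100,       # GiB
        StorageType="io1",
        Iops=3000,                  # must respect the allowed IOPS:GiB ratio
    )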

NEW QUESTION 18
A user is observing the EC2 CPU utilization metric on CloudWatch. The user has observed some interesting patterns while filtering over a one-week period for a particular hour. The user wants to zoom in on that data point for a more granular view. How can the user do this easily with CloudWatch?

  • A. The user can zoom a particular period by selecting that period with the mouse and then releasing the mouse
  • B. The user can zoom a particular period by specifying the aggregation data for that period
  • C. The user can zoom a particular period by double clicking on that period with the mouse
  • D. The user can zoom a particular period by specifying the period in the Time Range

Answer: A

Explanation: Amazon CloudWatch provides the functionality to graph the metric data generated either by AWS services or by custom metrics, to make it easier for the user to analyze. The AWS CloudWatch console provides the option to change the granularity of a graph and zoom in to see data over a shorter time period. To zoom, the user has to click in the graph details pane, drag on the graph area for selection, and then release the mouse button.
Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/zoom_in_on_graph.html
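
The console zoom has an API equivalent: re-query the same metric over a narrower window with a smaller period. A minimal boto3 sketch (the instance ID is hypothetical):

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=end - timedelta(hours=1),
        EndTime=end,
        Period=60,                  # one-minute datapoints for a closer look
        Statistics=["Average"],
    )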

Recommended! Get the full AWS-Solution-Architect-Associate dumps in VCE and PDF from 2passeasy. Welcome to download: https://www.2passeasy.com/dumps/AWS-Solution-Architect-Associate/ (New 672 Q&As Version)