Amazon AWS-Certified-DevOps-Engineer-Professional Study Guides 2019


Amazon AWS-Certified-DevOps-Engineer-Professional Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1
There are a number of ways to purchase compute capacity on AWS. Which orders the price per compute or memory unit from LOW to HIGH (cheapest to most expensive), on average?
(A) On-Demand (B) Spot (C) Reserved

  • A. A, B, C
  • B. C, B, A
  • C. B, C, A
  • D. A, C, B

Answer: C

Explanation: Spot instances are usually many, many times cheaper than on-demand prices. Reserved instances, depending on their term and utilization, can yield approximately 33% to 66% cost savings. On-Demand prices are the baseline price and are the most expensive way to purchase EC2 compute time. Reference: https://d0.awsstatic.com/whitepapers/Cost_Optimization_with_AWS.pdf

NEW QUESTION 2
What is required to achieve 10 gigabit network throughput on EC2? You already selected cluster-compute, 10 gigabit instances with enhanced networking, and your workload is already network-bound, but you are not seeing 10 gigabit speeds.

  • A. Enable biplex networking on your servers, so packets are non-blocking in both directions and there's no switching overhead.
  • B. Ensure the instances are in different VPCs so you don't saturate the Internet Gateway on any one VPC.
  • C. Select PIOPS for your drives and mount several, so you can provision sufficient disk throughput.
  • D. Use a placement group for your instances so the instances are physically near each other in the same Availability Zone.

Answer: D

Explanation: You are not guaranteed 10 gigabit performance, except within a placement group.
A placement group is a logical grouping of instances within a single Availability Zone. Using placement groups enables applications to participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
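For illustration, the following minimal boto3 sketch creates a cluster placement group and launches instances into it; the group name, AMI ID, and instance type are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a cluster placement group so instances land physically close together
    ec2.create_placement_group(GroupName="hpc-group", Strategy="cluster")

    # Launch enhanced-networking instances into the placement group
    ec2.run_instances(
        ImageId="ami-12345678",          # hypothetical AMI
        InstanceType="c4.8xlarge",       # enhanced-networking instance type
        MinCount=2,
        MaxCount=2,
        Placement={"GroupName": "hpc-group"},
    )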

NEW QUESTION 3
Your company needs to automate 3 layers of a large cloud deployment. You want to be able to track this deployment's evolution as it changes over time, and carefully control any alterations. What is a good way to automate a stack to meet these requirements?

  • A. Use OpsWorks Stacks with three layers to model the layering in your stack.
  • B. Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your cloud.
  • C. Use AWS Config to declare a configuration set that AWS should roll out to your cloud.
  • D. Use Elastic Beanstalk Linked Applications, passing the important DNS entries between layers using the metadata interface.

Answer: B

Explanation: Only CloudFormation allows source controlled, declarative templates as the basis for stack automation. Nested Stacks help achieve clean separation of layers while simultaneously providing a method to control all layers at once when needed.
Reference:
https://blogs.aws.amazon.com/application-management/post/TxlT9JYOOS8AB9I/Use-Nested-Stacks-to-Create-Reusable-Templates-and-Support-Role-Specialization
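A minimal sketch of the nested-stack idea, assuming the three child templates have already been uploaded to an S3 bucket (the bucket name, file names, and stack name here are hypothetical): the parent template composes them as AWS::CloudFormation::Stack resources and is launched with boto3.

    import json
    import boto3

    # Parent template that nests three child stacks (data, compute, networking)
    parent_template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            layer: {
                "Type": "AWS::CloudFormation::Stack",
                "Properties": {
                    "TemplateURL": "https://s3.amazonaws.com/my-templates/%s.yaml" % layer
                },
            }
            for layer in ("DataLayer", "ComputeLayer", "NetworkingLayer")
        },
    }

    cfn = boto3.client("cloudformation")
    cfn.create_stack(
        StackName="three-layer-stack",
        TemplateBody=json.dumps(parent_template),
    )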

NEW QUESTION 4
What is web identity federation?

  • A. Use of an identity provider like Google or Facebook to become an AWS IAM User.
  • B. Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials.
  • C. Use of AWS IAM User tokens to log in as a Google or Facebook user.
  • D. Use of AWS STS Tokens to log in as a Google or Facebook user.

Answer: B

Explanation: Users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html
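The token exchange described above can be sketched with boto3's STS client; the role ARN and the provider token below are placeholders.

    import boto3

    sts = boto3.client("sts")

    # Exchange an OIDC / Facebook / Google token for temporary AWS credentials
    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/WebAppRole",   # hypothetical role
        RoleSessionName="web-user-session",
        WebIdentityToken="<token returned by the identity provider>",
    )
    creds = resp["Credentials"]   # AccessKeyId, SecretAccessKey, SessionToken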

NEW QUESTION 5
You were just hired as a DevOps Engineer for a startup. Your startup uses AWS for 100% of their infrastructure. They currently have no automation at all for deployment, and they have had many failures while trying to deploy to production. The company has told you deployment process risk mitigation is the most important thing now, and you have a lot of budget for tools and AWS resources.
Their stack:
2-tier API
Data stored in DynamoDB or S3, depending on type
Compute layer is EC2 in Auto Scaling Groups
They use Route53 for DNS pointing to an ELB
An ELB balances load across the EC2 instances
The scaling group properly varies between 4 and 12 EC2 servers.
Which of the following approaches, given this company's stack and their priorities, best meets the company's needs?

  • A. Model the stack in AWS Elastic Beanstalk as a single Application with multiple Environments. Use Elastic Beanstalk's Rolling Deploy option to progressively roll out application code changes when promoting across environments.
  • B. Model the stack in 3 CloudFormation templates: data layer, compute layer, and networking layer. Write stack deployment and integration testing automation following Blue-Green methodologies.
  • C. Model the stack in AWS OpsWorks as a single Stack, with 1 compute layer and its associated ELB. Use Chef and App Deployments to automate Rolling Deployment.
  • D. Model the stack in 1 CloudFormation template, to ensure consistency and dependency graph resolution. Write deployment and integration testing automation following Rolling Deployment methodologies.

Answer: B

Explanation: AWS recommends Blue-Green for zero-downtime deploys. Since you use DynamoDB, and neither AWS OpsWorks nor AWS Elastic Beanstalk directly supports DynamoDB, the option selecting CloudFormation and Blue-Green is correct.
You use various strategies to migrate the traffic from your current application stack (blue) to a new version of the application (green). This is a popular technique for deploying applications with zero downtime. Deployment services like AWS Elastic Beanstalk, AWS CloudFormation, or AWS OpsWorks are particularly useful, as they provide a simple way to clone your running application stack. You can set up a new version of your application (green) by simply cloning the current version of the application (blue).
Reference: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
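One hedged sketch of the Blue-Green cutover step: once the green CloudFormation stack passes its integration tests, point the DNS record at the green ELB. The hosted zone ID, record name, and ELB DNS name below are hypothetical.

    import boto3

    route53 = boto3.client("route53")

    # Flip api.example.com from the blue ELB to the green ELB
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com.",
                    "Type": "CNAME",
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "green-elb-1234.us-east-1.elb.amazonaws.com"}
                    ],
                },
            }]
        },
    )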

NEW QUESTION 6
You need to process long-running jobs once and only once. How might you do this?

  • A. Use an SNS queue and set the visibility timeout to long enough for jobs to process.
  • B. Use an SQS queue and set the reprocessing timeout to long enough for jobs to process.
  • C. Use an SQS queue and set the visibility timeout to long enough for jobs to process.
  • D. Use an SNS queue and set the reprocessing timeout to long enough for jobs to process.

Answer: C

Explanation: The visibility timeout defines how long after a successful receive request SQS hides the message from other consumers. Setting it longer than the job's processing time prevents another worker from receiving and processing the same message a second time.
Reference: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/MessageLifecycle.html
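A minimal worker-loop sketch illustrating the visibility timeout; the queue URL is hypothetical and process_job stands in for the real long-running work.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # hypothetical

    def process_job(body):
        print("processing", body)   # stand-in for the real long-running work

    # Hide each received message for up to 1 hour while the job runs
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                               VisibilityTimeout=3600)

    for msg in resp.get("Messages", []):
        process_job(msg["Body"])
        # Delete only after success, so a failed job becomes visible again
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])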

NEW QUESTION 7
How does the Amazon RDS Multi-Availability-Zone model work?

  • A. A second, standby database is deployed and maintained in a different Availability Zone from the master, using synchronous replication.
  • B. A second, standby database is deployed and maintained in a different Availability Zone from the master, using asynchronous replication.
  • C. A second, standby database is deployed and maintained in a different region from the master, using asynchronous replication.
  • D. A second, standby database is deployed and maintained in a different region from the master, using synchronous replication.

Answer: A

Explanation: In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
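For illustration, a Multi-AZ instance can be requested at creation time with a single flag; the identifiers, sizing, and password below are placeholders.

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="app-db",
        DBInstanceClass="db.m4.large",
        Engine="mysql",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        MultiAZ=True,   # RDS provisions a synchronous standby in another AZ
    )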

NEW QUESTION 8
You need to perform ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first?

  • A. AWS Elasticsearch Service
  • B. AWS RedShift
  • C. AWS EMR
  • D. AWS DynamoDB

Answer: A

Explanation: Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics.
Reference:
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/what-is-amazon-elasticsearch-service.html
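A minimal sketch of standing up a domain for log search with boto3; the domain name, version, and sizing are assumptions for illustration only.

    import boto3

    es = boto3.client("es")

    es.create_elasticsearch_domain(
        DomainName="app-logs",
        ElasticsearchVersion="5.5",
        ElasticsearchClusterConfig={
            "InstanceType": "m4.large.elasticsearch",
            "InstanceCount": 2,
        },
        EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 20},
    )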

NEW QUESTION 9
You need to scale an RDS deployment. You are operating at 10% writes and 90% reads, based on your logging. How best can you scale this in a simple way?

  • A. Create a second master RDS instance and peer the RDS groups.
  • B. Cache all the database responses on the read side with CloudFront.
  • C. Create read replicas for RDS since the load is mostly reads.
  • D. Create a Multi-AZ RDS install and route read traffic to the standby.

Answer: C

Explanation: The high-availability feature is not a scaling solution for read-only scenarios; you cannot use a standby replica to serve read traffic. To service read-only traffic, you should use a Read Replica. For more information, see Working with PostgreSQL, MySQL, and MariaDB Read Replicas.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
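Creating a Read Replica from the existing master is a one-call sketch; the instance identifiers are hypothetical.

    import boto3

    rds = boto3.client("rds")

    # Offload the ~90% read traffic to one or more replicas
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-db-read-1",
        SourceDBInstanceIdentifier="app-db",
    )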

NEW QUESTION 10
To monitor API calls against our AWS account by different users and entities, we can use _______ to create a history of calls in bulk for later review, and use _______ for reacting to AWS API calls in real-time.

  • A. AWS Config; AWS Inspector
  • B. AWS CloudTrail; AWS Config
  • C. AWS CloudTrail; CloudWatch Events
  • D. AWS Config; AWS Lambda

Answer: C

Explanation: CloudTrail is a batch API call collection service, while CloudWatch Events enables real-time monitoring of calls through the Rules object interface.
Reference: https://aws.amazon.com/whitepapers/security-at-scale-governance-in-aws/
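A sketch of the real-time half: a CloudWatch Events rule that matches API calls recorded via CloudTrail and forwards them to a target. The rule name, event source, and Lambda ARN are hypothetical.

    import json
    import boto3

    events = boto3.client("events")

    # React whenever someone calls an IAM API
    events.put_rule(
        Name="watch-iam-changes",
        EventPattern=json.dumps({
            "detail-type": ["AWS API Call via CloudTrail"],
            "detail": {"eventSource": ["iam.amazonaws.com"]},
        }),
    )

    events.put_targets(
        Rule="watch-iam-changes",
        Targets=[{"Id": "notify",
                  "Arn": "arn:aws:lambda:us-east-1:123456789012:function:alert"}],
    )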

NEW QUESTION 11
For AWS Auto Scaling, what is the first transition state a new instance enters after leaving steady state when scaling out due to increased load?

  • A. EnteringStandby
  • B. Pending
  • C. Terminating:Wait
  • D. Detaching

Answer: B

Explanation: When a scale out event occurs, the Auto Scaling group launches the required number of EC2 instances, using its assigned launch configuration. These instances start in the Pending state. If you add a lifecycle hook to your Auto Scaling group, you can perform a custom action here. For more information, see Lifecycle Hooks.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html
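The custom action mentioned above is configured with a lifecycle hook; a hedged boto3 sketch follows, with the group and hook names as placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Pause new instances in Pending:Wait so bootstrap work can run first
    autoscaling.put_lifecycle_hook(
        LifecycleHookName="bootstrap-hook",
        AutoScalingGroupName="web-asg",
        LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
        HeartbeatTimeout=300,
        DefaultResult="CONTINUE",
    )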

NEW QUESTION 12
When thinking of AWS Elastic Beanstalk, the 'Swap Environment URLs' feature most directly aids in what?

  • A. Immutable Rolling Deployments
  • B. Mutable Rolling Deployments
  • C. Canary Deployments
  • D. Blue-Green Deployments

Answer: D

Explanation: Simply upload the new version of your application and let your deployment service (AWS Elastic Beanstalk, AWS CloudFormation, or AWS OpsWorks) deploy a new version (green). To cut over to the new version, you simply replace the ELB URLs in your DNS records. Elastic Beanstalk has a Swap Environment URLs feature to facilitate a simpler cutover process.
Reference: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
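The cutover itself is a single API call; a sketch assuming hypothetical environment names "blue-env" and "green-env".

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Swap the environment CNAMEs so traffic hits the green environment
    eb.swap_environment_cnames(
        SourceEnvironmentName="blue-env",
        DestinationEnvironmentName="green-env",
    )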

NEW QUESTION 13
You are designing a service that aggregates clickstream data in batch and delivers reports to subscribers via email only once per week. Data is extremely spiky, geographically distributed, high-scale, and unpredictable. How should you design this system?

  • A. Use a large RedShift cluster to perform the analysis, and a fleet of Lambdas to perform record inserts into the RedShift table. Lambda will scale rapidly enough for the traffic spikes.
  • B. Use a CloudFront distribution with access log delivery to S3. Clicks should be recorded as querystring GETs to the distribution. Reports are built and sent by periodically running EMR jobs over the access logs in S3.
  • C. Use API Gateway invoking Lambdas which PutRecords into Kinesis, and EMR running Spark performing GetRecords on Kinesis to scale with spikes. Spark on EMR outputs the analysis to S3, which is then sent out via email.
  • D. Use AWS Elasticsearch service and EC2 Auto Scaling groups. The Auto Scaling groups scale based on click throughput and stream into the Elasticsearch domain, which is also scalable. Use Kibana to generate reports periodically.

Answer: B

Explanation: Because you only need to batch analyze, anything using streaming is a waste of money. CloudFront is a Gigabit-Scale HTTP(S) global request distribution service, so it can handle scale, geo-spread, spikes, and unpredictability. The Access Logs will contain the GET data and work just fine for batch analysis and email using EMR.
Can I use Amazon CloudFront if I expect usage peaks higher than 10 Gbps or 15,000 RPS? Yes. Complete our request for higher limits here, and we will add more capacity to your account within two business days.
Reference: https://aws.amazon.com/Cloudfront/faqs/
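A sketch of turning on the access-log delivery that the periodic EMR jobs would consume; the distribution ID and log bucket are hypothetical.

    import boto3

    cloudfront = boto3.client("cloudfront")

    dist_id = "E1EXAMPLE"                                  # hypothetical distribution
    resp = cloudfront.get_distribution_config(Id=dist_id)
    config, etag = resp["DistributionConfig"], resp["ETag"]

    # Deliver click (querystring GET) access logs to S3 for batch EMR analysis
    config["Logging"] = {
        "Enabled": True,
        "IncludeCookies": False,
        "Bucket": "click-logs.s3.amazonaws.com",
        "Prefix": "clicks/",
    }

    cloudfront.update_distribution(Id=dist_id, IfMatch=etag, DistributionConfig=config)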

NEW QUESTION 14
What is the scope of an EC2 EIP?

  • A. Placement Group
  • B. Availability Zone
  • C. Region
  • D. VPC

Answer: C

Explanation: An Elastic IP address is tied to a region and can be associated only with an instance in the same region. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/resources.html
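A quick sketch showing that an allocated address lives at the region level and is then attached to an instance in that same region; the instance ID is a placeholder.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # The EIP is allocated within us-east-1 and only usable there
    alloc = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        InstanceId="i-0123456789abcdef0",      # must be in the same region
        AllocationId=alloc["AllocationId"],
    )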

NEW QUESTION 15
You run operations for a company that processes digital wallet payments at a very high volume. One second of downtime, during which you drop payments or are otherwise unavailable, loses you on average USD 100. You balance the financials of the transaction system once per day. Which database setup is best suited to address this business risk?

  • A. A multi-AZ RDS deployment with synchronous replication to multiple standbys and read-replicas for fast failover and ACID properties.
  • B. A multi-region, multi-master, active-active RDS configuration using database-level ACID design principles with database trigger writes for replication.
  • C. A multi-region, multi-master, active-active DynamoDB configuration using application control-level BASE design principles with change-stream write queue buffers for replication.
  • D. A multi-AZ DynamoDB setup with changes streamed to S3 via AWS Kinesis, for highly durable storage and BASE properties.

Answer: C

Explanation: Only the multi-master, multi-region DynamoDB answer makes sense. Multi-AZ deployments do not provide sufficient availability when a business loses USD 360,000 per hour of unavailability. As RDS does not natively support multi-region, and ACID does not perform well (or at all) over large distances between regions, only the DynamoDB answer works.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.CrossRegionRepl.html
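A hedged sketch of the replication building block the answer relies on: a table with Streams enabled so changes can be shipped to another region. The table name, key, and throughput figures are assumptions.

    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")

    dynamodb.create_table(
        TableName="payments",
        AttributeDefinitions=[{"AttributeName": "payment_id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "payment_id", "KeyType": "HASH"}],
        ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 100},
        # The change stream feeds cross-region replication / write queue buffers
        StreamSpecification={"StreamEnabled": True,
                             "StreamViewType": "NEW_AND_OLD_IMAGES"},
    )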

NEW QUESTION 16
Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic to a sustained 400 requests per second, and failure rates increased dramatically. Your requests, during normal operation, last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and table primary keys are designed correctly. What is the most likely issue?

  • A. Your API Gateway deployment is throttling your requests.
  • B. Your AWS API Gateway Deployment is bottlenecking on request (de)serialization.
  • C. You did not request a limit increase on concurrent Lambda function executions.
  • D. You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.

Answer: C

Explanation: AWS API Gateway by default throttles at 500 requests per second steady-state, and 1000 requests per second at spike. Lambda, by default, throttles at 100 concurrent requests for safety. At 500 milliseconds (half of a second) per request, you can expect to support 200 requests per second at 100 concurrency. This is less than the 400 requests per second your system now requires. Make a limit increase request via the AWS Support Console.
AWS Lambda: Concurrent requests safety throttle per account -> 100
Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_lambda
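The arithmetic in the explanation can be checked directly:

    # Back-of-the-envelope throughput check for the default Lambda limit
    concurrency_limit = 100        # default safety throttle per account
    avg_duration_s = 0.5           # 500 ms per request

    max_rps = concurrency_limit / avg_duration_s
    print(max_rps)                 # 200.0 -> below the 400 req/s now required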

NEW QUESTION 17
Your application's Auto Scaling Group scales up too quickly, too much, and stays scaled when traffic decreases. What should you do to fix this?

  • A. Set a longer cooldown period on the Group, so the system stops overshooting the target capacity. The issue is that the scaling system doesn't allow enough time for new instances to begin servicing requests before measuring aggregate load again.
  • B. Calculate the bottleneck or constraint on the compute layer, then select that as the new metric, and set the metric thresholds to the bounding values that begin to affect response latency.
  • C. Raise the CloudWatch Alarms threshold associated with your autoscaling group, so the scaling takes more of an increase in demand before beginning.
  • D. Use larger instances instead of lots of smaller ones, so the Group stops scaling out so much and wasting resources at the OS level, since the OS uses a higher proportion of resources on smaller instances.

Answer: B

Explanation: Systems will always over-scale unless you choose the metric that runs out first and becomes constrained first. You also need to set the thresholds of the metric based on whether or not latency is affected by the change, to justify adding capacity instead of wasting money.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/policy_creating.html
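For illustration, a scale-out policy tied to the chosen constraint metric might look like the sketch below; the group name, custom metric, namespace, and thresholds are all assumptions.

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Scale out by one instance, with a cooldown so load is re-measured
    # only after new instances are actually serving traffic
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="scale-out-on-latency",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # Alarm on the metric that actually constrains the workload
    cloudwatch.put_metric_alarm(
        AlarmName="high-latency",
        Namespace="MyApp",                    # hypothetical custom namespace
        MetricName="ResponseLatency",
        Statistic="Average",
        Period=60,
        EvaluationPeriods=3,
        Threshold=0.25,                       # seconds; assumed bound
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )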

P.S. Certleader now are offering 100% pass ensure AWS-Certified-DevOps-Engineer-Professional dumps! All AWS-Certified-DevOps-Engineer-Professional exam questions have been updated with correct answers: https://www.certleader.com/AWS-Certified-DevOps-Engineer-Professional-dumps.html (102 New Questions)