Amazon AWS-Certified-DevOps-Engineer-Professional Study Guides 2019

We provide our study material in two formats: a downloadable PDF and online practice tests, so you can pass the Amazon AWS-Certified-DevOps-Engineer-Professional exam quickly and easily. The PDF version can be read and printed, so you can practice as many times as you like. With the help of our product and material, you can easily pass the AWS-Certified-DevOps-Engineer-Professional exam.

Online Amazon AWS-Certified-DevOps-Engineer-Professional free dumps demo below:

You need the absolute highest possible network performance for a cluster computing application. You already selected homogeneous instance types supporting 10 gigabit enhanced networking, made sure that your workload was network bound, and put the instances in a placement group. What is the last optimization you can make?

  • A. Use 9001 MTU instead of 1500 for Jumbo Frames, to raise packet body to packet overhead ratios.
  • B. Segregate the instances into different peered VPCs while keeping them all in a placement group, so each one has its own Internet Gateway.
  • C. Bake an AMI for the instances and relaunch, so the instances are fresh in the placement group and do not have noisy neighbors.
  • D. Turn off SYN/ACK on your TCP stack or begin using UDP for higher throughput.

Answer: A

Explanation: For instances that are collocated inside a placement group, jumbo frames help to achieve the maximum network throughput possible, and they are recommended in this case. For more information, see Placement Groups.
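To make the "packet body to packet overhead" point concrete, here is a back-of-the-envelope sketch. The 40-byte header figure assumes plain IPv4 + TCP with no options; real overhead varies with the protocol stack.

```python
# Rough payload-efficiency comparison for standard vs. jumbo frames.
# Header size is illustrative: 20 bytes IPv4 + 20 bytes TCP (no options).
IP_TCP_HEADERS = 40

def payload_ratio(mtu: int) -> float:
    """Fraction of each IP packet that is application payload."""
    return (mtu - IP_TCP_HEADERS) / mtu

std = payload_ratio(1500)    # ~0.973
jumbo = payload_ratio(9001)  # ~0.996
```

The jumbo frame spends proportionally far less of each packet on headers, which is why it helps a network-bound cluster inside a placement group.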

You have a high security requirement for your AWS accounts. What is the most rapid and sophisticated setup you can use to react to AWS API calls to your account?

  • A. Subscription to AWS Config via an SNS Topic. Use a Lambda Function to perform in-flight analysis and reactivity to changes as they occur.
  • B. Global AWS CloudTrail setup delivering to S3 with an SNS subscription to the delivery notifications, pushing into a Lambda, which inserts records into an ELK stack for analysis.
  • C. Use a CloudWatch Rule ScheduleExpression to periodically analyze IAM credential logs. Push the deltas for events into an ELK stack and perform ad-hoc analysis there.
  • D. CloudWatch Events Rules which trigger based on all AWS API calls, submitting all events to an AWS Kinesis Stream for arbitrary downstream analysis.

Answer: D

Explanation: CloudWatch Events allows subscription to AWS API calls, and direction of these events into Kinesis Streams. This allows a unified, near real-time stream for all API calls, which can be analyzed with any tool(s) of your choosing downstream.

What is a circular dependency in AWS CloudFormation?

  • A. When a Template references an earlier version of itself.
  • B. When Nested Stacks depend on each other.
  • C. When Resources form a DependsOn loop.
  • D. When a Template references a region, which references the original Template.

Answer: C

Explanation: To resolve a dependency error, add a DependsOn attribute to resources that depend on other resources in your template. In some cases, you must explicitly declare dependencies so that AWS CloudFormation can create or delete resources in the correct order. For example, if you create an Elastic IP and a VPC with an Internet gateway in the same stack, the Elastic IP must depend on the Internet gateway attachment. For additional information, see the DependsOn Attribute.
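As an illustrative sketch of the Elastic IP example above (resource names are made up, and the template assumes a VPC resource named MyVPC is declared elsewhere in the same template):

```yaml
Resources:
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref MyVPC               # assumes a VPC resource named MyVPC
      InternetGatewayId: !Ref InternetGateway
  ElasticIP:
    Type: AWS::EC2::EIP
    DependsOn: GatewayAttachment      # explicit dependency avoids creation-order errors
    Properties:
      Domain: vpc
```

Without the DependsOn attribute, CloudFormation may try to create the EIP before the gateway is attached, which fails; a DependsOn cycle between resources is what produces the circular dependency error.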

Which is not a restriction on AWS EBS Snapshots?

  • A. Snapshots which are shared cannot be used as a basis for other snapshots.
  • B. You cannot share a snapshot containing an AWS Access Key ID or AWS Secret Access Key.
  • C. You cannot share encrypted snapshots.
  • D. Snapshot restorations are restricted to the region in which the snapshots are created.

Answer: A

Explanation: Snapshots shared with other users are usable in full by the recipient, including but not limited to the ability to base modified volumes and snapshots on them.

You are getting a lot of empty receive requests when using Amazon SQS. This is making a lot of unnecessary network load on your instances. What can you do to reduce this load?

  • A. Subscribe your queue to an SNS topic instead.
  • B. Use as long of a poll as possible, instead of short polls.
  • C. Alter your visibility timeout to be shorter.
  • D. Use <code>sqsd</code> on your EC2 instances.

Answer: B

Explanation: One benefit of long polling with Amazon SQS is the reduction of the number of empty responses, when there are no messages available to return, in reply to a ReceiveMessage request sent to an Amazon SQS queue. Long polling allows the Amazon SQS service to wait until a message is available in the queue before sending a response.
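A toy model makes the difference in request volume concrete. The scenario below is illustrative: a single message arrives 45 seconds into an otherwise empty window, and short polling is modeled as one ReceiveMessage call per second.

```python
def polls_until_message(arrival: int, wait: int) -> int:
    """Count ReceiveMessage calls until the message arriving at t=arrival is returned.

    wait=0 models short polling (one immediate-return call per second);
    wait>0 models long polling, where each call blocks up to `wait` seconds.
    """
    t, calls = 0, 0
    while True:
        calls += 1
        deadline = t + max(wait, 1)
        if t <= arrival < deadline:  # message arrives while this call is open
            return calls
        t = deadline

short = polls_until_message(arrival=45, wait=0)   # 46 calls, 45 of them empty
long_ = polls_until_message(arrival=45, wait=20)  # 3 calls, 2 of them empty
```

With a 20-second long poll (the maximum WaitTimeSeconds), the same message is delivered with a small fraction of the requests, which is exactly the network-load reduction the question is after.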

When thinking of AWS Elastic Beanstalk's model, which is true?

  • A. Applications have many deployments, deployments have many environments.
  • B. Environments have many applications, applications have many deployments.
  • C. Applications have many environments, environments have many deployments.
  • D. Deployments have many environments, environments have many applications.

Answer: C

Explanation: Applications group logical services. Environments belong to Applications, and typically represent different deployment levels (dev, stage, prod, and so forth). Deployments belong to environments, and are pushes of bundles of code for the environments to run.

You run accounting software in the AWS cloud. This software needs to be online continuously during the day every day of the week, and has a very static requirement for compute resources. You also have other, unrelated batch jobs that need to run once per day at any time of your choosing. How should you minimize cost?

  • A. Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
  • B. Purchase a Medium Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
  • C. Purchase a Light Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
  • D. Purchase a Full Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

Answer: A

Explanation: Because the instance will always be online during the day, in a predictable manner, and there is a sequence of batch jobs that can be performed at any time, we should run the batch jobs while the accounting software is off. By alternating the two workloads we achieve Heavy Utilization, so we should purchase the reservation at that level, as it represents the lowest cost. There is no such thing as a "Full" utilization level for EC2 Reserved Instance purchases.
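A rough arithmetic sketch of why filling the idle hours matters. All prices below are hypothetical and exist only to illustrate the comparison; they are not real AWS rates, and the old Heavy/Medium/Light RI model has since been replaced by Standard/Convertible RIs.

```python
# Hypothetical hourly rates for one instance class (illustrative only).
ON_DEMAND = 0.10
HEAVY_RI = 0.04   # billed for every hour of the term, running or not
LIGHT_RI = 0.08   # billed only for hours actually running

HOURS_PER_YEAR = 24 * 365
daytime_hours = 12 * 365   # accounting software
batch_hours = 4 * 365      # batch jobs, scheduled into the off hours

# Heavy Utilization bills all hours anyway, so the batch jobs run "free"
# in the hours the accounting software is off.
heavy_cost = HEAVY_RI * HOURS_PER_YEAR

# Alternative: Light RI for the day workload, on-demand for the batch jobs.
light_cost = LIGHT_RI * daytime_hours + ON_DEMAND * batch_hours
```

Under any pricing where the Heavy rate is well below on-demand, packing both workloads onto the always-billed reservation comes out cheapest.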

You run a 2000-engineer organization. You are about to begin using AWS at a large scale for the first time. You want to integrate with your existing identity management system running on Microsoft Active Directory, because your organization is a power-user of Active Directory. How should you manage your AWS identities in the most simple manner?

  • A. Use a large AWS Directory Service Simple AD.
  • B. Use a large AWS Directory Service AD Connector.
  • C. Use a Sync Domain running on AWS Directory Service.
  • D. Use an AWS Directory Sync Domain running on AWS Lambda.

Answer: B

Explanation: You must use AD Connector as a power-user of Microsoft Active Directory. Simple AD only works with a subset of AD functionality. Sync Domains do not exist; they are made up answers.
AD Connector is a directory gateway that allows you to proxy directory requests to your on-premises Microsoft Active Directory, without caching any information in the cloud. AD Connector comes in two sizes: small and large. A small AD Connector is designed for smaller organizations of up to 500 users. A large AD Connector is designed for larger organizations of up to 5,000 users.

You are designing a system which needs, at minimum, 8 m4.large instances operating to service traffic. When designing a system for high availability in the us-east-1 region, which has 6 Availability Zones, your company needs to be able to handle the loss of a full Availability Zone. How should you distribute the servers to save as much cost as possible, assuming all of the EC2 nodes are properly linked to an ELB? Your VPC account can utilize us-east-1's AZs a through f, inclusive.

  • A. 3 servers in each of AZs a through d, inclusive.
  • B. 8 servers in each of AZs a and b.
  • C. 2 servers in each of AZs a through e, inclusive.
  • D. 4 servers in each of AZs a through c, inclusive.

Answer: C

Explanation: You need to design for N+1 redundancy on Availability Zones. ZONE_COUNT = (REQUIRED_INSTANCES / INSTANCE_COUNT_PER_ZONE) + 1. To minimize cost, spread the instances across as many zones as possible. By using a through e, you are allocating 5 zones. With 2 instances per zone, you have 10 total instances. If a single zone fails, you have 4 zones left with 2 instances each, for a total of 8 instances. By spreading out as much as possible, you have increased cost by only 25% and significantly de-risked an Availability Zone failure.
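The N+1 formula above can be sketched directly; the function name is mine, not an AWS concept:

```python
import math

def zones_and_overhead(required: int, per_zone: int) -> tuple[int, float]:
    """Zones needed for N+1 AZ redundancy, plus cost overhead vs. the bare minimum.

    One extra zone beyond the minimum keeps `required` instances running
    even after a full-AZ failure.
    """
    zones = math.ceil(required / per_zone) + 1
    total = zones * per_zone
    overhead = (total - required) / required
    return zones, overhead

# 8 required instances at 2 per zone -> 5 zones, 10 instances, 25% overhead
zones, overhead = zones_and_overhead(8, 2)
```

Smaller per-zone counts spread across more zones push the overhead down: the extra zone costs only `per_zone` instances, so the fewer instances per zone, the cheaper the redundancy.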

You need to run a very large batch data processing job one time per day. The source data exists entirely in S3, and the output of the processing job should also be written to S3 when finished. If you need to version control this processing job and all setup and teardown logic for the system, what approach should you use?

  • A. Model an AWS EMR job in AWS Elastic Beanstalk.
  • B. Model an AWS EMR job in AWS CloudFormation.
  • C. Model an AWS EMR job in AWS OpsWorks.
  • D. Model an AWS EMR job in AWS CLI Composer.

Answer: B

Explanation: To declaratively model build and destroy of a cluster, you need to use AWS CloudFormation. OpsWorks and Elastic Beanstalk cannot directly model EMR Clusters. The CLI is not declarative, and CLI Composer does not exist.
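A minimal sketch of what modeling an EMR cluster in a CloudFormation template might look like. Resource names, the release label, and instance counts are illustrative, and the template assumes the default EMR service roles already exist in the account:

```yaml
Resources:
  BatchCluster:
    Type: AWS::EMR::Cluster
    Properties:
      Name: daily-batch            # hypothetical cluster name
      ReleaseLabel: emr-5.29.0
      JobFlowRole: EMR_EC2_DefaultRole   # assumes default EMR roles exist
      ServiceRole: EMR_DefaultRole
      Instances:
        MasterInstanceGroup:
          InstanceCount: 1
          InstanceType: m4.large
        CoreInstanceGroup:
          InstanceCount: 2
          InstanceType: m4.large
```

Because the template is plain text, it can be version controlled alongside the job code, and creating or deleting the stack handles the setup and teardown logic the question asks about.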

Your team wants to begin practicing continuous delivery using CloudFormation, to enable automated builds and deploys of whole, versioned stacks or stack layers. You have a 3-tier, mission-critical system. Which of the following is NOT a best practice for using CloudFormation in a continuous delivery environment?

  • A. Use the AWS CloudFormation <code>ValidateTemplate</code> call before publishing changes to AWS.
  • B. Model your stack in one template, so you can leverage CloudFormation's state management and dependency resolution to propagate all changes.
  • C. Use CloudFormation to create brand new infrastructure for all stateless resources on each push, and run integration tests on that set of infrastructure.
  • D. Parametrize the template and use <code>Mappings</code> to ensure your template works in multiple Regions.

Answer: B

Explanation: Putting all resources in one stack is a bad idea, since different tiers have different life cycles and frequencies of change. For additional guidance about organizing your stacks, you can use two common frameworks: a multi-layered architecture and service-oriented architecture (SOA).

Which of the following tools does not directly support AWS OpsWorks, for monitoring your stacks?

  • A. AWS Config
  • B. Amazon CloudWatch Metrics
  • C. AWS CloudTrail
  • D. Amazon CloudWatch Logs

Answer: A

Explanation: You can monitor your stacks in the following ways: AWS OpsWorks uses Amazon CloudWatch to provide thirteen custom metrics with detailed monitoring for each instance in the stack; AWS OpsWorks integrates with AWS CloudTrail to log every AWS OpsWorks API call and store the data in an Amazon S3 bucket; you can use Amazon CloudWatch Logs to monitor your stack's system, application, and custom logs.

You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?

  • A. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the <code>PutBucket</code> event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.
  • B. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad-hoc MapReduce analysis and write new queries when needed.
  • C. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the <code>PutBucket</code> event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.
  • D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster.

Answer: D

Explanation: The Elasticsearch and Kibana 4 combination is called the ELK Stack, and is designed specifically for real-time, ad-hoc log analysis and aggregation. All other answers introduce extra delay or require pre-defined queries.
Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics.

You are building a game high score table in DynamoDB. You will store each user's highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game. What's the best DynamoDB key structure?

  • A. HighestScore as the hash / only key.
  • B. GameID as the hash key, HighestScore as the range key.
  • C. GameID as the hash / only key.
  • D. GameID as the range / only key.

Answer: B

Explanation: Since access and storage for games is uniform, and you need to have ordering within each game for the scores (to access the highest value), your hash (partition) key should be the GameID, and there should be a range key for HighestScore.
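An in-memory sketch of why this key design works. A dict of sorted lists stands in for the table's partitions; a real DynamoDB table would answer the same question with a Query using ScanIndexForward=False and Limit=1 (function and item names here are made up):

```python
from collections import defaultdict
import bisect

# GameID (hash key) -> sorted HighestScore values (range key)
table = defaultdict(list)

def put_score(game_id: str, score: int) -> None:
    """Insert a score, keeping each partition's range key sorted."""
    bisect.insort(table[game_id], score)

def top_score(game_id: str) -> int:
    """Equivalent of Query(GameID, ScanIndexForward=False, Limit=1)."""
    return table[game_id][-1]

put_score("chess", 1200)
put_score("chess", 2400)
put_score("go", 900)
top_score("chess")  # 2400
```

Partitioning by GameID spreads the uniform per-game load evenly, and keeping the score as the range key means the highest value sits at one end of each partition, so the lookup never scans.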

When thinking of DynamoDB, what are true of Local Secondary Key properties?

  • A. Either the partition key or the sort key can be different from the table, but not both.
  • B. Only the sort key can be different from the table.
  • C. The partition key and sort key can be different from the table.
  • D. Only the partition key can be different from the table.

Answer: B

Explanation: Local secondary index — an index that has the same partition key as the table, but a different sort key. A local secondary index is "local" in the sense that every partition of the index is scoped to a table partition that has the same partition key value. (By contrast, a global secondary index can have both a different partition key and a different sort key.)

Which deployment method, when using AWS Auto Scaling Groups and Auto Scaling Launch Configurations, enables the shortest time to live for individual servers?

  • A. Pre-baking AMIs with all code and configuration on deploys.
  • B. Using a Dockerfile bootstrap on instance launch.
  • C. Using UserData bootstrapping scripts.
  • D. Using AWS EC2 Run Commands to dynamically SSH into the fleet.

Answer: A

Explanation: Note that the bootstrapping process can be slower if you have a complex application or multiple applications to install. Managing a fleet of applications with several build tools and dependencies can be a challenging task during rollouts. Furthermore, your deployment service should be designed to do faster rollouts to take advantage of Auto Scaling. Prebaking is a process of embedding a significant portion of your application artifacts within your base AMI. During the deployment process you can customize application installations by using EC2 instance artifacts such as instance tags, instance metadata, and Auto Scaling groups.

You need to deploy a new application version to production. Because the deployment is high-risk, you need to roll the new version out to users over a number of hours, to make sure everything is working correctly. You need to be able to control the proportion of users seeing the new version of the application down to the percentage point.
You use ELB and EC2 with Auto Scaling Groups and custom AMIs with your code pre-installed assigned to Launch Configurations. There are no database-level changes during your deployment. You have been told you cannot spend too much money, so you must not increase the number of EC2 instances much at all during the deployment, but you also need to be able to switch back to the original version of code quickly if something goes wrong. What is the best way to meet these requirements?

  • A. Create a second ELB, Auto Scaling Launch Configuration, and Auto Scaling Group using the Launch Configuration. Create AMIs with all code pre-installed. Assign the new AMI to the second Auto Scaling Launch Configuration. Use Route53 Weighted Round Robin Records to adjust the proportion of traffic hitting the two ELBs.
  • B. Use the Blue-Green deployment method to enable the fastest possible rollback if needed. Create a full second stack of instances and cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed.
  • C. Create AMIs with all code pre-installed. Assign the new AMI to the Auto Scaling Launch Configuration, to replace the old one. Gradually terminate instances running the old code (launched with the old Launch Configuration) and allow the new AMIs to boot to adjust the traffic balance to the new code. On rollback, reverse the process by doing the same thing, but changing the AMI on the Launch Config back to the original code.
  • D. Migrate to use AWS Elastic Beanstalk. Use the established and well-tested Rolling Deployment setting AWS provides on the new Application Environment, publishing a zip bundle of the new code and adjusting the wait period to spread the deployment over time. Re-deploy the old code bundle to rollback if needed.

Answer: A

Explanation: Only Weighted Round Robin DNS Records and reverse proxies allow such fine-grained tuning of traffic splits. The Blue-Green option does not meet the requirement that we mitigate costs and keep overall EC2 fleet size consistent, so we must select the 2 ELB and ASG option with WRR DNS tuning. This method is called A/B deployment and/or Canary deployment.
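A deterministic sketch of how weighted round robin achieves percentage-point control. Route53 actually routes probabilistically in proportion to the weights; the modulo walk below is just a stand-in that makes the proportions exact and testable, and the target names are made up:

```python
def route(request_id: int, weights: dict[str, int]) -> str:
    """Pick a target for a request by walking the cumulative weights."""
    total = sum(weights.values())
    slot = request_id % total
    for target, weight in weights.items():
        if slot < weight:
            return target
        slot -= weight
    raise ValueError("weights must be non-empty")

# 5% canary on the new ELB, 95% on the old one
weights = {"old-elb": 95, "new-elb": 5}
hits = [route(i, weights) for i in range(100)]
hits.count("new-elb")  # 5
```

Because the weights are integers out of 100, nudging the canary from 5 to 6 shifts exactly one percentage point of traffic, and setting the new ELB's weight to 0 is an immediate rollback.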
