AWS-Certified-DevOps-Engineer-Professional Premium Bundle

AWS Certified DevOps Engineer Professional Certification Exam

4.5 (1725 ratings)
Practice Tests: 0 Questions
Print version: 0 PDF
Last update: November 21, 2024

Amazon AWS-Certified-DevOps-Engineer-Professional Free Practice Questions

Q1. What does it mean if you have zero IOPS and a non-empty I/O queue for all EBS volumes attached to a running EC2 instance?

A. The I/O queue is buffer flushing.

B. Your EBS disk head(s) is/are seeking magnetic stripes.

C. The EBS volume is unavailable.

D. You need to re-mount the EBS volume in the OS. 

Answer: C

Explanation:

This is the definition of Unavailable from the EC2 and EBS SLA.

"Unavailable" and "Unavailability" mean... For Amazon EBS, when all of your attached volumes perform zero read/write IO, with pending IO in the queue.

Reference: https://aws.amazon.com/ec2/sla/

Q2. For AWS CloudFormation, which stack state refuses UpdateStack calls?

A. <code>UPDATE_ROLLBACK_FAILED</code>

B. <code>UPDATE_ROLLBACK_COMPLETE</code>

C. <code>UPDATE_COMPLETE</code>

D. <code>CREATE_COMPLETE</code> 

Answer: A

Explanation:

When a stack is in the UPDATE_ROLLBACK_FAILED state, you can continue rolling it back to return it to a working state (UPDATE_ROLLBACK_COMPLETE). You cannot update a stack that is in the UPDATE_ROLLBACK_FAILED state. However, if you continue rolling it back, you can return the stack to its original settings and then try the update again.

Reference:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html

Q3. There are a number of ways to purchase compute capacity on AWS. Which orders the price per compute or memory unit from LOW to HIGH (cheapest to most expensive), on average?

A. On-Demand
B. Spot
C. Reserved

A. A, B, C

B. C, B, A

C. B, C, A

D. A, C, B

Answer: C

Explanation:

Spot Instances are usually many times cheaper than On-Demand prices. Reserved Instances, depending on their term and utilization, can yield approximately 33% to 66% cost savings. On-Demand prices are the baseline and are the most expensive way to purchase EC2 compute time.

Reference: https://d0.awsstatic.com/whitepapers/Cost_Optimization_with_AWS.pdf
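The ordering can be sanity-checked with a quick back-of-envelope calculation. The hourly prices below are assumptions for illustration, not real AWS rates; only the relative ordering matters:

```python
# Illustrative, assumed hourly prices for a single instance class, chosen
# to reflect the relative savings the whitepaper describes.
on_demand = 0.10                # baseline On-Demand price per hour (assumed)
reserved = on_demand * 0.60     # ~40% savings, within the ~33-66% range cited
spot = on_demand * 0.20         # Spot is often several times cheaper

ordered = sorted([("Spot", spot), ("Reserved", reserved), ("On-Demand", on_demand)],
                 key=lambda pair: pair[1])
names = [name for name, _ in ordered]
print(names)  # cheapest to most expensive: Spot, Reserved, On-Demand
```

That cheapest-to-most-expensive order, B (Spot), C (Reserved), A (On-Demand), is answer choice C.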

Q4. Why are more frequent snapshots of EBS Volumes faster?

A. Blocks in EBS Volumes are allocated lazily, since while logically separated from other EBS Volumes, Volumes often share the same physical hardware. Snapshotting the first time forces full block range allocation, so the second snapshot doesn't need to perform the allocation phase and is faster.

B. The snapshots are incremental so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot.

C. AWS provisions more disk throughput for burst capacity during snapshots if the drive has been pre-warmed by snapshotting and reading all blocks.

D. The drive is pre-warmed, so block access is more rapid for volumes when every block on the device has already been read at least one time.

Answer: B

Explanation:

After writing data to an EBS volume, you can periodically create a snapshot of the volume to use as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html
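The incremental behavior can be sketched as a toy model; this is an illustration of the idea, not the actual EBS implementation:

```python
def incremental_snapshot(volume_blocks, previous_snapshot):
    """Toy model of incremental EBS snapshots: save only the blocks whose
    content differs from the previous snapshot."""
    return {idx: data for idx, data in volume_blocks.items()
            if previous_snapshot.get(idx) != data}

volume = {0: b"boot", 1: b"data-v1", 2: b"logs"}

# First snapshot: no prior snapshot exists, so every block must be saved.
snap1 = incremental_snapshot(volume, {})

# Second snapshot: only block 1 changed, so only that block is saved,
# which is why subsequent snapshots complete faster.
volume[1] = b"data-v2"
snap2 = incremental_snapshot(volume, snap1)
```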

Q5. You need to create an audit log of all changes to customer banking data. You use DynamoDB to store this customer banking data. It's important not to lose any information due to server failures. What is an elegant way to accomplish this?

A. Use a DynamoDB StreamSpecification and stream all changes to AWS Lambda. Log the changes to AWS CloudWatch Logs, removing sensitive information before logging.

B. Before writing to DynamoDB, do a pre-write acknowledgment to disk on the application server, removing sensitive information before logging. Periodically rotate these log files into S3.

C. Use a DynamoDB StreamSpecification and periodically flush to an EC2 instance store, removing sensitive information before putting the objects. Periodically flush these batches to S3.

D. Before writing to DynamoDB, do a pre-write acknowledgment to disk on the application server, removing sensitive information before logging. Periodically pipe these files into CloudWatch Logs.

Answer: A

Explanation:

All of the suggested periodic options are sensitive to server failure during or between periodic flushes. Streaming to Lambda and then logging to CloudWatch Logs makes the system resilient to instance and Availability Zone failures.

Reference: http://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html
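A minimal sketch of the Lambda side of answer A might look like the following. The sensitive attribute names are hypothetical, and the event shape is a hand-built stand-in for a real DynamoDB stream event:

```python
import json

SENSITIVE_FIELDS = {"account_number", "ssn"}   # hypothetical attribute names

def redact(image):
    """Return a copy of a DynamoDB stream image with sensitive attributes masked."""
    return {k: ("***" if k in SENSITIVE_FIELDS else v) for k, v in image.items()}

def handler(event, context):
    """Sketch of a Lambda handler attached to the table's stream; print()
    output from Lambda lands in CloudWatch Logs automatically."""
    entries = []
    for record in event.get("Records", []):
        new_image = record["dynamodb"].get("NewImage", {})
        entry = {"event": record["eventName"], "item": redact(new_image)}
        print(json.dumps(entry))
        entries.append(entry)
    return entries

# Local invocation with a minimal fake stream event:
fake_event = {"Records": [{"eventName": "MODIFY", "dynamodb": {"NewImage": {
    "customer_id": {"S": "42"}, "account_number": {"S": "111-222"}}}}]}
logged = handler(fake_event, None)
```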

Q6. Which of these techniques enables the fastest possible rollback times in the event of a failed deployment?

A. Rolling; Immutable

B. Rolling; Mutable

C. Canary or A/B

D. Blue-Green 

Answer: D

Explanation:

AWS specifically recommends Blue-Green for super-fast, zero-downtime deploys, and thus rollbacks, which simply redeploy the old code.

You use various strategies to migrate the traffic from your current application stack (blue) to a new version of the application (green). This is a popular technique for deploying applications with zero downtime.

Reference: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf

Q7. You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?

A. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the <code>PutBucket</code> event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.

B. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad-hoc MapReduce analysis and write new queries when needed.

C. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the <code>PutBucket</code> event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.

D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster.

Answer: D

Explanation:

The Elasticsearch and Kibana 4 combination (the core of the ELK stack) is designed specifically for real-time, ad-hoc log analysis and aggregation. All other answers introduce extra delay or require pre-defined queries.

Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analytics.

Reference: https://aws.amazon.com/elasticsearch-service/

Q8. Which major database needs a BYO license?

A. PostgreSQL

B. MariaDB

C. MySQL

D. Oracle 

Answer: D

Explanation:

Oracle is not open source and requires a bring-your-own-license model.

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html

Q9. What method should I use to author automation if I want to wait for a CloudFormation stack to finish in a script?

A. Event subscription using SQS.

B. Event subscription using SNS.

C. Poll using <code>ListStacks</code> / <code>list-stacks</code>.

D. Poll using <code>GetStackStatus</code> / <code>get-stack-status</code>. 

Answer: C

Explanation:

Event-driven systems are good for IFTTT logic, but only polling will make a script wait for completion. ListStacks / list-stacks is a real method; GetStackStatus / get-stack-status is not.

Reference: http://docs.aws.amazon.com/cli/latest/reference/cloudformation/list-stacks.html
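The polling approach can be sketched as a small loop. Here the status source is a stub; in a real script, `get_status` would wrap `aws cloudformation list-stacks` (or the equivalent boto3 call) and extract the StackStatus for the stack of interest:

```python
import time

TERMINAL_STATES = {"CREATE_COMPLETE", "CREATE_FAILED", "UPDATE_COMPLETE",
                   "ROLLBACK_COMPLETE", "UPDATE_ROLLBACK_COMPLETE",
                   "DELETE_COMPLETE"}

def wait_for_stack(get_status, poll_seconds=0, max_polls=120):
    """Poll get_status() until the stack reaches a terminal state."""
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_seconds)      # back off between polls
    raise TimeoutError("stack never reached a terminal state")

# Stubbed status sequence standing in for the real API calls:
statuses = iter(["UPDATE_IN_PROGRESS", "UPDATE_IN_PROGRESS", "UPDATE_COMPLETE"])
final = wait_for_stack(lambda: next(statuses))
```

The loop blocks the script until completion, which is exactly what event subscriptions alone cannot do.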

Q10. When thinking of AWS Elastic Beanstalk's model, which is true?

A. Applications have many deployments, deployments have many environments.

B. Environments have many applications, applications have many deployments.

C. Applications have many environments, environments have many deployments.

D. Deployments have many environments, environments have many applications. 

Answer: C

Explanation:

Applications group logical services. Environments belong to Applications and typically represent different deployment levels (dev, stage, prod, and so forth). Deployments belong to Environments and are pushes of bundles of code for the Environments to run.

Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html

Q11. You run accounting software in the AWS cloud. This software needs to be online continuously during the day every day of the week, and has a very static requirement for compute resources. You also have other, unrelated batch jobs that need to run once per day at any time of your choosing. How should you minimize cost?

A. Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

B. Purchase a Medium Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

C. Purchase a Light Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

D. Purchase a Full Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

Answer: A

Explanation:

Because the instance will always be online during the day in a predictable manner, and there is a sequence of batch jobs to perform at any time, we should run the batch jobs when the accounting software is off. By alternating the two workloads we achieve Heavy Utilization, so we should purchase the reservation as such, as this represents the lowest cost. There is no such thing as a "Full" utilization level for Reserved Instance purchases on EC2.

Reference:       https://d0.awsstatic.com/whitepapers/Cost_Optimization_with_AWS.pdf
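A quick utilization estimate shows why Heavy Utilization fits. The daily hours below are assumptions for illustration, not figures from the question:

```python
# Back-of-envelope utilization math (hours are illustrative assumptions).
accounting_hours = 12       # accounting software online during the day
batch_hours = 4             # batch jobs run while the accounting software is off

utilization = (accounting_hours + batch_hours) / 24
print(f"{utilization:.0%}")   # well above what Light/Medium tiers target
```

With the batch jobs scheduled into the off-hours, the single instance stays busy most of the day, so the Heavy Utilization reservation yields the lowest effective hourly rate.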

Q12. You are building a deployment system on AWS. You will deploy new code by bootstrapping instances in a private subnet in a VPC at runtime using UserData scripts pointing to an S3 zip file object, where your code is stored. An ELB in a public subnet has network interfaces and connectivity to the instances. Requests from users of the system are routed to the ELB via a Route53 A Record Alias. You do not use any VPC endpoints. Which is a risk of using this approach?

A. Route53 Alias records do not always update dynamically with ELB network changes after deploys.

B. If the NAT routing for the private subnet fails, deployments fail.

C. Kernel changes to the base AMI may render the code inoperable.

D. The instances cannot be in a private subnet if the ELB is in a public one. 

Answer: B

Explanation:

Since you are not using VPC endpoints, outbound requests for the code sitting in S3 are routed through the NAT for the VPC's private subnets. If this networking fails, runtime bootstrapping through code download will fail due to network unavailability and lack of access to the Internet, and thus to Amazon S3.

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html

Q13. Fill in the blanks: _______ helps us track AWS API calls and transitions, _______ helps us understand what resources we have now, and _______ allows auditing of credentials and logins.

A. AWS Config, CloudTrail, IAM Credential Reports

B. CloudTrail, IAM Credential Reports, AWS Config

C. CloudTrail, AWS Config, IAM Credential Reports

D. AWS Config, IAM Credential Reports, CloudTrail

Answer: C

Explanation:

You can use AWS CloudTrail to get a history of AWS API calls and related events for your account. This includes calls made by using the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services.

Reference:        http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html

Q14. You need to process long-running jobs once and only once. How might you do this?

A. Use an SNS queue and set the visibility timeout to long enough for jobs to process.

B. Use an SQS queue and set the reprocessing timeout to long enough for jobs to process.

C. Use an SQS queue and set the visibility timeout to long enough for jobs to process.

D. Use an SNS queue and set the reprocessing timeout to long enough for jobs to process.

Answer: C

Explanation:

The visibility timeout defines how long after a successful receive request SQS hides the message from other components; setting it longer than the job's processing time prevents duplicate processing.

Reference: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/MessageLifecycle.html
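The visibility-timeout semantics can be modeled with a minimal in-memory queue. This is an illustration of the concept only; real code would call SQS via boto3 with a suitably long VisibilityTimeout:

```python
class ToyQueue:
    """Minimal in-memory model of the SQS visibility timeout (illustrative)."""
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}                     # id -> (body, invisible_until)

    def send(self, msg_id, body):
        self.messages[msg_id] = (body, 0.0)    # visible immediately

    def receive(self, now):
        for msg_id, (body, invisible_until) in self.messages.items():
            if now >= invisible_until:
                # Hide the message from other consumers until the timeout expires.
                self.messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)

q = ToyQueue(visibility_timeout=300)           # longer than the job needs
q.send("m1", "long-running job")
first = q.receive(now=0)                       # one worker takes the message
second = q.receive(now=10)                     # other workers see nothing meanwhile
q.delete("m1")                                 # job done: delete so it runs only once
```

If the worker crashed instead of deleting, the message would reappear after the timeout, so the timeout must outlast the job to avoid a second worker picking it up mid-run.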

Q15. Your company wants to understand where cost is coming from in the company's production AWS account. There are a number of applications and services running at any given time. Without expending too much initial development time, how best can you give the business a good understanding of which applications cost the most per month to operate?

A. Create an automation script which periodically creates AWS Support tickets requesting detailed intra-month information about your bill.

B. Use custom CloudWatch Metrics in your system, and put a metric data point whenever cost is incurred.

C. Use AWS Cost Allocation Tagging for all resources which support it. Use the Cost Explorer to analyze costs throughout the month.

D. Use the AWS Price API and constantly running resource inventory scripts to calculate total price based on multiplication of consumed resources over time.

Answer: C

Explanation:

Cost Allocation Tagging is a built-in feature of AWS, and when coupled with the Cost Explorer, provides a simple and robust way to track expenses.

You can also use tags to filter views in Cost Explorer. Note that before you can filter views by tags in Cost Explorer, you must have applied tags to your resources and activated them, as described in the following sections. For more information about Cost Explorer, see Analyzing Your Costs with Cost Explorer.

Reference: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
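Once cost-allocation tags are activated, the per-application breakdown can also be pulled programmatically. The sketch below only builds the request parameters for Cost Explorer's GetCostAndUsage call; the tag key `Application` is an assumption, and a real script would pass the dict to boto3's `client("ce").get_cost_and_usage(**params)`:

```python
def cost_by_tag_request(start, end, tag_key):
    """Build GetCostAndUsage parameters grouping monthly unblended
    cost by a cost-allocation tag."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "TAG", "Key": tag_key}],
    }

# "Application" is an assumed tag key; use whatever cost-allocation
# tags your resources actually carry.
params = cost_by_tag_request("2024-10-01", "2024-11-01", "Application")
```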

START AWS-Certified-DevOps-Engineer-Professional EXAM