AWS-SysOps Premium Bundle

AWS Certified SysOps Administrator Associate Certification Exam

Last update: November 21, 2024

Amazon AWS-SysOps Free Practice Questions

Q1. - (Topic 3) 

A user has created a VPC with CIDR 20.0.0.0/16. The user has created one subnet with CIDR 20.0.0.0/16 by mistake. The user is trying to create another subnet of CIDR 20.0.0.1/24. How can the user create the second subnet? 

A. There is no need to update the subnet as VPC automatically adjusts the CIDR of the first subnet based on the second subnet’s CIDR 

B. The user can modify the first subnet CIDR from the console 

C. It is not possible to create a second subnet as one subnet with the same CIDR as the VPC has been created 

D. The user can modify the first subnet CIDR with AWS CLI 

Answer: C

Explanation: 

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. A user can create subnets within a VPC and launch instances inside them. The user can create a subnet that is the same size as the VPC; however, he cannot then create any other subnet, since the CIDR of the second subnet would conflict with the first. The CIDR of a subnet cannot be modified once it is created. Thus, in this case the user has to delete the first subnet and create new, smaller subnets. 
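
As a minimal illustrative sketch (not from the source; the CIDRs match the question but the overall setup is hypothetical and assumes boto3 credentials and a region are already configured), the behaviour can be reproduced with the EC2 API: the first subnet that consumes the whole VPC CIDR is accepted, and any further create_subnet call is rejected because subnet CIDRs within a VPC cannot overlap.

# Sketch only: reproduce the overlapping-subnet rejection described above.
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="20.0.0.0/16")["Vpc"]["VpcId"]

# First subnet mistakenly consumes the entire VPC CIDR block.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="20.0.0.0/16")

try:
    # Any further subnet overlaps the first one, so the call is rejected.
    ec2.create_subnet(VpcId=vpc_id, CidrBlock="20.0.1.0/24")
except ClientError as err:
    print("Second subnet rejected:", err)

# The only remedy is to delete the oversized subnet and recreate smaller ones;
# a subnet's CIDR cannot be modified in place.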

Q2. - (Topic 2) 

A user has created an ELB with Auto Scaling. Which of the below mentioned offerings from ELB helps the user to stop sending new request traffic from the load balancer to an EC2 instance that is being deregistered, while allowing in-flight requests to complete? 

A. ELB sticky session 

B. ELB deregistration check 

C. ELB connection draining 

D. ELB auto registration Off 

Answer: C

Explanation: 

The Elastic Load Balancer connection draining feature causes the load balancer to stop sending new requests to the back-end instances when the instances are deregistering or become unhealthy, while ensuring that in-flight requests continue to be served. 
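
For reference, connection draining on a Classic Load Balancer is a load balancer attribute. A minimal boto3 sketch (the load balancer name is a hypothetical placeholder and the 300-second timeout is only an example value) might look like this:

# Sketch only: enable connection draining on a Classic ELB.
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",
    LoadBalancerAttributes={
        "ConnectionDraining": {
            "Enabled": True,
            "Timeout": 300,  # seconds allowed for in-flight requests to finish
        }
    },
)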

Q3. - (Topic 3) 

A user has deployed an application on an EBS-backed EC2 instance. For better performance, the application requires dedicated EC2-to-EBS traffic. How can the user achieve this? 

A. Launch the EC2 instance as EBS dedicated with PIOPS EBS 

B. Launch the EC2 instance as EBS enhanced with PIOPS EBS 

C. Launch the EC2 instance as EBS dedicated with PIOPS EBS 

D. Launch the EC2 instance as EBS optimized with PIOPS EBS 

Answer: D

Explanation: 

Any application that has performance-sensitive workloads and requires minimal variability with dedicated EC2-to-EBS traffic should use Provisioned IOPS (PIOPS) EBS volumes attached to an EBS-optimized EC2 instance, or it should use an instance with 10 Gigabit network connectivity. Launching an instance as EBS-optimized provides the user with a dedicated connection between the EC2 instance and the EBS volume. 
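
A minimal sketch of the launch call (not from the source; the AMI ID, instance type, device name, and IOPS/size figures are hypothetical placeholders) combining the EBS-optimized flag with a Provisioned IOPS volume:

# Sketch only: launch an EBS-optimized instance with a PIOPS (io1) volume.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-12345678",
    InstanceType="m4.large",   # an instance type that supports EBS optimization
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,         # dedicated throughput between EC2 and EBS
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvdf",
            "Ebs": {"VolumeType": "io1", "VolumeSize": 100, "Iops": 2000},
        }
    ],
)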

Q4. - (Topic 3) 

A user runs the command “dd if=/dev/xvdf of=/dev/null bs=1M” on an EBS volume created from a snapshot and attached to a Linux instance. Which of the below mentioned activities is the user performing with the step given above? 

A. Pre warming the EBS volume 

B. Initiating the device to mount on the EBS volume 

C. Formatting the volume 

D. Copying the data from a snapshot to the device 

Answer: A

Explanation: 

When the user creates an EBS volume and accesses it for the first time, it will encounter reduced IOPS due to the wiping or initialization of the block storage. To avoid this and achieve the best performance, it is recommended to pre-warm the EBS volume. For a volume created from a snapshot and attached to a Linux instance, the “dd” command pre-warms the existing data on the EBS volume and any restored snapshots of volumes that have previously been fully pre-warmed. Because this operation is read-only, it preserves incremental snapshots; however, it does not pre-warm unused space that has never been written to on the original volume. In the command “dd if=/dev/xvdf of=/dev/null bs=1M”, the “if” (input file) parameter should be set to the drive that the user wishes to warm. The “of” (output file) parameter should be set to the Linux null virtual device, /dev/null. The “bs” parameter sets the block size of the read operation; for optimal performance, this should be set to 1 MB. 

Q5. - (Topic 1) 

You have a web application leveraging an Elastic Load Balancer (ELB) in front of web servers deployed using an Auto Scaling group. Your database is running on Relational Database Service (RDS). The application serves out technical articles and responses to them; in general, there are more views of an article than there are responses to it. On occasion, an article on the site becomes extremely popular, resulting in significant traffic increases that cause the site to go down. 

What could you do to help alleviate the pressure on the infrastructure while maintaining availability during these events? 

Choose 3 answers 

A. Leverage CloudFront for the delivery of the articles. 

B. Add RDS read-replicas for the read traffic going to your relational database 

C. Leverage ElastiCache for caching the most frequently used data. 

D. Use SQS to queue up the requests for the technical posts and deliver them out of the queue. 

E. Use Route53 health checks to fail over to an S3 bucket for an error page. 

Answer: A,C,E 

Q6. - (Topic 1) 

You need to design a VPC for a web application consisting of an Elastic Load Balancer (ELB), a fleet of web/application servers, and an RDS database. The entire infrastructure must be distributed over two Availability Zones. 

Which VPC configuration works while assuring the database is not available from the Internet? 

A. One public subnet for the ELB, one public subnet for the web servers, and one private subnet for the database 

B. One public subnet for the ELB, two private subnets for the web servers, and two private subnets for RDS 

C. Two public subnets for the ELB, two private subnets for the web servers, and two private subnets for RDS 

D. Two public subnets for the ELB, two public subnets for the web servers, and two public subnets for RDS 

Answer: C

Q7. - (Topic 3) 

A user has created a VPC with public and private subnets using the VPC wizard. The VPC has CIDR 20.0.0.0/16. The public subnet uses CIDR 20.0.1.0/24. The user is planning to host a web server in the public subnet (port 80) and a DB server in the private subnet (port 3306). The user is configuring a security group for the public subnet (WebSecGrp) and the private subnet (DBSecGrp). Which of the below mentioned entries is required in the private subnet database security group (DBSecGrp)? 

A. Allow Inbound on port 3306 for Source Web Server Security Group (WebSecGrp) 

B. Allow Inbound on port 3306 from source 20.0.0.0/16 

C. Allow Outbound on port 3306 for Destination Web Server Security Group (WebSecGrp) 

D. Allow Outbound on port 80 for Destination NAT Instance IP 

Answer: A

Explanation: 

A user can create subnets within a VPC and launch instances inside them. If the user has created a public and a private subnet to host the web server and DB server respectively, the user should configure the instances in the private subnet to receive inbound traffic from the public subnet on the DB port. Thus, configure port 3306 in Inbound with the source as the Web Server Security Group (WebSecGrp). The user should configure Outbound on ports 80 and 443 for Destination 0.0.0.0/0, as the route table directs traffic from the private subnet to the NAT instance. 
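
A minimal sketch of the required DBSecGrp ingress rule (not from the source; both security group IDs are hypothetical placeholders), which scopes MySQL access to members of the web server security group rather than to a CIDR range:

# Sketch only: allow port 3306 into DBSecGrp with WebSecGrp as the source.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0db00000000000000",            # DBSecGrp (private subnet)
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0web0000000000000"}  # WebSecGrp as the source
            ],
        }
    ],
)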

Q8. - (Topic 3) 

A user has two EC2 instances running in two separate regions. The user is running an internal memory management tool, which captures the data and sends it to CloudWatch in US East, using a CLI with the same namespace and metric. Which of the below mentioned options is true with respect to the above statement? 

A. The setup will not work as CloudWatch cannot receive data across regions 

B. CloudWatch will receive and aggregate the data based on the namespace and metric 

C. CloudWatch will give an error since the data will conflict due to two sources 

D. CloudWatch will take the data of the server, which sends the data first 

Answer: B

Explanation: 

Amazon CloudWatch does not differentiate the source of a metric when receiving custom data. If the user is publishing a metric with the same namespace and dimensions from different sources, CloudWatch will treat them as a single metric. If the data points arrive with the same timestamp within a one-minute period, CloudWatch will aggregate the data. It treats these as a single metric, allowing the user to get statistics, such as minimum, maximum, average, and sum, across all servers. 
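
A minimal sketch of the publishing side (not from the source; the namespace, metric name, and value are hypothetical, and it assumes boto3 credentials are configured on both instances): running the same call from servers in different regions, but pointed at the US East CloudWatch endpoint, folds the data points into one aggregated metric.

# Sketch only: both servers publish the same custom metric to us-east-1.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def publish_memory(used_percent):
    cloudwatch.put_metric_data(
        Namespace="Custom/Memory",
        MetricData=[
            {
                "MetricName": "MemoryUtilization",
                "Unit": "Percent",
                "Value": used_percent,
            }
        ],
    )

# Identical namespace, metric name, and dimensions from either instance are
# aggregated by CloudWatch into a single metric.
publish_memory(63.5)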

Q9. - (Topic 1) 

Your application currently leverages AWS Auto Scaling to grow and shrink as load increases/decreases, and it has been performing well. Your marketing team expects a steady ramp-up in traffic to follow an upcoming campaign that will result in a 20x growth in traffic over 4 weeks. Your forecast for the approximate number of Amazon EC2 instances necessary to meet the peak demand is 175. 

What should you do to avoid potential service disruptions during the ramp up in traffic? 

A. Ensure that you have pre-allocated 175 Elastic IP addresses so that each server will be able to obtain one as it launches 

B. Check the service limits in Trusted Advisor and adjust as necessary so the forecasted count remains within limits. 

C. Change your Auto Scaling configuration to set a desired capacity of 175 prior to the launch of the marketing campaign 

D. Pre-warm your Elastic Load Balancer to match the requests per second anticipated during peak demand prior to the marketing campaign 

Answer: B

Q10. - (Topic 3) 

A user has configured ELB with Auto Scaling. The user suspended only the Auto Scaling Terminate process for a while. What will happen to the Availability Zone rebalancing process (AZRebalance) during this period? 

A. Auto Scaling will not launch or terminate any instances 

B. Auto Scaling will allow the instances to grow more than the maximum size 

C. Auto Scaling will keep launching instances till the maximum instance size 

D. It is not possible to suspend the terminate process while keeping the launch active 

Answer: B

Explanation: 

Auto Scaling performs various processes, such as Launch, Terminate, and Availability Zone Rebalance (AZRebalance). The AZRebalance process type seeks to maintain a balanced number of instances across Availability Zones within a region. If the user suspends the Terminate process, the AZRebalance process can cause the Auto Scaling group to grow up to ten percent larger than its maximum size, because Auto Scaling allows groups to temporarily grow larger than the maximum size during rebalancing activities. If Auto Scaling cannot terminate instances, the group could remain up to ten percent larger than the maximum size until the user resumes the Terminate process type. 
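
A minimal sketch of the suspend/resume calls in question (not from the source; the Auto Scaling group name is a hypothetical placeholder):

# Sketch only: suspend just the Terminate process; Launch and AZRebalance stay active.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.suspend_processes(
    AutoScalingGroupName="my-asg",
    ScalingProcesses=["Terminate"],
)

# Resume later so AZRebalance can shed the extra (up to 10%) instances.
autoscaling.resume_processes(
    AutoScalingGroupName="my-asg",
    ScalingProcesses=["Terminate"],
)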

Q11. - (Topic 3) 

A sys admin has enabled logging on ELB. Which of the below mentioned fields will not be a part of the log file name? 

A. Load Balancer IP 

B. EC2 instance IP 

C. S3 bucket name 

D. Random string 

Answer: B

Explanation: 

Elastic Load Balancing access logs capture detailed information for all requests made to the load balancer. Elastic Load Balancing publishes a log file from each load balancer node at the interval that the user has specified, and the load balancer can deliver multiple logs for the same period. Elastic Load Balancing creates log file names in the following format: {Bucket}/{Prefix}/AWSLogs/{AWS AccountID}/elasticloadbalancing/{Region}/{Year}/{Month}/{Day}/{AWS Account ID}_elasticloadbalancing_{Region}_{Load Balancer Name}_{End Time}_{Load Balancer IP}_{Random String}.log 
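
A minimal sketch of turning on access logging for a Classic ELB (not from the source; the load balancer name, bucket, and prefix are hypothetical placeholders). The resulting S3 keys follow the format shown above, which includes the load balancer IP and a random string but never an EC2 instance IP.

# Sketch only: enable Classic ELB access logs to an S3 bucket.
import boto3

elb = boto3.client("elb")

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-logs-bucket",
            "S3BucketPrefix": "prod",
            "EmitInterval": 60,  # minutes; 5 and 60 are the supported intervals
        }
    },
)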

Q12. - (Topic 3) 

A user has enabled termination protection on an EC2 instance. The user has also set the instance-initiated shutdown behaviour to terminate. When the user shuts down the instance from the OS, what will happen? 

A. The OS will shut down, but the instance will not be terminated due to termination protection 

B. It will terminate the instance 

C. It will not allow the user to shut down the instance from the OS 

D. It is not possible to set the termination protection when the instance-initiated shutdown behaviour is set to Terminate 

Answer: B

Explanation: 

It is always possible that someone terminates an EC2 instance by mistake using the Amazon EC2 console, command line interface, or API. If the admin wants to prevent the instance from being accidentally terminated, he can enable termination protection for that instance. The user can also set the shutdown behaviour for an EBS-backed instance, which tells the instance what to do when a shutdown is initiated from the OS (the instance-initiated shutdown behaviour). If the instance-initiated shutdown behaviour is set to terminate and the user shuts down the OS, the instance will still be terminated even though termination protection is enabled. 
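
A minimal sketch of the two settings discussed above (not from the source; the instance ID is a hypothetical placeholder):

# Sketch only: termination protection plus instance-initiated shutdown behaviour.
import boto3

ec2 = boto3.client("ec2")

# Enable termination protection (guards against console/CLI/API termination).
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    DisableApiTermination={"Value": True},
)

# Set the instance-initiated shutdown behaviour to terminate; an OS-level
# shutdown will then terminate the instance despite termination protection.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    InstanceInitiatedShutdownBehavior={"Value": "terminate"},
)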

Q13. - (Topic 3) 

A user has launched an EC2 Windows instance from an instance store backed AMI. The user wants to convert the AMI to an EBS backed AMI. How can the user convert it? 

A. Attach an EBS volume to the instance and unbundle all the AMI bundled data inside the EBS 

B. A Windows based instance store backed AMI cannot be converted to an EBS backed AMI 

C. It is not possible to convert an instance store backed AMI to an EBS backed AMI 

D. Attach an EBS volume and use the copy command to copy all the ephemeral content to the EBS volume 

Answer: B

Explanation: 

Generally, when a user has launched an EC2 instance from an instance store backed AMI, it can be converted to an EBS backed AMI, provided the user attaches an EBS volume to the instance and unbundles the AMI data onto it. However, AWS does not allow this for Windows instances. In this case, since the instance is a Windows instance, the user cannot convert it to an EBS backed AMI. 

Q14. - (Topic 3) 

A user is using AWS EC2. The user wants to set things up so that when there is an issue with the EC2 instance, such as a failed instance status check, a new instance is started in the user's private cloud. Which combination of AWS services helps to achieve this automation? 

A. AWS CloudWatch + CloudFormation 

B. AWS CloudWatch + AWS AutoScaling + AWS ELB 

C. AWS CloudWatch + AWS VPC 

D. AWS CloudWatch + AWS SNS 

Answer: D

Explanation: 

Amazon SNS can deliver notifications by SMS text message or email, to Amazon Simple Queue Service (SQS) queues, or to any HTTP endpoint. The user can configure a web service (HTTP endpoint) in his data centre which receives the notification and launches an instance in the private cloud. The user should configure a CloudWatch alarm to send a notification to SNS when the “StatusCheckFailed” metric is true for the EC2 instance. The SNS topic can then be configured to send a notification to the user’s HTTP endpoint, which launches an instance in the private cloud. 
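
A minimal sketch of wiring the alarm to the on-premises endpoint (not from the source; the topic name, HTTPS endpoint URL, and instance ID are hypothetical placeholders, and the subscribed web service must confirm the SNS subscription before it receives notifications):

# Sketch only: CloudWatch alarm on StatusCheckFailed -> SNS -> HTTPS endpoint.
import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

topic_arn = sns.create_topic(Name="ec2-status-failed")["TopicArn"]

# On-premises web service that launches the private-cloud instance.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="https",
    Endpoint="https://ops.example.com/launch-private-instance",
)

cloudwatch.put_metric_alarm(
    AlarmName="ec2-status-check-failed",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[topic_arn],
)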

Q15. - (Topic 3) 

An organization has configured Auto Scaling for hosting their application. The system admin wants to understand the Auto Scaling health check process. If an instance is unhealthy, Auto Scaling launches a new instance and terminates the unhealthy one. What is the order of execution? 

A. Auto Scaling launches a new instance first and then terminates the unhealthy instance 

B. Auto Scaling performs the launch and terminate processes in a random order 

C. Auto Scaling launches and terminates the instances simultaneously 

D. Auto Scaling terminates the instance first and then launches a new instance 

Answer: D

Explanation: 

Auto Scaling keeps checking the health of the instances at regular intervals and marks the instance for replacement when it is unhealthy. The ReplaceUnhealthy process terminates instances which are marked as unhealthy and subsequently creates new instances to replace them. This process first terminates the instance and then launches a new instance. 
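
A minimal sketch for observing this behaviour (not from the source; the instance ID and group name are hypothetical placeholders): mark an instance unhealthy, then list the group's scaling activities, which record the terminate activity for the unhealthy instance starting before the replacement launch.

# Sketch only: force an unhealthy instance and inspect the resulting activities.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.set_instance_health(
    InstanceId="i-0123456789abcdef0",
    HealthStatus="Unhealthy",
)

activities = autoscaling.describe_scaling_activities(
    AutoScalingGroupName="my-asg",
)["Activities"]

for activity in activities:
    print(activity["StartTime"], activity["Description"])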

Q16. - (Topic 1) 

Your team is excited about the use of AWS because now they have access to "programmable infrastructure." You have been asked to manage your AWS infrastructure in a manner similar to the way you might manage application code. You want to be able to deploy exact copies of different versions of your infrastructure, stage changes into different environments, revert back to previous versions, and identify what versions are running at any particular time (development, test, QA, production). 

Which approach addresses this requirement? 

A. Use cost allocation reports and AWS OpsWorks to deploy and manage your infrastructure. 

B. Use AWS CloudWatch metrics and alerts along with resource tagging to deploy and manage your infrastructure. 

C. Use AWS Elastic Beanstalk and a version control system like Git to deploy and manage your infrastructure. 

D. Use AWS CloudFormation and a version control system like Git to deploy and manage your infrastructure. 

Answer: D

Explanation: Reference: 

http://aws.amazon.com/opsworks/faqs/ 
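
As a loose illustration of treating infrastructure as versioned code (not from the source; the stack name, template path, parameter, and tag values are hypothetical), a CloudFormation template tracked in Git can be deployed per environment with a single API call, and tagging the stack with the Git commit makes it easy to identify which version is running where.

# Sketch only: deploy a Git-tracked CloudFormation template for one environment.
import boto3

cloudformation = boto3.client("cloudformation")

with open("infrastructure/template.yaml") as handle:
    template_body = handle.read()

cloudformation.create_stack(
    StackName="webapp-qa",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "qa"}],
    Tags=[{"Key": "GitCommit", "Value": "abc1234"}],  # tie the stack to a revision
)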
