AWS-SysOps Premium Bundle

AWS Certified SysOps Administrator Associate Certification Exam

4.5 (20325 ratings)
Last update: November 21, 2024

Amazon AWS-SysOps Free Practice Questions

Q1. - (Topic 3) 

A user is trying to create a PIOPS EBS volume with 8 GB size and 200 IOPS. Will AWS create the volume? 

A. Yes, since the ratio between EBS and IOPS is less than 30 

B. No, since the PIOPS and EBS size ratio is less than 30 

C. No, the EBS size is less than 10 GB 

D. Yes, since PIOPS is higher than 100 

Answer:

Explanation: 

A provisioned IOPS EBS volume can range in size from 10 GB to 1 TB and the user can provision up to 4000 IOPS per volume. The ratio of IOPS provisioned to the volume size requested should be a maximum of 30; for example, a volume with 3000 IOPS must be at least 100 GB. 
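The validation rule described above can be sketched as a small function. This is an illustration of the historical limits quoted in the explanation (10 GB-1 TB, up to 4000 IOPS, IOPS-to-size ratio of at most 30), not current EBS limits; the function name is ours.

```python
def validate_piops_volume(size_gb, iops):
    """Check a PIOPS volume request against the limits stated above:
    10 GB-1 TB size, at most 4000 IOPS, IOPS:size ratio <= 30.
    Returns None if the request is valid, otherwise the rejection reason."""
    if not 10 <= size_gb <= 1024:
        return "size must be between 10 GB and 1 TB"
    if iops > 4000:
        return "at most 4000 IOPS per volume"
    if iops / size_gb > 30:
        return "IOPS to size ratio must not exceed 30"
    return None

# The 8 GB / 200 IOPS request from the question: the ratio (25) is fine,
# but the size is below the 10 GB minimum, so AWS rejects it.
print(validate_piops_volume(8, 200))     # rejected: size below 10 GB
print(validate_piops_volume(100, 3000))  # valid: 3000 IOPS on 100 GB (ratio 30)
```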

Q2. - (Topic 3) 

A user has launched 5 instances in EC2-CLASSIC and attached 5 elastic IPs to the five different instances in the US East region. The user is creating a VPC in the same region. The user wants to assign an elastic IP to the VPC instance. How can the user achieve this? 

A. The user has to request AWS to increase the number of elastic IPs associated with the account 

B. AWS allows 10 EC2 Classic IPs per region; so it will allow the user to allocate new Elastic IPs in the same region 

C. AWS will not allow the user to create a new elastic IP in VPC; it will throw an error 

D. The user can allocate a new IP address in VPC as it has a different limit than EC2 

Answer:

Explanation: 

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. A user can create a subnet within a VPC and launch instances inside that subnet. A user can have 5 Elastic IP addresses per region with EC2 Classic. The user can have 5 separate Elastic IPs with VPC in the same region, as VPC has a separate limit from EC2 Classic. 

Q3. - (Topic 3) 

A user has created an Auto Scaling group using CLI. The user wants to enable CloudWatch detailed monitoring for that group. How can the user configure this? 

A. When the user sets an alarm on the Auto Scaling group, it automatically enables detailed monitoring 

B. By default detailed monitoring is enabled for Auto Scaling 

C. Auto Scaling does not support detailed monitoring 

D. Enable detailed monitoring from the AWS console 

Answer:

Explanation: 

CloudWatch is used to monitor AWS services as well as custom services. It provides either basic or detailed monitoring for the supported AWS products. In basic monitoring, a service sends data points to CloudWatch every five minutes, while in detailed monitoring a service sends data points to CloudWatch every minute. To enable detailed instance monitoring for a new Auto Scaling group, the user does not need to take any extra steps. When the user creates an Auto Scaling launch configuration as the first step of creating an Auto Scaling group, each launch configuration contains a flag named InstanceMonitoring.Enabled. The default value of this flag is true, so the user does not need to set it to get detailed monitoring. 

Q4. - (Topic 3) 

A user is trying to understand the CloudWatch metrics for the AWS services. It is required that the user should first understand the namespace for the AWS services. Which of the below mentioned is not a valid namespace for the AWS services? 

A. AWS/StorageGateway 

B. AWS/CloudTrail 

C. AWS/ElastiCache 

D. AWS/SWF 

Answer:

Explanation: 

Amazon CloudWatch is basically a metrics repository. AWS products put metrics into this repository, and the user can retrieve data or statistics based on those metrics. To distinguish the data for each service, each CloudWatch metric has a namespace. Namespaces are containers for metrics. All AWS services that provide Amazon CloudWatch data use a namespace string beginning with "AWS/". All the services supported by CloudWatch have a namespace. CloudWatch does not monitor CloudTrail; thus, the namespace "AWS/CloudTrail" is not valid. 

Q5. - (Topic 2) 

A user is trying to aggregate all the CloudWatch metric data of the last 1 week. Which of the below mentioned statistics is not available for the user as a part of data aggregation? 

A. Aggregate 

B. Sum 

C. Sample data 

D. Average 

Answer:

Explanation: 

Amazon CloudWatch is basically a metrics repository. Either the user can send the custom data or an AWS product can put metrics into the repository, and the user can retrieve the statistics based on those metrics. The statistics are metric data aggregations over specified periods of time. Aggregations are made using the namespace, metric name, dimensions, and the data point unit of measure, within the time period that is specified by the user. CloudWatch supports Sum, Min, Max, Sample Data and Average statistics aggregation. 

Q6. - (Topic 3) 

A user has configured Auto Scaling with 3 instances. The user created a new AMI after updating one of the instances. If the user wants to terminate two specific instances so that Auto Scaling launches replacement instances with the new launch configuration, which command should he run? 

A. as-delete-instance-in-auto-scaling-group <Instance ID> --no-decrement-desired-capacity 

B. as-terminate-instance-in-auto-scaling-group <Instance ID> --update-desired-capacity 

C. as-terminate-instance-in-auto-scaling-group <Instance ID> --decrement-desired-capacity 

D. as-terminate-instance-in-auto-scaling-group <Instance ID> --no-decrement-desired-capacity 

Answer:

Explanation: 

The Auto Scaling command as-terminate-instance-in-auto-scaling-group <Instance ID> will terminate the specified instance ID. The user is required to specify the parameter --no-decrement-desired-capacity to ensure that Auto Scaling launches a new instance from the launch config after terminating the instance. If the user specifies the parameter --decrement-desired-capacity, then Auto Scaling will terminate the instance and decrease the desired capacity by 1. 

Q7. - (Topic 1) 

A media company produces new video files on-premises every day, with a total size of around 100 GB after compression. All files have a size of 1-2 GB and need to be uploaded to Amazon S3 every night in a fixed time window between 3am and 5am. The current upload takes almost 3 hours, although less than half of the available bandwidth is used. 

What step(s) would ensure that the file uploads are able to complete in the allotted time window? 

A. Increase your network bandwidth to provide faster throughput to S3 

B. Upload the files in parallel to S3 

C. Pack all files into a single archive, upload it to S3, then extract the files in AWS 

D. Use AWS Import/Export to transfer the video files 

Answer:

Explanation: Reference: 

http://aws.amazon.com/importexport/faqs/ 
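The arithmetic behind this question can be checked with a quick back-of-envelope calculation. This sketch assumes decimal gigabytes and the 2-hour window (3am-5am) stated in the question:

```python
# Back-of-envelope check of the upload throughput described above.
GB = 8 * 1000 ** 3               # bits per (decimal) gigabyte

total_bits = 100 * GB            # ~100 GB of video per night
serial_seconds = 3 * 3600        # current serial upload: almost 3 hours

serial_rate_mbps = total_bits / serial_seconds / 1e6
print(f"Serial upload rate: {serial_rate_mbps:.0f} Mbps")      # ~74 Mbps

# "Less than half the available bandwidth is used", so the link can
# carry at least twice the serial rate.
available_mbps = 2 * serial_rate_mbps

window_seconds = 2 * 3600        # the 3am-5am window
needed_mbps = total_bits / window_seconds / 1e6
print(f"Rate needed for the window: {needed_mbps:.0f} Mbps")   # ~111 Mbps

# Parallel uploads that fill the spare bandwidth finish in time.
assert available_mbps >= needed_mbps
```

This is why uploading the 1-2 GB files in parallel, rather than buying more bandwidth, closes the gap: the spare capacity already exists.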

Q8. - (Topic 2) 

A user has created a VPC with CIDR 20.0.0.0/24. The user has created a public subnet with CIDR 20.0.0.0/25. The user is trying to create the private subnet with CIDR 20.0.0.128/25. Which of the below mentioned statements is true in this scenario? 

A. It will not allow the user to create the private subnet due to a CIDR overlap 

B. It will allow the user to create a private subnet with CIDR as 20.0.0.128/25 

C. This statement is wrong as AWS does not allow CIDR 20.0.0.0/25 

D. It will not allow the user to create a private subnet due to a wrong CIDR range 

Answer:

Explanation: 

When the user creates a subnet in a VPC, he specifies the CIDR block for the subnet. The CIDR block of a subnet can be the same as the CIDR block for the VPC (for a single subnet in the VPC) or a subset of it (to enable multiple subnets). If the user creates more than one subnet in a VPC, the CIDR blocks of the subnets must not overlap. Thus, in this case the user has created a VPC with the CIDR block 20.0.0.0/24, which supports 256 IP addresses (20.0.0.0 to 20.0.0.255). The user can break this CIDR block into two subnets, each supporting 128 IP addresses. One subnet uses the CIDR block 20.0.0.0/25 (for addresses 20.0.0.0 - 20.0.0.127) and the other uses the CIDR block 20.0.0.128/25 (for addresses 20.0.0.128 - 20.0.0.255). 
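The CIDR math above can be verified with Python's standard ipaddress module:

```python
import ipaddress

# The VPC and the two subnets from the question.
vpc = ipaddress.ip_network("20.0.0.0/24")
public = ipaddress.ip_network("20.0.0.0/25")
private = ipaddress.ip_network("20.0.0.128/25")

# Both subnets fit inside the VPC block...
assert public.subnet_of(vpc) and private.subnet_of(vpc)
# ...and do not overlap each other, so AWS accepts both.
assert not public.overlaps(private)

# Splitting the /24 into /25s yields exactly these two ranges,
# each holding 128 addresses.
print([str(n) for n in vpc.subnets(new_prefix=25)])
assert public.num_addresses == private.num_addresses == 128
```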

Q9. - (Topic 3) 

A user has configured ELB with SSL using a security policy for secure negotiation between the client and the load balancer. The ELB security policy supports various ciphers. Which of the below mentioned options helps identify the matching cipher between the client's cipher list and the ELB cipher list when the client requests the ELB DNS over SSL? 

A. Cipher Protocol 

B. Client Configuration Preference 

C. Server Order Preference 

D. Load Balancer Preference 

Answer:

Explanation: 

Elastic Load Balancing uses a Secure Socket Layer (SSL) negotiation configuration known as a Security Policy, which is used to negotiate the SSL connections between a client and the load balancer. When a client requests the ELB DNS over SSL and the load balancer is configured to support Server Order Preference, the load balancer selects the first cipher in its own list that matches any one of the ciphers in the client's list. Server Order Preference ensures that the load balancer determines which cipher is used for the SSL connection. 
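The selection logic can be sketched as a toy model. This is an illustration of the preference rule, not ELB's actual implementation; the cipher names and function name are placeholders:

```python
def negotiate(server_ciphers, client_ciphers, server_order_preference=True):
    """Pick the cipher for an SSL/TLS handshake (simplified illustration).

    With Server Order Preference, the first cipher in the *server's*
    list that the client also supports wins; otherwise the first
    mutually supported cipher in the *client's* list wins.
    """
    first, second = (
        (server_ciphers, client_ciphers)
        if server_order_preference
        else (client_ciphers, server_ciphers)
    )
    for cipher in first:
        if cipher in second:
            return cipher
    return None  # no shared cipher: the handshake fails

# Hypothetical cipher lists, each ordered by preference.
elb = ["AES256-SHA", "AES128-SHA", "RC4-SHA"]
client = ["RC4-SHA", "AES128-SHA"]

print(negotiate(elb, client, server_order_preference=True))   # AES128-SHA
print(negotiate(elb, client, server_order_preference=False))  # RC4-SHA
```

Note how the same two lists produce different ciphers depending on whose ordering wins, which is exactly what Server Order Preference controls.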

Q10. - (Topic 3) 

A sys admin is using server side encryption with AWS S3. Which of the below mentioned statements helps the user understand the S3 encryption functionality? 

A. The server side encryption with the user supplied key works when versioning is enabled 

B. The user can use the AWS console, SDK and APIs to encrypt or decrypt the content for server side encryption with the user supplied key 

C. The user must send an AES-128 encrypted key 

D. The user can upload his own encryption key to the S3 console 

Answer:

Explanation: 

AWS S3 supports client side or server side encryption to encrypt all data at rest. With server side encryption, either S3 supplies an AES-256 encryption key or the user can send his own encryption key along with each API call. Encryption with the user supplied key (SSE-C) does not work with the AWS console. S3 does not store the keys, so the user has to send the key with each request. SSE-C works when the user has enabled versioning. 

Q11. - (Topic 3) 

A user is collecting 1000 records per second. The user wants to send the data to CloudWatch using the custom namespace. Which of the below mentioned options is recommended for this activity? 

A. Aggregate the data with statistics, such as Min, max, Average, Sum and Sample data and send the data to CloudWatch 

B. Send all the data values to CloudWatch in a single command by separating them with a comma. CloudWatch will parse automatically 

C. Create one csv file of all the data and send a single file to CloudWatch 

D. It is not possible to send all the data in one call. Thus, it should be sent one by one. CloudWatch will aggregate the data automatically 

Answer:

Explanation: 

AWS CloudWatch supports custom metrics. The user can capture custom data and upload it to CloudWatch using the CLI or APIs. The user can publish data to CloudWatch as single data points or as an aggregated set of data points, called a statistic set, using the command put-metric-data. It is recommended that when the user has multiple data points per minute, he should aggregate the data to minimize the number of calls to put-metric-data. In this case it will be a single call to CloudWatch instead of 1000 calls if the data is aggregated. 
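A minimal sketch of building such a statistic set, assuming synthetic data; the dict keys follow the StatisticValues shape accepted by the CloudWatch PutMetricData API:

```python
import random

# Aggregate one second's worth of records (values are made up) into a
# CloudWatch "statistic set" so that a single PutMetricData call can
# replace 1000 individual calls.
random.seed(42)
records = [random.uniform(0, 100) for _ in range(1000)]

statistic_set = {
    "SampleCount": len(records),
    "Sum": sum(records),
    "Minimum": min(records),
    "Maximum": max(records),
}

# With boto3 this dict would be passed as StatisticValues, e.g.:
#   cloudwatch.put_metric_data(
#       Namespace="MyApp",
#       MetricData=[{"MetricName": "RecordValue",
#                    "StatisticValues": statistic_set,
#                    "Unit": "None"}])
print(statistic_set["SampleCount"])  # 1000
```

CloudWatch can then derive the Average as Sum / SampleCount on its side, so no information needed for the standard statistics is lost by aggregating.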

Q12. - (Topic 3) 

A user has launched an EC2 instance store backed instance in the US-East-1a zone. The user created AMI #1 and copied it to the Europe region. After that, the user made a few updates to the application running in the US-East-1a zone. The user makes an AMI#2 after the changes. If the user launches a new instance in Europe from the AMI #1 copy, which of the below mentioned statements is true? 

A. The new instance will have the changes made after the AMI copy as AWS just copies the reference of the original AMI during the copying. Thus, the copied AMI will have all the updated data 

B. The new instance will have the changes made after the AMI copy since AWS keeps updating the AMI 

C. It is not possible to copy the instance store backed AMI from one region to another 

D. The new instance in the EU region will not have the changes made after the AMI copy 

Answer:

Explanation: 

Within EC2, when the user copies an AMI, the new AMI is fully independent of the source AMI; there is no link to the original (source) AMI. The user can modify the source AMI without affecting the new AMI, and vice versa. Therefore, in this case even if the source AMI is modified, the copied AMI in the EU region will not have the changes. Thus, after making changes the user needs to copy the new source AMI to the destination region to get those changes. 

Q13. - (Topic 3) 

A sys admin has enabled a log on ELB. Which of the below mentioned activities are not captured by the log? 

A. Response processing time 

B. Front end processing time 

C. Backend processing time 

D. Request processing time 

Answer:

Explanation: 

Elastic Load Balancing access logs capture detailed information for all the requests made to the load balancer. Each request will have details, such as client IP, request path, ELB IP, time, and latencies. The time will have information, such as Request Processing time, Backend Processing time and Response Processing time. 
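The latency fields can be read directly from a log entry. The sample line below is made up, but follows the documented Classic Load Balancer access log field order (timestamp, ELB name, client:port, backend:port, then the three processing times):

```python
# Illustrative parse of a Classic ELB access log entry (values invented).
sample = ('2024-11-21T03:15:00.123456Z my-elb 192.0.2.10:54321 '
          '10.0.1.5:80 0.000086 0.001048 0.000057 200 200 0 573 '
          '"GET http://example.com:80/ HTTP/1.1" "curl/7.81.0" - -')

# The three latency fields come before any quoted (space-containing)
# fields, so a plain split is safe for them.
fields = sample.split(" ")
request_time = float(fields[4])   # request processing time (seconds)
backend_time = float(fields[5])   # backend processing time
response_time = float(fields[6])  # response processing time

print(request_time, backend_time, response_time)
```

Note that a "front end processing time" field does not appear anywhere in the entry, which is the point of the question.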

Q14. - (Topic 3) 

A user has configured ELB with SSL using a security policy for secure negotiation between the client and load balancer. Which of the below mentioned SSL protocols is not supported by the security policy? 

A. TLS 1.3 

B. TLS 1.2 

C. SSL 2.0 

D. SSL 3.0 

Answer:

Explanation: 

Elastic Load Balancing uses a Secure Socket Layer (SSL) negotiation configuration known as a Security Policy. It is used to negotiate the SSL connections between a client and the load balancer. Elastic Load Balancing supports the following versions of the SSL protocol: TLS 1.2, TLS 1.1, TLS 1.0, SSL 3.0 and SSL 2.0. 

Q15. - (Topic 2) 

A user has launched 10 instances from the same AMI ID using Auto Scaling. The user is trying to see the average CPU utilization across all instances of the last 2 weeks under the CloudWatch console. How can the user achieve this? 

A. View the Auto Scaling CPU metrics 

B. Aggregate the data over the instance AMI ID 

C. The user has to use the CloudWatch analyser to find the average data across instances 

D. It is not possible to see the average CPU utilization of the same AMI ID since the instance ID is different 

Answer:

Explanation: 

Amazon CloudWatch is basically a metrics repository. Either the user can send custom data or an AWS product can put metrics into the repository, and the user can retrieve statistics based on those metrics. The statistics are metric data aggregations over specified periods of time. Aggregations are made using the namespace, metric name, dimensions, and the data point unit of measure, within the time period specified by the user. To aggregate the data across instances launched from the same AMI, the user should select the AMI ID under the EC2 metrics and choose the aggregate average to view the data. 

Q16. - (Topic 3) 

A sys admin is trying to understand the sticky session algorithm. Please select the correct sequence of steps, both when the cookie is present and when it is not, to help the admin understand the implementation of the sticky session: 

1. ELB inserts the cookie in the response 
2. ELB chooses the instance based on the load balancing algorithm 
3. Check the cookie in the service request 
4. The cookie is found in the request 
5. The cookie is not found in the request 

A. 3,1,4,2 [Cookie is not Present] & 3,1,5,2 [Cookie is Present] 

B. 3,4,1,2 [Cookie is not Present] & 3,5,1,2 [Cookie is Present] 

C. 3,5,2,1 [Cookie is not Present] & 3,4,2,1 [Cookie is Present] 

D. 3,2,5,4 [Cookie is not Present] & 3,2,4,5 [Cookie is Present] 

Answer:

Explanation: 

Generally AWS ELB routes each request to a zone with the minimum load. The Elastic Load Balancer provides a feature called sticky session which binds the user’s session with a specific EC2 instance. The load balancer uses a special load-balancer-generated cookie to track the application instance for each request. When the load balancer receives a request, it first checks to see if this cookie is present in the request. If so, the request is sent to the application instance specified in the cookie. If there is no cookie, the load balancer chooses an application instance based on the existing load balancing algorithm. A cookie is inserted into the response for binding subsequent requests from the same user to that application instance. 
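The flow above can be sketched as a toy simulation. This is illustrative only; the instance names, the round-robin choice, and the cookie name are placeholders, not ELB internals:

```python
import itertools

# Toy sticky-session router mirroring the steps in the question.
instances = ["i-aaa", "i-bbb", "i-ccc"]
round_robin = itertools.cycle(instances)  # stand-in balancing algorithm
COOKIE = "AWSELB"                          # placeholder cookie name

def route(request_cookies):
    """Route a request, returning (instance, response_cookies)."""
    # Step 3: check the cookie in the service request.
    if COOKIE in request_cookies:
        # Step 4: cookie found -> send to the instance it names.
        return request_cookies[COOKIE], dict(request_cookies)
    # Step 5 then 2: no cookie, so pick an instance by the algorithm.
    chosen = next(round_robin)
    # Step 1: insert the cookie in the response to pin future requests.
    return chosen, {COOKIE: chosen}

first_instance, cookies = route({})   # new session: the algorithm picks
second_instance, _ = route(cookies)   # sticky: same instance again
assert first_instance == second_instance
print(first_instance)
```

Running it shows why the no-cookie sequence is check, not found, choose, insert (3, 5, 2, 1), while a returning request short-circuits after the cookie is found.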
