AWS-Solution-Architect-Associate Premium Bundle

AWS Certified Solutions Architect - Associate Certification Exam

Last update: November 21, 2024

Amazon AWS-Solution-Architect-Associate Free Practice Questions

Q1. Having just set up your first Amazon Virtual Private Cloud (Amazon VPC) network, which includes a default network interface, you decide that you need to create and attach an additional network interface, known as an elastic network interface (ENI), to one of your instances. Which of the following statements is true regarding attaching network interfaces to your instances in your VPC?

A. You can attach 5 ENIs per instance type.

B. You can attach as many ENIs as you want.

C. The number of ENIs you can attach varies by instance type.

D. You can attach 100 ENIs total regardless of instance type. 

Answer: C

Explanation:

Each instance in your VPC has a default network interface that is assigned a private IP address from the IP address range of your VPC. You can create and attach an additional network interface, known as an elastic network interface (ENI), to any instance in your VPC. The number of ENIs you can attach varies by instance type.
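As an illustration, here is a minimal boto3 sketch of creating and attaching an additional ENI; the subnet, security group, and instance IDs are placeholders, and the attachment only succeeds if the instance type still has a free interface slot.

```python
import boto3

ec2 = boto3.client("ec2")

# Create an additional ENI in the same subnet as the target instance
# (all IDs below are placeholders).
eni = ec2.create_network_interface(
    SubnetId="subnet-0123456789abcdef0",
    Groups=["sg-0123456789abcdef0"],
    Description="secondary interface",
)

# Attach it at device index 1; index 0 is the instance's default interface.
# This call fails if the instance type's ENI limit has been reached.
ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterface"]["NetworkInterfaceId"],
    InstanceId="i-0123456789abcdef0",
    DeviceIndex=1,
)
```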

Q2. A user is launching an EC2 instance in the US East region. Which of the below mentioned options is recommended by AWS with respect to the selection of the availability zone?

A. Always select the AZ while launching an instance

B. Always select the US-East-1-a zone for HA

C. Do not select the AZ; instead let AWS select the AZ

D. The user can never select the availability zone while launching an instance 

Answer: C

Explanation:

When launching an instance with EC2, AWS recommends not selecting an Availability Zone (AZ) and instead accepting the default. This enables AWS to select the best Availability Zone based on system health and available capacity. An Availability Zone should be specified only when launching additional instances that need to be placed in the same AZ as, or a different AZ from, the running instances.

Reference:  http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
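As a rough illustration, the boto3 call below (with a placeholder AMI ID) omits the Placement parameter so that AWS chooses the Availability Zone; the commented-out line shows how an AZ would be pinned for subsequent launches.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Omitting Placement lets AWS pick the Availability Zone.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    # Placement={"AvailabilityZone": "us-east-1a"},  # only when co-locating
    # with (or separating from) instances that are already running
)
```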

Q3. You have been given a scope to deploy some AWS infrastructure for a large organisation. The requirements are that you will have a lot of EC2 instances but may need to add more when the average utilization of your Amazon EC2 fleet is high and conversely remove them when CPU utilization is low. Which AWS services would be best to use to accomplish this?

A. Auto Scaling, Amazon CloudWatch and AWS Elastic Beanstalk

B. Auto Scaling, Amazon CloudWatch and Elastic Load Balancing.

C. Amazon CloudFront, Amazon CloudWatch and Elastic Load Balancing.

D. AWS Elastic Beanstalk, Amazon CloudWatch and Elastic Load Balancing.

Answer: B

Explanation:

Auto Scaling enables you to follow the demand curve for your applications closely, reducing the need to manually provision Amazon EC2 capacity in advance. For example, you can set a condition to add new Amazon EC2 instances in increments to the Auto Scaling group when the average utilization of your Amazon EC2 fleet is high; and similarly, you can set a condition to remove instances in the same increments when CPU utilization is low. If you have predictable load changes, you can set a schedule through Auto Scaling to plan your scaling activities. You can use Amazon CloudWatch to send alarms to trigger scaling activities and Elastic Load Balancing to help distribute traffic to your instances within Auto Scaling groups. Auto Scaling enables you to run your Amazon EC2 fleet at optimal utilization.

Reference: http://aws.amazon.com/autoscaling/
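For illustration, here is a minimal boto3 sketch of wiring a CloudWatch CPU alarm to an Auto Scaling policy; the group name, adjustment size, and thresholds are assumptions, not values from the question.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Scale-out policy: add two instances to the group (group name is a placeholder).
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-fleet-asg",
    PolicyName="scale-out-on-high-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
)

# CloudWatch alarm that triggers the policy when average fleet CPU > 70%
# for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="web-fleet-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-fleet-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```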

Q4. A major client who has been spending a lot of money on his internet service provider asks you to set up an AWS Direct Connect connection to try and save him some money. You know he needs high-speed connectivity. Which connection port speeds are available on AWS Direct Connect?

A. 500Mbps and 1Gbps

B. 1Gbps and 10Gbps

C. 100Mbps and 1Gbps

D. 1Gbps 

Answer: B

Explanation:

AWS Direct Connect is a network service that provides an alternative to using the internet to utilize AWS cloud services.

Using AWS Direct Connect, data that would have previously been transported over the Internet can now be delivered through a private network connection between AWS and your datacenter or corporate network.

1Gbps and 10Gbps ports are available. Speeds of 50Mbps, 100Mbps, 200Mbps, 300Mbps, 400Mbps, and 500Mbps can be ordered from any APN partners supporting AWS Direct Connect.

Reference: https://aws.amazon.com/directconnect/faqs/
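As a sketch, the port speed is chosen through the bandwidth argument when requesting a dedicated connection with boto3; the location code and connection name below are placeholders.

```python
import boto3

directconnect = boto3.client("directconnect")

# Request a dedicated 1 Gbps port at a Direct Connect location
# (location code and connection name are placeholders).
directconnect.create_connection(
    location="EqDC2",
    bandwidth="1Gbps",          # dedicated ports: "1Gbps" or "10Gbps"
    connectionName="corp-dc-link",
)
```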

Q5. Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if required, you may need to pay for a consultant.

How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?

A. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2.

B. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2.

C. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.

D. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.

Answer: C
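To illustrate the two pieces option C relies on, here is a hedged boto3 sketch: an Elastic Transcoder job that converts an uploaded MP4 into HLS output, and an S3 lifecycle rule that archives the originals to Glacier after a few days. The pipeline ID, bucket name, object keys, and preset ID are all placeholders.

```python
import boto3

transcoder = boto3.client("elastictranscoder")
s3 = boto3.client("s3")

# Transcode an uploaded MP4 into HLS segments (IDs and keys are placeholders).
transcoder.create_job(
    PipelineId="1111111111111-abcde1",            # placeholder pipeline ID
    Input={"Key": "uploads/training-2024-11.mp4"},
    OutputKeyPrefix="hls/training-2024-11/",
    Outputs=[{
        "Key": "video",
        "PresetId": "1351620000001-200010",       # placeholder HLS preset ID
        "SegmentDuration": "10",
    }],
    Playlists=[{"Name": "index", "Format": "HLSv3", "OutputKeys": ["video"]}],
)

# Archive the original uploads to Glacier a few days after upload.
s3.put_bucket_lifecycle_configuration(
    Bucket="training-videos",                     # placeholder bucket name
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-originals",
        "Status": "Enabled",
        "Filter": {"Prefix": "uploads/"},
        "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
    }]},
)
```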

Q6. A company wants to review the security requirements of Glacier. Which of the below mentioned statements is true with respect to the AWS Glacier data security?

A. All data stored on Glacier is protected with AES-256 serverside encryption.

B. All data stored on Glacier is protected with AES-128 serverside encryption.

C. The user can set the serverside encryption flag to encrypt the data stored on Glacier.

D. The data stored on Glacier is not encrypted by default. 

Answer: A

Explanation:

For Amazon Web Services, all the data stored on Amazon Glacier is protected using server-side encryption. AWS generates separate unique encryption keys for each Amazon Glacier archive and encrypts it using AES-256. The encryption key is then itself encrypted using AES-256 with a master key that is stored in a secure location.

Reference: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf

Q7. A web company is looking to implement an external payment service into their highly available application deployed in a VPC. Their application EC2 instances are behind a public-facing ELB. Auto Scaling is used to add additional instances as traffic increases; under normal load the application runs 2 instances in the

Auto Scaling group, but at peak it can scale 3x in size. The application instances need to communicate with the payment service over the Internet, which requires whitelisting of all public IP addresses used to communicate with it. A maximum of 4 whitelisted IP addresses are allowed at a time and can be added through an API.

How should they architect their solution?

A. Route payment requests through two NAT instances set up for High Availability and whitelist the Elastic IP addresses attached to the NAT instances.

B. Whitelist the VPC Internet Gateway Public IP and route payment requests through the Internet Gateway.

C. Whitelist the ELB IP addresses and route payment requests from the Application servers through the ELB.

D. Automatically assign public IP addresses to the application instances in the Auto Scaling group and run a script on boot that adds each instance's public IP address to the payment validation whitelist API.

Answer: D
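A rough sketch of the boot-time script option D describes: it reads the instance's public IP from the instance metadata service and registers it with the payment provider's whitelist API. The whitelist endpoint URL and API key header are hypothetical; the metadata calls use IMDSv2.

```python
import json
import urllib.request

METADATA = "http://169.254.169.254/latest"
WHITELIST_API = "https://payments.example.com/v1/whitelist"  # hypothetical endpoint

# Get an IMDSv2 session token, then read this instance's public IPv4 address.
token_req = urllib.request.Request(
    f"{METADATA}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
token = urllib.request.urlopen(token_req).read().decode()

ip_req = urllib.request.Request(
    f"{METADATA}/meta-data/public-ipv4",
    headers={"X-aws-ec2-metadata-token": token},
)
public_ip = urllib.request.urlopen(ip_req).read().decode()

# Register the address with the payment provider's whitelist API
# (endpoint and auth header are assumptions for illustration only).
whitelist_req = urllib.request.Request(
    WHITELIST_API,
    data=json.dumps({"ip": public_ip}).encode(),
    headers={"Content-Type": "application/json", "X-Api-Key": "REPLACE_ME"},
    method="POST",
)
urllib.request.urlopen(whitelist_req)
```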

Q8. Refer to the architecture diagram above of a batch processing solution using Simple Queue Service (SQS) to set up a message queue between EC2 instances which are used as batch processors. CloudWatch monitors the number of job requests (queued messages) and an Auto Scaling group adds or deletes

batch servers automatically based on parameters set in CloudWatch alarms. You can use this architecture to implement which of the following features in a cost effective and efficient manner?

A. Reduce the overall time for executing jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance in a daisy-chain setup.

B. Implement fault tolerance against EC2 instance failure since messages would remain in SQS and work can continue with recovery of EC2 instances; implement fault tolerance against SQS failure by backing up messages to S3.

C. Implement message passing between EC2 instances within a batch by exchanging messages through SQS.

D. Coordinate number of EC2 instances with number of job requests automatically thus Improving cost effectiveness.

E. Handle high priority jobs before lower priority jobs by assigning a priority metadata field to SQS messages.

Answer:

Explanation:


There are cases where a large number of batch jobs may need processing, and where the jobs may need to be re-prioritized.

For example, one such case is one where there are differences between different levels of services for unpaid users versus subscriber users (such as the time until publication) in services enabling, for example, presentation files to be uploaded for publication from a web browser. When the user uploads a presentation file, the conversion processes, for example, for publication are performed as batch processes on the system side, and the file is published after the conversion. It is then necessary to be able to assign the level of priority to the batch processes for each type of subscriber.

Explanation of the Cloud Solution/Pattern

A queue is used in controlling batch jobs. The queue need only be provided with priority numbers. Job requests are controlled by the queue, and the job requests in the queue are processed by a batch server. In Cloud computing, a highly reliable queue is provided as a service, which you can use to structure a highly reliable batch system with ease. You may prepare multiple queues depending on priority levels, with job requests put into the queues depending on their priority levels, to apply prioritization to batch processes. The performance (number) of batch servers corresponding to a queue must be in accordance with the priority level thereof.

Implementation

In AWS, the queue service is the Simple Queue Service (SQS). Multiple SQS queues may be prepared to prepare queues for individual priority levels (with a priority queue and a secondary queue). Moreover, you may also use the message Delayed Send function to delay process execution. Use SQS to prepare multiple queues for the individual priority levels.

Place those processes to be executed immediately (job requests) in the high priority queue. Prepare numbers of batch servers, for processing the job requests of the queues, depending on the priority levels.

Queues have a message "Delayed Send" function. You can use this to delay the time for starting a process.
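For example, here is a minimal boto3 sketch of the Delayed Send behaviour described above; the queue URL is a placeholder, and DelaySeconds can be at most 900 (15 minutes) per message.

```python
import boto3

sqs = boto3.client("sqs")

# Make this job invisible to consumers for 5 minutes after it is sent
# (queue URL is a placeholder).
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/secondary-queue",
    MessageBody="convert presentation 42",
    DelaySeconds=300,
)
```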

Configuration

Benefits

You can increase or decrease the number of servers for processing jobs to change automatically the processing speeds of the priority queues and secondary queues.

You can handle performance and service requirements through merely increasing or decreasing the number of EC2 instances used in job processing.

Even if an EC2 instance were to fail, the messages (jobs) would remain in the queue service, enabling processing to be continued immediately upon recovery of the EC2 instance, producing a system that is robust to failure.

Cautions

Depending on the balance between the number of EC2 instances for performing the processes and the number of messages that are queued, there may be cases where processing in the secondary queue may be completed first, so you need to monitor the processing speeds in the primary queue and the secondary queue.
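A hedged sketch of a batch-server loop for the pattern above: it drains the priority queue first and only then falls back to the secondary queue; the queue URLs and the process_job function are placeholders.

```python
import boto3

sqs = boto3.client("sqs")

PRIORITY_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/priority-queue"
SECONDARY_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/secondary-queue"


def process_job(body):
    """Placeholder for the actual batch-processing logic."""
    print("processing:", body)


def poll_once():
    # Check the priority queue first; fall back to the secondary queue
    # only when no high-priority job is waiting.
    for queue_url in (PRIORITY_QUEUE, SECONDARY_QUEUE):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=2)
        for msg in resp.get("Messages", []):
            process_job(msg["Body"])
            # Delete only after successful processing, so a failed batch
            # server leaves the job in the queue for another worker.
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])
            return True
    return False


while True:
    poll_once()
```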

Q9. A user is sending bulk emails using AWS SES. The emails are not reaching some of the targeted audience because they are not authorized by the ISPs. How can the user ensure that the emails are all delivered?

A. Send an email using DKIM with SES.

B. Send an email using SMTP with SES.

C. Open a ticket with AWS support to get it authorized with the ISP.

D. Authorize the ISP by sending emails from the development account. 

Answer: A

Explanation:

DomainKeys Identified Mail (DKIM) is a standard that allows senders to sign their email messages, and allows ISPs to use those signatures to verify that those messages are legitimate and have not been modified by a third party in transit.

Reference: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/dkim.html
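For illustration, a short boto3 sketch of setting up Easy DKIM for a sending domain; the domain name is a placeholder, and the returned tokens still have to be published as CNAME records in the domain's DNS.

```python
import boto3

ses = boto3.client("ses")

# Generate DKIM tokens for the sending domain (domain is a placeholder).
resp = ses.verify_domain_dkim(Domain="example.com")

# Each token becomes a CNAME record:
#   <token>._domainkey.example.com -> <token>.dkim.amazonses.com
for token in resp["DkimTokens"]:
    print(f"{token}._domainkey.example.com CNAME {token}.dkim.amazonses.com")
```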

Q10. You have set up an S3 bucket with a number of images in it and you have decided that you want anybody to be able to access these images, even anonymous users. To accomplish this you create a bucket policy. You will need to use an Amazon S3 bucket policy that specifies a ____ in the principal element, which means anyone can access the bucket.

A. hash tag (#)

B. anonymous user

C. wildcard (*)

D. S3 user 

Answer: C

Explanation:

You can use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket. You can then use the generated document to set your bucket policy by using the Amazon S3 console, by a number of third-party tools, or via your application.

You use an Amazon S3 bucket policy that specifies a wildcard (*) in the principal element, which means anyone can access the bucket. With anonymous access, anyone (including users without an AWS account) will be able to access the bucket.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/iam-troubleshooting.html#d0e20565
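As a sketch, the bucket policy below uses the wildcard principal to grant anonymous read access to every object in a bucket; the bucket name is a placeholder.

```python
import json

import boto3

s3 = boto3.client("s3")

bucket = "my-public-images"  # placeholder bucket name

# Principal "*" grants access to anyone, including anonymous users.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```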

Q11. A large real-estate brokerage is exploring the option of adding a cost-effective location-based alert to their existing mobile application. The application backend infrastructure currently runs on AWS. Users who opt in to this service will receive alerts on their mobile device regarding real-estate offers in proximity to their location. For the alerts to be relevant, delivery time needs to be in the low minute count; the existing mobile app has 5 million users across the US. Which one of the following architectural suggestions would you make to the customer?

A. The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances; DynamoDB will be used to store and retrieve relevant offers; EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.

B. Use AWS Direct Connect or VPN to establish connectivity with mobile carriers; EC2 instances will receive the mobile applications' location through the carrier connection; RDS will be used to store and retrieve relevant offers; EC2 instances will communicate with mobile carriers to push alerts back to the mobile application.

C. The mobile application will send device location using SQS; EC2 instances will retrieve the relevant offers from DynamoDB; AWS Mobile Push will be used to send offers to the mobile application.

D. The mobile application will send device location using AWS Mobile Push; EC2 instances will retrieve the relevant offers from DynamoDB; EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.

Answer: A

Q12. An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago.

What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?

A. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.

B. Use synchronous database master-slave replication between two availability zones.

C. Take hourly DB backups to EC2 Instance store volumes with transaction logs stored in S3 every 5 minutes.

D. Take 15 minute DB backups stored in Glacier with transaction logs stored in S3 every 5 minutes.

Answer: A

Q13. Does DynamoDB support in-place atomic updates?

A. Yes

B. No

C. It does support in-place non-atomic updates

D. It is not defined 

Answer: A

Explanation:

DynamoDB supports in-place atomic updates. 

Reference:

http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.AtomicCounters
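For example, a minimal boto3 sketch of an in-place atomic update using an ADD expression (an atomic counter); the table, key, and attribute names are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Atomically increment a counter attribute without a read-modify-write cycle
# (table, key, and attribute names are placeholders).
dynamodb.update_item(
    TableName="PageViews",
    Key={"PageId": {"S": "home"}},
    UpdateExpression="ADD ViewCount :inc",
    ExpressionAttributeValues={":inc": {"N": "1"}},
)
```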

Q14. SQL Server ____ store logins and passwords in the master database.

A. can be configured to but by default does not

B. doesn't

C. does 

Answer: C

Q15. Which of the below mentioned options is not available when an instance is launched by Auto Scaling with EC2 Classic?

A. Public IP

B. Elastic IP

C. Private DNS

D. Private IP 

Answer: B

Explanation:

Auto Scaling supports both EC2 classic and EC2-VPC. When an instance is launched as a part of EC2 classic, it will have the public IP and DNS as well as the private IP and DNS.

Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/GettingStartedTutorial.html
