AWS-Solution-Architect-Associate Premium Bundle

AWS Certified Solutions Architect - Associate Certification Exam

Last update: November 23, 2024

Amazon AWS-Solution-Architect-Associate Free Practice Questions

Q1. A company wants to review the security requirements of Glacier. Which of the below mentioned statements is true with respect to the AWS Glacier data security?

A. All data stored on Glacier is protected with AES-256 server-side encryption.

B. All data stored on Glacier is protected with AES-128 server-side encryption.

C. The user can set the server-side encryption flag to encrypt the data stored on Glacier.

D. The data stored on Glacier is not encrypted by default. 

Answer: A

Explanation:

For Amazon Web Services, all the data stored on Amazon Glacier is protected using server-side encryption. AWS generates a separate unique encryption key for each Amazon Glacier archive and encrypts the archive using AES-256. Each encryption key is itself encrypted using AES-256 with a master key that is stored in a secure location.

Reference: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf
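
For illustration, here is a minimal boto3 sketch of uploading an archive to Glacier (the vault and file names are hypothetical). Note that no encryption flag is passed anywhere: Glacier applies AES-256 server-side encryption automatically.

```python
import boto3

glacier = boto3.client("glacier")

# Hypothetical vault and file; Glacier encrypts the archive server-side
# with AES-256 automatically, so no encryption parameter exists here.
with open("backup.tar.gz", "rb") as archive:
    response = glacier.upload_archive(
        vaultName="my-vault",
        archiveDescription="nightly backup",
        body=archive,
    )

print(response["archiveId"])
```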

Q2. You have been using T2 instances as your CPU requirements have not been that intensive. However, you now start to think about larger instance types and start looking at M1 and M3 instances. You are a little confused as to the differences between them, as they both seem to have the same ratio of CPU and memory. Which statement below is incorrect as to why you would use one over the other?

A. M3 instances are less expensive than M1 instances.

B. M3 instances are configured with more swap memory than M1 instances.

C. M3 instances provide better, more consistent performance than M1 instances for most use-cases.

D. M3 instances also offer SSD-based instance storage that delivers higher I/O performance. 

Answer: B

Explanation:

Amazon EC2 allows you to set up and configure everything about your instances from your operating system up to your applications. An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your instance.

M1 and M3 Standard instances have the same ratio of CPU and memory; some reasons why you would use one over the other are given below.

M3 instances provide better, more consistent performance than M1 instances for most use-cases. M3 instances also offer SSD-based instance storage that delivers higher I/O performance.

M3 instances are also less expensive than M1 instances. Due to these reasons, we recommend M3 for applications that require general purpose instances with a balance of compute, memory, and network resources.

However, if you need more disk storage than what is provided in M3 instances, you may still find M1 instances useful for running your applications.

Reference: https://aws.amazon.com/ec2/faqs/

Q3. You're running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write to POSIX-compatible, block-based storage. You have 140TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place. What backup solution would be most appropriate for this use case?

A. Use Storage Gateway and configure it to use Gateway Cached volumes.

B. Configure your backup software to use S3 as the target for your data backups.

C. Configure your backup software to use Glacier as the target for your data backups.

D. Use Storage Gateway and configure it to use Gateway Stored volumes. 

Answer: A

Explanation:

Gateway-Cached Volume Architecture

Gateway-cached volumes let you use Amazon Simple Storage Service (Amazon S3) as your primary data storage while retaining frequently accessed data locally in your storage gateway. Gateway-cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway's cache and upload buffer storage.

Gateway-cached volumes can range from 1 GiB to 32 TiB in size and must be rounded to the nearest GiB. Each gateway configured for gateway-cached volumes can support up to 32 volumes for a total maximum storage volume of 1,024 TiB (1 PiB).

In the gateway-cached volume solution, AWS Storage Gateway stores all your on-premises application data in a storage volume in Amazon S3.


After you've installed the AWS Storage Gateway software appliance (the virtual machine, or VM) on a host in your data center and activated it, you can use the AWS Management Console to provision storage volumes backed by Amazon S3. You can also provision storage volumes programmatically using the AWS Storage Gateway API or the AWS SDK libraries. You then mount these storage volumes to your on-premises application servers as iSCSI devices.

You also allocate disks on-premises for the VM. These on-premises disks serve the following purposes:

Disks for use by the gateway as cache storage - As your applications write data to the storage volumes in AWS, the gateway initially stores the data on the on-premises disks referred to as cache storage before uploading the data to Amazon S3. The cache storage acts as the on-premises durable store for data that is waiting to upload to Amazon S3 from the upload buffer.

The cache storage also lets the gateway store your application's recently accessed data on-premises for low-latency access. If your application requests data, the gateway first checks the cache storage for the data before checking Amazon S3.

You can use the following guidelines to determine the amount of disk space to allocate for cache storage. Generally, you should allocate at least 20 percent of your existing file store size as cache storage. Cache storage should also be larger than the upload buffer. This latter guideline helps ensure cache storage is large enough to persistently hold all data in the upload buffer that has not yet been uploaded to Amazon S3.

Disks for use by the gateway as the upload buffer - To prepare for upload to Amazon S3, your gateway also stores incoming data in a staging area, referred to as an upload buffer. Your gateway uploads this buffer data over an encrypted Secure Sockets Layer (SSL) connection to AWS, where it is stored encrypted in Amazon S3.

You can take incremental backups, called snapshots, of your storage volumes in Amazon S3. These point-in-time snapshots are also stored in Amazon S3 as Amazon EBS snapshots. When you take a new snapshot, only the data that has changed since your last snapshot is stored. You can initiate snapshots on a scheduled or one-time basis. When you delete a snapshot, only the data not needed for any other snapshots is removed.

You can restore an Amazon EBS snapshot to a gateway storage volume if you need to recover a backup of your data. Alternatively, for snapshots up to 16 TiB in size, you can use the snapshot as a starting point for a new Amazon EBS volume. You can then attach this new Amazon EBS volume to an Amazon EC2 instance.

All gateway-cached volume data and snapshot data is stored in Amazon S3 encrypted at rest using server-side encryption (SSE). However, you cannot access this data with the Amazon S3 API or other tools such as the Amazon S3 console.
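
As a sketch of the provisioning described above, cached volumes can also be created through the Storage Gateway API; the boto3 call below is illustrative only, and every ARN, name, and size is a placeholder.

```python
import boto3

sgw = boto3.client("storagegateway")

# Create an S3-backed cached volume (1 GiB-32 TiB) and expose it as an
# iSCSI target; all identifiers below are hypothetical placeholders.
response = sgw.create_cached_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    VolumeSizeInBytes=10 * 1024**4,      # 10 TiB volume
    TargetName="backup-target",          # becomes part of the iSCSI target name
    NetworkInterfaceId="10.0.0.10",      # IP of the gateway VM's network interface
    ClientToken="unique-token-001",      # idempotency token for retries
)

print(response["VolumeARN"], response["TargetARN"])
```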

Q4. An organization has developed a mobile application which allows end users to capture a photo on their mobile device, and store it inside an application. The application internally uploads the data to AWS S3. The organization wants each user to be able to directly upload data to S3 using their Google ID. How will the mobile app allow this?

A. Use the AWS Web identity federation for mobile applications, and use it to generate temporary security credentials for each user.

B. It is not possible to connect to AWS S3 with a Google ID.

C. Create an IAM user every time a user registers with their Google ID and use IAM to upload files to S3.

D. Create a bucket policy with a condition which allows everyone to upload if the login ID has a Google part to it.

Answer: A

Explanation:

For Amazon Web Services, Web identity federation allows you to create cloud-backed mobile apps that use public identity providers, such as login with Facebook, Google, or Amazon. It creates temporary security credentials for each user, which can then be used to access AWS services such as S3.

Reference: http://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingWIF.html
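
A minimal boto3 sketch of this flow, assuming a hypothetical IAM role and bucket: the Google-issued OIDC token is exchanged for temporary credentials via STS, and those credentials are then used to upload directly to S3.

```python
import boto3

# AssumeRoleWithWebIdentity is an unsigned call, so no AWS credentials
# are needed on the device itself.
sts = boto3.client("sts")

google_id_token = "eyJ..."  # placeholder: OIDC token returned by Google Sign-In

creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/MobileUploadRole",  # hypothetical role
    RoleSessionName="mobile-user-session",
    WebIdentityToken=google_id_token,
    DurationSeconds=3600,
)["Credentials"]

# The temporary credentials are then used to upload directly to S3.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("photo.jpg", "my-app-bucket", "uploads/photo.jpg")
```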

Q5. How many types of block devices does Amazon EC2 support?

A. 2

B. 3

C. 4

D. 1

Answer: A

Q6. Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if it is required, you may need to pay for a consultant.

How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?

A. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2.

B. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2.

C. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.

D. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.

Answer: C
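
A hedged sketch of the transcoding step in option C, using boto3's Elastic Transcoder client; the pipeline ID, object keys, and HLS preset ID below are placeholders.

```python
import boto3

et = boto3.client("elastictranscoder")

# Submit a job that converts the uploaded MP4 into HLS segments; CloudFront
# then serves the segmented output from S3. All IDs and keys are hypothetical.
response = et.create_job(
    PipelineId="1111111111111-abcde1",
    Input={"Key": "uploads/training-video.mp4"},
    Outputs=[{
        "Key": "hls/training-video",
        "PresetId": "1351620000001-200010",  # example HLS preset id
        "SegmentDuration": "10",             # seconds per HLS segment
    }],
    OutputKeyPrefix="videos/",
)

print(response["Job"]["Id"])
```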

Q7. A _____ for a VPC is a collection of subnets (typically private) that you may want to designate for your backend RDS DB Instances.

A. DB Subnet Set

B. RDS Subnet Group

C. DB Subnet Group

D. DB Subnet Collection 

Answer: C

Explanation:

DB Subnet Groups are a set of subnets (one per Availability Zone of a particular region) designed for your DB instances that reside in a VPC. They make it easy to manage Multi-AZ deployments as well as the conversion from a Single-AZ to a Multi-AZ one.

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSVPC.html
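
For illustration, a minimal boto3 sketch of creating a DB subnet group (the subnet IDs are hypothetical; supply one subnet per Availability Zone):

```python
import boto3

rds = boto3.client("rds")

# One private subnet per AZ lets RDS place a Multi-AZ standby and fail over
# within the VPC; the subnet IDs here are hypothetical.
rds.create_db_subnet_group(
    DBSubnetGroupName="backend-db-subnet-group",
    DBSubnetGroupDescription="Private subnets for backend RDS instances",
    SubnetIds=["subnet-0a1b2c3d", "subnet-4e5f6a7b"],
)
```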

Q8. How are the EBS snapshots saved on Amazon S3?

A. Exponentially

B. Incrementally

C. EBS snapshots are not stored in Amazon S3

D. Decrementally 

Answer: B

Q9. Regarding Amazon Route 53, if your application is running on Amazon EC2 instances in two or more Amazon EC2 regions, and if you have more than one Amazon EC2 instance in one or more regions, you can use _____ to route traffic to the correct region and then use _____ to route traffic to instances within the region, based on weights that you specify.

A. weighted-based routing; alias resource record sets

B. latency-based routing; weighted resource record sets

C. weighted-based routing; weighted resource record sets

D. latency-based routing; alias resource record sets 

Answer: B

Explanation:

Regarding Amazon Route 53, if your application is running on Amazon EC2 instances in two or more Amazon EC2 regions, and if you have more than one Amazon EC2 instance in one or more regions, you can use latency-based routing to route traffic to the correct region and then use weighted resource record sets to route traffic to instances within the region based on weights that you specify.

Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Tutorials.html
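
A minimal boto3 sketch of the weighted half of this setup: two weighted A records for the same name inside one region (the zone ID, record name, and addresses are hypothetical).

```python
import boto3

r53 = boto3.client("route53")

# Two weighted records for the same name: instance-1 gets ~75% of traffic,
# instance-2 gets ~25%. Zone ID, name, and IPs are hypothetical.
r53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",
    ChangeBatch={"Changes": [
        {"Action": "CREATE", "ResourceRecordSet": {
            "Name": "us-east.app.example.com", "Type": "A",
            "SetIdentifier": "instance-1", "Weight": 3, "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}]}},
        {"Action": "CREATE", "ResourceRecordSet": {
            "Name": "us-east.app.example.com", "Type": "A",
            "SetIdentifier": "instance-2", "Weight": 1, "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.11"}]}},
    ]},
)
```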

Q10. Just when you thought you knew every possible storage option on AWS you hear someone mention Reduced Redundancy Storage (RRS) within Amazon S3. What is the ideal scenario to use Reduced Redundancy Storage (RRS)?

A. Huge volumes of data

B. Sensitive data

C. Non-critical or reproducible data

D. Critical data 

Answer: C

Explanation:

Reduced Redundancy Storage (RRS) is a new storage option within Amazon S3 that enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage. RRS provides a lower cost, less durable, highly available storage option that is designed to sustain the loss of data in a single facility.

RRS is ideal for non-critical or reproducible data.

For example, RRS is a cost-effective solution for sharing media content that is durably stored elsewhere. RRS also makes sense if you are storing thumbnails and other resized images that can be easily reproduced from an original image.

Reference: https://aws.amazon.com/s3/faqs/
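
A minimal boto3 sketch of storing a reproducible object, such as a thumbnail, under RRS (bucket and key names are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# A reproducible thumbnail is a good RRS candidate: if it is lost, it can
# be regenerated from the original. Bucket and keys are hypothetical.
with open("photo-small.jpg", "rb") as thumb:
    s3.put_object(
        Bucket="my-media-bucket",
        Key="thumbnails/photo-small.jpg",
        Body=thumb,
        StorageClass="REDUCED_REDUNDANCY",
    )
```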

Q11. You deployed your company website using Elastic Beanstalk and you enabled log file rotation to S3. An Elastic MapReduce job is periodically analyzing the logs on S3 to build a usage dashboard that you share with your CIO.

You recently improved the overall performance of the website by using CloudFront for dynamic content delivery, with your website as the origin.

After this architectural change, the usage dashboard shows that the traffic on your website dropped by an order of magnitude. How do you fix your usage dashboard?

A. Enable CloudFront to deliver access logs to S3 and use them as input of the Elastic MapReduce job.

B. Turn on CloudTrail and use trail log files on S3 as input of the Elastic MapReduce job.

C. Change your log collection process to use CloudWatch ELB metrics as input of the Elastic MapReduce job.

D. Use the Elastic Beanstalk "Rebuild Environment" option to update log delivery to the Elastic MapReduce job.

E. Use the Elastic Beanstalk "Restart App Server(s)" option to update log delivery to the Elastic MapReduce job.

Answer: A
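
A hedged boto3 sketch of option A, enabling access logging on an existing distribution (the distribution ID and log bucket are hypothetical):

```python
import boto3

cf = boto3.client("cloudfront")

dist_id = "E1EXAMPLE"                              # hypothetical distribution
cfg_resp = cf.get_distribution_config(Id=dist_id)
config = cfg_resp["DistributionConfig"]

# Point CloudFront access logs at an S3 bucket; the Elastic MapReduce job
# can then consume these logs instead of the now mostly cached-away origin logs.
config["Logging"] = {
    "Enabled": True,
    "IncludeCookies": False,
    "Bucket": "my-log-bucket.s3.amazonaws.com",
    "Prefix": "cloudfront/",
}

cf.update_distribution(
    Id=dist_id,
    DistributionConfig=config,
    IfMatch=cfg_resp["ETag"],   # ETag from the read is required for the update
)
```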

Q12. What is a placement group in Amazon EC2?

A. It is a group of EC2 instances within a single Availability Zone.

B. It is the edge location of your web content.

C. It is the AWS region where you run the EC2 instance of your web content.

D. It is a group used to span multiple Availability Zones. 

Answer: A

Explanation:

A placement group is a logical grouping of instances within a single Availability Zone.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
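
For illustration, a minimal boto3 sketch of creating a cluster placement group and launching instances into it (the AMI ID and instance type are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Placement groups are scoped to a single Availability Zone; instances
# launched into one get low-latency, high-throughput networking.
ec2.create_placement_group(GroupName="low-latency-group", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-12345678",                       # hypothetical AMI
    InstanceType="c4.large",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "low-latency-group"},
)
```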

Q13. You are configuring a new VPC for one of your clients for a cloud migration project, and only a public subnet will be in place. After you created your VPC, you created a new subnet and a new internet gateway, and attached your internet gateway to your VPC. When you launched your first instance into your VPC, you realized that you aren't able to connect to the instance, even though it is configured with an Elastic IP. What should be done to access the instance?

A. A route should be created with 0.0.0.0/0 as the destination and your internet gateway as the target.

B. Attach another ENI to the instance and connect via new ENI.

C. A NAT instance should be created and all traffic should be forwarded to NAT instance.

D. A NACL should be created that allows all outbound traffic. 

Answer: A

Explanation:

All internet-bound traffic should be routed via the Internet Gateway. So, a route should be created with 0.0.0.0/0 as the destination and your Internet Gateway as the target.

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html
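
A minimal boto3 sketch of adding that route (the route table and gateway IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Send all internet-bound traffic from the subnet's route table to the IGW.
ec2.create_route(
    RouteTableId="rtb-0a1b2c3d",          # route table of the public subnet
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0a1b2c3d",             # the attached internet gateway
)
```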

Q14. A client needs you to import some existing infrastructure from a dedicated hosting provider to AWS to try and save on the cost of running his current website. He also needs an automated process that manages backups, software patching, automatic failure detection, and recovery. You are aware that his existing set up currently uses an Oracle database. Which of the following AWS databases would be best for accomplishing this task?

A. Amazon RDS

B. Amazon Redshift

C. Amazon SimpleDB

D. Amazon ElastiCache

Answer: A

Explanation:

Amazon RDS gives you access to the capabilities of a familiar MySQL, Oracle, SQL Server, or PostgreSQL database engine. This means that the code, applications, and tools you already use today with your existing databases can be used with Amazon RDS. Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user-defined retention period and enabling point-in-time recovery.

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html
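
A hedged boto3 sketch of provisioning such an instance (identifiers, instance class, and sizes are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Oracle on RDS with the managed features the client asked for:
# automated backups, minor-version patching, and Multi-AZ failover.
rds.create_db_instance(
    DBInstanceIdentifier="client-oracle-db",
    Engine="oracle-se2",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,                   # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder secret
    MultiAZ=True,                           # automatic failure detection/recovery
    BackupRetentionPeriod=7,                # days of automated backups
    AutoMinorVersionUpgrade=True,           # automated software patching
)
```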

Q15. Your EBS volumes do not seem to be performing as expected and your team leader has requested you look into improving their performance. Which of the following is not a true statement relating to the performance of your EBS volumes?

A. Frequent snapshots provide a higher level of data durability and they will not degrade the performance of your application while the snapshot is in progress.

B. General Purpose (SSD) and Provisioned IOPS (SSD) volumes have a throughput limit of 128 MB/s per volume.

C. There is a relationship between the maximum performance of your EBS volumes, the amount of I/O you are driving to them, and the amount of time it takes for each transaction to complete.

D. There is a 5 to 50 percent reduction in IOPS when you first access each block of data on a newly created or restored EBS volume.

Answer: A

Explanation:

Several factors can affect the performance of Amazon EBS volumes, such as instance configuration, I/O characteristics, workload demand, and storage configuration.

Frequent snapshots provide a higher level of data durability, but they may slightly degrade the performance of your application while the snapshot is in progress. This trade-off becomes critical when you have data that changes rapidly. Whenever possible, plan for snapshots to occur during off-peak times in order to minimize workload impact.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html
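
As a small illustration of the off-peak guidance above, a boto3 sketch of taking a snapshot (the volume ID is hypothetical); running it from a scheduler during a low-traffic window limits the in-progress impact:

```python
import boto3

ec2 = boto3.client("ec2")

# Run this from a scheduled job during off-peak hours so the in-progress
# snapshot does not compete with peak application I/O.
ec2.create_snapshot(
    VolumeId="vol-0a1b2c3d",              # hypothetical EBS volume
    Description="off-peak nightly snapshot",
)
```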
