AWS-Certified-Solutions-Architect-Professional Premium Bundle

AWS-Certified-Solutions-Architect-Professional Certification Exam

4.5 (36,825 ratings)

Last update: November 21, 2024

Amazon AWS-Certified-Solutions-Architect-Professional Free Practice Questions

Q1. A web startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as the data store. The main web application runs best on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week. Recently, a new chat feature has been implemented in Node.js and needs to be integrated into the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle tool to simplify management of the application and reduce the deployment cycles. What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way? 

A. Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe 

B. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe 

C. Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe 

D. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes 

Answer:
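
To make option C concrete: one OpsWorks stack can host both components as separate layers, each with its own instance type, and a single custom recipe handles the chat deployment. Below is a minimal boto3 sketch; the names, ARNs, and the recipe name chat::deploy are hypothetical placeholders:

import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# One stack for the whole application (placeholder role/profile ARNs).
stack = opsworks.create_stack(
    Name="social-news",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
    UseCustomCookbooks=True,
)

# Layer 1: the existing memory-bound Java/Tomcat application servers (m2.xlarge).
opsworks.create_layer(
    StackId=stack["StackId"],
    Type="java-app",
    Name="Tomcat app servers",
    Shortname="tomcat",
)

# Layer 2: the CPU-bound Node.js chat module on compute-optimized instances,
# deployed through one custom Chef recipe.
opsworks.create_layer(
    StackId=stack["StackId"],
    Type="nodejs-app",
    Name="Node.js chat",
    Shortname="chat",
    CustomRecipes={"Deploy": ["chat::deploy"]},
)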

Q2. You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be recreated in the second region? Choose 2 answers 

A. Route53 Record Sets 

B. Launch Configurations 

C. EC2 Key Pairs 

D. Security Groups 

E. IAM Roles 

F. Elastic IP Addresses (EIP) 

Answer: A, E 

Q3. An AWS customer runs a public blogging website. The site's users upload two million blog entries a month. The average blog entry size is 200 KB. The access rate to blog entries drops to a negligible level 6 months after publication, and users rarely access a blog entry 1 year after publication. Additionally, blog entries have a high update rate during the first 3 months following publication, which drops to no updates after 6 months. The customer wants to use CloudFront to improve its users' load times. Which of the following recommendations would you make to the customer? 

A. Duplicate entries into two different buckets and create two separate CloudFront distributions where S3 access is restricted only to CloudFront identity. 

B. Create a CloudFront distribution with "US/Europe" price class for US/Europe users and a different CloudFront distribution with "All Edge Locations" for the remaining users. 

C. Create a CloudFront distribution with Restrict Viewer Access, Forward Query String set to true and minimum TTL of 0. 

D. Create a CloudFront distribution with S3 access restricted only to the CloudFront identity and partition the blog entries' locations in S3 according to the month they were uploaded, to be used with CloudFront behaviors. 

Answer:
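
Option D hinges on CloudFront cache behaviors matched to S3 key prefixes, so that months with no further updates can be cached aggressively while fresh months use a short TTL. Below is a sketch of the relevant fragment of a DistributionConfig; the prefix names, origin ID, and TTL values are hypothetical, and a real CacheBehavior requires a few more fields:

# Abbreviated CacheBehaviors fragment for a CloudFront DistributionConfig.
cache_behaviors = {
    "Quantity": 2,
    "Items": [
        {
            "PathPattern": "2014-01/*",          # an old month: no more updates
            "TargetOriginId": "blog-s3-origin",  # hypothetical S3 origin ID
            "ViewerProtocolPolicy": "allow-all",
            "MinTTL": 30 * 24 * 3600,            # cache for a long time
        },
        {
            "PathPattern": "2014-06/*",          # a recent month: high update rate
            "TargetOriginId": "blog-s3-origin",
            "ViewerProtocolPolicy": "allow-all",
            "MinTTL": 0,                         # revalidate frequently
        },
    ],
}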

Q4. Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? Choose 3 answers 

A. Using AWS Security Token Service to generate temporary tokens. 

B. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket. 

C. Tagging each folder in the bucket. 

D. Configuring an IAM role. 

E. Setting up a federation proxy or identity provider. 

Answer: A, D, E 
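
A minimal sketch of how the federation proxy could hand out temporary, folder-scoped credentials after authenticating a user against AD/LDAP; the bucket name corp-user-docs and the home/<user>/ folder layout are hypothetical:

import json
import boto3

sts = boto3.client("sts")

def credentials_for(corporate_user: str) -> dict:
    """Return temporary credentials limited to the user's own S3 folder."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [f"arn:aws:s3:::corp-user-docs/home/{corporate_user}/*"],
        }],
    }
    # The proxy's own IAM credentials sign this call; the returned keys
    # expire after an hour and can only touch the user's folder.
    token = sts.get_federation_token(
        Name=corporate_user[:32],
        Policy=json.dumps(policy),
        DurationSeconds=3600,
    )
    return token["Credentials"]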

Q5. You require the ability to analyze a large amount of data which is stored on Amazon S3 using Amazon Elastic MapReduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing. Which of the following would be the most cost-efficient way to reduce the runtime of the job? 

A. Create fewer, larger files in Amazon S3. 

B. Use smaller instances that have higher aggregate I/O performance. 

C. Create more, smaller files on Amazon S3. 

D. Add additional cc2.8xlarge instances by introducing a task group. 

Answer:

Q6. A web company is looking to implement an external payment service into their highly available application deployed in a VPC. Their application EC2 instances are behind a public-facing ELB. Auto Scaling is used to add additional instances as traffic increases. Under normal load the application runs 2 instances in the Auto Scaling group, but at peak it can scale to 3x in size. The application instances need to communicate with the payment service over the Internet, which requires whitelisting of all public IP addresses used to communicate with it. A maximum of 4 whitelisted IP addresses are allowed at a time and can be added through an API. How should they architect their solution? 

A. Whitelist the VPC Internet Gateway Public IP and route payment requests through the Internet Gateway. 

B. Automatically assign public IP addresses to the application instances in the Auto Scaling group and run a script on boot that adds each instance's public IP address to the payment validation whitelist API. 

C. Route payment requests through two NAT instances set up for high availability and whitelist the Elastic IP addresses attached to the NAT instances. 

D. Whitelist the ELB IP addresses and route payment requests from the Application servers through the ELB. 

Answer:
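
With the NAT-instance approach in option C, only the two Elastic IPs ever reach the payment provider, no matter how far the Auto Scaling group scales. A rough sketch; the NAT instance IDs and the whitelist registration call are hypothetical:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One Elastic IP per NAT instance; these two addresses are the only ones
# that need to be added to the payment provider's whitelist.
for nat_instance_id in ("i-0aaa11111111111aa", "i-0bbb22222222222bb"):
    eip = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        InstanceId=nat_instance_id,
        AllocationId=eip["AllocationId"],
    )
    # register_with_payment_whitelist(eip["PublicIp"])  # hypothetical partner API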

Q7. You are implementing a URL whitelisting system for a company that wants to restrict outbound HTTP/S connections to specific domains from their EC2-hosted applications. You deploy a single EC2 instance running proxy software and configure it to accept traffic from all subnets and EC2 instances in the VPC. You configure the proxy to only pass through traffic to domains that you define in its whitelist configuration. You have a nightly maintenance window of 10 minutes where all instances fetch new software updates. Each update is about 200MB in size and there are 500 instances in the VPC that routinely fetch updates. After a few days you notice that some machines are failing to successfully download some, but not all, of their updates within the maintenance window. The download URLs used for these updates are correctly listed in the proxy's whitelist configuration and you are able to access them manually using a web browser on the instances. What might be happening? Choose 2 answers 

A. You are running the proxy on an undersized EC2 instance type so network throughput is not sufficient for all instances to download their updates in time 

B. You are running the proxy on a sufficiently-sized EC2 instance in a private subnet and its network throughput is being throttled by a NAT running on an undersized EC2 instance 

C. The route table for the subnets containing the affected EC2 instances is not configured to direct network traffic for the software update locations to the proxy 

D. You have not allocated enough storage to the EC2 instance running the proxy so the network buffer is filling up, causing some requests to fail 

E. You are running the proxy in a public subnet but have not allocated enough EIPs to support the needed network throughput through the Internet Gateway (IGW) 

Answer: A, C 

Q8. You've been brought in as a solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a 3-tier VPC. 

The configuration is as follows: 

VPC: vpc-2f8bc447 

IGW: igw-2d8bc445 

NACL: acl-208bc448 

Subnets: 

Web servers: subnet-258bc44d 

Application servers: subnet-248bc44c 

Database servers: subnet-9189c6f9 

Route Tables: rtb-218bc449, rtb-238bc44b 

Associations: subnet-258bc44d : rtb-218bc449, subnet-248bc44c : rtb-238bc44b, subnet-9189c6f9 : rtb-238bc44b 

You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the Internet. Application and database servers cannot have direct access to the Internet. Which configuration below will allow you to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet? 

A. Create a bastion and NAT instance in subnet-258bc44d, and add a route from rtb-238bc44b to the NAT instance. 

B. Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c. 

C. Create a bastion and NAT instance in subnet-248bc44c, and add a route from rtb-238bc44b to subnet-258bc44d. 

D. Create a bastion and NAT instance in subnet-258bc44d, add a route from rtb-238bc44b to igw-2d8bc445, and a new NACL that allows access between subnet-258bc44d and subnet-248bc44c. 

Answer:
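
If a bastion and NAT instance live in the public web subnet (option A), the private route table only needs a default route pointing at the NAT instance; a sketch with a hypothetical NAT instance ID:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Default route for rtb-238bc44b (application and database subnets) through
# a NAT instance running in subnet-258bc44d.
ec2.create_route(
    RouteTableId="rtb-238bc44b",
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId="i-0123456789abcdef0",
)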

Q9. A company is storing data on Amazon Simple Storage Service (S3). The company's security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? Choose 3 answers 

A. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys.

B. Use Amazon S3 server-side encryption with customer-provided keys.

C. Use Amazon S3 server-side encryption with EC2 key pair.

D. Use Amazon S3 bucket policies to restrict access to the data at rest.

E. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key.

F. Use SSL to encrypt the data while in transit to Amazon S3.

Answer: A, B, E 
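
The two server-side options differ only in who supplies the encryption key; a minimal boto3 sketch (bucket name, object keys, and the KMS key alias are hypothetical):

import os
import boto3

s3 = boto3.client("s3")

# Option A: SSE-KMS, S3 encrypts the object with an AWS KMS managed key.
s3.put_object(
    Bucket="corp-data",
    Key="reports/2024.csv",
    Body=b"...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/corp-data-key",
)

# Option B: SSE-C, the customer supplies the key on every request and S3
# encrypts with it without storing it.
s3.put_object(
    Bucket="corp-data",
    Key="reports/2024-sse-c.csv",
    Body=b"...",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=os.urandom(32),   # demo only; a real key must be kept and reused
)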

Q10. A company is running a batch analysis every hour on their main transactional DB, running on an RDS MySQL instance, to populate their central data warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as much as possible? 

A. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard. 

B. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard. 

C. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard. 

D. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard. 

Answer:
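
Option A separates the two concerns: a read replica absorbs the hourly batch reads so the primary stays responsive, and SNS automates the dashboard notification. A rough boto3 sketch; the DB identifiers and topic ARN are hypothetical:

import boto3

rds = boto3.client("rds", region_name="us-east-1")
sns = boto3.client("sns", region_name="us-east-1")

# Read replica dedicated to the batch analysis.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="transactional-analytics-replica",
    SourceDBInstanceIdentifier="transactional-prod",
)

# After the batch completes, notify the on-premises dashboard system
# (SNS can deliver to an HTTP/HTTPS endpoint or an email address it monitors).
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:dashboard-refresh",
    Message="Data warehouse load complete; refresh the dashboard.",
)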

Q11. You are the new IT architect in a company that operates a mobile sleep tracking application. When activated at night, the mobile app sends collected data points of 1 kilobyte every 5 minutes to your backend. The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table. Every morning, you scan the table to extract and aggregate last night's data on a per-user basis, and store the results in Amazon S3. Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app. Currently you have around 100k users, mostly based in North America. You have been tasked to optimize the architecture of the backend system to lower cost. What would you recommend? Choose 2 answers 

A. Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3. 

B. Write data directly into an Amazon Redshift cluster replacing both Amazon DynamoDB and Amazon S3. 

C. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput. 

D. Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput. 

E. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3. 

Answer: A, D 

Q12. Your customer wants to consolidate their log streams (access logs, application logs, security logs, etc.) into a single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer's requirements? 

A. Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics to the logs 

B. Send all the log events to Amazon SQS, set up an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics 

C. Set up an Auto Scaling group of EC2 syslogd servers, store the logs on S3, use EMR to apply heuristics on the logs 

D. Send all the log events to Amazon Kinesis, develop a client process to apply heuristics on the logs 

Answer:

Q13. You require the ability to analyze a customer's clickstream data on a website, so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site, to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data? 

A. Log clicks in web logs by URL, store to Amazon S3, and then analyze with Elastic MapReduce. 

B. Publish web clicks by session to an Amazon SQS queue; then periodically drain these events to Amazon RDS and analyze with SQL. 

C. Push web clicks by session to Amazon Kinesis, then analyze behavior using Kinesis workers. 

D. Write click events directly to Amazon Redshift, and then analyze with SQL. 

Answer:
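
With the Kinesis approach in option C, the web tier pushes each click as it happens and worker processes consume the stream in near real time to adjust page layouts. A minimal producer sketch; the stream name and event fields are hypothetical:

import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def record_click(session_id: str, page: str, ad_id: str = "") -> None:
    """Push one click event; Kinesis workers read and react within seconds."""
    event = {"session": session_id, "page": page, "ad": ad_id, "ts": time.time()}
    kinesis.put_record(
        StreamName="clickstream",
        Data=json.dumps(event).encode(),
        PartitionKey=session_id,   # keeps one session's clicks on one shard, in order
    )

record_click("sess-42", "/home", ad_id="ad-981")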

Q14. You are migrating a legacy client-server application to AWS. The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database. During the migration you can change the application code, but you have to file a change request. How would you implement the architecture on AWS in order to maximize scalability and high availability? 

A. File a change request to implement Alias Resource support in the application. Use Route 53 Alias Resource Record to distribute load on two application servers in different AZs. 

B. File a change request to implement Latency Based Routing support in the application. Use Route 53 with Latency Based Routing enabled to distribute load on two application servers in different AZs. 

C. File a change request to implement Cross-Zone support in the application. Use an ELB with a TCP listener and Cross-Zone Load Balancing enabled to distribute load on two application servers in different AZs. 

D. File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs. 

Answer:
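
Enabling Proxy Protocol on a Classic Load Balancer TCP listener (option D) is a two-step policy change; a sketch assuming a load balancer named legacy-app and a backend port of 3000:

import boto3

elb = boto3.client("elb", region_name="us-east-1")

# 1. Create a Proxy Protocol policy on the Classic Load Balancer.
elb.create_load_balancer_policy(
    LoadBalancerName="legacy-app",
    PolicyName="EnableProxyProtocol",
    PolicyTypeName="ProxyProtocolPolicyType",
    PolicyAttributes=[{"AttributeName": "ProxyProtocol", "AttributeValue": "true"}],
)

# 2. Attach the policy to the backend port so the original client IP is
#    prepended to each TCP connection for the application servers to parse.
elb.set_load_balancer_policies_for_backend_server(
    LoadBalancerName="legacy-app",
    InstancePort=3000,
    PolicyNames=["EnableProxyProtocol"],
)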

Q15. A web company is looking to implement an intrusion detection and prevention system into their deployed VPC. This platform should have the ability to scale to thousands of instances running inside of the VPC. How should they architect their solution to achieve these goals? 

A. Configure each host with an agent that collects all network traffic and sends that traffic to the IDS/IPS platform for inspection. 

B. Configure an instance with monitoring software and the elastic network interface (ENI) set to promiscuous mode packet sniffing to see all traffic across the VPC. 

C. Create a second VPC and route all traffic from the primary application VPC through the second VPC where the scalable virtualized IDS/IPS platform resides. 

D. Configure servers running in the VPC using the host-based "route" commands to send all traffic through the platform to a scalable virtualized IDS/IPS. 

Answer:

Q16. You are designing an intrusion detection/prevention (IDS/IPS) solution for a customer web application in a single VPC. You are considering the options for implementing IDS/IPS protection for traffic coming from the Internet. Which of the following options would you consider? Choose 2 answers 

A. Implement IDS/IPS agents on each instance running in the VPC. 

B. Implement Elastic Load Balancing with SSL listeners in front of the web applications. 

C. Implement a reverse proxy layer in front of web servers, and configure IDS/IPS agents on each reverse proxy server. 

D. Configure an instance in each subnet to switch its network interface card to promiscuous mode and analyze network traffic. 

Answer: A, C 
