Amazon-Web-Services SAP-C01 Free Practice Questions
We provide accurate Amazon-Web-Services SAP-C01 questions, which are the best preparation for clearing the SAP-C01 test and getting certified as an AWS Certified Solutions Architect - Professional. The SAP-C01 Questions & Answers cover all the knowledge points of the real SAP-C01 exam. Crack your Amazon-Web-Services SAP-C01 exam with the latest dumps, guaranteed!
Read and test yourself with the free Amazon-Web-Services SAP-C01 dumps questions below.
NEW QUESTION 1
A company runs a legacy system on a single m4.2xlarge Amazon EC2 instance with Amazon EBS storage. The EC2 instance runs both the web server and a self-managed Oracle database. A snapshot is made of the EBS volume every 12 hours, and an AMI was created from the fully configured EC2 instance.
A recent event that terminated the EC2 instance led to several hours of downtime. The application was successfully launched from the AMI, but the age of the EBS snapshot and the repair of the database resulted in the loss of 8 hours of data. The system was also down for 4 hours while the Systems Operators manually performed these processes.
What architectural changes will minimize downtime and reduce the chance of lost data?
- A. Create an Amazon CloudWatch alarm to automatically recover the instance. Create a script that will check and repair the database upon reboot. Subscribe the Operations team to the Amazon SNS message generated by the CloudWatch alarm.
- B. Run the application on m4.xlarge EC2 instances behind an Elastic Load Balancer/Application Load Balancer. Run the EC2 instances in an Auto Scaling group across multiple Availability Zones with a minimum instance count of two. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.
- C. Run the application on m4.2xlarge EC2 instances behind an Elastic Load Balancer/Application Load Balancer. Run the EC2 instances in an Auto Scaling group across multiple Availability Zones with a minimum instance count of one. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.
- D. Increase the web server instance count to two m4.xlarge instances and use Amazon Route 53 round-robin load balancing to spread the load. Enable Route 53 health checks on the web servers. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.
Answer: B
Explanation:
This option ensures that there are at least two EC2 instances, each in a different Availability Zone, and that the database spans multiple AZs. Hence it meets all the criteria.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
NEW QUESTION 2
A photo-sharing and publishing company receives 10,000 to 150,000 images daily. The company receives the images from multiple suppliers and users registered with the service. The company is moving to AWS and wants to enrich the existing metadata by adding data using Amazon Rekognition.
The following is an example of the additional data:
As part of the cloud migration program, the company uploaded existing image data to Amazon S3 and told users to upload images directly to Amazon S3.
What should the Solutions Architect do to support these requirements?
- A. Trigger AWS Lambda based on an S3 event notification to create additional metadata using Amazon Rekognition. Use Amazon DynamoDB to store the metadata and Amazon ES to create an index. Use a web front-end to provide search capabilities backed by Amazon ES.
- B. Use Amazon Kinesis to stream data based on an S3 event. Use an application running in Amazon EC2 to extract metadata from the images. Then store the data on Amazon DynamoDB and Amazon CloudSearch and create an index. Use a web front-end with search capabilities backed by CloudSearch.
- C. Start an Amazon SQS queue based on S3 event notifications. Then have Amazon SQS send the metadata information to Amazon DynamoDB. An application running on Amazon EC2 extracts data from Amazon Rekognition using the API and adds data to DynamoDB and Amazon ES. Use a web front-end to provide search capabilities backed by Amazon ES.
- D. Trigger AWS Lambda based on an S3 event notification to create additional metadata using Amazon Rekognition. Use Amazon RDS MySQL Multi-AZ to store the metadata information and use Lambda to create an index. Use a web front-end with search capabilities backed by Lambda.
Answer: A
Explanation:
https://github.com/aws-samples/lambda-refarch-imagerecognition
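The flow in answer A can be sketched as a short Lambda handler. This is illustrative only: the `ImageMetadata` table name, the item attribute names, and the 80% confidence threshold are assumptions, not part of the question.

```python
def build_metadata_item(bucket, key, rekognition_response, min_confidence=80.0):
    """Flatten a Rekognition detect_labels response into a DynamoDB-style item."""
    labels = [
        label["Name"]
        for label in rekognition_response.get("Labels", [])
        if label.get("Confidence", 0.0) >= min_confidence
    ]
    return {
        "image_id": f"{bucket}/{key}",  # partition key (hypothetical schema)
        "labels": labels,
    }

def lambda_handler(event, context):
    """Entry point for the S3 event notification (needs AWS credentials to run)."""
    import boto3  # deferred import so the pure helper above works offline
    rekognition = boto3.client("rekognition")
    table = boto3.resource("dynamodb").Table("ImageMetadata")  # hypothetical name
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        response = rekognition.detect_labels(
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MinConfidence=80,
        )
        table.put_item(Item=build_metadata_item(bucket, key, response))
```

A separate indexing step (for example, a DynamoDB Stream feeding Amazon ES) would then keep the search index in sync, as the reference architecture linked above does.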
NEW QUESTION 3
A large multinational company runs a timesheet application on AWS that is used by staff across the world. The application runs on Amazon EC2 instances in an Auto Scaling group behind an Elastic Load Balancing (ELB) load balancer, and stores data in an Amazon RDS MySQL Multi-AZ database instance.
The CFO is concerned about the impact on the business if the application is not available. The application must not be down for more than two hours, but the solution must be as cost-effective as possible.
How should the Solutions Architect meet the CFO’s requirements while minimizing data loss?
- A. In another region, configure a read replica and create a copy of the infrastructure. When an issue occurs, promote the read replica and configure it as an Amazon RDS Multi-AZ database instance. Update the DNS to point to the other region’s ELB.
- B. Configure a 1-day window of 60-minute snapshots of the Amazon RDS Multi-AZ database instance. Create an AWS CloudFormation template of the application infrastructure that uses the latest snapshot. When an issue occurs, use the AWS CloudFormation template to create the environment in another region. Update the DNS record to point to the other region’s ELB.
- C. Configure a 1-day window of 60-minute snapshots of the Amazon RDS Multi-AZ database instance which is copied to another region. Create an AWS CloudFormation template of the application infrastructure that uses the latest copied snapshot. When an issue occurs, use the AWS CloudFormation template to create the environment in another region. Update the DNS record to point to the other region’s ELB.
- D. Configure a read replica in another region. Create an AWS CloudFormation template of the application infrastructure. When an issue occurs, promote the read replica and configure it as an Amazon RDS Multi-AZ database instance, and use the AWS CloudFormation template to create the environment in another region using the promoted Amazon RDS instance. Update the DNS record to point to the other region’s ELB.
Answer: D
NEW QUESTION 4
A media storage application uploads user photos to Amazon S3 for processing. End users are reporting that some uploaded photos are not being processed properly. The Application Developers trace the logs and find that AWS Lambda is experiencing execution issues when thousands of users are on the system simultaneously. Issues are caused by:
- Limits around concurrent executions.
- The performance of Amazon DynamoDB when saving data.
Which actions can be taken to increase the performance and reliability of the application? (Choose two.)
- A. Evaluate and adjust the read capacity units (RCUs) for the DynamoDB tables.
- B. Evaluate and adjust the write capacity units (WCUs) for the DynamoDB tables.
- C. Add an Amazon ElastiCache layer to increase the performance of Lambda functions.
- D. Configure a dead letter queue that will reprocess failed or timed-out Lambda functions.
- E. Use S3 Transfer Acceleration to provide lower-latency access to end users.
Answer: BD
Explanation:
B:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.h
D: https://aws.amazon.com/blogs/compute/robust-serverless-application-design-with-aws-lambda-dlq/c
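The WCU adjustment in option B comes down to simple arithmetic: one standard WCU covers one write per second for an item up to 1 KB, and larger items round up to the next whole kilobyte. A quick sizing helper:

```python
import math

def required_wcu(writes_per_second: int, item_size_kb: float) -> int:
    # One standard WCU = one write/sec of an item up to 1 KB;
    # larger items consume ceil(size_kb) WCUs per write.
    return writes_per_second * math.ceil(item_size_kb)

# e.g. 500 photo-metadata writes/sec at 3.5 KB per item:
print(required_wcu(500, 3.5))  # -> 2000
```

If provisioned WCUs fall short of this figure, writes get throttled, which is exactly the Lambda-side failure mode the question describes.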
NEW QUESTION 5
A company that is new to AWS reports it has exhausted its service limits across several accounts that are on the Basic Support plan. The company would like to prevent this from happening in the future.
What is the MOST efficient way of monitoring and managing all service limits in the company’s accounts?
- A. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, and provide notifications using Amazon SNS if the limits are close to exceeding the threshold.
- B. Reach out to AWS Support to proactively increase the limits across all accounts. That way, the customer avoids creating and managing infrastructure just to raise the service limits.
- C. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, and programmatically increase the limits that are close to exceeding the threshold.
- D. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, and use Amazon SNS for notifications if a limit is close to exceeding the threshold. Ensure that the accounts are using the AWS Business Support plan at a minimum.
Answer: D
Explanation:
https://github.com/awslabs/aws-limit-monitor https://aws.amazon.com/solutions/limit-monitor/
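Trusted Advisor's service-limit checks are read through the AWS Support API, which is only available on the Business and Enterprise Support plans; that is why answer D requires at least the Business plan. A minimal sketch (the filter is pure Python; the API call itself needs credentials and a qualifying support plan):

```python
def service_limit_checks(checks):
    """Keep only Trusted Advisor check descriptions in the service-limits category."""
    return [check["id"] for check in checks if check.get("category") == "service_limits"]

def fetch_limit_check_ids():
    """Requires AWS credentials on a Business/Enterprise Support account (sketch)."""
    import boto3  # deferred import so the helper above stays testable offline
    support = boto3.client("support", region_name="us-east-1")
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    return service_limit_checks(checks)
```

A scheduled Lambda can iterate these check IDs, pull each check's result, and publish to SNS when usage approaches a limit, which is what the AWS Limit Monitor solution linked above automates.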
NEW QUESTION 6
An on-premises application will be migrated to the cloud. The application consists of a single Elasticsearch virtual machine with data source feeds from local systems that will not be migrated, and a Java web application on Apache Tomcat running on three virtual machines. The Elasticsearch server currently uses 1 TB of its 16 TB of available storage, and the web application is updated every 4 months. Multiple users access the web application from the Internet. There is a 10 Gbps AWS Direct Connect connection established, and the application can be migrated over a scheduled 48-hour change window.
Which strategy will have the LEAST impact on the Operations staff after the migration?
- A. Create an Elasticsearch server on Amazon EC2 right-sized with 2 TB of Amazon EBS and a public AWS Elastic Beanstalk environment for the web application. Pause the data sources, export the Elasticsearch index from on premises, and import it into the EC2 Elasticsearch server. Move data source feeds to the new Elasticsearch server and move users to the web application.
- B. Create an Amazon ES cluster for Elasticsearch and a public AWS Elastic Beanstalk environment for the web application. Use AWS DMS to replicate Elasticsearch data. When replication has finished, move data source feeds to the new Amazon ES cluster endpoint and move users to the new web application.
- C. Use AWS SMS to replicate the virtual machines into AWS. When the migration is complete, pause the data source feeds and start the migrated Elasticsearch and web application instances. Place the web application instances behind a public Elastic Load Balancer. Move the data source feeds to the new Elasticsearch server and move users to the new web Application Load Balancer.
- D. Create an Amazon ES cluster for Elasticsearch and a public AWS Elastic Beanstalk environment for the web application. Pause the data source feeds, export the Elasticsearch index from on premises, and import it into the Amazon ES cluster. Move the data source feeds to the new Amazon ES cluster endpoint and move users to the new web application.
Answer: D
NEW QUESTION 7
A company is running a web application with On-Demand Amazon EC2 instances in Auto Scaling groups that scale dynamically based on custom metrics. After extensive testing, the company determines that the m5.2xlarge instance size is optimal for the workload. Application data is stored in db.r4.4xlarge Amazon RDS instances that are confirmed to be optimal. The traffic to the web application spikes randomly during the day.
What other cost-optimization methods should the company implement to further reduce costs without impacting the reliability of the application?
- A. Double the instance count in the Auto Scaling groups and reduce the instance size to m5.large.
- B. Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running.
- C. Reduce the RDS instance size to db.r4.xlarge and add five equivalently sized read replicas to provide reliability.
- D. Reserve capacity for all EC2 instances and leverage Spot Instance pricing for the RDS database.
Answer: B
NEW QUESTION 8
A company has a data center that must be migrated to AWS as quickly as possible. The data center has a 500 Mbps AWS Direct Connect link and a separate, fully available 1 Gbps ISP connection. A Solutions Architect must transfer 20 TB of data from the data center to an Amazon S3 bucket.
What is the FASTEST way to transfer the data?
- A. Upload the data to the S3 bucket using the existing DX link.
- B. Send the data to AWS using the AWS Import/Export service.
- C. Upload the data using an 80 TB AWS Snowball device.
- D. Upload the data to the S3 bucket using S3 Transfer Acceleration.
Answer: D
Explanation:
https://aws.amazon.com/s3/faqs/
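The answer hinges on transfer-time arithmetic. Ignoring protocol overhead, 20 TB moves in roughly 44 hours over the 1 Gbps ISP link (which S3 Transfer Acceleration can use) versus roughly 89 hours over the 500 Mbps DX link, while a Snowball round trip typically takes days:

```python
def transfer_hours(terabytes: float, link_mbps: float) -> float:
    # Decimal TB to bits, divided by link speed in bits/sec, converted to hours.
    bits = terabytes * 1e12 * 8
    return bits / (link_mbps * 1e6) / 3600

print(round(transfer_hours(20, 500), 1))   # DX link:  ~88.9 hours
print(round(transfer_hours(20, 1000), 1))  # ISP link: ~44.4 hours
```

Real throughput would be somewhat lower than line rate, but the ordering between the options does not change.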
NEW QUESTION 9
A large company is migrating its entire IT portfolio to AWS. Each business unit in the company has a standalone AWS account that supports both development and test environments. New accounts to support production workloads will be needed soon.
The Finance department requires a centralized method for payment but must maintain visibility into each group’s spending to allocate costs.
The Security team requires a centralized mechanism to control IAM usage in all the company’s accounts.
What combination of the following options meets the company’s needs with LEAST effort? (Choose two.)
- A. Use a collection of parameterized AWS CloudFormation templates defining common IAM permissions that are launched into each account. Require all new and existing accounts to launch the appropriate stacks to enforce the least privilege model.
- B. Use AWS Organizations to create a new organization from a chosen payer account and define an organizational unit hierarchy. Invite the existing accounts to join the organization and create new accounts using Organizations.
- C. Require each business unit to use its own AWS accounts. Tag each AWS account appropriately and enable Cost Explorer to administer chargebacks.
- D. Enable all features of AWS Organizations and establish appropriate service control policies that filter IAM permissions for sub-accounts.
- E. Consolidate all of the company’s AWS accounts into a single AWS account. Use tags for billing purposes and IAM’s Access Advisor feature to enforce the least privilege model.
Answer: BD
NEW QUESTION 10
A company is using AWS CloudFormation to deploy its infrastructure. The company is concerned that, if a production CloudFormation stack is deleted, important data stored in Amazon RDS databases or Amazon EBS volumes might also be deleted.
How can the company prevent users from accidentally deleting data in this way?
- A. Modify the CloudFormation templates to add a DeletionPolicy attribute to RDS and EBS resources.
- B. Configure a stack policy that disallows the deletion of RDS and EBS resources.
- C. Modify IAM policies to deny deleting RDS and EBS resources that are tagged with an “aws:cloudformation:stack-name” tag.
- D. Use AWS Config rules to prevent deleting RDS and EBS resources.
Answer: A
Explanation:
With the DeletionPolicy attribute you can preserve or (in some cases) back up a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default. To keep a resource when its stack is deleted, specify Retain for that resource. You can use Retain for any resource. For example, you can retain a nested stack, Amazon S3 bucket, or EC2 instance so that you can continue to use or modify those resources after you delete their stacks.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
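A minimal template fragment showing the attribute (resource names are placeholders, and the omitted properties would be filled in for a real stack):

```yaml
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot   # take a final DB snapshot instead of deleting
    Properties:
      # ... engine, storage, credentials ...
  DataVolume:
    Type: AWS::EC2::Volume
    DeletionPolicy: Retain     # keep the volume when the stack is deleted
    Properties:
      # ... size, Availability Zone ...
```

For RDS instances, `Snapshot` is often preferable to `Retain` because it preserves the data without leaving a running (billed) instance behind.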
NEW QUESTION 11
A company wants to allow its Marketing team to perform SQL queries on customer records to identify market segments. The data is spread across hundreds of files. The records must be encrypted in transit and at rest. The Team Manager must have the ability to manage users and groups, but no team members should have access to services or resources not required for the SQL queries. Additionally, Administrators need to audit the queries made and receive notifications when a query violates rules defined by the Security team.
AWS Organizations has been used to create a new account and an AWS IAM user with administrator permissions for the Team Manager.
Which design meets these requirements?
- A. Apply a service control policy (SCP) that allows access to IAM, Amazon RDS, and AWS CloudTrail. Load customer records in Amazon RDS MySQL and train users to execute queries using the AWS CLI. Stream the query logs to Amazon CloudWatch Logs from the RDS database instance. Use a subscription filter with AWS Lambda functions to audit and alarm on queries against personal data.
- B. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer record files in Amazon S3 and train users to execute queries using the CLI via Athena. Analyze CloudTrail events to audit and alarm on queries against personal data.
- C. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon DynamoDB, and AWS CloudTrail. Store customer records in DynamoDB and train users to execute queries using the AWS CLI. Enable DynamoDB streams to track the queries that are issued and use an AWS Lambda function for real-time monitoring and alerting.
- D. Apply a service control policy (SCP) that allows access to IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer records as files in Amazon S3 and train users to leverage the Amazon S3 Select feature and execute queries using the AWS CLI. Enable S3 object-level logging and analyze CloudTrail events to audit and alarm on queries against personal data.
Answer: D
NEW QUESTION 12
A retail company has a custom .NET web application running on AWS that uses Microsoft SQL Server for the database. The application servers maintain a user's session locally.
Which combination of architecture changes are needed to ensure all tiers of the solution are highly available? (Select THREE.)
- A. Refactor the application to store the user's session in Amazon ElastiCache. Use Application Load Balancers to distribute the load between application instances.
- B. Set up the database to generate hourly snapshots using Amazon EBS. Configure an Amazon CloudWatch Events rule to launch a new database instance if the primary one fails.
- C. Migrate the database to Amazon RDS for SQL Server. Configure the RDS instance to use a Multi-AZ deployment.
- D. Move the .NET content to an Amazon S3 bucket. Configure the bucket for static website hosting.
- E. Put the application instances in an Auto Scaling group. Configure the Auto Scaling group to create new instances if an instance becomes unhealthy.
- F. Deploy Amazon CloudFront in front of the application tier. Configure CloudFront to serve content from healthy application instances only.
Answer: BDE
NEW QUESTION 13
A company has deployed an application to multiple environments in AWS, including production and testing. The company has separate accounts for production and testing, and users are allowed to create additional application users for team members or services, as needed. The Security team has asked the Operations team for better isolation between production and testing with centralized controls on security credentials and improved management of permissions between environments.
Which of the following options would MOST securely accomplish this goal?
- A. Create a new AWS account to hold user and service accounts, such as an identity account. Create users and groups in the identity account. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
- B. Modify permissions in the production and testing accounts to limit creating new IAM users to members of the Operations team. Set a strong IAM password policy on each account. Create new IAM users and groups in each account to limit developer access to just the services required to complete their job function.
- C. Create a script that runs on each account that checks user accounts for adherence to a security policy. Disable any user or service accounts that do not comply.
- D. Create all user accounts in the production account. Create roles for access in the production account and testing account. Grant cross-account access from the production account to the testing account.
Answer: A
Explanation:
https://aws.amazon.com/blogs/security/how-to-centralize-and-automate-iam-policy-creation-in-sandbox-develop
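The identity-account pattern in answer A hinges on the trust policy of each role in the production and testing accounts. A sketch of such a trust policy (the account ID is a placeholder for the identity account, and the MFA condition is optional hardening, not something the question requires):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowIdentityAccountToAssume",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}
```

Users then exist only in the identity account and assume these roles cross-account, so credentials and permissions are managed in one place.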
NEW QUESTION 14
A company has a single AWS master billing account, which is the root of the AWS Organizations hierarchy. The company has multiple AWS accounts within this hierarchy, all organized into organizational units (OUs). More OUs and AWS accounts will continue to be created as other parts of the business migrate applications to AWS. These business units may need to use different AWS services. The Security team is implementing the following requirements for all current and future AWS accounts:
* Control policies must be applied across all accounts to prohibit certain AWS services.
* Exceptions to the control policies are allowed based on valid use cases.
Which solution will meet these requirements with minimal operational overhead?
- A. Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at the root level. For any specific exceptions for an OU, create a new SCP for that OU and add the required AWS services to the allow list.
- B. Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at the root level and each OU. Remove the default AWS managed SCP from the root level and all OU levels. For any specific exceptions, modify the SCP attached to that OU, and add the required AWS services to the allow list.
- C. Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at each OU level. Leave the default AWS managed SCP at the root level. For any specific exceptions for an OU, create a new SCP for that OU.
- D. Use an SCP in Organizations to implement an allow list of AWS services. Apply this SCP at the root level. Remove the default AWS managed SCP from the root level and all OU levels. For any specific exceptions for an OU, modify the SCP attached to that OU, and add the required AWS services to the allow list.
Answer: B
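A deny-list SCP has this general shape (the denied services here are arbitrary examples, not ones named by the question):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnapprovedServices",
      "Effect": "Deny",
      "Action": [
        "redshift:*",
        "sagemaker:*"
      ],
      "Resource": "*"
    }
  ]
}
```

Note that a deny-list approach only works while an allow-everything SCP (such as the AWS managed FullAWSAccess policy) remains attached alongside it, since SCPs grant nothing on their own; removing every default policy, as some options propose, would implicitly deny all actions.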
NEW QUESTION 15
A Solutions Architect must design a highly available, stateless, REST service. The service will require multiple persistent storage layers for service object meta information and the delivery of content. Each request needs to be authenticated and securely processed. There is a requirement to keep costs as low as possible.
How can these requirements be met?
- A. Use AWS Fargate to host a container that runs a self-contained REST service. Set up an Amazon ECS service that is fronted by an Application Load Balancer (ALB). Use a custom authenticator to control access to the API. Store request meta information in Amazon DynamoDB with Auto Scaling and static content in a secured S3 bucket. Make secure signed requests for Amazon S3 objects and proxy the data through the REST service interface.
- B. Use AWS Fargate to host a container that runs a self-contained REST service. Set up an ECS service that is fronted by a cross-zone ALB. Use an Amazon Cognito user pool to control access to the API. Store request meta information in DynamoDB with Auto Scaling and static content in a secured S3 bucket. Generate presigned URLs when returning references to content stored in Amazon S3.
- C. Set up Amazon API Gateway and create the required API resources and methods. Use an Amazon Cognito user pool to control access to the API. Configure the methods to use AWS Lambda proxy integrations, and process each resource with a unique AWS Lambda function. Store request meta information in DynamoDB with Auto Scaling and static content in a secured S3 bucket. Generate presigned URLs when returning references to content stored in Amazon S3.
- D. Set up Amazon API Gateway and create the required API resources and methods. Use an Amazon API Gateway custom authorizer to control access to the API. Configure the methods to use AWS Lambda custom integrations, and process each resource with a unique Lambda function. Store request meta information in an Amazon ElastiCache Multi-AZ cluster and static content in a secured S3 bucket. Generate presigned URLs when returning references to content stored in Amazon S3.
Answer: C
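Presigned URLs, which several of these options rely on, are ordinarily generated with an SDK helper such as boto3's `generate_presigned_url`. The sketch below hand-rolls the SigV4 query-string signing to show what actually goes into one; it is illustrative only, assumes a simple object key that needs no extra URI encoding, and real services should use the SDK:

```python
import datetime, hashlib, hmac, urllib.parse

def presign_s3_get(bucket, key, region, access_key, secret_key, expires=3600):
    """Build a SigV4 presigned GET URL for an S3 object (illustrative sketch)."""
    now = datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    # Canonical request: method, path, query, headers, signed headers, payload hash.
    canonical_request = "\n".join(
        ["GET", f"/{key}", canonical_query, f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"]
    )
    string_to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical_request.encode()).hexdigest()]
    )
    def sign(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()
    # Derive the signing key: date -> region -> service -> "aws4_request".
    signing_key = sign(sign(sign(sign(
        ("AWS4" + secret_key).encode(), datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{key}?{canonical_query}&X-Amz-Signature={signature}"
```

The returned URL grants time-limited access to just that object, which is why presigned URLs beat proxying content through the service for both cost and scalability.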
NEW QUESTION 16
A company wants to replace its call center system with a solution built using AWS managed services. The company call center would like the solution to receive calls, create contact flows, and scale to handle growth projections. The call center would also like the solution to use deep learning capabilities to recognize the intent of the callers and handle basic tasks, reducing the need to speak to an agent. The solution should also be able to query business applications and provide relevant information back to callers as requested.
Which services should the Solution Architect use to build this solution? (Choose three.)
- A. Amazon Rekognition to identify who is calling.
- B. Amazon Connect to create a cloud-based contact center.
- C. Amazon Alexa for Business to build a conversational interface.
- D. AWS Lambda to integrate with internal systems.
- E. Amazon Lex to recognize the intent of the caller.
- F. Amazon SQS to add incoming callers to a queue.
Answer: BDE
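The Lambda function in answer D typically returns responses in Amazon Lex's fulfillment format so Connect can speak the result back to the caller. A sketch using the Lex V1 response shape (the `CheckOrderStatus` intent name and the messages are hypothetical):

```python
def close(message, fulfillment_state="Fulfilled"):
    """Build a Lex V1 'Close' dialog action, which ends the interaction."""
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": fulfillment_state,
            "message": {"contentType": "PlainText", "content": message},
        }
    }

def lambda_handler(event, context):
    # Lex passes the recognized intent and slot values; a real handler would
    # look up the answer in the company's business applications here.
    intent = event["currentIntent"]["name"]
    if intent == "CheckOrderStatus":  # hypothetical intent name
        return close("Your order shipped yesterday.")
    return close("Sorry, I can't help with that.", "Failed")
```

Connect invokes the Lex bot from a contact flow; Lex recognizes the caller's intent and delegates fulfillment to this function.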
NEW QUESTION 17
A company wants to ensure that the workloads for each of its business units have complete autonomy and a minimal blast radius in AWS. The Security team must be able to control access to the resources and services in the account to ensure that particular services are not used by the business units.
How can a Solutions Architect achieve the isolation requirements?
- A. Create individual accounts for each business unit and add the accounts to an OU in AWS Organizations. Modify the OU to ensure that the particular services are blocked. Federate each account with an IdP, and create separate roles for the business units and the Security team.
- B. Create individual accounts for each business unit. Federate each account with an IdP and create separate roles and policies for business units and the Security team.
- C. Create one shared account for the entire company. Create separate VPCs for each business unit. Create individual IAM policies and resource tags for each business unit. Federate each account with an IdP, and create separate roles for the business units and the Security team.
- D. Create one shared account for the entire company. Create individual IAM policies and resource tags for each business unit. Federate the account with an IdP, and create separate roles for the business units and the Security team.
Answer: A
NEW QUESTION 18
A Solutions Architect must migrate an existing on-premises web application with 70 TB of static files supporting a public open-data initiative. The architect wants to upgrade to the latest version of the host operating system as part of the migration effort.
Which is the FASTEST and MOST cost-effective way to perform the migration?
- A. Run a physical-to-virtual conversion on the application server. Transfer the server image over the internet, and transfer the static data to Amazon S3.
- B. Run a physical-to-virtual conversion on the application server. Transfer the server image over AWS Direct Connect, and transfer the static data to Amazon S3.
- C. Re-platform the server to Amazon EC2, and use AWS Snowball to transfer the static data to Amazon S3.
- D. Re-platform the server by using the AWS Server Migration Service to move the code and data to a new Amazon EC2 instance.
Answer: C
NEW QUESTION 19
A company runs an application on a fleet of Amazon EC2 instances. The application requires low latency and random access to 100 GB of data. The application must be able to access the data at up to 3,000 IOPS. A Development team has configured the EC2 launch template to provision a 100-GB Provisioned IOPS (PIOPS) Amazon EBS volume with 3,000 IOPS provisioned. A Solutions Architect is tasked with lowering costs without impacting performance and durability.
Which action should be taken?
- A. Create an Amazon EFS file system with the performance mode set to Max I/O. Configure the EC2 operating system to mount the EFS file system.
- B. Create an Amazon EFS file system with the throughput mode set to Provisioned. Configure the EC2 operating system to mount the EFS file system.
- C. Update the EC2 launch template to allocate a new 1-TB EBS General Purpose SSD (gp2) volume.
- D. Update the EC2 launch template to exclude the PIOPS volume. Configure the application to use local instance storage.
Answer: A
NEW QUESTION 20
A company collects a steady stream of 10 million data records from 100,000 sources each day. These records are written to an Amazon RDS MySQL DB. A query must produce the daily average of a data source over the past 30 days. There are twice as many reads as writes. Queries to the collected data are for one source ID at a time.
How can the Solutions Architect improve the reliability and cost effectiveness of this solution?
- A. Use Amazon Aurora with MySQL in a Multi-AZ mode. Use four additional read replicas.
- B. Use Amazon DynamoDB with the source ID as the partition key and the timestamp as the sort key. Use a Time to Live (TTL) to delete data after 30 days.
- C. Use Amazon DynamoDB with the source ID as the partition key. Use a different table each day.
- D. Ingest data into Amazon Kinesis using a retention period of 30 days. Use AWS Lambda to write data records to Amazon ElastiCache for read access.
Answer: B
Explanation:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
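The table design in answer B keys each record on the source ID (partition key) and timestamp (sort key), so a per-source 30-day query is a single efficient key-condition query, and DynamoDB's TTL feature expires old items automatically. A sketch of how such an item might be shaped (the attribute names are assumptions):

```python
import time

TTL_DAYS = 30

def data_record_item(source_id, value, timestamp=None):
    """Shape a record for a table keyed on (source_id, ts) with a 30-day TTL.
    DynamoDB deletes an item, at no write cost, once the epoch time stored in
    the table's designated TTL attribute has passed."""
    ts = int(timestamp if timestamp is not None else time.time())
    return {
        "source_id": source_id,                # partition key
        "ts": ts,                              # sort key (epoch seconds)
        "value": value,
        "expires_at": ts + TTL_DAYS * 86400,   # TTL attribute
    }
```

The daily average for one source is then a `Query` on `source_id` with a `ts` range over the last 30 days, with no table scans and no manual purging.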
NEW QUESTION 21
A company runs a Windows Server host in a public subnet that is configured to allow a team of administrators to connect over RDP to troubleshoot issues with hosts in a private subnet. The host must be available at all times outside of a scheduled maintenance window, and needs to receive the latest operating system updates within 3 days of release.
What should be done to manage the host with the LEAST amount of administrative effort?
- A. Run the host in a single-instance AWS Elastic Beanstalk environment. Configure the environment with a custom AMI to use a hardened machine image from AWS Marketplace. Apply system updates with AWS Systems Manager Patch Manager.
- B. Run the host on AWS WorkSpaces. Use Amazon WorkSpaces Application Manager (WAM) to harden the host. Configure Windows automatic updates to occur every 3 days.
- C. Run the host in an Auto Scaling group with a minimum and maximum instance count of 1. Use a hardened machine image from AWS Marketplace. Apply system updates with AWS Systems Manager Patch Manager.
- D. Run the host in AWS OpsWorks Stacks. Use a Chef recipe to harden the AMI during instance launch. Use an AWS Lambda scheduled event to run the Upgrade Operating System stack command to apply system updates.
Answer: B
NEW QUESTION 22
A retail company is running an application that stores invoice files in an Amazon S3 bucket and metadata about the files in an Amazon DynamoDB table. The S3 bucket and DynamoDB table are in us-east-1. The company wants to protect itself from data corruption and loss of connectivity to either Region.
Which option meets these requirements?
- A. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in us-east-1. Enable versioning on the S3 bucket.
- B. Create an AWS Lambda function triggered by Amazon CloudWatch Events to make regular backups of the DynamoDB table. Set up S3 cross-region replication from us-east-1 to eu-west-1. Set up MFA delete on the S3 bucket in us-east-1.
- C. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable versioning on the S3 bucket. Implement strict ACLs on the S3 bucket.
- D. Create a DynamoDB global table to replicate data between us-east-1 and eu-west-1. Enable continuous backup on the DynamoDB table in us-east-1. Set up S3 cross-region replication from us-east-1 to eu-west-1.
Answer: A
Explanation:
https://aws.amazon.com/blogs/aws/new-cross-region-replication-for-amazon-s3/