Amazon-Web-Services DBS-C01 Free Practice Questions
Our pass rate is 98.9%, and the similarity between our DBS-C01 study guide and the real exam is 90%, based on our seven years of training experience. Do you want to pass the Amazon-Web-Services DBS-C01 exam in just one try? Try the latest Amazon-Web-Services DBS-C01 practice questions and answers below.
Check DBS-C01 free dumps before getting the full version:
NEW QUESTION 1
A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low. Which solution meets these requirements?
- A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
- B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
- C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.
- D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.
Answer: C
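As background for answer C, concurrency scaling is turned on per WLM queue in the cluster's parameter group. A minimal boto3 sketch, assuming a hypothetical manual WLM queue and parameter group name:

```python
import boto3

redshift = boto3.client("redshift")

# Hypothetical parameter group; setting concurrency_scaling to "auto" on a
# WLM queue lets Redshift add transient clusters when queries start queuing.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="my-wlm-params",
    Parameters=[{
        "ParameterName": "wlm_json_configuration",
        "ParameterValue": '[{"query_concurrency": 5, "concurrency_scaling": "auto"}]',
        "ApplyType": "dynamic",
    }],
)
```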
NEW QUESTION 2
A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. Looking at the tables, a Database Specialist notices that much of the data is months old, dating back to when the application was first deployed.
What can the Database Specialist do to reduce the overall cost?
- A. Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.
- B. Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.
- C. Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.
- D. Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.
Answer: C
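A minimal boto3 sketch of the TTL approach in answer C, assuming a hypothetical table named transactions with an expiry attribute named expires_at:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on the table, keyed on the hypothetical "expires_at" attribute.
dynamodb.update_time_to_live(
    TableName="transactions",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# New items carry an epoch-seconds timestamp 2 days in the future; DynamoDB
# deletes expired items in the background at no additional cost.
dynamodb.put_item(
    TableName="transactions",
    Item={
        "txn_id": {"S": "txn-12345"},
        "expires_at": {"N": str(int(time.time()) + 2 * 24 * 3600)},
    },
)
```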
NEW QUESTION 3
A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database.
The application needs to be deployed to production and other non-production environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company’s code repository. The company also needs to meet a compliance requirement by routinely rotating its database master password for production.
What is the most secure solution for storing the master password?
- A. Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.
- B. Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.
- C. Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.
- D. Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation.
Answer: C
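For reference, the secretsmanager dynamic reference in answer C keeps the password out of the version-controlled template entirely. A sketch of the relevant template fragment, built here as a Python dict with a hypothetical secret named prod/aurora/master (engine shown only as an example):

```python
import json

# CloudFormation resolves the {{resolve:secretsmanager:...}} references at
# deploy time, so no plaintext credential ever lands in the repository.
template = {
    "Resources": {
        "AuroraCluster": {
            "Type": "AWS::RDS::DBCluster",
            "Properties": {
                "Engine": "aurora-mysql",
                "MasterUsername": "{{resolve:secretsmanager:prod/aurora/master:SecretString:username}}",
                "MasterUserPassword": "{{resolve:secretsmanager:prod/aurora/master:SecretString:password}}",
            },
        }
    }
}
print(json.dumps(template, indent=2))
```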
NEW QUESTION 4
A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully. The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.
How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?
- A. Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.
- B. Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.
- C. Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.
- D. Create the maintenance job using the Amazon CloudWatch job scheduling plugin.
Answer: C
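A minimal boto3 sketch of the scheduling in answer C, using hypothetical function and rule names; Lambda's 15-minute execution limit comfortably covers jobs that run up to 10 minutes:

```python
import boto3

events = boto3.client("events")
awslambda = boto3.client("lambda")

# Fire the purge job nightly at 03:00 UTC (hypothetical schedule).
rule = events.put_rule(
    Name="nightly-data-purge",
    ScheduleExpression="cron(0 3 * * ? *)",
)

# Allow CloudWatch Events to invoke the (hypothetical) maintenance function.
awslambda.add_permission(
    FunctionName="purge-fn",
    StatementId="allow-events",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

events.put_targets(
    Rule="nightly-data-purge",
    Targets=[{"Id": "purge-fn",
              "Arn": "arn:aws:lambda:us-east-1:123456789012:function:purge-fn"}],
)
```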
NEW QUESTION 5
A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a
database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company’s Database Specialist is able to log in to MySQL and run queries from the bastion host using these details.
When users try to utilize the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a “could not connect to server: Connection times out” error message to Amazon CloudWatch Logs.
What is the cause of this error?
- A. The user name and password the application is using are incorrect.
- B. The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
- C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
- D. The user name and password are correct, but the user is not authorized to use the DB instance.
Answer: C
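The fix implied by answer C is an inbound rule on the DB instance's security group that references the application servers' security group. A boto3 sketch with hypothetical group IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# sg-0db... is attached to the DB instance, sg-0app... to the app servers
# (both hypothetical). Referencing the app SG avoids hard-coding IP ranges.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000000000",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0app000000000000"}],
    }],
)
```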
NEW QUESTION 6
After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?
- A. The restored DB instance does not have Enhanced Monitoring enabled
- B. The production DB instance is using a custom parameter group
- C. The restored DB instance is using the default security group
- D. The production DB instance is using a custom option group
Answer: C
NEW QUESTION 7
A company’s Security department established new requirements that state internal users must connect to an existing Amazon RDS for SQL Server DB instance using their corporate Active Directory (AD) credentials. A Database Specialist must make the modifications needed to fulfill this requirement.
Which combination of actions should the Database Specialist take? (Choose three.)
- A. Disable Transparent Data Encryption (TDE) on the RDS SQL Server DB instance.
- B. Modify the RDS SQL Server DB instance to use the directory for Windows authentication. Create appropriate new logins.
- C. Use the AWS Management Console to create an AWS Managed Microsoft AD. Create a trust relationship with the corporate AD.
- D. Stop the RDS SQL Server DB instance, modify it to use the directory for Windows authentication, and start it again. Create appropriate new logins.
- E. Use the AWS Management Console to create an AD Connector. Create a trust relationship with the corporate AD.
- F. Configure the AWS Managed Microsoft AD domain controller Security Group.
Answer: BCF
NEW QUESTION 8
A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.
Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.
Which approach should the Database Specialist take to reduce downtime?
- A. Deploy multiple read replicas and have the team members make changes to separate replica instances
- B. Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot
- C. Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature
- D. Enable the Amazon RDS for MySQL Backtrack feature
Answer: C
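Once on Aurora MySQL, Backtrack rewinds the cluster in place instead of restoring from a snapshot. A boto3 sketch with a hypothetical cluster identifier (Backtrack must already be enabled on the cluster via its backtrack window):

```python
from datetime import datetime, timedelta, timezone
import boto3

rds = boto3.client("rds")

# Rewind the development cluster 15 minutes to undo a bad schema change,
# typically in seconds to minutes rather than the hours a restore takes.
rds.backtrack_db_cluster(
    DBClusterIdentifier="dev-aurora-mysql",
    BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=15),
)
```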
NEW QUESTION 9
A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime.
What is the FASTEST way to accomplish this?
- A. Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance to the target DB cluster using AWS DMS.
- B. Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.
- C. Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.
- D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.
Answer: D
NEW QUESTION 10
A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.
Which approach should the Database Specialist take?
- A. Dump all the tables from the Oracle database into an Amazon S3 bucket using datapump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.
- B. Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.
- C. Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.
- D. Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.
Answer: C
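A boto3 sketch of the DMS task from answer C, with hypothetical endpoint and replication instance ARNs; AWS SCT converts the schema first, then DMS bulk-loads the data and streams ongoing changes until cutover:

```python
import boto3

dms = boto3.client("dms")

# full-load-and-cdc copies existing rows, then applies change data capture
# so the Oracle source stays available until the near-zero-downtime cutover.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INST",
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1",'
                  '"rule-name":"1","object-locator":{"schema-name":"%",'
                  '"table-name":"%"},"rule-action":"include"}]}',
)
```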
NEW QUESTION 11
A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379.
Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)
- A. Enable in-transit and at-rest encryption on the ElastiCache cluster.
- B. Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.
- C. Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.
- D. Create an IAM policy to allow the application service roles to access all ElastiCache API actions.
- E. Ensure the security group for the ElastiCache clients authorizes inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster’s security group.
- F. Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.
Answer: ACF
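A boto3 sketch of a cluster-mode-enabled deployment reflecting answers A, C, and F, with hypothetical names; note that AUTH and both encryption settings must be chosen when the replication group is created:

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="shared-data-cache",
    ReplicationGroupDescription="Shared data service cache",
    Engine="redis",
    CacheNodeType="cache.r5.large",
    CacheParameterGroupName="default.redis6.x.cluster.on",  # cluster mode on
    NumNodeGroups=3,
    ReplicasPerNodeGroup=1,
    TransitEncryptionEnabled=True,          # answer A: in-transit encryption
    AtRestEncryptionEnabled=True,           # answer A: at-rest encryption
    AuthToken="use-a-long-random-token",    # answer F: required on connect
    SecurityGroupIds=["sg-0cache0000000000"],  # answer C: TCP 6379 from trusted clients only
)
```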
NEW QUESTION 12
A Database Specialist must create a read replica to isolate read-only queries for an Amazon RDS for MySQL DB instance. Immediately after creating the read replica, users that query it report slow response times.
What could be causing these slow response times?
- A. New volumes created from snapshots load lazily in the background
- B. Long-running statements on the master
- C. Insufficient resources on the master
- D. Overload of a single replication thread by excessive writes on the master
Answer: A
NEW QUESTION 13
A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in its AWS account. The template configures provisioned throughput capacity using hard-coded values. The company wants to change the template so that the tables it creates in the future have independently configurable read and write capacity units assigned.
Which solution will enable this change?
- A. Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Configure DynamoDB to provision throughput capacity using the stack’s mappings.
- B. Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
- C. Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure DynamoDB to provision throughput capacity using the stack outputs.
- D. Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
Answer: B
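A sketch of the corrected template from answer B, built here as a Python dict: rcuCount and wcuCount become Number parameters and the hard-coded throughput values are replaced with Ref calls (table layout is hypothetical):

```python
import json

template = {
    "Parameters": {
        "rcuCount": {"Type": "Number", "Default": 5},
        "wcuCount": {"Type": "Number", "Default": 5},
    },
    "Resources": {
        "Table": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
                "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
                # Ref resolves each parameter per stack, so every future table
                # gets independently configurable read and write capacity.
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": {"Ref": "rcuCount"},
                    "WriteCapacityUnits": {"Ref": "wcuCount"},
                },
            },
        }
    },
}
print(json.dumps(template, indent=2))
```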
NEW QUESTION 14
A company is running Amazon RDS for MySQL for its workloads. There is downtime when AWS operating system patches are applied during the Amazon RDS-specified maintenance window.
What is the MOST cost-effective action that should be taken to avoid downtime?
- A. Migrate the workloads from Amazon RDS for MySQL to Amazon DynamoDB
- B. Enable cross-Region read replicas and direct read traffic to them when Amazon RDS is down
- C. Enable a read replica and direct read traffic to it when Amazon RDS is down
- D. Enable an Amazon RDS for MySQL Multi-AZ configuration
Answer: D
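A boto3 sketch of answer D with a hypothetical instance identifier; in a Multi-AZ deployment, RDS patches the standby first, fails over, and then patches the old primary, so the endpoint stays available:

```python
import boto3

rds = boto3.client("rds")

# Converting an existing instance to Multi-AZ adds a synchronous standby;
# no application changes are needed because the endpoint is unchanged.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-mysql",
    MultiAZ=True,
    ApplyImmediately=True,
)
```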
NEW QUESTION 15
A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company’s data center. The company’s Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine.
Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid, and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses.
What should the Database Specialist do to correct the Data Analysts’ inability to connect?
- A. Restart the DB cluster to apply the SSL change.
- B. Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.
- C. Add explicit mappings between the Data Analysts’ IP addresses and the instance in the security group assigned to the DB cluster.
- D. Modify the Data Analysts’ local client firewall to allow network traffic to AWS.
Answer: B
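On the client side, answer B amounts to pointing the driver at the downloaded RDS certificate bundle. A hedged sketch using pymysql with a hypothetical endpoint and local path (the cluster engine is not stated in the question, so MySQL compatibility is assumed here):

```python
import pymysql

# The Analysts download the RDS CA bundle and reference it on connect;
# without it, a cluster that requires SSL rejects their plaintext sessions.
conn = pymysql.connect(
    host="mycluster.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",
    user="analyst",
    password="...",  # placeholder
    ssl={"ca": "/home/analyst/rds-combined-ca-bundle.pem"},
)
```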
NEW QUESTION 16
A financial company wants to store sensitive user data in an Amazon Aurora PostgreSQL DB cluster. The database will be accessed by multiple applications across the company. The company has mandated that all communications to the database be encrypted and the server identity must be validated. Any non-SSL-based connections should be disallowed access to the database.
Which solution addresses these requirements?
- A. Set the rds.force_ssl=0 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=allow.
- B. Set the rds.force_ssl=1 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=disable.
- C. Set the rds.force_ssl=0 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-ca.
- D. Set the rds.force_ssl=1 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.
Answer: D
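The client half of answer D looks like the following psycopg2 sketch, with a hypothetical endpoint and certificate path; verify-full validates both the CA chain and that the server hostname matches the certificate, while rds.force_ssl=1 on the server rejects any non-SSL connection:

```python
import psycopg2

conn = psycopg2.connect(
    host="sensitive-db.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="users",
    user="app_user",
    password="...",  # placeholder
    sslmode="verify-full",                             # encrypt + verify identity
    sslrootcert="/etc/ssl/rds-combined-ca-bundle.pem", # downloaded RDS CA bundle
)
```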
NEW QUESTION 17
A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.
How can the Database Specialists accomplish this?
- A. Enable the option to push all database logs to Amazon CloudWatch for advanced analysis
- B. Create appropriate Amazon CloudWatch dashboards to contain specific periods of time
- C. Enable Amazon RDS Performance Insights and review the appropriate dashboard
- D. Enable Enhanced Monitoring with the appropriate settings
Answer: C
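Performance Insights can be enabled on an existing instance without downtime. A boto3 sketch with a hypothetical instance identifier:

```python
import boto3

rds = boto3.client("rds")

# The Performance Insights dashboard then breaks database load down by
# wait event, SQL statement, host, and user.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-mysql",
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,  # days; 7 is the free tier
    ApplyImmediately=True,
)
```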
NEW QUESTION 18
A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors.
Which process should the Database Specialist recommend to meet these requirements?
- A. Organize common and environment-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter.
- B. Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.
- C. Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack.
- D. Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console.
Answer: A
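A sketch of the parameter hierarchy behind answer A, with hypothetical paths; the CloudFormation template would then use dynamic references such as {{resolve:ssm:/app/dev/ReadCapacityUnits}} and take the environment name as its only stack parameter:

```python
import boto3

ssm = boto3.client("ssm")

# Common settings live under /app/common, environment-specific ones under
# /app/<env>; the hierarchy keeps the core template identical everywhere.
for name, value in [
    ("/app/common/AlarmEmail", "ops@example.com"),
    ("/app/dev/ReadCapacityUnits", "5"),
    ("/app/prod/ReadCapacityUnits", "50"),
]:
    ssm.put_parameter(Name=name, Value=value, Type="String", Overwrite=True)
```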
NEW QUESTION 19
A clothing company uses a custom ecommerce application and a PostgreSQL database to sell clothes to thousands of users from multiple countries. The company is migrating its application and database from its on premises data center to the AWS Cloud. The company has selected Amazon EC2 for the application and Amazon RDS for PostgreSQL for the database. The company requires database passwords to be changed every 60 days. A Database Specialist needs to ensure that the credentials used by the web application to connect to the database are managed securely.
Which approach should the Database Specialist take to securely manage the database credentials?
- A. Store the credentials in a text file in an Amazon S3 bucket. Restrict permissions on the bucket to the IAM role associated with the instance profile only. Modify the application to download the text file and retrieve the credentials on start up. Update the text file every 60 days.
- B. Configure IAM database authentication for the application to connect to the database. Create an IAM user and map it to a separate database user for each ecommerce user. Require users to update their passwords every 60 days.
- C. Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.
- D. Store the credentials in an encrypted text file in the application AMI. Use AWS KMS to store the key for decrypting the text file. Modify the application to decrypt the text file and retrieve the credentials on start up. Update the text file and publish a new AMI every 60 days.
Answer: C
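On the application side, answer C reduces to one Secrets Manager call at startup. A boto3 sketch with a hypothetical secret name and key layout; the instance-profile role is the only principal allowed to read the secret, and Secrets Manager rotates the password on a 60-day schedule:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch and parse the current credential set; after each rotation the app
# simply picks up the new values on its next start.
secret = json.loads(
    secrets.get_secret_value(SecretId="prod/ecommerce/postgres")["SecretString"]
)
dsn = f"host={secret['host']} user={secret['username']} password={secret['password']}"
```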
NEW QUESTION 20
A company is looking to move an on-premises IBM Db2 database running on AIX on an IBM POWER7 server. Due to escalating support and maintenance costs, the company is exploring the option of moving the workload to an Amazon Aurora PostgreSQL DB cluster.
What is the quickest way for the company to gather data on the migration compatibility?
- A. Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing row counts from source and target tables.
- B. Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing the row counts from source and target tables.
- C. Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.
- D. Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.
Answer: D
NEW QUESTION 21
A Database Specialist is creating a new Amazon Neptune DB cluster and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:
“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”
Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)
- A. Check that Amazon S3 has an IAM role granting read access to Neptune
- B. Check that an Amazon S3 VPC endpoint exists
- C. Check that a Neptune VPC endpoint exists
- D. Check that Amazon EC2 has an IAM role granting read access to Amazon S3
- E. Check that Neptune has an IAM role granting read access to Amazon S3
Answer: BE
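For reference, a sketch of the bulk loader call with a hypothetical endpoint and role ARN; it succeeds only when the cluster's VPC has an S3 gateway endpoint (answer B) and the IAM role attached to the Neptune cluster can read the bucket (answer E):

```python
import requests

# Neptune's loader API is an HTTP endpoint on the cluster itself; the source
# bucket and region here come straight from the error message in the question.
resp = requests.post(
    "https://my-neptune.cluster-xxxxxxxxxxxx.us-east-1.neptune.amazonaws.com:8182/loader",
    json={
        "source": "s3://mybucket/graphdata/",
        "format": "csv",
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",  # hypothetical
        "region": "us-east-1",
    },
)
print(resp.json())
```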
Thanks for reading the newest DBS-C01 exam dumps! We recommend trying the PREMIUM 2passeasy DBS-C01 dumps in VCE and PDF here: https://www.2passeasy.com/dumps/DBS-C01/ (85 Q&As Dumps)