Oracle Database 12c: Installation and Administration Certification Exam

Oracle 1Z0-062 Free Practice Questions

Q1. Examine the current value for the following parameters in your database instance: 

SGA_MAX_SIZE = 1024M 

SGA_TARGET = 700M 

DB_8K_CACHE_SIZE = 124M 

LOG_BUFFER = 200M 

You issue the following command to increase the value of DB_8K_CACHE_SIZE: 

SQL> ALTER SYSTEM SET DB_8K_CACHE_SIZE=140M; 

Which statement is true? 

A. It fails because the DB_8K_CACHE_SIZE parameter cannot be changed dynamically. 

B. It succeeds only if memory is available from the autotuned components of the SGA. 

C. It fails because an increase in DB_8K_CACHE_SIZE cannot be accommodated within SGA_TARGET. 

D. It fails because an increase in DB_8K_CACHE_SIZE cannot be accommodated within SGA_MAX_SIZE. 

Answer: B

Explanation: * The SGA_TARGET parameter can be dynamically increased up to the value specified for the SGA_MAX_SIZE parameter, and it can also be reduced. 

* Example: suppose you have an environment with the following configuration: 

SGA_MAX_SIZE = 1024M 
SGA_TARGET = 512M 
DB_8K_CACHE_SIZE = 128M 

In this example, the value of SGA_TARGET can be resized up to 1024M and can also be reduced until one or more of the automatically sized components reaches its minimum size. The exact value depends on environmental factors such as the number of CPUs on the system. However, the value of DB_8K_CACHE_SIZE remains fixed at all times at 128M. 

* DB_8K_CACHE_SIZE: size of the cache for 8K buffers. 

* For example, consider this configuration: 

SGA_TARGET = 512M 
DB_8K_CACHE_SIZE = 128M 

In this example, increasing DB_8K_CACHE_SIZE by 16M to 144M means that the 16M is taken away from the automatically sized components. Likewise, reducing DB_8K_CACHE_SIZE by 16M to 112M means that the 16M is given to the automatically sized components. 
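
As a hedged illustration of why answer B holds (assuming automatic shared memory management is active, that is, SGA_TARGET > 0), the following SQL*Plus sketch checks how much space the autotuned components currently hold before issuing the resize; if they cannot give up enough memory, the ALTER SYSTEM call fails (typically with an "insufficient memory to grow cache" error): 

SQL> SELECT component, current_size/1024/1024 AS size_mb
     FROM   v$sga_dynamic_components
     WHERE  component IN ('shared pool', 'large pool', 'DEFAULT buffer cache');

SQL> ALTER SYSTEM SET DB_8K_CACHE_SIZE = 140M;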

Q2. After implementing full Oracle Data Redaction, you change the default value for the NUMBER data type as follows: 

After changing the value, you notice that FULL redaction continues to redact numeric data with zero. 

What must you do to activate the new default value for numeric full redaction? 

A. Re-enable redaction policies that use FULL data redaction. 

B. Re-create redaction policies that use FULL data redaction. 

C. Re-connect the sessions that access objects with redaction policies defined on them. 

D. Flush the shared pool. 

E. Restart the database instance. 

Answer: E

Explanation: About Altering the Default Full Data Redaction Value: You can alter the default displayed values for full Data Redaction policies. By default, 0 is the redacted value when Oracle Database performs full redaction (DBMS_REDACT.FULL) on a column of the NUMBER data type. If you want to change it to another value (for example, 7), then you can run the DBMS_REDACT.UPDATE_FULL_REDACTION_VALUES procedure to modify this value. The modification applies to all of the Data Redaction policies in the current database instance. After you modify a value, you must restart the database for it to take effect. 

Note: 

* The DBMS_REDACT package provides an interface to Oracle Data Redaction, which enables you to mask (redact) data that is returned from queries issued by low-privileged users or an application. 

* UPDATE_FULL_REDACTION_VALUES Procedure 

This procedure modifies the default displayed values for a Data Redaction policy for full redaction. 

* After you create the Data Redaction policy, it is automatically enabled and ready to redact data. 

* Oracle Data Redaction enables you to mask (redact) data that is returned from queries issued by low-privileged users or applications. You can redact column data by using one of the following methods: 

Full redaction, partial redaction, regular expressions, random redaction, or no redaction. 

Reference: Oracle Database Advanced Security Guide 12c, About Altering the Default Full Data Redaction Value 
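
As a minimal sketch of the procedure named above (the value 7 is just an illustrative choice, and the session is assumed to have the required redaction administration privileges): 

SQL> EXEC DBMS_REDACT.UPDATE_FULL_REDACTION_VALUES(number_val => 7);

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

Only after the restart do FULL redaction policies on NUMBER columns display 7 instead of 0. 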

Q3. Which two statements are true about extents? 

A. Blocks belonging to an extent can be spread across multiple data files. 

B. Data blocks in an extent are logically contiguous but can be non-contiguous on disk. 

C. The blocks of a newly allocated extent, although free, may have been used before. 

D. Data blocks in an extent are automatically reclaimed for use by other objects in a tablespace when all the rows in a table are deleted. 

Answer: B,C 
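
A quick, hedged way to observe answer B for yourself: each extent of a segment maps to exactly one data file (one FILE_ID per row), even though its blocks need not be physically contiguous on disk. The table name T1 is hypothetical: 

SQL> SELECT extent_id, file_id, block_id, blocks
     FROM   dba_extents
     WHERE  owner = USER AND segment_name = 'T1'
     ORDER  BY extent_id;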

Q4. Examine the parameters for your database instance: 

Which three statements are true about the process of automatic optimization by using cardinality feedback? 

A. The optimizer automatically changes a plan during subsequent execution of a SQL statement if there is a huge difference between optimizer estimates and execution statistics. 

B. The optimizer can re-optimize a query only once using cardinality feedback. 

C. The optimizer enables monitoring for cardinality feedback after the first execution of a query. 

D. The optimizer does not monitor cardinality feedback if dynamic sampling and multicolumn statistics are enabled. 

E. After the optimizer identifies a query as a re-optimization candidate, statistics collected by the collectors are submitted to the optimizer. 

Answer: A,C,D 

Explanation: C: During the first execution of a SQL statement, an execution plan is generated as usual. 

D: If multi-column statistics are not present for the relevant combination of columns, the optimizer can fall back on cardinality feedback. 

* (not B) Cardinality feedback: this feature, enabled by default in 11.2, is intended to improve plans for repeated executions. 

Relevant parameters: OPTIMIZER_DYNAMIC_SAMPLING, OPTIMIZER_FEATURES_ENABLE 

* Dynamic sampling or multi-column statistics allow the optimizer to more accurately estimate the selectivity of conjunctive predicates. 

Note: 

* OPTIMIZER_DYNAMIC_SAMPLING controls the level of dynamic sampling performed by the optimizer. Range of values: 0 to 10. 

* Cardinality feedback was introduced in Oracle Database 11gR2. The purpose of this feature is to automatically improve plans for queries that are executed repeatedly, for which the optimizer does not estimate cardinalities in the plan properly. The optimizer may misestimate cardinalities for a variety of reasons, such as missing or inaccurate statistics, or complex predicates. Whatever the reason for the misestimate, cardinality feedback may be able to help. 
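
To check whether cardinality feedback actually kicked in for a statement, a hedged sketch is to look at the shared-cursor flag and the plan note (the SQL ID is supplied interactively and is hypothetical): 

SQL> SELECT child_number, use_feedback_stats
     FROM   v$sql_shared_cursor
     WHERE  sql_id = '&sql_id';

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));

A child cursor with USE_FEEDBACK_STATS = 'Y' was re-optimized using the execution statistics of the previous run, and the plan output carries a note stating that cardinality feedback was used. 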

Q5. You performed an incremental level 0 backup of a database: 

RMAN > BACKUP INCREMENTAL LEVEL 0 DATABASE; 

To enable block change tracking after the incremental level 0 backup, you issued this command: 

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/mydir/rman_change_track.f'; 

To perform an incremental level 1 cumulative backup, you issued this command: 

RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; 

Which three statements are true? 

A. Block change tracking will sometimes reduce I/O performed during cumulative incremental backups. 

B. The change tracking file must always be backed up when you perform a full database backup. 

C. Block change tracking will always reduce I/O performed during cumulative incremental backups. 

D. More than one database block may be read by an incremental backup for a change made to a single block. 

E. The incremental level 1 backup that immediately follows the enabling of block change tracking will not read the change tracking file to discover changed blocks. 

Answer: A,D,E 

Explanation: A: In a cumulative level 1 backup, RMAN backs up all the blocks used since the most recent level 0 incremental backup. 

E: Oracle block change tracking. Once enabled, this feature (introduced in Oracle 10g) records the blocks modified since the last backup and stores that log in a block change tracking file, which is maintained by the CTWR (Change Tracking Writer) process. During backups, RMAN uses the file to identify the specific blocks that must be backed up. This improves RMAN's performance because it does not have to scan whole data files to detect changed blocks. 

Note: 

* An incremental level 0 backup backs up all blocks that have ever been in use in this database. 
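
A hedged sketch for verifying that change tracking is enabled and how much of each data file the level 1 backup actually read (standard V$ views; USED_CHANGE_TRACKING shows whether the tracking file was consulted): 

SQL> SELECT status, filename FROM v$block_change_tracking;

SQL> SELECT file#, incremental_level, used_change_tracking,
            blocks_read, datafile_blocks
     FROM   v$backup_datafile
     WHERE  incremental_level = 1;

For the first level 1 backup taken after enabling tracking, USED_CHANGE_TRACKING shows NO and BLOCKS_READ is close to DATAFILE_BLOCKS, which is what answer E describes. 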

Q6. Which two are true concerning a multitenant container database with three pluggable databases? 

A. All administration tasks must be done to a specific pluggable database. 

B. The pluggable databases increase patching time. 

C. The pluggable databases reduce administration effort. 

D. The pluggable databases are patched together. 

E. Pluggable databases are only used for database consolidation. 

Answer: C,E 

Explanation: The benefits of Oracle Multitenant are brought by implementing a pure deployment choice. The following list calls out the most compelling examples. 

* High consolidation density. (E) The many pluggable databases in a single multitenant container database share its memory and background processes, letting you operate many more pluggable databases on a particular platform than you can single databases that use the old architecture. This is the same benefit that schema-based consolidation brings. 

* Rapid provisioning and cloning using SQL. 

* New paradigms for rapid patching and upgrades. (D, not B) The investment of time and effort to patch one multitenant container database results in patching all of its many pluggable databases. To patch a single pluggable database, you simply unplug/plug to a multitenant container database at a different Oracle Database software version. 

* (C, not A) Manage many databases as one. By consolidating existing databases as pluggable databases, administrators can manage many databases as one. For example, tasks like backup and disaster recovery are performed at the multitenant container database level. 

* Dynamic resource management between pluggable databases. In Oracle Database 12c, Resource Manager is extended with specific functionality to control the competition for resources between the pluggable databases within a multitenant container database. 

Note: 

* Oracle Multitenant is a new option for Oracle Database 12c Enterprise Edition that helps customers reduce IT costs by simplifying consolidation, provisioning, upgrades, and more. It is supported by a new architecture that allows a multitenant container database to hold many pluggable databases. And it fully complements other options, including Oracle Real Application Clusters and Oracle Active Data Guard. An existing database can be simply adopted, with no change, as a pluggable database; and no changes are needed in the other tiers of the application. 

Reference: 12c Oracle Multitenant 
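
As a hedged sketch of the unplug/plug patching idea mentioned above (the PDB name and XML path are hypothetical), the whole operation is a handful of SQL statements: 

SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/pdb1.xml';

-- connected to a CDB running the patched Oracle Database software:
SQL> CREATE PLUGGABLE DATABASE pdb1 USING '/u01/app/oracle/pdb1.xml' NOCOPY;
SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN;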

Q7. Which three statements are true about Automatic Workload Repository (AWR)? 

A. All AWR tables belong to the SYSTEM schema. 

B. The AWR data is stored in memory and in the database. 

C. The snapshots collected by AWR are used by the self-tuning components in the database. 

D. AWR computes time model statistics based on time usage for activities, which are displayed in the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views. 

E. AWR contains system wide tracing and logging information. 

Answer: B,C,E 

Explanation: * A fundamental aspect of the workload repository is that it collects and persists database performance data in a manner that enables historical performance analysis. The mechanism for this is the AWR snapshot. On a periodic basis, AWR takes a “snapshot” of the current statistic values stored in the database instance’s memory and persists them to its tables residing in the SYSAUX tablespace. 

* AWR is primarily designed to provide input to higher-level components such as automatic tuning algorithms and advisors, but it can also provide a wealth of information for the manual tuning process. 
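
A hedged sketch of taking a manual AWR snapshot and listing what has been persisted to the SYSAUX tables (standard package and dictionary view): 

SQL> EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

SQL> SELECT snap_id, begin_interval_time, end_interval_time
     FROM   dba_hist_snapshot
     ORDER  BY snap_id;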

Q8. Which three resources might be prioritized between competing pluggable databases when creating a multitenant container database plan (CDB plan) using Oracle Database Resource Manager? 

A. Maximum Undo per consumer group 

B. Maximum Idle time 

C. Parallel server limit 

D. CPU 

E. Exadata I/O 

F. Local file system I/O 

Answer: A,C,D 
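
A minimal sketch of a CDB plan showing the directive attributes behind answers C and D (shares drive the CPU allocation, PARALLEL_SERVER_LIMIT caps the parallel servers); the plan and PDB names are hypothetical: 

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(plan => 'daytime_cdb_plan');
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
         plan                  => 'daytime_cdb_plan',
         pluggable_database    => 'pdb1',
         shares                => 3,
         utilization_limit     => 70,
         parallel_server_limit => 50);
       DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
       DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
     END;
     /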

Q9. Which two statements are true about standard database auditing? (Choose two.) 

A. DDL statements can be audited. 

B. Statements that refer to standalone procedure can be audited. 

C. Operations by the users logged on as SYSDBA cannot be audited. 

D. Only one audit record is ever created for a session per audited statement even though it is executed more than once. 

Answer: A,B 
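
A hedged illustration of answers A and B with standard auditing (the procedure name hr.calc_bonus is hypothetical, and AUDIT_TRAIL is assumed to be set to DB): 

SQL> AUDIT TABLE;                        -- DDL: CREATE/DROP/TRUNCATE TABLE
SQL> AUDIT EXECUTE ON hr.calc_bonus;     -- references to a standalone procedure

SQL> SELECT username, action_name, timestamp
     FROM   dba_audit_trail;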

Q10. In your database, the TBS_PERCENT_USED parameter is set to 60 and the TBS_PERCENT_FREE parameter is set to 20. 

Which two storage-tiering actions might be automated when using Information Lifecycle Management (ILM) to automate data movement? 

A. The movement of all segments to a target tablespace with a higher degree of compression, on a different storage tier, when the source tablespace exceeds TBS_PERCENT_USED 

B. Setting the target tablespace to read-only 

C. The movement of some segments to a target tablespace with a higher degree of compression, on a different storage tier, when the source tablespace exceeds TBS_PERCENT_USED 

D. Setting the target tablespace offline 

E. The movement of some blocks to a target tablespace with a lower degree of compression, on a different storage tier, when the source tablespace exceeds TBS_PERCENT_USED 

Answer: B,C 

Explanation: 

The value for TBS_PERCENT_USED specifies the percentage of the tablespace quota when a tablespace is considered full. The value for TBS_PERCENT_FREE specifies the targeted free percentage for the tablespace. When the percentage of the tablespace quota reaches the value of TBS_PERCENT_USED, ADO begins to move data so that percent free of the tablespace quota approaches the value of TBS_PERCENT_FREE. This action by ADO is a best effort and not a guarantee. 
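
A hedged sketch of the thresholds described above together with a segment-level tiering policy (the table and tablespace names are hypothetical): 

SQL> BEGIN
       DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 60);
       DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 20);
     END;
     /

SQL> ALTER TABLE sales ILM ADD POLICY TIER TO low_cost_ts;

When the tablespace holding SALES exceeds 60 percent of its quota, ADO starts moving eligible segments to LOW_COST_TS until roughly 20 percent of the quota is free again. 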

Q11. Which statement is true about the Log Writer process? 

A. It writes when it receives a signal from the checkpoint process (CKPT). 

B. It writes concurrently to all members of multiplexed redo log groups. 

C. It writes after the Database Writer process writes dirty buffers to disk. 

D. It writes when a user commits a transaction. 

Answer: D

Reference: http://docs.oracle.com/cd/B19306_01/server.102/b14220/process.htm (see log writer process (LGWR)) 

Q12. Which three statements are true about SQL plan directives? 

A. They are tied to a specific statement or SQL ID. 

B. They instruct the maintenance job to collect missing statistics or perform dynamic sampling to generate a more optimal plan. 

C. They are used to gather only missing statistics. 

D. They are created for a query expression where statistics are missing or the cardinality estimates by the optimizer are incorrect. 

E. They instruct the optimizer to create only column group statistics. 

F. They improve plan accuracy by persisting both compilation and execution statistics in the SYSAUX tablespace. 

Answer: B,D,E 

Explanation: During SQL execution, if a cardinality misestimate occurs, then the database creates SQL plan directives. During SQL compilation, the optimizer examines the query corresponding to the directive to determine whether missing extensions or histograms exist (D). The optimizer records any missing extensions. Subsequent DBMS_STATS calls collect statistics for the extensions. 

The optimizer uses dynamic sampling whenever it does not have sufficient statistics corresponding to the directive. (B, not C) 

E: Currently, the optimizer monitors only column groups. The optimizer does not create an extension on expressions. 

Incorrect: 

Not A: SQL plan directives are not tied to a specific SQL statement or SQL ID. 

Note: 

* A SQL plan directive is additional information and instructions that the optimizer can use to generate a more optimal plan. For example, a SQL plan directive can instruct the optimizer to record a missing extension. 
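
The directives and the column groups they target can be inspected with a hedged query like the following (standard dictionary views): 

SQL> SELECT d.directive_id, d.type, d.state, d.reason,
            o.owner, o.object_name, o.subobject_name
     FROM   dba_sql_plan_directives d
            JOIN dba_sql_plan_dir_objects o
              ON o.directive_id = d.directive_id
     ORDER  BY d.directive_id;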

Q13. Which two statements are true about the Oracle Direct Network File System (DNFS)? 

A. It utilizes the OS file system cache. 

B. A traditional NFS mount is not required when using Direct NFS. 

C. Oracle Disk Manager can manage NFS on its own, without using the operating system kernel NFS driver. 

D. Direct NFS is available only in UNIX platforms. 

E. Direct NFS can load-balance I/O traffic across multiple network adapters. 

Answer: C,E 

Explanation: E: Performance is improved by load balancing across multiple network interfaces (if available). 

Note: 

* To enable Direct NFS Client, you must replace the standard Oracle Disk Manager (ODM) library with one that supports Direct NFS Client. 

Incorrect: 

Not A: Direct NFS Client is capable of performing concurrent direct I/O, which bypasses any operating system level caches and eliminates any operating system write-ordering locks. 

Not B: To use Direct NFS Client, the NFS file systems must first be mounted and available over regular NFS mounts. Oracle Direct NFS (dNFS) is an optimized NFS (Network File System) client that provides faster and more scalable access to NFS storage located on NAS storage devices (accessible over TCP/IP). 

Not D: Direct NFS is provided as part of the database kernel, and is thus available on all supported database platforms - even those that don't support NFS natively, like Windows. 

Note: 

* Direct NFS is built directly into the database kernel - just like ASM, which is mainly used with DAS or SAN storage. 

* Oracle Direct NFS (dNFS) is an internal I/O layer that provides faster access to large NFS files than traditional NFS clients. 
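
Once the Direct NFS ODM library has been enabled, a hedged way to confirm that the client is active and to see the network paths it can load-balance across is to query the Direct NFS dynamic views; more than one row per server in V$DNFS_CHANNELS indicates multiple interfaces are in use: 

SQL> SELECT svrname, dirname FROM v$dnfs_servers;

SQL> SELECT svrname, path FROM v$dnfs_channels;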

Q14. You are about to plug a multi-terabyte non-CDB into an existing multitenant container database (CDB). 

The characteristics of the non-CDB are as follows: 

Version: Oracle Database 11g Release 2 (11.2.0.2.0) 64-bit 
Character set: AL32UTF8 
National character set: AL16UTF16 
O/S: Oracle Linux 6 64-bit 

The characteristics of the CDB are as follows: 

Version: Oracle Database 12c Release 1 64-bit 
Character set: AL32UTF8 
National character set: AL16UTF16 
O/S: Oracle Linux 6 64-bit 

Which technique should you use to minimize down time while plugging this non-CDB into the CDB? 

A. Transportable database 

B. Transportable tablespace 

C. Data Pump full export/import 

D. The DBMS_PDB package 

E. RMAN 

Answer:

Explanation: * Overview, example: 

-Log into ncdb12c as sys 

-Get the database in a consistent state by shutting it down cleanly. 

-Open the database in read only mode 

-Run DBMS_PDB.DESCRIBE to create an XML file describing the database. 

-Shut down ncdb12c 

-Connect to target CDB (CDB2) 

-Check whether non-cdb (NCDB12c) can be plugged into CDB(CDB2) 

-Plug-in Non-CDB (NCDB12c) as PDB(NCDB12c) into target CDB(CDB2). 

-Access the PDB and run the noncdb_to_pdb.sql script. 

-Open the new PDB in read/write mode. 

* You can easily plug an Oracle Database 12c non-CDB into a CDB. Just create a PDB manifest file for the non-CDB, and then use the manifest file to create a cloned PDB in the CDB. 

* Note that to plugin a non-CDB database into a CDB, the non-CDB database needs to be of version 12c as well. So existing 11g databases will need to be upgraded to 12c before they can be part of a 12c CDB. 
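
A minimal sketch of the plug-in flow outlined above, assuming the source has already been upgraded to 12c and has described itself into an XML manifest (file names are hypothetical): 

-- On the source database, opened read-only:
SQL> EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/ncdb12c.xml');

-- On the target CDB, check compatibility and plug in:
SQL> SET SERVEROUTPUT ON
SQL> BEGIN
       IF DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/tmp/ncdb12c.xml') THEN
         DBMS_OUTPUT.PUT_LINE('Compatible');
       ELSE
         DBMS_OUTPUT.PUT_LINE('Check PDB_PLUG_IN_VIOLATIONS');
       END IF;
     END;
     /
SQL> CREATE PLUGGABLE DATABASE ncdb12c USING '/tmp/ncdb12c.xml' COPY;
SQL> ALTER SESSION SET CONTAINER = ncdb12c;
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
SQL> ALTER PLUGGABLE DATABASE ncdb12c OPEN;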

Q15. Flashback is enabled for your multitenant container database (CDB), which contains two pluggable databases (PDBs). A local user was accidentally dropped from one of the PDBs. 

You want to flash back the PDB to the time before the local user was dropped. You connect to the CDB and execute the following commands: 

SQL> SHUTDOWN IMMEDIATE 
SQL> STARTUP MOUNT 
SQL> FLASHBACK DATABASE TO TIME "TO_DATE('08/20/12', 'MM/DD/YY')"; 

Examine following commands: 

1. ALTER PLUGGABLE DATABASE ALL OPEN; 

2. ALTER DATABASE OPEN; 

3. ALTER DATABASE OPEN RESETLOGS; 

Which command or commands should you execute next to allow updates to the flashed back schema? 

A. Only 1 

B. Only 2 

C. Only 3 

D. 3 and 1 

E. 1 and 2 

Answer: D

Explanation: Example (see the steps below): 

Step 1: 

Run the RMAN FLASHBACK DATABASE command. 

You can specify the target time by using a form of the command shown in the following examples: 

FLASHBACK DATABASE TO SCN 46963; 

FLASHBACK DATABASE TO RESTORE POINT BEFORE_CHANGES; 

FLASHBACK DATABASE TO TIME "TO_DATE('09/20/05','MM/DD/YY')"; 

When the FLASHBACK DATABASE command completes, the database is left mounted and recovered to the specified target time. 

Step 2: 

Make the database available for updates by opening the database with the RESETLOGS option. If the database is currently open read-only, then execute the following commands in SQL*Plus: 

SHUTDOWN IMMEDIATE 

STARTUP MOUNT 

ALTER DATABASE OPEN RESETLOGS; 
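
Mapped back onto the PDB scenario in this question, a hedged sketch of the command sequence after the FLASHBACK DATABASE completes would be: 

SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> ALTER PLUGGABLE DATABASE ALL OPEN;

Opening the CDB with RESETLOGS (command 3) makes the container available for updates, and opening the pluggable databases (command 1) then allows updates to the flashed back schema. 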

Q16. An application accesses a small lookup table frequently. You notice that the required data blocks are getting aged out of the default buffer cache. 

How would you guarantee that the blocks for the table never age out? 

A. Configure the KEEP buffer pool and alter the table with the corresponding storage clause. 

B. Increase the database buffer cache size. 

C. Configure the RECYCLE buffer pool and alter the table with the corresponding storage clause. 

D. Configure Automatic Shared Memory Management. 

E. Configure Automatic Memory Management. 

Answer: A

Explanation: Schema objects are referenced with varying usage patterns; therefore, their cache behavior may be quite different. Multiple buffer pools enable you to address these differences. You can use a KEEP buffer pool to maintain objects in the buffer cache and a RECYCLE buffer pool to prevent objects from consuming unnecessary space in the cache. When an object is allocated to a cache, all blocks from that object are placed in that cache. Oracle maintains a DEFAULT buffer pool for objects that have not been assigned to one of the buffer pools. 
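
A minimal sketch of answer A, assuming the lookup table is small enough to fit in the pool (sizes and names are hypothetical): 

SQL> ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 64M;

SQL> ALTER TABLE app.lookup_codes STORAGE (BUFFER_POOL KEEP);

Sizing the KEEP pool at least as large as the objects assigned to it keeps their blocks cached, while infrequently reused segments can be directed to the RECYCLE pool instead. 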
