Question 101:
What is the purpose of the KEEP pool in the database buffer cache?
A) To store frequently accessed blocks indefinitely
B) To cache blocks for specific objects that should remain in memory
C) To reserve space for sort operations
D) To cache data dictionary information
Answer: B
Explanation:
The KEEP pool in the database buffer cache is designed to cache blocks for specific objects that should remain in memory, preventing them from being aged out by the normal least recently used algorithm that manages the default buffer pool. This capability is valuable for small, frequently accessed tables or indexes where keeping the data cached eliminates physical I/O and ensures consistent high-speed access.
Oracle’s buffer cache can be divided into multiple pools including the default pool, KEEP pool, and RECYCLE pool. The KEEP pool is configured using the DB_KEEP_CACHE_SIZE initialization parameter, which specifies how much memory to allocate for this specialized cache. Objects are assigned to the KEEP pool using the STORAGE clause in CREATE or ALTER statements, specifying BUFFER_POOL KEEP.
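A minimal configuration sketch, using illustrative object names and sizes, looks like this:

-- Reserve memory for the KEEP pool (size is illustrative)
ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 256M SCOPE = BOTH;

-- Assign a small, frequently accessed lookup table to the KEEP pool
ALTER TABLE country_codes STORAGE (BUFFER_POOL KEEP);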
Ideal candidates for the KEEP pool include small reference tables that are accessed frequently by many queries, small dimension tables in data warehouse environments, frequently used indexes that fit entirely in memory, and lookup tables containing codes or descriptions referenced constantly. By caching these objects in the KEEP pool, you ensure they remain available in memory regardless of other database activity that might otherwise age them out of the default pool.
The KEEP pool differs from simply having a large default buffer cache because it provides guaranteed memory allocation for specific objects. In a large default pool, even important small tables might be aged out when large table scans or other memory-intensive operations occur. The KEEP pool reserves dedicated space that protects important small objects from this displacement.
Monitoring KEEP pool effectiveness involves examining cache hit ratios and physical reads for objects assigned to the pool. If objects in the KEEP pool still experience significant physical I/O, the pool might be undersized or the objects might be too large for the allocated space. Adjusting DB_KEEP_CACHE_SIZE or reconsidering which objects should use the KEEP pool may be necessary.
Option A is incorrect because while objects in the KEEP pool tend to stay cached longer, there is no absolute guarantee they remain indefinitely, especially if the pool is undersized.
Option C is incorrect because sort operations use the sort area in the PGA or temporary tablespaces, not the KEEP pool.
Option D is incorrect because data dictionary information is cached in the data dictionary cache within the shared pool, not in the KEEP pool.
Question 102:
Which command displays the current database name and unique database identifier?
A) SELECT NAME FROM V$DATABASE
B) SHOW DATABASE
C) SELECT DATABASE()
D) Both A and DBID query
Answer: A
Explanation:
The SELECT NAME FROM V$DATABASE command displays the current database name, and the same view also contains the DBID column showing the unique database identifier. The V$DATABASE view is a dynamic performance view that contains a single row with various attributes about the database instance and the database itself, including fundamental identifiers and configuration information.
The NAME column in V$DATABASE contains the database name as specified during database creation, typically limited to eight characters. This name is also known as the DB_NAME and appears in initialization parameters and various database operations. The DBID column contains a unique numeric identifier assigned to the database during creation, which remains constant throughout the database’s lifetime and uniquely identifies it across all Oracle databases.
Additional useful columns in V$DATABASE include CREATED showing the database creation timestamp, LOG_MODE indicating whether the database is in ARCHIVELOG or NOARCHIVELOG mode, OPEN_MODE showing the current database state such as READ WRITE or READ ONLY, DATABASE_ROLE indicating whether this is a primary or standby database in Data Guard configurations, and PLATFORM_NAME identifying the operating system platform.
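A typical verification query pulls these identifiers and status columns together:

SELECT name, dbid, created, log_mode, open_mode
FROM v$database;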
Common administrative queries against V$DATABASE include checking the database name and identifier for documentation or verification, confirming ARCHIVELOG mode status before maintenance operations, verifying database role in Data Guard environments, and gathering basic database information for inventory or monitoring systems. The view provides essential information that administrators frequently need during routine operations and troubleshooting.
The difference between database name and database instance name is important to understand. The database name identifies the physical database files and control file entries. The instance name identifies the running Oracle instance including its memory structures and background processes. In single-instance databases, these names often match, but in RAC environments, multiple instances with different names access the same database.
Option B is incorrect because SHOW DATABASE is not a valid Oracle SQL*Plus or SQL command.
Option C is incorrect because DATABASE() is not a valid Oracle function, though similar functions exist in other database systems like MySQL.
Option D is partially correct but incomplete as a standalone answer, since querying DBID requires accessing V$DATABASE.
Question 103:
What is the purpose of the COMPATIBLE initialization parameter?
A) To enable compatibility with older Oracle client versions
B) To control which database features can be used and ensure compatibility with specific Oracle versions
C) To manage compatibility between RAC instances
D) To ensure compatibility with third-party applications
Answer: B
Explanation:
The COMPATIBLE initialization parameter controls which database features can be used and ensures compatibility with specific Oracle versions by setting the minimum database version for which the database must remain compatible. Once COMPATIBLE is set to a specific version, features introduced in that version and earlier can be used, but features from newer versions are disabled to maintain compatibility with the specified version.
This parameter serves a critical role during database upgrades and migrations. When upgrading from one Oracle version to another, you can upgrade the database software and startup the upgraded instance while keeping COMPATIBLE set to the previous version. This allows you to test the new environment while maintaining the ability to downgrade if problems occur. Once you are confident the upgrade is successful, you can advance COMPATIBLE to enable new features.
Advancing the COMPATIBLE parameter is a one-way operation that cannot be reversed without recreating the database or restoring from backup. When you increase COMPATIBLE, Oracle may make irreversible changes to the database structure to enable new features. For example, advancing from COMPATIBLE = '12.2.0' to '19.0.0' enables Oracle Database 19c features but prevents downgrading to 12.2 without restoring from a backup taken before the change.
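The change itself is brief; a minimal sketch (version value illustrative) is:

-- COMPATIBLE can only be changed in the SPFILE and takes effect at the next restart
ALTER SYSTEM SET COMPATIBLE = '19.0.0' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP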
The parameter affects various database components including data dictionary structures, storage formats, optimizer behavior, and available features. Setting COMPATIBLE too low prevents use of new features that might improve performance or functionality. Setting it too high prevents downgrading if unexpected issues arise after an upgrade. Most organizations advance COMPATIBLE gradually after thorough testing confirms the new version works correctly.
Best practices recommend leaving COMPATIBLE at the previous version initially after upgrading database software, testing thoroughly in the new environment, and advancing COMPATIBLE only after confirming all applications work correctly and the decision to remain at the new version is final. This conservative approach provides a safety margin for rollback if major issues are discovered.
Option A is incorrect because client compatibility is managed separately through Oracle Net and does not depend on the COMPATIBLE parameter.
Option C is incorrect because RAC instance compatibility is managed through different mechanisms, though COMPATIBLE must be consistent across RAC instances.
Option D is incorrect because application compatibility depends on application design and Oracle features used, not directly on the COMPATIBLE parameter.
Question 104:
Which view shows information about current database jobs scheduled through DBMS_SCHEDULER?
A) DBA_JOBS
B) DBA_SCHEDULER_JOBS
C) V$JOBS
D) USER_JOBS
Answer: B
Explanation:
The DBA_SCHEDULER_JOBS view shows comprehensive information about current database jobs scheduled through DBMS_SCHEDULER, including job name, owner, schedule information, program details, status, and execution history. This view is essential for monitoring and managing scheduled jobs in modern Oracle databases that use the DBMS_SCHEDULER package for job scheduling.
DBA_SCHEDULER_JOBS contains detailed information about each scheduled job including JOB_NAME uniquely identifying the job, OWNER showing who owns the job, JOB_TYPE indicating the type of work performed such as PLSQL_BLOCK or STORED_PROCEDURE, SCHEDULE_NAME referencing the schedule that controls execution timing, ENABLED showing whether the job is currently active, STATE indicating current status like SCHEDULED or RUNNING, and LAST_START_DATE showing when the job last executed.
Related scheduler views provide additional job information. DBA_SCHEDULER_JOB_RUN_DETAILS contains historical execution information including start times, end times, status, and error messages for each job run. DBA_SCHEDULER_RUNNING_JOBS shows jobs currently executing. Together, these views provide complete visibility into scheduled job activity and history.
Monitoring scheduled jobs involves regularly querying scheduler views to verify jobs are running as expected, check for failed executions, identify jobs that run longer than expected, and confirm that job schedules are configured correctly. Queries commonly filter by job owner, status, or last run time to focus on specific subsets of jobs requiring attention.
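A monitoring query along these lines (the filter is illustrative) focuses attention on active jobs:

SELECT job_name, owner, state, enabled, last_start_date
FROM dba_scheduler_jobs
WHERE enabled = 'TRUE'
ORDER BY last_start_date DESC;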
The difference between DBA_JOBS and DBA_SCHEDULER_JOBS reflects Oracle’s evolution in job scheduling. DBA_JOBS shows jobs scheduled through the older DBMS_JOB package, which has been deprecated in favor of DBMS_SCHEDULER. Modern databases should use DBMS_SCHEDULER and monitor jobs through DBA_SCHEDULER_JOBS. Legacy jobs created with DBMS_JOB appear in DBA_JOBS but not in DBA_SCHEDULER_JOBS.
Option A is incorrect because DBA_JOBS shows jobs scheduled through the older DBMS_JOB package, not DBMS_SCHEDULER jobs.
Option C is incorrect because V$JOBS is not a standard Oracle dynamic performance view for job information.
Option D is incorrect because USER_JOBS shows only jobs owned by the current user using the deprecated DBMS_JOB package, not DBMS_SCHEDULER jobs.
Question 105:
What is the purpose of the V$SQLTEXT view in Oracle Database?
A) To display complete SQL statement text for statements in the shared pool
B) To show SQL execution statistics
C) To display SQL execution plans
D) To store SQL statements permanently
Answer: A
Explanation:
The V$SQLTEXT view displays complete SQL statement text for statements in the shared pool, breaking long SQL statements into multiple rows of manageable size. Each row contains a piece of the SQL text along with the SQL_ID and position information that allows reconstructing the complete statement. This view is essential when analyzing SQL performance and identifying problematic queries, especially for long SQL statements that exceed the text length displayed in other views.
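Reconstructing a statement means ordering its pieces by the PIECE column; a minimal sketch (the SQL_ID is illustrative) is:

SELECT sql_text
FROM v$sqltext
WHERE sql_id = '7b2twsn8vgfsq'  -- illustrative SQL_ID
ORDER BY piece;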
The view is particularly useful when investigating performance issues involving long SQL statements generated by applications or ORMs that produce complex queries. Being able to see the complete SQL text helps understand what the query is trying to accomplish, identify inefficient constructs, and determine appropriate tuning strategies. Without V$SQLTEXT, administrators would be limited to the truncated text in V$SQL.
Performance monitoring tools and scripts commonly query V$SQLTEXT when displaying SQL statements to users. Enterprise Manager and other monitoring solutions internally query this view to present complete SQL text in their interfaces. The view’s structure with fixed-size text pieces and ordered retrieval makes it reliable for reconstructing statements of any length within Oracle’s SQL size limits.
Option B is incorrect because SQL execution statistics are provided by V$SQL and V$SQLSTATS, not V$SQLTEXT, which focuses only on statement text.
Option C is incorrect because SQL execution plans are shown in V$SQL_PLAN, not V$SQLTEXT.
Option D is incorrect because V$SQLTEXT shows statements currently in the shared pool, which is volatile memory, not permanent storage.
Question 106:
Which parameter specifies the maximum size of the PGA that can be allocated to a single process?
A) PGA_AGGREGATE_TARGET
B) PGA_AGGREGATE_LIMIT
C) WORK_AREA_SIZE
D) There is no single parameter; it is managed automatically
Answer: B
Explanation:
The PGA_AGGREGATE_LIMIT parameter specifies the maximum size of PGA memory that can be allocated across all processes collectively, not for a single process specifically. However, it effectively limits individual processes by constraining total PGA usage. When total PGA usage approaches this limit, Oracle takes action to prevent exceeding it, including terminating sessions that are consuming excessive PGA memory.
Prior to Oracle Database 12c, there was no hard limit on total PGA usage. The PGA_AGGREGATE_TARGET parameter provided a target for automatic PGA memory management, but processes could exceed this target if they needed more memory for operations like large sorts or hash joins. This could lead to excessive memory consumption and potential operating system paging or memory exhaustion.
PGA_AGGREGATE_LIMIT addresses this issue by establishing a firm ceiling on total PGA usage. When aggregate PGA usage exceeds this limit, Oracle identifies sessions using the most PGA memory and may terminate them to reduce overall usage. This protection prevents runaway memory consumption from causing database or system instability.
The default value for PGA_AGGREGATE_LIMIT is typically calculated as twice the PGA_AGGREGATE_TARGET value, or three gigabytes if PGA_AGGREGATE_TARGET is not set. Administrators can adjust this parameter based on available system memory and workload characteristics. Setting it too low may cause unnecessary session terminations, while setting it too high may allow memory exhaustion.
Individual work area sizes within the PGA are controlled by automatic PGA memory management when PGA_AGGREGATE_TARGET is set. Oracle distributes available PGA memory among active sessions based on their requirements and system-wide availability. Sessions performing memory-intensive operations like sorts receive more PGA allocation within the constraints of PGA_AGGREGATE_TARGET and PGA_AGGREGATE_LIMIT.
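Checking and adjusting these settings is straightforward; the 4G value below is illustrative:

SHOW PARAMETER pga_aggregate

-- Raise the hard ceiling on total PGA usage
ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 4G SCOPE = BOTH;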
Option A is incorrect because PGA_AGGREGATE_TARGET sets a target for automatic PGA tuning across all processes, not a maximum limit.
Option C is incorrect because WORK_AREA_SIZE is not a specific Oracle parameter name, though work area sizing is part of PGA management.
Option D is incorrect because PGA_AGGREGATE_LIMIT does provide a specific limit on total PGA usage.
Question 107:
What is the purpose of online table redefinition in Oracle Database?
A) To drop and recreate tables quickly
B) To reorganize tables and change their structure while remaining online and accessible
C) To backup tables before modifications
D) To partition tables automatically
Answer: B
Explanation:
Online table redefinition reorganizes tables and changes their structure while remaining online and accessible to users, avoiding the downtime traditionally required for major table restructuring operations. This feature enables administrators to perform significant table modifications including adding or removing columns, changing storage parameters, moving tables to different tablespaces, converting heap tables to partitioned tables, and reorganizing data without interrupting application availability.
The online redefinition process uses the DBMS_REDEFINITION package and works by creating an interim table with the desired new structure, copying data from the original table to the interim table while both remain accessible, synchronizing changes that occur during the copy process through materialized view logs, and finally swapping the tables so the interim table becomes the production table. Throughout this process, DML operations can continue against the original table with minimal impact on performance.
Common uses for online redefinition include reorganizing fragmented tables to reclaim space and improve performance, changing table organization from heap to partitioned or index-organized, moving large tables to different tablespaces or storage tiers, adding or modifying columns without downtime, and implementing compression on existing tables. These operations would traditionally require lengthy maintenance windows but can now occur during normal business hours.
The redefinition process begins by calling DBMS_REDEFINITION.CAN_REDEF_TABLE to verify the table can be redefined, creating an interim table with the desired new structure, calling START_REDEF_TABLE to begin the redefinition and initiate synchronization, optionally calling SYNC_INTERIM_TABLE to manually synchronize changes during lengthy redefinitions, and finally calling FINISH_REDEF_TABLE to complete the process and swap tables.
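A condensed sketch of that sequence, with illustrative schema and table names, follows; the interim table ORDERS_INT must be created with the desired structure before the redefinition starts:

BEGIN
  -- Raises an error if the table cannot be redefined
  DBMS_REDEFINITION.CAN_REDEF_TABLE('HR', 'ORDERS');
END;
/

BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('HR', 'ORDERS', 'ORDERS_INT');
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('HR', 'ORDERS', 'ORDERS_INT');  -- optional mid-run sync
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('HR', 'ORDERS', 'ORDERS_INT');
END;
/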
Performance during online redefinition depends on factors including table size, DML activity rate during redefinition, system resources available for the operation, and whether indexes and constraints must be recreated. Large tables may take hours to redefine, but applications remain available throughout the process. Administrators can monitor progress through data dictionary views and adjust synchronization frequency to balance currency against system load.
Option A is incorrect because simply dropping and recreating tables causes downtime and data loss, which online redefinition specifically avoids.
Option C is incorrect because backing up tables is a separate operation, though backups should be taken before major restructuring operations.
Option D is incorrect because while online redefinition can convert tables to partitioned, automatic partitioning is not its primary purpose.
Question 108:
Which view shows current blocking relationships between sessions?
A) V$LOCK
B) V$SESSION
C) V$BLOCKING_SESSIONS
D) V$SESSION_WAIT
Answer: C
Explanation:
The V$BLOCKING_SESSIONS view provides insight into current blocking relationships between sessions, showing which sessions are waiting for locks and which ones are blocking them. It was introduced to simplify lock contention analysis, replacing the need to query and join multiple views like V$LOCK. This view contains the session identifier of both the waiting session and the blocking session, making it easier to track lock dependencies.
Before the introduction of V$BLOCKING_SESSIONS, administrators had to join V$LOCK with itself or use complex queries involving V$SESSION to identify blocking sessions. The new view eliminates this complexity by directly displaying the relationships, which is particularly useful for diagnosing and resolving blocking issues.
Lock contention investigations typically begin by querying V$BLOCKING_SESSIONS to identify which sessions are blocked and by whom. Once blocking sessions are identified, administrators can investigate further by checking the SQL statements being executed through V$SESSION and V$SQL, determining the duration of the blocks, and deciding whether to allow the transactions to complete naturally or terminate the blocking sessions if they are stuck.
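As one concrete starting point, the BLOCKING_SESSION column of V$SESSION supports a simple diagnostic query:

SELECT sid, serial#, username, blocking_session, event, seconds_in_wait
FROM v$session
WHERE blocking_session IS NOT NULL;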
V$BLOCKING_SESSIONS also helps identify blocking chains—when Session A blocks Session B, which in turn blocks Session C. By identifying and resolving the root blocker, typically Session A, administrators can resolve the entire chain and allow all waiting sessions to proceed.
Option A is incorrect because V$LOCK requires complex self-joins to identify blocking relationships, while V$BLOCKING_SESSIONS provides a simpler solution. Option B is incorrect because V$SESSION shows session information including the blocking_session column but lacks the focused view on blocking relationships that V$BLOCKING_SESSIONS provides. Option D is incorrect because V$SESSION_WAIT shows which sessions are currently waiting for resources but does not directly reveal blocking relationships as clearly as V$BLOCKING_SESSIONS.
Question 109:
What is the purpose of SecureFiles in Oracle Database?
A) To encrypt all database files
B) To provide advanced LOB storage with features like compression and deduplication
C) To secure database connections
D) To manage file system permissions
Answer: B
Explanation:
SecureFiles provide advanced LOB storage with features like compression, deduplication, and encryption specifically for LOB data types including CLOB, BLOB, and NCLOB. This storage format offers significant improvements over the older BasicFiles LOB storage including better performance, space efficiency through compression, storage savings through deduplication of duplicate LOB content, and built-in encryption capabilities for sensitive LOB data.
Oracle introduced SecureFiles to address limitations and performance issues with BasicFiles LOB storage. SecureFiles use more efficient storage structures, provide better concurrency for LOB operations, support larger LOB sizes, and integrate advanced features that previously required separate implementation. The name SecureFiles refers to security features but the benefits extend well beyond security to include performance and storage efficiency.
Compression for SecureFiles LOBs reduces storage requirements for text and other compressible content. Oracle offers multiple compression levels including MEDIUM for balanced compression and performance, and HIGH for maximum compression at the cost of additional CPU usage. Compression is transparent to applications and can significantly reduce storage costs for LOB-heavy databases.
Deduplication eliminates redundant copies of identical LOB content by storing only one copy of each unique LOB value and using references for duplicates. This feature is valuable when many rows contain identical or similar LOB content, such as documents or images that are duplicated across records. Deduplication can dramatically reduce storage for such data.
Encryption for SecureFiles LOBs protects sensitive content using Oracle’s Transparent Data Encryption. LOB encryption operates at the column level, encrypting LOB data transparently as it is written and decrypting it when read. Applications require no modifications to use encrypted LOBs.
Creating tables with SecureFiles LOBs requires specifying the STORE AS SECUREFILE clause in the LOB storage definition. For example, CREATE TABLE documents (id NUMBER, content CLOB) LOB(content) STORE AS SECUREFILE creates a table with SecureFiles LOB storage. Existing BasicFiles LOBs can be converted to SecureFiles through online redefinition or other migration approaches.
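A sketch combining the storage options described above (table and column names are illustrative):

CREATE TABLE documents (
  id      NUMBER,
  content CLOB
)
LOB (content) STORE AS SECUREFILE (
  COMPRESS HIGH
  DEDUPLICATE
);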
Option A is incorrect because SecureFiles are specific to LOB storage, not all database files, and security is just one of multiple features.
Option C is incorrect because securing database connections involves Oracle Net encryption and other network security features, not SecureFiles.
Option D is incorrect because file system permissions are managed at the operating system level, not through SecureFiles which is a database storage feature.
Question 110:
Which command shows the structure of a table including column names and data types?
A) SHOW TABLE table_name
B) DESCRIBE table_name
C) DISPLAY table_name
D) EXPLAIN table_name
Answer: B
Explanation:
The DESCRIBE command shows the structure of a table including column names, data types, and null constraints. This command is available in SQL*Plus and SQL Developer, providing a quick way to view table structure without querying data dictionary views. DESCRIBE can be abbreviated as DESC, making it one of the most commonly used commands for exploring database schema structure.
When you execute DESCRIBE table_name or DESC table_name, Oracle displays each column in the table with its name, whether it allows null values, and its data type including length or precision. For example, DESC employees might show columns like EMPLOYEE_ID (NUMBER NOT NULL), FIRST_NAME (VARCHAR2(50)), SALARY (NUMBER(8,2)), etc.
The output format shows the column name on the left, the nullable status in the middle indicating NULL or NOT NULL, and the data type with size on the right. This concise format provides all essential information about table structure needed for writing queries, understanding data requirements, and analyzing schema design.
DESCRIBE works with various database objects beyond tables including views showing their column structure, synonyms resolving to their underlying object structure, packages listing procedures and functions, and functions or procedures showing parameter lists. This versatility makes DESCRIBE valuable for exploring different types of database objects.
For programmatic access to table structure, data dictionary views like DBA_TAB_COLUMNS, ALL_TAB_COLUMNS, or USER_TAB_COLUMNS provide detailed column metadata that can be queried with SQL. These views include additional information not shown by DESCRIBE such as default values, character set information, hidden column status, and statistics. DESCRIBE provides a quick interactive view while data dictionary views enable complex queries about schema structure.
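For example, a dictionary query roughly equivalent to DESCRIBE might be:

SELECT column_name, data_type, data_length, nullable
FROM user_tab_columns
WHERE table_name = 'EMPLOYEES'
ORDER BY column_id;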
SQL Developer and other database tools provide graphical interfaces for viewing table structure with additional details like constraints, indexes, triggers, and dependencies. These visual tools complement the command-line DESCRIBE command, offering different ways to explore database schema based on user preference and task requirements.
Option A is incorrect because SHOW TABLE is not a valid Oracle command, though similar commands exist in other database systems.
Option C is incorrect because DISPLAY is not a valid Oracle command for showing table structure.
Option D is incorrect because EXPLAIN is used for execution plans, not describing table structure.
Question 111:
What is the purpose of the DEFAULT clause when adding a new column to a table?
A) To specify the column’s data type
B) To assign a default value to the column for existing and future rows
C) To make the column mandatory
D) To create an index on the column
Answer: B
Explanation:
The DEFAULT clause when adding a new column assigns a default value to that column for both existing rows and future rows where no explicit value is provided during insertion. This feature is essential when adding columns to populated tables because it determines what value existing rows will have for the new column and provides automatic values for subsequent inserts that omit the column.
When you add a column with a DEFAULT clause to a table that already contains data, Oracle handles existing rows efficiently. In recent Oracle versions, when adding a nullable column with a default value, Oracle updates only the data dictionary and does not immediately update existing rows. The default value is returned when those rows are queried, but physical row updates occur only when rows are modified for other reasons. This optimization makes adding columns with defaults very fast even on large tables.
For example, ALTER TABLE employees ADD status VARCHAR2(20) DEFAULT 'Active' adds a status column with a default value of 'Active'. Existing employee records will show 'Active' for status when queried, and future inserts that omit status will automatically receive 'Active' unless explicitly specified otherwise.
The DEFAULT clause works with all data types including literals, expressions, and functions. Common defaults include constant values like 'N/A' or 0, SYSDATE or SYSTIMESTAMP for date columns to automatically record when rows are created, USER to capture who created the row, and sequence values through sequence functions. These defaults automate data entry and ensure consistency.
The relationship between DEFAULT and NOT NULL constraints is important. When adding a NOT NULL column to a table with existing rows, you must provide a DEFAULT value because existing rows need a non-null value for the new column. Without DEFAULT, adding a NOT NULL column to a populated table would fail because Oracle cannot determine what value existing rows should have.
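A sketch combining both clauses (names illustrative):

-- Without the DEFAULT, this would fail on a populated table
ALTER TABLE orders ADD created_on DATE DEFAULT SYSDATE NOT NULL;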
Applications benefit from column defaults by reducing the amount of explicit data they must provide during inserts, ensuring consistency for fields that typically have standard values, and simplifying data entry forms where certain fields have common defaults. Defaults make applications more robust by preventing null values in situations where nulls are inappropriate.
Option A is incorrect because the data type is specified separately from the DEFAULT clause.
Option C is incorrect because making a column mandatory requires a NOT NULL constraint, not just a DEFAULT clause, though they are often used together.
Option D is incorrect because creating indexes requires separate CREATE INDEX commands, not the DEFAULT clause.
Question 112:
Which feature allows queries to continue using old execution plans even after statistics change?
A) SQL Plan Management
B) Stored Outlines
C) SQL Profiles
D) Cursor Sharing
Answer: A
Explanation:
SQL Plan Management allows queries to continue using old execution plans even after statistics change by maintaining a SQL plan baseline containing accepted execution plans for each statement. This feature prevents performance regressions that can occur when the optimizer chooses different, potentially worse plans after statistics updates, schema changes, or database upgrades.
SQL Plan Management works through plan evolution. When enabled, the optimizer first checks if a SQL statement has a plan baseline. If a baseline exists, the optimizer uses one of the accepted plans from the baseline rather than generating a new plan from scratch. If the optimizer discovers a new plan that it estimates would perform better, this plan is added to the plan history but not used automatically. The new plan must be verified to perform well before being accepted into the baseline.
Creating plan baselines can happen automatically when the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES parameter is set to TRUE, causing Oracle to automatically create plan baselines for all repeatable SQL statements. Baselines can also be created manually by loading plans from cursor cache, SQL tuning sets, or staging tables. This flexibility enables capturing known good plans before risky operations like upgrades.
The plan evolution process involves testing new plans against accepted plans using actual execution to verify they perform better. The DBMS_SPM package provides procedures for evolving plans manually or automatically. Automatic plan evolution runs during maintenance windows and promotes plans that demonstrate better performance. Manual evolution gives administrators complete control over which plans are accepted.
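A minimal sketch of manual baseline capture from the cursor cache (the SQL_ID is illustrative):

DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                    sql_id => '7b2twsn8vgfsq');  -- illustrative SQL_ID
END;
/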
Option B is incorrect because Stored Outlines are an older, deprecated feature that Oracle replaced with SQL Plan Management for maintaining stable execution plans. Option C is incorrect because SQL Profiles provide additional statistics to help the optimizer make better decisions but do not prevent plan changes like SQL Plan Management does. Option D is incorrect because Cursor Sharing controls how Oracle handles literals in SQL statements, not plan stability.
Question 113:
What is the purpose of the PURGE option when dropping tables?
A) To delete data gradually over time
B) To permanently remove the table without placing it in the recycle bin
C) To compress the table before dropping
D) To backup the table before dropping
Answer: B
Explanation:
The PURGE option when dropping tables permanently removes the table without placing it in the recycle bin, making the drop operation immediate and irreversible. This is different from the standard DROP TABLE command which moves the table to the recycle bin, allowing recovery through Flashback Drop if needed. Using PURGE ensures the table and its storage are immediately released.
When you execute DROP TABLE table_name PURGE, Oracle completely removes the table, its data, indexes, constraints, and triggers without any possibility of recovery through the recycle bin. The space occupied by the table is immediately returned to the tablespace for reuse by other objects. This operation is faster than a regular drop because Oracle does not need to manage recycle bin entries.
Use cases for PURGE include situations where you are certain the table should not be recoverable, such as dropping temporary tables or test tables that will not be needed again. It is also useful when tablespace space is critically low and you need immediate space recovery rather than waiting for recycle bin cleanup. Security-sensitive scenarios where dropped data must be immediately unrecoverable also warrant using PURGE.
The recycle bin mechanism provides a safety net for accidental drops by keeping dropped objects accessible for recovery. However, objects in the recycle bin still consume tablespace space. When space becomes constrained, Oracle may automatically purge objects from the recycle bin to make room for new data. Manual purging using DROP TABLE with PURGE option gives administrators explicit control over when space is released.
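The related commands are short (the table name is illustrative):

DROP TABLE scratch_data PURGE;  -- drop and bypass the recycle bin entirely
PURGE TABLE scratch_data;       -- remove an already-dropped table from the recycle bin
PURGE RECYCLEBIN;               -- empty the current user's recycle bin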
Option A is incorrect because PURGE does not involve gradual deletion over time but rather immediate permanent removal. Option C is incorrect because compression is not related to the PURGE option when dropping tables. Option D is incorrect because PURGE does not create backups; it prevents recovery by bypassing the recycle bin entirely.
Question 114:
Which parameter controls the number of seconds to wait before timing out on a database connection attempt?
A) CONNECTION_TIMEOUT
B) SQLNET.INBOUND_CONNECT_TIMEOUT
C) CONNECT_TIMEOUT
D) SESSION_TIMEOUT
Answer: B
Explanation:
The SQLNET.INBOUND_CONNECT_TIMEOUT parameter controls the number of seconds the listener waits for a client to complete a connection request before timing out the connection attempt. This parameter is configured in the sqlnet.ora file and provides protection against malicious or malfunctioning clients that open network connections without completing the connection handshake.
Connection timeout protection prevents denial of service attacks where attackers open many connections without completing them, exhausting listener resources. By setting SQLNET.INBOUND_CONNECT_TIMEOUT to an appropriate value such as 10 or 60 seconds, you ensure that incomplete connections are cleaned up automatically, freeing listener resources for legitimate connection requests.
The parameter applies specifically to the connection establishment phase, not to idle connected sessions. Once a connection is fully established and a session is created, different timeout mechanisms like resource manager session timeouts or profile limits control idle session behavior. SQLNET.INBOUND_CONNECT_TIMEOUT only affects the brief period between initial network connection and completion of authentication.
Configuration involves editing the sqlnet.ora file on the database server and adding the line SQLNET.INBOUND_CONNECT_TIMEOUT=value where value is the timeout in seconds. After making changes, reload the listener configuration using lsnrctl reload or restart the listener. No database restart is required since this is a network configuration parameter.
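For example, the sqlnet.ora entry and reload might look like this (the 60-second value is illustrative):

# sqlnet.ora on the database server
SQLNET.INBOUND_CONNECT_TIMEOUT = 60

$ lsnrctl reload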
Option A is incorrect because CONNECTION_TIMEOUT is not the correct parameter name in Oracle network configuration files. Option C is incorrect because CONNECT_TIMEOUT is not a standard Oracle parameter for listener connection timeouts. Option D is incorrect because SESSION_TIMEOUT typically refers to idle session timeouts managed through resource manager or profiles, not connection establishment timeouts.
Question 115:
What is the purpose of the V$PROCESS view?
A) To show SQL statements being processed
B) To display information about currently active operating system processes connected to the instance
C) To monitor process scheduling
D) To display background process statistics only
Answer: B
Explanation:
The V$PROCESS view displays information about currently active operating system processes connected to the Oracle instance, including both background processes and server processes serving user sessions. This view is essential for monitoring process-level activity, diagnosing connection issues, identifying resource consumption at the process level, and understanding the relationship between sessions and operating system processes.
Each row in V$PROCESS represents one Oracle process including the operating system process ID, process address, terminal identifier, program name, memory usage, and CPU usage. Background processes like PMON, SMON, DBWn, and LGWR appear in this view along with dedicated server processes and shared server processes. This comprehensive view of all Oracle processes helps administrators monitor overall instance health.
Common administrative queries join V$PROCESS with V$SESSION to correlate session information with process information. This combination reveals which users are consuming the most CPU or memory, helps identify the operating system process ID for sessions that need to be killed at the OS level, and provides complete visibility into session and process activity. The join typically uses the PADDR column from V$SESSION matching ADDR in V$PROCESS.
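The join described above might be written as:

SELECT s.sid, s.username, p.spid, p.pga_used_mem
FROM v$session s
JOIN v$process p ON p.addr = s.paddr
WHERE s.username IS NOT NULL;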
Process information includes resource consumption metrics like CPU_USED showing cumulative CPU seconds consumed by the process, PGA_USED_MEM and PGA_ALLOC_MEM showing PGA memory usage, and EXECUTION_TYPE indicating whether the process is a background process or server process. These metrics help identify processes that are consuming excessive resources.
Question 116:
Which clause is used with CREATE TABLE to specify that the table should be created only if it does not already exist?
A) IF NOT EXISTS
B) CREATE OR REPLACE TABLE
C) Oracle does not support this syntax
D) IGNORE IF EXISTS
Answer: C
Explanation:
Oracle does not support syntax to create a table only if it does not already exist within the standard CREATE TABLE statement. Unlike some other database systems that support CREATE TABLE IF NOT EXISTS syntax, Oracle requires different approaches to achieve conditional table creation. Attempting to create a table that already exists will result in an error, and handling this requires application logic or PL/SQL exception handling.
To conditionally create tables in Oracle, developers typically use PL/SQL blocks with exception handling. The approach involves attempting to create the table within a BEGIN/END block and catching the exception that occurs if the table already exists. By trapping the “name is already used” error, the code can silently ignore the error and continue execution, effectively achieving conditional creation.
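A common sketch of that pattern (the table is illustrative):

BEGIN
  EXECUTE IMMEDIATE 'CREATE TABLE demo_tab (id NUMBER)';
EXCEPTION
  WHEN OTHERS THEN
    -- ORA-00955: name is already used by an existing object
    IF SQLCODE != -955 THEN
      RAISE;
    END IF;
END;
/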
Another approach involves querying data dictionary views like USER_TABLES or ALL_TABLES to check whether the table exists before attempting creation. If the table is not found in the data dictionary, the code proceeds with creation. This method requires dynamic SQL because the CREATE TABLE statement must be conditionally executed based on the query results.
Third-party tools and frameworks sometimes provide their own mechanisms for conditional table creation. These tools may check for table existence before issuing CREATE statements or may wrap Oracle operations with exception handling. Understanding that this is not native Oracle SQL syntax helps avoid confusion when working with Oracle versus other database systems.
Option A is incorrect because IF NOT EXISTS is not valid Oracle SQL syntax for CREATE TABLE, though it exists in other database systems like MySQL and PostgreSQL. Option B is incorrect because CREATE OR REPLACE TABLE is not valid syntax; Oracle supports CREATE OR REPLACE for views, procedures, and other objects but not for tables. Option D is incorrect because IGNORE IF EXISTS is not valid Oracle syntax.
Question 117:
What is the purpose of the MONITORING clause when creating or altering tables?
A) To enable auditing on the table
B) To track table modifications for statistics gathering and segment advisor recommendations
C) To monitor performance of queries against the table
D) To enable real-time monitoring of DML operations
Answer: B
Explanation:
The MONITORING clause when creating or altering tables tracks table modifications for statistics gathering and segment advisor recommendations by maintaining information about DML activity against the table. This monitoring helps Oracle’s automatic statistics gathering identify when tables have changed significantly and need updated statistics, and provides data to the Segment Advisor for making space management recommendations.
When monitoring is enabled on a table, Oracle tracks the number of inserts, updates, and deletes performed against the table. This information appears in data dictionary views and helps determine when statistics have become stale. The automatic statistics gathering job uses monitoring information to prioritize which tables need statistics updates during maintenance windows, focusing effort on tables that have changed most.
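The tracked counters are exposed through views such as DBA_TAB_MODIFICATIONS; a sketch (schema name illustrative):

-- Flush in-memory counters to the dictionary first, if needed
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

SELECT table_owner, table_name, inserts, updates, deletes, timestamp
FROM dba_tab_modifications
WHERE table_owner = 'HR';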
The syntax includes MONITORING keyword in CREATE TABLE or ALTER TABLE statements. For example, ALTER TABLE employees MONITORING enables monitoring for the employees table. Conversely, ALTER TABLE employees NOMONITORING disables monitoring. In current Oracle versions, monitoring is often enabled by default for new tables, and the monitoring mechanism has evolved with newer statistics gathering approaches.
Historical note: the MONITORING feature was more prominent in older Oracle versions before enhanced automatic statistics gathering capabilities. Modern Oracle databases use more sophisticated mechanisms to detect stale statistics including tracking DML operations through different internal mechanisms. The MONITORING clause remains for backward compatibility but is less critical in current versions.
Option A is incorrect because auditing is enabled through separate auditing commands and features, not through the MONITORING clause on tables. Option C is incorrect because monitoring query performance involves different features like SQL monitoring and is not controlled by the table-level MONITORING clause. Option D is incorrect because real-time DML monitoring uses different features and the MONITORING clause does not provide real-time operational monitoring.
Question 118:
Which background process performs cleanup of temporary segments no longer needed by sessions?
A) PMON
B) SMON
C) DBWn
D) CKPT
Answer: B
Explanation:
The SMON background process performs cleanup of temporary segments no longer needed by sessions as part of its system maintenance responsibilities. When sessions create temporary segments for sort operations or other temporary data storage, these segments should be released when no longer needed. If sessions terminate abnormally, temporary segments might remain allocated. SMON periodically cleans up these orphaned temporary segments.
SMON has multiple responsibilities beyond temporary segment cleanup including performing instance recovery after database crashes, coalescing free space in dictionary-managed tablespaces to reduce fragmentation, recovering transactions that were active in distributed databases when connections failed, and performing other system-level housekeeping tasks. This background process runs automatically and requires no manual intervention under normal circumstances.
Temporary segment cleanup occurs periodically as SMON wakes up to check for cleanup tasks. The process identifies temporary segments that are no longer associated with active sessions and releases their extents back to the tablespace for reuse. This automatic cleanup prevents temporary tablespace space leakage that could otherwise occur if abnormal session terminations left segments allocated indefinitely.
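When orphaned segments are suspected, they can be spotted in the dictionary, for example:

SELECT owner, segment_name, tablespace_name
FROM dba_segments
WHERE segment_type = 'TEMPORARY';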
Monitoring SMON activity can be done through alert log messages and trace files. When SMON performs significant cleanup operations, it may log messages indicating the extent of cleanup work performed. Under normal circumstances, SMON operates silently in the background, but unusual amounts of SMON activity might indicate problems with temporary segment management or session cleanup.
Option A is incorrect because PMON cleans up after failed user processes by releasing their locks and rolling back uncommitted transactions, but it does not specifically clean up temporary segments. Option C is incorrect because DBWn writes dirty buffers from the buffer cache to data files but does not perform temporary segment cleanup. Option D is incorrect because CKPT manages checkpoint operations but does not clean up temporary segments.
Question 119:
What is the purpose of the LIKE clause with the ESCAPE option in SQL queries?
A) To perform exact string matches
B) To specify a custom escape character for pattern matching when wildcards appear in the search pattern
C) To enable case-insensitive searches
D) To compare multiple patterns simultaneously
Answer: B
Explanation:
The LIKE clause with the ESCAPE option specifies a custom escape character for pattern matching when wildcards appear in the search pattern itself. This is necessary when you need to search for literal percent signs or underscores rather than using them as wildcards. The ESCAPE option allows you to designate a character that, when preceding a wildcard, treats that wildcard as a literal character.
Pattern matching with LIKE normally treats percent as a wildcard matching any sequence of characters and underscore as a wildcard matching any single character. However, sometimes you need to search for these characters literally in your data. For example, searching for product codes that contain an underscore or searching for text containing percentage symbols requires escaping these wildcard characters.
The syntax is WHERE column LIKE pattern ESCAPE escape_char. For example, WHERE product_code LIKE '%\_TEMP%' ESCAPE '\' searches for product codes containing the literal string underscore-TEMP rather than treating the underscore as a wildcard. The backslash before the underscore tells Oracle to treat it literally rather than as a wildcard.
Any character can be designated as the escape character, though backslash and exclamation mark are commonly used. The chosen escape character should not appear elsewhere in the pattern except when escaping wildcards, to avoid confusion. For example, WHERE description LIKE '%!%%' ESCAPE '!' searches for text containing a literal percent sign, with exclamation mark as the escape character.
Option A is incorrect because exact string matches use the equals operator rather than LIKE, and the ESCAPE option does not change LIKE to perform exact matching. Option C is incorrect because case-insensitive searches typically use UPPER or LOWER functions or are controlled by database NLS settings, not the ESCAPE option. Option D is incorrect because comparing multiple patterns uses multiple LIKE clauses or regular expressions, not the ESCAPE option.
Question 120:
Which view shows information about materialized view logs?
A) DBA_MVIEW_LOGS
B) DBA_MVIEWS
C) V$MVIEW_LOG
D) USER_SNAPSHOT_LOGS
Answer: A
Explanation:
The DBA_MVIEW_LOGS view shows information about materialized view logs in the database including the master table name, log owner, log table name, and various attributes controlling how changes are captured. Materialized view logs are required for fast refresh of materialized views, recording changes to base tables so that materialized views can be incrementally updated rather than completely rebuilt.
Materialized view logs capture insert, update, and delete operations on master tables. When a materialized view performs a fast refresh, Oracle reads the log to identify which rows changed and applies only those changes to the materialized view. This incremental approach is much more efficient than complete refresh, which rebuilds the entire materialized view by re-executing its defining query.
The DBA_MVIEW_LOGS view includes columns indicating the master table for which the log captures changes, the log owner and name, whether the log captures primary keys or ROWIDs, whether sequence numbers are included, and various flags controlling log behavior. This information helps administrators manage materialized view refresh strategies and troubleshoot refresh issues.
Creating materialized view logs uses CREATE MATERIALIZED VIEW LOG ON table_name syntax with options specifying what information to capture. Options include WITH PRIMARY KEY to capture primary key values, WITH ROWID to capture ROWIDs, WITH SEQUENCE to include sequence numbers for ordering changes, and INCLUDING NEW VALUES to store new values for updated columns. The log creation options must match the refresh requirements of materialized views using the log.
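A creation sketch combining these options (table and column names are illustrative):

CREATE MATERIALIZED VIEW LOG ON orders
  WITH PRIMARY KEY, ROWID, SEQUENCE (order_total)
  INCLUDING NEW VALUES;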
Option B is incorrect because DBA_MVIEWS shows information about materialized views themselves, not the logs that support their refresh operations. Option C is incorrect because V$MVIEW_LOG is not a standard Oracle dynamic performance view. Option D is incorrect because USER_SNAPSHOT_LOGS uses older snapshot terminology but could show log information for the current user, though DBA_MVIEW_LOGS is the current standard view name.