Pass IBM C2090-617 Exam in First Attempt Easily

Latest IBM C2090-617 Practice Test Questions, Exam Dumps
Accurate & Verified Answers As Experienced in the Actual Test!

IBM C2090-617 Practice Test Questions, IBM C2090-617 Exam dumps

Looking to pass your tests on the first attempt? You can study with IBM C2090-617 certification practice test questions and answers, study guides, and training courses. With Exam-Labs VCE files you can prepare with the IBM C2090-617 DB2 10 System Administrator for z/OS exam dumps questions and answers. It is the most complete solution for passing the IBM C2090-617 certification exam, combining exam dumps questions and answers, a study guide, and a training course.

A Comprehensive Guide to Passing the C2090-617 Exam

The C2090-617 Exam, officially titled IBM DB2 11.1 System Administrator for z/OS, is a crucial certification for professionals working in mainframe environments. Passing this exam validates an individual's skills and knowledge required to perform the day-to-day tasks of a system administrator for DB2 on the z/OS platform. The target audience includes experienced database administrators and system programmers who are responsible for the installation, configuration, security, maintenance, and recovery of DB2 subsystems. This certification demonstrates a high level of competency in managing mission-critical database systems that power many of the world's largest enterprises in sectors like banking, finance, and insurance.

Achieving this certification can significantly enhance career prospects, as it signals a deep understanding of one of the most robust and reliable database systems in existence. The C2090-617 Exam covers a broad range of topics, ensuring that certified individuals are well-rounded administrators. It tests not only theoretical knowledge but also the practical application of concepts related to system operation, performance tuning, and disaster recovery. Preparing for this exam requires a disciplined approach, combining theoretical study with hands-on experience to master the complexities of the DB2 for z/OS environment and its intricate relationship with the underlying operating system.

This series will serve as a detailed guide, breaking down the major objectives of the C2090-617 Exam into manageable sections. We will explore each topic area, providing the foundational knowledge necessary to build confidence and competence. From the initial installation and migration of a DB2 subsystem to advanced concepts like high availability and performance monitoring, this guide aims to be a comprehensive resource. By following this structured learning path, candidates can systematically prepare for the challenges of the exam and move closer to achieving their certification goals and advancing their careers in mainframe database administration.

Core Architecture of DB2 for z/OS

Before diving into the specifics of administration, it is essential for any C2090-617 Exam candidate to possess a solid understanding of the DB2 for z/OS architecture. At its core, a DB2 subsystem operates within its own set of z/OS address spaces. The primary address spaces are the System Services Address Space (MSTR) and the Database Services Address Space (DBM1). The MSTR address space is responsible for overall system control, command processing, and managing connections, while the DBM1 address space handles all database-related requests, such as SQL processing, buffer management, and locking.

The interaction between these address spaces and other components is fundamental to DB2's operation. When a user or application connects to DB2, the request is managed through various other address spaces, such as the Distributed Data Facility (DDF) address space for network connections. Data within DB2 is stored in logical structures called table spaces and index spaces. Physically, these are stored in VSAM Linear Data Sets (LDS) on Direct Access Storage Devices (DASD). Understanding this logical-to-physical mapping is crucial for tasks like space management, backup, and recovery, which are key topics in the C2090-617 Exam.

Another critical component of the architecture is the DB2 catalog and directory. The catalog is a set of DB2 tables that contains metadata about all the objects defined in the subsystem, such as tables, indexes, views, and user privileges. The directory contains internal information that DB2 uses to operate, including database descriptors (DBDs) and skeleton cursor tables (SKCTs). A system administrator must know how to query the catalog to get information about the system and understand the importance of maintaining the health of both the catalog and directory, as they are vital for the subsystem's functionality.
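
For illustration, a simple catalog query like the following lists the tables defined in a database together with the table spaces that hold them; DSNDB04 is DB2's default database, and the filter should be adjusted for the database of interest:

    SELECT CREATOR, NAME, TSNAME
      FROM SYSIBM.SYSTABLES
     WHERE DBNAME = 'DSNDB04'
       AND TYPE = 'T'
     ORDER BY CREATOR, NAME;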

Memory structures, particularly buffer pools, play a pivotal role in DB2 performance. Buffer pools are areas of virtual storage in the DBM1 address space that are used to cache data and index pages from disk. Efficiently managing and tuning buffer pools can dramatically reduce I/O operations and improve query response times. The C2090-617 Exam expects candidates to understand how to define, monitor, and alter buffer pools to meet the performance requirements of their applications. This includes knowledge of different page sizes, thresholds, and the types of objects that should be assigned to specific pools.

Installation and Migration Planning

A significant portion of the C2090-617 Exam focuses on the installation of a new DB2 subsystem and the migration from a previous version. Proper planning is the most critical phase of this process. Before initiating an installation, an administrator must perform a thorough assessment of hardware and software prerequisites. This includes verifying the z/OS version, ensuring sufficient DASD space is available for DB2 datasets, and confirming that all required supporting software products, such as IBM's System Modification Program/Extended (SMP/E) for maintenance, are in place. Careful capacity planning is also needed for key resources like memory for buffer pools and CPU for processing.

Naming conventions are another vital aspect of pre-installation planning. Establishing a consistent and logical naming standard for all DB2 datasets, subsystem parameters, and user authorizations is essential for long-term manageability. This includes deciding on the DB2 subsystem name (SSID), collection IDs, and the high-level qualifiers for datasets. A well-defined naming strategy simplifies automation, troubleshooting, and daily operational tasks. Furthermore, security considerations must be addressed upfront by working with security administrators to define the necessary RACF, ACF2, or Top Secret profiles and permissions for the DB2 address spaces and administrative user IDs.

The installation process itself is driven by a series of jobs that are customized by an interactive CLIST, often referred to as DSNTINST. Before running the CLIST, the administrator needs to gather all the necessary parameters for the subsystem. This includes decisions on the sizes and number of buffer pools, the configuration of the active and archive logs, the settings for the IRLM (internal resource lock manager), and the various subsystem parameters known as ZPARMs. These ZPARMs control nearly every aspect of DB2's behavior, from locking and logging to query optimization and security, making their initial setup a critical task.

When planning for a migration from an older version of DB2 to version 11.1, the process is more complex than a new installation. The administrator must carefully review the migration guide to understand potential incompatibilities and new features. The migration process typically occurs in stages, starting with Conversion Mode (CM) and eventually moving to New Function Mode (NFM). In CM, the system runs with the new code but maintains compatibility with the previous version, allowing for fallback if necessary. Planning for migration involves running pre-migration health checks, backing up the existing system, and scheduling a maintenance window for the migration activities.

The DB2 Installation and Verification Process

The actual installation of a DB2 11.1 subsystem is executed through a series of JCL jobs generated by the DSNTINST CLIST. After the administrator provides all the required parameters in the CLIST panels, it creates tailored members in a specified dataset, typically named prefix.NEW.SDSNSAMP. These members contain the JCL to allocate datasets, define the subsystem to z/OS, create the catalog and directory, and set up the initial ZPARM module. The administrator must review these generated jobs carefully before submission to ensure all parameters are correct. The C2090-617 Exam requires familiarity with the purpose of these key installation jobs.

The installation job stream follows a logical sequence. Early jobs, such as DSNTIJIN, allocate the necessary datasets, including the bootstrap dataset (BSDS), active logs, and system databases. Subsequent jobs like DSNTIJTC create the DB2 catalog and directory tables and indexes. Another important job, DSNTIJUZ, assembles and link-edits the subsystem parameter module (ZPARM), which is essential for starting the DB2 subsystem. Each job must complete successfully before the next one is submitted. Careful tracking of job completion codes and review of job outputs are mandatory to ensure a clean installation.

Once all the installation jobs have been run successfully, the next step is to start the DB2 subsystem for the first time. This is typically done by issuing the START DB2 command from the z/OS console, prefixed with the subsystem's command prefix, for example -DSN1 START DB2. The administrator must closely monitor the z/OS system console and the DB2 MSTR address space job log for messages indicating the startup progress. A successful startup sequence will show the initialization of various DB2 components, the allocation of buffer pools, and the opening of the logs, culminating in a message indicating that DB2 startup is complete and the subsystem is active.

After the initial startup, the Installation Verification Procedure (IVP) must be run to confirm that the subsystem is functioning correctly. The IVP consists of a set of sample jobs that perform various database operations, such as creating tables, loading data, running SQL queries, and executing DB2 programs. These jobs test different aspects of the DB2 functionality, including TSO, CICS, and batch connections. Successful execution of all IVP jobs provides a high level of confidence that the core components of the DB2 installation are working as expected, a critical milestone in the process.

Understanding and Managing ZPARMs

Subsystem parameters, commonly known as ZPARMs, are the central control mechanism for a DB2 for z/OS subsystem. They are assembled into a load module, typically named DSNZPARM, which is loaded when the DB2 subsystem starts. These parameters define and influence virtually every aspect of DB2's behavior, including memory usage, locking protocols, logging frequency, security checks, and network communication settings. For the C2090-617 Exam, a deep understanding of the most critical ZPARMs and their impact on the system is mandatory. An administrator must know how to set initial values and when to modify them to meet changing requirements.

ZPARMs are organized into different functional areas, often by the name of the assembly macro used to define them, such as DSN6SYSP for system-wide parameters, DSN6LOGP for logging, and DSN6ARVP for archiving. For example, within DSN6SYSP, a parameter like CONDBAT controls the maximum number of concurrent inbound distributed connections, while CHKFREQ determines how often DB2 takes a system checkpoint. Knowing which macro contains which parameter is helpful for locating and modifying them. Administrators must be familiar with the key parameters that affect performance, availability, and resource consumption.

Modifying ZPARMs is a controlled process that requires careful consideration. The first step is to edit the source member, which contains the assembly macros, to change the desired parameter values. After the changes are made, the administrator must run the DSNTIJUZ job, which invokes the assembler and linker to create a new DSNZPARM load module. This new module is typically placed in the prefix.SDSNEXIT library. It is crucial to maintain a backup of the previous ZPARM source and load module in case a rollback is needed.

For a new ZPARM module to take effect, the DB2 subsystem must be stopped and restarted. This is because the ZPARMs are read only once during the initial startup process. This requirement means that ZPARM changes must be carefully planned and scheduled during a maintenance window to avoid impacting users. However, some ZPARMs can be changed online using the -SET SYSPARM command. An administrator must know which parameters are dynamic and can be altered without a subsystem recycle, as this provides greater flexibility for tuning and operational adjustments.
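
As a minimal sketch, assuming the conventional module name DSNZPARM and the illustrative -DSN1 command prefix used later in this guide, an online parameter change might be activated like this once DSNTIJUZ has built the new module:

    -DSN1 SET SYSPARM LOAD(DSNZPARM)

Alternatively, -DSN1 SET SYSPARM RELOAD reloads the parameter module that was named at startup.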

Implementing Security and Auditing

Security is a paramount concern in mainframe environments, and the C2090-617 Exam places significant emphasis on a system administrator's ability to secure a DB2 for z/OS subsystem. DB2 security is managed through a combination of internal mechanisms and an external security manager like RACF, ACF2, or Top Secret. The primary method involves controlling access to DB2 resources through the granting and revoking of privileges and authorities. An administrator must understand the hierarchy of authorities, from the highest level like SYSADM down to specific object privileges like SELECT on a table.

The GRANT and REVOKE SQL statements are the tools used to manage these permissions. System administrators must be proficient in using these commands to control who can perform what actions. This includes granting authorities to users or groups (roles), managing privileges on objects like tables, packages, and plans, and understanding the concept of ownership. For example, the owner of an object has implicit privileges on it and the ability to grant access to others. A key concept for the C2090-617 Exam is the principle of least privilege, where users are only given the minimum access required to perform their job functions.
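
The statements below are an illustrative sketch only; the table, database, role, and authorization IDs are placeholders:

    GRANT SELECT, UPDATE ON TABLE PAYROLL.EMPLOYEE TO ROLE HR_CLERK;
    GRANT DBADM ON DATABASE PAYROLLD TO DBAJOHN;
    REVOKE UPDATE ON TABLE PAYROLL.EMPLOYEE FROM ROLE HR_CLERK;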

Integration with an external security manager is standard practice and a critical exam topic. DB2 uses security exits to pass authorization requests to the external manager. This allows for centralized security administration and more granular control. An administrator must understand how to configure DB2 to use these exits and how the corresponding security classes and profiles are defined in the security product. For instance, controlling who can issue DB2 commands or access specific utilities is often managed through profiles in the DSNADM class in RACF. This integration provides a robust, multi-layered security model.

Beyond access control, auditing is another essential component of a comprehensive security strategy. Auditing involves tracking and recording specific events that occur within the DB2 subsystem. This is configured using the DB2 audit facility, which can be started via the -START TRACE command with the AUDIT class specified. Administrators must know what events can be audited, such as authorization failures, DDL changes, or data access by privileged users. The audit trace records are written to SMF or GTF datasets and can be analyzed using reporting tools to detect suspicious activity or to comply with regulatory requirements.
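
For example, an audit trace covering authorization failures and GRANT/REVOKE activity (audit classes 1 and 2) might be started, displayed, and stopped as follows, again using the illustrative -DSN1 prefix:

    -DSN1 START TRACE(AUDIT) CLASS(1,2) DEST(SMF)
    -DSN1 DISPLAY TRACE(AUDIT)
    -DSN1 STOP TRACE(AUDIT)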

System Operation and Command Processing

A core responsibility of a DB2 system administrator is the daily operation of the subsystem, which involves starting, stopping, and monitoring its status. The C2090-617 Exam requires a thorough knowledge of the commands used for these tasks. The primary interface for controlling DB2 is through z/OS console commands, prefixed with a subsystem recognition character (e.g., -DSN1). The -START DB2 and -STOP DB2 commands initiate the startup and shutdown procedures. An administrator must understand the different modes of stopping DB2, such as MODE(QUIESCE) for a graceful shutdown and MODE(FORCE) for an emergency termination.
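
Using the -DSN1 prefix from the example above, the basic start and stop commands look like this:

    -DSN1 START DB2
    -DSN1 STOP DB2 MODE(QUIESCE)
    -DSN1 STOP DB2 MODE(FORCE)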

Once the subsystem is running, the administrator uses a variety of DISPLAY commands to monitor its health and status. The -DISPLAY DATABASE command shows the state of databases, table spaces, and index spaces, indicating if they are read/write, read-only, or in a restricted state like STOP or UTILITY. Similarly, -DISPLAY THREAD provides information about active connections, their status, and the resources they are using. Being able to interpret the output of these commands is crucial for identifying and diagnosing problems, such as a long-running utility or a stalled application thread.
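
For example (the database name is a placeholder), the following commands list the restricted objects in a database and the currently active threads:

    -DSN1 DISPLAY DATABASE(PAYROLLD) SPACENAM(*) RESTRICT
    -DSN1 DISPLAY THREAD(*) TYPE(ACTIVE)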

Command processing is not limited to the z/OS console. Administrators also interact with DB2 through other interfaces like TSO using the DSN command processor or by submitting batch jobs. The DSN command allows for the execution of DB2 subcommands, such as RUN to execute a program or FREE to release application plans. Batch jobs are commonly used to run utilities, execute SQL scripts, or perform other administrative tasks. A candidate for the C2090-617 Exam must be comfortable with the JCL required to execute DB2 programs and utilities in a batch environment.

Managing the various address spaces associated with DB2 is another key operational task. This includes the DDF address space for distributed connections, the stored procedure address spaces, and the IRLM for locking. Commands like -DISPLAY DDF and -DISPLAY IRLM provide insight into the status of these components. If a problem occurs, for example, with network connectivity, the administrator needs to know how to use these commands to investigate the DDF's status, including its listeners and active connections. This knowledge is essential for supporting a connected enterprise environment.

Managing DB2 Utilities

The DB2 for z/OS utilities are a powerful set of tools that system and database administrators use to manage and maintain the database environment. Mastery of these utilities is a non-negotiable requirement for passing the C2090-617 Exam. The utilities can be categorized by their function, such as data management, backup and recovery, and integrity checking. Utilities like LOAD, UNLOAD, and REORG are workhorses for moving data and maintaining the physical organization of table spaces and indexes. An administrator must know the syntax for these utilities and when to use them.

The LOAD utility is used to populate tables with data from an external file. It is highly efficient for bulk data insertion. The UNLOAD utility performs the opposite function, extracting data from a table into a dataset. The REORG utility is used to reorganize a table space or index to improve performance by reclaiming fragmented space, re-clustering data according to the clustering index, and updating statistics. Understanding the different options for these utilities, such as whether a REORG should be performed online or offline, is crucial for minimizing application impact.
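
As an illustrative control statement (the database and table space names are placeholders, and depending on the DB2 level an online REORG may also require a mapping table), an online REORG with an inline image copy and inline statistics might look like this:

    REORG TABLESPACE PAYROLLD.EMPTS
          SHRLEVEL CHANGE
          COPYDDN(SYSCOPY)
          STATISTICS TABLE(ALL) INDEX(ALL)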

Data consistency and integrity are maintained using utilities like CHECK DATA, CHECK INDEX, and CHECK LOB. CHECK DATA verifies referential integrity constraints between tables, reporting on any violations. CHECK INDEX ensures that the entries in an index are consistent with the data in the table. These utilities are essential for identifying and correcting data corruption issues. The REPAIR utility can be used to fix certain types of problems, but its use requires extreme caution and a deep understanding of the underlying data structures, a topic often covered in the C2090-617 Exam.

For statistical information, the RUNSTATS utility is paramount. RUNSTATS collects statistics about the data in tables, indexes, and columns, such as the number of rows, cardinality of columns, and data distribution. This information is stored in the DB2 catalog and is used by the query optimizer to choose the most efficient access paths for SQL statements. Outdated or missing statistics can lead to poor query performance, making the regular execution of RUNSTATS a critical administrative task. An administrator must know how to tailor RUNSTATS execution to collect the most accurate statistics for their workload.
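
As a sketch, again with placeholder object names, a RUNSTATS control statement that samples the data and updates both table and index statistics might look like this:

    RUNSTATS TABLESPACE PAYROLLD.EMPTS
             TABLE(ALL) SAMPLE 25
             INDEX(ALL)
             SHRLEVEL CHANGE
             REPORT YES UPDATE ALL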

Routine Maintenance and Housekeeping

Beyond running specific utilities, a DB2 system administrator is responsible for a range of routine maintenance tasks that ensure the long-term health and stability of the system. These housekeeping activities are a critical part of the operational duties covered in the C2090-617 Exam. One of the most important tasks is managing the DB2 catalog and directory. Over time, these system databases can become fragmented and require reorganization, just like user databases. Regularly running REORG on the catalog and directory table spaces is a best practice for maintaining optimal system performance.

Another key maintenance activity is managing the archive logging process. As the active log datasets fill up, DB2 offloads them to archive log datasets. The administrator must ensure that there is sufficient space for these archive logs and that they are properly managed, often by migrating them to tape or virtual tape for long-term retention. The DSNJU003 utility can be used to modify the bootstrap dataset (BSDS) to add or remove active or archive log datasets. Proper log management is essential not only for recovery but also for features like data replication.
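
For illustration only (the dataset name is a placeholder), a DSNJU003 control statement that registers an additional copy-1 active log dataset in the BSDS might look like this:

    NEWLOG DSNAME=DSNC110.LOGCOPY1.DS04,COPY1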

The collection of performance and accounting data is another routine task. DB2 can generate detailed trace records about system events, SQL execution, locking, and I/O. This data is typically written to SMF and is the primary input for performance monitoring and capacity planning tools. The administrator is responsible for starting and stopping these traces and ensuring that the data collection process does not impose an unacceptable level of overhead on the system. Analyzing this data is part of the performance tuning discipline, but managing its collection is an operational task.

Managing DB2 maintenance and applying PTFs (Program Temporary Fixes) from IBM is also a core responsibility. This is done using the SMP/E tool. An administrator must be able to receive, apply, and accept DB2 maintenance to keep the system up-to-date and to fix known bugs. This process requires careful planning and coordination, as applying maintenance often requires a subsystem outage. Understanding the SMP/E process and how it relates to DB2 libraries is a key system programming skill tested in the C2090-617 Exam.

Concurrency and Locking Management

Concurrency control is the mechanism that DB2 uses to manage simultaneous access to data by multiple users and applications, and it is a fundamental concept for the C2090-617 Exam. The primary tool for this is locking. When a transaction accesses data, DB2 acquires a lock on the resource to prevent other transactions from making conflicting changes. An administrator must understand the different lockable resources (e.g., table space, table, page, row), the various lock modes (e.g., Share, Update, Exclusive), and how these factors influence the degree of concurrency.

The choice of lock size (LOCKSIZE parameter on CREATE or ALTER TABLESPACE) has a significant impact on concurrency and performance. A LOCKSIZE of ROW provides the highest level of concurrency, as only individual rows are locked, but it can also lead to higher CPU and memory consumption for managing a large number of locks. Conversely, a LOCKSIZE of TABLESPACE minimizes locking overhead but severely restricts concurrent access. An administrator must be able to choose the appropriate lock size based on the application's access patterns.
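
For example, an existing table space could be altered to row-level locking, with LOCKMAX capping the number of locks a single application can hold before lock escalation occurs; the object name and threshold below are illustrative:

    ALTER TABLESPACE PAYROLLD.EMPTS LOCKSIZE ROW LOCKMAX 10000;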

When two or more transactions are waiting for locks held by each other, a deadlock or a timeout can occur. A deadlock is a situation where a circular chain of dependencies prevents transactions from proceeding. A timeout happens when a transaction waits for a lock longer than a specified time limit. The IRLM component is responsible for detecting and resolving these situations. It will typically resolve a deadlock by choosing one transaction as a victim and rolling it back, which releases its locks and allows the other to proceed. Administrators must know how to monitor for deadlocks and timeouts using traces or monitoring tools.

The isolation level, specified when a plan or package is bound, also plays a crucial role in concurrency. Common isolation levels include Cursor Stability (CS) and Read Stability (RS). CS, the default, holds locks only on the current row being accessed, providing good concurrency. RS holds locks on all rows that satisfy the query's predicates until the transaction commits, offering greater data consistency at the cost of reduced concurrency. Understanding the trade-offs between different isolation levels and their effect on locking is essential for application design and troubleshooting performance problems, a key skill for the C2090-617 Exam.

Fundamentals of DB2 Performance and Tuning

Performance and tuning are at the heart of a DB2 system administrator's responsibilities and represent a major section of the C2090-617 Exam. The primary goal is to ensure that the database system meets the service level agreements (SLAs) for response time and throughput. This discipline is multifaceted, involving the tuning of the DB2 subsystem itself, the underlying z/OS environment, the database design, and the applications that access the data. A holistic approach is required, as a bottleneck in one area can negatively impact the entire system's performance.

The first step in performance tuning is effective monitoring. An administrator cannot tune what they cannot measure. DB2 provides a comprehensive instrumentation facility that collects detailed information about system activity. This data, known as trace data, can be captured in SMF or GTF datasets and analyzed using performance reporting tools like DB2 Performance Monitor (DB2PM) or other third-party products. Understanding how to start traces, interpret performance reports, and identify key metrics like buffer pool hit ratios, lock wait times, and SQL execution times is a foundational skill.

There are three main areas of tuning: system-level, database-level, and application-level. System-level tuning involves configuring the DB2 subsystem and its environment for optimal performance. This includes setting ZPARMs correctly, tuning the IRLM, managing memory by defining and tuning buffer pools, and ensuring the logging components are efficient. For example, a poorly configured buffer pool can lead to excessive I/O, which is often a primary cause of poor performance. The C2090-617 Exam expects candidates to know the key ZPARMs and components that influence overall system performance.

Database-level tuning focuses on the physical design of the database objects. This includes choosing appropriate data types, normalizing the data model, and designing effective indexing strategies. The physical organization of data on disk is also critical. An administrator uses utilities like REORG to maintain data clustering and RUNSTATS to provide the optimizer with accurate information about the data. A well-designed physical database provides the foundation upon which high-performing applications can be built.

Mastering Buffer Pool Tuning

Buffer pools are arguably the single most important component to tune for DB2 performance, and they are a guaranteed topic on the C2090-617 Exam. A buffer pool is an area of virtual memory in the DBM1 address space that DB2 uses to cache pages of data and indexes that have been read from disk. The goal of buffer pool tuning is to maximize the number of times DB2 can find a required page in memory (a buffer hit), thus avoiding a slow physical I/O operation to disk. A high buffer pool hit ratio is a key indicator of good performance.

The first aspect of tuning is proper allocation and separation. It is a best practice to create multiple buffer pools and assign different types of objects to them based on their access patterns. For example, it is common to place catalog and directory objects in their own dedicated buffer pool. Similarly, heavily accessed small tables and their indexes might go into one pool, while large, sequentially accessed tables go into another. This separation prevents the activity on one type of object from flushing out the pages of another, unrelated object, leading to more stable and predictable performance.

An administrator must also choose the correct page size for the buffer pools and the corresponding table spaces. DB2 supports page sizes of 4K, 8K, 16K, and 32K. The choice of page size can affect I/O efficiency and storage. For tables with large rows or LOB data, a larger page size can be more efficient. The buffer pools must be defined with page sizes that match the table spaces they will serve. A single buffer pool can only handle one page size.

Monitoring is the key to effective buffer pool tuning. The -DISPLAY BUFFERPOOL command provides a real-time snapshot of buffer pool activity, including hit ratios and I/O rates. For more detailed analysis, performance reports based on trace data are used. These reports show metrics like the random getpage hit ratio, sequential prefetch efficiency, and page residency times. By analyzing these metrics, an administrator can determine if a buffer pool is too small, too large, or if its thresholds need adjustment. For instance, a low hit ratio suggests the pool may be too small for its workload.

Based on monitoring data, the administrator can take tuning actions. This typically involves using the -ALTER BUFFERPOOL command to dynamically change the size of a pool. Other tunable parameters include the various thresholds, such as the deferred write threshold (DWQT) and the vertical deferred write threshold (VDWQT), which control when updated pages are written back to disk. Getting these settings right ensures a balance between read performance and the efficiency of write operations. Continuous monitoring and adjustment are necessary as workloads change over time, making buffer pool tuning an ongoing process.
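
For instance (the buffer pool number and values are illustrative), an administrator might inspect a pool and then enlarge it and adjust its write thresholds dynamically:

    -DSN1 DISPLAY BUFFERPOOL(BP1) DETAIL
    -DSN1 ALTER BUFFERPOOL(BP1) VPSIZE(50000) DWQT(30) VDWQT(5)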

The Role of the DB2 Optimizer and EXPLAIN

The DB2 optimizer is a sophisticated component responsible for choosing the most efficient method, or access path, for executing an SQL statement. For any non-trivial SQL query, there are often many possible ways to retrieve the requested data. The optimizer's job is to evaluate these alternatives and select the one it estimates will have the lowest cost, typically measured in terms of CPU time and I/O operations. A system administrator's deep understanding of the optimizer is essential for performance tuning and is thoroughly tested on the C2090-617 Exam.

The optimizer's decisions are based on a wide range of inputs. The most important of these are the statistics stored in the DB2 catalog tables, which are collected by the RUNSTATS utility. These statistics describe the data's characteristics, such as the number of rows in a table and the number of distinct values in a column. Without accurate and up-to-date statistics, the optimizer is essentially flying blind and is likely to choose a suboptimal access path. This is why running RUNSTATS regularly is a critical performance tuning task.

Other factors influencing the optimizer include the query itself, the physical database design (including indexes), and various ZPARM settings. The way an SQL query is written can have a huge impact on the final access path. For example, using non-indexed columns in a WHERE clause might force a table space scan. The presence of appropriate indexes is often the single most important factor in achieving good query performance. The optimizer will evaluate the available indexes and estimate the cost of using them versus other access methods.

To understand the optimizer's chosen access path for a given SQL statement, administrators and developers use the EXPLAIN facility. When an EXPLAIN is performed on a query, DB2 populates a set of tables, known as the plan tables, with detailed information about the access path. This information includes which indexes are being used, the join method for multi-table queries (e.g., nested loop join, merge scan join), the join order, and the estimated cost of the query. Analyzing this output is a core skill for any performance analyst.
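
A minimal sketch of that workflow, assuming a PLAN_TABLE exists under the user's qualifier and using a placeholder query:

    EXPLAIN PLAN SET QUERYNO = 100 FOR
      SELECT LASTNAME, SALARY
        FROM PAYROLL.EMPLOYEE
       WHERE WORKDEPT = 'D11';

    SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD,
           ACCESSTYPE, ACCESSNAME, MATCHCOLS, PREFETCH
      FROM PLAN_TABLE
     WHERE QUERYNO = 100
     ORDER BY QBLOCKNO, PLANNO;

In the output, an ACCESSTYPE of I with a populated ACCESSNAME indicates index access, while an ACCESSTYPE of R indicates a table space scan.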

Indexing Strategies for High Performance

Indexes are data structures that provide an ordered path to data, allowing DB2 to locate specific rows quickly without having to scan the entire table. A well-designed indexing strategy is fundamental to achieving high performance in a relational database, and it is a critical knowledge area for the C2090-617 Exam. The most common type of index is the B-tree index. An administrator must understand its structure, consisting of a root page, intermediate pages, and leaf pages that contain the key values and pointers (RIDs) to the actual data rows.

When designing an index, the most important consideration is the SQL workload that will access the table. Indexes should be created on columns that are frequently used in WHERE clauses, join conditions, and ORDER BY or GROUP BY clauses. A good candidate for an index is a column with high cardinality, meaning it has many distinct values. Creating an index on a column with very few distinct values (e.g., a gender column) is generally not effective, as it will not significantly filter the data.

DB2 offers several types of indexes. A clustering index is a special type that determines the physical order of the data in the table space. A table can have only one clustering index. Maintaining a high degree of clustering (i.e., the physical order matching the index order) is important for performance, especially for range queries. The REORG utility is used to restore clustering. Unique indexes enforce the uniqueness of key values, while non-unique indexes allow duplicates. Understanding when to use each type is essential.
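
As an illustrative sketch (all names are placeholders), a clustering index supporting range queries by department and a unique index enforcing the primary key might be defined as:

    CREATE INDEX PAYROLL.XEMP01
      ON PAYROLL.EMPLOYEE (WORKDEPT, LASTNAME)
      CLUSTER
      BUFFERPOOL BP2;

    CREATE UNIQUE INDEX PAYROLL.XEMP02
      ON PAYROLL.EMPLOYEE (EMPNO);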

While indexes are crucial for improving read performance, they do come with a cost. Every time a row is inserted, updated, or deleted, all indexes on that table must also be updated. This adds overhead to data modification operations. Therefore, it is important to avoid over-indexing a table, especially tables that are subject to heavy transactional workloads with many inserts and updates. The goal is to create the minimum number of indexes that satisfy the query workload's performance requirements. This balancing act is a key skill for a database administrator.

Monitoring and Managing System Resources

Effective performance tuning requires constant monitoring of the key system resources that DB2 relies on. The C2090-617 Exam expects an administrator to be proficient in monitoring CPU, memory, and I/O, and to understand how their utilization impacts DB2 performance. On the z/OS platform, tools like the Resource Measurement Facility (RMF) provide system-wide performance data, while DB2's own instrumentation provides a more detailed view of its internal resource consumption.

CPU consumption is a primary concern. An administrator needs to be able to distinguish between different types of CPU time, such as TCB time (work done under a task control block) and SRB time (work done in service request blocks). High CPU consumption can be caused by inefficient SQL, excessive locking, or poorly tuned system parameters. By analyzing performance reports, an administrator can identify the specific threads, plans, or packages that are consuming the most CPU and target them for further investigation.

Memory, or virtual storage, is another critical resource. The DB2 DBM1 address space contains several large memory areas, with the buffer pools being the largest and most important. However, other memory pools, like the EDM pool for database descriptors and cursor tables, are also vital. An administrator must monitor the usage of these pools to ensure they are sized correctly. Insufficient memory in any of these areas can lead to performance degradation or even system outages. Commands such as -DISPLAY BUFFERPOOL, together with statistics-based performance reports, provide the necessary monitoring data.

I/O performance is often the biggest bottleneck in a database system. An administrator must monitor the I/O rates to the database datasets, logs, and other critical files. This includes tracking the average response time for read and write operations. High I/O rates or slow response times can be caused by poorly tuned buffer pools, inefficient access paths (e.g., table space scans), or contention at the disk or channel level. DB2 performance reports provide detailed I/O statistics at the buffer pool and dataset level, helping the administrator pinpoint the source of I/O contention.

Backup and Recovery Concepts

The ability to recover a database system from any type of failure is a non-negotiable responsibility of a DB2 system administrator and a cornerstone of the C2090-617 Exam. The entire backup and recovery strategy in DB2 for z/OS is built upon two key components: image copy backups of the database objects and the transaction log. An image copy is a physical backup of a table space or index space. The transaction log records every change made to the data. By combining the last valid image copy with the log records of changes made since that copy, DB2 can restore data to any point in time.

There are different types of image copies. A full image copy is a complete backup of every page in the object. An incremental image copy only backs up the pages that have changed since the last full or incremental copy. A strategy combining periodic full copies with more frequent incremental copies can reduce backup time and storage requirements. An administrator must understand the trade-offs and be able to design a backup strategy that meets the recovery time objective (RTO) and recovery point objective (RPO) defined in the business service level agreements.

The DB2 logging mechanism is the heart of recoverability. DB2 uses a write-ahead log, meaning changes are written to the active log before they are applied to the database on disk. When the active logs fill up, they are offloaded to archive logs. These archive logs, along with the active logs, form a continuous record of all data modifications. The bootstrap dataset (BSDS) is a critical component that keeps track of all active and archive log datasets, as well as information about recent checkpoints. An administrator must ensure the logging environment is robust and well-managed.

Recovery scenarios can range from simple to complex. A transaction-level recovery might involve a single application issuing a ROLLBACK statement to undo its changes. A media recovery, on the other hand, is required when a physical disk failure leads to the loss of a database dataset. In this case, the administrator would use the RECOVER utility to restore the object from its last image copy and then apply the log records to bring it to a current or specific point in time. The C2090-617 Exam requires a deep understanding of the RECOVER utility and its various options.

Disaster recovery planning is the ultimate test of a recovery strategy. This involves planning for the loss of an entire data center. The strategy typically relies on shipping image copies and archive logs to a remote site, where the DB2 subsystem can be rebuilt. An administrator must be familiar with the procedures for a remote site recovery, including setting up the system, restoring the DB2 catalog and directory, and recovering user data. This is a complex process that requires meticulous planning and regular testing to ensure its viability in a real disaster.

Using Backup and Recovery Utilities

The C2090-617 Exam requires practical knowledge of the specific DB2 utilities used to implement a backup and recovery strategy. The primary utility for creating backups is COPY. This utility is used to create full or incremental image copies of table spaces and index spaces. An administrator must know the syntax for the COPY utility, including how to specify the object to be copied, the output dataset for the image copy, and options like SHRLEVEL, which controls the level of concurrent access allowed to the object while the copy is being made.
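
An illustrative COPY control statement (the object and DD names are placeholders) for a full image copy taken while applications continue to update the data:

    COPY TABLESPACE PAYROLLD.EMPTS
         FULL YES
         SHRLEVEL CHANGE
         COPYDDN(SYSCOPY)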

The MERGECOPY utility is used in conjunction with incremental image copies. It can merge several incremental copies into a new, consolidated incremental copy, or merge them with a full image copy to create an up-to-date full copy. This helps to streamline the recovery process by reducing the number of log records that need to be applied. The MODIFY RECOVERY utility is used to delete old or unneeded image copy records from the SYSIBM.SYSCOPY catalog table, which is an essential housekeeping task to prevent the table from growing too large.

The workhorse of recovery is the RECOVER utility. This utility orchestrates the process of restoring a table space or index space. When invoked, RECOVER reads the SYSCOPY table to find the most recent usable image copy, restores the object from that copy, and then reads the DB2 log to apply any changes made since the copy was taken. An administrator must know how to use its various options, such as recovering to the current point in time (the default when no end point is specified) or to a specific prior point in time using the TORBA or TOLOGPOINT keywords.
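
Two hedged example control statements, using placeholder object names and a placeholder log point, show recovery to currency and to a prior point in time:

    RECOVER TABLESPACE PAYROLLD.EMPTS

    RECOVER TABLESPACE PAYROLLD.EMPTS
            TOLOGPOINT X'00000000000012345678'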

Another important utility is QUIESCE. The QUIESCE utility is used to establish a point of consistency, or a recovery point, across a set of related table spaces. It does this by flushing any updated pages from the buffer pools to disk and recording the current log RBA in the SYSCOPY table. This recorded point can then be used as a target for a point-in-time recovery, ensuring that all related data is restored to a transactionally consistent state. This is particularly important for tables related by referential integrity.
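
For example (placeholder names again), a consistency point can be established across a table space and all table spaces related to it by referential constraints:

    QUIESCE TABLESPACESET TABLESPACE PAYROLLD.EMPTS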

The REPORT utility is used to provide information about the recovery history of objects. For example, REPORT RECOVERY can list all the image copies and log records that would be needed to recover a specific table space. This is invaluable for recovery planning and for verifying that a complete set of recovery assets exists. Mastery of these utilities, including their JCL and control statement syntax, is fundamental for any DB2 system administrator and is a key area of focus for the C2090-617 Exam.

Data Management and Organization

Effective data management is crucial for both performance and availability. A DB2 system administrator must be an expert in managing the physical storage and organization of data, a topic well-covered in the C2090-617 Exam. This starts with the design of table spaces. DB2 offers different types of table spaces, such as simple, segmented, and partitioned. Segmented table spaces are the most common, as they allow multiple tables to reside in the same table space while keeping their data physically separate in segments, which improves space management and performance of mass deletes.

Partitioned table spaces are used for very large tables. They allow a table to be split into multiple partitions, each stored in its own dataset. Partitioning can be done by range (e.g., by date or sales region) or by growth. This provides significant advantages for utility processing, as utilities can be run on a single partition at a time, reducing the window of unavailability. It also enables query parallelism and can improve query performance if the partitioning key is used in the WHERE clause, allowing DB2 to scan only the relevant partitions.

The REORG utility is the primary tool for maintaining the physical organization of data. Over time, as data is inserted, updated, and deleted, the data can become fragmented, and the clustering of the data can degrade. A REORG reorganizes the data to reclaim free space, restore the clustering order defined by the clustering index, and update inline statistics. An administrator must know when a REORG is needed, which can be determined by running RUNSTATS and querying the catalog statistics, or by examining the real-time statistics tables such as SYSIBM.SYSTABLESPACESTATS.

Data definition and manipulation are also part of data management. An administrator must be proficient with Data Definition Language (DDL) statements like CREATE, ALTER, and DROP to manage database objects. The ALTER statement is particularly important for making changes to existing objects, such as adding a column to a table, changing the size of a buffer pool, or adding a partition to a table space. Understanding the implications of these changes, such as when an ALTER will place an object in a restrictive state, is a key competency tested by the C2090-617 Exam.

Managing space is an ongoing task. An administrator must monitor the space utilization of table spaces and indexes to prevent out-of-space conditions. This involves querying the SYSIBM.SYSTABLEPART and SYSINDEXPART catalog tables or using monitoring tools. When an object is approaching its maximum size, the administrator might need to alter its space attributes (PRIQTY, SECQTY) or, for multi-dataset objects, add a new dataset. Proactive space management is essential for preventing application outages caused by storage issues.
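
As an illustrative monitoring query (the database name is a placeholder, and the columns of interest vary by shop), primary and secondary space quantities by partition can be retrieved from the catalog:

    SELECT DBNAME, TSNAME, PARTITION, PQTY, SECQTYI, SPACEF
      FROM SYSIBM.SYSTABLEPART
     WHERE DBNAME = 'PAYROLLD'
     ORDER BY TSNAME, PARTITION;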

Special Recovery Scenarios

Beyond the standard media recovery, a DB2 administrator must be prepared for a variety of more complex or special recovery scenarios. The C2090-617 Exam may present questions that test this advanced knowledge. One such scenario is the recovery of a dropped object. If a table is accidentally dropped, a standard RECOVER will not work because the object's metadata has been removed from the DB2 catalog. Recovery in this case involves a point-in-time recovery of the table space to a time just before the drop occurred, which restores both the data and the catalog entry.

Another challenging scenario is the recovery of the DB2 catalog or directory. Since these are system databases that DB2 itself relies on, their recovery is more complex than user data recovery. If a catalog table space is lost, the administrator must use specific procedures, often involving starting DB2 in a restricted mode, to recover it. The recovery of the directory, specifically the SCT02 and DBD01 spaces, is even more critical and complex. A solid understanding of the RECOVER utility's specific options for system objects is required.

System-level recovery, also known as a full system restore, might be necessary after a major software or logical corruption issue. This involves recovering the entire DB2 subsystem to a specific point in time. This is typically accomplished by restoring all DB2 volumes from a system-level backup and then performing a conditional restart of DB2 to a specific log point. A more common approach is to use the QUIESCE utility to establish a system-wide point of consistency and then recover all user table spaces to that point, which is a less disruptive method.

The recovery of objects containing LOB (Large Object) or XML data presents unique challenges. These data types are often stored in auxiliary table spaces and indexes that are linked to the base table. When recovering the base table space, the administrator must ensure that the associated LOB or XML objects are also recovered to the same point in time to maintain consistency. The RECOVER utility handles this automatically, but the administrator must be aware of these relationships when planning backup and recovery jobs.

Finally, dealing with data that is in an inconsistent or broken state requires special handling. For example, if a CHECK DATA utility finds referential integrity violations, it can place the affected table space in a CHECK-pending state, which restricts access. The administrator must then take corrective action, which could involve correcting the data and re-running the check or using the REPAIR utility to reset the pending state. Using REPAIR is risky and should only be done with a full understanding of its consequences, a level of knowledge expected for the C2090-617 Exam.

Managing Data Integrity and Concurrency

Maintaining data integrity is a fundamental goal of any database management system. In DB2, this is achieved through various mechanisms that a system administrator must manage and understand. The C2090-617 Exam will test knowledge of constraints, such as unique constraints, primary key constraints, and referential integrity (foreign key) constraints. These constraints are defined using DDL and are automatically enforced by DB2 during data modification operations like INSERT, UPDATE, and DELETE. An administrator must know how to define these constraints and manage the states they can enter.

For example, if a large data load is performed with referential integrity checks turned off, the table space will be placed in a CHECK-pending status. The administrator must then run the CHECK DATA utility to validate the constraints and clear the pending state. Understanding this workflow is crucial. Similarly, triggers are another mechanism for enforcing business rules and maintaining data integrity. A trigger is a piece of procedural code that is automatically executed in response to a specific data modification event on a table.

Concurrency, as discussed earlier in the context of performance, is also a data integrity issue. The locking mechanism ensures that transactions are isolated from each other, preventing phenomena like lost updates or dirty reads. The choice of isolation level (e.g., CS, RS, RR) determines the degree of isolation and has a direct impact on data integrity. An administrator must be able to advise developers on the appropriate isolation level for their applications, balancing the need for data consistency with the requirement for high concurrency.

Deferring constraint checking is a powerful technique for large data loads. In DB2 for z/OS this is done with the ENFORCE NO option of the LOAD utility, which skips referential checking during the load and places the table space in CHECK-pending status. After the load is complete, the CHECK DATA utility is run to validate the newly loaded rows and clear the pending state. This is often more efficient than having DB2 check constraints for every single row during the load process, and it is a practical technique that a candidate for the C2090-617 Exam should know.
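
A minimal sketch of that sequence, with placeholder table and table space names and input data read from the default SYSREC DD:

    LOAD DATA ENFORCE NO
      INTO TABLE PAYROLL.EMPLOYEE

    CHECK DATA TABLESPACE PAYROLLD.EMPTS SCOPE ALL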

Ultimately, the administrator is the guardian of the organization's data. This responsibility involves not only implementing technical controls like constraints and locks but also establishing and enforcing procedures for data handling. This includes managing data movement with utilities like LOAD and UNLOAD, ensuring that data quality is maintained, and working with data owners to define and implement data governance policies. This broader view of data management and integrity is an important aspect of the senior system administrator role that the certification represents.

High Availability and Data Sharing

In today's 24/7 business environment, high availability is no longer a luxury but a necessity. The C2090-617 Exam requires a thorough understanding of the features DB2 for z/OS provides to ensure continuous operation. The premier solution for this is DB2 data sharing. A data sharing group consists of two or more DB2 subsystems, running on the same or different LPARs in a Parallel Sysplex, that can access and modify the same data concurrently. This architecture provides scalability and high availability. If one DB2 member fails, users and applications can be transparently re-routed to another active member in the group.

Setting up a data sharing group is a complex task that involves several key components. The Parallel Sysplex provides the underlying infrastructure, including a Coupling Facility (CF), which is a high-speed hardware component used for caching and locking. Within the CF, DB2 uses a group buffer pool to cache data pages shared among the members and a lock structure to manage global locking. An administrator must understand the roles of these components and how to size and configure them for optimal performance and availability.

The concept of global versus local locking is fundamental to data sharing. When a DB2 member needs to lock a resource, it must first be propagated to the global lock manager in the CF to ensure it is coordinated across all members. This adds a layer of complexity and overhead compared to a standalone system. Similarly, the group buffer pools introduce the concept of cross-invalidation, where an update to a page on one member invalidates any copies of that page cached in other members' buffer pools. The C2090-617 Exam tests the administrator's knowledge of these internal mechanisms.

From an operational perspective, managing a data sharing group involves several unique tasks. Commands are often scoped to a single member or the entire group. An administrator must know how to start and stop individual members, as well as the entire group. Monitoring tools must be able to provide a group-wide view of performance, highlighting issues like CF contention or imbalances in workload distribution across the members. Recovery in a data sharing environment is also more complex, as it may involve coordinating log merges from multiple members.

Data sharing provides near-continuous availability, but it is not a disaster recovery solution, as the entire sysplex is typically located in a single data center. For disaster recovery, data sharing groups are often paired with data replication technologies. These technologies capture changes from the source system's log and apply them to a target system at a remote site, keeping it in sync. An administrator should have a high-level understanding of how these replication solutions integrate with a DB2 data sharing environment to provide a comprehensive availability and recovery strategy.

Distributed Data Facility (DDF)

The Distributed Data Facility (DDF) is the DB2 for z/OS component that handles all network communication, allowing clients and other database servers to connect to DB2 over TCP/IP. A deep understanding of DDF is essential for any administrator supporting modern applications and is a key topic in the C2090-617 Exam. DDF runs in its own address space and is responsible for listening for incoming connection requests, managing distributed threads, and processing DRDA (Distributed Relational Database Architecture) protocol flows.

Configuring DDF involves several key ZPARMs and the setup of the Communications Database (CDB). The CDB, which is part of the DB2 catalog, contains tables that define how DB2 communicates with remote locations. An administrator must know how to set DDF-related ZPARMs to control aspects like the TCP/IP port number, the maximum number of concurrent distributed threads (MAXDBAT), and whether the DDF address space should start automatically with DB2. The -START DDF and -STOP DDF commands are used to control its operation manually.

Security for distributed connections is a major concern. DDF provides several options for authentication, including user ID and password, Kerberos tickets, and client certificates for SSL/TLS encryption. An administrator must know how to configure these security mechanisms to protect data in transit and to ensure that only authorized clients can connect to the DB2 subsystem. This often involves coordination with network administrators and security administrators to manage firewalls, digital certificates, and security server profiles.

Performance monitoring and tuning of the DDF workload are also critical. The administrator must monitor the number of active and inactive distributed threads to ensure that the system is not overloaded. The CONDBAT ZPARM controls the total number of connections, while MAXDBAT controls the number that can be active simultaneously. If MAXDBAT is reached, new incoming work will have to wait. Performance reports can provide detailed accounting information for distributed threads, helping to identify resource-intensive client applications or SQL statements.

Troubleshooting DDF connectivity issues is a common task. Problems can arise from network issues, firewall blockages, incorrect client configuration, or security credential failures. An administrator must be able to use commands like -DISPLAY DDF to check the status of DDF and its connections. They must also be able to analyze console messages and DDF trace information to diagnose the root cause of a connection failure. This requires a solid understanding of both DB2 and the underlying network concepts, a skill set validated by the C2090-617 Exam.
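
For example, the following commands (with the illustrative -DSN1 prefix) show the overall DDF status, including its address, port, and listener state, and the connections to remote locations:

    -DSN1 DISPLAY DDF DETAIL
    -DSN1 DISPLAY LOCATION(*)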

Stored Procedures and Triggers

Stored procedures and triggers are server-side programming objects that allow business logic to be encapsulated within the database. The C2090-617 Exam expects an administrator to understand how to manage and support these objects. A stored procedure is a compiled program that can be called by a client application. They offer several benefits, including improved performance by reducing network traffic, better security by controlling access to data, and code reusability. An administrator is responsible for the environment in which these procedures run.

Stored procedures in DB2 for z/OS execute in a special type of address space managed by the z/OS Workload Manager (WLM). An administrator must work with the z/OS systems programmer to configure these WLM-managed address spaces. This includes defining the JCL procedure for the address space, setting performance goals, and specifying the number of address spaces that can be started. If this environment is not configured correctly, stored procedure calls may fail or perform poorly.
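
The sketch below shows the general shape of such a JCL procedure, based on the IBM-supplied sample; the procedure name, subsystem ID, library names, and NUMTCB value are illustrative and must match your installation, and the application environment itself is defined in the WLM policy:

    //*  Illustrative WLM-managed stored procedure address space procedure
    //DSNWLM   PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DSN1,NUMTCB=40
    //IEFPROC  EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
    //             PARM='&DB2SSN,&NUMTCB,&APPLENV'
    //STEPLIB  DD DISP=SHR,DSN=DSN1110.SDSNEXIT
    //         DD DISP=SHR,DSN=DSN1110.SDSNLOAD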

Managing stored procedures involves tasks like creating, altering, and dropping them using DDL. The administrator is also responsible for granting EXECUTE privileges on the procedure to the appropriate users or roles. When a new version of a stored procedure is needed, DB2's native versioning capabilities can be used to deploy the new version without impacting applications that are still using the old one. Troubleshooting stored procedure execution, which may involve looking at trace data or WLM performance metrics, is also a key skill.
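
As a hedged illustration with a native SQL procedure (the schema, object, and authorization-ID names are hypothetical, and statement-terminator handling in SPUFI or DSNTEP2 is omitted), these tasks might look like the following:

    -- Create version V1 of a native SQL procedure
    CREATE PROCEDURE ADMIN.RAISE_SALARY
        (IN P_EMPNO CHAR(6), IN P_PCT DECIMAL(5,2))
      VERSION V1
      LANGUAGE SQL
    BEGIN
      UPDATE ADMIN.EMP
         SET SALARY = SALARY * (1 + P_PCT / 100)
       WHERE EMPNO = P_EMPNO;
    END

    -- Allow the application group to call it
    GRANT EXECUTE ON PROCEDURE ADMIN.RAISE_SALARY TO PAYROLLGRP;

    -- After ALTER PROCEDURE ... ADD VERSION V2 has deployed the new logic,
    -- switch callers to it without dropping the procedure:
    ALTER PROCEDURE ADMIN.RAISE_SALARY ACTIVATE VERSION V2;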

Triggers, on the other hand, are executed automatically by DB2 in response to an INSERT, UPDATE, or DELETE operation on a specific table. They are often used to enforce complex business rules, maintain data integrity, or log changes to data. An administrator needs to understand the difference between BEFORE and AFTER triggers and how they can impact application performance. A poorly written trigger can add significant overhead to every data modification statement on its associated table.
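
For instance, an AFTER trigger that records salary changes might look like the sketch below (tables and names are hypothetical); note that every UPDATE of the SALARY column now also pays for the extra INSERT:

    CREATE TRIGGER ADMIN.TR_SAL_AUDIT
      AFTER UPDATE OF SALARY ON ADMIN.EMP
      REFERENCING OLD AS O NEW AS N
      FOR EACH ROW MODE DB2SQL
      INSERT INTO ADMIN.SAL_AUDIT (EMPNO, OLD_SALARY, NEW_SALARY, CHANGED_AT)
        VALUES (N.EMPNO, O.SALARY, N.SALARY, CURRENT TIMESTAMP);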

The management of triggers is similar to that of other database objects. They are created and dropped using DDL, and their definitions are stored in the DB2 catalog. When troubleshooting performance issues on a table, an administrator should always check if any triggers are defined on it, as they can be a hidden source of overhead. The EXPLAIN facility can provide some insight into the cost of trigger execution. A solid understanding of how to support a database environment that heavily utilizes stored procedures and triggers is essential for a modern DB2 administrator.
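
A hedged version of that check, reusing the hypothetical ADMIN.EMP table from above (column names should be verified against your catalog level):

    -- List any triggers defined on the table
    SELECT SCHEMA, NAME, TRIGTIME, TRIGEVENT, GRANULARITY
      FROM SYSIBM.SYSTRIGGERS
     WHERE TBOWNER = 'ADMIN'
       AND TBNAME  = 'EMP';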

Preparing for the C2090-617 Exam

Successfully passing the C2090-617 Exam requires a structured and disciplined preparation strategy. The first step is to download the official exam objectives from the certification provider's website. These objectives are a detailed blueprint of the topics that will be on the exam and their relative weighting. You should use this document as a checklist to guide your studies, ensuring that you cover all required areas and focus your time appropriately on the more heavily weighted sections like system operations and performance tuning.

The primary study resource should be the official IBM documentation for DB2 11.1 for z/OS. The key manuals to focus on are the Installation and Migration Guide, the Administration Guide, the Command Reference, and the SQL Reference. While these manuals are vast, they are the definitive source of information. It is not necessary to memorize them, but you should be very familiar with their structure and know where to find information on specific topics. Reading through the core sections related to the exam objectives is highly recommended.

Theoretical knowledge alone is not sufficient. The C2090-617 Exam is designed to test the skills of an experienced administrator, so hands-on practice is absolutely critical. If you have access to a DB2 for z/OS system, use it extensively. Practice common administrative tasks like running utilities, issuing commands, querying the catalog, and interpreting EXPLAIN output. If you don't have access, consider seeking out workshops or lab environments that provide this hands-on experience. Simulating real-world scenarios is the best way to solidify your understanding.
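
For example, a simple hands-on exercise (the table and the PLAN_TABLE owner are placeholders) is to EXPLAIN a query and then read the access path back from your PLAN_TABLE:

    EXPLAIN PLAN SET QUERYNO = 100 FOR
      SELECT EMPNO, LASTNAME
        FROM ADMIN.EMP
       WHERE WORKDEPT = 'D11';

    -- Review the chosen access path for that query
    SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, TNAME,
           ACCESSTYPE, MATCHCOLS, INDEXONLY
      FROM PLAN_TABLE
     WHERE QUERYNO = 100
     ORDER BY QBLOCKNO, PLANNO;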

Practice exams are an invaluable tool for final preparation. They help you get accustomed to the format and style of the exam questions and test your knowledge under timed conditions. When you take a practice exam, carefully review both your correct and incorrect answers. For incorrect answers, go back to the documentation or your study notes to understand why you made a mistake. This process helps to identify and fill any gaps in your knowledge before you sit for the actual exam.

On exam day, manage your time wisely. Read each question carefully, paying close attention to keywords like "NOT" or "BEST". If you encounter a difficult question, make your best guess, mark it for review, and move on. Don't spend too much time on a single question at the expense of others. After you have gone through all the questions, you can go back and spend more time on the ones you marked. A calm and methodical approach, built upon a foundation of thorough preparation, will give you the best chance of success on the C2090-617 Exam.

Final Thoughts

As you approach the final stages of your preparation for the C2090-617 Exam, it is beneficial to conduct a final review of the most critical topics. Revisit the core DB2 architecture, ensuring you can clearly explain the roles of the MSTR, DBM1, and DDF address spaces, as well as the function of the DB2 catalog, directory, and logs. A solid architectural foundation makes it easier to understand how all the other pieces fit together. Review the installation and migration process, focusing on the key jobs and the purpose of the different migration modes.

Go over the security model one more time. Be confident in your understanding of the hierarchy of authorities, the difference between implicit and explicit privileges, and how DB2 integrates with an external security manager like RACF. Ensure you can describe how to set up auditing and understand the purpose of advanced features like trusted contexts and RCAC. Security is a critical topic, and you can expect several questions on it.
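
As a brief, hedged reminder of the DDL involved (all object names are hypothetical), RCAC is defined with CREATE PERMISSION or CREATE MASK and then activated on the table, while a trusted context restricts where a given system authorization ID may connect from:

    -- Row permission: only members of the PAYROLL group see rows
    CREATE PERMISSION ADMIN.EMP_ROW_ACCESS ON ADMIN.EMP
      FOR ROWS WHERE VERIFY_GROUP_FOR_USER(SESSION_USER, 'PAYROLL') = 1
      ENFORCED FOR ALL ACCESS
      ENABLE;

    ALTER TABLE ADMIN.EMP ACTIVATE ROW ACCESS CONTROL;

    -- Trusted context: tie a system authorization ID to a specific server address
    CREATE TRUSTED CONTEXT CTX_APPSRV
      BASED UPON CONNECTION USING SYSTEM AUTHID APPSRV1
      ATTRIBUTES (ADDRESS '10.1.2.3')
      ENABLE;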

Dedicate significant review time to system operations and utilities. Be able to name and describe the function of the main utilities like COPY, RECOVER, REORG, RUNSTATS, and CHECK DATA. Practice the syntax for the most common operator commands, especially the DISPLAY commands used for monitoring. The exam will test your practical knowledge of how to manage and maintain a DB2 subsystem on a day-to-day basis, and this is where that knowledge is applied.
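
These are the kinds of commands worth being able to produce from memory; the database, buffer pool, and object names below are placeholders:

    -DISPLAY DATABASE(DBPAY01) SPACENAM(*) RESTRICT
    -DISPLAY UTILITY(*)
    -DISPLAY THREAD(*)
    -DISPLAY BUFFERPOOL(BP1) DETAIL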

Finally, review backup and recovery procedures and the principles of high availability. Be able to walk through a standard media recovery scenario using the RECOVER utility. Understand the purpose of a QUIESCE point. For high availability, be clear on the core concepts of data sharing, including the role of the Parallel Sysplex and the Coupling Facility. A comprehensive review of these key areas will boost your confidence and ensure you are well-prepared to demonstrate your expertise and pass the C2090-617 Exam.
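
A minimal sketch of that flow in utility control statements, assuming an illustrative table space DBPAY01.TSPAY01, would be:

    COPY TABLESPACE DBPAY01.TSPAY01 FULL YES SHRLEVEL REFERENCE
    QUIESCE TABLESPACE DBPAY01.TSPAY01
    RECOVER TABLESPACE DBPAY01.TSPAY01
    REBUILD INDEX(ALL) TABLESPACE DBPAY01.TSPAY01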
