Oracle8i Parallel Server Concepts and Administration, Release 8.1.5 (A67778-01)
This appendix describes differences in the Oracle Parallel Server Option from release to release.
When one instance requests a consistent-read (CR) on a block held by another instance, Cache Fusion processing sends a CR copy of the requested block directly to the requesting instance by way of the interconnect. This greatly reduces cache coherency contention among instances during read/write conflicts.
Implementation of Cache Fusion requires that some background and foreground processes, namely LMON and LCK, now communicate directly from one instance to another over the interconnect. A new background process, the Block Server Process (BSP), rolls back uncommitted changes and constructs CR copies of blocks for transmission to requesting instances. This reduces the pinging required to maintain cache coherency, thereby greatly improving performance.
Cache Fusion makes deployment of OPS on OLTP and hybrid applications more feasible. Historically, databases that experienced random changes were not considered good parallel server candidates. With the advent of Cache Fusion and advanced cross-instance interconnect technology, OLTP and hybrid applications are becoming more scalable. This is particularly true if, for example, a table is modified on one instance and then another instance reads the table.
V$DLM_ALL_LOCKS view is new and shows statistics on all locks, including blocking and blocked locks.
V$DLM_RESS view is new and shows all resources associated with a lock according to lock type.
V$DLM_CONVERT_LOCAL view is new and shows lock conversion statistics for locks opened on the local node.
V$DLM_CONVERT_REMOTE view is new and shows lock conversion statistics for locks opened on remote nodes.
V$DLM_MISC view is new and shows DLM message information.
For 8.1, the functionality of the GMS (Group Membership Services) has been moved from the GMS module to the vendor-specific Cluster Managers (CM) and the Oracle database kernel. In 8.1, a discrete GMS module is no longer visible to the Oracle user.
This change greatly improves vendor hardware compatibility with Oracle. From the user's point of view, it also simplifies CM use and maintenance. The CM now starts automatically upon instance startup; you no longer need to manually start up and shut down member services.
The feature previously known as "Parallel Transaction Recovery" is now called "Fast-Start Parallel Rollback." The change is more than a renaming: in release 8.0, SMON serially processed rollback segment recovery, which led to extended rollback recovery periods. In 8.1, Fast-Start Parallel Rollback reduces recovery time, making the database available sooner. Parallel rollback uses multiple processes to recover rollback segments when the value of the parameter FAST_START_PARALLEL_ROLLBACK, previously known as PARALLEL_TRANSACTION_RECOVERY, is greater than one.
The default for this parameter is LOW, meaning that parallel recovery uses no more than two times CPU_COUNT processes, in addition to SMON, to perform parallel recovery.
To determine a more accurate setting, examine the contents of two new views, V$FAST_START_SERVERS and V$FAST_START_TRANSACTIONS. Also consider the average time required to recover a transaction and your desired recovery period duration. When you set FAST_START_PARALLEL_ROLLBACK to a value greater than one, SMON starts multiple recovery processes to accommodate the amount of unrecovered work in the rollback segments. The number of processes SMON starts is limited by the value of FAST_START_PARALLEL_ROLLBACK.
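As a sketch (assuming the standard 8.1 parameter values LOW, HIGH, and FALSE), the parameter might be set in the initialization file and recovery progress then monitored as follows:

```sql
-- init.ora: LOW (the default) caps recovery servers at 2 * CPU_COUNT;
-- HIGH raises the cap; FALSE disables parallel rollback.
-- FAST_START_PARALLEL_ROLLBACK = LOW

-- Monitor recovery server activity and per-transaction progress:
SELECT * FROM V$FAST_START_SERVERS;
SELECT * FROM V$FAST_START_TRANSACTIONS;
```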
The single name previously used to identify a service (SID) is replaced by three levels of addressing. The new parameters for instance registration are:
Clients can connect to the service without specification of which handler or instance they require, thus allowing automatic load balancing to select the optimal handler in the optimal instance. Load balancing is discussed under the following heading.
The TNS listener now performs load balancing over distributed services spanning multiple nodes. The service, instance, and handler names are used to determine the load balancing behavior.
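A sketch of a client-side net service entry that spreads connections across listeners on two nodes; the host, port, and service names are hypothetical:

```
sales =
  (DESCRIPTION =
    (LOAD_BALANCE = on)
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = sales)))
```

With LOAD_BALANCE enabled, the client picks an address at random, and the chosen listener can then route the connection to the optimal handler in the optimal instance registered for the service.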
Oradebug is a utility used by consulting and support personnel to diagnose and troubleshoot problematic systems at runtime. Oradebug functionality is extended for the Oracle Parallel Server.
OPSM is an option that simplifies parallel server administration. OPSM's 8.1 enhancements provide a single generic interface for administering parallel servers on any platform.
For more information about OPSM, see the Oracle Parallel Server Management User's Guide.
The Oracle Universal Installer and Oracle Database Configuration Assistant are both cluster aware. In release 8.1, only a single installer session is required to install OPS. The installer collects node information from the user, distributes the required Oracle products to the specified nodes, and invokes the OPS Assistant to set up the instances and create the database.
When OPS Assistant is done with this process, the parallel server is available on all nodes and the parallel server configuration information is saved so that OPSM can use it to manage the new parallel server.
Instance affinity for jobs is the association of jobs to an instance. Using the new DBMS_JOB package, you can indicate whether a particular instance, or any instance, can execute a user-submitted job in the OPS environment.
Use this release 8.1 feature to improve load balancing and limit block pinging. For example, using OPS and replication at the same time may result in block pinging problems on the deferred transaction queue if all instances in a clustered environment decide to propagate transactions from the deferred transaction queue. By limiting activity against tables to only one instance within a parallel server cluster, you can limit pinging. For more information, also see the Oracle8i Supplied Packages Reference.
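As an illustration, the instance parameter of DBMS_JOB.SUBMIT (added in release 8.1) pins a job to one instance; the procedure name and schedule below are hypothetical:

```sql
VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(
    job       => :jobno,
    what      => 'push_deferred_txns;',  -- hypothetical procedure
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24',
    instance  => 2);                     -- run only on instance 2
  COMMIT;
END;
/
-- Later, reassign the job to instance 1:
EXECUTE DBMS_JOB.INSTANCE(:jobno, 1);
```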
The following parameters are obsolete as of release 8.1:
GC_LCK_PROCS
GC_LATCHES
PARALLEL_DEFAULT_MAX_INSTANCES
LOG_FILES
OPS_ADMIN_GROUP
CACHE_SIZE_THRESHOLD
OGMS_HOME
ALLOW_PARTIAL_SN_RESULTS
SEQUENCE_CACHE_ENTRIES
The following initialization parameters were added specifically for Oracle Parallel Server:
The following initialization parameters are obsolete:
The following views changed:
A new option has been added for the OGMSCTL command.
The following parameters were added specifically for Oracle Parallel Server:
The following global cache lock initialization parameters are obsolete:
The values set by the GC_* parameters are not adjusted to prime numbers, but rather are left exactly as entered.
The following parameters have changed:
The following new views were added:
The following views changed:
Global dynamic performance views (GV$ fixed views) were added, corresponding to each of the V$ views except for V$ROLLNAME.
Oracle Parallel Server release 8.0 is not dependent on an external Distributed Lock Manager. The lock management facility is now internal to Oracle. The Integrated Distributed Lock Manager is dependent on an external node monitor.
LMON and LMDn processes have been added.
The ability to logically group instances together and perform operations upon all of the associated instances was added.
Group Membership Services (GMS) is used by the Lock Manager (LM) and other Oracle components for inter-instance initialization and coordination.
In Oracle Parallel Server release 8.0, fine grain locking is available on all platforms. It is enabled by default.
Oracle8 supports the ability of the application to automatically reconnect if the connection to the database is broken.
Recovery Manager is now the preferred method of recovery from media failure.
The following initialization parameters were added specifically for the Parallel Server Option:
The following view was added specifically for the Parallel Server Option:
FILE_LOCK
The following view changed:
V$BH
The following views were added:
You can now set free list groups for indexes, as well as for tables and clusters.
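For example, free list groups can now be given in an index's STORAGE clause just as for tables; the table, column, and index names are hypothetical:

```sql
CREATE INDEX emp_name_idx ON emp (ename)
  STORAGE (FREELISTS 4  FREELIST GROUPS 2);
```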
In Oracle Parallel Server release 7.3, PCM locks have additional options for configuration using fine grain locking. The changes affect the interpretation of the various parameters that determine the locks used to protect the database blocks in the distributed parallel server cache.
Fine grain locking is a more efficient method for providing locking in a multinode configuration. It provides a reduced rate of lock collision, and reduced space requirements for managing locks, particularly in MPP systems. This feature relies on facilities provided by the hardware and operating system platform, and may not be available on all platforms.
Fine grain locking is discussed in the section "Two Methods of PCM Locking: Fixed and Releasable".
This feature enables each instance to register itself and certain of its attributes, and to establish contact with any other instance. Instance registration is transparent to the user, except in the case of parallel query failure on remote instances of a parallel server. If a parallel query dies due to an error on a remote instance, the failed instance is now identified in the error message.
This release offers a more efficient way of allocating sort temporary space, which reduces serialization and cross-instance pinging. If you set up this capability correctly, it can particularly benefit OPS performance in parallel mode.
For best results, try to establish stable sort space. Remember that sort space is cached in the instance. One instance does not release the space unless another instance runs out of space and issues a call to the first one to do so. This is an expensive, serialized process that hurts performance. If your system cannot maintain stable sort space, it is better to over-allocate space, or simply not to use temporary tablespaces.
To determine the stability of your sort space, you can check the V$SORT_SEGMENT view. This new view shows every instance's history of sorting. If the FREED_EXTENTS and ADDED_EXTENTS columns show excessive allocation/deallocation activity, you should consider adding more space to the corresponding tablespace. Check also the FREE_REQUESTS value to determine if there is inter-instance conflict over sort space.
Another reason for excessive allocation and deallocation may be that some sorts are just too large. It may be worthwhile to assign a different temporary tablespace for the operations which require huge sorts. The MAX_SORT_SIZE value may help you to determine whether these large sorts have indeed occurred.
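A query along these lines, using the columns named above, summarizes sort-segment behavior per tablespace:

```sql
SELECT tablespace_name, added_extents, freed_extents,
       free_requests, max_sort_size
  FROM v$sort_segment;
```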
Various scalability and throughput improvements have been made that affect XA transactions. These changes have no visible impact, other than improved performance.
The following three latches perform much better, and so enhance scalability:
Transaction throughput is enhanced because most of the common XA calls have reduced code path and reduced round-trips to the database.
Recovery of distributed transactions submitted through a TP monitor using the XA interface is now fully supported in OPS.
The XA_RECOVER call has been enhanced, ensuring correct and complete recovery of one instance from transactions that have failed in another instance.
An option has been added to make the XA_RECOVER call wait for instance recovery. This feature enables one Oracle instance to do recovery on behalf of a failed Oracle instance, when both are part of the same OPS cluster.
The XA_INFO string has a new clause called OPS_FAILOVER. If this is set to true for a given XA resource manager connection, any XA_RECOVER call issued from that connection will wait for any needed instance recovery to complete. The syntax is as follows:
OPS_FAILOVER=T
Upper- or lowercase (T or t) can be used. The default value of OPS_FAILOVER is false (F or f).
Previously, there was no guarantee that an XA_RECOVER call would return the list of in-doubt transactions from the failed instance. Setting OPS_FAILOVER=T ensures that this will happen.
When OPS_FAILOVER is set to true, the XA_RECOVER call will wait until SMON has finished cache recovery, has identified the in-doubt transactions, and has added them to the PENDING_TRANS$ table, which lists in-doubt transactions.
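A sketch of an xa_open information string with the new clause appended; the account and session-timeout values are hypothetical:

```
Oracle_XA+Acc=P/scott/tiger+SesTm=30+OPS_FAILOVER=T
```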
Transaction recovery behavior has changed to allow:
In previous releases, the database could not be opened until complete transaction recovery was performed after a failure. As of release 7.3, the database is opened for connections as soon as cache recovery is completed. (This only applies when opening the database, as opposed to doing failover in an OPS environment.) In case of an instance failure, the database is available for connections through other running instances.
This means that active transactions as of the time of the failure are not yet rolled back; they appear active (holding row locks) to users of the system. Furthermore, all transactions system-wide that were active as of the time of failure are marked DEAD and the rollback segments containing these transactions are marked PARTIALLY AVAILABLE. These transactions are recovered as part of SMON recovery in the background, or by foreground processes that may encounter them, as described in the next section. The rollback segment is available for onlining.
Given fast warmstart capability, the time needed to recover all transactions does not limit the general availability of the database. All data except the part locked by unrecovered transactions is now available to users. Given an OLTP workload, however, all the requests that were active when the database or instance went down will probably be resubmitted immediately. They will very likely encounter the locks held by the unrecovered transactions. The time needed to recover these transactions is thus still critical for access to the locked data. To alleviate this problem, transactions can now be recovered in parallel, if needed. Recovery can be done by the following operations.
Recovery by Foreground Processes. Rows may be locked by a transaction that has not yet been recovered. Any foreground process that encounters such a row can itself recover the transaction. The current recovery by SMON will still happen--so the entire transaction recovery will complete eventually. But if any foreground process runs into a row lock, it can quickly recover the transaction holding the lock, and continue. In this way recovery operations are parallelized on a need basis: dead transactions will not hold up active transactions. Previously, active transactions had to wait for SMON to recover the dead transactions.
Recovery is done on a per-rollback segment basis. This prevents multiple foreground processes in different instances from recovering transactions in the same rollback segment, which would cause pinging. The foreground process fully recovers the transaction that it would otherwise have waited for. In addition, it makes a pass over the entire rollback segment and partially recovers all unrecovered transactions. It applies a configurable number of changes (undo records) to each transaction. This allows short transactions to be recovered quickly, without waiting for long transactions to be recovered. The initialization parameter CLEANUP_ROLLBACK_ENTRIES specifies the number of changes to apply.
Recovery by SMON. SMON transaction recovery operations are mostly unchanged. SMON is responsible for recovering transactions marked DEAD within its instance, transaction recovery during startup, and instance recovery. The only change is that it will make multiple passes over all the transactions that need recovery and apply only the specified number of undo records per transaction per pass. This prevents short transactions from waiting for recovery of a long transaction.
Recovery by Onlining Rollback Segment. Onlining a rollback segment now causes complete recovery of all transactions it contains. Previously, the onlining process posted SMON to do the recovery. Note that implicit onlining of rollback segments as part of warmstart or instance startup does not recover all transactions but instead marks them DEAD.
In standard Oracle, load balancing now allows multiple listeners and multiple instances to be balanced at SQL*Net connect time. Multiple listeners can now listen on one Oracle instance, and the Oracle dispatcher will register with multiple listeners. The SQL*Net client layer will randomize multiple listeners via the DESCRIPTION_LIST feature.
For more information about load balancing at connect, please see the SQL*Net documentation for Oracle7 Server release 7.3.
The default value for the SORT_DIRECT_WRITES initialization parameter is now AUTO; direct writes are enabled automatically when the sort area reaches a threshold size. This improves performance. For more information, see Oracle8i Tuning.
In Oracle7 Server release 7.3, the performance of delayed block cleanout is improved and related pinging is reduced. These enhancements are particularly beneficial for the Oracle Parallel Server.
Oracle7 Server release 7.3 provides a new initialization parameter, DELAYED_LOGGING_BLOCK_CLEANOUTS, which is TRUE by default.
When Oracle commits a transaction, each block that the transaction changed is not immediately marked with the commit time. This is done later, upon demand--when the block is read or updated. This is called block cleanout. When block cleanout is done during an update to a current block, the cleanout changes are piggybacked with the redo records of the update. In previous releases, when block cleanout was needed during a read to a current block, extra cleanout redo records were generated and the block was dirtied. This has been changed.
As of release 7.3, when a transaction commits, all blocks in the cache changed by the transaction are cleaned out immediately. This cleanout performed at commit time is a "fast version" which does not generate redo log records and does not repin the block. Most blocks will be cleaned out in this way, with the exception of blocks changed by long running transactions.
During queries, therefore, the data block's transaction information is normally up-to-date and the frequency with which block cleanout is needed is much reduced. Regular block cleanouts are still needed when querying a block where the transactions are still truly active, or when querying a block which was not cleaned out during commit.
During changes (INSERT, DELETE, UPDATE), the cleanout redo log records are generated and piggybacked with the redo of the changes.
Oracle7 Server release 7.3 provides improved defaults in the method by which servers are allocated among instances for the parallel query option. As a result, users can now specify parallelism without giving any hints.
Parallel query slaves are now assigned based on disk transfer rates and CPU processing rates for user queries. Work is assigned to query slaves that have preferred access to local disks versus remote disks, which is more costly. In this way, data locality improves parallel query performance.
For best results, you should evenly divide data among the parallel server instances and nodes--particularly for moderate to large size tables that substantially dominate the processing. Data should be fairly evenly distributed on various disks, or across all the nodes. For very small tables, this is not necessary.
For example, if you have two nodes, a table should not be divided in an unbalanced way such that 90% resides on one node and 10% on the other node. Similarly, if you have four disks, one should not contain 90% of the data and the others contain only 10%. Rather, data should be spread evenly across available nodes and disks. This happens automatically if you use disk striping. If you do not use disk striping, you must manually ensure that this happens, if you desire optimum performance.
For most parallel server configurations it is no longer necessary to pre-allocate data blocks to retain partitioning of data across free list groups. When a row is inserted, a group of data blocks is allocated to the appropriate free list group for an instance.
The following views were added specifically for the Parallel Server Option:
The following views changed:
The following views were added:
It is now possible to specify a particular instance, and hence the free list group, from a session, using the command:
ALTER SESSION SET INSTANCE = instance_number
It is now possible to disable the ability for a user to lock a table using the command:
ALTER TABLE table_name DISABLE TABLE LOCK
Re-enabling table locks is accomplished using the following command:
ALTER TABLE table_name ENABLE TABLE LOCK
The PCM locks held by a failing instance are now recovered by the lock processes of the instance recovering for the failed instance.
CACHE_SIZE_THRESHOLD was added.
The following views changed:
This section describes differences between Oracle Version 6 and Oracle7 Release 7.0.
The Parallel Server Option for Version 6 is upwardly compatible with Oracle7 with one exception. In Version 6 all instances share the same set of redo log files, whereas in Oracle7 each instance has its own set of redo log files. Oracle8i Migration gives full details of migrating to Oracle7. After a database is upgraded to work with Oracle7, it cannot be started using an Oracle Version 6 server. Applications that run on Oracle7 may not run on Oracle Version 6.
While the database is mounted in parallel mode, Oracle7 supports the following file operations that Oracle Version 6 only supported in exclusive mode:
The instance that executes these operations may have the database open, as well as mounted.
Table A-1 shows the file operations and corresponding SQL statements that cannot be performed in Oracle Version 6 with the database mounted in parallel mode.
Oracle7 allows all of the file operations listed above while the database is mounted in shared mode.
A redo log file cannot be dropped when it is active, or when dropping it would reduce the number of groups for that thread below two. When taking a datafile online or offline in Oracle7, the instance can have the database either open or closed and mounted. If any other instance has the database open, the instance taking the file online or offline must also have the database open.
The global constant parameter GC_SAVE_ROLLBACK_LOCKS reserves distributed locks for deferred rollback segments, which contain rollback entries for transactions in tablespaces that were taken offline.
Version 6 does not support taking tablespaces offline in parallel mode, so the initialization parameter GC_SAVE_ROLLBACK_LOCKS is not necessary in Oracle Version 6. In Oracle7, this parameter is required for deferred rollback segments.
In Oracle Version 6, all instances share the same set of online redo log files and each instance writes to the space allocated to it within the current redo log file.
In Oracle7, each instance has its own set of redo log files. A set of redo log files is called a thread of redo. Thread numbers are associated with redo log files when the files are added to the database, and each instance acquires a thread number when it starts up.
Log switches are performed on a per-instance basis in Oracle7; log switches in Oracle Version 6 apply to all instances, because the instances share redo log files.
Oracle7 introduces mirroring of online redo log files. The degree of mirroring is determined on a per-instance basis. This allows you to specify mirroring according to the requirements of the applications that run on each instance.
In Oracle Version 6, all instances shared one set of online redo log files. Therefore, the ALTER SYSTEM SWITCH LOGFILE statement forced all instances to do a log switch to the new redo log file.
There is no global option for this SQL statement in Oracle7, but you can force all instances to switch log files (and archive all online log files up to the switch) by using the ALTER SYSTEM ARCHIVE LOG CURRENT statement.
The LOG_ALLOCATION parameter of Oracle Version 6 is obsolete in Oracle7. Oracle7 includes the new initialization parameter THREAD, which associates a set of redo log files with a particular instance at startup.
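For example, the second instance's parameter file might carry a thread assignment like the commented line below (the thread number is an example), and any instance can then force a cluster-wide switch and archive:

```sql
-- init.ora for the second instance:
-- THREAD = 2

ALTER SYSTEM ARCHIVE LOG CURRENT;
```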
This section describes changes concerning free space lists.
In Oracle Version 6, blocks freed by deletions or by updates that shrank rows are added to the common pool of free space. In Oracle7, blocks will go to the free list and free list group of the process that deletes them.
In Oracle Version 6, the FREELISTS and FREELIST GROUPS storage options are not available for the CREATE CLUSTER statement, and the ALLOCATE EXTENT clause is not available for the ALTER CLUSTER statement.
In Oracle7, clusters (except for most hash clusters) can use multiple free lists by specifying the FREELISTS and FREELIST GROUPS storage options of CREATE CLUSTER and by assigning extents to instances with the statement ALTER CLUSTER ALLOCATE EXTENT (INSTANCE n).
Hash clusters in Oracle7 can have free lists and free list groups if they are created with a user-defined key for the hashing function and the key is partitioned by instance.
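A brief example of these options together; the cluster name and sizes are hypothetical:

```sql
CREATE CLUSTER emp_dept (deptno NUMBER(2))
  SIZE 600
  STORAGE (FREELISTS 4  FREELIST GROUPS 2);

ALTER CLUSTER emp_dept ALLOCATE EXTENT (INSTANCE 1);
```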
The FREELISTS and FREELIST GROUPS storage options replace the initialization parameters FREE_LIST_INST and FREE_LIST_PROC of Oracle Version 6.
In Oracle Version 6, Export did not export free list information. In Oracle7, Export and Import can handle FREELISTS and FREELIST GROUPS.
STARTUP and SHUTDOWN must be done while disconnected in Version 6. In Oracle7, Release 7.0, STARTUP and SHUTDOWN must be issued while connected as INTERNAL, or as SYSDBA or SYSOPER.
In Oracle7, operations can be performed using either commands or the SQL*DBA menu interface, as described in Oracle8i Utilities.
This section lists new parameters and obsolete parameters.
The new initialization parameter THREAD associates a set of redo log files with a particular instance at startup.
For a complete list of new parameters, refer to the Oracle8i Reference.
The following initialization parameters used in earlier versions of the Parallel Server Option are now obsolete in Oracle7.
For a complete list of obsolete parameters, refer to Oracle8i Migration.
In Oracle Version 6, each instance archives the online redo log files for the entire parallel server because all instances share the same redo log files. You can therefore have the instance with easiest access to the storage medium use automatic archiving, while other instances archive manually.
In Oracle7, each instance has its own set of online redo log files so that automatic archiving only archives for the current instance. Oracle7 can also archive closed threads. Manual archiving allows you to archive online redo log files for all instances. You can use the THREAD option of the ALTER SYSTEM ARCHIVE LOG statement to archive redo log files for any specific instance.
In Oracle7, the filenames of archived redo log files can include the thread number and log sequence number.
A new initialization parameter, LOG_ARCHIVE_FORMAT, specifies the format for the archived filename. A new database parameter, MAXLOGHISTORY, in the CREATE DATABASE statement can be specified to keep an archive history in the control file.
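A sketch of the archived-filename format, where %t expands to the thread number and %s to the log sequence number; the file name pattern itself is an example:

```
LOG_ARCHIVE_FORMAT = "arch_%t_%s.arc"
```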
Online recovery from media failure is supported in Oracle7 while the database is mounted in either parallel or exclusive mode.
In either mode, the database or object being recovered cannot be in use during recovery: