Oracle8i Parallel Server Concepts and Administration, Release 8.1.5 (A67778-01)
This chapter provides an overview of Oracle Parallel Server's (OPS) internal locking mechanisms by covering the following topics:
You must understand locking mechanisms to harness parallel processing and parallel database capabilities effectively. You can influence each type of locking by setting initialization parameters, administering the system, and designing efficient applications. If locks are not used effectively, your system spends so much time synchronizing shared resources that you achieve neither speedup nor scaleup; your parallel system could even suffer performance degradation.
OPS uses locks for two primary purposes:
Transaction locks implement row level locking for transaction consistency. Row level locking is supported in both single instance Oracle and OPS.
Instance locks (also commonly known as distributed locks) guarantee cache coherency. They ensure the consistency of data and other resources distributed among multiple instances belonging to the same database. Instance locks include PCM and non-PCM locks.
See Also:
For more information about Oracle locks, please refer to Chapter 8, "Integrated Distributed Lock Manager", and to Oracle8i Concepts.
Figure 7-1 shows latches and enqueues: locking mechanisms that are synchronized within a single instance. These are used in single instance Oracle and in OPS whether parallel server is enabled or disabled.
* The mount lock is obtained if the Parallel Server Option has been linked into your Oracle executable.
Latches are simple, low level serialization mechanisms that protect in-memory data structures in the SGA. Latches do not protect datafiles. They are entirely automatic and are held for a very short time in exclusive mode. Being local to the node, internal locks and latches do not provide internode synchronization.
Enqueues are shared memory structures that serialize access to database resources. These locks can be local to one instance or global to a database. They are associated with a session or transaction and can be held in any of several modes, from null through shared to exclusive.
Enqueues are held longer than latches, have more granularity and more modes, and protect more database resources. For example, if you request a table lock, or a DML lock, you receive an "enqueue".
When OPS is disabled, certain enqueues are local to a single instance. With OPS enabled, however, enqueues must be maintained on a system-wide level. Enqueues are managed by the Integrated Distributed Lock Manager (IDLM).
When OPS is enabled, most local enqueues become global enqueues. This is reflected in Figure 7-1 and Figure 7-2. They appear as enqueues in the fixed tables--no distinction is made between local and global enqueues. Global enqueues are handled in a distributed fashion.
Figure 7-2 illustrates the instance locks used by OPS. In OPS implementations, the status of all Oracle locking mechanisms is tracked and coordinated by the IDLM.
Instance locks (other than the mount lock) only come into existence if you start an Oracle instance with OPS enabled. They synchronize between instances, communicating the current status of a resource among the instances of OPS.
Instance locks are held by background processes of instances rather than by transactions. An instance owns an instance lock that protects a resource, such as a data block or data dictionary entry, when the resource enters its SGA.
To ensure cache coherency, the IDLM handles locking only for resources accessed by more than one instance of OPS. The IDLM communicates requests for instance locks and the status of the locks between lock processes of each instance. There are several views associated with the IDLM as described in Table 7-1.
There are two types of instance locks: Parallel Cache Management (PCM) locks and non-PCM locks.
PCM locks are instance locks covering one or more data blocks (table or index blocks) in the buffer cache. PCM locks do not lock rows for transactions and are implemented in two ways:
With hashed locking, an instance never disowns a PCM lock unless another instance asks for it. This minimizes the overhead of instance lock operations in systems with relatively low contention for resources. With fine grain locking, the lock is released as soon as the block is released. Non-PCM locks are disowned.
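The difference between the two release policies can be sketched in a few lines of Python. This is an illustrative model only, not Oracle internals; the class and method names are hypothetical.

```python
# Illustrative model (not Oracle internals) of the two PCM lock release
# policies: a hashed lock is disowned only when another instance asks
# for it; a fine grain lock is released as soon as the covered block is
# released.

class PCMLock:
    def __init__(self, policy):
        self.policy = policy      # "hashed" or "fine_grain"
        self.owner = None         # instance currently holding the lock

    def acquire(self, instance):
        self.owner = instance

    def block_released(self):
        # Fine grain: releasing the block releases the lock.
        if self.policy == "fine_grain":
            self.owner = None

    def ping(self, requesting_instance):
        # Hashed: the owner disowns the lock only when another
        # instance's request arrives; the requester takes ownership.
        self.owner = requesting_instance
```

Under low contention the hashed policy wins: the lock stays with its owner across many block accesses, avoiding repeated instance lock operations.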
There are many different types of non-PCM locks. These control access to data and control files, control library and dictionary caches, and perform various types of communication between instances. These locks do not protect datafile blocks. Examples are DML enqueues (table locks), transaction enqueues, and DDL or dictionary locks. The System Change Number (SCN) lock and the mount lock are global locks, not enqueues.
PCM locks are typically far more numerous than non-PCM locks. However, non-PCM locks are still numerous enough that you must carefully plan adequate IDLM capacity for them. Typically, 5% to 10% of locks are non-PCM. Non-PCM locks do not grow in volume the way PCM locks do.
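A back-of-envelope capacity estimate based on the 5% to 10% figure above can be sketched as follows; the PCM lock count is a hypothetical configuration value:

```python
# Rough IDLM capacity estimate: if non-PCM locks make up 5% to 10% of
# all locks, then for a given PCM allocation the non-PCM count is
# pcm * p / (1 - p), where p is the non-PCM fraction of the total.
pcm_locks = 20_000                             # hypothetical allocation
non_pcm_low = round(pcm_locks * 0.05 / 0.95)   # p = 5%
non_pcm_high = round(pcm_locks * 0.10 / 0.90)  # p = 10%
```

So an allocation of 20,000 PCM locks implies planning IDLM capacity for on the order of one to two thousand additional non-PCM locks.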
You can control PCM locks in detail by setting initialization parameters to allocate the number desired. However, you have almost no control over non-PCM locks. You can attempt to eliminate the need for table locks by setting DML_LOCKS = 0 or by using the ALTER TABLE ENABLE/DISABLE TABLE LOCK command, but other non-PCM locks will still persist.
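For example, table locks can be suppressed as follows. This is a sketch: the table name emp is hypothetical, and DML_LOCKS is set in the initialization parameter file, not at the SQL prompt.

```sql
-- In the initialization parameter file: disable DML (table) enqueues
-- for the entire instance.
DML_LOCKS = 0

-- Or, from SQL, disable table locks for a single table:
ALTER TABLE emp DISABLE TABLE LOCK;

-- Re-enable them later:
ALTER TABLE emp ENABLE TABLE LOCK;
```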
See Also:
For more information, please see Chapter 16, "Ensuring IDLM Capacity for Resources and Locks".
In OPS, the LCK process provides inter-instance locking. LCK maintains all PCM locks, hashed or fine grain, and some of the non-PCM locks, such as row cache and library cache locks, and coordinates requests for those locks from other instances.
Although instance locks are mainly handled by LCK, some instance locks are acquired directly by other background processes or by foreground (shadow) processes. In general, if a background process such as LCK owns an instance lock, it holds the lock on behalf of the entire instance. If a foreground process owns an instance lock, the lock belongs to that particular process only. For example, the log writer (LGWR) obtains the SCN instance lock, and the database writer (DBWR) obtains the media recovery lock. The bulk of these locks, however, are handled by LCK.
Foreground processes, not LCK, obtain transaction locks. Transaction locks are associated with the session or transaction, not with the process.
The LMON and LMD0 processes implement the global lock management subsystem of OPS. LMON performs lock cleanup and lock invalidation after the death of an Oracle shadow process or another Oracle instance. It also reconfigures and redistributes the global locks as OPS instances are started and stopped.
The LMD0 process handles remote lock requests for global locks, that is, lock requests originating from another instance for a lock owned by the current instance. All global lock messages directed to an OPS instance are handled by the LMD0 process of that instance.
To implement locks effectively, carefully evaluate their relative expense. As a rule of thumb:
In general, instance locks and global enqueues have an equivalent effect on performance. When OPS is disabled, all enqueues are local. When OPS is enabled, most enqueues are global.
Table 7-2 dramatizes the relative expense of latches, enqueues, and instance locks. The elapsed time required per lock varies by system. Values used in the "Actual Time Required" column of this table are only examples.
Microseconds, milliseconds, and tenths of a second may seem like negligible units of time. However, imagine the cost of locks using grossly exaggerated values such as those listed in the "Relative Time Required" column. This should make the need to carefully calibrate lock use in your systems and applications more obvious. In a large OLTP implementation, for example, you should avoid unregulated instance lock use. Imagine waiting hours or days to complete a transaction!
Stored procedures are available for analyzing the number of PCM locks an application uses if it performs particular functions. You can set values for your initialization parameters and then call the stored procedure to see the projected expenditure in terms of locks.
See Also:
For more information, please refer to Chapter 15, "Allocating PCM Instance Locks" and Chapter 16, "Ensuring IDLM Capacity for Resources and Locks".
This section covers the following topics:
All Oracle enqueues and instance locks are named using one of the following formats:
type ID1 ID2
or type, ID1, ID2
or type (ID1, ID2)
Where:

type    A two-character type name for the lock type, as described in the V$LOCK table and listed in Table 7-3 and Table 7-4.

ID1    The first lock identifier, used by the IDLM. The convention for this identifier differs from one lock type to another.

ID2    The second lock identifier, used by the IDLM. The convention for this identifier differs from one lock type to another.
For example, a space management lock might be named ST 1 0. A PCM lock might be named BL 1 900.
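A small parser for these names might look as follows. This helper is hypothetical and not part of Oracle, but the three spellings it accepts match the formats listed above.

```python
import re

# Hypothetical helper: parse a lock name written in any of the three
# documented formats -- "BL 1 900", "BL, 1, 900", or "BL (1, 900)" --
# into its (type, ID1, ID2) components.
LOCK_NAME = re.compile(r"^([A-Z]{2})[ ,(]+(\d+)[ ,]+(\d+)\)?$")

def parse_lock_name(name):
    """Return (type, id1, id2) for a lock name string."""
    m = LOCK_NAME.match(name.strip())
    if m is None:
        raise ValueError(f"unrecognized lock name: {name!r}")
    lock_type, id1, id2 = m.groups()
    return lock_type, int(id1), int(id2)
```

For instance, parse_lock_name("ST 1 0") yields the type "ST" with identifiers 1 and 0, the space management lock named above.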
The V$LOCK table lists local and global Oracle enqueues currently held or requested by the local instance. The "lock name" is actually the name of the resource; locks are taken out against the resource.
All PCM locks are Buffer Cache Management locks.
| Type | Lock Name |
|---|---|
| BL | Buffer Cache Management |
The syntax of PCM lock names is type ID1 ID2, where type is always BL, ID1 is the class of the block, and ID2 is the lock element number. Sample PCM lock names are BL 1 900 and BL 9 1.
Non-PCM locks have many different names.
The IDLM component is a distributed resource manager that is internal to OPS. This section explains how the IDLM coordinates locking mechanisms that are internal to Oracle. Chapter 8, "Integrated Distributed Lock Manager" presents a detailed description of IDLM features and functions.
This section covers the following topics:
In OPS implementations, the IDLM facility maintains an inventory of Oracle instance locks and global enqueues held against system resources. The IDLM acts as a referee when conflicting lock requests arise.
In Figure 7-3 the IDLM is represented as an inventory sheet listing resources and the current status of locks on each resource across OPS. Locks are represented as follows: S for shared mode, N for null mode, X for exclusive mode.
This inventory includes all instances. For example, resource BL 1, 101 is held by three instances with shared locks and three instances with null locks. Since the table reflects up to 6 locks on one resource, at least 6 instances are evidently running on this system.
Oracle database resources are mapped to IDLM resources, with the necessary mapping performed by the instance. For example, a hashed lock on an Oracle database block with a given data block address (such as file 2 block 10) becomes translated as a BL resource with the class of the block and the lock element number (such as BL 9 1). The data block address (DBA) is translated from the Oracle resource level to the IDLM resource level; the hashing function used is dependent on GC_* parameter settings. The IDLM resource name identifies the physical resource in views such as V$LOCK.
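The translation can be sketched as follows. The function name and the modulus are hypothetical stand-ins: the real hashing function depends on the GC_* initialization parameter settings.

```python
# Illustrative sketch (not Oracle internals): translate an Oracle data
# block to its IDLM "BL" resource name. With hashed locking, many
# blocks map to one lock element, modeled here with a simple modulus.

def bl_resource_name(block_class, block_number, num_lock_elements):
    """Return an IDLM resource name such as 'BL 9 1'."""
    lock_element = block_number % num_lock_elements
    return f"BL {block_class} {lock_element}"
```

With this stand-in hash, a class-9 block numbered 10 hashed into a pool of three lock elements maps to the resource BL 9 1, matching the example above.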
Figure 7-5 illustrates how IDLM locks and PCM locks relate. To allow instance B to read the value of data at data block address x, instance B must first check for locks on that data. The instance translates the block's database resource name to the IDLM resource name, and asks the IDLM for a shared lock in order to read the data.
As illustrated in the following conceptual diagram, the IDLM checks outstanding locks on the granted queue and determines that there are already two shared locks on resource BL 1, 441. Since shared locks are compatible with read-only requests, the IDLM grants a shared lock to instance B. The instance then queries the database to read the data at data block address x, and the database returns the data.
If the required block already had an exclusive lock on it from another instance, then instance B would have to wait for this to be released. The IDLM would place the shared lock request from instance B on the convert queue. The IDLM would then notify the instance when the exclusive lock was removed and grant its request for a shared lock.
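The granted-queue and convert-queue behavior described above can be sketched as a toy lock manager. This is a conceptual model, not the IDLM implementation; the class names and the retry policy are assumptions.

```python
from collections import deque

# Toy model of the grant/convert logic: a request is granted only if
# its mode is compatible with every lock already on the granted queue;
# otherwise it waits on the convert queue until a conflicting lock is
# released.

# Compatibility of null (N), shared (S), and exclusive (X) modes,
# keyed as (held mode, requested mode).
COMPATIBLE = {
    ("N", "N"): True,  ("N", "S"): True,  ("N", "X"): True,
    ("S", "N"): True,  ("S", "S"): True,  ("S", "X"): False,
    ("X", "N"): True,  ("X", "S"): False, ("X", "X"): False,
}

class Resource:
    def __init__(self, name):
        self.name = name         # e.g. "BL 1, 441"
        self.granted = []        # (instance, mode) pairs
        self.convert = deque()   # requests waiting for a grant

    def request(self, instance, mode):
        if all(COMPATIBLE[(held, mode)] for _, held in self.granted):
            self.granted.append((instance, mode))
            return "granted"
        self.convert.append((instance, mode))
        return "waiting"

    def release(self, instance):
        self.granted = [(i, m) for i, m in self.granted if i != instance]
        # Retry waiters in order now that a lock has been released.
        pending, self.convert = self.convert, deque()
        for inst, mode in pending:
            self.request(inst, mode)  # re-grants or re-queues each one
```

Under these rules, any number of shared holders coexist on one resource, while an exclusive request queues until every conflicting lock is released.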
The term IDLM lock refers simply to the IDLM's notations for tracking and coordinating the outstanding locks on a resource.
The IDLM provides one lock per instance on a PCM resource. As illustrated in Figure 7-6, if you have a four-instance system and require a buffer lock on a single resource, you actually have four locks--one per instance.
In contrast, the number of locks per resource for non-PCM locks depends on the type of lock.