Oracle8i Parallel Server Concepts and Administration Release 8.1.5 A67778-01
[Architecture] is music in space, as if it were a frozen music...
This chapter explains features of the Oracle Parallel Server (OPS) architecture that differ from an Oracle server in exclusive mode.
Each Oracle instance in an OPS architecture has its own:

- System Global Area (SGA)
- set of background processes
- thread of online redo log files

All instances in an OPS environment share, or need access to, the same sets of:

- datafiles
- control files

Each OPS instance thus contains an SGA and a set of background processes, while all instances operate on a single shared database.
The basic OPS components appear in Figure 5-1, which shows DBWR processes writing data while user processes read data. The background processes LMD and LCK, as well as foreground (FG) processes, communicate directly from one instance to another by way of the interconnect.
The sections that follow summarize the characteristics of OPS.
A parallel server is administered in much the same manner as a non-parallel server, except that certain administrative tasks, such as starting up or shutting down an instance, require you to connect to that particular instance. Other tasks, such as creating users or objects, can be performed from any single instance.
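For example, an administrator might connect to a specific instance through a dedicated Net8 service name. The following SQL*Plus sketch assumes hypothetical aliases `ops1` and `ops2`; substitute the aliases defined in your own tnsnames.ora:

```sql
REM Connect to instance 1 for an instance-specific task such as shutdown.
REM "ops1" is a hypothetical Net8 alias for the first instance.
CONNECT internal@ops1
SHUTDOWN IMMEDIATE

REM Object creation can be performed from any instance, e.g. instance 2.
REM "scott/tiger" and "ops2" are likewise illustrative.
CONNECT scott/tiger@ops2
CREATE TABLE orders (order_id NUMBER PRIMARY KEY, placed DATE);
```

Because the shutdown affects only the instance you are connected to, the other instances of the parallel server continue to serve users.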
Applications accessing the database can run on the same nodes as instances of a parallel server or on separate nodes, using the client-server architecture. A parallel server can be part of a distributed database system. Distributed transactions access data in a remote database in the same manner, regardless of whether the datafiles are owned by a standard Oracle Server in exclusive mode or by a parallel server in exclusive or shared mode.
Other non-Oracle processes can run on each node, or you can dedicate the entire system or part of the system to Oracle. For example, a parallel server and its applications might occupy three nodes of a five-node configuration, while the other two nodes are used for non-Oracle applications.
Each instance of a parallel server has its own System Global Area (SGA). The SGA contains the following memory structures:

- database buffer cache
- redo log buffer
- shared pool
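Because each instance has its own SGA, these structures are sized independently in each instance's parameter file. A minimal init.ora sketch follows; the values are purely illustrative, not recommendations:

```
# Per-instance SGA sizing in init.ora (illustrative values only)
db_block_buffers = 10000     # database buffer cache, in database blocks
log_buffer       = 163840    # redo log buffer, in bytes
shared_pool_size = 50000000  # shared pool, in bytes
```

Instances of the same parallel server may use different values for these parameters, since each SGA is private to its instance.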
Data sharing among SGAs in OPS is controlled by Parallel Cache Management, which uses PCM locks. Copies of the same data block can be present in several SGAs at the same time. PCM locks keep the database buffer caches of all instances consistent, ensuring that each instance sees the changes made by the other instances.
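PCM locks are allocated to datafiles through the GC_FILES_TO_LOCKS initialization parameter. The file numbers and lock counts in this sketch are illustrative assumptions, not tuning advice:

```
# Illustrative GC_FILES_TO_LOCKS setting: 100 PCM locks for file 1,
# 200 for file 2, and 50 locks shared across files 3 through 5.
gc_files_to_locks = "1=100:2=200:3-5=50"
```

How many locks to assign to which files depends on the access patterns of your application; see the tuning guidelines elsewhere in this book.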
Each instance has a shared pool that is used only by the applications connected to that instance. If the same SQL statement is submitted by different applications through the same instance, it is parsed and stored once in that instance's SGA. If the same statement is also submitted by an application on another instance, that instance parses and stores the statement as well.
Each instance in OPS has its own set of background processes that are identical to the background processes of a single server in exclusive mode. The DBWR, LGWR, PMON, and SMON processes are present for every instance; the optional processes, ARCH, CKPT, Dnnn and RECO, can be enabled by setting initialization parameters. In addition to the standard background processes, each instance of OPS has at least one lock process, LCK0. You can enable additional lock processes if needed.
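You can see which background processes are currently running on an instance by querying the V$BGPROCESS view; processes that have not been started have a zero process address. The exact rows returned depend on your configuration:

```sql
REM List the background processes running on the current instance.
REM Rows with paddr = '00' correspond to processes that are not started.
SELECT name, description
  FROM v$bgprocess
 WHERE paddr <> '00'
 ORDER BY name;
```

On an OPS instance, the output should include LCK0 in addition to the standard processes such as DBWR, LGWR, PMON, and SMON.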
In OPS, the IDLM also uses the LMON and LMD0 processes. LMON monitors instance failures and manages the associated recovery for the IDLM; in particular, it handles the part of recovery associated with global locks. The LMD process handles remote lock requests, that is, requests originating from other instances, and also performs deadlock detection. The LCK process manages the locks used by an instance and coordinates requests from other instances for those locks.
When an instance fails in shared mode, another instance's SMON detects the failure and recovers for the failed instance. The LCK process of the instance doing the recovery cleans up outstanding PCM locks for the failed instance.
Foreground processes communicate lock requests directly to remote LMD processes, sending information such as the name of the resource on which the lock is requested and the mode in which the lock is needed. The IDLM processes the request asynchronously, so the foreground process waits for the request to complete before closing it.
See Also:
For more information about how these requests are processed, see "Asynchronous Traps (ASTs) Communicate Lock Request Status".
Cache Fusion resolves cache coherency conflicts when one instance requests a block held in exclusive mode by another instance. In such cases, Oracle transfers a consistent-read version of the block directly from the memory cache of the holding instance to the requesting instance. Oracle does this without writing the block to disk.
Cache Fusion uses the Block Server Process (BSP) to roll back any uncommitted transactions and construct a consistent-read version of the block. BSP then sends that block directly to the requesting instance. The state of the block is consistent as of the point in time at which the requesting instance made the request. Figure 5-2 illustrates this process.
Cache Fusion does this only for consistent read, reader/writer requests. This greatly reduces the number of lock downgrades and the volume of inter-instance communication. It also increases the scalability of certain applications that previously were not likely OPS candidates, such as OLTP and hybrid applications.
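The effect of Cache Fusion on inter-instance traffic can be observed in the instance statistics. The following query is a sketch; the exact global cache statistic names vary by release, so the LIKE filter simply lists whatever statistics your release exposes:

```sql
REM List the global-cache-related statistics this instance maintains.
REM Exact statistic names vary by release.
SELECT name, value
  FROM v$sysstat
 WHERE name LIKE '%global cache%'
 ORDER BY name;
```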
When setting up OPS, observe the guidelines in Table 5-1: