The Multi-threaded Server (MTS) is a strategic component of Oracle server technology that provides greater user scalability for applications supporting numerous clients with concurrent database connections. Applications benefit from MTS features such as connection pooling and multiplexing.
MTS offers strategic functionality for Oracle because its architecture allows increased user scalability, improved throughput, and better response time. MTS does this while using fewer resources, even as the number of users increases. MTS can scale most applications to accommodate a large number of users without changing the applications.
Note: You can also configure Oracle to use both MTS and dedicated server configurations. For more information on this, please refer to the Net8 Administrator's Guide.
To set up MTS, set the MTS-related parameters to suit your application.
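As a minimal sketch, the MTS-related parameters are set in the INIT.ORA file; the values shown here are illustrative, not recommendations:

   # Illustrative INIT.ORA entries for a small MTS configuration
   MTS_DISPATCHERS     = "(PROTOCOL=TCP)(DISPATCHERS=2)"   # start two TCP dispatchers
   MTS_MAX_DISPATCHERS = 5                                 # upper bound on dispatchers
   MTS_SERVERS         = 10                                # shared servers started at instance startup
   MTS_MAX_SERVERS     = 40                                # upper bound on shared servers

The sections that follow discuss how to choose values for these parameters.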
The applications that benefit most from MTS are those that require a large number of concurrent database connections while keeping database server memory use to a minimum. Examples include OLTP applications with high transaction volumes and web-based, thin-client applications that use IIOP with Java-based architectures.
MTS provides user scalability and performance enhancements to enable using a three-tier application model with the Oracle Server. In many cases, MTS is an excellent replacement for transaction processing (TP) monitors. This is because MTS does not have some of the performance overhead associated with TP monitors.
You can also build a highly scalable, high-availability system by using MTS together with Oracle Parallel Server (OPS). Using MTS with OPS makes it possible to achieve 24-hours-a-day, seven-days-a-week uptime.
The following are among the key performance enhancements and new MTS-related functionality for Oracle8i:
See Also: For more information on OPS, please refer to Oracle8i Parallel Server Concepts and Administration.
Using MTS allows you to tune your system and minimize resource usage. The values for MTS-related parameters for typical applications are determined by a number of factors such as:
The following section describes how to tune dispatchers and the use of MTS' connection pooling and connection multiplexing features.
The number of active dispatchers on a running system does not increase dynamically. To increase the number of dispatchers to accommodate more users after system startup, alter the value of the DISPATCHERS attribute of the MTS_DISPATCHERS parameter. The total number of dispatchers requested this way cannot exceed the value of the MTS_MAX_DISPATCHERS parameter, whose default value is 5.
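For example, assuming a TCP configuration, you could raise the number of dispatchers on a running system with a statement such as the following (the dispatcher count shown is illustrative):

   ALTER SYSTEM SET MTS_DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=4)';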
If the number of dispatchers you configure is greater than the value of MTS_MAX_DISPATCHERS, Oracle increases the value of MTS_MAX_DISPATCHERS to this value. Unless you expect the number of concurrent connections to increase over time, you do not need to set this parameter.
A ratio of 1 dispatcher for every 250 connections works well for typical systems. For example, if you anticipate 1,000 connections at peak time, you may want to configure 4 dispatchers. Being too aggressive in your estimates is not beneficial; configuring too many dispatchers can degrade performance.
If you do not have the resources to configure a large number of dispatchers and your system requires more simultaneous connections, use connection pooling and connection multiplexing with MTS.
MTS enables connection pooling, in which clients share a pool of connection slots on the server side; idle client connections are temporarily released so that their slots can be reused. MTS also enables connection multiplexing through Oracle Connection Manager, which allows multiple client connections to share a single network connection between the Connection Manager and the database.
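As a sketch, both features are enabled through attributes of the MTS_DISPATCHERS parameter; the attribute values below are illustrative, and multiplexing additionally requires that Oracle Connection Manager be configured as described in the Net8 Administrator's Guide:

   # Illustrative INIT.ORA entry: one TCP dispatcher with connection pooling enabled
   MTS_DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=1)(POOL=ON)(CONNECTIONS=500)(SESSIONS=1000)"

   # Illustrative entry for multiplexing through Oracle Connection Manager
   MTS_DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=1)(MULTIPLEX=ON)"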
See Also: For an example of implementing multiplexing, please refer to the Net8 Administrator's Guide.
This section explains how to maximize throughput and response time using MTS by covering the following topics:
The number of shared server processes spawned changes dynamically based on need. The system begins by spawning the number of shared servers as initially designated by the value set for MTS_SERVERS. If needed, the system spawns more shared servers up to the value set for the MTS_MAX_SERVERS parameter.
If the system load decreases, the number of shared servers shrinks back toward, but never below, the minimum specified by MTS_SERVERS. Because of this, do not set MTS_SERVERS too high at system startup. Typical systems stabilize at a ratio of about 1 shared server for every 10 connections.
For OLTP applications, the connections-to-servers ratio can be higher, because the rate of requests is typically low and each request keeps a shared server busy only briefly. In applications where the rate of requests is high, or where each request keeps a server busy for a long time, the connections-to-servers ratio should be lower.
In either case, set MTS_MAX_SERVERS to a reasonable value based on your application. The default value of MTS_MAX_SERVERS is 20, and the default for MTS_SERVERS is 1.
On NT, exercise care when setting MTS_MAX_SERVERS to too high a value because, as mentioned, each server is a thread in a common process. The optimal values for these settings vary with your configuration; these are just estimates of what works for typical configurations.
MTS_MAX_SERVERS is a static INIT.ORA parameter, so you cannot change it without shutting down your database. However, MTS_SERVERS is a dynamic parameter, so you can change it while the instance is running by using the ALTER SYSTEM command.
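For example, to raise the number of shared servers on a running instance (the value shown is illustrative):

   ALTER SYSTEM SET MTS_SERVERS = 20;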
Example OLTP Application: If you expect 2,000 concurrent connections, the 1-shared-server-per-10-connections ratio suggests beginning with 200 shared servers. Because this is an OLTP application, however, each connection imposes a lighter-than-typical load on the shared servers, so start with 100 shared servers instead and set MTS_MAX_SERVERS to 400. If more shared servers are needed, the system adjusts their number up to the MTS_MAX_SERVERS value.
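A minimal INIT.ORA sketch for this example, assuming TCP connections and the 1-dispatcher-per-250-connections guideline discussed earlier:

   # Illustrative settings for about 2,000 concurrent OLTP connections
   MTS_SERVERS         = 100                                # lighter per-connection load, so start below the 1:10 ratio
   MTS_MAX_SERVERS     = 400                                # allow growth if the load is heavier than expected
   MTS_DISPATCHERS     = "(PROTOCOL=TCP)(DISPATCHERS=8)"    # 2,000 connections / 250 per dispatcher = 8
   MTS_MAX_DISPATCHERS = 8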
Net8 buffers data in the Session Data Unit (SDU) and sends the buffered data when the buffer is full or when the application tries to read it. Tune the default SDU size when large amounts of data are retrieved and when the packet size is consistently the same; this can speed retrieval and also reduce fragmentation.
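As a sketch, the SDU size is requested on the client side through the Net8 configuration files; the service name, host, and size below are hypothetical, and the server side must also be configured as described in the Net8 Administrator's Guide:

   # tnsnames.ora (client side)
   sales =
     (DESCRIPTION =
       (SDU = 8192)                                         # request an 8 KB session data unit
       (ADDRESS = (PROTOCOL = TCP)(HOST = sales-server)(PORT = 1521))
       (CONNECT_DATA = (SID = sales))
     )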
Connection load balancing distributes the load based on:
Connection load balancing only works when MTS is enabled.
When using OPS or replicated databases, the connection load balancing feature of MTS provides better load balancing than DESCRIPTION_LISTs, because it distributes client connections based on actual CPU load. MTS also enables simplified application-dependent routing configurations that ensure requests from the same application are routed to the same node each time. This improves the efficiency of application data transfer.
See Also: For more information, refer to the chapter "Administering Multiple Instances" in Oracle8i Parallel Server Concepts and Administration.
This section explains how to tune memory use with MTS.
Oracle recommends using the large pool to allocate MTS-related UGA (User Global Area), not the shared pool. This is because Oracle also uses the shared pool to allocate SGA memory for other purposes such as shared SQL and PL/SQL procedures. Using the large pool instead of the shared pool also decreases SGA fragmentation.
To store MTS-related UGA in the large pool, specify a value for the parameter LARGE_POOL_SIZE. If you do not set a value for LARGE_POOL_SIZE, Oracle uses the shared pool for MTS user session memory.
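For example, in INIT.ORA (the size is illustrative and should be derived from the sizing exercise described below):

   LARGE_POOL_SIZE = 100M        # reserve 100 MB for MTS session (UGA) memory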
When using MTS, configure a larger-than-normal large pool or shared pool. This is necessary because MTS stores all user session state from the UGA in the SGA (System Global Area).
If you use the shared pool, the default value of SHARED_POOL_SIZE is 8MB on 32-bit systems and 64MB on 64-bit systems. LARGE_POOL_SIZE has no default value, but its minimum value is 300K.
The exact amount of UGA Oracle uses depends on each application. To determine an effective setting for the large or shared pools, observe UGA use for a typical user and multiply this amount by the estimated number of user sessions.
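One way to observe current UGA use, as a sketch, is to query the dynamic performance views; the statistic names 'session uga memory' and 'session uga memory max' are standard, but how you extrapolate from them to a pool size is up to you:

   -- Total UGA memory currently allocated by all sessions
   SELECT SUM(value) "Total UGA (bytes)"
     FROM v$sesstat s, v$statname n
    WHERE s.statistic# = n.statistic#
      AND n.name = 'session uga memory';

   -- Maximum UGA memory allocated per session, to estimate a typical user
   SELECT s.sid, s.value "Max UGA (bytes)"
     FROM v$sesstat s, v$statname n
    WHERE s.statistic# = n.statistic#
      AND n.name = 'session uga memory max';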
Even though shared memory use increases with MTS, total memory use decreases. Because there are fewer processes, Oracle uses less PGA memory with MTS. This is the opposite of how memory is used in dedicated server environments.
With MTS, you can set the PRIVATE_SGA resource limit to restrict the memory used by each client session from the SGA. PRIVATE_SGA defines the number of bytes of SGA memory a session may use. However, this limit is rarely used because most DBAs do not limit SGA consumption on a user-by-user basis.
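If you do want to impose such a limit, a sketch of the profile-based approach follows; the profile name, user name, and limit are hypothetical, and resource limits are only enforced when the RESOURCE_LIMIT initialization parameter is set to TRUE:

   -- Illustrative profile limiting each session to 500 KB of SGA memory
   CREATE PROFILE mts_limit LIMIT PRIVATE_SGA 500K;
   ALTER USER scott PROFILE mts_limit;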
Oracle provides several views with information about dispatchers, shared servers, the rate at which connections are established, the messages queued, shared memory used, and so on.
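For example, the following queries (a sketch; the column lists are trimmed for brevity) show dispatcher, shared server, and request queue activity:

   -- Dispatcher busy and idle time
   SELECT name, status, busy, idle FROM v$dispatcher;

   -- Shared server activity
   SELECT name, status, requests FROM v$shared_server;

   -- Requests queued and total wait time per queue
   SELECT type, queued, wait, totalq FROM v$queue;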
Performance of certain database features may degrade slightly when MTS is used. These features include BFILEs, parallel execution, inter-node parallel execution, and hash joins. This is because these features may prevent a session from migrating to another shared server while they are active.
A session may remain non-migratable even after a request from the client has been processed. These features can make sessions non-migratable because they do not store all the user state information in the UGA; some of the state is left in the PGA. As a result, if a different shared server were to process the next request from the client, the part of the user state stored in the PGA would be inaccessible. To avoid this, individual shared servers often need to remain bound to a user session, which makes the session non-migratable among shared servers.
When using these features, you may need to configure more shared servers. This is because some servers may be bound to sessions for an excessive amount of time.