How to configure UNIX and Linux systems for WebSphere MQ
You are planning to install or upgrade a WebSphere MQ server on a UNIX or Linux system and need to know how to tune the operating system parameters, including kernel parameters related to inter-process communication (IPC) resources like shared memory and semaphores.
If you do not configure your operating system parameters properly, the WebSphere MQ server may exhaust system resources when you process your production workload. Depending on which resource is exhausted, WebSphere MQ could return an error to the application such as MQRC_RESOURCE_PROBLEM (2012), write a message to its error logs, create FDC files in the /var/mqm/errors directory, or even terminate.
If you need to increase parameters beyond the IBM WebSphere MQ defaults, bear in mind that modern systems are capable of supporting much larger resource limits. For example, the UNIX IPC interface was designed in the late 1970s on a 16-bit DEC PDP-11 minicomputer of the kind used by UNIX designers Dennis Ritchie and Ken Thompson. Modern systems run with a hundred thousand to more than a million times more memory, so doubling or quadrupling IPC parameters to handle your workload will not stress your system. Be generous with these values, and refer to the section on IPC parameters below for more information about specific settings.
The mqconfig script analyzes your system and compares its settings to the IBM recommended values for WebSphere MQ 7.5, 7.1 or 7.0. It displays the results of this comparison in an easy-to-read format, along with a PASS, WARN, or FAIL grade for each setting. The mqconfig script does not make any modifications to your system. A version called mqconfig-old is still provided for older versions of WebSphere MQ.
To use mqconfig, you must first download the script to your system and make it executable (e.g. 'chmod a+x mqconfig'), then run it using the syntax given below. On Solaris 10 you should use the '-p' parameter to identify the projects in which you run WebSphere MQ queue managers. If you omit this parameter, mqconfig will try to determine which projects it should analyze, perhaps incorrectly.
mqconfig -v Version [-p Project]... (Solaris 10 only)
Version: 7.5, 7.1 or 7.0
Example 1: To check your system configuration for WebSphere MQ 7.5:
mqconfig -v 7.5
Example 2: To check your group.mqm and mqdev projects on Solaris for WebSphere MQ 7.1:
mqconfig -v 7.1 -p group.mqm -p mqdev
Example 3: To read the detailed help about mqconfig and resource limits:
Here is a sample of the mqconfig output showing a Linux system which has four potential tuning issues. The semmni value is half the IBM recommended value, and the nofile soft limit is far below the recommendation, which is why both parameters failed. The tcp_keepalive_time limit is unusual in that lower values are better, so here it failed for being too high. Finally, the shmmni value gave only a warning because it is reasonably close to the IBM limit:
Please note that any values listed in the "Current User Limits" section are resource limits which apply to the user running mqconfig. If you normally start queue managers as the mqm user (or via sudo to mqm) then you should run mqconfig as mqm to verify its user limits. Other members of the mqm group (and perhaps root as well) can also run mqconfig to make sure their user limits are acceptable for starting WebSphere MQ queue managers.
The mqconfig script may also recommend a change to your shell options in order to avoid a performance problem caused when shells run WebSphere MQ background jobs with reduced priority. If your shell is not susceptible, mqconfig prints nothing about it. If mqconfig suggests a change, you can simply modify your profile; for example, Korn shell users can add the line 'set +o bgnice' to their profile.
If you note a discrepancy between mqconfig and the WebSphere MQ Information Center, or if you encounter a problem with the mqconfig script, please submit a comment using the link at the bottom of the page.
Operating System Notes
AIX
The AIX kernel is self-tuning with regard to IPC parameters, so WebSphere MQ will not run into a limit on shared memory or semaphores. The mqconfig script can check other basic settings to ensure they are suitable for WebSphere MQ.
HP-UX 11i
You can view or change kernel parameters with the System Management Homepage tool (smh) or by using the kctune command. The kctune command can show whether a parameter change takes effect immediately or whether you must restart the system. On HP-UX 11.23 and older, you may use the System Administration Manager tool (sam) instead of smh.
Linux
You can view or change kernel parameters dynamically using the sysctl command or using the files under the /proc filesystem. In order to change the parameters permanently, you can add your values to the /etc/sysctl.conf file or use a system startup script to modify the parameters on each startup.
One oddity is that all semaphore tuning parameters are held in a single parameter called sem, rather than individually. The fields in sem correspond to semmsl, semmns, semopm and semmni. All four fields must be set at the same time, even if you wish to change only one value.
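To illustrate how the sem parameter packs all four values together, the following sketch parses a hypothetical kernel.sem value (the numbers here are illustrative, not IBM recommendations; on a real system you would read the current value with 'sysctl -n kernel.sem'):

```shell
# Linux packs all four semaphore limits into the single kernel.sem value,
# in this order: semmsl semmns semopm semmni.
sem="250 32000 32 128"   # hypothetical current value of kernel.sem
read -r semmsl semmns semopm semmni <<EOF
$sem
EOF
echo "semmsl=$semmsl semmns=$semmns semopm=$semopm semmni=$semmni"
# To raise only semmni, you must still re-specify all four fields, e.g.:
#   sysctl -w kernel.sem="250 32000 32 1024"
# and persist the change by adding this line to /etc/sysctl.conf:
#   kernel.sem = 250 32000 32 1024
```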
Solaris 9
Solaris 9 is supported only by WebSphere MQ 7.0 and older versions. You can view kernel parameters by examining the output of the 'sysdef -i' command. To change parameters you must edit the /etc/system file and then reboot the system.
Solaris 10
Solaris 10 uses projects to replace the system-wide tuning parameters used in previous versions. The WebSphere MQ Information Center describes how to use the projects, projadd and projmod commands to list, create and modify projects. The previous kernel parameters have received more descriptive names, but note that some begin with 'project' and others with 'process'.
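As a sketch of a project definition (the attribute values below are illustrative only, not IBM recommendations), the resource controls that replace the old kernel parameters can be set when creating a project with projadd:

```shell
# Hypothetical example: create a group.mqm project with resource controls.
# project.max-shm-memory replaces shmmax, project.max-shm-ids replaces shmmni,
# project.max-sem-ids replaces semmni, process.max-sem-nsems replaces semmsl.
projadd -c "WebSphere MQ" \
        -K "project.max-shm-memory=(priv,4GB,deny)" \
        -K "project.max-shm-ids=(priv,1024,deny)" \
        -K "project.max-sem-ids=(priv,1024,deny)" \
        -K "process.max-sem-nsems=(priv,1024,deny)" \
        group.mqm
```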
In order to use the resource limits recommended by IBM, you should configure a project (for example 'group.mqm') and ensure that you start queue managers in that project. You can check your current project using the id command and use the newtask command to run a single command or start a shell in a different project. The following example demonstrates both methods, with the commands on the dark gray background running in the group.mqm project rather than the default project:
sun10> id -p
uid=500(justinf) gid=501(dev) projid=3(default)
sun10> newtask -p group.mqm strmqm SOHO
WebSphere MQ queue manager 'SOHO' starting.
The queue manager is associated with installation 'Manhattan'.
5 log records accessed on queue manager 'SOHO' during the log replay phase.
Log replay for queue manager 'SOHO' complete.
Transaction manager state recovered for queue manager 'SOHO'.
WebSphere MQ queue manager 'SOHO' started using V184.108.40.206.
sun10> id -p
uid=500(justinf) gid=501(dev) projid=3(default)
sun10> newtask -p group.mqm
sun10> id -p
uid=500(justinf) gid=501(dev) projid=100(group.mqm)
sun10> strmqm CHELSEA
WebSphere MQ queue manager 'CHELSEA' starting.
The queue manager is associated with installation 'Manhattan'.
5 log records accessed on queue manager 'CHELSEA' during the log replay phase.
Log replay for queue manager 'CHELSEA' complete.
Transaction manager state recovered for queue manager 'CHELSEA'.
WebSphere MQ queue manager 'CHELSEA' started using V220.127.116.11.
sun10> id -p
uid=500(justinf) gid=501(dev) projid=3(default)
It is easy for WebSphere MQ queue managers to end up running in the default project, for example because an administrator forgot to use the newtask command. You should either configure the default project to satisfy the IBM WebSphere MQ default tuning values, or put processes in place to ensure that WebSphere MQ commands run in the right project.
IPC Tuning Parameters
The following parameters control Inter-Process Communication (IPC) semaphore and shared memory resources used by WebSphere MQ. Not all parameters exist on every system; AIX, for example, does not use any of these parameters. Parameters relating to IPC message queues are not listed, since WebSphere MQ no longer uses them.
If you are installing WebSphere MQ on a system alongside other products that recommend certain IPC parameter settings, you may need to combine the recommendations. In some cases (marked with an asterisk in the table below) you should add up the recommendations of all the products and use the total. For example, if WebSphere MQ recommends 1024 and DB2 recommends 1024, choose a value of 2048 or higher. Otherwise you should use the highest requested value. For example, if WebSphere MQ wants a value of 256 and DB2 asks for 512, you should use the higher value of 512.
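The two combination rules can be sketched as simple arithmetic (the product recommendations below are the hypothetical values from the example above):

```shell
# For "additive" parameters, sum the recommendations of all products.
mq_rec=1024
db2_rec=1024
additive_total=$((mq_rec + db2_rec))
echo "additive parameter: use at least $additive_total"

# For all other parameters, take the highest single recommendation.
mq_rec2=256
db2_rec2=512
if [ "$mq_rec2" -gt "$db2_rec2" ]; then max=$mq_rec2; else max=$db2_rec2; fi
echo "non-additive parameter: use the maximum, $max"
```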
semmni
The maximum number of semaphore sets on the system. WebSphere MQ queue managers add sets based on workload, so you should check this parameter at runtime to ensure the resource usage is safely within your limit.
semmsl
The maximum number of semaphores in a single set.
semmns
The total number of semaphores on the system. You can calculate the theoretical maximum by multiplying semmni * semmsl, but in practice some sets will have fewer than the maximum number of semaphores.
semmnu
The maximum number of semaphore undo requests on the system. When a program ends or crashes, the operating system will automatically release semaphores for which "undo" support was requested. WebSphere MQ uses the undo option with some semaphores to ensure they will not get stuck in a locked state.
semume
The maximum number of semaphore undo requests a single process can make.
semaem
The maximum adjustment value the operating system can apply to a semaphore when processing an undo request. Since WebSphere MQ uses binary rather than counting semaphores, this parameter does not affect it.
shmmni
The maximum number of shared memory sets on the system. WebSphere MQ queue managers add sets based on workload, so you should check this parameter at runtime to ensure the resource usage is safely within your limit.
shmseg
The maximum number of shared memory sets a single process can attach. This value should match shmmni so that WebSphere MQ processes can attach all sets, if necessary.
shmmax
The maximum size of a shared memory set. Setting a large value will not waste memory on your system, since WebSphere MQ starts by allocating small sets and allocates large ones only when it is processing a heavy workload.
shmmin
The minimum size of a shared memory set. There is no compelling reason to use any value other than 1.
shmall
The maximum number of pages available for shared memory on Linux systems.
semvmx
The maximum value of a semaphore. WebSphere MQ does not use counting semaphores, so this parameter does not affect it.
The mqconfig-old script is an older version for WebSphere MQ 6.0 and 5.3 which will be withdrawn after support for WebSphere MQ 6.0 ends on September 30, 2012. To use mqconfig-old, you must first download the script to your system and make it executable (for example, 'chmod a+x mqconfig-old'), then run it using the syntax given below. On Solaris 10 you should use the '-p' parameter to identify the projects in which you run WebSphere MQ queue managers. If you omit this parameter, mqconfig-old will try to guess which projects it should analyze.
Ensure that you have satisfied the following requirements before trying to use the WebSphere MQ Explorer to do remote administration. Verify that:
The WebSphere MQ server and client are installed on both the local and the remote machine.
A command server is running for every queue manager.
A TCP/IP listener exists for every queue manager. This can be the WebSphere MQ listener or the inetd daemon as appropriate for your operating system environment.
The server-connection channel, called SYSTEM.ADMIN.SVRCONN, exists on every remote queue manager. This channel is mandatory for every remote queue manager being administered.
Create the channel using the following MQSC command: DEFINE CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN).
The user ID of the initiator must be a member of the "mqm" group on both the local and remote machines.
The model queue SYSTEM.MQEXPLORER.REPLY.MODEL must exist on every queue manager.
Create the queue using the following MQSC command: DEFINE QMODEL(SYSTEM.MQEXPLORER.REPLY.MODEL)
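The two MQSC definitions above can be run from the command line by piping them into runmqsc on each remote queue manager (the queue manager name QM1 below is only a placeholder; substitute your own):

```shell
# Define the administration channel and the Explorer reply model queue.
echo "DEFINE CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN)
DEFINE QMODEL(SYSTEM.MQEXPLORER.REPLY.MODEL)" | runmqsc QM1
```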
Using the WebSphere MQ Explorer for Remote Administration
Satisfy all of the requirements for remote administration, and then you can use the WebSphere MQ Explorer to perform administration tasks on your local and remote queue managers.
To show a remote queue manager, right-click the WebSphere MQ Explorer Queue Manager folder, select Show/Hide, then select the Add button on the Add Queue Manager screen (see Figure 1). Then:
Type the name of the remote queue manager and press the Next button.
Select the Specify connection details radio button (see Figure 2).
Type the host name or IP address of the remote queue manager and the listener port, and press the Finish button.
Reasons for Remote Administration failures:
The command server is not running on the remote queue manager. Message AMQ4042 will be issued.
The listener is not running on the remote queue manager. Message AMQ4043 will be issued.
The SYSTEM.ADMIN.SVRCONN channel is not defined on the remote queue manager. Message AMQ4043 will be issued.
The security check failed on the remote queue manager. Message AMQ4043 will be issued.
Prior releases of WebSphere MQ for z/OS (v5.3 and v5.3.1) do not provide this function, and any attempt to remotely administer such a queue manager will fail. The WebSphere MQ V6.0 product has been enhanced to support the remote administration of a z/OS queue manager using the WebSphere MQ Explorer on Windows and Linux (Intel).
The SYSTEM.MQEXPLORER.REPLY.MODEL queue is not defined on the remote queue manager and you are using the V6.0 WebSphere MQ Explorer. Message AMQ4400 will be issued.
The Client attachment feature is not installed on WebSphere MQ for z/OS. The CHIN joblog will contain message CSQX260E. To connect to the z/OS queue manager using the SYSTEM.ADMIN.SVRCONN channel, you need to have the Client Attachment feature installed, which is FMID JMS6007 for WebSphere MQ for z/OS 6.0.0.
Note: In WebSphere MQ V7 for z/OS you can create five “free” client attachments for use with MQ Explorer. These attachments must use the channel name SYSTEM.ADMIN.SVRCONN, which is the default channel name used by the MQ Explorer.
To use the free client attachments, you must first alter the SYSTEM.ADMIN.SVRCONN channel definition and set the maximum instances (MAXINST) attribute to 5 or less. You should also ensure that your server-connection channel is secured by the usual means.
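For example, the following MQSC command caps the channel at five concurrent instances, the maximum allowed for the free attachments (5 here is the upper bound; any lower value is also acceptable):

```
ALTER CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN) MAXINST(5)
```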
IBM® WebSphere® Application Server V7 and V8 provide support for asynchronous messaging based on the Java Message Service (JMS) specification. Using the WebSphere MQ messaging provider, you can write message-driven beans that listen on a WebSphere MQ destination (either a message queue or a topic). When a message arrives on the destination, the message-driven bean's onMessage() method is invoked to process the message.
In WebSphere Application Server V7 and V8, the WebSphere MQ messaging provider supports the use of activation specifications to monitor destinations hosted by WebSphere MQ queue managers. This article shows how activation specifications connect to WebSphere MQ on distributed platforms, describes the mechanism used to monitor the destinations looking for messages, and then shows how message-driven beans are invoked after a suitable message has been detected. The article assumes a basic knowledge of JMS and WebSphere MQ.
In general terms, J2C activation specifications are administered objects that contain information about how to connect to a JMS provider, along with details of the destination on that JMS provider that will be monitored for messages. When deploying an application that contains a message-driven bean, you need to specify the activation specification that the message-driven beans will use. When the activation specification starts up, it connects to a JMS provider, opens the JMS destination, and then monitors it looking for messages.
Figures 1 and 2 below show a sample WebSphere MQ activation specification that has been defined using the activation specification panel in the WebSphere Application Server Integrated Solutions Console. When this activation specification starts up, it makes a BINDINGS mode connection to a local WebSphere MQ queue manager called pault, opens the destination jms/TestQueue, and then starts monitoring this destination for messages.
Figure 1. Specifying the queue manager name and transport type
Figure 2. Specifying the JMS destination that an activation specification will monitor
Activation specifications can be configured to use message selectors, which enables them to only pass messages that meet the selection criteria to message-driven beans. In Figure 2 above, no message selector has been specified, and therefore all messages that arrive on the destination are suitable for processing by this activation specification.
Once an activation specification finds a suitable message, it schedules a piece of work within the application server to process it. Each message requires a JMS server session in order to run, and multiple messages can be processed at the same time.
Each activation specification has an associated server session pool, and its size controls the number of messages that can be processed concurrently by an activation specification. The default size of the server session pool is 10, which means that up to 10 messages can be processed at the same time by a single activation specification. To change the server session pool size, modify the activation specification advanced property Maximum server sessions, as shown in Figure 3:
Figure 3. Specifying how many messages can be processed concurrently
The mechanism that an activation specification uses to detect messages on JMS destinations hosted on a WebSphere MQ queue manager varies depending on the WebSphere MQ messaging provider mode that is being used, as described below.
WebSphere MQ messaging provider normal mode
Activation specifications use the WebSphere MQ messaging provider normal mode if they are connecting to a WebSphere MQ V7 queue manager and they have the Provider version property set to either unspecified (the default value) or 7. In this mode of operation, the activation specification takes advantage of a number of the features of WebSphere MQ V7 when connecting to a queue manager and getting messages. When it starts up, the activation specification:
Creates a connection to the WebSphere MQ queue manager it has been set up to use.
If the activation specification is configured to use a Queue Destination, it opens the queue using the WebSphere MQ API call MQOPEN.
If the activation specification has been configured to use a Topic Destination, it issues a WebSphere MQ API MQSUB call to subscribe to the appropriate topic.
After the queue has been opened or the topic subscribed to, the activation specification uses the WebSphere MQ API call MQCB to register a callback, specifying the appropriate WebSphere MQ GetMessageOptions.
After the callback has been registered, the activation specification issues a WebSphere MQ MQCTL API call, which tells the queue manager that the activation specification is ready to start receiving messages.
Now, when a suitable message arrives on the queue that the activation specification is monitoring, or is published on the topic that the activation specification has subscribed to, the queue manager marks the message to prevent any other activation specifications from seeing it, and then passes details of the message to the activation specification via the callback that was set up earlier.
WebSphere MQ messaging provider migration mode
The other way that activation specifications can connect to a WebSphere MQ queue manager is by using the WebSphere MQ messaging provider migration mode. This mode is used if one of the following conditions is true:
The activation specification is configured to connect to a WebSphere MQ V6 queue manager.
The activation specification is configured to connect to a WebSphere MQ V7 queue manager, and has the Provider Version property set to 6.
The activation specification has been configured to connect to a WebSphere MQ V7 queue manager using the CLIENT transport, and is using a WebSphere MQ channel that has the Sharing Conversations (SHARECNV) property set to 0.
When the activation specification starts up in migration mode, it:
Creates a connection to the WebSphere MQ queue manager it has been set up to use.
If the activation specification has been configured to monitor a Queue Destination, it issues an MQOPEN API call to open the queue.
If the activation specification has been configured to use a Topic Destination, it:
Opens a subscription for the topic.
Checks the values of the activation specification Broker Properties Broker connection consumer subscription queue and Broker durable subscriber connection consumer queue to see which WebSphere MQ queue the Broker will publish messages for this activation specification to.
Calls the WebSphere MQ API MQOPEN to open the appropriate subscription queue.
Once the queue has been opened on the queue manager, the activation specification browses the queue looking for messages by issuing a number of MQGET API calls. The activation specification uses a combination of the WebSphere MQ GetMessageOptions MQGMO_BROWSE_FIRST and MQGMO_BROWSE_NEXT to scan the queue from top to bottom.
When an activation specification has detected a message on a destination (either because a WebSphere MQ V7 queue manager has passed back information about a message via a callback, or because the activation specification has browsed a suitable message), it:
Constructs a message reference that represents the message.
Gets a server session from the activation specification server session pool.
Loads up the server session with the message reference.
Schedules a piece of work with the application server Work Manager.
The activation specification then goes back to looking for more messages.
Getting server sessions
As mentioned earlier, activation specifications will process up to 10 messages concurrently by default. What happens if an activation specification tries to process a message and all 10 server sessions are already busy processing messages? In this situation, the activation specification will block until a server session becomes free. As soon as a server session is available, the activation specification loads it up with the message reference, and then schedules a new piece of work so the server session can run again.
Once the activation specification loads a server session with a message reference, it schedules some work so that the message can be processed. What happens to the work? The Work Manager:
Gets a thread from the WebSphere Application Server WebSphere MQ messaging provider Resource Adapter thread pool. The name of this thread pool is WMQJCAResourceAdapter.
Runs the piece of work on this thread.
After the work has been scheduled, the application server Work Manager will run this piece of work at some point in the future. The work, when started:
Starts either a local or global (XA) transaction, depending on whether the message-driven bean requires XA transactions or not (specified in the message-driven bean's deployment descriptor).
If this is the first time the server session has been used, it:
Creates a new connection to WebSphere MQ.
Issues an MQOPEN API call to open the queue where the message resides.
Gets the message from WebSphere MQ by issuing a destructive MQGET API call.
Runs the message-driven bean's onMessage() method.
Once onMessage() has completed, the server session completes the local or global transaction before exiting.
To improve performance, the connection to the queue manager that the server session uses is left open after the message has been processed and the work completed. Then, the next time the server session is used to process a message, it need not reconnect to WebSphere MQ and reopen the queue containing the message. By default, unused server sessions associated with activation specifications are left open for 30 minutes before being closed off. You can alter this timeout period by modifying the value of the activation specification advanced property Server session pool timeout, as shown in Figure 4 below.
On a lightly loaded system, the time between the piece of work being scheduled and the Work Manager starting the work can be just a few milliseconds. On busy systems, there may be a lengthy delay before the work is actually started. There are two possible reasons for a delay:
There were no free threads in the WMQJCAResourceAdapter thread pool to run the work.
The Work Manager was able to get a thread from the thread pool, but then could not start the work because the application server was too busy.
The Work Manager records when a piece of work was scheduled, and when it starts the work, it checks how much time has elapsed since the activation specification scheduled the work. By default, the activation specification expects the work to be started within 10 seconds of it being scheduled. If more than 10 seconds elapse before the Work Manager starts the work, then a WorkRejected exception is returned back to the activation specification, causing exceptions similar to the one below to appear in the application server SystemErr.log:
Exception in thread "WMQJCAResourceAdapter : 1" java.lang.RuntimeException:
javax.resource.spi.work.WorkRejectedException: Work timed out (id=4), error code: 1
...
Caused by: javax.resource.spi.work.WorkRejectedException: Work timed out
(id=4), error code: 1
When an exception like this one occurs, the message in the Message Reference will have been "unmarked" by the queue manager, so that it can be reprocessed. You can change this 10-second time limit on the activation specification Advanced properties panel using Start timeout, as shown in Figure 4:
Figure 4. Modifying the server session timeout and the amount of time to wait for work to start
Earlier, it was mentioned that a piece of work might get delayed if there are not enough threads in the WMQJCAResourceAdapter thread pool, which leads to the obvious question, "What should the size of this thread pool be?". One thread pool per application server is used by activation specifications to run server sessions. Each activation specification has an advanced property called Maximum server sessions, which defines the maximum number of server sessions that can be running at the same time. Since each server session is used to process messages, this property essentially says how many messages can be processed concurrently by message-driven beans using this activation specification.
So in order to determine the size of the WMQJCAResourceAdapter thread pool, you need to add up the values of the Maximum server sessions property for each WebSphere MQ messaging provider activation specification on the application server. For example, suppose you have 25 activation specifications defined, each with the Maximum server sessions property set to 3. In this situation, there can be up to 75 server sessions running concurrently, each of them using a thread from the WMQJCAResourceAdapter thread pool. Therefore you should set the maximum size of this thread pool to 75. Figure 5 shows the WMQJCAResourceAdapter thread pool panel in the WebSphere Integrated Solutions Console, where you can change the size of this thread pool:
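The sizing rule above reduces to a simple sum; this sketch uses the hypothetical numbers from the example (25 activation specifications, 3 sessions each):

```shell
# Size the WMQJCAResourceAdapter thread pool by summing the
# "Maximum server sessions" value of every activation specification.
activation_specs=25
max_server_sessions_each=3
pool_size=$((activation_specs * max_server_sessions_each))
echo "Set the WMQJCAResourceAdapter maximum pool size to at least $pool_size"
```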
Figure 5. Changing max number of threads available to all activation specifications defined in the application server.
If you start seeing WorkRejected errors appearing in the application server SystemOut.log file, the first thing to check is that the WMQJCAResourceAdapter thread pool is large enough to handle all of the server sessions needed by your activation specifications. If the thread pool is the right size, then the errors are caused by the Work Manager being unable to start the work request within the specified time period. In this situation, you should either increase the value of the activation specification advanced property Start Timeout, or investigate reducing the load on your application server system.
Using WebSphere MQ messaging provider normal mode
As described above, there are three situations in which there might be a delay in between a message being detected and that message being processed by a message-driven bean:
If all server sessions associated with an activation specification are being used.
If all threads in the WMQJCAResourceAdapter thread pool are being used to process messages.
If there is a delay between work being scheduled and the Work Manager actually starting the work.
If the activation specification is running in WebSphere MQ messaging provider normal mode, the queue manager marks messages before passing their details back to the activation specification. Marking the message means that no other activation specification (or WebSphere Application Server Listener Port), either running in the same application server or on a different application server, can see the message, which prevents another message-driven bean from getting the message before a server session has had time to process it.
By default, messages are marked for 5 seconds. To change this time period, modify the WebSphere MQ queue manager property Message mark browse interval (MARKINT).
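The MARKINT attribute is set on the queue manager in milliseconds; a sketch of raising it from the 5-second default (15000 here is an illustrative value, not a recommendation):

```
ALTER QMGR MARKINT(15000)
```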
After WebSphere MQ has passed details of a message to process to an activation specification, the 5 second timer starts. During these five seconds:
The activation specification must get a server session from the server session pool.
The server session must be loaded up with details of the message to process.
The work must be scheduled.
The Work Manager must start the work request.
If there is a delay in getting a server session or a thread from the WMQJCAResourceAdapter thread pool, or if the system is busy and it takes a long time for the Work Manager to schedule the work, then the time between WebSphere MQ passing details of the message and it actually being consumed might be longer than 5 seconds. What happens in this situation?
Well, if the message has remained marked for longer than 5 seconds, the queue manager will unmark it, and another activation specification or listener port is then free to come along and get the message. If this happens, then when the server session that was previously given details of this message tries to get it, it will find that the message is no longer on the destination, and write the following message to the application server SystemOut.log:
CWSJY0003W: JMSCC0108: WebSphere classes for JMS attempted to get
a message for delivery to an message listener that had previously been
marked using browse-with-mark, but the message was not there.
Should you see this message, you have three options:
Increase the value of the WebSphere MQ queue manager property Message mark browse interval (MARKINT), to give the activation specification more time to get the message. If you have multiple applications monitoring the same destination and want the messages to be processed quickly, you should think hard about adopting this approach, as increasing the amount of time the message is marked for will prevent any other applications from getting it.
Tune the application server so that it does not block either waiting for a server session or waiting for a thread from the WMQJCAResourceAdapter thread pool. To do this, increase the size of both the server session pool and the thread pool. This change will mean that messages can be processed within the default message browse mark interval, although more resources will be used within the application server as it will be able to process more messages concurrently.
Do nothing. Not recommended, because it means that the activation specification will waste time and resources trying to get messages that have already been picked up and processed by another application!
This article described the mechanisms that activation specifications use to get messages from WebSphere MQ queue managers, including how activation specifications create a connection to a queue manager and the mechanisms they use to monitor JMS destinations looking for suitable messages to process. The article also described how the application server schedules the processing of messages, once a suitable message has been found.
WebSphere® MQ V7.5 for Multiplatforms provides additional enhancements to IBM® Universal Messaging to deliver a single, integrated offering for all core messaging functions. Customers gain access to previously separately installable capabilities, enabling more complete solutions for data and message movement along with reduced complexity.
Tight integration of managed file transfer and advanced message security capabilities with the WebSphere MQ Queue Manager
Common installation experience based on the MQ Installer
Security enhancements including the integration of the WebSphere MQ Advanced Message Security function into the MQ Server
Licensing changes for the Extended Transactional Client
Licensing changes for WebSphere MQ Telemetry Client
Multiple transmission queues defined for use in a clustered queue manager
WebSphere MQ V7.5 delivers a single Universal Messaging solution. It enables the simple, rapid, reliable, and secure transport of data and messages between applications, systems, and services.
WebSphere MQ is the market-leading, message-oriented middleware product that delivers a reliable, proven universal messaging backbone for almost 10,000 organizations of different sizes, spanning many industries around the world.
This new release builds on the added capabilities and new functions that were delivered in WebSphere MQ V7.1, which was announced in October, 2011. It also builds on the previous announcements for WebSphere MQ File Transfer Edition V7.0.4, in April, 2011, and WebSphere MQ Advanced Message Security V7.0.1, in October, 2011.
Note: WebSphere MQ for z/OS®, V7.1 is not updated in this release of WebSphere MQ V7.5 for Multiplatforms.
You can obtain the new enhanced functions offered in WebSphere MQ V7.5 by migrating directly to V7.5 from WebSphere MQ V6.0, V7.0.1, or V7.1, without migrating to an interim version or release.
A key new feature of WebSphere MQ V7.5 is the consolidation of multiple previously separate product capabilities into a single integrated offering. WebSphere MQ File Transfer Edition and WebSphere MQ Advanced Message Security, both previously announced separate products, are now available as integrated capabilities for optional installation as Managed File Transfer and Advanced Message Security components within the WebSphere MQ product. They are subject to appropriate licensing entitlement.
WebSphere MQ V7.1 introduced the capability to install different versions of WebSphere MQ in different locations on your system. WebSphere MQ V7.5 extends this to allow the additional Advanced Message Security and Managed File Transfer capabilities to be optionally installed as part of the WebSphere MQ server installation.
WebSphere MQ V7.5 queue managers gain access to these additional capabilities without requiring additional access to code, or separate product installs. All tooling necessary to use these functions is included in the MQ Explorer tooling or command line tools for scripted configuration as standard for all customers, including those on z/OS.
Changes made to the licensing of the Extended Transactional Client and the WebSphere MQ Telemetry Client benefit all customers in providing increased entitlement. This applies to customers of WebSphere MQ V7.5 as well as WebSphere MQ V7.0.1 and WebSphere MQ V7.1.
For ordering, contact your IBM representative or an IBM Business Partner. For more information, contact the Americas Call Centers at 800-IBM-CALL (426-2255). Reference: YE001
WebSphere MQ delivers a single integrated Universal Messaging solution. This V7.5 release delivers new features and also enhances the total capabilities available through integrating managed file transfer functions and advanced message security capabilities that were previously available and installable as separate offerings.
Integrated managed file transfer
WebSphere MQ File Transfer Edition has been available as a separate product offering for a number of years, providing the ability for customers to move business data stored in files over the WebSphere MQ infrastructure, improving the reliability, security, and management of their file transfers. WebSphere MQ V7.5 promotes this capability from a separate offering to an integrated optional feature of the WebSphere MQ server component, the WebSphere MQ Managed File Transfer Service. Now all WebSphere MQ servers can install this server-based capability, subject to entitlement. Additional separately installable endpoints, WebSphere MQ Managed File Transfer Agents, are included in the package. These are entitled separately, allowing customers to extend the managed file transfer infrastructure to any point in their enterprise that connects to their WebSphere MQ server deployments.
Integrated advanced message security
WebSphere MQ Advanced Message Security, also available for a number of years, allows customers to protect the security of their messages from application to application, without the need to change the application code itself. With WebSphere MQ V7.5, this capability is included as a part of the install, making it simpler for customers to see the function, and to have it installed should they wish to buy license entitlement to use it.
Improved application isolation
WebSphere MQ V7.5 includes improved ability to scale for differing workload environments by the ability to configure multiple transmission queues in a WebSphere MQ clustered environment. This enables applications with different workloads and performance requirements to operate at their own rate without impacting other applications.
Enhancements for the managed file transfer capabilities
In addition to the tight integration with WebSphere MQ V7.5, which enhances runtime control, the managed file transfer capabilities themselves are also enhanced. There are additional choices for storing file transfer audit information, with the addition of the file system as an option. There is also greater customization of its content and format, as well as more options for logging.
Enhancements for WebSphere MQ Security
The addition of Managed File Transfer capabilities as well as WebSphere MQ Advanced Message Security into WebSphere MQ V7.5 enables some improvements to the security possible within WebSphere MQ.
With WebSphere MQ V7.5, the Advanced Message Security feature is built into all MQ clients, and end-to-end encryption is enabled by updates to WebSphere MQ objects for customers who are licensed to use that function. There is also further support for protecting sensitive data, such as passwords used in configuring managed file transfers.
Wider access to the Extended Transactional Client for all customers
The Extended Transactional Client enables customers to configure their WebSphere MQ client to participate in a transactional unit of work when exchanging messages to an MQ server. Use of this client without charge was previously restricted to customers using WebSphere Application Server, WebSphere Enterprise Service Bus, or WebSphere Process Server as the Transaction Manager. With the announcement of WebSphere MQ V7.5, the Extended Transactional Client is available for use in all client deployments without additional entitlement. This includes all supported versions of WebSphere MQ client connecting to any supported version of WebSphere MQ queue manager.
With the availability of WebSphere MQ V7.5 the capability previously delivered within the Extended Transactional Client is incorporated into the standard WebSphere MQ client. Customers using WebSphere MQ V7.0.1 and WebSphere MQ V7.1 gain the benefit of use of the Extended Transactional Client without charge from the date of this announcement. IBM is making available refreshed code including updated License Information. Customers can realize this benefit through the download and acceptance of this new License Information.
Using WebSphere MQ Telemetry Standard Client
WebSphere MQ Telemetry was included as part of the WebSphere MQ V7.1 offering, providing wider access to this capability for customers who wanted to deploy the WebSphere MQ Telemetry clients on suitable endpoints and connect them to their WebSphere MQ servers.
Connecting these clients to WebSphere MQ servers previously required purchasing entitlements based on the number of Telemetry clients connecting to a WebSphere MQ Queue Manager at any one time. With the announcement of WebSphere MQ V7.5, use of the WebSphere MQ Telemetry Client requires purchasing an entitlement for each installed WebSphere MQ server that will have WebSphere MQ Telemetry clients connected to it, with no limit on the number of connected clients. This change in the license entitlement applies not just to WebSphere MQ V7.5; it also benefits customers using WebSphere MQ Telemetry Clients with WebSphere MQ V7.0.1 and WebSphere MQ V7.1 from the date of this announcement. IBM is making available refreshed code, including updated License Information, and customers can realize this benefit by downloading and accepting this new License Information.
The WebSphere MQ Telemetry Advanced Client entitlements are still based on the number of clients connected at any one time.
Enhancements for the use of WebSphere MQ as a trial
Customers wanting to make a rapid start connecting applications with WebSphere MQ can take advantage of the availability of WebSphere MQ as a free trial download. With WebSphere MQ V7.1, this feature was enhanced to enable trial versions of WebSphere MQ to be upgraded to a full production license. With the integration of additional entitled capabilities in WebSphere MQ V7.5, a trial download is still available. However, if entitlement is not purchased by the end of the trial for capabilities that were installed, those capabilities must be removed from the system.
Accessibility by people with disabilities
WebSphere MQ is capable as of June 15, 2012, when used in accordance with associated IBM documentation, of satisfying the applicable requirements of Section 508 of the Rehabilitation Act, provided that any assistive technology used with the product properly interoperates with it.
Getting the most out of WebSphere MQ first requires you to define performance. This should be followed up with a look at several factors that affect WebSphere MQ performance, and techniques that can be used to improve performance.
What is performance?
The concept of performance has different aspects. When addressing the issue of performance, your first consideration should be to identify the aspect you plan to address.
People associate performance with:
The measurement of response time. Thus, better performance is defined as the completion of a task in less time (quicker response).
Moving the work through the system with less impact on the system and other applications.
Providing the most throughput at peak times.
Availability. The application has to be available when required.
Based on your objectives, you may address performance similarly, but the trade-offs you choose may vary.
WebSphere MQ applications
Different types of WebSphere MQ applications have different performance characteristics. Several types are provided below.
Asynchronous send
The asynchronous send application sends messages based on some external activity. Examples include a stock ticker or an application that reports scores for a sporting event.
This application is primarily concerned with throughput, in that it needs to keep up with the rate at which events occur. Once a message is sent, whether it takes seconds or days to reach its destination does not affect the application.
Synchronous send and receive
Another common type of WebSphere MQ application is the synchronous application. Technically, this application is not synchronous, but rather it is an asynchronous one that expects a timely response to the sent message. For this application, the key concern is the response time for the reply message(s). If the responding application is remote (on a network), this time includes WebSphere MQ processing on multiple hosts, the processing by the remote application and the network transmission time for both messages.
Given the asynchronous design of WebSphere MQ, it is possible that the response will not be timely, and the application design must deal with this. For example, if the application waits indefinitely for the message to arrive, it will consume system resources and could affect other applications.
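The bounded-wait pattern described above can be sketched in Java. This is an illustrative analogy: it uses java.util.concurrent in place of a real JMS MessageConsumer.receive(timeout) or MQGET with a wait interval, and the class and method names are hypothetical:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimedReplyWait {
    // Wait a bounded time for a reply; null signals a timeout that the
    // caller must handle (for example retry, compensate, or report an error).
    static String awaitReply(BlockingQueue<String> replyQueue, long timeoutMillis)
            throws InterruptedException {
        return replyQueue.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        // Stand-in for a reply destination; a real JMS application would call
        // MessageConsumer.receive(timeoutMillis) on the reply queue instead.
        BlockingQueue<String> replyQueue = new LinkedBlockingQueue<>();
        String reply = awaitReply(replyQueue, 100);
        System.out.println(reply == null ? "TIMEOUT" : "REPLY: " + reply);
    }
}
```

The key point is that the wait is bounded: the application regains control after the timeout instead of holding resources indefinitely.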
Server applications
The server application is complementary to the previous two examples. It processes WebSphere MQ messages; performs local processing, such as accessing a database; and may send a response. Multiple servers may be used to share a portion of the workload.
WebSphere MQ client
The WebSphere MQ client application may be an implementation of any of the previous applications, but introduces a key response-driven component. As there is no local queue manager, all requests must pass over the network to the associated server queue manager. The number of requests, the speed at which the request and response can be transmitted, and the additional processing time on the server are all components of the application's performance.
Factors that affect WebSphere MQ performance - an overview
There are many different factors that affect performance. Given the diversity of WebSphere MQ environments, some recommendations may or may not be applicable. For each, there is typically a trade-off in providing better performance to the WebSphere MQ application, such as degrading the performance of another application. It is important that the cost and benefit are understood before making changes.
The usual suspects
The key elements of response time have not changed in more than 20 years.
The processor time to service the application plus the overhead of the operating environment (in this case, WebSphere MQ and the operating system)
The time spent waiting for I/O operations (all computers wait at the same speed)
The time spent transmitting requests over a network
Any contention for resources required by the application
Each of these key components will be addressed in the following sections.
Adding system resources
The most common approach to improving performance is simply to add additional resources. To accomplish this, you could try moving the queue manager to a larger server or adding additional memory. With today's price-to-performance ratios, this can provide significant improvements at little cost.
Performance Factors and Techniques
The main component of CPU consumption for a WebSphere MQ application is the type and number of MQI calls issued.
Calls in order of CPU consumption:
MQCONN - connects to the queue manager, creates required task structures and control blocks
MQOPEN - opens a specific queue for processing, may lock required resources and acquires control blocks
MQCLOSE - closes the queue, commits resources, frees locks and releases control blocks
MQPUT - puts a message to a queue (recovery processing may be required)
MQGET - gets a message from a queue (recovery processing may be required)
On S/390, most CPU is charged to the calling application. On distributed systems, an agent process is used to communicate with WebSphere MQ, and this process consumes most WebSphere MQ-related CPU.
Avoid unnecessary calls
The best and most obvious way to avoid CPU consumption is to avoid unnecessary MQI calls. For example, consider the server application discussed previously. The application could be designed to trigger the server's start when a message arrives, connect to the queue manager, open the queue, retrieve the message and process the response (opening a second queue), close all queues and disconnect from the queue manager. The process would repeat for the next message, and so on. This may be a good solution for a low arrival rate. However, for higher message arrival rates, there are two alternatives.
First, rather than closing all queues and disconnecting, the application could try to do an additional get with wait from the queue. If another message is already available, it could process this message and avoid additional connect and open calls. This process could be repeated until no unprocessed messages remain, and only then would the server terminate. If the message arrival rate is high enough, rather than using triggering, the application could be permanently active, simply looping on a get with wait call. Note that if the arrival rate is insufficient, the above solutions could cause unnecessary processing.
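The drain-until-empty server loop described above can be sketched as follows. This Java sketch uses a BlockingQueue as a stand-in for the MQ queue; a real server would keep its connection and queue handles open and issue MQGET with MQGMO_WAIT and a wait interval. All names are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class DrainingServer {
    // Drain messages using a short get-with-wait; returns the number processed.
    // The connection and queue stay open across gets; the server terminates
    // only when no further message arrives within the wait interval.
    static int drain(BlockingQueue<String> queue, long waitMillis)
            throws InterruptedException {
        int processed = 0;
        String msg;
        while ((msg = queue.poll(waitMillis, TimeUnit.MILLISECONDS)) != null) {
            processed++;  // message processing would happen here
        }
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        q.add("order-1"); q.add("order-2"); q.add("order-3");
        System.out.println("processed=" + drain(q, 50));
    }
}
```

All three messages are processed with a single "connect" and "open", rather than one of each per message.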
Reduce message size and/or compress messages
Message size is a key component in message processing. While application developers can be coerced into reducing message size, there is no guarantee that they will do so. Traditionally, software solutions to compress messages have had a greater success rate than those that relied on application methodology. As seen in Figure 1, which demonstrates the use of compression software, message size can have a significant impact on CPU time. This is primarily due to data movement within the queue manager. Data must be moved out of the application and into WebSphere MQ buffers. It must be logged if persistent, and may have to be written to and read from physical DASD.
Figure 1: CPU consumption
Because of the reduced CPU time, and also because of the savings in I/O and network transmission time, the elapsed time for the compressed data is significantly lower than that for the native messages.
Note, however, that to achieve these savings, messages must be compressed before they are placed on the queue.
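Compressing the payload before the put (and decompressing it after the get) can be done with standard library compression, as in this self-contained Java sketch; the class and method names are illustrative, not part of any WebSphere MQ API:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class MessageCompression {
    // Compress a message payload before putting it on the queue.
    static byte[] compress(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    // Decompress the payload after getting the message from the queue.
    static byte[] decompress(byte[] data) throws Exception {
        Inflater inflater = new Inflater();
        inflater.setInput(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!inflater.finished()) {
            out.write(buf, 0, inflater.inflate(buf));
        }
        inflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Repetitive text, typical of structured message data, compresses well.
        byte[] payload = "ticker=IBM price=125.00 ".repeat(100).getBytes();
        byte[] compressed = compress(payload);
        System.out.println(payload.length + " -> " + compressed.length);
    }
}
```

The smaller payload reduces data movement within the queue manager, the amount of data logged for persistent messages, and network transmission time, at the cost of the compression CPU in the application.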
Reduce the number of messages
Is one big message better than several small ones? Opinions vary.
Larger messages are subject to additional processing overhead, whereas each small message incurs a base amount of processing. WebSphere MQ now supports messages up to 100MB, so it is possible to logically join multiple records (taking care not to go overboard). Define messages that make sense from an application point of view, and don't overanalyze message design. If the number of messages is low, the difference in processing for either method will be small. If a large number of messages are being written, combining the messages may result in a significant reduction in processing overhead.
Use intermediate commits for large numbers of messages
There are several reasons to periodically commit messages. First, the processing required is not linear to the number of messages. The impact of the final commit increases as the number of messages in the unit of work increases. Second, periodic commits spread the total time to process over a longer period (less impact on other applications). Third, messages are not visible to other applications until they have been committed, thus the messages will appear all at once to the server application. The processing application may be overwhelmed. Of course, the commits must be reflected in completed units of work.
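The intermediate-commit pattern above can be sketched as follows. The Session interface is a hypothetical stand-in for a transacted MQ or JMS session, and the batch size of 100 is purely illustrative:

```java
public class BatchedCommit {
    // Hypothetical stand-in for a transacted messaging session.
    interface Session { void commit(); }

    // Process a stream of messages, committing every batchSize messages and
    // once more for any remainder; returns the number of commits issued.
    static int processWithIntermediateCommits(int messageCount, int batchSize,
                                              Session session) {
        int commits = 0;
        int inBatch = 0;
        for (int i = 0; i < messageCount; i++) {
            // ... the MQGET/MQPUT work for one message would go here ...
            inBatch++;
            if (inBatch == batchSize) {
                session.commit();  // make this batch visible to consumers
                commits++;
                inBatch = 0;
            }
        }
        if (inBatch > 0) {         // commit the final partial batch
            session.commit();
            commits++;
        }
        return commits;
    }

    public static void main(String[] args) {
        int commits = processWithIntermediateCommits(1050, 100, () -> { });
        System.out.println("commits=" + commits);  // 10 full batches + 1 remainder
    }
}
```

Messages become visible to the consuming application 100 at a time instead of all 1,050 at once, and the cost of each commit stays bounded.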
Shared queues versus individual queues
Is it better to have a single queue shared by multiple application instances or individual queues? This is an area of debate, but it typically does not make sense to share queues across different applications. However, it may make sense to share queues within an application domain. For example, the Command MQ for S/390 product from BMC Software supports multiple users connected to a single queue manager. It could have been designed with a unique queue per user, but instead it implements a single queue shared by all users based on correlation ID (CorrelId), resulting in fewer queues to manage.
WebSphere MQ on distributed platforms uses an indexed technique to make this efficient. On S/390 with V1.2 and later, the queue can be defined as indexed. This builds an in-storage index. The index can be based on message ID (MsgId) or CorrelId, but not both. This is not typically a problem, as applications use one or the other. However, if the queue is a priority-based queue, additional processing is required for each message with the same index.
If you have applications displaying this behavior, it is important that you define the associated queue as indexed. Consider a queue with 1,000 messages for application A, followed by 50 messages for application B. To read the 50 messages, application B would actually read the 1,000 messages for application A before hitting any of its own messages. Depending on application design, this could result in a total reference of 50,050 messages to process all 50 messages.
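The cost difference can be illustrated in Java. The queue is modeled as a simple list, and the in-storage index as a map from CorrelId to message positions; all names are illustrative and this is only an analogy for what the queue manager does internally:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CorrelIdIndex {
    // Linear scan: examine every message until the CorrelId matches,
    // as an unindexed queue must; returns the number of messages examined.
    static int linearLookups(List<String> queue, String correlId) {
        int examined = 0;
        for (String id : queue) {
            examined++;
            if (id.equals(correlId)) break;
        }
        return examined;
    }

    public static void main(String[] args) {
        // 1,000 messages for application A followed by 50 for application B.
        List<String> queue = new ArrayList<>();
        for (int i = 0; i < 1000; i++) queue.add("A");
        for (int i = 0; i < 50; i++) queue.add("B");

        // Without an index, B's first get examines 1,001 messages.
        System.out.println(linearLookups(queue, "B"));

        // An in-storage index (a map from CorrelId to message positions)
        // finds B's messages without scanning application A's messages.
        Map<String, Deque<Integer>> index = new HashMap<>();
        for (int i = 0; i < queue.size(); i++) {
            index.computeIfAbsent(queue.get(i), k -> new ArrayDeque<>()).add(i);
        }
        System.out.println(index.get("B").size());
    }
}
```

Each of B's 50 gets repeats roughly the same scan without an index, which is how the total approaches the 50,050 message references mentioned above.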
Defining a queue as indexed adds a minimal amount of additional processing during put processing, but can be noticeable during queue manager restart for large queues.
Additionally, if all access to the queue is by MsgId or CorrelId and message expiry is used, it is possible to fill a queue with expired messages, because WebSphere MQ does not discard an expired message until a get operation encounters it.
For distributed queue managers, another CPU consumer is process switching. Process switching prevents the corruption of WebSphere MQ due to application program errors.
The queue manager is isolated from the application program through the use of an agent process that executes within the queue manager domain. For each WebSphere MQ call, an inter-process communication (IPC) switch is made from the application to the agent. When an application is defined as trusted, the application, the agent, and the queue manager run within a common domain. This eliminates the switching overhead, but leaves the queue manager open to corruption by the application. Thus, it is intended only for truly trusted applications.
Trusted applications are primarily used for the WebSphere MQ channel agents. While from a WebSphere MQ perspective these are application programs, from a customer point of view they are part of WebSphere MQ. These should be configured as trusted, reducing overhead for the channel processing. Channel exits will execute within the trusted environment, should be evaluated, and must conform to trusted application restrictions. Note that an application must be designed to use trusted binding. For example, it must use an MQCONNX call instead of the standard MQCONN call.
I/O can be a major component of a WebSphere MQ application's response time, and logging is typically the primary factor. To provide guaranteed once-and-once-only delivery, WebSphere MQ must log every persistent message it processes. Additionally, WebSphere MQ must ensure that the log has been committed prior to the completion of the unit of work.
Queue I/O is typically performed independently of application response time, but it could affect device utilization. When processed within a resource manager, I/O to the queue is not performed unless buffer space is exhausted. Therefore, it is possible for a message to be sent to a queue and read by the processing application without ever being written to physical queue storage.
Use nonpersistent messages when appropriate
Because logging is performed only for persistent messages, using nonpersistent messages eliminates logging activity. Nonpersistent messages are not guaranteed from a WebSphere MQ perspective; that is, they may never be delivered. Most notably, nonpersistent messages are not maintained across a restart of the queue manager. In some cases, nonpersistent messages make sense. For example, consider an application that sends the current temperature. If a single reporting instance is lost, the next temperature report will correct it. However, stock trade messages cannot be lost.
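Choosing persistence per message type can be sketched as follows; the message type names and the helper are hypothetical, and in a real JMS application the result would feed into MessageProducer delivery-mode settings:

```java
public class PersistenceChoice {
    enum Persistence { PERSISTENT, NON_PERSISTENT }

    // Choose persistence per message type: data that can safely be lost
    // (like a periodic temperature reading, which the next report corrects)
    // avoids logging cost; data that must survive a queue manager restart
    // (like a stock trade) must be persistent.
    static Persistence forMessageType(String type) {
        switch (type) {
            case "temperature.reading": return Persistence.NON_PERSISTENT;
            case "stock.trade":         return Persistence.PERSISTENT;
            default:                    return Persistence.PERSISTENT; // safe default
        }
    }

    public static void main(String[] args) {
        System.out.println(forMessageType("temperature.reading"));
        System.out.println(forMessageType("stock.trade"));
    }
}
```

Defaulting unknown types to persistent trades some logging cost for safety, which is usually the right bias when the loss tolerance of a message type has not been analyzed.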