    IBM Websphere MQ - Server-connection channel limits - Middleware News

    Saturday, September 8, 2012, 1:29 AM
    Categories: MQ
    Posted By: Karthick

    You can set server-connection channel limits to prevent client applications from exhausting queue manager channel resources and to prevent a single client application from exhausting server-connection channel capacity.

    A maximum total number of channels can be active at any time on an individual queue manager, and the total number of server-connection channel instances is included in that maximum.

    If you do not specify the maximum number of simultaneous instances of a server-connection channel that can be started, it is possible for a single client application, connecting to a single server-connection channel, to exhaust the maximum number of active channels that are available. When the maximum number of active channels is reached, no further channels can be started on the queue manager. To avoid this, you must limit the number of simultaneous instances of an individual server-connection channel that can be started, regardless of which client started them.

    If the value of the limit is reduced to below the currently running number of instances of the server connection channel, even to zero, then the running channels are not affected. However, new instances cannot be started until sufficient existing instances have ceased to run so that the number of currently running instances is less than the value of the limit.

    Also, many different client-connection channels can connect to an individual server-connection channel. The limit on the number of simultaneous instances of an individual server-connection channel that can be started, regardless of which client started them, prevents any client from exhausting the maximum active channel capacity of the queue manager. However, if you do not also limit the number of simultaneous instances of an individual server-connection channel that can be started from an individual client, then it is possible for a single, faulty client application to open so many connections that it exhausts the channel capacity allocated to an individual server-connection channel, and this prevents other clients that need to use the channel from connecting to it. To avoid this, you must limit the number of simultaneous instances of an individual server-connection channel that can be started from an individual client.

    If the value of the individual client limit is reduced below the number of instances of the server-connection channel that are currently running from individual clients, even to zero, then the running channels are not affected. However, new instances of the server-connection channel cannot be started from an individual client that exceeds the new limit until sufficient existing instances from that client have ceased to run so that the number of currently running instances is less than the value of this parameter.
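
    As an illustration only (this is not from the original post): on WebSphere MQ V7.0.1 and later these two limits are normally set with the MAXINST and MAXINSTC attributes of the server-connection channel. A minimal MQSC sketch, assuming a queue manager QM1 and a channel named APP1.SVRCONN:

        runmqsc QM1
        ALTER CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) MAXINST(100) MAXINSTC(10)
        DISPLAY CHANNEL(APP1.SVRCONN) MAXINST MAXINSTC
        END

    MAXINST caps the total simultaneous instances of the channel, and MAXINSTC caps the instances started from any one client; both values here are illustrative.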


    IBM Websphere MQ Channels-Changing queue manager configuration information - Middleware News

    Saturday, September 8, 2012, 1:27 AM
    Categories: MQ
    Posted By: Karthick

    Use the Channels queue manager properties page in the WebSphere® MQ Explorer, or the CHANNELS stanza in the qm.ini file, to specify information about channels. A sample stanza is shown after the attribute descriptions below.

    MaxChannels=100|number
    The maximum number of channels allowed. The default is 100.
    MaxActiveChannels=MaxChannels_value
    The maximum number of channels allowed to be active at any time. The default is the value specified on the MaxChannels attribute.
    MaxInitiators=3|number
    The maximum number of initiators. The default and maximum value is 3. Any value greater than 3 will be taken as 3.
    MQIBindType=FASTPATH|SHARED
    The binding for applications:
    FASTPATH
    Channels connect using MQCONNX FASTPATH; there is no agent process.
    SHARED
    Channels connect using SHARED.
    PipeLineLength=1|number
    The maximum number of concurrent threads a channel will use. The default is 1. Any value greater than 1 is treated as 2.

    When you use pipelining, configure the queue managers at both ends of the channel to have a PipeLineLength greater than 1.

    Note: Pipelining is only effective for TCP/IP channels.
    AdoptNewMCA=NO|SVR|SDR|RCVR|CLUSRCVR|ALL|FASTPATH
    If WebSphere MQ receives a request to start a channel, but finds that an amqcrsta process already exists for the same channel, the existing process must be stopped before the new one can start. The AdoptNewMCA attribute allows you to control the ending of the existing process and the startup of a new one for a specified channel type.
    If you specify the AdoptNewMCA attribute for a given channel type, but the new channel fails to start because the channel is already running:
    1. The new channel tries to stop the previous one by requesting it to end.
    2. If the previous channel server does not respond to this request by the time the AdoptNewMCATimeout wait interval expires, the process (or the thread) for the previous channel server is ended.
    3. If the previous channel server has not ended after step 2, and after the AdoptNewMCATimeout wait interval expires for a second time, WebSphere MQ ends the channel with a CHANNEL IN USE error.
    Note: AdoptNewMCA is not supported on requester channels.

    Specify one or more values, separated by commas or blanks, from the following list:

    NO
    The AdoptNewMCA feature is not required. This is the default.
    SVR
    Adopt server channels.
    SDR
    Adopt sender channels.
    RCVR
    Adopt receiver channels.
    CLUSRCVR
    Adopt cluster receiver channels.
    ALL
    Adopt all channel types except FASTPATH channels.
    FASTPATH
    Adopt the channel if it is a FASTPATH channel. This happens only if the appropriate channel type is also specified, for example, AdoptNewMCA=RCVR,SVR,FASTPATH.
    Attention: The AdoptNewMCA attribute might behave in an unpredictable fashion with FASTPATH channels. Exercise great caution when enabling the AdoptNewMCA attribute for FASTPATH channels.
    AdoptNewMCATimeout=60|1 – 3600
    The amount of time, in seconds, that the new process waits for the old process to end. Specify a value in the range 1 – 3600. The default value is 60.
    AdoptNewMCACheck=QM|ADDRESS|NAME|ALL
    The type of checking required when enabling the AdoptNewMCA attribute. If possible, perform all three of the following checks to protect your channels from being shut down, inadvertently or maliciously. At the very least, check that the channel names match.

    Specify one or more values, separated by commas or blanks, to tell the listener process to:

    QM
    Check that the queue manager names match.
    ADDRESS
    Check the communications address. For example, the TCP/IP address.
    NAME
    Check that the channel names match.
    ALL
    Check for matching queue manager names, the communications address, and for matching channel names.

    AdoptNewMCACheck=NAME,ADDRESS is the default for FAP1, FAP2, and FAP3, while AdoptNewMCACheck=NAME,ADDRESS,QM is the default for FAP4 and later.
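
    As a reference point, a CHANNELS stanza in qm.ini that uses several of the attributes described above might look like the following; the values are illustrative only, not recommendations:

        CHANNELS:
           MaxChannels=200
           MaxActiveChannels=150
           MQIBindType=SHARED
           AdoptNewMCA=ALL
           AdoptNewMCATimeout=60
           AdoptNewMCACheck=NAME,ADDRESS,QM

    Changes to qm.ini are picked up the next time the queue manager is started.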


    Directory structure (UNIX systems)

    Saturday, August 18, 2012, 12:24 PM
    Categories: MQ
    Posted By: Karthick

    Directory structure (UNIX systems)

    Figure 1 shows the general layout of the data and log directories associated with a specific queue manager. The directories shown apply to the default installation. If you change this, the locations of the files and directories are modified accordingly. For information about the location of the product files, see one of the following:

        WebSphere MQ for AIX® Quick Beginnings
        WebSphere MQ for HP-UX Quick Beginnings
        WebSphere MQ for Solaris Quick Beginnings
        WebSphere MQ for Linux Quick Beginnings

    In Figure 1, the layout is representative of WebSphere® MQ after a queue manager has been in use for some time. The actual structure that you have depends on which operations have occurred on the queue manager.
    Figure 1. Default directory structure (UNIX systems) after a queue manager has been started

    By default, the following directories and files are located in the directory /var/mqm/qmgrs/qmname/ (where qmname is the name of the queue manager).
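
    To see this layout on a running system, you can simply list the directory; QM1 here is an illustrative queue manager name and the path assumes the default installation:

        ls -p /var/mqm/qmgrs/QM1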

    Table 1. Default content of a /var/mqm/qmgrs/qmname/ directory on UNIX systems

    amqalchk.fil     Checkpoint file containing information about the last checkpoint.

    auth/     Contained subdirectories and files associated with authority in WebSphere MQ prior to Version 6.0.

    authinfo/     Each WebSphere MQ authentication information definition is associated with a file in this directory. The file name matches the authentication information definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

    channel/     Each WebSphere MQ channel definition is associated with a file in this directory. The file name matches the channel definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

    clntconn/     Each WebSphere MQ client connection channel definition is associated with a file in this directory. The file name matches the client connection channel definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

    dce/     Used for DCE support prior to WebSphere MQ Version 6.0.

    errors/     Directory containing FFSTs, client application errors, and operator message files from newest to oldest:

        AMQERR01.LOG
        AMQERR02.LOG
        AMQERR03.LOG

    esem/     Directory containing files used internally.

    isem/     Directory containing files used internally.

    listener/     Each WebSphere MQ listener definition is associated with a file in this directory. The file name matches the listener definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

    msem/     Directory containing files used internally.

    namelist/     Each WebSphere MQ namelist definition is associated with a file in this directory. The file name matches the namelist definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

    plugcomp/     Empty directory reserved for use by installable services.

    procdef/     Each WebSphere MQ process definition is associated with a file in this directory. The file name matches the process definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

    qmanager/     

    QMANAGER
        The queue manager object.
    QMQMOBJCAT
        The object catalog containing the list of all WebSphere MQ objects; used internally.

    qm.ini     Queue manager configuration file.

    queues/     Each queue has a directory in here containing a single file called q.

    The file name matches the queue name, subject to certain restrictions; see Understanding WebSphere MQ file names.

    services/     Each WebSphere MQ service definition is associated with a file in this directory. The file name matches the service definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

    shmem/     Directory containing files used internally.

    spipe/     Used internally by channel processes.

    ssem/     Directory containing files used internally.

    ssl/     Directory for SSL key database files.

    startprm/     Directory containing temporary files used internally.

    zsocketapp/     Used internally for isolated bindings.

    zsocketEC/     Used internally for isolated bindings.
    @ipcc/     

    AMQCLCHL.TAB
        Client channel table file.

    esem/
        Directory containing files used internally.
    isem/
        Directory containing files used internally.
    msem/
        Directory containing files used internally.
    shmem/
        Directory containing files used internally.
    ssem/
        Directory containing files used internally.

    @qmpersist     

    esem/
        Directory containing files used internally.
    isem/
        Directory containing files used internally.
    msem/
        Directory containing files used internally.
    shmem/
        Directory containing files used internally.
    ssem/
        Directory containing files used internally.

    @app     

    esem/
        Directory containing files used internally.
    isem/
        Directory containing files used internally.
    msem/
        Directory containing files used internally.
    shmem/
        Directory containing files used internally.
    ssem/
        Directory containing files used internally.

    By default, the following directories and files are found in /var/mqm/log/qmname/ (where qmname is the name of the queue manager).
    The following subdirectories and files exist after you have installed WebSphere MQ, created and started a queue manager, and have been using that queue manager for some time.
    amqhlctl.lfh     Log control file.

    active/     This directory contains the log files numbered S0000000.LOG, S0000001.LOG, S0000002.LOG, and so on.

    IBM Websphere MQ Directory structure (Windows systems) - Middleware News

    Saturday, August 18, 2012, 12:10 PM
    Categories: MQ
    Posted By: Karthick

    Directory structure (Windows systems)

    Table 1 shows the directories found under the root c:\Program Files\IBM\WebSphere MQ\. If you have installed WebSphere® MQ for Windows under a different directory, the root is modified appropriately.

    Table 1. WebSphere MQ for Windows directory structure

    \bin     Contains binary files (commands and DLLs).

    \config     Contains configuration information.

    \conv     Contains files for data conversion in folder \table.

    \errors     Contains the system error log files:

        AMQERR01.LOG
        AMQERR02.LOG
        AMQERR03.LOG

    These files contain information related to errors that are not associated with a particular queue manager. AMQERR01.LOG contains the most recent error information.

    This folder also holds any FFST™ files that are produced.

    \exits     Contains channel exit programs.

    \licenses     Contains a folder for each national language. Each folder contains license information.

    \log     Contains a folder for each queue manager. The following subdirectories and files will exist for each queue manager after you have been using that queue manager for some time.

    AMQHLCTL.LFH
        Log control file.
    Active
        This directory contains the log files numbered S0000000.LOG, S0000001.LOG, S0000002.LOG, and so on.

    \qmgrs     Contains a folder for each queue manager; the contents of these folders are described in Table 2. Also contains the folder \@SYSTEM\errors.

    \tivoli     Contains the signature file used by Tivoli®.

    \tools     Contains all the WebSphere MQ sample programs. These are described in WebSphere MQ for Windows Quick Beginnings.

    \trace     Contains all trace files.

    \uninst     Contains files necessary to uninstall WebSphere MQ.

    Table 2 shows the directory structure for each queue manager in the c:\Program Files\IBM\WebSphere MQ\qmgrs\ folder. The queue manager name might have been transformed, as described in Understanding WebSphere MQ file names.
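
    To inspect this layout from a Windows command prompt, list the queue manager's folder; QM1 is an illustrative queue manager name and the path assumes the default installation root:

        dir "C:\Program Files\IBM\WebSphere MQ\qmgrs\QM1"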

    Table 2. Content of a \queue-manager-name\ folder for WebSphere MQ for Windows

    amqalchk.fil     Checkpoint file containing information about the last checkpoint.

    \authinfo     Contains a file for each authentication information object.

    \channel     Contains a file for each channel object.


    \clntconn     Contains a file for each client connection channel object.

    \errors     Contains error log files associated with the queue manager:

        AMQERR01.LOG
        AMQERR02.LOG
        AMQERR03.LOG

    AMQERR01.LOG contains the most recent error information.

    \listener     Contains a file for each listener object.

    \namelist     Contains a file for each WebSphere MQ namelist.

    \Plugcomp     Directory reserved for use by WebSphere MQ installable services.

    \Procdef     Contains a file for each WebSphere MQ process definition. Where possible, the file name matches the associated process definition name, but some characters have to be altered. There might be a directory called @MANGLED here containing process definitions with transformed or mangled names.

    \Qmanager     Contains the following files:

    Qmanager
        The queue manager object.
    QMQMOBJCAT
        The object catalogue containing the list of all WebSphere MQ objects, used internally.
        Note: If you are using a FAT system, this name is transformed and a subdirectory created containing the file with its name transformed.
    QAADMIN
        File used internally for controlling authorizations.


    \Queues     Each queue has a directory here containing a single file called Q. Where possible, the directory name matches the associated queue name but some characters have to be altered. There might be a directory called @MANGLED here containing queues with transformed or mangled names.

    \services     Contains a file for each service object.

    \ssl     Contains SSL certificate stores.

    \Startprm     Contains temporary files used internally.

    WebSphere MQ and UNIX Process Priority - Middleware News

    Saturday, August 18, 2012, 11:56 AM
    Categories: MQ
    Posted By: Karthick

    This information applies to WebSphere® MQ running on UNIX systems only.

    If you run a process in the background, that process can be given a higher nice value (and hence lower priority) by the invoking shell. This might have general WebSphere MQ performance implications. In highly-stressed situations, if there are many ready-to-run threads at a higher priority and some at a lower priority, operating system scheduling characteristics can deprive the lower priority threads of CPU time.

    It is strongly recommended that independently started processes associated with queue managers, such as runmqlsr, have the same nice values as the queue manager they are associated with. Ensure the shell does not assign a higher nice value to these background processes. For example, in ksh, use the setting "set +o bgnice" to stop ksh from raising the nice value of background processes. You can verify the nice values of running processes by examining the NI column of a "ps -efl" listing.

    It is also recommended that you start WebSphere MQ application processes with the same nice value as the queue manager. If they run with different nice values, an application thread might block a queue manager thread, or vice versa, causing performance to degrade.
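
    A short shell sketch of the checks just described, assuming ksh and an illustrative queue manager named QM1 (amqzxma0 is the queue manager's execution controller process):

        # stop ksh from raising the nice value of background jobs
        set +o bgnice
        # start the listener in the background; it now inherits the shell's nice value
        runmqlsr -m QM1 -t tcp -p 1414 &
        # compare the NI column for the queue manager and listener processes
        ps -efl | grep -E 'amqzxma0|runmqlsr'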

    Shared memory on IBM AIX - Middleware News

    Saturday, August 18, 2012, 11:54 AM
    Categories: MQ
    Posted By: Karthick

    The AIX® model for System V shared memory differs from other UNIX platforms, in that a 32-bit process can only attach to 10 WebSphere® MQ memory segments concurrently.
    A typical 32-bit WebSphere MQ application requires two WebSphere MQ memory segments attached for every connected queue manager. Every additional connected queue manager requires one further WebSphere MQ memory segment attached.
    Note: During the MQCONN operation an additional shared memory segment is required. In a threaded process where multiple threads are connecting to the same queue manager, you must ensure an additional memory segment is available for every connected queue manager.

    A 64-bit process is not limited to attaching to only 10 WebSphere MQ memory segments concurrently. A typical 64-bit WebSphere MQ application requires three WebSphere MQ memory segments for every connected queue manager. The connection of additional queue managers typically requires two further WebSphere MQ memory segments for every connected queue manager. Applications that connect to heavily loaded queue managers can require additional memory segments.

    WebSphere MQ Version 5.3 recommended the use of the environment variable EXTSHM to allow 32-bit applications to attach to more than 10 WebSphere MQ memory segments at a time. With WebSphere MQ Version 6, for 32-bit applications to benefit from the EXTSHM facility, both the queue manager and the application need to be started with EXTSHM set in the environment.
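
    A minimal sketch of that environment setup, with an illustrative queue manager name QM1 and a hypothetical 32-bit application binary myMqApp:

        # set EXTSHM before starting the queue manager
        export EXTSHM=ON
        strmqm QM1

        # the connecting 32-bit application also needs EXTSHM in its environment
        export EXTSHM=ON
        ./myMqApp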

    Clearing WebSphere MQ shared memory resources - Middleware News

    Saturday, August 18, 2012, 11:51 AM
    Categories: MQ
    Posted By: Karthick

    When a WebSphere MQ queue manager is ended normally, the queue manager removes the majority of the IPC resources that it was using. A small number of IPC resources remain, and this is by design: some of the IPC resources are intended to persist between queue manager restarts. The number of IPC resources remaining varies to some extent, depending on the operating conditions.
    There are some situations when a larger proportion of the IPC resources in use by a queue manager might persist after that queue manager has ended:

        If applications are connected to the queue manager when it stops (perhaps because the queue manager was shut down using endmqm -i or endmqm -p), the IPC resources used by these applications might not be released.
        If the queue manager ends abnormally (for example, if an operator issues the system kill command), some IPC resources might be left allocated after all queue manager processes have terminated.

    In these cases, the IPC resources are not released back to the system until you restart (strmqm) or delete (dltmqm) the queue manager.

    IPC resources allocated by WebSphere MQ are maintained automatically by the allocating queue managers. You are strongly recommended not to manipulate or remove these IPC resources manually.

    However, if it is necessary to remove IPC resources owned by mqm, follow these instructions. WebSphere MQ provides a utility to release the residual IPC resources allocated by a queue manager. This utility clears the internal queue manager state at the same time as it removes the corresponding IPC resource, so the queue manager state and the IPC resource allocation are kept in step. To free residual IPC resources, follow these steps:

        End the queue manager and all connecting applications.
        Log on as user mqm.
        Type the following:
        On Solaris, HP-UX, and Linux:

        /opt/mqm/bin/amqiclen -x -m QMGR

        On AIX:

        /usr/mqm/bin/amqiclen -x -m QMGR

        This command does not report any status. However, if some WebSphere® MQ-allocated resources could not be freed, the return code is nonzero.
        Explicitly remove any remaining IPC resources that were created by user mqm (see the sketch after this list).
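
    For steps 3 and 4, a hedged sketch: check the amqiclen return code, then list and remove any leftover System V IPC resources owned by mqm using the standard ipcs and ipcrm commands (review each ID before removing it):

        /opt/mqm/bin/amqiclen -x -m QMGR
        echo $?              # nonzero if some MQ-allocated resources could not be freed
        ipcs -a | grep mqm   # list shared memory, semaphores and message queues owned by mqm
        ipcrm -m <shmid>     # remove a shared memory segment by its ID
        ipcrm -s <semid>     # remove a semaphore set by its ID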

    Note: If a non-mqm application attempted to connect to WebSphere MQ before starting any queue managers, there might still be some WebSphere MQ IPC resources remaining even after following the above steps. These remaining resources were not created by user mqm and there is no straightforward way to reliably recognize them. However, these resources are very small and are reused when WebSphere MQ is next restarted.

    SSL CipherSpecs supported by IBM WebSphere MQ - Middleware News

    Saturday, August 18, 2012, 11:49 AM
    Categories: MQ
    Posted By: Karthick

    The following table lists the CipherSpecs supported by WebSphere MQ. Specify the CipherSpec name in the SSLCIPH property of the SVRCONN channel on the queue manager and in MQEnvironment.SSLCipherSpec on the client; an example follows the table.

    Table 1. Supported CipherSpecs

    DES_SHA_EXPORT
    DES_SHA_EXPORT1024
    NULL_MD5
    NULL_SHA
    RC2_MD5_EXPORT
    RC4_56_SHA_EXPORT1024
    RC4_MD5_US
    RC4_MD5_EXPORT
    RC4_SHA_US
    TRIPLE_DES_SHA_US
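
    As an illustration of the server-side setting (the channel name APP1.SVRCONN is hypothetical), the CipherSpec is set on the server-connection channel in MQSC and then made active with a security refresh:

        ALTER CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) SSLCIPH(TRIPLE_DES_SHA_US)
        REFRESH SECURITY TYPE(SSL)

    The client must specify the same CipherSpec, for example through MQEnvironment as noted above; otherwise the SSL handshake fails.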

    AMQ9213 2009 MQRC_CONNECTION_BROKEN on IBM MQ clients - Middleware News

    Thursday, August 2, 2012, 8:31 AM
    Categories: MQ
    Posted By: Karthick

    You have WebSphere MQ clients which connect to several different MQ servers. The MQ clients are quite frequently disconnected with rc=2009, MQRC_CONNECTION_BROKEN. The clients are able to reconnect immediately. The queue managers are running well. You see no problems when issuing 'runmqsc' commands on the server.

    Symptom

    On the MQ server side you see the following message in the queue manager's error log, AMQERR01.LOG:

    AMQ9213: A communications error for TCP/IP occurred.
    EXPLANATION: An unexpected error occurred in communications.
    ACTION: The return code from the TCP/IP(select) [TIMEOUT] 660 seconds call was 11 (X'B'). Record these values and tell the systems administrator.

    Cause

    A parameter called ClientIdle was recently added to your qm.ini file and set to 600 seconds. This caused the client connections to end after they had been idle for the specified period of time plus 60 seconds. After the connection is terminated at the server, the next attempt to send a request from the client side results in rc=2009.

    Resolving the problem

    You can either remove the ClientIdle parameter from the Channels stanza of your qm.ini files or set it to a value higher than the longest time you expect your clients to be idle between calls.
    The default path for the qm.ini file is /var/mqm/QMGRs//
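
    If you keep the parameter, a hedged example of the stanza (the 1800-second value is purely illustrative; choose a value longer than the longest period you expect clients to remain idle):

        CHANNELS:
           ClientIdle=1800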

    IBM Leap Second may cause Linux to freeze

    Tuesday, July 24, 2012, 12:01 AM
    Categories: MQ
    Posted By: Karthick

    After the leap second was added in Linux on 30 June, 2012, your WebSphere MQ queue manager has many FDC files related to resource issues or constraints, commonly reporting rc=xecP_E_NO_RESOURCE. You may also see your queue manager hang or freeze, or there may be high CPU usage. The FDCs are generated on a daily basis and may have probes of XY348010 or XC272003 from xcsCreateThread, but there could be other FDCs with different probes as well.

    Content

    On 30 June, the Network Time Protocol (NTP) daemon scheduled a leap second to occur at midnight, meaning that the final minute of the day was 61 seconds long. We have seen several problems with otherwise unexplainable high CPU usage on Linux systems caused by the leap second at the end of June.

    WebSphere MQ does not directly make calls which experience the problem, but we do use the pthreads library (NPTL), which in turn uses futexes ("fast userspace mutexes"), which can hit this problem. Busy systems running WebSphere MQ and other products are susceptible to this problem. You can read more about the problem at these links:

    Anyone else experiencing high rates of Linux server crashes during a leap second day?

    Leap Seconds in Red Hat Enterprise Linux

    Leap second: Linux can freeze

    This problem is solved by either applying Operating System (Linux) patches, resetting the date or rebooting the system. The resolution is dependent on your level of Linux and your environment. Please consult your Linux provider for details of the solution appropriate for your system.

    As a workaround, you can follow these steps:

    1. Check the Linux kernel version. In theory only 2.6.22 and newer levels should be affected:
      All: uname -r
    2. Switch to root or log in as root at the console
    3. Check to see if NTP is running:
      RHEL: service ntpd status
      SLES: /etc/init.d/ntp status
    4. If NTP is running, disable it:
      RHEL: service ntpd stop
      SLES: /etc/init.d/ntp stop
    5. Set the system clock to the current time:
      All: sntp -P no -r pool.ntp.org
      Or: ntpdate 0.us.pool.ntp.org
    6. If NTP was running, reenable it:
      RHEL: service ntpd start
      SLES: /etc/init.d/ntp start

