The Sphere Journal Online — February 2012
By Amber Linton
Feb 8, 2012 2:00 PM CST


IN THIS ISSUE...

IBM® WebSphere® Real Time V3.0:
A Java® Runtime for Consistent and High Performance

What’s new in WebSphere MQ V7.1

A Bird’s Eye View of the Intelligent Offer Data Extraction Utility

Renewing WebSphere MQ Certificates


QUICK LINKS...

Global WebSphere Community

Previous Issue

GWC Newsletter


FROM THE PUBLISHER'S DESK...


Dear WebSpherian,

The WebSphere Community is buzzing with excitement about Impact 2012, which takes place April 29 to May 4 in Las Vegas, Nevada. The Global WebSphere Community is hosting a number of activities, including the Virtual Community Challenge, a New Member Contest, and more. You can also enter to win a complimentary Impact 2012 pass by contributing an Unconference topic.

As you prepare for Impact 2012, get a head start on your WebSphere education by reading about some key enhancements and updates to products in the WebSphere space:

With IBM WebSphere Real Time V3.0, projects and organizations now have significantly more flexibility to cost-effectively apply the technologies that best suit each application's individual needs, without splitting effort and costs across multiple Java runtimes.

New features of WebSphere MQ V7.1 deliver higher performance, greatly enhanced security, a more flexible install, and more. And renewing WebSphere MQ certificates rather than replacing them can save you money and is less disruptive to the network.

Gain an understanding of the Intelligent Offer data extraction utility, which extracts data from the WebSphere Commerce database and formats and writes it into CSV files, and learn how it can be used to deliver real-time recommendations based on shoppers’ behavior.

Best Regards,

Julia Weisman, Editor and Publisher
The Sphere Journal Online



IBM® WebSphere® Real Time V3.0
A Java® Runtime for Consistent and High Performance

Mark Stoodley, Senior Software Developer and Chief Architect, WebSphere Real Time

On October 7, 2011, IBM released version 3.0 of the WebSphere Real Time (WRT) product with a host of new features and expanded capabilities. While the origins of this product are in the hard real-time space[1], the most recent release applies its industrial-strength, hardened real-time performance features to improve performance for a broad class of applications used in the modern enterprise.

Without making any changes to your Java application, you can begin to take advantage of the WRT product's capabilities by installing it on your system and then configuring your Java command line. Applications with performance goals ranging from high throughput to extremely consistent response times, or anywhere in between, can run unmodified on top of the WebSphere Real Time foundation simply by using different command-line options to tailor its performance characteristics to each application's individual needs.
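
For instance, switching an application between the collectors described below is, at its simplest, a one-option change on the launch command. A minimal sketch (the application name is illustrative; see the WebSphere Real Time documentation for the full set of tuning options):

    java -Xgcpolicy:gencon -jar OrderService.jar
    java -Xgcpolicy:metronome -jar OrderService.jar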

Application Performance Goals

All Java applications are written to the Java programming model, which brings enormous productivity and efficiency benefits for program development. But these benefits can only truly be realized if the Java Runtime Environment (JRE, or just Java runtime) used to run the application can meet the application's performance expectations. It's not uncommon for an organization, or even a single project, to employ several different Java applications with very different performance goals.

To clarify this point about application performance goals, let's look at three examples of applications that could be written in Java, each with a different kind of performance goal. They may not exactly match the kinds of applications you work with in your organization, but you can probably think of applications that fall into most of these categories (even if you aren't using Java for them right now).

A first example is an application where raw throughput performance is the only thing that matters: batch processing, or just batch applications.  Batch applications, such as customer billing or payroll processing, are the workhorse applications of most organizations.  They require little or no human intervention and the goal is simply to complete the work as quickly as possible.

A second example is an application where low response times as well as consistent response times are important: online order processing.  For this application, there is typically a customer or another service waiting (sometimes anxiously) for the order response.  Providing a fast response to customer orders is important, but making that response time consistent for all orders is also important because no one likes to be the one made to wait.  Inconsistent response times can cause customer satisfaction problems that, in the long run, can result in lost opportunities or cause organizations to lose money.

Finally, a third example is an application where consistency is paramount: the control system for an assembly line robot. An assembly line robot may be manipulating several arms or tools in a delicately orchestrated sequence of actions that must be performed consistently every single time. Making an arm move faster or slower than it needs to could result in a poorly manufactured product, or it could damage the robot by putting two arms in the same place at the same time; if humans share the robot's environment, someone could even be injured. Real-time applications like this one require absolutely consistent performance.

Figure 1 places these three example applications into a graph loosely contrasting how fast the application must perform and how consistently the application must perform.  Also in this graph is an approximate technology curve showing the kind of trade-off that is achievable with available Java runtime technologies.  A batch application requires primarily high performance but not high consistency.  An order processing application requires a mix of performance and consistency (the graph shows one particular example that requires somewhat higher performance than it does consistency). Finally, the assembly line robot requires high consistency but doesn't have much need for really fast performance.

(Figure 1: Three applications with different needs for consistent versus fast performance)

How Does WRT Help?

Figure 2 shows the same graph as Figure 1 but indicates the primary technologies included in the IBM WebSphere Real Time product that are designed to be used for applications in each region of the graph.  These technologies are all available within WebSphere Real Time, and primarily revolve around different garbage collection (GC) policies.  Garbage collection is a fundamental process in Java whereby objects allocated by the program but no longer accessible to the program are reclaimed, but this process typically involves pausing the application while objects are reclaimed. Despite the presence of these application pauses, garbage collection enables one of the greatest values of the Java programming language: programmers need not worry about when to free objects because GC just takes care of it.

(Figure 2: WebSphere Real Time JRE technologies providing consistent or high performance, or both)

Let's look at these different technologies and how they support applications needing high performance, consistent performance, or a mix of the two.  We'll use Figure 3 to demonstrate how these different technologies interact with a Java application.

(Figure 3: Timelines contrasting garbage collector pause time impact for different policies)

Focus on High Performance

For applications where high performance is the main goal, such as batch applications, WebSphere Real Time has flat heap (optthruput) and generational (gencon) garbage collectors.  These policies are designed to minimize the amount of time that GC spends collecting the entire heap (global collections).

A flat heap GC policy completely avoids any GC work between global collections, so Java application performance can be extremely good until a global collection happens. The top timeline in Figure 3 shows how global collections with the optthruput policy occur infrequently but that when they do occur, the application can be paused for a significant amount of time (possibly as long as seconds, depending on how much live data is on the heap).

In contrast, a generational policy does perform some localized GC collections in a smaller portion of the heap called a nursery. Because the nursery area is smaller than the full heap, the GC activity between global collections tends to introduce relatively short pauses (tens to hundreds of milliseconds, depending on the size of the nursery) to the application, but eventually most Java applications will still need to perform a global collection. On the other hand, because some garbage is reclaimed whenever the nursery is collected, global collections are usually less frequent than for a flat heap policy. The WebSphere Real Time gencon GC policy also performs much of its GC work concurrently with the application, further reducing the cost of nursery GC work. The second timeline in Figure 3 shows the general behavior of WebSphere Real Time's gencon policy. The gencon policy is an extremely effective GC policy for a wide variety of Java applications where throughput matters most and consistency is a relatively smaller concern. For some applications, it can even provide higher performance than the flat heap policy. Nonetheless, global collections are still required for most Java applications, so for applications where consistent performance is important enough, the WebSphere Real Time gencon GC policy may not be the best fit.
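
Selecting and sizing these throughput-oriented collectors is done entirely on the command line; a minimal sketch, assuming illustrative heap and nursery sizes:

    # flat heap collector: no GC work between (infrequent, longer) global collections
    java -Xgcpolicy:optthruput -Xmx2g -jar BatchBilling.jar

    # generational concurrent collector: short nursery pauses, rarer global collections
    java -Xgcpolicy:gencon -Xmx2g -Xmn512m -jar BatchBilling.jar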

Focus on Consistent, High Performance

For applications where consistent and high performance are both important, WebSphere Real Time includes two effective GC policies: metronome and balanced.  These two GC policies take different approaches to garbage collection that let them take more control over the length of time an application will be paused.  They target very different goals, however, and so take very different approaches to achieve different degrees of consistency.

The metronome policy is designed from the ground up to regulate GC pauses and establish controls over how much time can be allocated to GC work versus work being performed by the Java application.  Rather than performing GC work in a single pause lasting as long as the GC cycle takes, the metronome GC policy divides its work up into many smaller chunks called “quanta”.  The two most important configuration options with this GC policy are:

  • How long can an individual GC quantum be (default 3 milliseconds)
  • What percentage of time should the application get to run when there is GC work happening (also known as utilization, and defaults to 70%)

The utilization figure acts over a time window 20 times the GC quantum length. By default, then, the Java application gets 70% of each 60ms (20 x 3ms) window, and throughout that 60ms the Java application will not be paused for more than about 3ms at a time. This operation is shown in the third timeline in Figure 3. Notice the individual pauses are very much smaller than for any of the other GC policies in the figure.
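
As a sketch, raising the utilization target trades GC headroom for application time (the -Xgc:targetUtilization option is the metronome utilization control; the heap size here is illustrative):

    # metronome collector: ~3ms default quanta, application gets 80% of each window
    java -Xgcpolicy:metronome -Xgc:targetUtilization=80 -Xmx1g -jar OrderService.jar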

Because the application is allowed to run and allocate objects throughout a metronome GC cycle, you will probably need a larger heap when using the metronome policy than for other GC policies.  With too small a heap, the application can allocate enough objects to completely consume the heap, at which point the metronome policy will be forced into a flat heap-like mode where the application must be completely paused until the GC cycle completes.  For almost all realistic applications, however, this situation can be readily avoided by specifying an adequately large heap.

The balanced GC policy addresses a different problem than the metronome collector but also falls within the category of technologies designed to provide more consistent performance.  The balanced policy targets the problem of collecting very large, multi-gigabyte heaps (ranging from several GB up to 100s of GB).  This policy also divides GC work into smaller portions, but does it differently than the metronome collector.  Rather than dividing the work into smaller time packets as metronome does, the balanced policy partitions a large heap into smaller subsections which can be collected independently (also called a region based collector).  By dividing the heap into many smaller independent regions, the balanced collector can tailor the heap based on characteristics of both the hardware as well as the nature of the different kinds of objects the application allocates.  While the balanced policy does not ensure consistency to the same degree as the metronome policy, for very large heaps, even over 100 GB, the balanced policy can maintain GC pause times within a few hundred milliseconds.  Notice, in the fourth timeline in Figure 3, that the pause times are not all the same length because this collector does not necessarily do the same amount of work in every GC pause. Nonetheless, the GC pauses consistently complete within a few hundred milliseconds even though the heap used with this collector may be orders of magnitude larger than the heaps typically used with the other collectors.
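
Selecting the balanced policy is again a command-line matter; a minimal sketch with an illustrative multi-gigabyte heap:

    # balanced region-based collector, intended for very large heaps
    java -Xgcpolicy:balanced -Xmx64g -jar AnalyticsCache.jar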

Focus on Consistent Performance

Finally, for applications that must ensure extremely high levels of consistency, such as real-time systems, WebSphere Real Time includes additional specialized support selected by the -Xrealtime command-line option when running on a real-time operating system such as Red Hat's MRG or Novell's SLERT. On such platforms, the metronome collector is used to provide extremely short GC pauses, consistently below 1ms and often as short as a few hundred microseconds. With Ahead Of Time (AOT) compiled code, pauses introduced by sharing CPU cores with compilations on the Just In Time compiler thread can be eliminated, though this typically has an overall performance impact due to the inherent complexity of generating Java-compliant AOT code[2]. Alternatively, the Just In Time compiler has also been adapted to interfere less with Java application activities, especially those that have been modified to take advantage of the features of the Real Time Specification for Java (RTSJ). On this platform, WebSphere Real Time is fully RTSJ compliant, which provides access to a variety of new Java APIs to gain stricter control over thread scheduling, implement real-time event processing, handle task deadline failures in robust ways, create Java code that can run during a GC cycle, and more (see the developerWorks real-time Java articles[1][2]). All of these facilities are provided within the Java 7 programming model.
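
To give a flavor of the RTSJ APIs, here is a minimal sketch of running work on a real-time thread at a high scheduling priority (launched with java -Xrealtime on a supported real-time OS; the class name and workload are placeholders):

    import javax.realtime.PriorityParameters;
    import javax.realtime.PriorityScheduler;
    import javax.realtime.RealtimeThread;

    public class RtWorker {
        public static void main(String[] args) throws InterruptedException {
            // Schedule the time-critical work on an RTSJ real-time thread.
            PriorityParameters prio =
                new PriorityParameters(PriorityScheduler.instance().getMaxPriority());
            RealtimeThread rt = new RealtimeThread(prio) {
                @Override public void run() {
                    // time-critical work goes here
                }
            };
            rt.start();
            rt.join();
        }
    }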

Wrap Up

When asked whether an application requires high performance or consistent performance, most of us probably wonder, “Why do I have to choose?” Like any situation where you try to achieve two goals at the same time, however, getting higher performance may mean you get less consistent performance, or vice versa. Expert tuning can help, but if you don't start with a Java runtime that's capable of meeting your application's needs, you probably won't meet all your goals, which forces a trade-off: if you can't get both, do you want high performance or consistent performance? For a large number of applications, a realistic answer is some variant of “I think I want consistent performance but I need high performance”, which has had a direct influence on how Java runtimes have evolved over time. Java performance has improved immensely over the last decade or so, and while many applications are still hungry for faster performance, the importance of consistent performance for better service quality and resource planning has also grown.

Many applications need to strike some balance between consistent and high performance, but getting that balance right across a wide range of applications requires a Java runtime designed to offer different trade-offs between the two. With the latest version 3.0 of the IBM WebSphere Real Time product, projects and organizations now have significantly more flexibility to cost-effectively apply the technologies that best suit each application's individual needs, without splitting effort and costs across multiple Java runtimes. Because IBM WebSphere Real Time is completely built on open standards (for example, it offers fully compliant implementations of both the Java 7 language specification and the Real Time Specification for Java 1.0.2), you can replace the Java runtime underneath all your Java applications with IBM WebSphere Real Time and take advantage of its features without any modification to your applications. Best of all, you can try out IBM WebSphere Real Time for free (without support) by downloading it from the developerWorks site[3][4].


[1]    Real-time Java, Part 1: Using Java code to program real-time systems. Mark Stoodley, Mike Fulton, Michael Dawson, Ryan Sciampacone, John Kacur.  http://www.ibm.com/developerworks/java/library/j-rtj1/index.html .

[2]    Real-time Java, Part 2: Comparing Compilation Techniques.  Mark Stoodley, Kenneth Ma, Marius Lut.  http://www.ibm.com/developerworks/java/library/j-rtj2/index.html .

[3]    Linux JDK download site.  Scroll down to “WebSphere Real Time V3” under “Java SE Version 7” to find the “Linux on 32-bit x86”, “Linux on 64-bit x86”, or “RT Linux on 32-bit x86” (Real-Time) installation packages. https://www.ibm.com/developerworks/java/jdk/linux/download.html .  You will need to register for a free IBM id to download the packages.

[4]    AIX JDK download site. Scroll down to look for the “WebSphere Real Time V3 on AIX” section to find “AIX for 32-bit POWER” or “AIX for 64-bit POWER”.     https://www.ibm.com/developerworks/java/jdk/aix/service.html .  You will need to register for a free IBM id to download the packages.

IBM and WebSphere are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.


Mark joined the IBM Toronto Lab in 2002 as a JIT compiler developer implementing compiler optimizations to improve the performance of Java applications.  In 2005, he began adapting the IBM Testarossa Just In Time compiler to support the Real Time Specification for Java as part of the Java runtime technology that would later be released as WebSphere Real Time V1. He is now Chief Architect of the WebSphere Real Time product leading a team on three continents.  Mark spends his spare time appreciating the world through his daughter's young eyes.

Copyright 2012 IBM Corporation

What’s New in WebSphere MQ V7.1

Leif Davidsen, Worldwide Senior Product Manager for WebSphere Messaging Portfolio including WebSphere MQ at IBM

Higher performance, greatly enhanced security, more flexible install and more

For a product such as WebSphere MQ, which has been around for nearly 20 years and is a critical part of thousands of business infrastructures, a major new release is not something that happens very often. Typically, IBM has released updates to WebSphere MQ every two to three years; the release of WebSphere MQ V7.1 in November 2011 followed the release of WebSphere MQ V7.0.1 in August 2009.

This new release of the product includes substantial changes, including brand new functions as well as updates and improvements to existing functions. Some of these capabilities will allow users to use WebSphere MQ in new and exciting ways; others will make the existing operation of WebSphere MQ smoother and simpler.

Multi-version install

Let’s start with one of the first features that a user will notice about the new release of WebSphere MQ V7.1: you can now define where you wish to install WebSphere MQ. The benefit is that, for the first time, you can have multiple versions of WebSphere MQ installed on your system at the same time. A key addition is that this capability is also supported with WebSphere MQ V7.0.1, provided you have installed the latest fix pack (V7.0.1.6). This enables users to migrate between releases much more easily, without needing to remove or replace a running WebSphere MQ installation, and it will make the transition to new releases and new fix packs less troublesome in the future, enabling all users to gain the benefits of IBM enhancements faster and earlier than before.
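
As a rough sketch of how side-by-side installations are managed from the command line (these commands are part of V7.1's multi-version support; the installation names and paths are illustrative):

    dspmqinst                                     # list the MQ installations on this machine
    setmqinst -i -n Installation2 -p /opt/mqm71   # make the V7.1 installation the primary one
    . /opt/mqm71/bin/setmqenv -n Installation2    # set up this shell to use that installation
    dspmqver                                      # confirm which version is now active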

Security enhancements

The more we rely on our IT infrastructure, the more important it becomes that it is effectively secured. The challenge of security is that the more secure we make our systems, the harder it can be both to define the security we need and to ensure that, while protecting against threats, we don’t make the security so broad that it is unusable by authorized users. WebSphere MQ V7.1 includes many new security features, including stronger encryption algorithms, more granular control of security for Queue Managers, and a new wizard to help you set up and manage your security policies. This important topic is described in more detail in the Renewing WebSphere MQ Certificates article below.

Performance enhancements and exploitation of new hardware

WebSphere MQ is at the heart of many of the world’s largest businesses, which drive a phenomenal number of messages through their systems every second of every day. Performance and message throughput have been a real focus in WebSphere MQ V7.1, to ensure that as businesses respond to changes in demand and workload, the messaging layer remains robust and delivers the required performance, exploiting the hardware improvements that are also available.

Depending on the operating system and the type of messages being sent, some incredible improvements in message throughput can be seen with WebSphere MQ V7.1. For 2KB persistent messages on Linux and Windows platforms, message throughput improves by 100 percent or more compared to the previous version. And for non-persistent 2KB messages, some Linux environments see throughput improvements of 50 to 100 percent. WebSphere MQ V7.1 is also better able to exploit the new hardware available today, scaling across multiple cores much better than previous versions. Some of the new functions (described below) also demonstrate strong scaling and performance.

Finally, for those businesses that use WebSphere MQ on the z/OS platform, there are some outstanding performance results. With WebSphere MQ V7.1 for z/OS, a single z/OS Queue Manager on a 30-processor z/OS LPAR, using 2KB non-persistent, non-transactional messages, demonstrated a rate of 1.1 million messages per second. And compared to the previous version of WebSphere MQ for z/OS (V7.0.1), which used IBM DB2 for moving large messages through the Coupling Facility, WebSphere MQ for z/OS V7.1, using Shared Message Data Sets, provides up to a 13x improvement for 64KB non-persistent shared messages.

New multicast capability

One of the brand new enhancements in WebSphere MQ V7.1 is a multicast function. Many businesses deploy solutions that use publish-subscribe, but as they distribute information over a growing network of endpoints, they need consistent delivery latency to larger numbers of endpoints, so that no endpoint receives information later than the others. To address this scenario, WebSphere MQ V7.1 includes a new multicast capability, which comes from the separate WebSphere MQ Low Latency Messaging product and provides interoperability with that product. Compared to using publish-subscribe in the previous release, WebSphere MQ V7.1 offers performance improvements of 500 percent or more for distributing 256-byte messages to multiple subscribers.
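
In MQSC terms, multicast is enabled by associating a topic with a communication information (COMMINFO) object; a minimal sketch, with illustrative object names and group address:

    DEFINE COMMINFO(DEMO.MC) GRPADDR('239.1.1.1')
    DEFINE TOPIC(PRICES) TOPICSTR('/market/prices') MCAST(ENABLED) COMMINFO(DEMO.MC)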

Integrated MQ Telemetry Transport (MQTT)

For a number of years, IBM has supported the ability to reach out to non-traditional endpoints such as sensors and other connected devices, and it has seen increased business interest in this and related areas in recent years, such as the growth in mobile computing devices and the need to connect them to business applications and infrastructure. The MQTT protocol has been around for 10 years and was previously available as a separate offering; with WebSphere MQ V7.1 it is now included in WebSphere MQ itself, enabling all users to access it, with a separate usage charge if deployed. With the move to a ‘Smarter Planet’, this protocol, designed for small, lightweight devices, is ideal for giving business applications access to all the data being generated and consumed across the business, both digital and physical.
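
The MQTT programming model is deliberately small; as an illustration only, here is what publishing a sensor reading looks like using the open-source Eclipse Paho Java client API (the package and class names below are Paho's, not necessarily those of the client shipped with WebSphere MQ, and the host, port, and topic are illustrative):

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class SensorPublisher {
        public static void main(String[] args) throws MqttException {
            // Connect to an MQTT-capable listener (host and port illustrative).
            MqttClient client = new MqttClient("tcp://mqhost:1883", "sensor-42");
            client.connect();

            // Publish a small reading with at-least-once delivery (QoS 1).
            MqttMessage reading = new MqttMessage("21.5C".getBytes());
            reading.setQos(1);
            client.publish("building/floor3/temperature", reading);

            client.disconnect();
        }
    }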

Summary

The above describes a number of the key new capabilities that have kept the WebSphere MQ development team busy over the last couple of years; the release also contains numerous other small improvements to the product. The inclusion of the new multi-version install should make it much simpler to try out WebSphere MQ, whether for the first time or to help you migrate up faster than before.

Leif Davidsen is the Product Manager for WebSphere Message Broker in IBM. Leif has worked at IBM's Hursley Lab since 1989, having joined with a degree in Computer Science from the University of London. Leif has had a varied IBM career, but for the last 12 years has been focusing on Product Management and Product Marketing for WebSphere. Most recent roles prior to his current one include Worldwide Marketing Manager for WebSphere Connectivity, Marketing lead for SOA Reuse & Connectivity and Industry focused marketing for WebSphere Connectivity.


Copyright 2012 IBM Corporation

A Bird’s Eye View of the Intelligent Offer Data Extraction Utility

Vipin Murali, Software Engineer, IBM
Dinup P Pillai, Software Engineer, IBM

The Intelligent Offer data extraction utility was developed as part of the WebSphere Commerce V7 feature pack 3 release. The utility extracts data from the WebSphere Commerce database and formats and writes it into CSV files. The extracted data is used by the Coremetrics Intelligent Offer to deliver real-time recommendations based on  shoppers’ behavior.

What is Intelligent Offer?

Intelligent Offer is a subscription-based Coremetrics solution that automatically generates personalized product recommendations on the storefront. The recommendations are based on the browsing, shopping, and purchasing behavior of individual customers. If the site is integrated with Coremetrics, it can display Intelligent Offer recommendations on the store pages.

The underlying architecture of the utility comprises WebSphere component services and the dataload framework. The utility invokes the component services to retrieve the data from the database.

What are WebSphere component services?

The component services are a set of services that follow the Service Component Architecture (SCA) paradigm, which is a set of specifications that describe a model for building applications and systems using a Service-Oriented Architecture (SOA).

What is dataload framework?

The dataload framework is a layered architecture comprising the Data Reader layer, the Business Object Builder layer, the Business Object Mediator layer, and the Data Writer layer. The data extract framework makes use of the dataload framework to transform the extracted data into one of the format options recommended by Coremetrics: the Enterprise Category Definition File (ECDF) or the Enterprise Product Content Mapping File (EPCMF).

The data extract framework can be customized in several ways to accomplish various objectives. A couple of them are described below:

1) Extend the data extract framework to perform delta extractions:

The default implementation of the data extraction utility retrieves all the catalog entries that belong to the store. Even if only a few records have been modified since the previous extraction, the full dataset is extracted from the source system each time the utility is run. The data extract framework can be extended to perform delta extraction.

What is delta extraction?

Delta extraction is a mechanism that retrieves only those records that have changed after a specified date, resulting in faster extraction of the dataset. It improves efficiency and performance by decreasing the extraction time and data volume compared to full extraction. Internally, it invokes the change history API to retrieve the change history information.

What is change history API?

WebSphere Commerce provides the change history API that returns change history information, such as the primary object ID of the changed noun, based on the search criteria. The change history feature captures the information of a catalog entry if any of the following occur:

  • A new catalog entry is created.
  • An existing catalog entry is deleted.
  • An existing catalog entry's property is modified.

The data pertaining to the object IDs returned by the change history API can then be extracted by invoking the component services, and the data extract framework can be reused to transform the business objects and export them to the CSV files.


The DataExtract utility passes the following search criteria parameters to the change history API; a hypothetical sketch follows the list:

WorkSpace: Sets the workspace name.
TaskGroup: Sets the task group name.
ObjectType: Sets the type of the noun, for example, CatalogEntry.
StoreId: Sets the store ID from which the change history is to be retrieved.
StartDate: Sets the date starting from which change history information is returned.
UIObjectNames: Lists the catalog entry types to be retrieved, for example, product, kit, and so on.
Actions: Returns the change history information based on the actions performed on the noun, for example, N (new), D (delete), U (update).
DBType: Sets the database type, which determines the paging mechanism.
BeginIndex: Sets the begin index.
PageSize: Sets the page size.
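
To make the shape of these criteria concrete, here is a purely hypothetical fragment; the ChangeHistorySearchCriteria class and its setters are invented stand-ins for illustration, not the actual WebSphere Commerce API:

    // Hypothetical illustration only: the class and setter names are invented stand-ins.
    ChangeHistorySearchCriteria criteria = new ChangeHistorySearchCriteria();
    criteria.setObjectType("CatalogEntry");            // the type of the noun
    criteria.setStoreId(10101);                        // store to query
    criteria.setStartDate("2012-01-01 00:00:00");      // return changes after this date
    criteria.setUIObjectNames(java.util.Arrays.asList("Product", "Kit"));
    criteria.setActions(java.util.Arrays.asList("N", "U", "D")); // new, update, delete
    criteria.setDBType("DB2");                         // determines the paging mechanism
    criteria.setBeginIndex(0);                         // paging: begin index
    criteria.setPageSize(500);                         // paging: page size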

The change history API fetches the change history information from the database with the help of the database connection passed by the utility. The database properties need to be configured by the user in the environment configuration file for the data extraction utility, wc-dataextract-env.xml.

The newly created data extract reader mediators invoke the change history API and component services to fetch the delta changes.

The mediator initializes the StartDate parameter configured in the business object configuration file. With the help of the primary object keys retrieved by the change history API, the catalog entry-specific data is returned based on the following actions:

Actions = (N, U): Primary object keys are passed on to the new catalog service as XPath parameters to retrieve the records.
Actions = (D): As the records pertaining to the deleted catalog entries do not exist in the database, a new response Business Object Document (BOD) is built for the deleted entries with its parent catalog group set to "Uncategorized".

In order to retrieve the catalog entry records, a new XPath expression that fetches the records by “UniqueId” is created in the query template file.

2) Customize the data extract framework to use in-memory paging:

The default implementation for the data extract solution performs "database level paging" by injecting paging indexes to the main SQL query. This query is executed for every service invocation. You can increase the database level paging size by changing the page size parameter in the data extract configuration to increase the data extract performance as needed.

However, if you have a very large dataset and want to achieve considerable performance improvements, you can use the in-memory paging custom approach for the Intelligent Offer data extraction utility. This approach runs the main SQL query only once and loads all the primary keys in the memory. Based on the specified paging parameters, the sub-list of the primary keys is passed on to the associated SQLs.

The in-memory paging mechanism is strongly recommended when the data set is huge and there are not too many customizations.


A new custom SQL Composer is written that executes the main SQL query on the first service request and thereafter executes a dummy SQL query for subsequent service requests. The custom XPath SQL key processor loads the list of primary keys returned by the main SQL query into memory. It then creates a sublist of the primary keys depending on the paging parameters.
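
The sublisting step itself is plain paging arithmetic; a minimal, generic sketch (the names here are illustrative, not the framework's own classes):

    import java.util.ArrayList;
    import java.util.List;

    public class KeyPager {
        /** Returns one page of primary keys from the full in-memory key list. */
        static List<Long> page(List<Long> allKeys, int beginIndex, int pageSize) {
            int from = Math.min(beginIndex, allKeys.size());
            int to = Math.min(beginIndex + pageSize, allKeys.size());
            // subList is a view over allKeys; copy it so the page stands alone.
            return new ArrayList<>(allKeys.subList(from, to));
        }
    }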

A new XPath expression is written that makes use of the custom SQL Composer and the XPath key processor. This service is invoked by the newly created data extract reader mediator, and the following parameters are passed along with the service request:

_cat.beginIndex: Sets the value of the record set start number.
_cat.maxItems: Sets the page size for the record set.
_cat.isFirstCall: Indicates the first service request.
_wcf.ap: Sets the access profile for the request.
_wcf.dataLanguageIds: Sets the data language ID for the request.

 Vipin Murali is a Software Developer with the WebSphere Commerce team at the IBM India Software Lab. He has three years of experience in the e-commerce field. His areas of experience include Java, J2EE, and web services.

 

Dinup P. Pillai is a Software Developer with the WebSphere Commerce team at the IBM India Software Lab. He has three years of experience in the e-commerce field. His areas of experience include Java, J2EE, and web services.

Copyright 2012 IBM Corporation

Renewing WebSphere MQ Certificates

T. Rob Wyatt, WebSphere Connectivity & Integration Product Management at IBM

Recently, a customer asked me to help resolve some issues he encountered when renewing WebSphere MQ certificates. When I asked for his step-by-step process, it turned out that he was actually replacing the certificates rather than renewing them. Subsequent discussion on the Vienna List Server revealed that this is actually the norm among the community. The procedure goes something like this:

  1. Copy the key database files
  2. Delete the QMgr's personal certificate
  3. Create a new Certificate Signing Request (CSR) for the QMgr
  4. Submit the CSR to the Certificate Authority (CA) for signing
  5. Receive the signed certificate
  6. Swap out the key database files
  7. Issue REFRESH SECURITY TYPE(SSL) on the QMgr

Although this works, new certificates cost more than renewals. Also, if the CA is reputable, they won't allow multiple certificates with the same Common Name (CN), so you will need to manage the CN field with some unique qualifier rather than simply using the QMgr name. Many people insert the year to end up with Common Names like "QMGR 2011", "QMGR 2012", and so on. This works, but it also impacts your SSLPEER filters and anything else sensitive to the CN certificate field.

A much better approach is to renew the certificate that you already have. The process goes something like this:

  1. Backup the key database files
  2. Recreate the signing request from the existing certificate
  3. Submit the CSR to the Certificate Authority (CA) for signing
  4. Receive the signed certificate
  5. Issue REFRESH SECURITY TYPE(SSL) on the QMgr

The result of a renewal is that the certificate expiration dates are extended, but otherwise it's the same certificate. All fields of the Distinguished Name remain the same, so anything using this field to filter requests, such as the channel's SSLPEER, won't be affected.

Verisign has a convenient Trial Certificate facility, which I have used to provide a step-by-step walkthrough of the process. The procedure should be about the same with any other certificate authority, including an internal one. For completeness, the first part of the tutorial shows creation of the initial CSR and loading of the CA signer certificates; the second part shows the renewal process. I have used the iKeyman GUI for simplicity, but the same steps could be scripted using any of the line commands, as sketched below.
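
For example, the GUI steps in both parts map roughly onto the following runmqckm (iKeyman command-line) invocations. This is a sketch only: option syntax can vary slightly between GSKit levels, and the paths, labels, passwords, and file names are illustrative:

    # Part 1: create the key database and obtain the initial certificate
    runmqckm -keydb -create -db key.kdb -pw passw0rd -type cms -stash
    runmqckm -certreq -create -db key.kdb -pw passw0rd -label ibmwebspheremqqm1 \
        -dn "CN=QM1,O=Example" -file qm1_csr.arm
    runmqckm -cert -add -db key.kdb -pw passw0rd -label "CA root" -file ca_root.arm -format ascii
    runmqckm -cert -add -db key.kdb -pw passw0rd -label "CA intermediate" -file ca_int.arm -format ascii
    runmqckm -cert -receive -db key.kdb -pw passw0rd -file qm1_signed.arm

    # Part 2: recreate the request from the existing certificate, then receive the renewal
    runmqckm -certreq -recreate -db key.kdb -pw passw0rd -label ibmwebspheremqqm1 \
        -target qm1_csr_renewal.arm
    runmqckm -cert -receive -db key.kdb -pw passw0rd -file qm1_renewed.arm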

Part 1: Initialize the Key Database with a signed certificate

In order to demonstrate certificate renewal, it is necessary to have first created and populated the key database (KDB). These are the steps I used to set up the key database using the Verisign Trial CA.

1. Create the KDB

Open iKeyman and click the button to create a new key database. The default type of CMS is correct for a queue manager's key database. Type in the desired path and name (including the .kdb extension) and then click OK.

2. Create a Certificate Signing Request (CSR)

From the "Personal Certificate Requests" panel, click "New…" and then fill in the fields. Make sure to fill in the label with the constant "ibmwebspheremq" followed by the queue manager name, all in lower case and without any embedded spaces.

The last field is the file name for the signing request. The output of the command is an ordinary text file containing the cryptographic representation of the signing request and certificate fields. This file both transmits the content of the request and guarantees the integrity of the information, since changing even one of the printable characters renders the file unusable.

3. Submit the CSR to the CA

Your Certificate Authority will have a process with which to submit a signing request. Verisign uses a secure web form. The screen shot below shows how the text of the CSR is pasted into the web form to submit the request.

4. Obtain the CA signer certificates

Most CAs will include links to their signer certificates when they respond to your signing request. It is sound practice to use these links so that you can verify that the root certificates are legitimate. If the CA (or anyone else, for that matter) provides signer certificates as email attachments, it is advisable to visually verify them against known good versions before adding them to your keystore.

Verisign uses two signer certificates—a root certificate and an intermediate certificate. Both must be present in the key database to validate your certificate!

In this case, they are provided via the web so it is necessary to copy the certificate and save it as a text file.

5. Load the signer certificates into the KDB

In the iKeyman GUI, use the drop-down to select the Signer Certificates panel, then select "Add…". Enter a descriptive label for the certificate and then the path and file name you used for the root certificate file, and click OK. Repeat the process for the intermediate signer certificate. If your CA uses a chain of more than two certificates, it is necessary to load all of them.

With GSKit 8 delivered in WMQ v7.1, the certificates are not validated when you load them. This makes it easier to load many certificates since you do not need to pay attention to the order. If you are using a prior version of GSKit, then it is necessary to load the certificates in the correct order, starting with the Root and working back from there.

(Note: The command-line equivalents do validate the certificates as they are added, so if you are scripting these operations it is necessary to add the CA certificates in the right order and they must be in place before your CSR can be received.)

The GUI should show the entire chain of signer certificates now.

6. Receive the signed certificate

Now that you have added the signer certificates to the key database, it is time to add the new personal certificate. Your CA will have returned your signed certificate in the email as text, as an attachment, or possibly via a link to their web site. Save the text of the signed certificate as an .arm file in your queue manager's SSL directory. Next, select "Personal Certificates" from the drop down, then click "Receive…". Enter the path and name you used to save the certificate that your CA returned to you and click OK.

After successfully receiving the personal certificate, it will appear in the Personal Certificates panel.

This completes the first part of the tutorial. At this point, the key database contains a certificate that is capable of authenticating the queue manager to remote clients or other queue managers.

The next part of the tutorial will demonstrate the renewal.

Part 2: Renewing the queue manager's certificate

This section demonstrates how to renew an expiring certificate. For purposes of this demonstration, I waited one day after receiving my trial certificate so that the screen shots will show a different date after loading the renewal. In actual usage, it would be customary to update the certificates within a month or two of their expected expiration.

1. Recreate the signing request

From the iKeyman GUI, open the Key Database and navigate to the Personal Certificates panel. Next, click "Recreate Request…" and enter the path and name to save the file. It is a good idea to include a datestamp in the file name so that you can later verify that the renewal has been processed based on visual inspection.

2. Submit the CSR to the CA

Follow the same procedure that was specified in Step #3 of Part 1 in this article. You are simply submitting the CSR to your Certificate Authority.

3. Receive the renewed certificate

Your CA will have returned your signed certificate in the email as text, as an attachment, or possibly via a link to their web site. Save the text of the signed certificate as an .arm file in your queue manager's SSL directory. Next, select "Personal Certificates" from the drop down, then click "Receive…". Enter the path and name you used to save the certificate that your CA returned to you and click OK. Be sure to specify the file for the renewal and not the original signing request!

4. Verify the renewal

You will be able to verify that the renewal has been processed by viewing the certificate. Note that the Distinguished Name fields— such as Common Name, Organization, and so forth— will all remain the same. However the dates, fingerprint, serial number, and hash will all be different.

(Before and after images of the certificate were captured to indicate the changes.)

Summary

Renewing your certificates is usually much less expensive than replacing them, and it is less disruptive to the network. With all the money you save, you can throw a party! Better yet, spend the money on a copy of WebSphere MQ Advanced Message Security and start exploring the wonderful world of message-level security.

The same techniques shown here apply to any of the certificates used by the WebSphere MQ family of products, including WebSphere MQ Advanced Message Security. You might use a different tool or a Java keystore instead of a KDB, but the principles and the processes are the same. Just be aware that anything that looks at the certificate's serial number or fingerprint to validate it will notice the change.

Finally, don't forget to issue a REFRESH SECURITY TYPE(SSL) after making any changes to the queue manager's key database.
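
A quick way to do that from the command line (the queue manager name is illustrative):

    echo "REFRESH SECURITY TYPE(SSL)" | runmqsc QM1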

T.Rob is a WebSphere MQ security specialist working to make your messaging network safer for mission critical applications.  He is a regular speaker at the IMPACT and WebSphere Technical Conference, blogs about WebSphere MQ security at http://t-rob.net and http://websphereusergroup.org and is the author of the developerWorks Mission:Messaging column.  He is also a WebSphere Connectivity & Integration product manager at IBM and would love to visit you in person to help you get the most out of your WebSphere messaging.


Copyright 2012 IBM Corporation