Blockchain Proof of Concept, Medical Cannabis Seed to Sale Tracking

Industry: Government, Medicinal Cannabis

Overview: Utilize a Distributed Ledger Technology (DLT) or Blockchain Solution to create a trusted, immutable record for confirming registration, licensing, certification, credentialing, and payment processing. Also includes a tracking system component for chain of custody, supply chain management, and inventory management.

Challenge: Within the highly regulated medical cannabis industry, there was a need for a secure solution that offered immutability, redundancy, transaction management, and provenance in an easy-to-use format that ensured trust.

Solutions: Utilize a DLT network accessed through a web application interface.

Technologies: Hedera Hashgraph, Cross-platform mobile applications



The Medicinal and Recreational Cannabis industries are currently in the early stages of growth. Several states still don’t offer medical cannabis, and even fewer offer access to recreational cannabis. As this highly regulated industry grows, there is increasing demand for trusted technology solutions that can deliver end-to-end tracking throughout the supply chain and purchasing process, with proven trust and improved auditability.

Throughout its cycle of cultivation, extraction, testing, distribution, and retail, cannabis must be tracked and documented at each step and be easily auditable by regulatory agencies. While many existing legacy technologies may offer this capability, distributed ledgers have these features inherently, so DLT was chosen as the ideal technology behind the platform.

Some DLT platforms, most notably traditional Blockchains, still have technical shortcomings such as limited scalability and transaction speed. For this reason we turned to Hedera Hashgraph’s recently launched public network. Hedera’s platform offers the same trust and immutability as other Blockchain platforms, but with vastly superior speed, efficiency, and scalability.

The solution includes an immutable data store (the Ledger) that cryptographically secures, holds and manages Credentials of Patients, Pharmacy/Retail Centers, and Cultivators – in alignment with the verification of Regulators. Smart Contracts are able to govern and validate transactions between all actors in the system, at every junction between the parties. In addition, the solution used the same technical capabilities to track and audit Product Inventory, Transformation, and Destruction activities. There was also an option to use a cryptocurrency as a payment mechanism, which is an important aspect of the system. It enables regulators to transparently collect applicable taxes at the time the transaction is executed, and has the potential to ease the difficulties that legal marijuana businesses encounter due to existing banking regulations.
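As a rough illustration of the credential checks described above, the sketch below models a ledger of regulator-verified credentials and a rule that permits a purchase only when both parties hold valid credentials. All names and the in-memory ledger are hypothetical simplifications; the actual solution runs on Hedera Hashgraph, not in Python.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Credential:
    """Hypothetical stand-in for a credential record held on the ledger."""
    holder_id: str
    role: str                   # "patient", "retailer", or "cultivator"
    expires: date
    verified_by_regulator: bool

class CredentialLedger:
    """In-memory stand-in for the immutable credential store."""
    def __init__(self):
        self._records = {}

    def register(self, cred: Credential) -> None:
        self._records[cred.holder_id] = cred

    def is_valid(self, holder_id: str, role: str, on: date) -> bool:
        cred = self._records.get(holder_id)
        return (cred is not None
                and cred.role == role
                and cred.verified_by_regulator
                and cred.expires >= on)

def validate_purchase(ledger: CredentialLedger,
                      patient_id: str, retailer_id: str, on: date) -> bool:
    """A sale is allowed only if both parties hold valid,
    regulator-verified credentials at transaction time."""
    return (ledger.is_valid(patient_id, "patient", on)
            and ledger.is_valid(retailer_id, "retailer", on))
```

In the real system a smart contract performs this check at every junction between parties, so no single participant can bypass it.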


Patient Benefit Management System and Data Warehouse

Industry: Healthcare/Health Insurance

Overview: A Pharmacy Benefit Management company that helps organizations provide lower-cost prescription drugs to their employees was being weighed down by manual processes that had failed to scale with the explosive growth the business had experienced.

Challenge: The organization relied on many manual, spreadsheet-based processes alongside a third-party Salesforce-based system to serve its clients. These manual processes left it unable to scale operations quickly enough to meet increased demand.

Solutions: Our solution was to build a data warehouse, automated processes, and an easily accessible web application to replace cumbersome manual work. By the end of the initial project, our team had also provided a plan to continue improving operations and processes through future upgrades and modifications.

Technologies: MS SQL Server, Node.js, Angular


Summary:  Our customer, a Pharmacy Benefits Management organization, helps its customers offer lower prescription drug prices for medications not typically covered by traditional health insurance offerings. Their business had begun to grow rapidly, and it became clear that they couldn’t rely on the spreadsheets and manual processes they had put in place as they scaled up to meet demand. At the same time, they wanted to decrease their reliance on the third-party application they used to manage benefits.

TxMQ helped our customer build a data warehouse to store critical information about client companies, their enrolled employees, and their prescription drug claims. Microsoft SQL Server was chosen as the database technology to fit in with the customer’s Microsoft-centric environment.

In the initial development phase, we automated processes to ingest data from flat files and the third-party Salesforce application, laying a solid foundation on which to build scalable, automated replacements for the existing manual processes. Now a company agent can easily input, access, and report information through a web application that can quickly generate lists of covered medications, enable exception processing, invoice customers, and support reporting.

The implementation enabled the company to scale more quickly to meet client demand, and it has grown both its client base and revenue more than it ever could have with the legacy solution. Though the initial project made huge advances in capability, the overall modernization effort is ongoing, and the client continues to rely on TxMQ as a trusted partner in growing its business.

Pharmaceutical Supply Chain Track and Trace Proof-of-Concept

Industry: Medical Prescription Manufacturing, Shipping and Logistics

Overview: Utilize Blockchain to create a trusted, immutable record for tracking shipments of highly regulated substances such as opioids.

Challenge: Tracking shipments of highly regulated products is often cumbersome, with many parties involved in manual processes, which opens the process up to a high potential for inaccuracy or fraud.

Solutions: Utilize a distributed ledger on a Swirlds Hashgraph network to more accurately track shipments of highly regulated substances.

Technologies: Swirlds Hashgraph



Within the pharmaceutical supply chain, regulatory compliance can place a massive burden on all parties. Many existing supply chains rely on manual processes that drastically increase the possibility of human error and the potential for fraud. Since the introduction of Blockchain and Distributed Ledger Technology, supply chain tracking for shipping and logistics has become one of the most popular use cases.

In this proof of concept, we demonstrated that Blockchain technology is an ideal solution for tracking and compliance. Within the pharmaceutical supply chain, Blockchain can create more transparency throughout the process and provide an immutable record to ensure that safety requirements and regulations are being met. The solution was built as a demonstration of applying Swirlds Hashgraph technology to the regulated supply chain space.
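The core track-and-trace idea can be illustrated with a hash-linked chain of custody events, where each entry cryptographically commits to its predecessor so any tampering is detectable on audit. This is a hypothetical Python sketch of the concept, not the Swirlds Hashgraph implementation:

```python
import hashlib
import json

def _digest(record: dict, prev_hash: str) -> str:
    """Hash a custody record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class CustodyChain:
    """Append-only chain of custody events for a regulated shipment."""
    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "GENESIS"
        h = _digest(record, prev)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute every link; any altered record breaks the chain."""
        prev = "GENESIS"
        for record, h in self.entries:
            if _digest(record, prev) != h:
                return False
            prev = h
        return True
```

On a real distributed ledger the same property holds network-wide, because every participant holds a copy of the chain and consensus rejects inconsistent histories.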


DLT Solution for Medical Credentialing Tracking

Industry: Healthcare

Overview: The Customer wanted to integrate DLT into their existing medical credentials management platform in order to enhance trust in the information and reduce the amount of time required to verify credentials.

Challenge: Verifying medical credentials is a lengthy process in a highly regulated industry. The process can take weeks or even months, and often must be repeated in each new instance, such as when medical professionals take on additional work, as is common in the field.

Solutions: TxMQ connected the customer’s existing Salesforce application to a distributed ledger network via a REST API. The system includes an innovative mechanism for storing credential documents securely, with high availability.

Technologies: Salesforce, Swirlds Hashgraph, Java 8, CouchDB



A Healthcare Practice Management organization wanted to add distributed ledger technology to its existing credential management platform. The client was looking for a way to shorten the lengthy credential verification process by enhancing trust in the credentials stored on the platform, while ensuring they were stored in a secure, tamper-proof environment to avoid fraud and legal issues for the organization and its clients.

A DLT solution quickly became the leading technology choice to underpin the platform and deliver the desired result. The organization had evaluated several blockchain-based platforms and found that they didn’t offer the stability, scalability, and high availability that were needed. For that reason it chose the technology from Swirlds, whose Hashgraph uses a unique asynchronous Byzantine Fault Tolerant (aBFT) consensus mechanism built into a private network.
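A common pattern for this kind of design, and a plausible reading of the document-storage mechanism described above, is to keep the credential documents themselves in a replicated store (such as CouchDB) while anchoring only a cryptographic fingerprint of each document on the ledger; re-verification then reduces to a hash comparison instead of weeks of manual checks. The sketch below is illustrative only; the class names and in-memory stores are hypothetical:

```python
import hashlib

class DocumentStore:
    """Stand-in for an off-ledger document store (e.g. a replicated CouchDB)."""
    def __init__(self):
        self._docs = {}

    def put(self, doc_id: str, content: bytes) -> None:
        self._docs[doc_id] = content

    def get(self, doc_id: str) -> bytes:
        return self._docs[doc_id]

class HashLedger:
    """Stand-in for the immutable ledger: only fingerprints go on-chain."""
    def __init__(self):
        self._anchors = {}

    def anchor(self, doc_id: str, content: bytes) -> None:
        self._anchors[doc_id] = hashlib.sha256(content).hexdigest()

    def matches(self, doc_id: str, content: bytes) -> bool:
        return self._anchors.get(doc_id) == hashlib.sha256(content).hexdigest()

def verify_credential(store: DocumentStore, ledger: HashLedger, doc_id: str) -> bool:
    # A forged or altered document no longer matches the anchored fingerprint.
    return ledger.matches(doc_id, store.get(doc_id))
```

This split keeps sensitive documents off the shared ledger while still giving every relying party tamper evidence.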

Blockchain Proof of Concept for Helios Energia, a Real Estate Investment Firm

Industry: Financial Services, Real Estate

Overview: Our client was looking to build a proof of concept to demonstrate the tokenization of real estate ownership and the ability to fractionalize that ownership among multiple parties.

Challenge: Helios Energia, a Real Estate Investment Firm, was exploring the idea of an application that demonstrated the ability to fractionalize ownership of real estate to lower the barrier to entry for investment.

Solutions: Utilize a Blockchain network to tokenize property within a portfolio so that ownership can be easily verified and fractionalized among several parties.

Technologies: Angular, Ionic, Hyperledger Fabric and Composer, Node.js, IBM Cloud Kubernetes containers



Helios Energia, a Real Estate Investment Firm, approached TxMQ with an idea to fractionalize property ownership of an investment portfolio to lower the barrier to entry for potential investors. The idea is very similar to crowdfunding a product or service, but it also creates an opportunity for real estate ownership for those who may not have the immediate capital or knowledge to invest on their own.

When the Helios team came to TxMQ, we immediately identified Blockchain as a perfect match for their needs. The project required the property asset to be digitally tokenized so that ownership could be shared and verified in a secure, trustless environment. The ability to tokenize assets is one of the main selling points for Blockchain adoption, and many new use cases have been realized thanks to this unique feature.
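Conceptually, tokenized fractional ownership reduces to a ledger that divides a property into shares, records transfers, and enforces the invariant that shares are conserved. The following is a hypothetical, simplified sketch; the actual proof of concept used Hyperledger Fabric chaincode rather than Python:

```python
class PropertyToken:
    """Hypothetical sketch: one property split into a fixed number of shares."""
    def __init__(self, property_id: str, total_shares: int, issuer: str):
        self.property_id = property_id
        self.total_shares = total_shares
        self.holdings = {issuer: total_shares}  # holder -> share count

    def transfer(self, sender: str, receiver: str, shares: int) -> None:
        """Move shares between holders; rejects overdrafts."""
        if shares <= 0 or self.holdings.get(sender, 0) < shares:
            raise ValueError("insufficient shares")
        self.holdings[sender] -= shares
        self.holdings[receiver] = self.holdings.get(receiver, 0) + shares

    def ownership_fraction(self, holder: str) -> float:
        return self.holdings.get(holder, 0) / self.total_shares

    def is_consistent(self) -> bool:
        # Invariant the ledger enforces: shares are neither created nor lost.
        return sum(self.holdings.values()) == self.total_shares
```

On a Blockchain, the transfer rule runs as a smart contract, so every participant can verify ownership without trusting a central registrar.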

In this case we delivered a proof of concept showing that it is possible to tokenize assets on a Blockchain, which opens up the opportunity to fractionalize those assets for shared ownership. TxMQ delivered a Kubernetes-containerized application with a mobile interface for ease of management and visibility.


Case Study: Client experiences WebSphere Business Integrator Outage

Project Description

A regional grocery chain (CLIENT) experienced an outage in their WebSphere Business Integrator (WBI) application. WBI was no longer supported, and the application developer was no longer available.

The Situation

The CLIENT was using an older WebSphere® product, WebSphere Business Integrator (WBI), and had an application called Item Sync (developed by IBM Global) running on it. Item Sync was not working properly, and the CLIENT needed to take corrective steps.
In the application flow, work is initiated by vendors and is visible in the Vendor Transaction List (VTL) screen. The WBI application is responsible for routing the work to the next step in the approval process. This routing was not occurring, which was part of the overall problem.
The other side of the problem was that the sync application’s MQ archive queue had filled up and, because it was full, was not accepting new messages. The customer’s initial corrective step was to purge the archive queue. They then rebooted the WBI server; the MQ collectors were verified as active and in their correct state and sequence. New work was then visible in the VTL screen, but it was not reaching the next step.

The Response

One week earlier, the CLIENT had experienced problems when the archive queue became full and the workflow process stopped working. They purged the queue, deleting all of its messages. Because these messages were not persistent, they had not been logged and were therefore lost.
As part of the initial response, the CLIENT rebooted the WBI server, and the MQ collectors were verified as running in their correct state and sequence.
A conference call between TxMQ and the CLIENT was held. During this call, in addition to reviewing the issue, TxMQ’s consultant recommended restoring the backed-up configuration to determine whether workflow processing would resume. The consultant also recommended enabling the native MQ alerts, which provide an early warning in the event of problems such as a queue filling up.
The same evening, after the conference call, the CLIENT restored the configuration, which included MQ, the connectors, and WBI. The CLIENT then brought the environment back up and tested it; it came up and was operational.
The maximum depth of the archive queue was also increased so that the queue would not fill up again.
The next morning, the CLIENT and TxMQ reconvened. The remaining step was to recreate the messages deleted from the archive queue, and the customer was looking to TxMQ for help. Unfortunately, since TxMQ was unfamiliar with the application schema and process, restoring the application messages was better left to the business owners.

The Results

In this scenario, queues should not be purged in an overflow condition. The correct action would have been to copy the messages to a backup queue or to the file system so they could be replayed later.
Before steps were taken to partially initiate a fix, care should have been taken to ensure it was a complete fix that would work.
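The lesson above can be sketched in code: drain an overflowing queue to a backup store for later replay instead of purging it, and raise an alert before the queue fills. This is a generic illustration using in-memory queues, not the IBM MQ API:

```python
from collections import deque

def depth_alert(depth: int, max_depth: int, threshold: float = 0.8) -> bool:
    """Early-warning check in the spirit of native MQ depth alerts:
    flag the queue well before it is completely full."""
    return depth >= threshold * max_depth

def drain_to_backup(archive: deque, backup: deque, low_water: int) -> int:
    """Move messages to a backup store for later replay instead of
    purging (deleting) them; returns the number of messages moved."""
    moved = 0
    while len(archive) > low_water:
        backup.append(archive.popleft())  # preserve oldest messages first
        moved += 1
    return moved

def replay(backup: deque, archive: deque) -> None:
    """Replay preserved messages once the downstream problem is fixed."""
    while backup:
        archive.append(backup.popleft())
```

Because nothing is deleted, the messages survive the incident and the recovery step in this case study ("recreate the lost messages") never becomes necessary.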

Case Study: Medical Mutual Reduces Fees By $500K Through Real-Time Processing, Security

by Natalie Miller | @natalieatWIS for IBM Insight Magazine

Project Description

Ohio healthcare provider Medical Mutual wanted to take on more trading partners and more easily align with government protocols, but lacked the robust, secure infrastructure needed to support the company’s operations. “We needed to set up trading partner software and a B2B infrastructure so we could move the data inside and outside the company,” says Eleanor Danser, EDI Manager, Medical Mutual of Ohio. “The parts that we were missing were the trading partner software and the communications piece to support all the real-time protocols that are required from the ACA, which is the Affordable Care Act.”
Medical Mutual already had IBM WebSphere MQ and IBM WebSphere Message Broker, as well as IBM WebSphere Transformation Extender (TX), in its arsenal to move the company’s hundreds of daily file transfer protocol (FTP) transactions. Healthcare providers are constantly moving data and setting up connections between different industry sectors; these efforts involve securing information from providers and employers, who then send it out to clearinghouses and providers.
“It’s constantly moving data back and forth between different entities—from claims data, membership data, eligibility and benefit information, claims status—all the transactions that the healthcare industry uses today,” says Danser.
However, as the healthcare industry evolves, so does its need for streamlined, easy communication. Medical Mutual also realized that its current infrastructure didn’t provide the necessary authentication and security. It needed a partner gateway solution with batch and real-time processing that could respond within the 20-second window required to stay HIPAA compliant.
Medical Mutual sought a solution to aid with the communications piece of the transaction, or “the handshake of the data,” explains Danser. “You must build thorough and robust security and protocols around the authentication of a trading partner to be able to sign in and drop data off to our systems, or for us to be able to drop data off to their systems …. It’s the authentication and security of the process that must take place in order to move the data.”
Without the proper in-house expertise for such a project, Medical Mutual called upon TxMQ, an IBM Premier business partner and provider of systems integration, implementation, consultation and training.

Choosing a solution and assembling a team

Since Medical Mutual already had an existing infrastructure in place using IBM software, choosing an IBM solution for the missing trading partner software and the communication piece was a practical decision.
“We went out and looked at various vendor options,” explains Danser. “If we went outside of IBM we would have had to change certain parts of our infrastructure, which we really didn’t want to do. So this solution allowed us to use our existing infrastructure and simply build around it and enhance it. It was very cost effective to do that.”
In December 2012, Danser and her team received approval to move forward with IBM WebSphere DataPower B2B Appliance XB62 — a solution widely used in the healthcare industry with the built-in trading partner setup and configurations Medical Mutual wanted to implement.

TxMQ’s experience and connections set Medical Mutual up for success

The project kicked off in early 2013 with the help of four experts from TxMQ. The TxMQ team worked alongside Danser’s team of four full-time staff members from project start through the September 2013 launch of the system.
“[TxMQ] possessed the expertise we needed to support what we were trying to do,” says Danser of the TxMQ team, which consisted of an IBM WebSphere DataPower project manager, an IBM WebSphere DataPower expert, an IBM WebSphere TX translator expert, and an IBM WebSphere Message Broker expert. “They helped us with the design of the infrastructure and the layout of the project.”
The design process wrapped up in April 2013, after which implementation began. According to Danser, the TxMQ project manager was on-site in the Ohio office once a week for the first few months, and the Message Broker expert was on-site for almost four months. Other experts, such as the IBM WebSphere DataPower specialist, held weekly meetings from offsite.

Overcoming Implementation Challenges

TxMQ stayed on until the project went live in September 2013, two and a half months past Danser’s original delivery date estimate. The biggest challenge contributing to the delay was Medical Mutual’s limited experience with the technology, which required cross-training.
“We didn’t have any expertise in-house,” explains Danser, adding that the IBM WebSphere DataPower systems and the MQFTE were the steepest parts of the learning curve. “We relied a lot on the consultants to fill that gap for us until we were up to speed. We did bring in some of the MQ training from outside, but primarily it was learning on the job, so that slowed us down quite a bit. We knew how our old infrastructure worked and this was completely different.”
Another issue that contributed to delay was the need to search and identify system-platform ownership. “Laying out ownership of the pieces … took a while, given the resources and time required,” explains Danser. “It involved trying to lay out how the new infrastructure should work and then putting the processes we had in place into that new infrastructure. We knew what we wanted it to do—it was figuring out how to do that.”

We also wanted to make sure that the solution would support us for years to come, not just a year or two. By the time we were done, we were pretty confident with the decision that we made. Overall we feel the solution was appropriate for Medical Mutual.

– Eleanor Danser, EDI Manager, Medical Mutual of Ohio

And because Danser’s team wanted the system to work the same way as the existing infrastructure, heavy customization was also needed. “There was a lot of homegrown code that went into the process,” she adds.

Project realizes cost savings, increased efficiency

Since the implementation, Medical Mutual reports real cost savings and increased efficiency. As was the goal from the beginning, the company can now take on trading partners more easily. According to Danser, IBM WebSphere DataPower creates an infrastructure that greatly reduces the time needed to set up trading partner connections, including a recent connection with the Federal Exchange. Medical Mutual can now shorten testing with trading partners and move data more quickly.
“Before, it would take weeks to [take on a new partner], and now we are down to days,” says Danser.
“We’re not restricted to just the EDI transactions anymore,” she continues, explaining that Medical Mutual’s infrastructure is now not only more robust, but more flexible. “We can use XML [Management Interface] and tools like that to move data also.”
IBM WebSphere DataPower additionally moved Medical Mutual from batch processing into a real-time environment. The new system gives trading partners the ability to manage their own transactions and automates the process into a browser-based view for them, so onboarding new partners is now a faster, more scalable process.
Additionally, Medical Mutual has been able to significantly reduce transaction fees for claims data by going direct with clearinghouses or other providers. According to Danser, Medical Mutual expects an annual savings of $250,000 to $500,000 in transactional fees.

Case Study: Middleware Health Check

Project Description

An online invoicing and payment management company (client) requested a WebSphere Systems Health Check to ensure its systems were running optimally and were prepared for an expected increase in volume. The project scope called for onsite current-state data collection and analysis, followed by offsite detailed data analysis and white paper preparation and delivery. Our consultant then performed the recommended changes (WebSphere patches and version upgrades).


The Situation

The client described their systems as “running well,” but they were concerned that problems might emerge under additional load (estimated at 700%); memory consumption was a particular concern. TxMQ performed the Health Check, working with client representatives for access to the production Linux boxes, web servers, application servers, portal servers, database servers, and LDAP servers. Additional client representatives were available for application-specific questions.

The Response

Monitoring of the components had to be completed during a normal production day. The web servers, application servers, database server, directory server, and Tomcat server all needed to be monitored for several hours. The normal production day typically showed a low volume of transactions, so when monitoring began the statistics were all very normal; resource usage on the boxes was very low. Log files were extracted from the web servers, directory server, database server, deployment manager, application servers, and Tomcat (batch) server. Verbose garbage collection was enabled for one of the application servers for analysis, and a Javacore and a Heap Dump were generated on an application server to analyze threads and find potential memory leaks.
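One signal this kind of verbose-GC analysis looks for is heap usage after each collection trending steadily upward, which suggests a leak rather than normal churn. The sketch below parses a simplified log format; real IBM verbose GC output is XML, so the line format here is illustrative only:

```python
import re

# Simplified log lines of the form "gc end: used=123456K".
# (Real IBM verbosegc logs are XML; this format is a stand-in.)
LINE = re.compile(r"gc end: used=(\d+)K")

def post_gc_usage(lines):
    """Extract heap-in-use after each collection, in kilobytes."""
    return [int(m.group(1)) for line in lines if (m := LINE.search(line))]

def leak_suspected(usage, min_samples=3):
    """Heap left over after each GC should plateau in a healthy app;
    a strictly rising trend across samples suggests a memory leak."""
    if len(usage) < min_samples:
        return False
    return all(later > earlier for earlier, later in zip(usage, usage[1:]))
```

A Heap Dump then identifies which object graphs account for the growth; this check only tells you that a deeper look is warranted.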

Monitoring and analysis tool options were discussed with the client. TxMQ recommended additional IBM tools and gave a tutorial on the WebSphere Performance Viewer (built into the WebSphere Admin Console). In addition, TxMQ’s consultant sent members of the client’s development team links to download IBM’s Heap Analyzer and Log Analyzer (very useful for analyzing WAS System Logs and Heap Dumps). TxMQ’s consultants met with the client’s development and QA staff to debrief them on the data gathered.

The Results

Overall, the architecture was sound and running well, but the WebSphere software had not been patched for several years, and code-related errors filled the system logs. There were many potential memory leaks that could have caused serious response and stability problems as the application scaled to more users.

The QA team ran stress tests indicating that response times would degrade quickly as more users were added. Further, the software and web server plugin were at version 6.1.0 and vulnerable to many known security risks.

The HTTP access and error logs had no unusual or excessive entries. The http_plugin logs were very large and were rotated – making it faster and easier to access the most recent activity.

One of the web servers was using much more memory than the other although it should have been configured exactly the same. The production application servers were monitored over a three-day period and didn’t exhibit any outward signs of stress; the CPU was very low, memory was not maxed out, and the threads & pools were minimally used. There were a few configuration errors and warnings to research but the Admin Console settings were all well within norms.

Items of concern:

1) A large number of application code related errors in the logs; and
2) The memory consumption grows dramatically during the day.

These conditions can be caused by unapplied software patches and code-related issues. In a 24-hour period, Portal node 3 experienced 66 errors and 227 warnings in the SystemOut log and 1,396 errors in the SystemErr log. These errors take system resources to process, cause unpredictable application behavior, and can cause hung threads and memory leaks. The production database server was not stressed; it had plenty of available CPU, memory, and disk space, although the DB2 diagnostic log had recorded around 4,536 errors and 17,854 warnings in the previous few months. The Tivoli Directory server was likewise not stressed, with plenty of available CPU, memory, and disk space; its SystemOut log recorded 107 errors and 8 warnings in the previous year, many of which could be fixed by applying the latest Tivoli Directory Server patch. The Batch Job (Tomcat) server was also not stressed, with plenty of available CPU, memory, and disk space, but its catalina.out log file was 64 MB and contained many errors and warnings.
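Triage counts like those above can be produced by grouping log entries by their WebSphere message IDs, whose final letter distinguishes errors from warnings. A simplified sketch; the pattern is illustrative and not exhaustive, and the sample message IDs in the test are real WebSphere codes used only as examples:

```python
import re
from collections import Counter

# WebSphere-style message IDs end in E (error) or W (warning),
# e.g. "WSVR0605W". This pattern is a simplification.
MSG_ID = re.compile(r"\b([A-Z]{4,5}\d{4}[EW])\b")

def tally(lines):
    """Group log entries by message ID so the noisiest errors and
    warnings surface first for triage."""
    counts = Counter(m.group(1)
                     for line in lines
                     for m in MSG_ID.finditer(line))
    errors = {k: v for k, v in counts.items() if k.endswith("E")}
    warnings = {k: v for k, v in counts.items() if k.endswith("W")}
    return errors, warnings
```

Sorting each dictionary by count points directly at the application classes worth investigating first.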

The Health Check written analysis was delivered to the client with recommended patches and a list of application classes to investigate for errors and memory leaks. In addition, a long-term plan was outlined to upgrade to a newer version of WebSphere Application Server and migrate off WebSphere Portal Server (since its features were not needed).


Case Study: WAS Infrastructure Review

Project Description

A large financial services firm (client) grew and began experiencing debilitating outages and application slowdowns, which were blamed on their WebSphere Application Server and the entire WebSphere infrastructure. The client and IBM called TxMQ to investigate, identify the cause of the problem, and determine how to put systems, technology, and a solution in place to prevent the problems from recurring, while allowing the client to scale for continued planned growth.

The Situation

As transaction volume increased and more locations came online, the situation got worse and worse, at times shutting down access completely at some sites. TxMQ proposed a one-week onsite current-state analysis, followed by one week of configuration-change testing in the QA region and then one week of document preparation and presentation.

The Challenge

The primary function of the application is to support the financial processes of around 550 company locations; with an average of about five terminals per location, more than 2,500 concurrent connections are possible. Our client suspected the HTTP sessions were large, interfering with their ability to turn on session replication. The code running on the multiple WebSphere Application Servers was Java/JEE with a Chordiant framework (an older version no longer in support). There are 48 external web services, including CNU and Veritec, mostly batch-oriented. The Oracle database was running on an IBM p770 and was heavily utilized during slowdowns. Slowdowns could not be simulated in the pre-production environment; transaction types and workflow testing had not been automated (a future project would do just that).
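The sizing concerns above come down to simple arithmetic: the terminal count bounds concurrent connections, and session replication multiplies the memory needed to hold every active session. A sketch of that arithmetic, where the 512 KB session size is a hypothetical figure chosen only for illustration (the actual sizes were not measured at this point):

```python
def estimated_connections(locations: int, terminals_per_location: int) -> int:
    """Upper bound on concurrent connections if every terminal is active."""
    return locations * terminals_per_location

def replication_overhead_mb(sessions: int, session_kb: float,
                            replicas: int) -> float:
    """Cluster-wide memory needed to hold every active HTTP session plus
    its replicas. Large sessions are why replication can't simply be
    switched on without first shrinking them."""
    return sessions * session_kb * (1 + replicas) / 1024
```

With 550 locations at five terminals each, the connection bound is 2,750; at a half-megabyte per session with one replica, that alone would demand several gigabytes of heap across the cluster, which is consistent with the client's hesitation to enable replication.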

The Response

TxMQ’s team met with members of the client’s WebSphere production environment team. There were two IBM HTTP web servers with NetScaler as the front-end IP sprayer; the web servers ran at 3-5% CPU and were not suspected to be a bottleneck. The web servers round-robined requests to multiple WebSphere Application Servers configured identically, except that two servers hosted a few small additional applications. The application servers ran on IBM JS21 blade servers that were approximately 10 years old. Recent diagnostics had indicated 60% session overhead (garbage collection), so more memory was added to the servers (12 GB total per server) and the WebSphere JVM heap size was increased to 4 GB; some performance improvement was realized. The daily processing peaks were from 11 am to 1 pm and 4 to 5 pm, with Fridays the busiest.

Oracle 11g served as the database, with one instance for operational processing and a separate instance for DR and BI processing; the client drivers were version 10g. Our team met with client team members to discuss the AIX environment. The client was running multiple monitoring tools, so some data was available to analyze the situation at multiple levels. The blade servers were suspected to be underpowered for the application and future growth; our consultants learned of an initiative to upgrade the servers to IBM Power PS700 blades planned for the first quarter of the next year. The client also indicated that the HTTP sessions might be very large and that the database was experiencing a heavy load, possibly from untuned SQL or DB locks.

The Results

TxMQ’s team began an analysis of the environment, including working with the client to collect baseline (i.e. normal processing day) data. We observed the monitoring dashboard, checked WAS settings, and collected log files and Javacores with the verbose garbage collection option on. In addition, we collected the system topology documents.

The following day, TxMQ continued to monitor the well-performing production systems, analyzed the data collected the previous day, and met with team members about the Oracle database. TxMQ’s SME noted that the WAS database connection pools for Chordiant were using up to 40 of the 100 possible connections, which was not an indication of a saturated database. The client explained that they use Quest monitoring tools and showed the production database’s current status. The database was running on a p770 and could take as many of the 22 CPUs as needed; the client had seen up to 18 CPUs used on bad days. The client’s DBAs have a good relationship with the development group and monitor resource utilization regularly: daily reports are sent to development listing the SQL statements consuming the most CPU. No long-running SQL was observed that day; most statements completed in 2-5 seconds or less.

Our SME then met with the client’s middleware group and communicated preliminary findings. He also met with the development group, since they had insight into the Chordiant framework. (Pegasystems purchased Chordiant in 2010 and subsequently abandoned the product.) The database calls were plain SQL, not stored procedures. The application code has a mix of Hibernate (10%), Chordiant (45%), and direct JDBC (45%) database accesses. Large HTTP session sizes were noticed, and the development group agreed that the session size could likely be reduced greatly. The client’s application developers didn’t change any Chordiant code; they program to its APIs. The developers used RAD but had not run its application profiler on their application.
Rational Rose modeler provided the application architecture (business objects, business services, worker objects, service layer). In addition, the application used JSPs enhanced with Café tags.
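The Javacore and verbose-GC collection described above has a programmatic counterpart: the standard java.lang.management API exposes the same heap and collector counters that verbose GC logging records. A minimal sketch (the class and method names are ours, not the client's tooling):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.util.List;

public class GcSnapshot {
    // One-line summary of heap usage and per-collector activity --
    // the same figures a verbose GC log entry reports.
    public static String summarize() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        StringBuilder sb = new StringBuilder();
        sb.append(String.format("heap %d/%d MB",
                heap.getUsed() >> 20, heap.getCommitted() >> 20));
        List<GarbageCollectorMXBean> gcs =
                ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean gc : gcs) {
            sb.append(String.format("; %s: %d collections, %d ms",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime()));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(summarize());
    }
}
```

Polling a summary like this at intervals gives a lightweight baseline to compare against the full verbose GC logs collected on problem days.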

Applications worthy of code review/rewrite included the population of GL events into the shopping cart during POS transactions.

On the following day the slowdown event occurred: by 10:00 am all application servers were over 85% CPU usage and user response times were climbing past 10 seconds. At 10:30 am, and again at 12:15 pm, the DBAs terminated database locks and some hung sessions. The customer history table was experiencing long I/O wait times. One application server was restarted at the operations group’s request. The SIB queue filled to its 50,000-message limit (due to CPU starvation). A full set of diagnostic logs and dumps was created for analysis. By 4:30 pm the situation had somewhat stabilized.

TxMQ’s SMEs observed that the short-term problem appeared to be a lack of application server processing power, and that the long-term problems could best be addressed after dealing with the short-term one. They recommended an immediate processor upgrade, and plans were made to upgrade the processors with a backup box. Over the weekend the client moved their WebSphere application servers to a backup p770 server. When a load similar to the problem load occurred again several days later, the WAS instances ran at around 40% CPU load and user response times were markedly better than on Friday.


TxMQ presented a series of recommendations to the client’s executive board, including but not limited to:

The Chordiant framework and client application code should be the highest priority due to the number and type of errors.

The Chordiant framework has been made obsolete by Pegasystems and should be replaced with newer, SOA-friendly framework(s) such as Spring and Hibernate.

Review the application code. The error logs contain many errors that can be fixed in the code.

Reduce the size of the HTTP Session. Less than 20K is a good target.

WAS 6.1 should be upgraded to Version 7 or 8 before it goes out of support.

IBM extended support or third-party (TxMQ) support is available.

The upgrade may not be possible until the Chordiant framework is replaced.

Create a process for tracing issues to the root cause. An example would be the DB locks that had to be terminated (along with some user sessions). These issues should be followed up to determine the cause and remedial actions.

Enhance the system testing to emulate realistic user loads and workflows. This will allow more thorough application testing and give the administrators a chance to tune configurations under production-like loads.
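For the session-size recommendation above, one simple way to track progress toward a sub-20K target is to serialize candidate session attributes and count the bytes, since serialized size is roughly what session persistence and replication pay per request. A minimal sketch using standard Java serialization (the helper name and sample attribute are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SessionSizer {
    // Serialized size, in bytes, of one would-be HTTP session attribute.
    public static int sizeOf(Serializable attribute) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(attribute);
        }
        return bytes.size();
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical attribute standing in for shopping-cart state.
        String cartSnapshot = "example shopping-cart state";
        int size = sizeOf(cartSnapshot);
        System.out.println(size + " bytes"
                + (size < 20 * 1024 ? " (within 20K target)" : " (over 20K target)"));
    }
}
```

Summing this over every attribute a request leaves in the session gives a rough per-session footprint to compare against the 20K target.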


Case Study: WAS ND 7 Migration

Project Description

A reputable national life insurance company (the client) was preparing its back-end environment for the installation of the Temenos™ application on WebSphere Application Server (WAS). The client sought a provider to migrate from jBoss on a Windows 2008 platform to WAS 7 ND. With more than 1,000,000 insurance policies, customers rely on the company for life insurance, annuity and travel accident insurance.

The main issue within the migration was making jBoss classes visible in the EARs. Because the two servers use different default binding patterns, the JNDI naming had to be modified. In addition, specific open source libraries had to be included, either packaged within the application archive file or configured as shared libraries. Adding to the challenge, a persistence.xml file within the app_EJB/META-INF folder had to be modified to work with WebSphere.
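Because the default JNDI binding patterns differ between jBoss and WebSphere, hardcoded lookup strings are a common point of breakage in this kind of migration. One common remediation is to route lookups through a small indirection layer so that only one mapping changes per platform. A minimal sketch (the logical name and example bindings below are invented for illustration, not the client's actual bindings):

```java
import java.util.Map;

public class JndiNames {
    // Logical service name -> platform-specific JNDI binding.
    // Example bindings only; real values come from each server's
    // deployment descriptors / bindings files.
    private static final Map<String, String> JBOSS = Map.of(
            "PolicyService", "java:app/policy-ejb/PolicyServiceBean");
    private static final Map<String, String> WEBSPHERE = Map.of(
            "PolicyService", "ejblocal:com.example.PolicyServiceLocal");

    public static String resolve(String logicalName, boolean onWebSphere) {
        Map<String, String> table = onWebSphere ? WEBSPHERE : JBOSS;
        String bound = table.get(logicalName);
        if (bound == null) {
            throw new IllegalArgumentException("No binding for " + logicalName);
        }
        // The returned string is what would be passed to
        // new InitialContext().lookup(...)
        return bound;
    }

    public static void main(String[] args) {
        System.out.println(resolve("PolicyService", true));
    }
}
```

With the mapping externalized, application code looks up only logical names, and switching platforms touches one table instead of every caller.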

The successful migration began with a thorough analysis of the jBoss deployment artifacts via IBM RAD. It was imperative that the TxMQ consultants understood the hardware runtime environment and that tuning was completed accordingly. One of the most important components was infrastructure management, which included logging, monitoring and deployment. TxMQ consultants also created several comparable sandbox environments in which to configure, deploy, test and tune the migrated applications. A security approach also had to be chosen; after careful consideration, SiteMinder and WebSphere LTPA were selected.

The project began with the migration of 16 JVMs across environments. TxMQ consultants completed performance testing to set JVM heap sizes and tuning parameters, using SoapUI toolsets for the web services. The client chose Alfresco as the content management system, and TxMQ’s consultant wrote deployment scripts to manage content into web servers using Alfresco API calls. IBM ISA v4.0 was used for WebSphere monitoring, Javacore analysis and GC analysis, and WebSphere’s JVMs were configured for monitoring via SNMP.