Why ETL Tools Might Hobble Your Real-Time Reporting & Analytics

Companies report large investments in their data warehousing (DW) or business intelligence (BI) systems, with a large portion of the software budget spent on extract, transform and load (ETL) tools that ease the population of such warehouses and provide the ability to map data to the new schemas and data structures.

Companies do need DW or BI for analytics on sales, forecasting, stock management and so much more, and they certainly wouldn’t want to run the additional analytics workloads on top of already-taxed core systems, so keeping them isolated is a solid architectural decision. Consider, however, the extended infrastructure and support costs involved with managing a new ETL layer – it’s not just the building of the jobs that demands effort. There’s the ongoing scheduling, the on-call support when jobs fail, the challenge of an ever-shrinking batch window in which such jobs can run without affecting other production workloads, and other considerations that make the initial warehouse implementation expense look like penny candy.

So not only do such systems come with extravagant cost, but to make matters worse, the vast majority of ETL jobs run overnight. That means, in most cases, the best possible scenario is that you’re looking at yesterday’s data, not today’s. All your expensive reports and analytics are at least a day older than what you require. What happened to the promised near-realtime information you were expecting?
Contrast the typical BI/DW architecture with the better option of building out your analytics and report processing using realtime routing and transformation with tools such as IBM MQ, DataPower and Integration Bus. Most of the application stacks that process this data in realtime already have all the related keys in memory – customer numbers or IDs, order numbers, account details, etc. – and are using them to create or update the core systems. Why duplicate all of that again in your BI/DW ETL layer? If you do, you’re dependent on ETL jobs going into the core systems to find what happened during that period and extracting all that data again to put it somewhere else.

Alongside this, most organizations are already running application messaging and notifications between applications. If you have all the data keys in memory, use a DW object, method, function or macro to drop the data as an application message into your messaging layer. The message can then be routed to your DW or BI environment for Transformation and Loading there – no Extraction needed – and you can get rid of your ETL tools.

Simplify your environments and lower the cost of operation. If you have multiple DW or BI environments, use the Pub/Sub capabilities of IBM MQ to distribute the message. You may be exchanging a nominal increase in CPU for eliminating problems, headaches and costs to your DW or BI.
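The pattern is simple enough to sketch. Below is a minimal in-memory illustration of the idea – the `Broker` class is a toy stand-in for the pub/sub layer IBM MQ actually provides, and the topic and field names are invented for the example:

```python
from collections import defaultdict

class Broker:
    """Toy stand-in for a pub/sub messaging layer like IBM MQ."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Every subscriber on the topic gets its own copy of the message.
        for handler in self.subscribers[topic]:
            handler(message)

broker = Broker()
warehouse, bi = [], []
broker.subscribe("orders", warehouse.append)   # DW environment
broker.subscribe("orders", bi.append)          # BI environment

# The core application already has the keys in memory: publish once,
# and every DW/BI environment receives the data. No extraction step.
broker.publish("orders", {"order_id": 1001, "customer_id": 42, "total": 99.95})
```

The point of the sketch is the shape of the flow: the producing application never knows (or cares) how many DW or BI environments are listening.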

Rethinking your strategy in terms of EAI while removing the whole process and overhead of ETL may indeed bring your whole business analytics to the near-realtime reporting and analytics you expected. Consider that your strategic payoff. Best regards in your architecture endeavors!
Image by Mark Morgan.

How Do I Support My Back-Level IBM Software (And Not Blow The Budget)?

So you’re running outdated, obsolete, out-of-support versions of some of your core systems? WebSphere MQ maybe? Or WebSphere Process Server, or DataPower… the list is endless.
Staff turnover may be your pain point – a lack of in-house skills – or maybe it’s a lack of budget to upgrade to newer, in-support systems. A lot of times it’s just a matter of application dependencies, where you can’t get something to work in QA, and you’re not ready to migrate to current versions just yet.
The problem is that management requires you to be under support. So you get a quote from IBM to support your older software, and the price tag is astronomical – not even in the same solar system as your budget.
The good news is you do have options.

Here at TxMQ, we have a mature and extensive migration practice, but we also offer 24×7 support (available as either 100% onshore, or part onshore, part offshore) for back-level IBM software and product support – all at a fraction of IBM rates.
Our support starts at under $1,000 a month and scales with your size and needs.
TxMQ has been supporting IBM customers for over 35 years. We have teams of architects, programmers, engineers and others across the US and Canada supporting a variety of enterprise needs.
Case Studies
A medical-equipment manufacturer planned to migrate from unsupported versions of MQ and Message Broker. The migration would run 6 to 9 months past end-of-support, but the quote from IBM for premium support was well beyond budget.
The manufacturer reached out to TxMQ and we were able to offer a 6-month support package, which eventually ran 9 months in total. Total cost was under $1,000 a month.
Another customer (a large health-insurance payer) faced a similar situation. This customer was running WebSphere Process Server, ILOG, Process Manager, WAS, MQ, WSRR, Tivoli Monitoring, and outdated DataPower appliances. TxMQ built and executed a comprehensive “safety net” plan to support this customer’s entire stack during a very extensive migration period.
It’s never a good idea to run unsupported software – especially in these times of highly visible compliance and regulatory requirements.
In addition to specific middleware and application support, TxMQ can also work to build a compliance framework to ensure you’re operating within IBM’s license restrictions and requirements – especially if you’re running highly virtualized environments.
Get in touch today!
(Image from Torkild Retvedt)

Rigorous Enough! MQTT For The Internet Of Things Backbone

The topic of mobile devices and mobile solutions is a hot one in the IT industry. I’ll devote a series of articles to exploring and explaining this very interesting topic. This first piece focuses on MQTT for the Internet of Things – a telemetry protocol originally developed at IBM.
MQTT provides communication in the Internet of Things – specifically, between the sensors and actuators. What makes MQTT unique is that, unlike several other communication standards, it’s rigorous enough to tolerate constrained bandwidth and poor connectivity and still provide a well-behaved message-delivery system.
Within the Internet of Things there’s a universe of devices that communicate with one another. These devices and their communications are what enable “smart devices”: devices that connect to other devices or networks via different wireless protocols and can operate, to some extent, both interactively and autonomously. It’s widely believed that these types of devices will, in very short time, outnumber all other forms of smart computing and communication, acting as useful enablers for the Internet of Things.
MQTT architecture is publish/subscribe and is designed to be open and easy to implement, with a single server capable of supporting thousands of remote clients. From a networking standpoint, MQTT uses TCP for its communication. TCP (unlike UDP) lends stability to message delivery because it’s connection-oriented. Unlike the typical HTTP header, the MQTT fixed header can be as little as 2 bytes, and those 2 bytes store all of the information required to maintain a meaningful communication: the type of message, the quality-of-service (QOS) level, and flags such as DUP and RETAIN. A variable header of additional length can be added when needed.
The quality-of-service parameters control the delivery of the message to the repository or server. The options are:

Quality-Of-Service Option Meaning
0 At most once
1 At least once
2 Exactly once

These quality-of-service options control the delivery to the destination. The first 4 bits of the first header byte identify the type of message (CONNECT, PUBLISH, SUBSCRIBE and so on). The topic of a PUBLISH message then determines which subscribers receive it. The last element is the RETAIN flag, which – like persistence in MQ – determines whether the message should be retained: it tells the broker whether the last message published on a topic should be kept for delivery to new subscribers. A separate clean-session flag, set when a client connects, goes a step further and tells the broker whether to keep that client’s subscription state between sessions.
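To make the header layout concrete, here’s a small sketch that packs the 2-byte fixed header described above. It uses the single-byte remaining-length form (payloads under 128 bytes); real MQTT encodes longer lengths in up to 4 bytes with a continuation bit.

```python
def mqtt_fixed_header(msg_type, qos=0, dup=False, retain=False, remaining_length=0):
    """Pack a 2-byte MQTT fixed header.

    Byte 1: message type in the upper 4 bits, then DUP, QoS (2 bits), RETAIN.
    Byte 2: remaining length (single-byte form only, i.e. < 128 bytes).
    """
    assert 0 <= remaining_length < 128, "longer lengths need multi-byte encoding"
    byte1 = (msg_type << 4) | (dup << 3) | (qos << 1) | retain
    return bytes([byte1, remaining_length])

PUBLISH = 3  # MQTT control-packet type for PUBLISH

# A PUBLISH with QoS 1, the RETAIN flag set, and 42 bytes to follow:
header = mqtt_fixed_header(PUBLISH, qos=1, retain=True, remaining_length=42)
```

Two bytes really do carry the whole conversation state: packet type, delivery guarantee and flags in the first byte, length in the second.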
In my next blog I’ll discuss the broker or repository for these messages. There are several repositories that can be used, including MessageSight and Mosquitto among others. The beauty of these repositories is their stability.
(Photo by University of Liverpool Faculty of Health & Life)

MQ In The Cloud: How (Im)Mature Is It?

Everyone seems to have this concept that deploying all of your stuff into the cloud is really easy – you just go up there, set up a VM, install your data and you’re done. And when I say “everyone,” I’m referring to CIOs, software salespeople, my customers and anyone else with a stake in enterprise architecture.
When I hear that, I immediately jump in and ask: Where’s the integration space in the cloud today? Remember that 18 or 20 years ago we were putting up 2- or 3-tier application stacks in datacenters. They were quite simple stacks to put up, but there was very little or no integration going on. The purpose was singular: deal with this application. If we’d had a bit more foresight, we’d have done things differently. And I’m seeing the same mistake right now in this rush to the cloud.
Really, what a lot of people are putting up in the cloud right now is nothing more than a vertical application stack with tons of horsepower they couldn’t otherwise afford. And guess what? That stack still can’t move sideways.
Remember: We’ve been working on datacenter integration since the last millennium. And our experience with datacenter integration shows that the problems of the last millennium haven’t been solved by cloud. In fact, the new website, the new help desk, the new business process and solutions like IBM MQ that grew to solve these issues have all matured within the datacenter. But the cloud’s still immature because there’s no native, proven and secure integration. What we’re doing in the cloud today is really the same thing we did 20 years ago in the datacenter.
I’m sounding the alarm, and I’m emphasizing the need for MQ, because in order to do meaningful and complicated things in the cloud, we need to address how we’re going to do secure, reliable, high-demand integration of systems across datacenters and the cloud. Is MQ a pivotal component of your cloud strategy? It’d better be, or we’ll have missed the learning opportunity of the last two decades.
How mature is the cloud? From an integration standpoint, it’s 18 to 20 years behind your own datacenter. So when you hear the now-familiar chant, “We’ve got to go to the cloud,” first ask why, then ask how, and finish with what then? Remind everyone that cloud service is generally a single stack, that significant effort and money will need to be spent on new integration solutions, and that your data is no more secure in the cloud than it is in a physical datacenter.
Want to talk more about MQ in the cloud? Send me an email and let’s get the conversation started.
(Photo by George Thomas and TxMQ)

The Need For MQ Networks: A New Understanding

If I surveyed each of you to find out the number and variety of technical platforms you have running at your company, I’d likely find that more than 75% of companies haven’t standardized on a single operating system – let alone a single technical platform. Vanilla’s such a rarity – largely due to the many needs of our 21st-century businesses – and with the growing popularity of the cloud (which makes knowing your supporting infrastructure even more difficult), companies today must decide on a communications standard between their technical platforms.
Why MQ networks? Simple. MQ gives you the ability to treat each of your data-sharing members as a black box. MQ gives you simple application decoupling by limiting the exchange of information between application endpoints to application messages. These application messages have a basic structure of “whatever” plus an MQ header carrying routing-destination and messaging-pattern information. The MQ message becomes the basis for your inter-communication protocols – a protocol an application can access no matter where the application currently runs, even when the application gets moved in the future.
This standard hands your enterprise the freedom to manage applications completely independently of one another. You can retire applications, bring up a new application, switch from one application to another or route in parallel. You can watch the volume and performance of applications in real-time, based on the enqueuing behavior of each instance, to determine whether it’s able to keep up with the upstream processes. No more guesswork! No more lost transactions! And it’s easy to immediately detect an application outage, complete with the history and how many messages didn’t get processed. This is the foundation for establishing Service Level Management.
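As a sketch of what that enqueue-based health check looks like, here’s a toy version. The sampling approach and the “depth grew at every interval” rule are illustrative assumptions for the example, not anything MQ prescribes:

```python
def is_falling_behind(depth_samples, window=3):
    """Flag a consumer that can't keep up: queue depth rose at every
    one of the last `window` sampling intervals. A toy stand-in for
    watching enqueue/dequeue behavior on a real MQ queue."""
    recent = depth_samples[-(window + 1):]
    return len(recent) == window + 1 and all(
        later > earlier for earlier, later in zip(recent, recent[1:])
    )

# Depth snapshots taken at regular intervals for one application's input queue.
healthy = [5, 3, 6, 2, 4]        # fluctuates: the consumer keeps up
backlog = [5, 8, 14, 23, 40]     # grows every interval: outage or overload
```

A monotonically growing backlog is the classic signature of a downstream instance that has stalled or can’t match the upstream rate – exactly the signal that makes the guesswork go away.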
The power of MQ networks gives you complete control over your critical business data. You can limit what goes where. You can secure it. You can turn it off. You can turn it on. It’s like the difference between in-home plumbing and a hike to the nearest water source. It’s that revolutionary for the future of application management.

Measuring MQ Capacity: How To Talk To A Bully

TxMQ senior consultant Allan Bartleywood doesn’t like bullies. Didn’t like them when he was a wee lad chasing butterflies across the arid hardscrabble of the Zimbabwean landscape. And certainly won’t tolerate them today in giant enterprise shops across the world.
Here’s the deal: Allan’s an MQ architect. Pretty much the best there is. He’s got a big peacenik streak. And he likes to stick up for his guys when a company bully starts to blame MQ.
You’ve heard it before: “MQ is the bottleneck. We need more MQ connections. It’s not my application – it’s your MQ.”
We all know it isn’t, but our hands are tied because we can’t measure the true capacity of MQ under load. So we blame the app and the bully rolls his eyes and typically wins the battle because apps are sexy and MQ is not and the app bully has been there 10 years and we’ve been there 3.
But Bartleywood’s new utility – the aptly named MQ Capacity Planner™ (MQCP) – unties our hands and allows us to stand up to the bully.
“I’m giving everyone the information we need to defend our environments – to stand up for our MQ,” Bartleywood says. “The Tivolis, the BMCs, the MQ Statistics Tools can’t speak to capacity because they can’t gin up the information to tell you what true capacity is. I absolutely love how MQCP™ allows me, and you, to turn the whole argument upside-down and ask the bully: ‘Here’s what the MQ capacity is. Does the demand you put on MQ meet what it can truly deliver? Can you actually consume connections as fast as MQ can deliver them?'”
MQCP is now available to the public for the first time. It’s simply the best tool to develop an accurate picture of the size and cost of your environment. Ask about our special demo for large enterprise shops.
Photo by Eddie~S

Managed File Transfer: Your Solution Isn't Blowing In The Wind

If FTP were a part of nature’s landscape, the process would look a lot like a dandelion gone to seed. The seeds need to go somewhere, and all it takes is a bit of wind to scatter them toward some approximate destination.
Same thing happens on computer networks every day. We take a file, we stroke a key to nudge it via FTP toward some final destination, then turn and walk away. And that’s the issue with using FTP and SFTP to send files within the enterprise: The lack of any native logging and validation. Your files are literally blowing in the wind.
The popular solution is to create custom scripts to wrap and transmit the data. That’s why there are typically a dozen or so different homegrown FTP wrappers in any large enterprise – each crafted by a different employee, contractor or consultant with a different skillset and philosophy. And even if the file transfers run smoothly within that single enterprise, the system will ultimately fail to deliver for a B2B integration. There’s also the headache of encrypting, logging and auditing financial data and personal health information using these homegrown file-transfer scripts. Yuck.
TxMQ absolutely recommends a managed system for file transfer, because a managed system:

  • Takes security and password issues out of the hands of junior associates and elevates data security
  • Enables the highest level of data encryption for transmission, including FIPS
  • Facilitates knowledge transfer and smooth handoffs within the enterprise (homegrown scripts are notoriously wonky)
  • Offers native logging, scheduling, success/failure reporting, error checking, auditing and validation
  • Integrates with other business systems to help scale and grow your business

TxMQ typically recommends and deploys IBM’s Managed File Transfer (MFT) in two different iterations: One as part of the Sterling product stack, the other as an extension on top of MQ.
When you install MFT on top of MQ, you suddenly and seamlessly have a file-transfer infrastructure with built-in check-summing, failure reporting, audit control and everything else mentioned above. All with lock-tight security and anybody-can-learn ease of use.
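The check-summing at the heart of that validation is easy to illustrate. The sketch below is a toy model of the idea, not the MFT implementation – the function names and the in-process “transfer” are invented for the example:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Digest used to prove the received bytes match the sent bytes."""
    return hashlib.sha256(data).hexdigest()

def transfer_with_checksum(payload: bytes, send):
    """Hash before sending, hash on arrival, and fail loudly on mismatch --
    instead of the silent fire-and-forget of plain FTP."""
    expected = sha256_of(payload)
    received = send(payload)              # stand-in for the actual transfer
    if sha256_of(received) != expected:
        raise IOError("checksum mismatch: transfer failed validation")
    return received

# A lossless transfer passes validation...
ok = transfer_with_checksum(b"quarterly-report.csv contents", lambda p: p)

# ...while a corrupted one is caught instead of silently accepted.
try:
    transfer_with_checksum(b"data", lambda p: p[:-1])
    caught = False
except IOError:
    caught = True
```

Plain FTP would have delivered the truncated file and walked away; the managed approach turns that silent corruption into a reportable, auditable failure.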
MFT as part of the Sterling product stack delivers all those capabilities to an integrated B2B environment, with the flexibility to quickly test and scale new projects and integrations, and in turn attract more B2B partners.
TxMQ is currently deploying this solution and developing a standardization manual for IBM MFT. Are you worried about your file transfer process? Do you need help trading files with a new business partner? The answer IS NOT blowing in the wind. Contact us today for a free and confidential scoping.
Photo by Alberto Bondoni.

MQ Capacity Planner: More Info About MQ Monitoring

TxMQ is set to debut its new MQ Capacity Planner (MQCP) utility next week at the MQ Technical Conference in Sandusky, Ohio. We’re offering two live-demo sessions with MQCP author Allan Bartleywood:

  • Monday, Sept. 29 at 11:15 a.m.
  • Wednesday, Oct. 1 at 11:15 a.m.

For those who can’t attend: MQCP is a brand-new, proprietary MQ monitoring and testing utility for MQ message flow. More specifically, MQCP is a multithreaded testing tool for IBM WebSphere MQ environments. It can test any volume of application-data messages, generated by any number of concurrent application instances, assigned to any number of queue managers, and produce highly detailed performance reports of queue times and package priorities measured against total message capacity, CPU loads and throughput times.
Results provide accurate estimates of optimal message sizes to better diagnose bottlenecks and boost overall MQ, network and application performance.
To dig a bit deeper into functionality, MQCP’s strength is in the detail. Typical MQ test scripts simply can’t offer the insight and absolute detail of MQCP, which essentially allows the user to shine a light into the dark corners of an MQ environment to reveal any cobwebs that slow down performance. And the tool is indispensable for network change control: Anytime you change out a network configuration item, run MQCP again and compare performance to the previous baseline to measure how the change truly affects MQ performance. It’s really that simple.
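To give a flavor of what a multithreaded capacity test does, here’s a toy sketch using Python’s standard library. A real tool like MQCP drives queue managers over the network, not an in-process queue, and the counts here are purely illustrative:

```python
import queue
import threading
import time

def measure_throughput(n_producers=4, msgs_per_producer=500):
    """Drive a queue from concurrent producer threads and report
    messages per second -- a toy model of a capacity test run."""
    q = queue.Queue()

    def producer():
        for i in range(msgs_per_producer):
            q.put(("payload", i))

    start = time.perf_counter()
    threads = [threading.Thread(target=producer) for _ in range(n_producers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start

    # Drain the queue to confirm every message actually arrived.
    total = n_producers * msgs_per_producer
    consumed = 0
    while not q.empty():
        q.get_nowait()
        consumed += 1
    return total, consumed, total / elapsed   # counts and msgs/sec

total, consumed, rate = measure_throughput()
```

Re-running a test like this before and after a configuration change is the baseline-comparison idea in miniature: same load, same measurement, and any throughput delta is attributable to the change.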
More details on MQCP will emerge over the coming weeks. There’s additional information on our MQCP page.
Interested in trying the MQCP? Contact TxMQ president Chuck Fried and ask about our MQCP Pilot Program: (716) 636-0070 x222, [email protected].

Details: Fix Pack for WebSphere MQ 8.0

IBM recently released its first fix pack for WebSphere MQ 8.0. The fix pack is now available on the following platforms:

  • AIX
  • Linux on x86
  • Linux on x86_64
  • Linux on zSeries 64-bit
  • Linux on POWER
  • HP-UX for Itanium
  • Solaris SPARC
  • Solaris on x86_64
  • Windows
  • IBM i

The fix pack addresses the following APARs:

IT00493         Mqxr server receives probe ID XR071002 unsubscribe failed with mqcc_failed RC=2429 mqrc_subscription_in_use AMQXR0004E
IT00497         WebSphere MQ 7.0.1: queue manager can not start after upgrade to V7.0.1.10 or V7.0.1.11
IT00960         WebSphere MQ V7 client .NET applications using get with waitinterval greater than 300 seconds fail with MQRC=2009.
IT01241         WebSphere MQ V7 client application reports sigsegv on while connecting to the queue manager using ccdt file.
IT01374         WMQ V7 java: a message may not be converted to unicode when SHARECNV=0 is set on a client channel.
IT01511         WMQ mft: new transfer request panel from the WMQ explorer does not function properly when a sfg agent is selected.
IT01607         WMQ ams: AMQ9044 log message says message was sent to system.protection.error.queue but was rolled back
IT01798         WMQ 7.5: WebSphere MQ default configuration wizard on Windows terminates with no error message.
IT01799         Dspmqrte returns 2046 ‘mqrc_options_error’ when connecting in client mode to a V7.1 queue manager running on z/OS.
IT01966         Creation of a 64-BIT Oracle switch load file for WebSphere MQ Java client fails on Linux 64.
IT01972         Queue manager trace is turned off for an application thread with multiple shared connections after an mqdisc call is issued
IT02055         FDC probe XC130004 within function rfichooseone reporting sigfpe exception, and termination of queue manager processes
IT02122         Unable to connect to WMQ mft configuration via remote queue manager using ccdt under WMQ explorer
IT02194         WebSphere mq: clwlrank and clwlprty ignored when using like parameter
IT02389         Amqsbcg retrieves incorrect message on the destination queue when API exit removed message properties
IT02422         WMQ V7.5 Java application fails with reason code 2025 (mqrc_max_conns_limit_reached) after network outages
IT02480         WebSphere MQ output from ‘dmpmqcfg’ is incorrect for runmqsc input for defining selector strings
IT02684         Data missing from WMQ V7.5 .NET application trace when tracing is repeatedly stopped and started while application is running
IT02701         MQ 7.5 setmqm fails without error when mqs.ini contains a blank line(s) at the end of the file.
IT02920         FDC with probe ID CO052000 and errorcode rrce_bad_data_received is generated by the WebSphere MQ V8 queue manager.
IT02981         WebSphere MQ V7.5: addmqinf command fails if queue manager file system is not available.
IT03124         WMQ 7.5: a svrconn channel terminates when browsing the system.admin.trace.activity.queue
IT03154         IBM MQ 8.0: AMQ5657 message is written in error log without the text AMQ5657
IT03205         Defxmitq can be set to system.cluster.transmit.queue using the crtmqm -d switch, but this should not be allowed
IT03551         WMQ V7.5: .NET application fails to connect to queue manager with RC=2232 (mqrc_unit_of_work_not_started).
IT03711         WebSphere MQ 7.5 probe ID XC333030 component xlspostevent reports major error code 16 (einval)
IT03825         WMQ V8.0: rc 2195 FDC probe ID XC130031 when using authinfo withauthtype(idpwldap)
IV40268         AMQ9636: ‘ssl distinguished name does not match peer name’ error when using ssl/tls channels with multi-instance queue managers.
IV56612         Channel moves to running state and ping completes on a sender channel with trptype(tcp) and receiver channel TRPTYPE(LU62)
IV58306         Memory leak in amqrmppa observed while queue manager is running
IV59264         ABN=0C4-00000004 in csqmcprh when using the WebSphere MQ classes for Java
IV59891         IBM MQ 7.1 or 7.5 dspmqtrc writes out incorrect time stamps when formatting 7.0.1 trace files
IV62648         Mqcmd_reset_q_stats processing ends for all queues if one queue is damaged
IV63397         WebSphere MQ queue manager is unresponsive and generated FDCs with probe IDs XC034070 and XC302005
IV64351         MQ runmqras command fails to ftp data with error message “address unresolved for server address ftp.emea.ibm.com”
PI19991         Various problems encountered in the qmgr and chin late in the final test cycle. fix needed for stability and migration
SE59149        WebSphere MQ V710: language MQ ptf is incorrectly replacing the qsys prx cmds with the real cmds instead
SE59368        After executing the wrkmqmcl command the wrkmqm command falsely shows active queue managers as inactive.
XX00217        MQ V8 explorer password field in the userid pane of the queue manager properties appears populated when no password defined
XX00222        MQ explorer 8.0 on windows: when trying to export/import, using french version, unable to select a destination file or folder
XX00223        MQ managed file transfer plugin for MQ explorer cannot connect to a coordination queue manager configured to use SSL
“It’s In Our Name!” – TxMQ is an IBM Premier Business Partner and we specialize in WebSphere MQ consulting. Initial consultations are free and communications are always confidential. Contact vice president Miles Roty for more information: (716) 636-0070 x228, [email protected].
(Photo by Kate Ter Haar, Creative Commons license.)