Today’s breaking news of the Unix “Shellshock” vulnerability instantly reminds me of the famous Auror-turned-Hogwarts-professor Alastor Moody, who preaches that the fight against the Dark Arts demands “Constant Vigilance.” Same for cybersecurity. Constant Vigilance.
Consider: The Heartbleed issue affected potentially 500,000 machines worldwide. The new Shellshock (or “Bash Bug”) could potentially affect 500 million.
Cures for the Shellshock vulnerability, at the time of this writing, are still being sorted out. It affects Unix-based operating systems such as Linux and Mac OS X, and in many configurations could allow a remote attacker to execute arbitrary code on an affected system. The weakness lies in Bash (the Bourne-Again Shell), the command interpreter used on most of these systems.
The simplicity of an attack is what scares system admins the most: The vulnerability is truly easy to exploit.
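Just how simple is it? The widely published test for the bug (CVE-2014-6271, the original Shellshock CVE) is a bash function definition smuggled into an environment variable, with an extra command appended after the closing brace. A minimal detection sketch in Python, assuming a bash binary at /bin/bash; the crafted string is the standard published check, not an actual exploit payload:

```python
import subprocess

# The canonical CVE-2014-6271 test string: a function definition followed
# by a trailing command that a vulnerable bash will execute on startup.
CRAFTED_ENV = {"x": "() { :;}; echo vulnerable"}

def bash_is_vulnerable() -> bool:
    """Return True if /bin/bash executes the command smuggled in the env var."""
    result = subprocess.run(
        ["/bin/bash", "-c", "echo this is a test"],
        env=CRAFTED_ENV,
        capture_output=True,
        text=True,
    )
    # A patched bash prints only "this is a test"; a vulnerable one
    # also prints "vulnerable" before running the -c command.
    return "vulnerable" in result.stdout

if __name__ == "__main__":
    status = "VULNERABLE" if bash_is_vulnerable() else "appears patched"
    print(f"bash {status} (CVE-2014-6271)")
```

No privileges, no special tooling: one environment variable is the whole attack surface, which is why remote vectors such as CGI scripts that copy request headers into the environment are so dangerous.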
The US Computer Emergency Readiness Team (US-CERT) is tracking the issue (see “Bourne Again Shell (Bash) Remote Code Execution Vulnerability”) and maintains a list of vendors confirmed to be exposed to the vulnerability. The list is initial and is expected to grow.
US-CERT recommends vendors’ system-specific pages for hardening and patch info, and also recommends that users and administrators review TA14-268A, Vulnerability Note VU#252743 and the Red Hat Security Blog for additional details, then refer to their respective Linux or Unix-based OS vendor(s) for an appropriate patch. A GNU Bash patch is also available for experienced users and administrators to implement.
Not sure where to start, or if your systems are affected? Contact TxMQ president Chuck Fried for an immediate and confidential consultation: (716) 636-0070 x222, [email protected].
IBM recently released its first fix pack for WebSphere MQ 8.0. The 8.0.0.1 fix pack is now available on the following platforms:
- Linux on x86
- Linux on x86_64
- Linux on zSeries 64-bit
- Linux on POWER
- HP-UX for Itanium
- Solaris SPARC
- Solaris on x86_64
- IBM i
The 8.0.0.1 fix pack addresses the following APARs:
IT00493 MQXR server receives probe ID XR071002: unsubscribe failed with MQCC_FAILED RC=2429 MQRC_SUBSCRIPTION_IN_USE AMQXR0004E
IT00497 WebSphere MQ 7.0.1: queue manager cannot start after upgrade to V7.1 or V7.5
IT00960 WebSphere MQ V7 client .NET applications using get with wait interval greater than 300 seconds fail with MQRC=2009
IT01241 WebSphere MQ V7 client application reports SIGSEGV while connecting to the queue manager using a CCDT file
IT01374 WMQ V7 Java: a message may not be converted to Unicode when SHARECNV=0 is set on a client channel
IT01511 WMQ MFT: new transfer request panel from the WMQ Explorer does not function properly when an SFG agent is selected
IT01607 WMQ AMS: AMQ9044 log message says message was sent to SYSTEM.PROTECTION.ERROR.QUEUE but was rolled back
IT01798 WMQ 7.5: WebSphere MQ default configuration wizard on Windows terminates with no error message.
IT01799 dspmqrte returns 2046 (MQRC_OPTIONS_ERROR) when connecting in client mode to a V7.1 queue manager running on z/OS
IT01966 Creation of a 64-bit Oracle switch load file for WebSphere MQ Java client fails on Linux 64
IT01972 Queue manager trace is turned off for an application thread with multiple shared connections after an MQDISC call is issued
IT02055 FDC probe XC130004 within function rfichooseone reporting SIGFPE exception, and termination of queue manager processes
IT02122 Unable to connect to WMQ MFT configuration via remote queue manager using CCDT under WMQ Explorer
IT02194 WebSphere MQ: CLWLRANK and CLWLPRTY ignored when using LIKE parameter
IT02389 amqsbcg retrieves incorrect message on the destination queue when API exit removed message properties
IT02422 WMQ V7.5 Java application fails with reason code 2025 (MQRC_MAX_CONNS_LIMIT_REACHED) after network outages
IT02480 WebSphere MQ output from ‘dmpmqcfg’ is incorrect for runmqsc input for defining selector strings
IT02684 Data missing from WMQ V7.5 .NET application trace when tracing is repeatedly stopped and started while application is running
IT02701 MQ 7.5 setmqm fails without error when mqs.ini contains a blank line(s) at the end of the file.
IT02920 FDC with probe ID CO052000 and error code rrcE_BAD_DATA_RECEIVED is generated by the WebSphere MQ V8 queue manager
IT02981 WebSphere MQ V7.5: addmqinf command fails if queue manager file system is not available.
IT03124 WMQ 7.5: a SVRCONN channel terminates when browsing the SYSTEM.ADMIN.TRACE.ACTIVITY.QUEUE
IT03154 IBM MQ 8.0: AMQ5657 message is written in error log without the text AMQ5657
IT03205 DEFXMITQ can be set to SYSTEM.CLUSTER.TRANSMIT.QUEUE using the crtmqm -d switch, but this should not be allowed
IT03551 WMQ V7.5: .NET application fails to connect to queue manager with RC=2232 (MQRC_UNIT_OF_WORK_NOT_STARTED)
IT03711 WebSphere MQ 7.5 probe ID XC333030 component xlspostevent reports major error code 16 (EINVAL)
IT03825 WMQ V8.0: RC 2195 FDC probe ID XC130031 when using AUTHINFO with AUTHTYPE(IDPWLDAP)
IV40268 AMQ9636: ‘SSL distinguished name does not match peer name’ error when using SSL/TLS channels with multi-instance queue managers
IV56612 Channel moves to running state and ping completes on a sender channel with TRPTYPE(TCP) and receiver channel TRPTYPE(LU62)
IV58306 Memory leak in amqrmppa observed while queue manager is running
IV59264 ABN=0C4-00000004 in CSQMCPRH when using the WebSphere MQ classes for Java
IV59891 IBM MQ 7.1 or 7.5 dspmqtrc writes out incorrect timestamps when formatting 7.0.1 trace files
IV62648 MQCMD_RESET_Q_STATS processing ends for all queues if one queue is damaged
IV63397 WebSphere MQ 22.214.171.124 queue manager is unresponsive and generated FDCs with probe IDs XC034070 and XC302005
IV64351 MQ runmqras command fails to ftp data with error message “address unresolved for server address ftp.emea.ibm.com”
PI19991 Various problems encountered in the qmgr and chin late in the final test cycle. Fix needed for stability and migration
SE59149 WebSphere MQ V710: language MQ PTF is incorrectly replacing the QSYS PRX CMDS with the real CMDS instead
SE59368 After executing the WRKMQMCL command, the WRKMQM command falsely shows active queue managers as inactive
XX00217 MQ V8 Explorer password field in the user ID pane of the queue manager properties appears populated when no password defined
XX00222 MQ Explorer 8.0 on Windows: when trying to export/import using the French version, unable to select a destination file or folder
XX00223 MQ Managed File Transfer plugin for MQ Explorer cannot connect to a coordination queue manager configured to use SSL
“It’s In Our Name!” – TxMQ is an IBM Premier Business Partner and we specialize in WebSphere MQ consulting. Initial consultations are free and communications are always confidential. Contact vice president Miles Roty for more information: (716) 636-0070 x228, [email protected].
(Photo by Kate Ter Haar, Creative Commons license.)
I’m reposting an interesting blog that was shared with us from a Partner organization. Please read and enjoy!
How many Intel x86 servers do you need to match the performance of a zEnterprise and at what cost for a given workload? That is the central question every IT manager has to answer.
It is a question that deserves some thought and analysis. Yet IT managers often jump to their decision based on a series of gut assumptions that on close analysis are wrong. And the resulting decision, more often than not, is for the Intel server, although an honest assessment of the data in many instances should point the other way. DancingDinosaur periodically looks at comparative assessments done by IBM. You can find a previous one, lessons from Eagle studies, here.
The first assumption is that the Intel server is cheaper. But is it? IBM benchmarked a database workload on SQL Server running on Intel x86 and compared it to DB2 on z/OS. To support 23,000 users, the Intel system required 128 database cores on four HP servers. The hardware cost $0.34 million and the software cost $1.64 million, for a 3-year TCA of $1.98 million. The DB2 system required just 5 cores, at a combined hardware/software 3-year TCA of $1.4 million.
What should have killed the Intel deal was the software cost, which has to be licensed based on the number of cores. Sure, the commodity hardware was cheap, but the cost of the database licensing drove up the Intel cost. Do IT managers wonder why they need so many Intel cores to support the same number of users they can support with far fewer z cores? Obviously many don’t.
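The arithmetic behind that comparison is worth making explicit. A quick sketch using only the figures quoted above (the per-core software cost is a derived illustration, not an IBM list price):

```python
# 3-year TCA figures from the benchmark described above (millions of USD).
intel_hw, intel_sw, intel_cores = 0.34, 1.64, 128   # four HP x86 servers
z_tca, z_cores = 1.40, 5                            # DB2 on z/OS

intel_tca = intel_hw + intel_sw          # $1.98M in total
sw_share = intel_sw / intel_tca          # licensing dominates the bill
sw_per_core = intel_sw / intel_cores     # what each licensed x86 core costs

print(f"Intel 3-yr TCA: ${intel_tca:.2f}M ({sw_share:.0%} of it software)")
print(f"Software cost per x86 core: ${sw_per_core * 1000:.1f}K")
print(f"z/OS 3-yr TCA: ${z_tca:.2f}M on just {z_cores} cores")
```

The hardware really is cheap; it is the 128 licensed cores that sink the Intel configuration, with software making up more than 80% of the 3-year total.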
Another area many IT managers overlook is I/O performance and its associated costs. This becomes particularly important as an organization deploys virtual machines. Increasing the I/O demand on an Intel system uses more of the x86 core for I/O processing, effectively reducing the number of virtual machines that can be deployed per server and raising hardware costs.
The zEnterprise handles I/O differently. It provides 4-16 dedicated system assist processors for the offloading of I/O requests and an I/O subsystem bus speed of 8 GBps.
The z also does well with z/VM for Linux guest workloads. In this case IBM tested three OLTP database production workloads (4 server nodes per cluster), each supporting 6,000 trans/sec, Oracle Enterprise Edition, and Oracle Real Application Clusters (RAC) running on 12 HP DL580 servers (192 cores). This was compared to three Oracle RAC clusters of 4 nodes per cluster, with each node as a Linux guest under z/VM. The zEC12 had 27 IFLs. Here the Oracle HP system cost $13.2 million, about twice as much as on the zEC12 ($5.7 million). Again, the biggest cost savings came from the need for fewer Oracle licenses due to fewer cores.
The z also beats Intel servers when running mixed high- and low-priority workloads on the same box. In one example, IBM compared high-priority online banking transaction workloads with low-priority discretionary workloads. The workloads running across 3 Intel servers with 40 cores each (120 cores total) cost $13.7 million, compared to z/VM on a zEC12 running 32 IFLs, which cost $5.77 million (58% less).
Another comparison demonstrates that core proliferation between Intel and the z is the killer. One large workload test required sixteen 32-way HP Superdome app production/dev/test servers and eight 48-way HP Superdome DB production/dev/test servers, for a total of 896 cores. The 5-year TCA came to $180 million. The comparable workload running on a zEC12 41-way production/dev/test system used 41 general purpose processors (38,270 MIPS) with a 5-year TCA of $111 million.
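The core counts in that example are easy to verify, and they are the whole story: 896 licensed x86 cores against 41 general-purpose z processors. A quick check using the figures quoted above:

```python
# Core proliferation in the x86 configuration described above.
app_cores = 16 * 32        # sixteen 32-way HP Superdome app servers
db_cores = 8 * 48          # eight 48-way HP Superdome DB servers
total_x86 = app_cores + db_cores
print(total_x86)           # 896 cores, matching the figure quoted

# The 5-year TCA gap those cores drive (millions of USD).
saving = (180 - 111) / 180
print(f"zEC12 comes in {saving:.0%} below the x86 5-year TCA")
```

That is roughly a 38% saving over five years, driven almost entirely by how many cores the distributed configuration needs to license.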
When you look at the things a z can do to keep concurrent operations running that Intel cannot you’d hope non-mainframe IT managers might start to worry. For example, the z handles core sparing transparently; Intel must bring the server down. The z handles microcode updates while running; Intel can update OS-level drivers but not firmware drivers. Similarly, the z handles memory and bus adapter replacements while running; Intel servers must be brought down to replace either.
Not sure what it will take for the current generation of IT managers to look beyond Intel. Maybe a new business class version of the zEC12 at a stunningly low price. You tell me.
You can see the original posting here.