IBM White Paper: Scaling SaaS Adoption in Large Enterprises

Because Software-as-a-Service is simple and inexpensive to get started with, departments within an organization have often been able to introduce online services with almost trivial ease.

Use of a software-as-a-service application like Salesforce may well begin under the radar of corporate IT governance: a department adopts it to meet an urgent need for a customer relationship management (CRM) tool, then quickly realizes the broader capabilities of the overall Salesforce platform. Because the entry cost is low, SaaS can often be deployed without centralized capex funding, and the cost model fits within a department’s operating budget.

Of course, decentralization was and is one of the biggest benefits of SaaS, and that’s not a bad thing. However, decentralization also produces multiple, unconnected stores and sites of data, isolated both from each other and from the different business units using the software.

In a CRM scenario, for example, sales, marketing, and customer service could all be pulling data for the same customer from different databases. They don’t get the unified customer-centric view they need, and the customer doesn’t get the holistic experience they expect.


Want the full Scaling SaaS Adoption In Large Enterprises white paper by IBM?


White Paper: What is a Runbook?

TxMQ provides highly skilled remote technical support and tailored managed-services solutions for our customers’ middleware.

Our coverage spans databases (Oracle™, IBM DB2™, Microsoft™ SQL Server, MySQL™), Java application servers (WebSphere Application Server™, Liberty™, JBoss™), messaging middleware (IBM MQ™), and transformation technologies (IIB™, WTX™), among other technologies supported under the TxMQ MSP, RTS (remote technical support), and remote systems management (RSM) programs.

White Paper: SmartGridSB Solution Delivers Real-Time Visibility & Actionable Insight For Utilities Sector

Utilities companies have widely embraced smart meter technology, but often fail to implement networking tools and analytics to harvest actionable data from this new smart meter network.
Smart Grid Smart Business (SmartGridSB) delivers a real-time solution that empowers energy and utility companies to automate proactive, cost-saving decisions about infrastructure and operations. These automations are based on the integration of smart meter networks with analytic software solutions, which drive event correlation and decision management. The SmartGridSB solution provides visibility, insight, and analytic awareness for smart grid operations. It streamlines information within operational processes to identify patterns and analyze data, enabling you to choose the best response and implement the correct action in less time.

Want the full SmartGridSB Solutions white paper?

Whitepaper: 2014 BPM Products – Mature But Not Equal

Project Description
Business Process Management (BPM) is a hot topic in 2014. With the escalating cost of employee benefits, coupled with an economy that remains flat or barely growing, finding new ways to reduce costs and personnel effort is a primary goal for many of our customers.
BPM software has reached a level of maturity in the marketplace that encompasses many options and a wide variety of capabilities for customers to choose between. Gartner’s Magic Quadrant scatters a number of top vendor products that share many attributes but vary in integration capability and in the customization they require.
When evaluating BPM products, tailoring business requirements around adoption and the life expectancy of the product is imperative to establishing the true cost of ownership. In addition, any cost discussion must include ongoing support costs, including training and retaining/recruiting skilled workers to support the products. Building applications from scratch (the do-it-yourself approach) may lower up-front costs, but such solutions generally realize less than 75% of the functional business-process requirements. They also regularly fail to eliminate the overhead associated with the roughly 25% of exceptions and exception handling (those not cost-effective to automate).
Do-it-yourself approaches are likely to require reworking, or even complete rewriting, contributing to failure to meet the business objective of cost reduction through business-process automation. When a business moves from zero automation to 75% automation, and there is certainty that the business-process needs will not change within five years, the implementation may succeed. More often, however, the cost to customize or replace the system within those same five years will cause it to fall short of ROI expectations.
Furthermore, operational support costs and upgrades to the supporting infrastructure are rarely considered in ROI calculations, especially when memory management or performance problems in a new application drive hardware and software costs higher than initially predicted. That is before the labor costs of supporting non-industry-standard solutions are even factored into the equation. Such costs are sure to compromise ROI, as the business experiences incremental increases in IT cost without an accompanying increase in perceived value. What is unique about BPM initiatives is the non-functional requirements they impose outside the business process itself, in support of the business’s functional requirements. These requirements must be identified and prioritized as part of the up-front project costs and product-selection criteria.
Prior to evaluating any BPM product, BPM initiatives must have the following documentation fully prepared:
Business Requirements
Business requirements should be outlined, demonstrating a detailed process analysis with use cases representing all required functionality. The business units must prioritize use cases individually, with a plan to measure acceptance for each use case. These requirements should also include costs for the way the business operates, based on current and historical data.
Business requirements are basic to any application initiative, but BPM doesn’t stop there. Rarely are today’s businesses able to predict changes in acquisitions, mergers, compliance, business climate, or other regulatory impacts on business processes. This leads to the need for flexibility in modeling and making changes to the BPM product as a basic supposition, characterizing and differentiating BPM products from other applications that simply automate business functions.
BPM Capabilities
BPM capabilities typically include what is termed BPEL, or Business Process Execution Language, which BPM tools use as input for the decisions that determine the routing or processing path within a business transaction. When automating a business process, flexibility must be kept at the forefront of ongoing change management for each fully automated business process, in keeping with the concept of 80% automation and 20% exception handling, the latter routed via BPM technologies. Such flexibility requires an agile business-process model, which many BPM products do not provide (buyer beware). That leaves customers performing regular “rewrites” of their code and building costly “workarounds,” which can outweigh the initial benefits of automation if business rules are not externalized from the business modules.
Business Process Management involves channeling 80% of the work through automated mechanisms, with the remaining 20% of exceptions handled through the BPM product. Those exceptions are identified by business-process status and escalated to humans who can act on them (approve, reject, call a customer, call a supplier, etc.). BPM rules effectively regulate how long a pending action can remain in a given status before another escalation, notification, or exception fires to initiate a further management condition.
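The escalation behavior described above can be sketched as a simple timeout rule. This is an illustrative sketch only: the status names, timeout values, and the next_action helper are hypothetical, not part of any particular BPM product.

```python
from datetime import datetime, timedelta

# Hypothetical status timeouts: how long a pending action may remain
# in a given status before another management condition is initiated.
STATUS_TIMEOUTS = {
    "PENDING_APPROVAL": timedelta(hours=4),
    "ESCALATED": timedelta(hours=1),
}

def next_action(status, entered_at, now):
    """Return the management condition to trigger, or None if the
    pending action is still within its allowed window."""
    limit = STATUS_TIMEOUTS.get(status)
    if limit is None or now - entered_at <= limit:
        return None  # no rule for this status, or still within the window
    # Timed out: escalate to a human who can act on the exception
    return "ESCALATE" if status == "PENDING_APPROVAL" else "NOTIFY_SUPERVISOR"
```

A transaction that has sat in PENDING_APPROVAL for five hours would trigger an ESCALATE condition, routing it to a person who can approve it, reject it, or call the customer.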
Identifying a business event or hung transaction, automating it according to accepted routing rules, and managing the resulting events are part and parcel of the inherent BPM functions. These “standard” BPM functions are typically handled by state management inside the BPM database. That database functionality can degrade quickly when transactions begin stacking up, so it is imperative to validate the scalability of the underlying database technology. BPM databases behave differently from typical applications: they manage “in-flight” status information, and upon completion of the business process the data is quickly archived and moved off the system. This requires different optimization mechanisms, which should be discussed with your DBA in light of your BPM transaction volumes.
The business process automation being implemented should give heavy consideration to ongoing management and feedback to the business unit about the number of transactions processed straight through, versus exception handling, both historically (year over year) and real-time. Each business process owner will want automated reports, or possibly ad-hoc reporting capabilities, to know exact measurements and statistics about each process, likely down to specific characteristics of each transaction.
The best BPM solutions will provide mechanisms that allow for a variety of reporting capabilities, but should make reporting available through standard database queries, or by exporting data to a data warehouse where enterprise reporting tools and capability are readily available. This is an accepted approach, given that until such automation is in place, business owners rarely have detailed requirements around their reporting needs. Be certain that your product selection provides for robust and detailed reporting by date range input, and in real-time (preferably using configurable dashboards that can be customized to each business process owner). Each dashboard can then be given a URL or logon that the business process owner can use to access individual information and reports.
Many of today’s BPM products provide a “modeling” capability with “deployment” to a run-time environment. This approach delivers flexibility so that such models can be changed, tested, accepted, and deployed to a training environment on a regular application release basis. This enables business employees to regularly adopt change and process improvement. Such tools require multiple environments to enable flexibility. The days of application environments consisting of a single dev and prod instance are long gone. It’s far more complex now. Flexible architectures require dev, test, stage, train and production environments with additional needs for high volume transactional and integrated environments, in addition to performance testing and DR instances, insuring against loss of revenue in light of business or technical interruption.
For each business process slated for automation (such determination must be based on current costs, current transaction counts and growth predictions, and/or SLA information in terms of time to complete), inputs must be evaluated for the BPM platform selection, infrastructure sizing and costs. Such sizing and platform selection should be based on solid business transaction volume projections for each use case. If the idea is to “grow” the infrastructure with the business transaction volume growth, then costs must also include the systems management software, personnel, and mechanisms for enabling a performance and capacity management process, as well as monitoring software and monitoring automation.
Some proactive work should be done to determine the “as-is” situational analysis, and to develop the envisioned “to-be” or target system that will address the needs or concerns with the current “as-is”. Once agreement has been attained on the vision going forward, the development of a gap analysis is necessary to identify the effort and costs to go from the current situation to the proposed vision. During this process, many alternatives will be identified with varying cost scenarios as well as timeline, and resource impacts. Formalizing an “Impact Statement” process may be highly valuable in identifying the costs, timeline, and adoption associated with the various ways to address gaps.
BPM product selection should always begin with a good understanding of your BPM needs. Vendors are eager to showcase their individual product capabilities and give customer references. Check out BPM trade shows, articles, websites, and request product demos. Within every IT shop, there are experienced and valued technicians with experience to help identify what went well and what didn’t go well with past BPM initiatives. Whether or not past BPM initiatives met their ROI or business goals can be difficult information to obtain, but well worth the research. Businesses should ask vendors to provide the cost savings basis for each customer, to effectively identify opportunities for realizing cost reduction with any new BPM initiative. Many vendors have developed costing formulas that can help businesses build an effective business use case scenario to drive a BPM initiative that might otherwise flounder.
In contemplating a BPM approach, consideration should be given to the product selection based on best-of-breed vendor products. Best-of-breed products typically involve a higher level of investment as the intent of these products is to integrate them as cornerstone technologies within an enterprise, with the expectation that critical business processes will be running on them. BPM tools are expensive, generally requiring a change in IT culture for adoption and integration of the BPM services into your SDLC, with centralized BPM expert(s) for ongoing support and maintenance of BPM suites.
If “best-of-breed” is outside your financial reach (i.e., approved budget), re-evaluate your business use cases and areas of savings. Building out the many BPM mechanisms for exception handling and for management of state in a database, with scalability and management capability, is a difficult, lengthy, high-risk development initiative. Open source BPM products carry a higher risk of stretching the adoption timeline and could zero out your business case ROI with increased support costs, in effect exchanging business personnel for the more expensive IT personnel required for ongoing development and support of a customized open source solution. Of even more concern, as business-process transaction volume grows, you alone are responsible for the scalability and performance of the open source solution, which can turn into a 24x7x365 support burden.

White Paper: z/Linux Performance Configuration & Tuning for IBM® WebSphere® Compendium

TxMQ Staff Consultants Contributed To This Write-Up

Project Description


All guide sources come from well-documented IBM or IBM partner’s reference material. The reason for this document is simple: Take all relevant sources and put their salient points into a single, comprehensive document for reliable set-up and tuning of a z/Linux environment.
The ultimate point is to create a checklist and share best practices.


Assemble a team that can address all aspects of the performance of the software stack. The following skills are usually required:

  • Overall Project Coordinator
  • VM Systems Programmer. This person sets up all the Linux guests in VM.
  • Linux Administrator. This person installs and configures Linux.
  • WebSphere Administrator.
  • Lead Application Programmer. This person can answer questions about what the application does and how it does it.
  • Network Administrator.


TIP: Start from the outside and work inward toward the application.
The environment surrounding the application causes about half of the potential performance problems. The other half is caused by the application itself.
Start with the environment that the application runs in. This eliminates potential causes of performance problems. You can then work toward the application in the following manner.
1. LPAR. Things to look at: number of IFLs, weight, caps, total real memory, memory allocation between cstore and xstore.
2. VM. Things to look at: communications configuration between Linux guests and other LPARs, paging space, share settings.
3. Linux. Things to look at: virtual memory size, virtual CPUs, VM share and limits, swapping, swap file size, kernel tuning.
4. WebSphere. Things to look at: JVM heap size, connection pool sizes, use of caches, WebSphere application performance characteristics.


Defining LPAR resource allocation for CPU, memory, DASD, and network connections


Adjust depending on the environment (prod, test, etc…)
With VSWITCH, the routing function is handled directly by z/VM’s Control Program instead of a TCP/IP service machine. This can eliminate most of the CPU time used by the VM router it replaces, resulting in a significant reduction in total system CPU time.
– When a TCP/IP VM router was replaced with VSWITCH, CPU-time decreases ranging from 19% to 33% were observed.
– When a Linux router was replaced with VSWITCH, decreases ranging from 46% to 70% were observed.
NOTE: The security of VSWITCH is not equal to a dedicated firewall or an external router, so when high security is required of the routing function, consider using those instead of VSWITCH.


The VSWITCH configuration resulted in higher throughput than the Guest LAN feature.


Guest LAN is ring based. It can be much simpler to configure and maintain.


Tips for Avoiding Eligible Lists:

  • Set each Linux machine’s virtual-storage size only as large as it needs to be to let the desired Linux application(s) run. This suppresses the Linux guest’s tendency to use its entire address space for file cache. If the Linux file system is hit largely by reads, you can make up for this with minidisk cache (MDC). Otherwise, turn MDC off, because it induces about an 11-percent instruction-path-length penalty on writes, consumes storage for the cached data, and pays off little because the read fraction isn’t high enough.
  • Use whole volumes for VM paging instead of fractional volumes. In other words, never mix paging I/O and non-paging I/O on the same pack.
  • Implement a one-to-one relationship between paging CHPIDs and paging volumes.
  • Spread the paging volumes over as many DASD control units as possible.
  • Turn on the paging control units’ non-volatile storage (NVS) or DASD fast write (DASDFW) if they support it (applies to RAID devices).
  • Provide at least twice as much DASD paging space (CP QUERY ALLOC PAGE) as the sum of the Linux guests’ virtual storage sizes.
  • Having at least one paging volume per Linux guest is a great thing. If the Linux guest is using synchronous page faults, exactly one volume per Linux guest will be enough. If the guest is using asynchronous page faults, more than one per guest might be appropriate; one per active Linux application will serve the purpose.
  • In queued direct I/O (QDIO)-intensive environments, plan that 1.25MB per idling real QDIO adapter will be consumed out of CP below-2GB free storage, for CP control blocks (shadow queues). If the adapter is being driven very hard, this number could rise to as much as 40MB per adapter. This tends to hit the below-2 GB storage pretty hard. CP prefers to resolve below-2GB contention by using expanded storage (xstore).
  • Consider configuring at least 2GB to 3GB of xstore to back-up the below-2GB central storage, even if central storage is otherwise large.
  • Try CP SET RESERVE to favor storage use toward specific Linux guests.


Memory Management and Allocation
Add 200-256MB for WebSphere overhead per guest.
Configure 70% of real memory as central storage (cstore).
Configure 30% of real memory as expanded storage (xstore). Without xstore VM must page directly to DASD, which is much slower than paging to xstore.
CP SET RESERVED. Consider reserving some memory pages for one particular Linux VM, at the expense of all others. This can be done with a z/VM command (CP SET RESERVED).
If unsure, a good guess at VM size is the z/VM scheduler’s assessment of the Linux guest’s working set size.
Use whole volumes for VM paging instead of fractional volumes. In other words, never mix paging I/O and non-paging I/O on the same pack.
Implement a one-to-one relationship between paging CHPIDs and paging volumes.
Spread the paging volumes over as many DASD control units as you can.
If the paging control units support NVS or DASDFW, turn them on (applies to RAID devices).
CP QUERY ALLOC PAGE. Provide at least twice as much DASD paging space as the sum of the Linux guests’ virtual storage sizes.
Having at least one paging volume per Linux guest is beneficial. If the Linux guest is using
synchronous page faults, exactly one volume per Linux guest will be enough. If the guest is using asynchronous page faults, more than one per guest may be appropriate; one volume per active Linux application is realistic.
In memory over-commitment tests with z/VM, the over-commitment ratio was increased to as much as 3.2:1 without any throughput degradation.
Cooperative Memory Management (CMM1) and Collaborative Memory Management (CMM2) both regulate Linux memory requirements under z/VM. Both methods improve performance when z/VM hits a system memory constraint.
Utilizing Named Saved Segments (NSS), the z/VM hypervisor makes operating system code in shared real memory pages available to z/VM guest virtual machines. With this update, multiple Red Hat Enterprise Linux guest operating systems on the z/VM can boot from the NSS and be run from a single copy of the Linux kernel in memory. (BZ#474646)
Expanded storage for VM. Here are a few thoughts on why:
While configuring some xstore may result in more paging, it often results in more consistent or better response time. The paging algorithms in VM evolved around having a hierarchy of paging devices. Expanded storage is the high speed paging device and DASD the slower one where block paging is completed. This means expanded storage can act as a buffer for more active users as they switch slightly between working sets. These more active users do not compete with users coming from a completely paged out scenario.
The central versus expanded storage issue is related to the different implementations of LRU algorithms used between stealing from central storage and expanded storage. In short, for real storage, you use a reference bit, which gets reset fairly often. While in expanded storage, you have the luxury of having an exact timestamp of a block’s last use. This allows you to do a better job of selecting pages to page out to DASD.
In environments that page to DASD, the potential exists for transactions (as determined by CP) to break up with the paging I/O. This can cause a real-storage-only configuration to look like the throughput rate is lower.
Also configure some expanded storage, if needed, for guest testing. OS/390, VM, and Linux can all use expanded storage.
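A minimal sizing sketch of the guidelines above (70% cstore, 30% xstore, plus 200-256 MB of WebSphere overhead per guest); the function name and example figures are hypothetical illustrations, not part of any IBM tooling:

```python
def size_lpar_memory(real_memory_mb, guest_heap_mb, websphere_overhead_mb=256):
    """Split real memory 70% cstore / 30% xstore and add per-guest
    WebSphere overhead, per the guidelines above."""
    cstore = int(real_memory_mb * 0.70)   # central storage
    xstore = real_memory_mb - cstore      # expanded storage (fast paging buffer)
    # Each Linux guest needs its heap plus 200-256 MB of WebSphere overhead
    guest_virtual_mb = [heap + websphere_overhead_mb for heap in guest_heap_mb]
    return cstore, xstore, guest_virtual_mb

# A 16 GB LPAR hosting two WebSphere guests with 1 GB and 2 GB heaps
cstore, xstore, guests = size_lpar_memory(16384, [1024, 2048])
```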


Linux is a long-running virtual machine and VM, by default, is set up for short-running guests. This means that the following changes to the VM scheduler settings should be made. Linux is a Q3 virtual machine, so changing the third value in these commands is most important. Include these settings in the profile exec for the operator machine or autolog1 machine:
set srm storbuf=300,200,200
set srm ldubuf=100,100,100


YES. One of the most common mistakes with new VM customers is ignoring paging space. The VM system, as shipped, contains enough page space to get the system installed and running some small trial work. However, you should add DASD page space to do real work. The planning and admin book has details on determining how much space is required.
Here are a few thoughts on page space:
If the system is not paging, you may not care where you put the page space. However, sooner or later the system grows to a point where it pages and then you’ll wish you had thought about it before this happens.
VM paging is most optimal when it has large, contiguous available space on volumes that are dedicated to paging. Therefore, do not mix page space with other space (user, t-disk, spool, etc.).
A rough starting point for page allocation is to add up the virtual machine sizes of the virtual servers running and multiply by 2. Keep an eye on the allocation percentage and the block read set size.
See: Understanding poor performance due to paging increases
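The rough starting point above is trivial to script; the helper name and guest sizes here are hypothetical examples:

```python
def dasd_page_space_mb(guest_virtual_sizes_mb):
    """Rough starting point for DASD page allocation: add up the virtual
    machine sizes of the running virtual servers and multiply by 2."""
    return 2 * sum(guest_virtual_sizes_mb)

# Three Linux guests with 1 GB, 2 GB, and 4 GB of virtual storage
needed = dasd_page_space_mb([1024, 2048, 4096])
```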


If you have command privilege class E, issue the following CP command to view information about these classes of user: INDICATE LOAD


A minimal Linux guest system fits onto a single 3390-3 DASD, and this is the recommended practice in the field. This practice requires that you do not use GNOME or KDE window managers in order to retain the small size of the installed system. (The example does not do this because we want to show the use of LVM and KDE).


If your Linux distribution supports the “VM shared kernel support” configuration option, the Linux kernel can be generated as a shareable NSS (named saved system). Once this is done, any VM user can IPL LXSHR, and about 1.5M of the kernel is shared among all users. Obviously, the greater the number of Linux virtual machines running, the greater the benefit of using the shared system.


Makes a virtual machine exempt from being held back in an eligible list during scheduling when system memory and/or paging resources are constrained. Virtual machines with QUICKDSP set on go directly to the dispatch queue and are identified as Q0 users. We prefer that you control the formation of eligible lists by tuning the CP SRM values and allowing a reasonable over-commitment of memory and paging resources, rather than depending on QUICKDSP.


Each Linux guest is defined with an assigned number of virtual CPs and a SHARE setting that determines each CP’s share of the processor cycles available to z/VM.
When running WebSphere applications in Linux, you are typically able to over-commit memory at a
1.5/1 ratio. This means for every 1000 MB of virtual memory needed by a Linux guest, VM needs to
have only 666 MB of real memory to back that up. This ratio is a starting point and needs to be adjusted based on experience with your workload.
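As a quick sketch of that starting point (the helper name is hypothetical):

```python
def real_memory_needed_mb(guest_virtual_mb, overcommit_ratio=1.5):
    """Starting-point estimate of the real memory VM needs to back a
    given amount of Linux guest virtual memory at 1.5:1 over-commitment."""
    return guest_virtual_mb / overcommit_ratio

# For every 1000 MB of guest virtual memory, VM needs roughly 666 MB real
backing_mb = real_memory_needed_mb(1000)
```

Adjust the ratio up or down based on experience with your workload.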


Try to avoid swapping in Linux whenever possible. It adds path length and causes a significant hit to response time. However, sometimes swapping is unavoidable. If you must swap, these are some pointers:
Prefer swap devices over swap files.
Do not enable MDC on Linux swap Mini-Disks. The read ratio is not high enough to overcome the write overhead.
We recommend a swap device size approximately 15% of the VM size of the Linux guest. For example, a 1 GB Linux VM should allocate 150 MB for the swap device.
Consider multiple swap devices rather than a single, large VDISK swap device. Using multiple swap devices with different priorities can alleviate stress on the VM paging system when compared to a single, large VDISK.
Linux assigns priorities to swap extents. For example, you can set up a small VDISK with a higher priority (higher numeric value), and it will be selected for swap as long as it has space to contain the process being swapped. Swap extents of equal priority are used in round-robin fashion. Equal prioritization can be used to spread swap I/O across CHPIDs and controllers, but if you do this, be careful not to put all the swap extents on Mini-Disks on the same physical DASD volume; if you do, you will not accomplish any spreading. Use swapon -p to set swap extent priorities.
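The 15% sizing rule and the priority scheme above can be sketched together; the device names and priority values are hypothetical illustrations:

```python
def swap_device_size_mb(linux_vm_size_mb):
    """Recommended swap device size: roughly 15% of the Linux guest's VM size."""
    return int(linux_vm_size_mb * 0.15)

# A 1 GB (1000 MB) Linux VM gets a 150 MB swap device
size_mb = swap_device_size_mb(1000)

# Multiple swap extents: the highest numeric priority is used first,
# and equal priorities are used round-robin (set with swapon -p).
swap_extents = [
    {"device": "/dev/small_vdisk", "priority": 10},  # small, fast VDISK first
    {"device": "/dev/dasd_swap",   "priority": 5},   # larger, slower DASD next
]
first_used = max(swap_extents, key=lambda e: e["priority"])["device"]
```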


The advantage of VDISK is that a very large swap area can be defined at very little expense. The VDISK is not allocated until the Linux server attempts to swap. Swapping to VDISK with the DIAGNOSE access method is faster than swapping to DASD or SCSI disk. In addition, when using a VDISK swap device, your z/VM performance management product can report swapping by a Linux guest.


Swapping to DCSS is the fastest known method. As with VDISK, the solution requires memory, and lack of memory is the reason for swapping in the first place, so DCSS is best used as a small, fast swap device for peak situations. The DCSS swap device should be first in a cascade of swap devices, where those that follow can be bigger and slower (real disk). Swapping to DCSS also adds complexity.
Create an EW/EN DCSS and configure the Linux guest to swap to the DCSS. This technique is useful when the Linux guest is storage-constrained but the z/VM system is not. It lets the Linux guest avoid the overhead of building channel programs to talk to the swap device. For one illustration of the use of swap-to-DCSS, see IBM’s published paper on the technique.


If the storage load on your Linux guest is large, the guest might need a lot of room for swap. One way to accomplish this is simply to ATTACH or DEDICATE an entire volume to Linux for swapping. If you have the DASD to spare, this can be a simple and effective approach.


Using a traditional Mini-Disk on physical DASD requires some setup and formatting the first time and whenever changes in size of swap space are required. However, the storage burden on z/VM to support Mini-Disk I/O is small, the controllers are well-cached, and I/O performance is generally very good. If you use a traditional Mini-Disk, you should disable z/VM Mini-Disk Cache (MDC) for that Mini-Disk (use MINIOPT NOMDC statement in the user directory).


A VM temporary disk (t-disk) could be used. This lets one define disks of various sizes with less consideration for placement (having to find ‘x’ contiguous cylinders by hand if you don’t have DIRMAINT or a similar product). However, t-disk is temporary, so it needs to be configured (perhaps via PROFILE EXEC) whenever the Linux VM logs on. The storage and performance benefits of traditional Mini-Disk I/O apply. If you use a t-disk, you should disable Mini-Disk cache for that Mini-Disk.


A VM virtual disk in storage (VDISK) is transient like a t-disk is. However, VDISK is backed by a memory address space instead of by real DASD. While in use, VDISK blocks reside in central storage (which makes it very fast). When not in use, VDISK blocks can be paged out to expanded storage or paging DASD. The use of VDISK for swapping is sufficiently complex, so reference this separate tips page.


Attach expanded storage to the Linux guest and allow it to swap to this medium. This can give good performance if the Linux guest makes good use of the memory, but it can waste valuable memory if Linux uses it poorly or not at all. In general, this is not recommended in a z/VM environment.



The -Xgcpolicy options have these effects:

• optthruput – Disables concurrent mark. If you do not have pause-time problems (seen as erratic application response times), you get the best throughput with this option. optthruput is the default setting.

• optavgpause – Enables concurrent mark with its default values. If you are having problems with erratic application response times caused by normal garbage collections, you can reduce those problems, at the cost of some throughput, by using the optavgpause option.

• gencon – Requests the combined use of concurrent and generational GC to help minimize the time spent in any garbage-collection pause.

• subpool – Disables concurrent mark. It uses an improved object-allocation algorithm to achieve better performance when allocating objects on the heap. This option might improve performance on SMP systems with 16 or more processors. The subpool option is available only on AIX®, Linux® PPC and zSeries®, z/OS®, and i5/OS®.
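Selecting a policy is simply a matter of adding the option to the JVM invocation. A sketch (heap sizes and application name are illustrative):

```text
java -Xgcpolicy:gencon -Xms512m -Xmx2048m -jar myapp.jar
```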


Each of the caching options resulted in a significant throughput improvement over the no-caching case, with distributed map caching generating the highest throughput improvement.


This is an interesting feature meant to significantly improve performance with small caches, at no additional CPU cost.


The following recommendations from the Washington Systems Center can improve the performance of your WebSphere applications:

    • Use the same value for StartServers, MaxClients, and MaxSpareServers parameters in the httpd.conf file.
      Identically defined values avoid starting additional servers as workload increases. The HTTP server error log displays a message if the value is too low. Use 40 as an initial value.
    • Serve image content (JPG and GIF files) from the IBM HTTP Server (IHS) or Apache
      Web server.
      Do not use the file-serving servlet in WebSphere. Use the DocumentRoot or Alias
      directives to point to the image-file directory.
    • Cache JSPs and servlets using the servletcache.xml file.
      A sample definition is provided in the servletcache.sample.xml file. The URI defined in
      servletcache.xml must match the URI found in the IHS access log. Look for GET statements,
      and create a definition for each JSP or servlet to cache.
    • Eliminate servlet reloading in production.
      Specify reloadingEnabled="false" in the ibm-web-ext.xml file located in the application’s
      WEB-INF subdirectory.
    • Use Resource Analyzer to tune parameter settings.
      Additionally, examine the access, error, and native logs to verify applications are functioning correctly.
    • Reduce WebSphere queuing.
      To avoid flooding WebSphere queues, do not use an excessively large MaxClients value in the httpd.conf file. The Web Container Service General Properties MAXIMUM THREAD SIZE value should be two-thirds the value of MaxClients specified in the httpd.conf file. The Transport MAXIMUM KEEP ALIVE connections should be five more than the MaxClients value.
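Pulling the httpd.conf recommendations together with the suggested initial value of 40 gives a starting point like the following (the path is illustrative; with MaxClients at 40, the web container maximum thread size would be about 27 and the maximum keep-alive connections 45):

```text
# httpd.conf: initial WSC-recommended values (tune for your workload)
StartServers     40
MaxClients       40
MaxSpareServers  40
# Serve images directly from IHS rather than the file-serving servlet:
Alias /images/ "/opt/IBMHTTPServer/htdocs/images/"
```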



      • vmstat
      • sysstat package with sadc, sar, iostat
      • dasd statistics
      • SCSI statistics
      • netstat
      • top

z/VM Performance Toolkit (see Performance Toolkit for VM, SG24-6059).
This Perl script can help determine application memory usage. It displays the memory used by WebSphere as well as the memory usage of active WebSphere application servers. Using the Linux ps command, the script lists all processes containing the text “ActiveEJBServerProcess” (the WebSphere application server process). Using the RSS value for these processes, the script estimates the amount of memory used by WebSphere applications.
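The Perl script itself is not reproduced here, but the same idea can be sketched as a one-line shell pipeline (the process name comes from the text above; ps reports RSS in KB):

```shell
# Sum the resident set size (RSS, in KB) of every WebSphere application
# server process, identified by the name "ActiveEJBServerProcess".
ps -eo rss=,comm= | awk '/ActiveEJBServerProcess/ { sum += $1 }
                         END { printf "WebSphere RSS total: %d KB\n", sum }'
```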

White Paper: Why Upgrade from WebSphere Application Server (WAS) v7 to v8.x?

One of the more common questions we field at TxMQ comes from the enterprise community. Customers ask: We already upgraded our WebSphere Application Server (WAS) from 6 to 7, why should we now upgrade from 7 to 8? With the amount of chatter surrounding this topic, there’s clearly a bit of disconnect, so here’s some insight to help in the decision-making process.
There are several compelling reasons to upgrade from WAS v7 to v8, and they center on performance and setup/configuration improvements. The performance gains help you maximize your hardware investments because you won’t outgrow your servers as quickly. That ultimately leads to a reduction in your Total Cost of Ownership (TCO).
The setup/configuration improvements will speed up your end-to-end development cycle. You’ll therefore enable better, faster development using the same resources.
Lastly, the mobile-application feature pack offered in WAS v8 is a big advantage for companies already involved with, or wanting to become involved with mobile-app development and operation. This feature pack helps immensely.
That’s the broad-stroke look at what a WAS v8 upgrade delivers. Following is a more granular look at the specific features and benefits of a WAS v8 upgrade, including features exclusive to the latest v8.5 update.
An upgrade from WAS v7 to v8.x delivers:
• Application performance improvements of up to 20%
• Up to 20% faster server startup time for developers
• Up to 28% faster application deployments in a large topology
• JPA 2.0 optimizations with DynaCache and JPA Level 2 cache
• The new Liberty Profile option, a highly composable, fast-starting and ultra-lightweight profile of the application server, optimized for developer productivity and web-application deployment
• Up to 15% faster product installations
• Up to 323% faster application-server creation in a large topology
• Up to 45% faster application-server cluster creation in a large topology
• Up to 11% better vertical scaling on larger multicore systems
WAS v8 includes JVM runtime enhancements and JIT optimizations. It lowers risks through end-to-end security-hardening enhancements including security updates defined in the Java EE 6 specifications, additional security features enabled by default and improved security-configuration reporting.
Security default behavior is enhanced: SSL communication for RMI/IIOP, protection of the contents of HTTP session objects, and cookie protection via the HttpOnly attribute are all enabled by default.
• Java Servlet 3.0 security now provides three methods – login(), logout() and authenticate() – that can be used with an HttpServletRequest object, plus the ability to declare security constraints via annotations
• Basic security support for the EJB embeddable container
• Support for Java Authentication SPI for containers (JASPI)
• Web Services Security API (WSS API) and WS-Trust support in JAX-WS to enable users to build single sign-on Web services-based applications
• Security enhancement for JAX-RS 1.1
The focus on simplification continues in EJB 3.1 with several significant new functions including optional Business Interfaces, Singleton EJBs and Asynchronous Session Bean method invocation.
• CDI 1.0 – New API to support Context and Dependency Injection
• Bean Validation 1.0 – New API for validating POJO Beans
• JSF 2.0 – Adds Facelets as a view technology targeted at JSF
• Java Servlet 3.0 – Makes extensive use of annotations, introduces web fragments and a new asynchronous protocol for long-running requests
• JPA 2.0 – Has improved mapping support to handle embedded collections and ordered lists, and adds the Criteria API
• JAX-RS 1.1 – Delivers Web 2.0 programming model support within Java EE
• JAX-WS 2.2 – Extends the functionality provided by the JAX-WS 2.1 specification with new capabilities. The most significant new capability is support for the Web Services Addressing (WS-Addressing) Metadata specification in the API
• JSR-109 1.3 – Adds support for singleton session beans as endpoints, as well as for CDI in JAX-WS handlers and endpoints, and for global, application and module-naming contexts
• JAXB 2.2 – Offers improved performance through marshalling optimizations enabled by default
The list of improvements here is long and important.
• The Web 2.0 feature pack – new revenue opportunities and rich user experiences enabled by extending enterprise applications to mobile devices
• Faster migrations with less risk of downtime through improved automation and tools, including a no-charge Migration Toolkit for migrating version-to-version and from competition
• Improved developer productivity through monitored-directory install, uninstall, and update of Java EE applications, accelerating the edit-compile-debug development lifecycle
• Enhanced administrator productivity through automated cloning of nodes within clusters, simpler ability to centrally expand topologies and to locate and edit configuration properties
• Faster problem determination through new binary log and trace framework
• Simpler and faster product install & maintenance with new automated prerequisite and interdependency checking across distributed and z/OS environments
• Deliver better, faster user experiences by aligning programming model strengths with project needs through WebSphere’s leadership in the breadth of programming models supported: Java EE, OSGi Applications, Web 2.0 & Mobile, Java Batch, XML, SCA (Service Component Architecture), CEA (Communications Enabled Apps), SIP (Session Initiation Protocol) & Dynamic Scripting
• Integration of WebSphere Application Server v7 feature packs to simplify access to new programming models
• Deliver single sign-on (SSO) applications faster through new and improved support for SAML, WS Trust and WSS API specifications
• Most v7 feature packs are integrated into v8: OSGi Applications, Service Component Architecture (SCA), a Java Batch Container, the Communications Enabled Applications (CEA) programming model, and XML programming-model improvements (XSLT 2.0, XPath 2.0 and XQuery 1.0)
• Automatic Node Recovery and Restart
The following enhancements are specific to WAS v8.5.
• Application Edition Management enables interruption-free application rollout. Applications can be upgraded without incurring outages to your end-users
• Health Management monitors the status of your application servers and is able to sense and respond to problem areas before end-users suffer an outage. Problem areas include increased time spent on garbage collection, excessive request timeouts, excessive response time, excessive memory and much more
• Intelligent Routing improves business results by ensuring priority is given to business-critical applications. Requests are prioritized and routed based upon administrator-defined rules
• Dynamic Clustering can dynamically provision and start or stop new instances of application server Java Virtual Machines (JVM) based on workload demands. It provides the ability to meet Service Level Agreements when multiple applications compete for resources
• Enterprise Batch Workload support leverages your existing Java online transaction processing (OLTP) infrastructure to support new Java batch workloads. Java batch applications can be executed across multiple Java Enterprise Edition (Java EE) environments
• IBM WebSphere SDK Java Technology Edition v7.0 as an optional pluggable Java Development Kit (JDK)
• Web 2.0 and Mobile Toolkit provides an evolution of the previous feature pack
• Updated WebSphere Tools bundles provide the “right-fit” tools environment to meet varied development and application needs
Are you evaluating a WAS upgrade? TxMQ can help. To get started, contact vice president Miles Roty at (716) 636-0070 x228, [email protected].

White Paper: Four-Quadrant Analysis

Prepared By: Cindy Gregoire, TxMQ Practice Manager, Middleware & Application Integration Services



  • Are you having difficulty getting your SOA off the ground?
  • Are business initiatives dragging down your infrastructure with lots of low-quality web services that you wouldn’t even consider for reuse?
  • Are you able to realize your SOA investments with rapid development, high business value, and high acclaim from your business counterparts?
Using Service Oriented Architecture

It may be time to consider changing your requirements gathering process.
There is a principle in developing solutions that bring business value to multiple business units, called four quadrant analysis. This analysis involves interviewing stakeholders and collecting information into four distinct categories, or quadrants, that together form a more complete architecture framework for business process management initiatives, leveraging your middleware services and application provisioning within the framework of a service-oriented architecture.
Within a service-oriented architecture (SOA), the style of developing and deploying applications involves assembling reusable components that are modified for a new purpose. The goal is to minimize development effort, thereby minimizing the extensive need for testing and resulting in rapid delivery of business solutions. Delivering business automation involves implementing software to perform work on business data. In most cases, it also involves routing decisions, inputs, or outputs to various user groups that operate independently from one another, and may involve service providers or external business entities.

SOA Requirements

In implementing a SOA, requirements may originate from a number of sources, making the job of the business analyst even more difficult as analysts attempt to identify and define what is needed to realize the SOA investments in flexibility, reuse, and speed-to-market.
Functional requirements obtained through agile development efforts tend to focus myopically on user interfaces: requirements become known during an iterative process between application users and “agile” developers, leaving BAs drowning in a mire of minute detail around the many options of where, how, and when to display fields, fonts, and typestyle themes.
Requirements defined through the Agile Methodology tend to be end-user focused, with “look and feel” taking higher priority than efficiency of code, system performance, or keystroke/user-efficiency behaviors. Such requirements are labeled “non-functional” and generally become background information as projects are closed out based only on meeting requirements created from the application’s web graphical user interfaces or from one of the many portal technologies.
Portal technologies are patterned for language and usage patterns that can be modified quickly, providing a unique look and feel for user groups. Department portals may be modified without development code and optimized for the user groups accessing them. However, storage of business data and critical information can cause problems across the enterprise when decisions are made on an application-by-application basis (as they often were in pre-SOA days).
The issue arises as data, processes, and procedures come to vary considerably by business process or department. Information that needs to be supplemented, indexed, or compared against metrics for proper handling or escalation surfaces all over the organization in portals, applications, Excel spreadsheets, Access databases, and so on, and the resulting analysis data is not maintained or shared beyond an individual or department-level source. It may not even be known to others outside the department or business function.
As a result, simple changes to the organizational structure can result in major loss of critical data (wiped off an end user’s desktop), immediate retirement of business assets (through non-use), or the need for entire groups to “re-tool” (subjecting the company to further inefficiencies and delays) in order to use only the applications or interfaces favored by priority groups.

Leveraging the Flexibility of SOA

In an environment such as the above, how do you analyze the functional and non-functional requirements needed to effectively leverage the flexibility and capabilities of the emerging SOA technologies without causing continual chaos in your service delivery?
How do you recognize applications that should be designed as common services across the enterprise? How do you manage the proliferation of applications that all handle similar, duplicate data, and that end up requiring you to host large farms of application servers running undocumented applications with unknown owners, when you are not sure what the applications are, who is using them, or what they are being used for?

The Y2X Flowdown

Enter four quadrant analysis: a new perspective on information gathering and requirements modeling that addresses the issues introduced by the SOA space by leveraging Six Sigma. The technique involves, first and foremost, interviewing key stakeholders of your SOA initiatives to prioritize the key objectives for the company’s SOA framework, using the Y2X FlowDown tool.
The Y2X FlowDown is a Six Sigma technique that organizes the project deliverables, identifies dependencies, and creates a visual diagram from stakeholder input, identifying the project stages and measurable objectives to be realized over the completion of one or more projects. During the Y2X FlowDown meeting, it may become apparent that some of the project objectives will not be realized during the initial phases of a SOA. This is an important step during project initiation: it ensures that expectations are managed realistically and in the appropriate context of cost management.
As a result of the Y2X FlowDown analysis, it may be found that additional software or major investments are required to “measure” the successes of the project. It may also become clear through the flow-down discussion that not all needed stakeholders have been identified or involved in the discussions. The value of the Y2X FlowDown process is twofold: (1) it gets stakeholders on the same page with project-outcome expectations, and (2) it produces the Y2X FlowDown diagram (an example of which is shown below in figure 1), which becomes a reference point referred to again and again during each project, and at the beginning and end of project phases, as a “roadmap” of expectations for the project.

The Y2X FlowDown process unites the SOA sponsors with the infrastructure and development teams in an outcome-focused planning effort that is not distracted by the details of project delivery. It simply answers the questions: What will be delivered? How will success be measured? And when can I expect delivery?
Once a Y2X FlowDown diagram has been created for each initiative going forward, the specific objectives and how they will be measured become a regular discussion point, with the diagram serving as a key project-management tool among the various groups that will be affected by the new SOA, the development approach, and the process for rolling out business services.
From this point forward, your service delivery should be governed by a process which prioritizes and leverages the SOA components and patterns which are performing well within the SOA. This is key for manageability and obtaining the promises of SOA.

The Role of Business Analysis

While business initiatives and infrastructure projects are being identified and prioritized, SOA business analysts are re-assessing critical business processes for process-improvement opportunities. If they are not working on continual process improvement, the focus might be on the wrong things; many companies make this mistake.
They either hire BAs who are not technical enough to understand what they are mapping requirements to, or BAs who are too myopic, unable to see how services can leverage other services to become composite services. Instead, such BAs focus on end-user requirements while ignoring critical details such as throughput, error handling, and the escalation and routing of exceptions; such frameworks become the basis for Business Process Management tool optimization.
The role of the business analyst is to quantify the “as is” and the “to be” states and to prioritize the value of filling the “gaps”. Outputs may consist of activity diagrams, SIPOC diagrams, requirements models, and data-flow diagrams. Many companies use “swim lane” diagrams to portray the business process; however, these diagrams become difficult to review and can even become a source of confusion when a change in business flow occurs or a new packaged application is purchased.
If the start and stop points and the inputs and outputs of each business process are not easily identified, process improvements are impossible to identify, and the business process ends up mapped to how the “new” application works, creating yet another source of information overload for business users when the next business service is rolled out. This common approach creates even more sources of duplicate data across the enterprise, with multiple groups performing similar tasks (duplication), and can lead to major out-of-sync data and data-management nightmares.
It is at this juncture that many companies begin to take SOA governance seriously as a business-service development approach. Resolving such confusion should belong to the BA role: problems created by duplicate data, duplicate functionality within applications, and business functions being performed in multiple places in the organization are all problems for a BA to solve.

Four Quadrant Analysis: Business Analysis

The tool of the Technical Business Analyst (TBA) is the collective assessment and evaluation of technical requirements into these four quadrants. The TBA role is commonly filled by what is more recently known as the Enterprise Architect, as enterprise-level tools, data, and services are defined for the sake of leveraging a common toolkit across the enterprise. Such efforts can result in introducing ERP technologies such as SAP, or in outsourcing functions such as payroll and accounting. The TBA approaches the organization holistically by building a visual that helps the organization understand its use of technology, with the understanding that businesses run effectively because people follow procedures and access tools that use corporate data, whether those staff are internal, partners, or outsourced.
These are the four quadrants in this model: People, Procedures, Tools, and Data. There is one further element in the model: where groups use the same data, those points are “bridged” by what we call Architecture, which, visually speaking, can take the form of figure 2.

For each department in the company, these four elements are assessed by the TBA, starting with the critical business-process activity diagrams. The TBA quickly becomes knowledgeable about which departments require what information, using which tools (applications and services), to complete which procedures (business processing). This approach prevents “compartmentalization”, whereby focus is quickly given to a specific need (the squeaky wheel gets the grease) rather than to its relevance to particular business functions, all of which have quantifiable value to the overall business process.
This fundamental visual of the organization allows the “connectors” between departments to be quickly identified, along with the roles within the organization, escalations within a business process, critical dependencies, the need for SLAs and time-critical processing, and the need for data-maintenance standards to control and manage enterprise data. The “bridge” between staff and tools is the “keystone”, or what we commonly refer to in IT as architecture. This distinct approach leads to detecting existing assets and modifying them for multi-use functions, preserving the integrity of data, decreasing the need for data maintenance, and keeping in place the controls that regulate efficiencies. When all four quadrants are addressed methodically in business-requirements analysis, there are few “dropped balls” or missing requirements.
Where connectors are identified, the need for architecture to bridge between quadrant elements and the mapping of capabilities to applications and services, while making the most effective use of an organization’s cornerstone technologies, are key inputs to the enterprise architecture that enables rapid service deployment across business units.
This approach can also speed the creation of a critical inventory of enterprise assets that can be reviewed to optimize vendor relationships and consolidation, allowing you to realize exponential cost savings as you streamline your assets, redesign, and achieve your greatest business optimization ever.

If you are interested in revitalizing your business requirements gathering process – Contact Us