APIs & DevOps are part of the Hybrid Cloud Answer

TxMQ’s Hybrid Cloud Practice

Most companies today have a huge investment in legacy systems and applications. Some may be mainframe-based, containing collections of old COBOL code that continue to run the business. Others run ERP or MRP systems that have been customized and retooled so much they barely resemble the out-of-the-box application originally purchased. Whatever the case, chances are high your legacy systems require support and ongoing management by a team of developers and admins, many of whom are likely beginning to plan their retirements. Do you have a plan in place to handle this not-so-far-off future?
Cloud is all the buzz today, but you are stuck in the past with systems and technology that just can't be 'lifted and shifted' to the cloud. Or perhaps you are hamstrung by regulatory requirements that simply preclude much of your data from a cloud migration.

APIs are part of the Hybrid Cloud answer (or at least will get you started down the right path)

An API strategy allows you to leverage your back-end assets by exposing them for consumption by trusted partners, or even end consumers, all while preserving the security and reliability of your legacy systems.
This approach is one we call 'Hybrid Cloud'. It is the first step in a cloud strategy, and it doesn't have to involve anything more than rethinking how legacy workloads are used and accessed. At the same time, it allows a rethinking of new workload deployments and new ways of working.
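To make the idea concrete, here is a minimal sketch of an API facade in Python, using only the standard library. Everything in it is illustrative: the endpoint shape, the account fields, and the legacy_account_lookup stub (which stands in for whatever connector actually reaches the back end) are assumptions, not a prescribed design.

```python
# A minimal sketch of an API facade over a legacy back end, using only the
# Python standard library. legacy_account_lookup is a hypothetical stand-in:
# in practice it might drive a COBOL transaction, a stored procedure, or an
# MQ request/reply through a connector.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def legacy_account_lookup(account_id: str) -> dict:
    """Stand-in for a call into the legacy system of record."""
    return {"accountId": account_id, "status": "ACTIVE", "balance": "1024.50"}

class AccountAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expose only a narrow, read-only slice of the back end:
        # GET /accounts/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "accounts":
            body = json.dumps(legacy_account_lookup(parts[1])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)  # everything else stays hidden

if __name__ == "__main__":
    # Bind locally; in production a facade like this sits behind a gateway.
    HTTPServer(("localhost", 8080), AccountAPI).serve_forever()
```

In practice the facade would sit behind an API gateway that handles authentication, rate limiting, and auditing; that gateway layer is where the security and reliability of the legacy tier are enforced.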

DevOps

A DevOps approach allows for a continuous, rapid, iterative application development cycle, while ensuring equally continuous testing of new applications and code, and automating code deployment. IBM's UrbanCode Deploy is an important part of a DevOps strategy for customers with extensive legacy workloads and compliance concerns.
Today's developers want to work in a more rapid and nimble way. 'Failing fast' is the new mantra, and it maps well to line-of-business (LOB) owners pushing for quicker time to market for their applications. Historically, this way of working didn't play well with legacy shops. Yet with the introduction of a Hybrid Cloud approach, companies can rethink new workloads and new application requests, including non-production environments, to leverage DevOps. Furthermore, companies can begin to look at Platform as a Service (PaaS) options that allow for more rapid environment spin-ups and more stable, rapid application testing.
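As a concrete illustration of the 'fail fast' gate a DevOps pipeline enforces, here is a minimal Python sketch of a build-test-deploy sequence that aborts on the first failure. The shell scripts it invokes are hypothetical placeholders for your real toolchain steps (an UrbanCode Deploy process invocation, for example).

```python
# A minimal "fail fast" pipeline gate. Every command here is a hypothetical
# placeholder; substitute your real build, test, and deploy steps.
import subprocess
import sys

STEPS = [
    ("build",  ["./build.sh"]),            # hypothetical build script
    ("test",   ["./run_tests.sh"]),        # hypothetical automated test suite
    ("deploy", ["./deploy_to_stage.sh"]),  # hypothetical deploy step
]

def run_pipeline() -> int:
    for name, cmd in STEPS:
        print(f"--- {name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail fast: a broken build or failing test stops the line
            # before anything reaches the deploy step.
            print(f"{name} failed; aborting pipeline.")
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```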
This Hybrid Cloud approach also allows for the rapid integration of cloud-based solutions like Workday, Salesforce, NetSuite, and other web-based technologies companies continue to adopt.
TxMQ helps companies evaluate their legacy infrastructure to identify quick wins that leadership can use to begin a roadmapping and 'future planning' strategy. Get in touch today to learn how we can help you create a future that leverages next-generation options for aging infrastructure and applications.
Next time, we'll discuss the next step on this journey: converged and hyperconverged infrastructure.

The Legacy of Legacy Applications and Cloud Enablement

The Legacy of Legacy Applications

When the tombstone is written one day on legacy applications (metaphorically, mind you; this logically will never happen), it may read something like: "We knew ye well, yet hardly knew ye!"

There is a ton of noise in the marketplace lately about legacy applications, especially as it relates to moving to the cloud. Rebuilding them, replatforming them, cloud-enabling them, outsourcing their management: you name it, it's being blogged about, discussed at conferences, and white-papered to death. To be sure, you are reading such a document right now, aren't you?

I have only a few things to add to the discourse, so I'll approach this piece as a bit of an aggregation of my readings, mixed with musings gleaned from direct experience over my 30 years in the field.

The industry seems to have silently adopted the term 'legacy applications' to reference all the old, messy stuff companies run. It includes old ERP systems, mainframe applications, old websites or e-commerce applications, point solutions, third-party systems, and even homegrown systems written by long-retired developers, often with little documentation.

These systems keep working (in most cases), may have indeterminate application (and/or endpoint) interdependencies, and, in general, the thought of unplugging them scares the daylights out of most IT managers. Simply put, in many cases they don't know exactly what many of these applications do. Hence the fear that turning off even an apparently isolated, rarely run, and little-understood application might have a disastrous cascading impact on other systems.

The fact is, much of what we lump into the legacy software bucket is software that keeps the lights on for companies. These are core systems that drive significant revenue, and they often have high-availability requirements that make a cloud discussion, and the latency inherent in a cloud deployment, problematic.

Among the many challenges of this software is the time and resource commitment companies make to manage it. When too much energy is devoted to legacy applications, too little remains to focus on growth, strategic initiatives, and, of course, evaluating alternate ways of deploying new workloads.

So what to do, and where to start?

An important initial step in evaluating cloud options (by cloud, let us presume public, private, and hybrid for now) is application portfolio management.

This means being aware of all the software you own (or run, in the case of some mainframe subsystem software like DB2 and CICS, where ownership remains with IBM while customers pay a monthly license charge). This is often a harder mountain to climb than most companies want to admit, and it is often when companies engage providers like TxMQ to begin an assessment, sometimes including building a complete catalog of the applications in their portfolio. From there, a discussion ensues around the business value of those applications. This step is critical: we must match resources to applications based on their business criticality. We often see companies devote as much time and energy, and thus precious dollars, to non-critical but poorly performing applications as to mission-critical, revenue-generating software.
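A portfolio catalog does not need to start sophisticated. Even a simple structured record per application, as in this illustrative Python sketch (field names, thresholds, and sample entries are all assumptions), is enough to surface the mismatch described above:

```python
# An illustrative application-portfolio record; the fields, values, and
# the review threshold are assumptions, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    platform: str               # e.g. "mainframe", "x86", "SaaS"
    owner: str                  # accountable business owner
    business_criticality: int   # 1 (low) to 5 (revenue-critical)
    annual_support_cost: float  # fully loaded, in dollars
    documented: bool

portfolio = [
    Application("order-entry", "mainframe", "Sales Ops", 5, 850_000.0, True),
    Application("hr-portal", "x86", "HR", 2, 120_000.0, False),
]

# Surface the mismatch described above: heavy spend on low-criticality apps.
for app in portfolio:
    if app.business_criticality <= 2 and app.annual_support_cost > 100_000:
        print(f"Review {app.name}: low criticality, high support cost")
```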

Once a complete view of software (and, of course, hardware) assets exists, the conversation can move to assigning priority to those assets. Prioritization includes looking at the lifecycle of each application, as well as the infrastructure required to support it. Knowing an application's payback (or ROI) and its life expectancy is nearly as important as knowing its function.
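One hedged way to turn those factors into a working prioritization is a simple weighted score over ROI, remaining life expectancy, and business criticality. The weights in this sketch are arbitrary starting points, not a recommendation:

```python
# An illustrative prioritization score; the weights and inputs are
# arbitrary starting points to tune against your own portfolio.
def priority_score(annual_roi: float, years_remaining: float,
                   criticality: int) -> float:
    # Higher ROI, longer remaining life, and higher criticality all argue
    # for earlier attention (and modernization budget).
    return 0.5 * (annual_roi / 100_000) + 0.2 * years_remaining + 0.3 * criticality

apps = {
    "order-entry": priority_score(annual_roi=2_000_000, years_remaining=8, criticality=5),
    "hr-portal":   priority_score(annual_roi=50_000,    years_remaining=2, criticality=2),
}

# Highest score first: these applications deserve attention soonest.
for name, score in sorted(apps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```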

Problems often arise during staff turnover, when critical legacy application knowledge departs as employees come and go. This can, and should, lead to an important asset-management conversation as well.

Once we have a handle on the relative priority of applications, we can start the conversation around where and how to run them, with cost control as the end goal. By properly matching resources to needs, we can free up resources (people and dollars) to focus, once again, on innovating: something IT has been hard-pressed to do while consumed by legacy application maintenance and firefighting.

The goal, remember, is to be able to deploy software continuously, rapidly, and at high quality. IT can, and should, be part of the strategy conversation, not viewed as a necessary evil that keeps the corporate lights on.

In truth, most companies get caught in the "I don't have time to innovate, we are too busy putting out fires" mentality. This never-ending cycle has permeated IT for years. Portfolio realignment as a path to cloud enablement can help end this fatal loop.

There are several established methods TxMQ (and others) use to help customers evaluate and prioritize their legacy applications. The goal, in the end, is to identify areas for quick wins. Inevitably, some legacy applications are simply monsters that will require massive rework or rethinking, and companies have a tendency to get lost in that sauce. Let's not open that can of worms first.

Let's learn to start with the easy wins. Many Java and .NET legacy applications are excellent early candidates for cloud enablement, with little code tweaking required.
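One common example of that 'little code tweaking': externalizing configuration so the same build runs unchanged on-premises and in the cloud. The sketch below is in Python for brevity (the same pattern applies to Java and .NET), and the variable names are hypothetical:

```python
# An illustrative cloud-enablement tweak: read settings from the
# environment instead of hard-coding them. Variable names are hypothetical.
import os

DB_HOST = os.environ.get("DB_HOST", "localhost")  # default for local runs
DB_PORT = int(os.environ.get("DB_PORT", "5432"))
DB_NAME = os.environ.get("DB_NAME", "appdb")

def connection_string() -> str:
    # The same build now runs unchanged on-premises or in the cloud;
    # only the environment it is deployed into differs.
    return f"host={DB_HOST} port={DB_PORT} dbname={DB_NAME}"

if __name__ == "__main__":
    print(connection_string())
```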

It is important, too, that any corporate sacred cows be identified early on for 'special handling'. Many an IT project, and more than a few IT leaders, have met an early demise by carelessly poking around in the data center. Political minefields exist in all companies and should be delicately identified and avoided. Sometimes it's not a good idea to 'whale hunt'; better progress can be made with smaller wins that gain traction, demonstrate successes to leadership, and ultimately earn buy-in to later tackle the major legacy application monsters.

Once completed, or at least well underway, most projects show a common split of applications by category. We usually see a significant share of large legacy applications, up to 85% in some cases. Next, we see 10% or more of applications that appear cloud-ready with some retrofitting. Lastly, we see the remainder: in-flight or to-be-developed applications that can be stood up as cloud-ready from the start.

Perhaps not surprisingly, much of the heavy lifting required to become a 'cloud ready' organization involves organizational mindset. Cloud-based applications, ideally, are developed very differently from legacy applications. Thus IT groups need to rethink application development.

Legacy application development involved (or involves, as much of it continues today) moving development through gates and, oftentimes, a change review board. Some of this process may be required for regulatory reasons. Some may just be in place 'because that's how we have always done things'.

We must tackle the process just as we tackle the applications themselves. If we don't adopt a new mentality, a new paradigm, nothing will change. Yet change is painful and time-consuming. By showing quick early wins, both in moving applications to the cloud and in rapidly iterating new applications, organizations will recognize the burden of old, onerous processes and the advantages of rapid application development. Change review boards can and do learn. If they can be shown that making rapid, small, incremental changes to code can actually reduce risk and exposure while increasing code quality, they will adopt the new paradigm. This can also lead to a fruitful recognition of the value of automating the build process, which will reap further rewards.

My goal here was to start the conversation around cloud, and replatforming in particular, while remaining platform-, technology-, and vendor-agnostic. All major vendors have valid cloud options, some well-tailored to specific needs, and many are viable regardless of the legacy application's language. Stating a desire to become cloud-ready is the beginning of a journey of discovery. It will lead (in time) to IT once again being invited to the table for broad strategy discussions, having proven its worth by navigating this industry inflection point. We encourage you to begin this journey with your eyes and mind open.

Read the original article published on LinkedIn here.