Considerations for Modernizing Legacy Enterprise Technology

What should you think about when modernizing your legacy systems?

Updating and replacing legacy technology takes careful planning and consideration, and it can take years for a plan to be fully realized. Poor choices during the initial planning process can destabilize your entire system, and it's not unheard of for shoddy strategic technology planning to put an organization out of business.

At TxMQ we play a major role in helping our clients plan, integrate, and maintain legacy and hybrid systems. I've outlined a few areas to think about in the course of planning (or, in some cases, re-planning) your legacy modernization.

1. The true and total cost of maintenance
2. Utilize technology that integrates well with other technologies and systems
3. Take your customer's journey into consideration
4. Ensure that Technical Debt doesn't become compounded
5. Focus on fixing validated issues
6. Avoid technology and vendor lock-in

1. The true and total cost of maintenance

Your ultimate goal may be to replace the entire system, but taking that first step typically means making the move to a hybrid environment.

Hybrid environments use multiple systems and technologies for various processes. They can be extremely effective, but difficult to manage on your own. If you're a large corporation with seemingly endless resources and an agile staff with a wide array of skill sets, you may be prepared. The reality, however, is that most IT departments are on a tight budget, with people multitasking and working far more than 40 hours a week just to maintain current systems.

These days most IT departments simply don't have the resources. That's why so many organizations are moving to managed IT services: to mitigate costs, reclaim time, and become more agile in the process.

When you decide to modernize your legacy systems, you have to take into account the actual cost of maintaining multiple technologies. As new tech enters the marketplace and older technologies and applications move toward retirement, so do the people who historically managed those technologies for you. It's nearly impossible today to find someone willing to invest time in learning technology that's on its last legs. It's a waste of time for them, and it will be a huge drain on your time and financial resources. It's like learning to fix a steam engine instead of a modern electric motor: a fun hobby, perhaps, but it will probably never pay the bills.

You can't expect newer IT talent to accept work that means refining and using skills that will soon no longer be needed, unless you're willing to pay them a hefty sum not to care. Even then, it's only a short-term answer; don't expect them to stick around for long, and always have a backup plan. It's also wise to have someone on call who can help in a pinch and provide fractional IT support.

2. Utilize technology that integrates well with other technologies and systems.

Unless you're looking to rip and replace your entire system, you need to ensure that the new plays well with the old. Spoiler alert: different technologies and systems often don't play well together.

Just when you think you've found that missing piece of software that fixes every problem your business leaders insist they have, you'll find that integrating it into your existing technology stack is far more complicated than you expected. If you're going it alone, remember when planning a project that two disparate pieces of technology often act like two only children playing together: they might get along for a bit, but as soon as you turn your back there's a miscommunication and they start fighting.

Take the time to find someone with expertise in integrations, preferably a consultant or partner with ample resources and experience integrating heterogeneous systems.

3. Take your customer’s journey into consideration

The principal reason any business should consider upgrading legacy technology is to improve the customer experience. Many organizations make decisions based on how they will increase profit and revenue without considering how that profit and revenue are actually made.

If you have an established customer base, improving their experience should be a top priority, because existing customers require minimal effort to retain. However, no matter how superior your services or products are, if a competitor offers a smoother customer experience, you can be sure your long-time customers will move on. As humans, we almost always take the path of least resistance. If you can improve even one aspect of a customer's journey, you have chosen wisely.

4. Ensure that Technical Debt doesn’t become compounded

Technical debt is the idea that choosing the simple, low-cost solution now means paying a higher price in the long run. The more often you choose that option, the more this debt grows, and eventually you will pay it back, with interest.

Unfortunately, this is one of the most common mistakes in a legacy upgrade project, and it's where frugality doesn't pay off. If you can convince the decision makers and powers that be of one thing, it should be this: don't choose exclusively on lower upfront costs; account for the total cost of ownership. If you're going to invest time and considerable effort in something, make sure it's done right the first time, or it could end up costing far more.
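The compounding effect can be made concrete with a little arithmetic. The sketch below is purely illustrative: the project figures and growth rates are invented assumptions, not benchmarks, but they show how a "cheap now" option whose maintenance burden grows each year can overtake a costlier option that is done right up front.

```python
# Hypothetical total-cost-of-ownership comparison.
# All figures are illustrative assumptions, not real project numbers.

def total_cost(upfront, annual_maintenance, growth_rate, years):
    """Upfront cost plus maintenance that compounds each year,
    mimicking how deferred technical debt accrues 'interest'."""
    cost = upfront
    maintenance = annual_maintenance
    for _ in range(years):
        cost += maintenance
        maintenance *= 1 + growth_rate
    return round(cost)

# Cheap option: low upfront cost, but maintenance grows 15% a year
# as workarounds and patches pile up.
cheap = total_cost(upfront=100_000, annual_maintenance=60_000,
                   growth_rate=0.15, years=5)

# Done-right option: higher upfront cost, maintenance grows only 3% a year.
done_right = total_cost(upfront=250_000, annual_maintenance=30_000,
                        growth_rate=0.03, years=5)

print(cheap, done_right)  # cheap ≈ 504543, done_right ≈ 409274
```

Under these made-up numbers the "frugal" choice costs roughly $95,000 more over five years, which is the essence of paying off technical debt with interest.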

5. Focus on fixing validated issues

It doesn't happen often, but sometimes when a new technology comes along (blockchain, for instance) we become so enamored with the possibilities that we forget to ask: do we need it?

It's like running around with a hammer in hand, looking for something to nail. Great, you have the tool, but if there's no obvious problem to fix with it, it's just a status symbol, and that doesn't get you far in business. There is no cure-all technology. You need to outline the problems you have, then prioritize them and find the technology that best suits your needs. Problem first, then solution.

6. Avoid technology and vendor lock-in

After you've defined which processes you need to modernize, be very mindful when choosing the technology and vendor to fix the problem. Vendor lock-in is serious and has been the bane of many technology leaders. Make the wrong choice here, and switching later could cost you substantially more than the initial project itself.

A good tip here is to look at what competitors are doing. You don't have to copy everyone else, but to remain competitive you have to be at least as good as your competitors. Take the time to understand and research the technologies and vendors available to you, and ensure your consultant knows how to plan your project with vendor lock-in in mind.

Next Steps:

Modernizing a major legacy system may be one of the biggest projects your IT department has ever taken on. There are many aspects to consider and no two environments are exactly alike, but it's by no means an impossible task. It has been done before, and thankfully you can draw from those experiences.

The best suggestion I can give is to have experience available to guide you through the process. If that's not available within your current team, find a consultant or partner with the needed experience, so you don't have to worry about making the wrong choices and creating a bigger issue than you had in the first place.

At TxMQ we've been helping businesses assess, plan, implement, and manage disparate systems for 39 years. If you need any help or have any questions, please reach out today. We would love to hear from you.

I'm A Mainframe Bigot

I make no apologies for my bigotry when I recommend mainframes for the new economy. Dollar for dollar, a properly managed mainframe environment will nearly always be more cost effective for our customers to run. This doesn’t mean there aren’t exceptions, but we aren’t talking about the outliers – we’re looking at the masses of data that support this conclusion.
To level-set this discussion: If you’re not familiar with mainframes, move along.
We aren’t talking about the Matrix “Neo, we have to get to the mainframe” fantasy world here. We’re talking about “Big Iron” – the engine that drives today’s modern economy. It’s the system where most data of record lives, and has lived for years. And this is a philosophical discussion, more than a technical one.
I’d never say there aren’t acceptable use cases for other platforms. Far from it. If you’re running a virtual-desktop solution, you don’t want that back end on the mainframe. If you’re planning to do a ton of analytics, your master data of record should be on the host, and likely there’s a well-thought-out intermediate layer involved for data manipulation, mapping and more. But if you’re doing a whole host (pun intended) of mainstream enterprise computing, IBM’s z systems absolutely rule the day.
I remember when my bank sold off its branch network and operations to another regional bank. It wasn’t too many years ago. As a part of this rather complicated transaction, bank customers received a series of letters informing them of the switch. I did some digging and found out the acquiring bank didn’t have a mainframe.
I called our accountant, and we immediately began a “bake off” among various banks to decide where to move our banking. Among the criteria? Well-integrated systems, clean IT environment, stability (tenure) among bank leadership, favorable business rules and practices, solid online tools, and of course, a mainframe.
So what’s my deal? Why the bigotry? Sure, there are issues with the mainframe.
But I, and by extension TxMQ, have been doing this a long time. Our consultants have collectively seen thousands of customer environments. Give us 100 customers running mainframes and 100 who aren't, and I guarantee the non-mainframe shops require far more people, and far greater cost, to support solutions of similar size.
Part of the reason is architecture. Part is longevity. Part is backward-compatibility. Part is security. I don’t want to get too deep into the weeds here, but in terms of hacking, unless you’re talking about a systems programmer with a bad cough, the “hacking” term generally hasn’t applied to a mainframe environment.
Cloud Shmoud
Did you know that virtualization was first done on the mainframe? Decades ago in fact. Multi-tenancy? Been there, done that.
Reliability, Availability and Serviceability define the mainframe. When downtime isn’t an option, there’s no other choice.
Enough said. Mainframes are just plain more secure than other platforms. The NIST National Vulnerability Database shows mainframes to be among the most secure platforms when compared with Windows, Unix, and Linux, with reported vulnerabilities in the low single digits.
A customer discussion prompted me to write this short piece. Like any article on a technology that's been around for over half a century, I could go on for pages and chapters. That's not the point. Companies at times develop attitudes that become so ingrained that no one challenges them or asks for proof. For years, the mainframe got a bad rap, mostly due to very effective marketing by competitors, but also because those responsible for supporting the host began to age out of the workforce. Kids who came out of school in the '90s and '00s weren't exposed to mainframe-based systems or technologies, so interest waned.
Recently, the need for total computing horsepower has skyrocketed, and we’ve seen a much-appreciated resurgence in the popularity of IBM’s z systems. Mainframes are cool again. Kids are learning about them in university, and hopefully, our back-end data will remain secure as companies realize the true value of Big Iron all over again.