Digital Transformation: When It Makes Sense and When It Doesn’t


This isn’t the first time I’ve written about digital transformation, nor is it likely to be the last.

Digital transformation has become a “must use” catchphrase for investor and analyst briefings and annual reports. Heaven help the foolish Fortune 500 company that fails to use the buzzword in its quarterly briefing. It’s the “keto diet” of the technology world.

It’s a popular term, but what does digital transformation really mean? 

Legacy debt. 

In a world of enterprises that have been around for longer than a few years, there is significant investment in legacy processes and technical systems (together, what we like to call legacy debt) that can inhibit rapid decision making. This is a combination of not just core systems, but also decades-old processes and decision-making cycles: bureaucracy, in other words.

So why do we care about rapid decision making? Simply put, in years past, decisions were less consumer-driven and more company-driven, or dare I say it, focus-group driven.

Companies could afford to take their time making decisions because no one expected overnight change. Good things used to take a long time. 

We now live in a world where consumers demand rapid change and improvement to not just technology, but also processes. On a basic level, this makes sense. After all, who hasn’t had enough of poorly designed AI-driven, voice-activated phone trees when we just want to ask the pharmacist a question about our prescription refill? 

Too often, however, legacy debt leads to rapid implementations to meet customer demands – often with unintended (and catastrophic) consequences.  Often this is the result of rapid, poorly built (or bought) point solutions. This is where disruptors (aka startup companies) often pop up with quick, neat, point solutions of their own to solve a specific problem: a better AI-driven phone solution, a cuter user interface for web banking, sometimes even a new business model entirely. Your CIO sees this in an article or at a conference and wonders, “why can’t we build this stuff in-house?”

Chasing the latest greatest feature set is not digital transformation. Rather, digital transformation begins with recognizing that legacy debt must be considered when evaluating what needs changing, then figuring out how to bring about said change, and how to enable future rapid decision making. If legacy systems and processes are so rigid or outdated that a company cannot implement change quickly enough to stay competitive, then, by all means, rapid external help must be sought. Things must change.

However, in many cases what passes for transformation is really just evolution. Legacy systems, while sometimes truly needing a redo, do not always need to be tossed away overnight in favor of the hottest new app. Rather, they need to be continually evaluated for better integration or modernization options, usually by better exposing back-end systems. Transformation is just another word for solving a problem that needs solving, not introducing a shiny object no one has asked for. Do our systems and processes, both new and old, allow us to operate as nimbly as we must, to continue to grow, thrive, and meet our customer demands today and tomorrow?

The Steve Jobs effect

Steve Jobs once famously stated (it’s captured on video, so apparently it really happened), when asked why he wasn’t running focus groups to evaluate the iPod, “How would people know if they need it, love it or want it if I haven’t invented it yet?”

Many corporate decision-makers think they are capable of emulating Steve Jobs. Dare I say it, they are not, nor are most people. Innovating in a vacuum is a very tricky business. It’s best to let the market and our customers drive innovation decisions. Certainly, I advocate for healthy investment in research and development, yet too often innovation-minus-customers equals wasted dollars. Unless one is funding R&D for its own sake, certainly a worthy cause, one needs some relative measure of the value and outcomes around these efforts. Which usually translates to marketability and ultimately profits.

Measurement

Perhaps the most often forgotten reality of our technology investments is understanding what the end goal, or end state, is, and measuring whether or not we accomplished what we set out to do. Identifying a problem and setting a budget to solve that problem makes sense. But failing to measure the effectiveness after the fact is a lost opportunity. Just getting to the end goal isn’t enough if, in the end, the problem we sought to solve remains. Or, worse yet, we create other, more onerous, unintended consequences.

Digital transformation isn’t about buzzwords or “moving faster” or outpacing the competition. It’s all of that, and none of that at the same time. It’s having IT processes and systems that allow a firm to react to customer-driven needs and wants, in a measured, appropriate, and timely way. And yes, occasionally to try to innovate toward anticipated future needs.
Technology is just the set of tools we use to solve problems.

Does it answer the business case?

“IT” is never — or at least shouldn’t be — an end in itself: it must always answer to the business case. What I’ve been describing here is an approach to IT that treats technology as a means to an end. Call it “digital transformation,” call it whatever you want — I have no use for buzzwords. If market research informs you that customers need faster web applications, or employees tell you they need more data integration, then it’s IT’s job to make it happen. The point is that none of that necessitates ripping and replacing your incumbent solution.

IT leaders who chase trends or always want the latest platform just for the sake of being cool are wasting money, plain and simple. Instead, IT leaders must recognize legacy debt as the investment it is. In my experience, if you plug this into the decision-making calculus, you’ll find that the infrastructure you already have can do a lot more than you might think. Leverage your legacy debt, and you’ll not only save time delivering new products or services, but you’ll also minimize business interruption — and reduce risk in the process. 

That’s the kind of digital transformation I can get behind.

TxMQ’s Chuck Fried and Craig Drabik Named 2020 IBM Champions

More than 40 years ago, TxMQ was founded by veterans of IBM who believed in supporting mainframe customers through new solutions built for IBM products. We’ve come a long way since 1979: we’ve moved our headquarters from Toronto to the U.S., our leadership team has grown, and we continue to enhance our roster of services. And though our capabilities and products have advanced, we’ve still managed to maintain a close connection to our roots at IBM. Our mission has also remained the same: to empower companies to become more dynamic, secure and nimble through technology solutions.

This mission has helped us assemble a team of innovators who constantly strive to help our clients meet their business goals through technological advancements.

Chuck Fried and Craig Drabik are great examples of TxMQ’s consistent excellence in bringing the best solutions to our enterprise clients. They were recently named to IBM’s 2020 Class of Champions for demonstrating extraordinary expertise, support and advocacy for IBM technologies, communities and solutions. Champions are thought leaders in the technical community who continuously strive to innovate and support new and legacy IBM products. As IBM states, “champions are enthusiasts and advocates… who support and mentor others to help them get the most out of IBM software, solutions, and services.” Here, Chuck and Craig share what IBM and being named IBM Champions means to them:

IBM Champion of Cloud, Cloud Integration, and Blockchain

Chuck Fried
President, TxMQ

“I’ve been building technological solutions for over 30 years, and have worked with many large software and technology companies. As we help our clients evolve, I am constantly drawn back to IBM. They are thought leaders in the technology industry, bringing the best new software and services to the market. Working with them, we know that our clients are getting the best possible solution. I’m proud to continue advocating for their brand.”

IBM Champion of Blockchain

Craig Drabik
Technical Lead, Disruptive Technologies Group

“Although IBM is often associated with mainframe and legacy technologies, they offer so much more to the technology industry. Being named a Champion while working in disruptive technologies proves this. IBM is progressive and innovative, and strives to develop solutions for a range of products and industries. Working with IBM, we have access to world-renowned solutions that are trustworthy.”

As TxMQ builds new tools to support and grow the IBM ecosystem, having two Champions is a great achievement for our company. With this recognition, we can continue fostering our relationship with IBM and building life-changing technology for our customers.

Generating OpenAPI or Swagger From Code is an Anti-Pattern, and Here’s Why

(This article was originally posted on Medium.)

I’ve been using Swagger/OpenAPI for a few years now, and RAML before that. I’m a big fan of these “API documentation” tools because they provide a number of additional benefits beyond simply being able to generate nice-looking documentation for customers and keep client-side and server-side development teams on the same page. However, many projects fail to fully realize the potential of OpenAPI because they approach it the way they approach Javadoc or JSDoc: they add it to their code, instead of using it as an API design tool.

Here are five reasons why generating OpenAPI specifications from code is a bad idea.

You wind up with a poorer API design when you fail to design your API.

You do actually design your API, right? It seems pretty obvious, but in order to produce a high-quality API, you need to put in some up-front design work before you start writing code. If you don’t know what data objects your application will need or how you do and don’t want to allow API consumers to manipulate those objects, you can’t produce a quality API design.

OpenAPI gives you a lightweight, easy-to-understand way to describe what those objects are at a high level and what the relationships are between those objects, without getting bogged down in the details of how they’ll be represented in a database. Separating your API object definitions from the back-end code that implements them also helps you break another anti-pattern: deriving your API object model from your database object model. Similarly, it helps you to “think in REST” by separating the semantics of invoking the API from the operations themselves. For example, a user (noun) can’t log in (verb), because the “log in” verb doesn’t exist in REST — you’d create (POST) a session resource instead. In this case, limiting the vocabulary you have to work with results in a better design.
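To make the “session as a resource” idea concrete, here is a minimal, hypothetical OpenAPI 3.0 fragment. The path, operation, and schema names are illustrative only, not from any real API:

```yaml
# Hypothetical sketch: "logging in" modeled as creating a session resource.
# POST /sessions replaces a non-RESTful "login" verb.
paths:
  /sessions:
    post:
      summary: Create a session (i.e., log in)
      operationId: createSession
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Credentials'
      responses:
        '201':
          description: Session created
components:
  schemas:
    Credentials:
      type: object
      required: [username, password]
      properties:
        username:
          type: string
        password:
          type: string
          format: password
```

Deleting the session resource (DELETE /sessions/{id}) would then model “log out” the same way, keeping the whole API in nouns.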

It takes longer to get development teams moving when you start with code.

It’s simply quicker to rough out an API by writing OpenAPI YAML than it is to start creating and annotating Java classes or writing and decorating Express stubs. All it takes to generate basic sample data out of an OpenAPI-generated API is to fill out the example property for each field. Code generators are available for just about every mainstream client and server-side development platform you can think of, and you can easily integrate those generators into your build workflow or CI pipeline. You can have skeleton codebases for both your client and server-side plus sample data with little more than a properly configured CI pipeline and a YAML file.
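As a sketch of the point above, filling out the `example` property on each field is all it takes for generated stubs and mock servers to return plausible sample data. The schema and values here are purely illustrative:

```yaml
# Hypothetical sketch: per-field examples drive generated mock responses.
components:
  schemas:
    User:
      type: object
      properties:
        id:
          type: integer
          example: 42
        email:
          type: string
          format: email
          example: jane.doe@example.com
        displayName:
          type: string
          example: Jane Doe
```

A mock server generated from this definition can serve `{"id": 42, "email": "jane.doe@example.com", "displayName": "Jane Doe"}` to client-side developers before a single line of server code exists.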

You’ll wind up reworking the API more often when you start with code.

This is really a side-effect of #1, above. If your API grows organically from your implementation, you’re going to eventually hit a point where you want to reorganize things to make the API easier to use. Is it possible to have enough discipline to avoid this pitfall? Maybe, but I haven’t seen it in the wild.

It’s harder to rework your API design when you find a problem with it.

If you want to move things around in a code-first API, you have to go into your code, find all of the affected paths or objects, and rework them individually. Then test. If you’re good, lucky, or your API is small enough, maybe that’s not a huge amount of work or risk. If you’re at this point at all, though, it’s likely that you’ve got some spaghetti on your hands that you need to straighten out. If you started with OpenAPI, you simply update your paths and objects in the YAML file and re-generate the API. As long as your tags and operationIds have remained consistent, and you’ve used some mechanism to separate hand-written code from generated code, all you’re left to change is business logic and the mapping of the API’s object model to its representation in the database.
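A hypothetical illustration of that stability: because most generators derive method and class names from `operationId` and tags rather than from the path, moving a path in the YAML leaves the generated stub names untouched. The names below are made up for the example:

```yaml
# Hypothetical sketch: the path moved (e.g., from /orders/{orderId} to
# /v2/orders/{orderId}), but operationId and tags are unchanged, so
# regenerated client/server stubs keep the same method names.
paths:
  /v2/orders/{orderId}:
    get:
      tags: [Orders]
      operationId: getOrderById
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: The requested order
```

Only the routing layer regenerates; the hand-written business logic wired to `getOrderById` stays where it was.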

The bigger your team, the more single-threaded your API development workflow becomes.

In larger teams building in mixed development environments, it’s likely you have people who specialize in client-side versus server-side development. So, what happens when you need to add to or change your API? Well, typically your server-side developer makes the changes to the API before handing it off to the client-side developer to build against. Or, you exchange a few emails, each developer goes off to do his own thing, and you hope that when everyone’s done that the client implementation matches up with the server implementation. In a setting where the team reviews the proposed changes to the API before moving forward with implementation, you’re in a situation where code you write might be thrown away if the team decides to go in a different direction than the developer proposed.

It’s easy to avoid this if you start with the OpenAPI definition. It’s faster to sketch out the changes and easier for the rest of the team to review. They can read the YAML, or they can read HTML-formatted documentation generated from the YAML. If changes need to be made, they can be made quickly without throwing away any code. Finally, any developer can make changes to the design. You don’t have to know the server-side implementation language to contribute to the API. Once approved, your CI pipeline or build process will generate stubs and mock data so that everyone can get started on their piece of the implementation right away.

The quality of your generated documentation is worse.

Developers are lazy documenters. We just are. If it doesn’t make the code run, we don’t want to do it. That leads us to omit or skimp on documentation, skip the example values, and generally speaking weasel out of work that seems unimportant, but really isn’t. Writing OpenAPI YAML is just less work than decorating code with annotations that don’t contribute to its function.

IBM Db2 v10.5 End of Support: April 30, 2020

Are you still running IBM Db2 v10.5 (or an even earlier version)?

IBM has announced an end-of-support date for Db2 v10.5 on Linux, Unix, and Windows. If you are still using that version, it’s recommended that you upgrade to v11.1 to avoid potential security issues in earlier, unsupported versions of Db2.
Db2 v11.1 Highlights:

  • Column-organized table support for partitioned database environments
  • Advances to column-organized tables
  • Enterprise encryption key management
  • IBM Db2 pureScale Feature enhancements
  • Improved manageability and performance
  • Enhanced upgrade performance from earlier versions 

What are your plans for DB2?

I plan to upgrade:

It’s never too late to start planning your upgrade, and moving to IBM Db2’s newest release, v11.1, is a great option. New features help you better manage costs, improve efficiency, and simplify administration.
Take a closer look here at some of the enhancements.
If you are still considering your plans, now’s a great time to speak with our SME Integration Upgrade Team. Reach out to us today to set up a free Discovery Session, or contact us directly with any questions.

I would like to continue to use v10.5 (or earlier versions):

It’s okay if you’re not ready for the newest version of Db2 just yet. However, it’s important to remember that without support you may be exposed to avoidable security risks and rising support costs. IBM does offer Extended Premium support, but be prepared: that option is very expensive.
Alternatively, as an IBM Business Partner, TxMQ offers expert support options. As a business partner, we have highly specialized skills in IBM software. We can help guide you through issues that may arise, at a fraction of the cost and with the added benefit of flexibility in services. Check out more on TxMQ’s Extended Support Page.

I will support it internally:

If you have an amazing internal team, odds are they don’t have much time to spare. Putting the weight of a big project on your internal team can cut into their productivity. For many organizations, this will limit a team’s ability to focus on innovation and improving customer experience. That will make your competitors happy, but your customers and clients definitely won’t be.
Utilizing a trusted partner like TxMQ can help cut costs and give your internal team back time to focus on improvements, not just break/fix maintenance. Reach out to our team to learn how we can help maintain your existing legacy applications and platforms so your star team can focus on innovation again.

I don’t know, I still need help!

Reach out to TxMQ today and schedule a free Discovery Session to learn what your best options are! TxMQ.com/Contact/

DLT Applications: Tracking medication through the healthcare supply chain

This article was originally published by MCOL.com on 12/19. 

It’s no secret that we have a dangerous opioid epidemic in the United States, as well as in many other parts of the world. Efforts to address the issue have come from both industry and government entities alike. In 2017, there were 47,600 overdose deaths in the U.S. involving opioids, which led to the U.S. Department of Health & Human Services (HHS) declaring a healthcare crisis. In April 2017, HHS outlined an Opioid Strategy, which included, among other components, the desire to create methods to strengthen public health data reporting and collection to inform a real-time public health response as the epidemic evolves.

Opioids are strong pain medications that mimic the pain-reducing qualities of opium, and when used improperly, are extremely dangerous and highly addictive. The increasing epidemic has highlighted the need for organizations to keep secure, reliable and actionable product lifecycle data, ensuring that they can track the entire supply chain for sensitive medications. In addition to meeting regulatory compliance requirements, cost and efficiency benefits may also be realized through tighter tracking and better data. Most importantly, it can help to cut down on the lives that are lost because of opioids and other medications being misused.

Healthcare Supply Chains

When discussing technology integration in a highly regulated industry like healthcare, it is hard to find solutions that work to both reduce costs and improve efficiencies, while still maintaining high levels of security and usability. This is why many healthcare organizations are turning towards supply chain management for new solutions; it will still improve efficiencies and cost, but it rarely involves personal health information, making it easier to satisfy regulatory requirements. In cases that use blockchain or distributed ledger solutions, it can also use immutable data and analytics, which can address suppliers’ fears of being hacked or losing sensitive proprietary information. On top of that, supply chain management can provide results to healthcare organizations to ensure that the solution is working effectively. In a 2018 Global Healthcare Exchange survey, nearly 60 percent of respondents said that data and analytics improvements were their highest priority. Supply chain management has many benefits for healthcare organizations, without having to work around highly regulated and secure data.

Supply chain management involves tracking supplies from the distributors or manufacturers, all the way through the healthcare organization to the patients receiving the medication or supplies. Many organizations still track supplies by hand, which can result in high margins of error. Also, many healthcare management systems are not integrated with each other, which means that patients can take advantage of these systems and access dangerous medications more easily. As healthcare systems move beyond hospitals and into non-acute sites, supply chain management becomes increasingly complex and difficult to manage. With supply chain management, healthcare organizations can track down errors and find out who made the error and when. When prescription drugs are involved, this would include knowing which patients, physicians, or prescribers are abusing the system by accessing more pain medications and opioids than they actually require or by over-prescribing more than should be allowed. This can help end addictions and overdoses.

Distributed Ledger Technology Solution

Accurate, timely information is critical in any supply chain. In the pharmaceutical industry, regulatory oversight and the potential for serious consequences for patients make supply chain traceability even more important. The ability to assess the behavior of the participants—patients, providers, distributors, manufacturers and pharmacists—within the supply chain is a useful tool in the battle against substance abuse. Developing a controlled substance management system as a robust, compliant supply chain management solution can help to track the movement of orders and medications through the pharmaceutical supply chain, from manufacturer and distributor to pharmacy and patient. Participants generate activity daily by consuming medication and refilling their prescriptions when they run out. Similarly, pharmacies and distributors place orders when supplies run low. Building this solution on a distributed ledger technology such as Hashgraph allows for increased security, immutable, time-stamped data, fast throughput, and easy customization to meet the needs of healthcare providers. It can even be customized to flag violations of laws or best practices, such as refilling prescriptions too often or over-prescribing.

Distributed ledger solutions have the ability to enforce rules on each participant with regards to the amount of medication that can be consumed, manufactured, distributed or prescribed. Patients’ refills can be limited based on their needs. Physicians, pharmacies and distributors have limits on the amount of medication they can prescribe or order in a given period of time to ensure that they are not abusing the system either. Participants who exceed these set limits are flagged by the system and can be removed, meaning that they are no longer able to order, prescribe or refill specific substances. This system can track a number of elements or components. In this case, it could track the distributor, the manufacturer, the prescribing physician, the pharmacy, the medication or opioid, and the patient. Time-stamped, immutable data allows for healthcare organizations to easily see when an error or an abuse of the system took place.

Distributed ledger solutions are built on a system of nodes, and each node processes each transaction. Each record or transaction is signed using the signature of the previous transaction to guarantee the integrity of the chain or ledger. This means that these systems are difficult to breach or hack. Although supply chain management does not directly use confidential patient health information, it is important that all solutions integrated into a healthcare system are secure, ensuring that data cannot be manipulated to allow further abuse of dangerous medications.

Finding a Solution

To save lives, it is imperative to find effective solutions to issues facing healthcare and the opioid epidemic. Unfortunately, within this industry, it can be hard to innovate due to privacy and regulations. Distributed ledger technology has the chance to innovate and potentially save lives when implemented as a sensitive medication supply chain management system. Its high-security, transparency and immediate auditability makes it an effective solution to track how harmful medications are being abused and to put an immediate stop to these issues. Technology already exists to solve these problems; it is only a matter of the healthcare industry taking these solutions seriously and implementing them before more lives are lost.

 

Bringing Offshore Contracts Back Onshore


Although many companies rely on offshore technology teams to save costs and build capacity, there are still many challenges around outsourcing. Time zone issues, frequent staff turnover, difficulty managing workers, language barriers—the list goes on and on. Offshore workers can allow companies to save money. But what if offshore pricing were available for onshore talent? What if the best of both worlds – an easily managed workforce at a competitive cost – were possible? In fact, it is.

For all the pains and issues related to building global technology teams, outsourcing remains a viable option for many companies that need to build their engineering groups while controlling costs. With the U.S. and Europe making up almost half of the world’s economic output, but only 10% of the world’s population, it’s no secret that some of the world’s best talent can be found in other countries. That’s how countries such as India, China, and Belarus have become global hubs for engineering. And why not? They have great engineering schools, low costs of living, and large numbers of people who are fully qualified to work on most major platforms.

Reinventing Outsourcing 

This is basic supply and demand: companies want to hire people at as competitive a price point as possible without sacrificing quality. This is exactly how Bengaluru and Pune became technology juggernauts in the 1990s, and how Minsk became a go-to destination a decade later. The problem, of course, is that what was once a well-kept secret became well known…and wages started creeping up.

With salaries increasing in countries typically used for offshore talent, the cost of offshore labor is also on the rise. In India, a traditional favorite for offshore work, annual salaries have been steadily rising by 10% since 2015, making it less beneficial for companies to hire workers there. In fact, in one of the biggest outsourced areas, call centers, workers in the U.S. earn on average only 14% more than outsourced workers. In the next few years, the gap will be narrow enough that setting up a call center in Ireland or India just won’t make sense. What the laws of supply and demand can give, they can also take away. That’s why “outsource it to India” is no longer an automatic move for growing technology companies, financial institutions, and other businesses looking to rapidly grow their teams. It’s also why major Indian outsourcing companies such as Wipro and Infosys are diversifying into other parts of the world.

As political and economic instability grow, moving a company’s outsourced work domestically can help to mitigate the risks of an uncertain landscape. A perfect example of this is China. Hundreds of American companies have set up development offices in China to take advantage of a skilled workforce at a low price point. So far so good, right? Well, not really. Due to concerns about cybersecurity and intellectual property theft, companies such as IBM have mandated that NONE of their code can come from China. All of a sudden, setting up shop in Des Moines is a lot more attractive than going to Dalian.

The federal government, as well as many states and municipalities, is also playing an active role in keeping skilled technology jobs at home through grants and tax breaks. New programs and training schools are also emerging, helping to build talent in the U.S. at a lower cost and helping companies take advantage of talented workers outside of large cities with low costs of living. Hiring 100 engineers in midtown Manhattan might not be cost-effective, but places like Phoenix and Jacksonville allow companies to attract world-class talent without breaking the bank.

This doesn’t mean the end of offshoring, of course. When looking for options to handle mainframe support and legacy systems services, including SPARC, AIX, HP-UX, and lots of back-leveled IBM, Oracle, and Microsoft products, the lure of inexpensive offshore labor often wins. Unlike emerging technologies, legacy systems do not require constant updates and front-end improvements to keep up with competitors. The typical issues that affect offshore outsourcing aren’t as big of a problem when legacy systems are involved. So where does it make sense to build teams, or hire contractors, domestically?

Domestic Offshoring (sometimes called near-shoring)

There is a key difference between outsourcing development to overseas labs and building global teams, but the driving force behind both approaches is pretty much the same: cut costs while preserving quality. Working with IT consulting and staffing companies like TxMQ is a prime example of how businesses can take advantage of outsourcing onshore without going into the red. Unlike technology hubs such as Silicon Valley, these companies are typically located in areas such as the Great Lakes region, where outstanding universities (and challenging weather!) yield inexpensive talent due to lower living costs. With aging populations creating a need for skilled workers in the Eastern United States, more states are introducing benefits to attract workers. This is already creating an advantage for companies that provide outsourced staffing, because they can charge lower prices than traditional technology hubs. It’s the perfect mix of ease, quality, and cost.

Global 2000 companies face challenges resulting from their large legacy debt, and the costs to support their systems are high. As they struggle to transform and evolve their technology to today’s containerized, API-enabled, microservices-based world, they need lower-cost options to both support their legacy systems and build out new products.

While consulting and staffing companies are well known for transformational capabilities and API enablement, there are other advantages that aren’t as well known. Beyond these transformational services, many companies also support older, often monolithic, applications, including those likely to remain on the mainframe forever. From platform support on IBM Power systems to complete mainframe management and full support for most IBM back-leveled products, companies like TxMQ have found a niche providing economical support for enterprise legacy systems, including most middleware products of today, and yesterday. This allows companies to invest properly in their enterprise transformation while maintaining their critical legacy systems.

The Future of Work

In a 2018 study of IT leaders and executives, more than 28 percent planned to increase their onshore spending in the next year. With the ability to move work online, companies can support outsourced teams easily, whether onshore or offshore. As the pay gap closes between the U.S. and other nations, employing outsourced talent onshore helps companies sidestep age-old issues such as time zones and language barriers, and lets them benefit from outsourcing without having to fly 15 hours across two oceans to do it.

Tackling digital transformation proactively — before a crisis hits

This article was originally published by CIO Dive on 12/9/19.

Digital transformation is the headline driver in most enterprises today, as companies realize that in order to stay relevant, engage current and new customers and thrive, they need to constantly reevaluate their technology stack.

Unfortunately, real transformation is an intensive process that is neither easy nor smooth. It tends to take place reactively, whether in response to losing customers as new competition arises or as a way to manage a crisis already underway.

But reactive digital transformation tends to be unsustainable because without a real strategy no one knows what they are trying to change and why. The “why” needs to evolve from what the customer expects and what can be implemented in order to retain business.

In a data-driven age, digital transformation needs to ensure that customers can access what they need, when, how and where they want, while still keeping their information safe. The fundamental structure of how data is gathered, compiled, used and distributed needs to evolve.

As new competitors arrive with the latest disruptive technology, companies must deal with their legacy debt and its associated costs, which prohibit quick upgrades to systems and processes (not to mention the critical retraining of staff).

CIOs are aware that this change is necessary, and 43% think legacy and core modernization will have the largest impact on businesses over the next three years. But knowing it's needed and knowing how to prepare require different mindsets.

In order to ensure they are addressing change effectively, CIOs need to prepare for digital transformation proactively — before a crisis arises.

Proactive digital transformation: The cost of legacy debt

There are three types of debt impacting most companies: process, people and technical.

Process debt refers to the appropriate frameworks needed to operate using modern systems and is highly important to address as it has a cascading effect into other types of debt.

A new type of workforce is needed to negotiate emerging trends and new technologies, and a lack of highly skilled workers is referred to as people debt. Similarly, existing staff must be retrained on new technologies and processes.

Technical debt involves legacy software, computers and tools that have become outdated, and are expensive to manage or upgrade. Companies often find the cost of maintaining legacy technologies is so great it diverts resources away from modernizing, evolving or adapting the business.

The cost of legacy debt is more than just the amount of money and time it will take to upgrade to the latest solutions. It impacts all parts of the business, including the productivity of the company.

Companies with high technical debt are 1.6 times less productive than companies with low or no legacy debt. Upgrading is clearly necessary to drive better business results.

In order to address all types of debt, companies need to foster an environment where people are willing to learn and update processes in order to modernize.

Preparing for a digital transformation

Some 20% of companies fail to achieve their digital transformation goals, so preparation is key to finding success. Here are integral steps companies can take to prepare for the transformation.

1. Involve executives

Although the CIO may be spearheading the digital transformation initiative, all C-level executives should be involved from the start to align goals.

Establish a consistent, clear story across the organization and ensure that all executives are prepared to communicate it across departments. Ensure alignment on objectives and create a clear path to success by setting transformation-focused goals for each department.

People debt can be a huge barrier to successful digital transformation. To help mitigate this issue, offer leadership development opportunities that focus on knowledge for digital transformation and coaching programs to help manage employees in their new mode of working.

It also may be necessary to redefine roles at the organization to support digital transformation. This can help clarify what the roles will look like in the digital-first environment. Companies that integrate this practice in their digital transformation plan are 1.5 times more likely to succeed. Culture, after all, starts at the top of all organizations.

2. Define the customers’ needs

Although digital transformation manifests as a technological change, the real driver for the change is customers' needs.

Among obvious concerns customers express are security, mobile and digital experiences, and digital support. Conduct research and speak with the entire organization to identify pain points internally and externally that are impacting the customer experience.

Currently, only 28% of organizations are starting their digital transformation initiatives with clients as a priority. By placing emphasis on customers’ needs and looking for solutions that directly impact their experiences, the enterprise will have a unique advantage over other companies that are working on digital transformation.

3. Break down silos

Although not all departments may reap the benefits of digital transformation at once, if they understand that the client needs come first and feel as though they have input on how to create change, there is a better chance that the transformation will be smooth. Collaboration on the unified vision will be key to supporting the goals of the company.

Employees, much like executives, will also have to understand that they will be asked to work in new ways. Emphasize agility in the work environment; the ability for employees to adapt and change will be pivotal to the success of the company. Also, encourage employees to find new ways to work that support the path to digital transformation.

4. Break down goals

Implementing an entirely new digital strategy can be overwhelming. Discuss and decide upon key priorities with the organization and with stakeholders.

From there, break down those goals into smaller stepping stones that are easily achievable and work towards the overall goals of the transformation. This way, everyone knows the task at hand and can focus on achieving those smaller goals.

Communicate these small victories with the team to raise morale and ensure that they know that, although the goals are lofty, they are achievable.

Finding success

Over the years, $4.7 trillion has been invested across all industries in digital transformation initiatives, yet only 19% of customers report seeing the results of these transformations. This is because companies are failing to consider a key element of transformation: putting the customers' needs first.

In order to retain clients and improve client satisfaction, CIOs need to have a plan in place that addresses customer concerns. Successful digital transformation will not come from a moment of panic: it requires proactive preparation.

Introducing Aviator DLT by TXMQ’s DTG

In 2017, TxMQ’s Disruptive Technologies Group created Exo – an open-source framework for developing applications using the Swirlds Hashgraph consensus SDK. Our intention for Exo was to make it easier for Java developers to build Hashgraph and other DLT applications. It provided an application architecture for organizing business logic, and an integration layer over REST or Java sockets. Version 2 of the framework introduced a pipeline model for processing transactions and the ability to monitor transactions throughout their life cycle over web sockets.

TxMQ used Exo as the basis of work we've delivered for our customers, as well as the foundation for a number of proofs-of-concept. As the framework continued to mature, we began to realize its potential as the backbone for a private distributed ledger platform.

By keeping the programming and integration model consistent, we are able to offer a configurable platform that is friendlier to enterprise developers who don’t come from a distributed ledger background. We wanted developers to be able to maximize the investment they’ve made in the skills they already have, instead of having to tackle the considerable learning curve associated with new languages and environments.

Enterprises, like developers, also require approachability – though from a different perspective. Enterprise IT is an ecosystem in which any number of applications, databases, APIs, and clients are interconnected. For enterprises, distributed ledger is another tool that needs to live in the ecosystem. In order for DLT to succeed in an enterprise setting, it needs to be integrated into the existing ecosystem. It needs to be manageable in a way that fits with how enterprise IT manages infrastructure. From the developer writing the first line of code for their new DApp all the way down to the team that manages the deployment and maintenance of that DApp, everyone needs tooling to help them come to grips with this new thing called DLT. And so the idea for Aviator was born!

Aviator is an application platform and toolset for developing DLT applications. We like to say that it is enterprise-focused yet startup-ready. We want to enable the development of private ledger applications that sit comfortably in enterprise IT environments, flattening the learning curve for everyone involved.

There are three components of Aviator: The core platform, developer tools, and management tools.

Think of the core platform as an application server for enterprise DApps. It hosts your APIs, runs your business logic, handles security, and holds your application data. Each of those components is meant to be configurable so Aviator can work with the infrastructure and skill sets you already have. We'll be able to integrate with any Hibernate-supported relational database, plus NoSQL datastores like MongoDB or CouchDB. We'll be delivering smart contract engines for languages commonly used in enterprise development, like JavaScript, Java, and C#. Don't worry if you're a Solidity or Python developer, we have you on our radar too. The core platform will provide a security mechanism based on a public key infrastructure, which can be integrated into your organization's directory-based security scheme or PKI if one is already in place. We can even tailor the consensus mechanism to the needs of an application or enterprise.

Developing and testing DApps can be complicated, especially when those applications are integrated into larger architectures. You're likely designing and developing client code, an API layer, business logic, and persistence. You're also likely writing a lot of boilerplate code. Debugging an application in a complicated environment can also be very challenging. Aviator developer tools help to address these challenges. Aviator can generate a lot of your code from OpenAPI (Swagger) documents in a way that's designed to work seamlessly with the platform. This frees developers to concentrate on the important parts and cuts down on the number of bugs introduced through hand-coding. We've got tools to help you deploy and test smart contracts, and more tools to help you look at the data and make sure everything is doing what it is supposed to do. Finally, we're working on ways to deliver those tools the way developers will want to use them, whether that's through integrations with existing IDEs like Visual Studio Code or Eclipse, or in an Aviator-focused IDE.

The work doesn’t end when the developers have delivered. Deploying and managing development, QA, and production DLT networks is seriously challenging. DLT architectures include a number of components, deployed across a number of physical or virtual machines, scaled across a number of identical nodes. Aviator aims to have IT systems administrators and managers covered there as well. We’re working on a toolset for visually designing your DLT network infrastructure, and a way to automatically deploy that design to your physical or virtual hardware. We’ll be delivering tooling to monitor and manage those assets through our own management tooling, or by integrating into the network management tooling your enterprise may already have. This is an area where even the most mature DLT platforms struggle, and there are exciting opportunities to lower friction when managing DLT networks through better management capabilities.

So what does this all mean for Exo, the framework that started our remarkable journey? For starters, it’s getting a new name and a new GitHub. Exo has become the Aviator Core Framework, and can now be found on TxMQ’s official GitHub at https://github.com/txmq. TxMQ is committed to maintaining the core framework as a free, open source development framework that anyone can use to develop applications based on Swirlds Hashgraph. The framework is a critical component of the Aviator Platform, and TxMQ will continue to develop and maintain it. There will be a path for applications developed on the framework to be deployed on the Aviator Platform should you decide to take advantage of the platform’s additional capabilities.

For more information on Aviator, please visit our website at http://aviatordlt.com and sign up for updates.

Which of these MQ mistakes are you making?

IBM MQ Tips Straight From TxMQ Subject Matter Experts

At TxMQ we have extensive experience helping clients with their IBM MQ deployments, including planning, integrations, management, upgrades, and enhancements. IBM MQ is a powerful tool that quietly makes a difference in your life every day, but most of us only notice it when it's not working, and one small mistake can cause havoc across your entire system. Throughout the years we've seen just about everything, and we've found that the most common mistakes are easy to avoid with a little insight. Here are a few tips to keep you up and running:

1. Don’t use MQ as a database. MQ is for moving messages from one application or system to another; a database is the repository of choice for data at rest. MQ resources are optimized for data throughput and efficient message delivery, and IBM MQ performs best when messages are kept small.

2. Don’t expect assured delivery if you didn’t design for it! IBM MQ provides assured once-only message delivery through message persistence settings and advance planning in the application integration design. That planning includes poison-message handling, which could otherwise cause failures or, worse, unexpected results. Know your business requirements for message-delivery quality of service and make sure your integration design accommodates the advanced settings and features as appropriate.

3. Don’t give every application its own queue manager! Sometimes yes, sometimes no; learn how to analyze what is best for your needs. Optimize your messaging architecture for shared messaging to control infrastructure costs and make the best use of operational support resources.

4. Don’t fall out of support. While TxMQ can offer support for out-of-support products, it’s costly to let support lapse on current products, and even more so if you have to play catch-up!

5. Don’t forget monitoring! MQ is powerful and stable, but if a problem occurs, you want to know about it right away. Don’t wait until your transmission queues (XMITQs) and application queues fill up and bring the queue manager down before you respond!
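As a sketch, tips 2 and 5 translate directly into MQSC commands; the queue names below are hypothetical, and the consuming application (or its JMS layer) must still honor the backout attributes:

```
* Tip 2: default new messages on this queue to persistent delivery
ALTER QLOCAL('APP.ORDERS') DEFPSIST(YES)

* Tip 2: after 3 failed delivery attempts, route the message to a
* backout queue instead of redelivering a poison message forever
ALTER QLOCAL('APP.ORDERS') BOTHRESH(3) BOQNAME('APP.ORDERS.BACKOUT')

* Tip 5: spot-check current depth against maximum depth
DISPLAY QLOCAL('APP.ORDERS') CURDEPTH MAXDEPTH

* Tip 5: emit a queue-depth-high event when the queue passes 80% full
* (performance events must be enabled at the queue manager level)
ALTER QMGR PERFMEV(ENABLED)
ALTER QLOCAL('APP.ORDERS') QDEPTHHI(80) QDPHIEV(ENABLED)
```

Queue-depth events like these let your monitoring tooling react before a full queue takes the queue manager down, rather than after.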

In the cloud, on mobile, on-prem or in the IoT, IBM MQ simplifies, accelerates, and facilitates security-rich data exchange. Keep your MQ running smoothly, reach out to talk with one of our Subject Matter Experts today!

If you like our content and would like to keep up to date with TxMQ, don’t forget to sign up for our monthly newsletter.

Contemplations of Modernizing Legacy Enterprise Technology

What should you think about when modernizing your legacy systems?

Updating and replacing legacy technology takes a lot of planning and consideration. It can take years for a plan to become fully realized. Often poor choices during the initial planning process can destabilize your entire system, and it’s not unheard of for shoddy strategic technology planning to put an entire organization out of business.

At TxMQ we play a major role in helping our clients plan, integrate, and maintain legacy and hybrid systems. I’ve outlined a few areas to think about in the course of planning (or in some cases re-planning) your legacy modernization.

1. The true and total cost of maintenance
2. Utilize technology that integrates well with other technologies and systems
3. Take your customer’s journey into consideration
4. Ensure that technical debt doesn’t become compounded
5. Focus on fixing validated issues
6. Avoid technology and vendor lock-in

1. The true and total cost of maintenance

Your ultimate goal may be to replace the entire system, but taking that first step typically means moving to a hybrid environment.

Hybrid environments use multiple systems and technologies for various processes. They can be extremely effective, but they are difficult to manage by yourself. If you are a large corporation with seemingly endless resources and an agile staff with an array of skill sets, you may be prepared. The reality, however, is that most IT departments are on a tight budget, with people multitasking and working far more than 40 hours a week just to maintain current systems.

These days most IT departments just don’t have the resources. That’s why so many organizations are moving to managed IT services to mitigate costs, take back some time, and become more agile in the process.

When you’re deciding to modernize your old legacy systems, you have to consider the actual cost of maintaining multiple technologies. As new tech enters the marketplace and older technologies and applications move toward retirement, so do the people who historically managed those technologies for you. It’s nearly impossible today to find a person willing to put time into learning technology that’s on its last legs. It’s a waste of time for them, and a huge drain on time and financial resources for you. It’s like learning to fix a steam engine instead of a modern electric motor: a fun hobby, perhaps, but it will probably never pay the bills.

You can’t expect newer IT talent to accept work that means refining skills that will soon no longer be needed, unless you’re willing to pay them a hefty sum not to care. Even then, it’s a short-term answer, and you shouldn’t expect them to stick around for long, so always have a backup plan. It’s also good to have someone on call who can help in a pinch and provide fractional IT support when needed.

2. Utilize technology that integrates well with other technologies and systems.

Unless you’re looking to rip and replace your entire system, you need to ensure that the new plays well with the old. Spoiler alert: different technologies and systems often don’t play well together.

Just when you think you’ve found the missing piece of software that fixes every problem your business leaders insist they have, you’ll find that integrating it into your existing technology stack is much more complicated than you expected. If you’re going it alone, keep this in mind when planning a project: two disparate pieces of technology often act like two only children playing together. They might get along for a bit, but as soon as you turn your back there’s a miscommunication and the fighting starts.

Take the time to find someone with expertise in integration, preferably a consultant or partner with plenty of resources and experience integrating heterogeneous systems.

3. Take your customer’s journey into consideration

The principal reason any business should contemplate upgrading legacy technology is to improve the customer experience. Many organizations make decisions based on how they will increase profit and revenue without considering how that profit and revenue are made.

If you have an established customer base, improving their experience should be a top priority, because existing customers require minimal effort to retain. But no matter how superior your services or products are, if a competitor offers a smoother customer experience you can be sure your long-time customers will move on. As humans, we almost always take the path of least resistance. If you can improve even one aspect of a customer’s journey, you have chosen wisely.

4. Ensure that Technical Debt doesn’t become compounded


Technical debt is the idea that choosing the simple, low-cost solution now all but guarantees you’ll pay a higher price in the long run. The more often you choose this option, the more the debt accumulates, and in the long run you will be paying more, with interest.

Unfortunately, this is one of the most common mistakes in legacy upgrade projects, and it’s where being frugal will not pay off. If you can convince the decision makers and powers that be of one thing, it should be not to choose exclusively on lower upfront costs; you must account for the total cost of ownership. If you’re going to put in the time and considerable effort, make sure it’s done right the first time, or it could end up costing a lot more.
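The compounding effect can be made concrete with a back-of-the-envelope sketch in Python. Every figure below is an invented assumption for illustration, not a benchmark:

```python
# Illustrative only: all dollar figures and rates are assumptions.
# Quick fix: cheap up front, but maintenance "interest" grows each year.
# Proper fix: costly up front, with flat maintenance afterwards.

def total_cost(upfront, annual_maintenance, growth_rate, years):
    """Total cost of ownership over a number of years."""
    cost = upfront
    maintenance = annual_maintenance
    for _ in range(years):
        cost += maintenance
        maintenance *= 1 + growth_rate  # the debt compounds
    return round(cost)

quick_fix = total_cost(upfront=50_000, annual_maintenance=30_000,
                       growth_rate=0.15, years=5)
proper_fix = total_cost(upfront=150_000, annual_maintenance=20_000,
                        growth_rate=0.0, years=5)

print(quick_fix, proper_fix)
```

Under these assumed numbers, the cheap option already costs more by year five, and because its maintenance keeps growing, the gap widens every year after.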

5. Focus on fixing validated issues

It’s not often, but sometimes when a new technology comes along, blockchain for instance, we become so enamored of the possibilities that we forget to ask: do we need it?

It’s like having a hammer in hand and running around looking for something to nail. Great, you have the tool, but if there’s no obvious problem to fix with it, it’s just a status symbol, and that doesn’t get you very far in business. There is no cure-all technology. You need to outline the problems you have, then prioritize and find the technology that best suits your needs. Problem first, then solution.

6. Avoid technology and Vendor lock-in

After you’ve defined which processes you need to modernize, be very mindful when choosing the technology and vendor to address them. Vendor lock-in is serious and has been the bane of many technology leaders. If you make the wrong choice here, switching later could cost you substantially, even more than the initial project itself.

A good tip here is to look into what competitors are doing. You don’t have to copy what everyone else is doing, but to remain competitive you have to at least be as good as your competitors. Take the time to understand and research all of the technologies and vendors available to you, and ensure your consultant has a good grasp on how to plan your project taking vendor lock-in into account.

Next Steps:

Modernizing a major legacy system may be one of the biggest projects your IT department has ever taken on. There are many aspects to consider and no two environments are exactly the same, but it’s by no means an impossible task. It has been done before, and thankfully you can draw from these experiences.

The best suggestion I can give is to have experience available to guide you through this process. If that’s not accessible within your current team, find a consultant or partner with the needed experience so you don’t have to worry about making the wrong choices and creating a bigger issue than you had in the first place.

At TxMQ we’ve been helping businesses assess, plan, implement, and manage disparate systems for 39 years. If you need any help or have any questions, please reach out today. We would love to hear from you.