How Bank IT Leaders Can Get out of Reactive Mode (and Start Preparing for Tomorrow)

I spend a lot of time talking to IT professionals in banks across the US and Canada. Some large and global, others regional. As the CEO of a technology consultancy that works with financial institutions of all sizes, having varied conversations with our clients is a big part of my job. And I can tell you that almost every single one of them says the same thing: I’m so busy reacting to day-to-day issues that I just don’t have time to really plan for the future.

In other words, they’re always in a reactive mode as they deal with issues that range from minor (slow transaction processing) to major (catastrophic security breaches). But while playing whack-a-mole is critical to any bank, even a small shift in priorities can give CIOs and their teams the room to get ready for tomorrow rather than just focusing on today.

How to get out of reactive mode

Every bank technology person intuitively knows all this, of course, but it’s almost impossible for most to carve out the time to do any real planning. What they need are some ways to break the cycle. To that end, here are just a few suggestions for IT leaders, based on my experiences with bank IT organizations, to get out of reactive mode and start preparing for tomorrow.

Have a clear vision

A clear vision is important in all organizations. Knowing what we’re all marching towards not only helps keep teams focused and unified, but also ensures high morale and a sense of teamwork. The day-to-day menial tasks mean a lot more when understood in the context of the overall goal.

Break projects into smaller projects

As a runner, I’ve participated in my share of marathons, and I can tell you I’ve never started one by telling myself, “Okay, just 26.2 miles to go!” Rather, like most runners, I break the race down into digestible (and mentally palatable!) chunks. First it’s just one mile. Then I work toward the 10K mark (about six miles), and so on, until all that’s left is the final 5K.

Analogously, I’ve seen successful teams in large organizations do amazing things just by breaking huge, company-shifting tasks into smaller projects — smaller chunks that everyone can understand, get behind, and see the end of. Maybe it’s a three-week assessment to kick off a six-month work effort. Or maybe it’s a small development proof of concept before launching a huge software redeployment initiative slated to last months. Whatever the project, making things smaller allows people to enjoy the little successes, and helps keep teams focused.

Get buy-in from company leadership

IT leaders are constantly going to management asking for more money to fund their projects and operations. And a lot of the time, management doesn’t want to give it to them. It’s a frustrating situation for both parties, to be sure, but consider that one of the reasons management might be so reluctant to divert even more money to IT is that you have nothing to show them for all the cash they’ve put into it previously. In their minds, they keep giving you more money, but nothing really changes. You’re still putting out fires and playing whack-a-mole.

If, on the other hand, you’re able to show them a project that will ultimately improve operations (or improve the customer experience, or whatever your goal is) they’ll be a lot more likely to agree. As an IT leader, it’s your job to seek out these projects and bring them to business leaders’ attention.

Implement DevOps methodology

I find a lot of financial institutions are still stuck in the old ways of managing their application lifecycles. They tend to follow an approach — the so-called “waterfall” model — that’s horribly outdated. The waterfall model for building software essentially involves breaking down projects into sequential, linear phases. Each phase depends on the deliverables of the previous phase to begin work. While it sounds straightforward enough, the flaw with the waterfall model is that it doesn’t reflect the way software is actually used by employees and customers in the real world. The reality is, requirements and expectations change even as the application is being built, and a rigid methodology like the waterfall model lacks the responsiveness that’s required in today’s business environment.

To overcome these flaws, we recommend a DevOps methodology. DevOps combines software development with IT operations to shorten application development lifecycles and provide continuous delivery. In essence, DevOps practitioners work to increase communication between software development teams and IT teams, automating those communication processes wherever possible. This collaborative approach allows IT teams to get exactly what they need from development teams, faster, to do their job better. “Fail fast, fail often” is a common mantra. Encourage the failure, learn from it, and then iterate to improve.
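To make the fail-fast idea concrete, here is a minimal sketch of a delivery pipeline that halts at the first failing stage so feedback arrives immediately rather than at the end of a long release cycle. The stage names and checks are invented stand-ins; a real pipeline would run actual test suites and deployment tooling.

```python
# A minimal "fail fast" pipeline sketch: stages run in order, and the
# first failure halts the run so the team gets feedback right away.
# Stage names and checks are illustrative placeholders.

def run_pipeline(stages):
    """Run (name, check) pairs in order; stop at the first failure."""
    for name, check in stages:
        if not check():
            return f"FAILED at {name}"  # fail fast: report and halt
    return "DEPLOYED"

stages = [
    ("unit tests", lambda: True),
    ("integration tests", lambda: True),
    ("post-deploy health check", lambda: True),
]
print(run_pipeline(stages))  # DEPLOYED

# A failing stage stops everything after it from running.
stages[1] = ("integration tests", lambda: False)
print(run_pipeline(stages))  # FAILED at integration tests
```

The point of the structure is cultural as much as technical: a red stage is surfaced loudly and early, which is exactly the failure you want to encourage, learn from, and iterate past.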

DevOps is obviously a radical shift from the way many bank IT professionals are used to making and using enterprise software, and to really implement it right, you need someone well-versed in the practice. But implemented correctly, it has the capacity to kickstart an IT organization that’s stuck in a rut.

Getting ahead

As an IT consultant, I’ve heard every excuse in the book for why your organization can’t seem to get ahead of the day-to-day. But these excuses are just that: excuses. If you’re an IT leader, by definition you have the power to change your organization. You just need to exercise it effectively.

Remember: our world of technology has three pillars: people, process, and technology. No stool stands on two legs, nor does IT. Understand these three complementary components, and you’re well on your way to transforming your organization.

Why Banks Need to Start Thinking Like Tech Companies

Historically, for most Americans (and Canadians), the local bank branch has always been where you go not just to deposit and withdraw cash, but to manage your retirement or savings account, apply for a credit card and secure a home, car or small business loan. Today, however, the bank’s ascendancy is being challenged by the rise of alternative institutions and other scrappy players who are trying to tap into areas that were formerly the exclusive domain of banks. This category of emerging fintech companies includes online-only banks, credit unions, retirement planning apps, online lending marketplaces, peer-to-peer payment platforms and others too numerous to mention. And while banks may have the size advantage, nothing in business lasts forever. Do these Davids have a chance to slay Goliath? And what do the banks need to do to protect themselves from upstart challengers?

Studies indicate these new entities are giving banks a run for their money (no pun intended). The top five U.S. banks, for instance, accounted for only 21% of mortgage originations in 2019, compared to half of mortgages in 2011. Filling the gap are non-bank lenders, which not only offer a convenient, digital-first customer experience, but also tend to approve more applicants. Similar trends can be witnessed in small business loans and personal loans.

It’s not a stretch to say the traditional bank is facing an existential crisis, brought on in part by a long-standing lack of competition. At one point, many towns had just one bank, and that single bank didn’t have to innovate in the face of zero competition. That reality may have bred a decades-long attitude of complacency, which in turn led to a failure to innovate. Retail banks need to rethink pretty much everything. In short, they need to start thinking like a startup—more specifically, a tech startup. Silicon Valley is driven in large part by a philosophy of disruption, innovation and entrepreneurship. Many alternative lenders have been empowered by this philosophy, but that’s not to say traditional banks can’t make use of it, too. Far from it. Here are some ways that banks can start thinking more like tech companies so they can stay competitive against alternative providers.

Embrace lean methodology. 

Startups, by definition, lack the resources of more established businesses, but they don’t let those limitations stifle innovation. In fact, those limitations actually serve to encourage innovation. Lean methodology is an approach to designing and bringing new products to market that is specifically suited to the limited financial resources of startup organizations. First outlined by entrepreneur Eric Ries in “The Lean Startup,” this approach emphasizes building and testing iteratively to reduce waste and achieve a better market fit.

To become vehicles of innovation, banks should consider adopting similar methodologies. I’m not suggesting that they should create artificial obstructions or arbitrary constraints. But no matter the size of the institution, budgets are always going to feel too small—not least of all because product developers for massive institutions need to develop huge products to match. With tried-and-true methodologies for innovation like the Lean Startup out there, scarcity shouldn’t be an excuse for not innovating. 

Fail fast, iterate often. Adopt Agile.

Startups know that rapid iteration cycles mean rapid innovation. That also means embracing a culture of failure: failing to fail means failing to succeed. These are the linchpins of agile and lean methodologies. Perfection is the enemy of progress. Get it done, get it out there in front of the market and then iterate improvements.

Identify opportunities with big data. 

One of the reasons alternative lenders are able to offer such high rates of approval is that they employ state-of-the-art AI and machine learning techniques to get a better picture of their customer than a simple credit or background check can deliver. Well-trained AI algorithms can efficiently comb through a wide body of available data to uncover trends and make predictions about the risk of lending to a given individual with incredible accuracy. 
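As a toy illustration of the idea (and emphatically not any lender’s actual model), the sketch below trains a tiny logistic regression on synthetic applicant data, showing how several features beyond a single credit score can feed a default-risk estimate. Every feature name, weight, and the "ground truth" rule are invented.

```python
import math
import random

# Toy sketch only: synthetic applicants and an invented risk rule.
# Real underwriting models use far richer data, validation, and controls.
random.seed(0)

def make_applicant():
    income = random.uniform(20, 150)        # annual income, in $thousands
    utilization = random.uniform(0.0, 1.0)  # share of available credit in use
    on_time = random.uniform(0.5, 1.0)      # on-time payment ratio
    # Invented rule: heavy utilization and missed payments raise risk.
    risk = 2.5 * utilization - 3.0 * on_time - 0.01 * income + 1.0
    features = [1.0, income / 100, utilization, on_time]  # 1.0 = bias term
    return features, (1 if risk > 0 else 0)

data = [make_applicant() for _ in range(1000)]
w = [0.0] * 4

def predict(x, w):
    """Probability of default under the logistic model."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Batch gradient descent on the logistic log-loss.
for _ in range(200):
    grad = [0.0] * 4
    for x, y in data:
        err = predict(x, w) - y
        for j in range(4):
            grad[j] += err * x[j]
    w = [wi - 1.0 * g / len(data) for wi, g in zip(w, grad)]

# Score two hypothetical applicants: a low-risk and a high-risk profile.
safe = predict([1.0, 0.90, 0.10, 0.98], w)   # $90k, low utilization, pays on time
risky = predict([1.0, 0.30, 0.95, 0.55], w)  # $30k, maxed out, missed payments
print(f"low-risk applicant:  {safe:.2f}")
print(f"high-risk applicant: {risky:.2f}")
```

Even this miniature model ranks the risky profile above the safe one, which is the core of the advantage: more signals, combined systematically, beat a single summary score.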

Online-first lenders have such an advantage here because they’re in a better place to mine that data. What a lot of people forget about data analytics is that the greatest algorithms are only as good as the data you feed them. Businesses, and banks especially, generate millions of data points per day—data that could prove valuable for data mining and other similar uses. However, the majority of this data is unstructured, heterogeneous, and often siloed and difficult to access. Many successful online-first lenders have carefully structured their digital loan applications to be useful for data analytics purposes from the ground up. When nearly 40% of the work of data analytics is gathering and cleaning data, this represents a huge advantage for the fintech startup. 

But traditional banks can take advantage of this, too. Developing online and mobile banking applications to replace old-fashioned paper forms for most activities would set banks up to make better use of that data by ingesting it in a cleaner format. Add in the fact that customers are demanding mobile banking features anyway, and there’s no excuse for not offering customers a more robust set of mobile banking features.
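Here is a sketch of what “clean at the point of capture” can look like: a digital application form validates and normalizes each field on submission, so records land analytics-ready instead of as free text transcribed from paper. The field names and validation rules are invented for illustration.

```python
from dataclasses import dataclass

# Illustrative only: invented fields and rules for a digital loan form.

@dataclass
class LoanApplication:
    ssn_last4: str      # kept as a fixed-width string, never an int
    annual_income: int  # whole dollars, currency symbols stripped
    state: str          # two-letter code, upper-cased

def normalize(raw: dict) -> LoanApplication:
    """Validate a raw form submission; raise ValueError on bad input."""
    ssn = raw["ssn_last4"].strip()
    if not (len(ssn) == 4 and ssn.isdigit()):
        raise ValueError("ssn_last4 must be exactly 4 digits")
    income = int(str(raw["annual_income"]).replace(",", "").replace("$", ""))
    if income < 0:
        raise ValueError("annual_income must be non-negative")
    state = raw["state"].strip().upper()
    if len(state) != 2 or not state.isalpha():
        raise ValueError("state must be a 2-letter code")
    return LoanApplication(ssn_last4=ssn, annual_income=income, state=state)

# Messy but recoverable input comes out as one consistent record.
app = normalize({"ssn_last4": " 1234 ", "annual_income": "$72,500", "state": "ny"})
print(app)  # LoanApplication(ssn_last4='1234', annual_income=72500, state='NY')
```

Because every record that reaches the warehouse has already passed these checks, the downstream analytics team skips much of the gathering-and-cleaning work that consumes so much of a data project.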

Shrink bloated bureaucracy with cross-functional organizations.

Think about all the startups you’ve visited. Did teams operate in silos, constantly blaming other teams for their inability to make progress? Or did they adapt to situations, never believing their roles to be fixed or immutable?

To become the latter kind of organization, traditional banks need to break the cycle of bureaucratic apathy. One way to do that is to have disparate teams work together on projects. Working on shared projects not only helps develop a sense of shared purpose, but it also empowers employees to solve problems in areas outside their traditional wheelhouse. That, in turn, reduces the inefficiency of teams passing the baton from division to division until weeks or months have gone by before the customer’s concern has even been truly considered. Moreover, bringing together different kinds of minds and thinkers encourages the kind of fertile ground in which innovation is known to thrive.

Reports of the bank’s death have been greatly exaggerated.

Ultimately, banks have numerous advantages that they can leverage over most fintech startups. They have their brick-and-mortar retail locations, allowing them to make personal connections with customers that drive loyalty. They’re considered more trustworthy to the average consumer (for the most part). And a lot of people just want to do all their banking at a single bank branch rather than shop around for various piecemeal banking solutions. If banks can innovate their information technology and organizational structures to meet the changing needs of today’s customers, they can continue to dominate the financial market.


Can We Make Zelle Cool?

Let’s face it: people like using cool technology. They want the latest iPhone, the hottest games and the newest social platform. One can argue the relative merits of this obsession, or the need for the brightest and shiniest object, but the reality remains: everyone wants the latest, most up-to-date products and apps. Anyone who remembers moving from Myspace to Facebook knows this reality. And one of today’s “cool” apps is Venmo, the person-to-person payments app that boasts over 40 million users, half of them millennials. That’s a big problem for Zelle®, but it doesn’t have to be.

Digital payments platforms—the person-to-person kind—aren’t especially new. PayPal and Square have been around for more than 10 years, Stripe is newer but still well established, and long-forgotten startups like Brodia and Qpass existed even before then. But somehow Venmo, now a PayPal subsidiary, managed to become so hip that the company name actually became a verb. Think of it as the fun, cool little brother of the PayPal behemoth. In fact, Venmo was so successful that traditional financial institutions started wondering why they were being shut out of the incredibly lucrative mobile peer-to-peer payments market. That’s how we ended up with Zelle, launched in 2017 with the express purpose of beating Venmo at its own game.

At face value, Zelle checks off all the boxes. It was built for a specific task and does pretty much everything it’s supposed to do. But where it has succeeded on the technical side, it’s still coming up short where it really matters: user adoption. Chalk it up to any number of factors, from security concerns to Venmo’s massive head start, but the reality is an inescapable one: Venmo is cool and Zelle isn’t. Venmo is what millennials use while their parents use Zelle, or so the perception goes. That may not matter to C-level executives at banks, whose aspirations run far beyond what is cool, but it’s a huge determinant of success or failure in the marketplace.

The good news for Zelle is that this problem can be fixed. The bad news is that it’s going to take some heavy lifting to change consumer opinion and drive adoption by people who see Zelle in the same way that seventh graders see the assistant principal at the school dance. Even if he has the best moves in the room, he’ll always be a square.

It might be tempting to see this as a marketing problem, but if Zelle is going to gain market traction—and silence the naysayers—it’s going to need to make some technical changes to make it better, and more useful, than Venmo.

First, some low-hanging fruit. Venmo explicitly prohibits money transfers for the purchase of goods or services unless the recipient is a Venmo-verified merchant; it is meant for person-to-person payments. For Zelle, this is a massive opportunity to gain adoption where Venmo fears to tread. And because Zelle is backed by banks, it has the data and the AI capabilities to pull this off with much less risk than an independent player—even one as large as Venmo.

It could also be argued that cool is so intangible as to be ethereal—can a company really set out to make a cool app? Probably not: coolness flows from a product as a result of its utility. But one can win in other ways.

Be friendly. For starters, apps need to be open and inviting. Big banks have a sometimes well-earned reputation for being fiercely competitive and protective of their turf. Millennials—or, more broadly, consumers under age 35—want openness and interoperability. Systems and tools that are perceived as playing nicely with other ecosystems have an advantage. Big banks making the news for making it harder for their customers to set up Venmo is not the way to win favor with the target customers. Adopting Open Banking is.

Be open. Give consumers a choice. The banks that combined forces to create Zelle are all but forcing their customers to use it. With more banks in the queue to make Zelle their primary P2P payments tool, more people will become Zelle users by default, not by choice. As a marketing strategy, this won’t work. Yes, adoption will grow, but customer satisfaction will drop. If the end game is broader use of the bank and all of its products and services, Zelle is but one tool among many, not the end-all, be-all. Banks would do well to focus on the war to gain customers, not the battle to win users.

Simplify. When in doubt, software engineers and designers should focus on this one point. Although under-35-year-olds are known to be capable of understanding and learning technology easily, this doesn’t mean that they want to spend hours working through the app interface. When designing the app, remember that less is more. Simplify, eliminate options and remember KISS.

Integrate. Better yet, integrate with social media. Yes, Zelle has tried to copy this element of Venmo’s success, but somehow it feels awkward. Social media integration needs to be a priority, and it needs to be done well, something that Zelle still seems to be missing.

In the person-to-person payments discussion, there are many factors to consider. In the end, though, this is just a skirmish in the broader contest for consumer-banking relationships. By forcing users to adopt, and adapt to, Zelle, banks are taking away agency from consumers—which all but guarantees Zelle will never become the “cool” payments app. Banks need to consider that larger picture, which goes far beyond which app wins the battle for cool. This is not likely to be a winner-take-all situation, but rather a gradual war of attrition, with many winners and more losers.

Banks would do well to remember this.

Chuck Fried is president and chief executive of TxMQ. Prior to TxMQ, Chuck founded multiple businesses in the IT and other technology services spaces. Before that, he served in IT leadership roles in both the public and private sectors. In 2020, he was named an IBM Champion, affirming his commitment to IBM products, offerings and solutions.


IBM Db2 v10.5 End of Support: April 30, 2020

Are you still running IBM Db2 v10.5 (or an even earlier version)?

IBM has announced an end-of-support date for Db2 v10.5 for Linux, Unix, and Windows. If you are still using that version, we recommend upgrading to v11.1 to avoid the security issues that can arise in earlier, unsupported versions of Db2.
Db2 v11.1 Highlights:

  • Column-organized table support for partitioned database environments
  • Advances to column-organized tables
  • Enterprise encryption key management
  • IBM Db2 pureScale Feature enhancements
  • Improved manageability and performance
  • Enhanced upgrade performance from earlier versions 

What are your plans for Db2?

I plan to upgrade:

It’s never too late to start planning your upgrade, and upgrading to IBM Db2’s newest release, v11.1, is a great option. It includes great new features that help you manage costs, improve efficiency, and enhance manageability.
Take a closer look here at some of the enhancements.  
If you are still considering your plans, now’s a great time to speak with our SME Integration Upgrade Team. Reach out to us today to set up a free Discovery Session, or contact us directly with any questions.

I would like to continue to use v10.5 (or earlier versions):

It’s okay if you’re not ready for the newest version of Db2 just yet. However, it’s important to remember that without support you may be exposed to avoidable security risks and higher support costs. IBM does offer Extended Premium Support, but be prepared: that option is very expensive.
Alternatively, as an IBM Business Partner TxMQ offers expert support options. As a business partner, we have highly specialized skills in IBM software. We can help guide you through issues that may arise at a fraction of the cost with the added benefit of flexibility in services. Check out more on TxMQ’s Extended Support Page.

I will support it internally:

If you have an amazing in-house team, odds are they don’t have much time to spare. Putting the weight of a big support project on your internal team cuts into their productivity, and for many organizations it limits the team’s ability to focus on innovation and improving the customer experience. That will make your competitors happy, but your customers and clients definitely won’t be.
Utilizing a trusted partner like TxMQ can help cut costs and give back some time to your internal team to focus on improvements and not just break/fix maintenance. Reach out to our team and learn how we can help maintain your existing legacy applications and platforms so your star team can focus on innovation again. Reach out and ask how we can help today.

I don’t know, I still need help!

Reach out to TxMQ today and schedule a free Discovery Session to learn what your best options are!

Bringing Offshore Contracts Back Onshore

Although many companies rely on offshore technology teams to save costs and build capacity, there are still many challenges around outsourcing. Time zone issues, frequent staff turnover, difficulty managing workers, language barriers—the list goes on and on. Offshore workers can allow companies to save money. But what if offshore pricing was available for onshore talent? What if the best of both worlds – an easily managed workforce at a competitive cost – were possible? In fact, it is.

For all the pains and issues related to building global technology teams, outsourcing remains a viable option for many companies that need to build their engineering groups while controlling costs. With the U.S. and Europe making up almost half of the world’s economic output, but only 10% of the world’s population, it’s no secret that some of the world’s best talent can be found in other countries. That’s how countries such as India, China, and Belarus have become global hubs for engineering. And why not? They have great engineering schools, low costs of living, and large numbers of people who are fully qualified to work on most major platforms.

Reinventing Outsourcing 

This is basic supply and demand: companies want to hire people at as competitive a price point as possible without sacrificing quality. This is exactly how Bengaluru and Pune became technology juggernauts in the 1990s, and how Minsk became a go-to destination a decade later. The problem, of course, is that what was once a well-kept secret became well known…and wages started creeping up.

With salaries increasing in the countries that typically supply offshore talent, the cost of offshore labor is on the rise. In India, a traditional favorite for offshore hiring, annual salaries have been rising by roughly 10% a year since 2015, making it less beneficial for companies to hire workers there. In fact, in one of the biggest outsourced areas, call centers, workers in the U.S. earn on average only 14% more than outsourced workers. In the next few years, the gap will be narrow enough that setting up a call center in Ireland or India just won’t make sense. What the laws of supply and demand can give, they can also take away. That’s why “outsource it to India” is no longer an automatic move for growing technology companies, financial institutions, and other businesses looking to rapidly grow their teams. It’s also why major Indian outsourcing companies such as Wipro and Infosys are diversifying into other parts of the world.

As political and economic instability grow, moving a company’s outsourced work domestically can help to mitigate the risks of an uncertain landscape. A perfect example of this is China. Hundreds of American companies have set up development offices in China to take advantage of a skilled workforce at a low price point. So far so good, right? Well, not really. Due to concerns about cybersecurity and intellectual property theft, companies such as IBM have mandated that NONE of their code can come from China. All of a sudden, setting up shop in Des Moines is a lot more attractive than going to Dalian.

The federal government, as well as many states and municipalities, is also playing an active role in keeping skilled technology jobs at home through grants and tax breaks. New programs and training schools are also emerging, helping to build U.S. talent at lower cost and helping companies tap talented workers outside of the large cities. Hiring 100 engineers in midtown Manhattan might not be cost-effective, but places like Phoenix and Jacksonville allow companies to attract world-class talent without breaking the bank.

This doesn’t mean the end of offshoring, of course. When looking for options to handle mainframe support and legacy systems services (SPARC, AIX, HP-UX, and plenty of back-leveled IBM, Oracle and Microsoft products), the lure of inexpensive offshore labor often wins. Unlike emerging technologies, legacy systems do not require constant updates and front-end improvements to keep up with competitors, so the typical issues that plague offshore outsourcing aren’t as big a problem when legacy systems are involved. So where does it make sense to build teams, or hire contractors, domestically?

Domestic Offshoring (sometimes called near-shoring)

There is a key difference between outsourcing development to overseas labs and building global teams, but the driving force behind both approaches is pretty much the same: cut costs while preserving quality. Working with IT consulting and staffing companies like TxMQ is a prime example of how businesses can take advantage of outsourcing onshore without going into the red. Unlike technology hubs such as Silicon Valley, these companies are typically located in areas such as the Great Lakes region, where outstanding universities (and challenging weather!) yield inexpensive talent thanks to lower living costs. With aging populations creating a need for skilled workers in the Eastern United States, more states are introducing benefits to attract workers. This is already creating an advantage for companies that provide outsourced staffing, because they can charge lower prices than firms in traditional technology hubs. It’s the perfect mix of ease, quality, and cost.

Global 2000 companies face challenges resulting from their large legacy debt, and the costs to support their systems are high. As they struggle to transform and evolve their technology to today’s containerized, API-enabled, microservices-based world, they need lower-cost options to both support their legacy systems and build out new products.

While consulting and staffing companies are well known for transformational capabilities and API enablement, there are other advantages that aren’t as well known. Beyond these transformational services, many such companies also support older, often monolithic, applications, including those likely to remain on the mainframe forever. From platform support on IBM Power systems to complete mainframe management and full support for most IBM back-leveled products, companies like TxMQ have found a niche providing economical support for enterprise legacy systems, including most middleware products of today, and yesterday. This allows companies to invest properly in their enterprise transformation while maintaining their critical legacy systems.

The Future of Work

In a 2018 study of IT leaders and executives, more than 28 percent planned to increase their onshore spending in the next year. With the ability to move work online, companies can support outsourced teams easily, whether onshore or offshore. And as the pay gap between the U.S. and other nations closes, employing outsourced talent onshore lets companies sidestep age-old issues such as time zones and language barriers, and benefit from outsourcing without having to fly 15 hours across two oceans to do it.

Tackling digital transformation proactively — before a crisis hits

This article was originally published by CIO Dive on 12/9/19.

Digital transformation is the headline driver in most enterprises today, as companies realize that in order to stay relevant, engage current and new customers and thrive, they need to constantly reevaluate their technology stack.

Unfortunately, real transformation is an intensive process that is neither easy nor smooth. Digital transformation tends to take place reactively, whether it’s in response to losing a customer as new competition arises or as a way to manage issues.

But reactive digital transformation tends to be unsustainable because without a real strategy no one knows what they are trying to change and why. The “why” needs to evolve from what the customer expects and what can be implemented in order to retain business.

In a data-driven age, digital transformation needs to ensure that customers can access what they need, when, how and where they want, while still keeping their information safe. The fundamental structure of how data is gathered, compiled, used and distributed needs to evolve.

As new competitors arrive with the latest disruptive technology, companies must deal with their legacy debt and the associated costs, which prohibit quick upgrades to systems and processes (not to mention the critical retraining of staff).

CIOs are aware that this change is necessary and 43% think legacy and core modernization will have the largest impact on businesses over the next three years. But knowing it’s needed and knowing how to prepare require different mindsets.

In order to ensure they are addressing change effectively, CIOs need to prepare for digital transformation proactively — before a crisis arises.

Proactive digital transformation: The cost of legacy debt

There are three types of debt impacting most companies: process, people and technical.

Process debt refers to the appropriate frameworks needed to operate using modern systems and is highly important to address as it has a cascading effect into other types of debt.

A new type of workforce is needed to negotiate emerging trends and new technologies, and a lack of highly skilled workers is referred to as people debt. Similarly, existing staff must be retrained on new technologies and processes.

Technical debt involves legacy software, computers and tools that have become outdated, and are expensive to manage or upgrade. Companies often find the cost of maintaining legacy technologies is so great it diverts resources away from modernizing, evolving or adapting the business.

The cost of legacy debt is more than just the amount of money and time it will take to upgrade to the latest solutions. It impacts all parts of the business, including the productivity of the company.

Companies with high technical debt are 1.6 times less productive than companies with low or no legacy debt. Upgrading is clearly necessary to drive better business results.

In order to address all types of debt, companies need to foster an environment where people are willing to learn and update processes in order to modernize.

Preparing for a digital transformation

Some 20% of companies fail to achieve their digital transformation goals, so preparation is key to finding success. Here are some integral steps companies can take to prepare for the transformation.

1. Involve executives

Although the CIO may be spearheading the digital transformation initiative, all C-level executives should be involved from the start to align goals.

Establish a consistent, clear story across the organization and ensure that all executives are prepared to communicate it across departments. Align on objectives and create a clear path to success by setting transformation-focused goals for each department.

People debt can be a huge barrier to successful digital transformation. To help mitigate this issue, offer leadership development opportunities that focus on knowledge for digital transformation and coaching programs to help manage employees in their new mode of working.

It also may be necessary to redefine roles at the organization to support digital transformation. This can help clarify what the roles will look like in the digital-first environment. Companies that integrate this practice in their digital transformation plan are 1.5 times more likely to succeed. Culture, after all, starts at the top of all organizations.

2. Define the customers’ needs

Although digital transformation manifests as a technological change, the real driver for the change is customers’ needs.

Among obvious concerns customers express are security, mobile and digital experiences, and digital support. Conduct research and speak with the entire organization to identify pain points internally and externally that are impacting the customer experience.

Currently, only 28% of organizations are starting their digital transformation initiatives with clients as a priority. By placing emphasis on customers’ needs and looking for solutions that directly impact their experiences, the enterprise will have a unique advantage over other companies that are working on digital transformation.

3. Break down silos

Not all departments may reap the benefits of digital transformation at once. But if they understand that the customer’s needs come first, and feel they have input on how to create change, there is a better chance the transformation will be smooth. Collaboration on the unified vision will be key to supporting the company’s goals.

Employees, much like executives, will also have to understand that they will be asked to work in new ways. Emphasize agility in the work environment; the ability for employees to adapt and change will be pivotal to the success of the company. Also, encourage employees to find new ways to work that support the path to digital transformation.

4. Break down goals

Implementing an entirely new digital strategy can be overwhelming. Discuss and decide upon key priorities with the organization and with stakeholders.

From there, break down those goals into smaller stepping stones that are easily achievable and work towards the overall goals of the transformation. This way, everyone knows the task at hand and can focus on achieving those smaller goals.

Communicate these small victories with the team to raise morale and ensure that they know that, although the goals are lofty, they are achievable.

Finding success

Over the years, $4.7 trillion has been invested across all industries in digital transformation initiatives, yet only 19% of customers report seeing the results of these transformations. This is because companies fail to consider a key element of transformation: putting the customers’ needs first.

In order to retain clients and improve client satisfaction, CIOs need to have a plan in place that addresses customer concerns. Successful digital transformation will not come from a moment of panic: it requires proactive preparation.

Considerations for Modernizing Legacy Enterprise Technology

What should you think about when modernizing your legacy systems?

Updating and replacing legacy technology takes a lot of planning and consideration, and it can take years for a plan to be fully realized. Poor choices during the initial planning process can destabilize your entire system, and it’s not unheard of for shoddy strategic technology planning to put an entire organization out of business.

At TxMQ we play a major role in helping our clients plan, integrate, and maintain legacy and hybrid systems. I’ve outlined a few areas to think about in the course of planning (or in some cases re-planning) your modernization of legacy systems.

1. The true and total cost of maintenance
2. Utilize technology that integrates well with other technologies and systems
3. Take your customer’s journey into consideration
4. Ensure that technical debt doesn’t become compounded
5. Focus on fixing validated issues
6. Avoid technology and vendor lock-in

1. The true and total cost of maintenance

Your ultimate goal may be to replace the entire system, but taking that first step typically means moving to a hybrid environment.

Hybrid environments use multiple systems and technologies for various processes. They can be extremely effective, but they are difficult to manage on your own. If you are a large corporation with seemingly endless resources and an agile staff with an array of skill sets, you may be prepared. The reality, however, is that most IT departments are on a tight budget, with people multitasking and working far more than 40 hours a week just to maintain current systems.

These days most IT departments just don’t have the resources. This is why so many organizations are moving to managed IT services to mitigate costs, reclaim time, and become more agile in the process.

When you’re deciding to modernize your old legacy systems you have to take into consideration the actual cost of maintaining multiple technologies. As new tech enters the marketplace, and older technologies and applications are moving towards retirement, so are the people who historically managed those technologies for you. It’s nearly impossible today to find a person willing to put time into learning technology that’s on it’s last leg. It’s a waste of time for them, and will be a huge drain on time and economical resources for you. It’s like learning how to fix a steam engine over learning more modern electric engines. I’m sure it’s a fun hobby but it it will probably never pay the bills.

You can’t expect newer IT talent to accept work that means refining and utilizing skills that will soon no longer be needed, unless you’re willing to pay them a hefty sum not to care. Even then, it’s just a short term answer and don’t expect them to stick around for long, always have a back up plan. Also, it’s always good to have someone on call that can help in a pinch and provide needed fractional IT Support.

2. Utilize technology that integrates well with other technologies and systems.

Unless you’re looking to rip and replace your entire system, you need to ensure that the new plays well with the old. Spoiler alert!, different technologies and systems often don’t play well together.

Just when you think you’ve found the missing piece of software that fixes all the problems your business leaders insist they have, you’ll find that integrating it into your existing technology stack is far more complicated than you expected. If you’re going it alone, keep this in mind when planning a project: two disparate pieces of technology often act like two only children playing together. Sure, they might get along for a bit, but as soon as you turn your back there’s a miscommunication and they start fighting.

Take the time to find someone with expertise in integrations, preferably a consultant or partner with plenty of resources and experience integrating heterogeneous systems.

3. Take your customer’s journey into consideration

The principal reason any business should contemplate upgrading legacy technology is to improve the customer experience. Many organizations make decisions based on how a change will increase profit and revenue without considering how that profit and revenue are made.

If you have an established customer base, improving their experience should be a top priority, because existing customers require minimal effort to retain. But no matter how superior your services or products are, if a competitor offers a smoother customer experience, you can be sure your long-time customers will move on. As humans, we almost always take the path of least resistance. If you can improve even one aspect of a customer’s journey, you have chosen wisely.

4. Ensure that technical debt doesn’t become compounded

Technical debt is the idea that choosing the simple, low-cost solution now all but ensures you pay a higher price in the long run. The more often you choose this option, the more the debt grows, and in the long run you will be paying more, with interest.

Unfortunately, this is one of the most common mistakes when undertaking a legacy upgrade project. This is where being frugal will not pay off. If you can convince the decision makers and powers that be of one thing, it should be to not choose exclusively based on lower upfront costs. You must take into account the total cost of ownership. If you’re going to take time and considerable effort to do something you should always make sure it’s done right the first time, or it could end up costing a lot more.
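The “interest” in this metaphor can be made concrete with a toy total-cost-of-ownership model. Every number below is invented for illustration, not drawn from data:

```python
def total_cost(years: int, upfront: float, maintenance: float,
               growth_rate: float) -> float:
    """Cumulative cost of ownership: an upfront spend plus annual
    maintenance that compounds by growth_rate each year (the
    'interest' paid on technical debt)."""
    total = upfront
    annual = maintenance
    for _ in range(years):
        total += annual
        annual *= 1 + growth_rate
    return total

# The cheap option costs less today, but its maintenance compounds fast;
# the done-right option costs more upfront and stays nearly flat.
cheap_now = total_cost(10, upfront=100_000, maintenance=40_000, growth_rate=0.15)
done_right = total_cost(10, upfront=250_000, maintenance=20_000, growth_rate=0.02)
```

Over a ten-year horizon, the compounding maintenance on the cheap option overtakes the larger upfront spend, which is exactly the trap this section warns about.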

5. Focus on fixing validated issues

It’s not often, but sometimes when a new technology comes along, like blockchain for instance, we become so enamored by the possibilities that we forget to ask, do we need it?

It’s like having a hammer in hand and running around looking for something to nail. Great you have the tool, but if there’s not an obvious problem to fix it with then it’s just a status symbol and that doesn’t get you very far in business. There is no cure-all technology. You need outline what problems you have, then prioritize to find the right technology that best suits your needs. Problem first, then Solution.

6. Avoid technology and vendor lock-in

After you’ve defined what processes you need to modernize be very mindful when choosing the right technology and vendor to fix that problem. Vendor lock-in is serious and has been the bane of many technology leaders. If you make the wrong choice here, it could end up costing you substantially to make a switch. Even more than the initial project itself.

A good tip here is to look into what competitors are doing. You don’t have to copy what everyone else is doing, but to remain competitive you have to at least be as good as your competitors. Take the time to understand and research all of the technologies and vendors available to you, and ensure your consultant has a good grasp on how to plan your project taking vendor lock-in into account.

Next Steps:

Modernizing a major legacy system may be one of the biggest projects your IT department has ever taken on. There are many aspects to consider and no two environments are exactly the same, but it’s by no means an impossible task. It has been done before, and thankfully you can draw from these experiences.

The best suggestion I can give is to have experience available to guide you through this process. If that’s not accessible within your current team, find a consultant or partner with the needed experience, so you don’t make the wrong choices and end up creating a bigger issue than you had in the first place.

At TxMQ we’ve been helping businesses assess, plan, implement, and manage desperate systems for 39 years. If you need any help or have any question please reach-out today. We would love to hear from you.

Managed Services: Regain focus, improve performance and move forward.

Managed IT Services Help You Improve Your Team’s Performance.

There is no such thing as true multitasking. Research consistently shows that if you spread your focus too thin, something is going to suffer. Just like a computer system trying to run too many tasks at once, you’re going to slow down, create errors, and not perform as expected.

You must regain your focus in order to improve performance and move forward.

The fact is, no matter what your industry or business is today, your success depends on your IT team staying current, competent, and competitive. Right now there is more on your IT department’s plate than ever before. Think about how much brainpower and talent you’re misusing when your best IT people spend the bulk of their effort just managing the day-to-day. Keep in mind that most of these issues can be easily fixed, and even avoided, with proper preparation.

How do you continuously put out fires, keep systems running smoothly, and still have time to plan for the future?

As the legendary Ron Swanson once said, “Never half-ass two things. Whole-ass one thing.”

Don’t go at it alone when you can supplement your team and resources.

Overworked employees have higher turnover (-$), make more mistakes (-$), and work slower (-$)(-$). This is costing you more than you can ever fully account for, though your CFO may try. Think about those IT stars that are so important to the success of your business. You may just lose them, starting another fire to put out when you’re trying to gain, train, and grow.

Managed IT Services Can Put You Back on Track!

No one knows your business better than you, and I’m just guessing, but I bet you’ve gotten pretty good at it by now. However, as any good leader or manager knows, without focus on the future you could lose out as your industry changes. And if you hadn’t noticed, it’s already changing.

To remain competitive, you need an advantage that can help you refocus your team and let you do you, because that’s what you do best.

At TxMQ we are not experts in your business, and we would never claim to be. Our Managed Services mission is to take care of the things you don’t have the resources, expertise, or time for, and to make them run at their best, so you can refocus your full attention on improving your business.

Whether you’re producing widgets to change the world, a life-saving drug, or healthy food for the masses, you don’t have to spread yourself thin. We monitor, manage, and maintain the middleware systems and databases that power your business, and as a provider we are technology- and systems-agnostic.

What we do is nothing you can’t do yourself, and perhaps already are doing. But when resources are scarce, piling extra work on your existing team can cost you more than it needs to. TxMQ’s Managed Services teams fill the gaps in your existing IT resources to strengthen and solidify your systems, so you can focus on everything else.

TxMQ’s Managed Services team helps you refocus, so can concentrate on growth and tackling your industry and business challenges.

If you’re interested in getting more sleep at night and learning more about our Managed Services please reach out or click below for more info.

Learn About Managed Services and Support With TxMQ: Click Here!

North America: Don’t Ignore GDPR – It Affects us too!

Hey, North America – GDPR Means Us, Too!

It’s well documented, and fairly well socialized across North America that on May 25th of 2018, the GDPR, or the General Data Protection Regulation, formally goes into effect in the European Union (EU).
Perhaps less well known is how corporations located in North America, and around the world, are actually impacted by the legislation.

The broad stroke is, if your business transacts with and/or markets to citizens of the EU, the rules of GDPR apply to you.

For those North American-based businesses that have mature information security programs in place (such as those following PCI, HIPAA, NIST and ISO standards), your path to compliance with the GDPR should not be terribly long. There will be, however, some added steps needed to meet the EU’s new requirements; steps that this blog is not designed to enumerate, nor counsel on.
It’s safe to say that data protection and privacy is a concern involving a combination of legal, governance, process, and technical considerations. Here is an interesting and helpful FAQ link on the General Data Protection Regulation policies.
Most of my customers represent enterprise organizations, which have a far-reaching base of clients and trading partners. They are the kinds of companies who touch sensitive information, are acutely aware of data security, and are likely to be impacted by the GDPR.
These enterprises leverage TxMQ for, among other things, expertise around Integration Technology and Application Infrastructure.
Internal and external system access and integration points are areas where immediate steps can be taken to enhance data protection and security.

Critical technical and procedural components include (but are not limited to):

  • Enterprise Gateways
  • ESBs and Messaging (including MQ and FTP – also see Leif Davidsen’s blog)
  • Application & Web Servers
  • API Management Strategy and Solutions
  • Technology Lifecycle Management
    • Change Management
    • Patch Management
    • Asset Management

The right technology investment, architecture, configuration, and governance model go a long way towards GDPR compliance.
Tech industry best practices should be addressed through a living program within any corporate entity. In the long run, setting and adhering to these policies protect your business, and save your business money (through compliance and efficiency).
In short, GDPR has given North America another important reason to improve upon our data and information security.
It affects us, and what’s more, it’s just a good idea.

What you need to know about GDPR

What is GDPR?

GDPR is the European Union’s General Data Protection Regulation.
In short, it is known as the ‘right to be forgotten’ rule. The intent of GDPR is to protect the data privacy of European Union (EU) citizens, yet its implications are potentially far-reaching.

Why do EU citizens need GDPR?

In most of the world, individuals have little true awareness of the amount of data stored about them. Some of it is accurate; some is quite the opposite.


If I find an error-strewn rant about my small business somewhere online, my ability to correct it, or even have it removed, is limited to posting a counter-statement or begging whoever owns the content in question to remove it. I have no real legal recourse short of a costly, and likely doomed, lawsuit.
The EU sought to change this for their citizens, and thus GDPR was born.
In December of 2015, the long process of designing legislation to create a new legal framework to ensure the rights of EU citizens was completed. This was ratified a year later and becomes enforceable on May 25th of this year (2018).

There are two primary components to the GDPR legislation.

  1. The General Data Protection Regulation, or GDPR, is designed to enable individuals to have more control of their personal data.

It is hoped that these modernized and unified rules will allow companies to make the most of digital markets by reducing regulation while regaining consumers’ trust.

  2. The data protection directive is a second component.

It ensures that law enforcement bodies can protect the rights of those involved in criminal proceedings, including victims, witnesses, and other parties.

It is also hoped that the unified legislation will facilitate better cross border participation of law enforcement to proactively enforce the laws, while facilitating better capabilities of prosecutors to combat criminal and terrorist activities.

Key components of GDPR

The regulation is intended to establish a single set of pan-European rules, designed to make it simpler to do business across the EU. Organizations are subject to the regulation simply by collecting data on EU citizens, wherever those organizations are based.

Personal Data

Personal data is defined by both the directive and GDPR as information relating to a person who can be identified directly or indirectly in particular by reference to name, ID number, location data, or other factors related to physical, physiological, mental, economic, cultural, or related factors (including social identity).
So, this means many things including IP addresses, cookies, and more will be regarded as personal data if they can be linked back to an individual.
The regulations separate the responsibilities and duties of data controllers and data processors, obligating controllers to engage only those processors that provide “sufficient guarantees to implement appropriate technical and organizational measures” to meet the regulation’s requirements and protect data subjects’ rights.
Controllers and processors are required to “implement appropriate technical and organizational measures” taking into account “the state of the art and costs of implementation” and “the nature, scope, context and purposes of the processing as well as the risk of varying likelihood and severity for the rights and freedoms of individuals”.

Security actions “appropriate to the risk”

The regulations also provide specific suggestions for what kinds of security actions might be considered “appropriate to the risk”, including:

  • The pseudonymization and/or encryption of personal data.
  • The ability to ensure the ongoing confidentiality, integrity, availability, and resilience of systems and services processing personal data.
  • The ability to restore the availability and access to data in a timely manner in the event of a physical or technical incident.
  • A process for regularly testing, assessing and evaluating the effectiveness of technical and organizational measures for ensuring the security of the processing.
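As a toy illustration of the first measure in the list above (a sketch, not a compliance measure in itself), direct identifiers can be pseudonymized by replacing them with keyed hashes, so records remain joinable for analysis while the raw values are held elsewhere. The record fields and key handling below are invented:

```python
import hashlib
import hmac

def pseudonymize(value: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The token is stable for a given input and key, so datasets can
    still be joined, but the raw value is not recoverable without
    the key (which must be stored separately and securely)."""
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# A hypothetical customer record containing personal data
record = {"name": "Jane Doe", "email": "jane@example.com", "balance": 1042.50}
key = b"example-key-kept-in-a-separate-secure-store"

pseudonymized = {
    "name": pseudonymize(record["name"], key),
    "email": pseudonymize(record["email"], key),
    "balance": record["balance"],   # non-identifying fields can stay as-is
}
```

Note that under GDPR pseudonymized data linked to a key is still personal data; the technique reduces risk, it does not remove the obligation.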

Controllers and processors that adhere to an approved code of conduct or an approved certification may use it to demonstrate their compliance.
The controller-processor relationships must be documented and managed with contracts that mandate privacy obligations.

Enforcement and Penalties

There are substantial penalties and fines for organizations that fail to comply with the regulations.
Regulators will now have the authority to issue penalties equal to the greater of €10 million or 2% of the entity’s global gross revenue for violations of record-keeping, security, breach-notification and privacy-impact-assessment obligations. Violations of obligations related to legal justification for processing (including consent), data subject rights, and cross-border data transfers may result in double the above penalties.
It remains to be seen how the legal authorities tasked with this compliance will perform.
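The two fine tiers described above reduce to a simple “greater of” calculation. The revenue figure used below is made up purely for illustration:

```python
def gdpr_penalty_cap(global_revenue_eur: float, severe: bool = False) -> float:
    """Upper bound of a GDPR fine: the greater of a fixed amount or a
    percentage of global gross revenue; the severe tier doubles both."""
    fixed, pct = (20_000_000, 0.04) if severe else (10_000_000, 0.02)
    return max(fixed, pct * global_revenue_eur)

# A hypothetical firm with EUR 2 billion in global gross revenue:
standard_cap = gdpr_penalty_cap(2_000_000_000)             # 2% of revenue
severe_cap = gdpr_penalty_cap(2_000_000_000, severe=True)  # doubled tier
```

For smaller firms the fixed amount dominates, which is why the regulation bites regardless of company size.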

Data Protection Officers

Data Protection Officers must be appointed for all public authorities, and where the core activities of the controller or the processor involve “regular and systematic monitoring of data subjects on a large scale”, or where the entity conducts large scale processing of “special categories of personal data”; personal data such as that revealing racial or ethnic origin, political opinions, religious belief, etc. This likely encapsulates large firms such as banks, Google, Facebook, and the like.
Note that there is no restriction on organization size; the rules apply all the way down to small start-up firms.

Privacy Management

Organizations will have to think harder about privacy. The regulations mandate a risk-based approach, where appropriate organization controls must be developed according to the degree of risk associated with the processing activities.
Where appropriate, privacy impact assessments must be made, with the focus on individual rights.
Privacy-friendly techniques like pseudonymization will be encouraged, to reap the benefits of big data innovation while protecting privacy.
There is also an increased focus on record keeping for controllers as well.


Consent

Consent is a newly defined term in the regulations.
It means “any freely given, specific informed and unambiguous indication of his or her wishes by which the data subject, either by a statement or by clear affirmative action, signifies agreement to personal data relating to them being processed”. The consent does need to be for specified, explicit, and legitimate purposes.
Consent should also be demonstrable. Withdrawal of consent must be clear, and as easy to execute as the initial act of providing consent.
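As an illustration only, and certainly not legal advice, one hypothetical way to make consent demonstrable is to store each grant as a structured, auditable record in which withdrawal is a single first-class operation. All names and fields here are invented:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Hypothetical audit record for a single, purpose-specific consent."""
    subject_id: str
    purpose: str                  # consent must be specific to a purpose
    granted_at: datetime
    mechanism: str                # the affirmative action, e.g. "signup-checkbox"
    withdrawn_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        # Withdrawal must be as easy as granting consent was
        self.withdrawn_at = datetime.now(timezone.utc)

# Record an explicit, affirmative grant of consent...
record = ConsentRecord(
    subject_id="subject-42",
    purpose="email-newsletter",
    granted_at=datetime.now(timezone.utc),
    mechanism="signup-checkbox",
)

# ...and withdrawing it is a single call, leaving the audit trail intact.
record.withdraw()
```

Keeping the withdrawn record (rather than deleting it) is what makes both the grant and the withdrawal demonstrable later.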


Profiling

Profiling is now defined as any automated processing of personal data to determine certain criteria about a person.

In particular, it covers processing “to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, behaviors, location and more”.

This will certainly impact marketers, as it appears that consent must be explicitly provided for said activities.
There is more, including details on breach notification.
It’s important to note that willful destruction of data is dealt with as severely as a breach.

Data Subject Access Requests

Individuals will have more information about how their data is processed, and this information must be available in a clear and understandable way.

If requests are deemed excessive, providers may be able to charge a fee for the information.

Right to be Forgotten

This area, while much written about, will require some further clarification, as there are invariably downstream implications the regulations haven’t begun to address. Yet the intent of “right to be forgotten” is clear; individuals have certain rights, and they are protected.

Think you’re ready for GDPR?

Is your business really ready for GDPR? What measures have you taken to ensure you’re in compliance?
With the GDPR taking effect this coming May, companies around the world have a long, potentially costly, road ahead of them to demonstrate that they are worthy of the trust that so many individuals place in them.