Managed Services: Regain focus, improve performance and move forward.

Managed IT Services Help You Improve Your Team’s Performance.

There is no such thing as multitasking. It’s been scientifically proven that if you spread your focus too thin, something is going to suffer. Just like a computer system, if you are trying to work on too many tasks at once, it’s going to slow down, create errors and just not perform as expected.

You must regain your focus in order to improve performance and move forward.

The fact is, no matter what your industry or business is today, your success depends on your IT team staying current, competent, and competitive. Right now there is more on your IT department’s plate than ever before. Think about how much brainpower and talent you’re misusing when your best IT people spend the bulk of their efforts just managing the day-to-day. Keep in mind that most of these issues can be easily fixed, and even avoided, with proper preparation.

How do you continuously put out fires, keep systems running smoothly, and still have time to plan for the future?

As the legendary Ron Swanson once said, “Never half-ass two things. Whole-ass one thing.”

Don’t go it alone when you can supplement your team and resources.

Overworked employees have higher turnover (-$), make more mistakes (-$), and work slower (-$)(-$). This is costing you more than you can ever fully account for, though your CFO may try. Think about those IT stars that are so important to the success of your business. You may just lose them, starting another fire to put out when you’re trying to gain, train, and grow.

Managed IT Services Can Put You Back on Track!

No one knows your business better than you, and I’m just guessing, but I bet you’ve gotten pretty good at it by now. However, as any good leader or manager knows, without your focus on the future you could lose out as your industry changes, and if you haven’t noticed, it’s already changing.

To remain competitive, you need an advantage that can help you refocus your team and let you do you, because that’s what you do best.

At TxMQ we are not experts in your business, and we would never claim to be. Our Managed Services mission is to take care of the things you don’t have the resources, expertise, or time for, and then make them run at their best. You can refocus your full attention on improving your business.
Whether you’re producing widgets to change the world, a life-saving drug, or healthy food for the masses, you don’t have to spread yourself thin. We monitor, manage, and maintain the middleware systems and databases that power your business. As a provider, we are technology- and systems-agnostic.
What we do is nothing you can’t do yourself, or maybe already are doing. If resources are scarce, putting extra work on your existing team can cost you more than it needs to. TxMQ’s Managed Services teams fill the gaps in your existing IT resources to strengthen and solidify your systems, so you can focus on everything else.

TxMQ’s Managed Services team helps you refocus, so you can concentrate on growth and on tackling your industry and business challenges.

If you’re interested in getting more sleep at night and learning more about our Managed Services, please reach out or click below for more info.

Learn About Managed Services and Support With TxMQ Click Here!

North America: Don’t Ignore GDPR – It Affects us too!

Hey, North America – GDPR Means Us, Too!

It’s well documented, and fairly well socialized across North America that on May 25th of 2018, the GDPR, or the General Data Protection Regulation, formally goes into effect in the European Union (EU).
Perhaps less well known is how corporations located in North America, and around the world, are actually impacted by the legislation.

The broad stroke is, if your business transacts with and/or markets to citizens of the EU, the rules of GDPR apply to you.

For those North American-based businesses that have mature information security programs in place (such as those following PCI, HIPAA, NIST and ISO standards), your path to compliance with the GDPR should not be terribly long. There will be, however, some added steps needed to meet the EU’s new requirements; steps that this blog is not designed to enumerate, nor counsel on.
It’s safe to say that data protection and privacy is a concern involving a combination of legal, governance, process, and technical considerations. Here is an interesting and helpful FAQ link on the General Data Protection Regulation policies.
Most of my customers represent enterprise organizations, which have a far-reaching base of clients and trading partners. They are the kinds of companies who touch sensitive information, are acutely aware of data security, and are likely to be impacted by the GDPR.
These enterprises leverage TxMQ for, among other things, expertise around Integration Technology and Application Infrastructure.
Internal and external system access and integration points are areas where immediate steps can be taken to enhance data protection and security.

Critical technical and procedural components include (but are not limited to):

  • Enterprise Gateways
  • ESBs and Messaging (including MQ and FTP – also see Leif Davidsen’s blog)
  • Application & Web Servers
  • API Management Strategy and Solutions
  • Technology Lifecycle Management
    • Change Management
    • Patch Management
    • Asset Management

The right technology investment, architecture, configuration, and governance model go a long way towards GDPR compliance.
Tech-industry best practices should be addressed through a living program within any corporate entity. In the long run, setting and adhering to these policies protects your business and saves it money (through compliance and efficiency).
In short, GDPR has given North America another important reason to improve upon our data and information security.
It affects us, and what’s more, it’s just a good idea.

What you need to know about GDPR

What is GDPR?

GDPR is the European Union’s General Data Protection Regulation.
In short, it is known as the ‘right to be forgotten’ rule. The intent of GDPR is to protect the data privacy of European Union (EU) citizens, yet its implications are potentially far-reaching.

Why do EU citizens need GDPR?

In most of the civilized world, individuals have little true awareness of the amount of data stored about them; some of it accurate, some quite the opposite.


If I find an error-strewn rant about my small business somewhere online, my ability to correct it, or even have it removed, is limited almost entirely to posting a counter-statement or begging whoever owns the content in question to remove it. I have no real legal recourse short of a costly, and likely destined-to-fail, lawsuit.
The EU sought to change this for their citizens, and thus GDPR was born.
In December of 2015, the long process of designing legislation to create a new legal framework to ensure the rights of EU citizens was completed. This was ratified a year later and becomes enforceable on May 25th of this year (2018).

There are two primary components to the GDPR legislation.

  1. The General Data Protection Regulation, or GDPR, is designed to enable individuals to have more control of their personal data.

It is hoped that these modernized and unified rules will allow companies to make the most of digital markets by reducing regulation, while regaining consumers’ trust.

  2. The data protection directive is a second component.

It ensures that law enforcement bodies can protect the rights of those involved in criminal proceedings, including victims, witnesses, and other parties.

It is also hoped that the unified legislation will facilitate better cross border participation of law enforcement to proactively enforce the laws, while facilitating better capabilities of prosecutors to combat criminal and terrorist activities.

Key components of GDPR

The regulation is intended to establish a single set of cross-European rules, designed to make it simpler to do business across the EU. Organizations are subject to the regulation simply by collecting data on EU citizens.

Personal Data

Personal data is defined by both the directive and GDPR as information relating to a person who can be identified directly or indirectly in particular by reference to name, ID number, location data, or other factors related to physical, physiological, mental, economic, cultural, or related factors (including social identity).
This means many things, including IP addresses and cookies, will be regarded as personal data if they can be linked back to an individual.
The regulations separate the responsibilities and duties of data controllers and data processors, obligating controllers to engage only those processors that provide “sufficient guarantees to implement appropriate technical and organizational measures” to meet the regulation’s requirements and protect data subjects’ rights.
Controllers and processors are required to “implement appropriate technical and organizational measures” taking into account “the state of the art and costs of implementation” and “the nature, scope, context and purposes of the processing as well as the risk of varying likelihood and severity for the rights and freedoms of individuals”.

Security actions “appropriate to the risk”

The regulations also provide specific suggestions for what kinds of security actions might be considered “appropriate to the risk”, including:

  • The pseudonymization and/or encryption of personal data.
  • The ability to ensure the ongoing confidentiality, integrity, availability, and resilience of systems and services processing personal data.
  • The ability to restore the availability and access to data in a timely manner in the event of a physical or technical incident.
  • A process for regularly testing, assessing and evaluating the effectiveness of technical and organizational measures for ensuring the security of the processing.
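To make the first bullet a bit more concrete, here is a minimal sketch of pseudonymization using a keyed hash (HMAC). The key handling, field names, and record below are illustrative assumptions, not a prescription:

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash.

    The mapping can only be re-linked to the individual by someone
    holding the secret key, which should live in a separate vault
    from the pseudonymized data set.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

key = b"keep-this-key-in-a-separate-vault"  # illustrative only
record = {"name": "Jane Doe", "email": "jane@example.com", "purchases": 12}

# Analytics can proceed on the pseudonymized record; re-identification
# requires the key held by the controller.
safe_record = {
    "subject_id": pseudonymize(record["email"], key),
    "purchases": record["purchases"],
}
print(safe_record)
```

The same identifier always maps to the same pseudonym under a given key, so records can still be joined for analysis without exposing who they belong to.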

Controllers and processors that adhere to either an approved code of conduct or an approved certification may use these tools to demonstrate their compliance (such as certain industry-wide accepted tools).
The controller-processor relationships must be documented and managed with contracts that mandate privacy obligations.

Enforcement and Penalties

There are substantial penalties and fines for organizations that fail to conform with the regulations.
Regulators will now have the authority to issue penalties equal to the greater of €10 million or 2% of the entity’s global gross revenue for violations of record-keeping, security, breach-notification, and privacy-impact-assessment obligations. However, violations of obligations related to the legal justification for processing (including consent), data subject rights, and cross-border data transfers may result in double those penalties.
It remains to be seen how the legal authorities tasked with this compliance will perform.
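For a sense of scale, the two-tier penalty scheme described above reduces to a simple calculation. The tier amounts come from the regulation; the function and the example revenue figure are mine:

```python
def max_gdpr_fine(global_revenue_eur: float, serious_violation: bool) -> float:
    """Upper bound of a GDPR fine under the two-tier scheme.

    Tier 1 (record keeping, security, breach notification, impact
    assessments): the greater of EUR 10M or 2% of global revenue.
    Tier 2 (legal basis/consent, data subject rights, cross-border
    transfers): the amounts double to EUR 20M or 4%.
    """
    flat, pct = (20_000_000, 0.04) if serious_violation else (10_000_000, 0.02)
    return max(flat, pct * global_revenue_eur)

# Illustrative: for a firm with EUR 2B in global revenue, 2% of
# revenue (EUR 40M) already exceeds the EUR 10M floor.
print(max_gdpr_fine(2_000_000_000, serious_violation=False))  # 40000000.0
```

Note that the percentage tier dominates for large enterprises, while the flat amounts are what bite for smaller firms.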

Data Protection Officers

Data Protection Officers must be appointed for all public authorities, and wherever the core activities of the controller or processor involve “regular and systematic monitoring of data subjects on a large scale”, or where the entity conducts large-scale processing of “special categories of personal data”: personal data revealing racial or ethnic origin, political opinions, religious beliefs, and so on. This likely encompasses large firms such as banks, Google, Facebook, and the like.
It should be noted that there is also NO restriction on organization size, down to small start-up firms.

Privacy Management

Organizations will have to think harder about privacy. The regulations mandate a risk-based approach, where appropriate organization controls must be developed according to the degree of risk associated with the processing activities.
Where appropriate, privacy impact assessments must be made, with the focus on individual rights.
Privacy friendly techniques like pseudonymization will be encouraged to reap the benefits of big data innovation while protecting privacy.
There is also an increased focus on record keeping for controllers as well.


Consent

Consent is a newly defined term in the regulations.
It means “any freely given, specific, informed and unambiguous indication of his or her wishes by which the data subject, either by a statement or by a clear affirmative action, signifies agreement to personal data relating to them being processed”. Consent must be for specified, explicit, and legitimate purposes.
Consent should also be demonstrable. Withdrawal of consent must be clear, and as easy to execute as the initial act of providing consent.
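Those requirements map naturally onto a small, auditable record: tied to a purpose, timestamped, and withdrawable in one step. A hypothetical sketch (the class and field names are my own, purely illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """A demonstrable, purpose-specific record of consent."""
    subject_id: str
    purpose: str                 # consent is for a specified purpose
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        # Withdrawal is a single call: as easy as granting.
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

consent = ConsentRecord("subject-42", "marketing-email",
                        datetime.now(timezone.utc))
assert consent.active
consent.withdraw()
assert not consent.active   # timestamped, demonstrable withdrawal
```

Keeping one record per purpose, rather than a blanket flag, is what lets an organization demonstrate what exactly was agreed to, and when.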


Profiling

Profiling is now defined as any automated processing of personal data to determine certain criteria about a person, “in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, behaviors, location and more”.

This will certainly impact marketers, as it appears that consent must be explicitly provided for said activities.
There is more, including details on breach notification.
It’s important to note that willful destruction of data is dealt with as severely as a breach.

Data Subject Access Requests

Individuals will have more information about how their data is processed, and this information must be available in a clear and understandable way.
If requests are deemed excessive, providers may be able to charge for the information.
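In practice, servicing a data subject access request means pulling a subject’s data from every system that holds it and presenting it in one clear export. A toy sketch of that shape (the source systems and fields are invented for illustration):

```python
import json

def collect_subject_data(subject_id, sources):
    """Gather a subject's data from each source into one clear export."""
    export = {"subject_id": subject_id, "data": {}}
    for name, lookup in sources.items():
        export["data"][name] = lookup(subject_id)
    return json.dumps(export, indent=2)

# Hypothetical per-system lookups; a real request would have to span
# every store holding the subject's data, including backups and logs.
sources = {
    "crm": lambda sid: {"email": "jane@example.com", "opted_in": True},
    "orders": lambda sid: [{"order": 1001, "total": 49.99}],
}
print(collect_subject_data("subject-42", sources))
```

The hard part is not the export format but the inventory: knowing every system that holds personal data in the first place.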

Right to be Forgotten

This area, while much written about, will require some further clarification, as there are invariably downstream implications the regulations haven’t begun to address. Yet the intent of “right to be forgotten” is clear; individuals have certain rights, and they are protected.

Think you’re ready for GDPR?

Is your business really ready for GDPR? What measures have you taken to ensure you’re in compliance?
With the GDPR taking effect this coming May, companies around the world have a long, potentially costly, road ahead of them to demonstrate that they are worthy of the trust that so many individuals place in them.

The Case for the Crypto Crash


Anyone who knows me knows that I am a big supporter of, and believer in, distributed ledger technology, or DLT.
It has the potential to revolutionize the way we build certain types of solutions, and the way certain systems, software, and institutions, even people, interact.
That being said, I also believe a strong argument can be made that crypto currencies, at least in their current incarnation, are destined to fail.
Full disclosure: I own no crypto currencies.
There is a foundational flaw in the use case of cryptocurrency: it is NOT easily transacted. It often has lengthy or ungainly settlement times, requires conversion to fiat currency in almost all cases, and is generally ill-suited to the very task it was designed to perform: storing value for transacting business.

It’s hard for people to use crypto currencies.

I have heard firsthand countless stories of transactions intended to be conducted in cryptocurrency, where the parties wouldn’t agree to them without one side or the other guaranteeing value against some fiat currency.
If this continues and people aren’t able to use cryptocurrency as a currency, what then? Normal market rules would dictate a crash to zero.
But is this a market that follows normal market rules? What exactly is normal?

Fiat Currency, or Fiat Money:

Let’s back up and look at fiat currency. Take the US dollar.
Early on, the United States dollar was tied to the gold standard. The United States Bullion Depository, more commonly known as Fort Knox, was established as a store in the US for gold that backed the dollar.
In 1933, the US effectively abandoned this standard, and in 1971, all ties to gold were severed.
So what happened? Effectively nothing. Yet the US backs the dollar.
Why? The US dollar is intimately tied to our financial system by the Federal Reserve, which, as demonstrated for better or worse in the financial crisis of 10 years ago, will do everything in its power to shore up the currency when needed.
So we operate today with the shared belief, some might call it a shared delusion, that there is value in the money we carry in our wallets and in the bank statements we see online.

Is cryptocurrency approaching this level of shared belief?

Who will step in if crypto crashes? In short, no one. There is no governing body, by design, behind any cryptocurrency.
As I write this, all cryptocurrencies are down over 10%, and some are down over 20%. Nothing will prop them back up other than buyers: speculators hoping to buy on the downswing and hold until prices rise again.
So, is this a normal market? I say no, it is not. I see no ultimate destination on this journey, other than disappointment.
If you have risk capital to play with, go ahead, risk some on crypto if you wish.
Personally, I would rather invest my money in companies I can understand, with business models that make sense. That being said, in my case, this also means investing in my company’s work to build solutions on the technology underlying crypto currency, or distributed ledger technology.

You may be asking yourself, how can he support distributed ledger technology and not have faith in cryptocurrency?

The answer here is simple: the technology is solid; the use case of crypto is flawed. Java is solid, but not all Java applications are good applications. Cryptocurrency is just another application running on a distributed ledger, and as I have posited herein, a bad one.

Chuck Fried is the President and CEO of TxMQ, a systems integrator and software development company focused on building technical solutions for their customers across the US and Canada.
Follow him @chuckfried on Twitter, or here on the TxMQ blog.

Economic Theory and Cryptocurrency

This post was originally published on, with permission to repost on

Economic Theory and Cryptocurrency

In a rational market, basic principles apply to the pricing and availability of goods and services. At the same time, these forces affect the value of currency. Currency is any commodity or item whose principal use is as a store of value.
Once upon a time, precious metals and gems were the principal stores of value: jewels, gold, and silver were used as currency to acquire goods and services. Over time, as nations industrialized, trading required proxy stores of value, and paper money was introduced, tied to what became the gold standard. This system lasted into the 20th century.
As nations moved off the gold standard, Keynesian economics became a much-touted model. Introduced by John Maynard Keynes, a British economic theorist, in his seminal Depression-era work “The General Theory of Employment, Interest and Money”, it presented a demand-side model whereby nations were shown to be able to influence macroeconomics by modifying taxes and government spending.
Recently, virtual currencies have thrown a curveball into our economic models. Bitcoin is the most widely known, but there are many others, now called cryptocurrencies because of the underlying mathematical formulas and cryptographic algorithms that govern the networks they are built on.

Whether these are currencies or not is itself an interesting rabbit hole to climb down, and a bit of a semantic trap.

They are not stores of value, nor proxies for precious goods, but if party A perceives value in a bitcoin and will take it in trade for something, does that not make it a currency?
Webster’s defines currency as “circulation as a medium of exchange” and “general use, acceptance or prevalence”. Bitcoin seems to fit this definition.
Thus the next question…

What is going on with the price of bitcoin?

Through most of 2015, the price of one bitcoin made a slow climb from the high 200s to the mid 400s in US dollars; that in itself is a near-meteoric climb. The run ended at around $423 for reasons outside the scope of this piece (actual pricing depends on which exchange one references).
2016 saw an acceleration of this climb, with a final tally just shy of $900.
It was in 2017 that the wheels really came off, with a feverish, near-euphoric climb in recent weeks to almost $20,000, before settling into a trading range of $15,000 to $16,000 per bitcoin.
So what is going on here? What economic theory describes this phenomenon?
Sadly, we don’t have a good answer, but there are some data points we should review.
First, let’s recognize that for many readers, awareness of bitcoin came only recently. It bears pointing out that one won’t buy a thing if one is unaware of that thing. Thus, growing awareness of bitcoin has played a significant role in driving up its value.
To what extent this affected the price is a mystery, but if we accept it as a given, then clearly as more people learn about bitcoin, more people will buy bitcoin.

So what is bitcoin?

I won’t go deep here since, if you are reading this, you likely have this foundational knowledge, but bitcoin was created by a person, persons, or group using the pseudonym Satoshi Nakamoto in 2009. It was created to eliminate the need for banks or third parties in transactions; it also allows for complete anonymity of the holder of the coin.
There is a finite upper limit on the number of bitcoins that can ever be created, governed by a mathematical formula described in detail in various online sources, including Wikipedia, so I won’t delve into it here. This cap, set at 21 million coins, will be reached when the last coin is mined (again, see Wikipedia), variously estimated to occur around the year 2140.
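The 21-million cap falls out of the issuance schedule: the block reward started at 50 BTC and halves every 210,000 blocks, so total issuance is a geometric series. A quick back-of-the-envelope check (simplified; real issuance counts integer satoshis and rounds down, landing just under 21 million):

```python
# Bitcoin's block reward halves every 210,000 blocks, starting at 50 BTC.
# Summing the geometric series shows why the cap lands at ~21 million:
# 210,000 * 50 * (1 + 1/2 + 1/4 + ...) -> 210,000 * 50 * 2 = 21,000,000.
BLOCKS_PER_HALVING = 210_000
INITIAL_REWARD = 50.0   # BTC per block at launch

total = 0.0
reward = INITIAL_REWARD
for _ in range(64):     # the reward has effectively vanished by then
    total += BLOCKS_PER_HALVING * reward
    reward /= 2

print(round(total))  # 21000000
```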

What makes Bitcoin Valuable?

So this ‘capped’ reality also adds to value, since like most stores of value, there is a rarity to bitcoin, a fixed number in existence today, and a maximum number that will ever exist.
In addition, more and more organizations are accepting bitcoin as a payment method. This increase in utility, and subsequent liquidity (it’s not always easy to sell units of bitcoin less than full coins) has also increased the perceived value of the coin.
Contributing to this climb in value recently has been the CryptoKitties phenomenon; a gaming application that rose to popularity far more rapidly than its creators could have foreseen. The subsequent media exposure thrust blockchain, and correspondingly bitcoin, further into the limelight, and the value continued to spike.
Lastly, the CBOE Options Exchange announced that on Monday, December 11th, it would begin trading bitcoin futures. Once again, this action broadcast to a widening audience that bitcoin was real, viable, and worth considering as part of some portfolios, adding both legitimacy and ease of trade to the mix.
The number of prognosticators calling bitcoin a farce seems near equal to the number calling for a coin to hit a $1 million valuation in 4 years. Who will be right remains to be seen.
For the moment, this author sees this as a bit like Vegas gambling. It’s fun, it’s legal, but you can also lose every penny you gamble; so bet (invest) only what you can afford to lose, and enjoy the ride.

What Digital Cats Taught Us About Blockchain

Given the number of cat pictures that the internet serves up every day, perhaps we shouldn’t be surprised that blockchain’s latest pressure-test involves digital cats. CryptoKitties is a Pokémon-style collecting and trading game built on Ethereum where players buy, sell, and breed digital cats. In a matter of a week, the game has gone from a relatively obscure but novel decentralized application (DAPP) to the largest DAPP currently running on Ethereum. Depending on when you sampled throughput, CryptoKitties accounted for somewhere in the neighborhood of 14% of Ethereum’s overall transaction volume. At the time I wrote this, players had spent over $5.2 million in Ether buying digital cats. The other day, a single kitty was sold for over $117,000.

Wednesday morning I attended a local blockchain meet-up, and the topic was CryptoKitties.

Congestion on the Ethereum node one player was connected to was so bad that gas fees for buying a kitty could run as high as $100. The node was so busy that game performance degraded to the point where the game became unusable. Prior to the game’s launch, pending transaction volume on Ethereum was under 2,000 transactions; now it’s in the range of 10,000 to 12,000. To summarize: a game where people pay (lots of) real money to trade digital cats is degrading the performance of the world’s most viable general-purpose public blockchain.
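For context, an Ethereum fee is simply gas used times gas price, converted from gwei to ETH and then to dollars. The numbers below are illustrative only, but they show how congestion-driven gas prices can push a single contract call toward $100:

```python
def tx_fee_usd(gas_used: int, gas_price_gwei: float,
               eth_price_usd: float) -> float:
    """Ethereum fee = gas used x gas price, gwei -> ETH -> USD."""
    fee_eth = gas_used * gas_price_gwei * 1e-9   # 1 gwei = 1e-9 ETH
    return fee_eth * eth_price_usd

# Hypothetical figures: a congested-network gas price of 200 gwei on a
# 700,000-gas contract call, with ETH trading around $700.
print(round(tx_fee_usd(700_000, 200, 700), 2))  # 98.0
```

When the network is quiet and gas prices fall back to single-digit gwei, the same call costs a few dollars, which is exactly why congestion pricing hits users so hard.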

If you’re someone who has been evaluating the potential of blockchain for enterprise use, that sounds pretty scary. However, most of what has been illustrated by the CryptoKitties phenomenon isn’t news. We already knew scalability was a challenge for blockchain. There is a proliferation of off-chain and side-chain protocols emerging to mitigate these challenges, as well as projects like IOTA and Swirlds, which aim to provide better throughput and scalability by changing how the network communicates and reaches consensus. Work is ongoing to advance the state of the art, but we’re not there yet and nobody has a crystal ball.

So, what are the key takeaways from the CryptoKitties phenomenon?

Economics Aren’t Enough to Manage the Network

Put simplistically, as the cost of trading digital cats rises, the amount of digital cat trading should go down (in an idealized, rational market economy that is). Yet both the cost of the kitties themselves – currently anywhere from $20 to over $100,000 – and the gas cost required to buy, sell, and breed kitties has gone up to absurd levels. The developers of the game have also increased fees in a bid to slow down trading. Up to now, nothing has worked.

In many ways, it’s an interesting illustration of cryptocurrency in general: cats have value because people believe they do, and the value of a cat is simply determined by how much people are willing to pay for it. In addition, this is clearly not an optimized, nor ideal, nor rational market economy.

The knock-on effects for the network as a whole aren’t clear either. Basic economics would dictate that as a resource becomes more scarce, those who control that resource will charge more for it. On Ethereum, that could come in the form of gas limit increases by miners which will put upward pressure on the cost of running transactions on the Ethereum network in general.

For businesses looking to leverage public blockchains, the implication is that the risk of transacting business on public blockchains increases. The idea that a CryptoKitties can come along and impact the costs of doing business adds another wrinkle to the economics of transacting on the blockchain. Instability in the markets for cryptocurrency already make it difficult to predict the costs of operation for distributed applications. Competition between consumers for limited processing power will only serve to increase risk and likely the cost of running on public blockchains.

Simplify, and Add Lightness

Interestingly, the open and decentralized nature of blockchains seems to be working against a solution to the problem of network monopolization. Aside from economic disincentives, there isn’t a method for ensuring that the network isn’t overwhelmed by a single application or set of applications. There isn’t much incentive for applications to be good citizens when the costs can be passed on to end-users who are willing to absorb those costs.

If you’re an enterprise looking to transact on a public chain, your mitigation strategy is both obvious and counter-intuitive: Use the blockchain as little as possible. Structure your smart contracts to be as simple as they can be, and handle as much as you can either in application logic or off-chain. Building applications that are designed to be inexpensive to run will only pay off in a possible future where the cost of transacting increases. Use the right tools for the job, do what you can off-chain, and settle to the chain when necessary.
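One common way to “use the blockchain as little as possible” is to keep the full payload off-chain and anchor only a hash of it on-chain. A minimal sketch of that commit-and-verify pattern (the payload fields are invented for illustration):

```python
import hashlib
import json

def commitment(payload: dict) -> str:
    """Digest of an off-chain payload, suitable for on-chain anchoring.

    Only this 32-byte hash needs to go on-chain; the full payload stays
    in cheap off-chain storage, and anyone holding the payload can
    verify it against the anchored digest.
    """
    canonical = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

trade = {"buyer": "org-a", "seller": "org-b", "qty": 500, "price": 12.75}
digest = commitment(trade)          # this string is what gets anchored

# Later verification: recompute and compare. Any tampering is detected.
assert commitment(trade) == digest
assert commitment({**trade, "qty": 501}) != digest
```

The on-chain footprint, and therefore the transaction cost, stays constant no matter how large the off-chain payload grows.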

Private Blockchains for Enterprise Applications

The easiest way to assert control over your DAPPs is to deploy them to a network you control. In the enterprise, the trustless, censorship-free aspects of the public blockchain are much less relevant. Deploying to private blockchains like Hyperledger or Quorum (a permissioned variant of Ethereum) gives organizations a measure of control over the network and its participants. Your platform then exists to support your application, and your application can be structured to manage the performance issues associated with blockchain platforms.

Even when the infrastructure is under the direct control of the enterprise, it’s still important to follow the architectural best practices for DAPP development. Use the blockchain only when necessary, keep your smart contracts as simple as possible, and handle as much as you can off-chain. In contrast to traditional distributed computing environments, scaling a distributed ledger platform by adding nodes increases fault tolerance and data security but not performance. Structuring your smart contracts to be as efficient as possible will ensure that you make best use of transaction bandwidth as usage of an application scales.

Emerging Solutions

Solving for scalability is an area of active development. I’ve already touched on solutions which move processing off-chain. Development on the existing platforms is also continuing, with a focus on the mechanism used to achieve consensus. Ethereum’s Casper proposal would change the consensus mechanism to a proof-of-stake system, where validators put up an amount of cryptocurrency as proof that they aren’t acting maliciously. While proof-of-stake has the potential to increase throughput, it hasn’t yet been proven at scale.
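At its core, proof-of-stake replaces hash-rate competition with a stake-weighted lottery: your chance of producing the next block is proportional to the currency you have put at risk. A toy illustration of that weighting idea (this is not how Casper actually selects validators):

```python
import random

def pick_validator(stakes: dict, seed: int) -> str:
    """Stake-weighted random selection: a toy proof-of-stake lottery.

    A validator's chance of being chosen is proportional to the
    currency it has staked, and stands to lose if it misbehaves.
    """
    rng = random.Random(seed)        # seed stands in for shared randomness
    validators = sorted(stakes)      # deterministic ordering
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

stakes = {"alice": 3200, "bob": 800, "carol": 4000}
print(pick_validator(stakes, seed=7))
```

Because no hashing race is involved, block production costs almost nothing computationally, which is where the hoped-for throughput gains come from.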

Platforms built on alternatives to mining are also emerging.

IOTA has been gaining traction as an Internet of Things scale solution for peer-to-peer transacting. It has the backing of a number of large enterprises including Microsoft, is open-source, and freely available. IOTA uses a directed acyclic graph as its core data structure, which differs from a blockchain and allows the network to reach consensus much more quickly. Swirlds is coming to market with a solution based on the Hashgraph data structure. Similar to IOTA, this structure allows for much faster time to consensus and higher transaction throughput. In contrast to IOTA, Swirlds is leaderless and Byzantine fault tolerant.

As with any emerging technology, disruption within the space can happen at a fast pace. Over the next 18 months, I expect blockchain and distributed ledger technology to continue to mature. There will be winners and losers along the way, and it’s entirely possible that new platforms will supplant existing leaders in the space.

Walk Before You Run

Distributed ledger technology is an immature space. There are undeniable opportunities for early adopters, but there are also pitfalls – both technological and organizational. For organizations evaluating distributed ledger, it is important to start small, iterate often, and fail fast. Your application roadmap needs to incorporate these tenets if it is to be successful. Utilize proofs of concept to validate assumptions and drive out the technological and organizational roadblocks that need to be addressed for a successful production application. Iterate as the technology matures. Undertake pilot programs to test production readiness, and carefully plan application roll out to manage go-live and production scale.

If your organization hasn’t fully embraced agile methods for application development, now is the time to make the leap. The waterfall model of rigorous requirements, volumes of documentation, and strictly defined timelines simply won’t be flexible enough to successfully deliver products on an emerging technology. If your IT department hasn’t begun to embrace a DevOps-centric approach, then deploying DAPPs is likely to meet internal resistance – especially on a public chain. In larger enterprises, governance policies may need to be reviewed and updated for applications based on distributed ledger.

The Future Is Still Bright

Despite the stresses placed on the Ethereum network by an explosion of digital cats, the future continues to look bright for distributed ledger and blockchain. Flaws in blockchain technology have been exposed somewhat glaringly, but for the most part these flaws were known before the CryptoKitties phenomenon. Solutions to these issues were under development before digital cats. The price of Ether hasn’t crashed, and the platform is demonstrating some degree of resilience under pressure.

We continue to see incredible potential in the space for organizations of all sizes. New business models will continue to be enabled by distributed ledger and tokenization. The future is still bright – and filled with cats!


IBM WebSphere Application Server (WAS) v.7 & v.8, and WebSphere MQ v.7.5 End of Support: April 30, 2018

Are you presently running on WAS version 7 or 8?
Are you leveraging WebSphere MQ version 7.5?

Time is running out: support for IBM WebSphere Application Server (WAS) v7 and v8, and for WebSphere MQ v7.5, ends in less than six months. As of April 30, 2018, IBM will discontinue support for all WebSphere Application Server 7.0.x and 8.0.x versions, and for WebSphere MQ 7.5.x.

It’s recommended that you migrate to WebSphere Application Server v9 to avoid potential security issues on older, unsupported versions of WAS (and Java).
It’s also recommended that you upgrade to IBM MQ v9.0.x to leverage new features and avoid costly premium support fees from IBM.

Why should you go through an upgrade?

Many security risks can percolate when running back-level software, especially WAS running on older Java versions. If you’re currently running on software versions that are out of support, finding the right support team to put out your unexpected fires can be tricky and might just blow the budget.
Upgrading WAS & MQ to supported versions will allow you to tap into new and expanding capabilities, and updated performance enhancements while also protecting yourself from unnecessary, completely avoidable security risks and added support costs.

WebSphere Application Server v.9 Highlights

WebSphere Application Server v.9.0 offers unparalleled functionality to deliver modern applications and services quickly, securely and efficiently.

When you upgrade to v.9.0, you’ll enjoy several upgrade perks including:
  • Java EE 7 compliant architecture.
  • DevOps workflows.
  • Easy connection between your on-prem apps and IBM Bluemix services (including IBM Watson).
  • Container technology that enables greater development and deployment agility.
  • Deployment on Pivotal Cloud Foundry, Azure, OpenShift, Amazon Web Services and Bluemix.
  • Ability to provision workloads to IBM cloud (for VMware customers).
  • Enhancements to WebSphere eXtreme Scale that improve response times and time-to-configuration.


IBM MQ v.9.0.4 Highlights

With the latest update, MQ v9.0.4 delivers even more substantial and useful features for IBM MQ, beyond what came with versions 8 (z/OS) and 8.5.

When you upgrade to v.9.0.4, enhancements include:
  • Additional commands supported as part of the REST API for admin.
  • Availability of a ‘catch-all’ for MQSC commands as part of the REST API for admin.
  • Ability to use a single MQ V9.0.4 Queue Manager as a single point gateway for REST API based admin of other MQ environments including older MQ versions such as MQ V9 LTS and MQ V8.
  • Ability to use MQ V9.0.4 as a proxy for IBM Cloud Product Insights reporting across older deployed versions of MQ.
  • Availability of an enhanced MQ bridge for Salesforce.
  • Initial availability of a new programmatic REST API for messaging applications.
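To make the REST-based administration concrete, here’s a hedged sketch of how an administrator might query a queue manager over the admin REST API from Python. The hostname, port, credentials, and queue manager name are placeholders (assumptions, not real endpoints), and the resource path shown reflects the v1 admin API layout:

```python
# Sketch of calling the MQ administrative REST API from Python.
# Host, port, user, password, and queue manager name are placeholders.
import base64
import json
import urllib.request

def qmgr_status_url(host: str, port: int, qmgr: str) -> str:
    """Build the admin REST endpoint for a queue manager resource."""
    return f"https://{host}:{port}/ibmmq/rest/v1/admin/qmgr/{qmgr}"

def fetch_qmgr_status(url: str, user: str, password: str) -> dict:
    """Issue a basic-auth GET and decode the JSON response."""
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # The network call itself is left to the reader; we only build the URL.
    print(qmgr_status_url("mqhost.example.com", 9443, "QM1"))
```

In practice you would point this at your MQ web server’s configured HTTPS port and an account in the appropriate MQ security role.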

This upgrade cycle also offers you the opportunity to evaluate the MQ Appliance. Talk to TxMQ to see if the MQ Appliance is a good option for your messaging environment.

What's your WebSphere Migration Plan? Let's talk about it!

Why work with an IBM Business Partner to upgrade your IBM Software?

You can choose to work with IBM directly – we can’t (and won’t) stop you – but your budget just might. Working with a premier IBM business partner allows you to accomplish the same task with the same quality, but at a fraction of the price IBM will charge you, with more personal attention and much speedier response times.
Also, IBM business partners are typically niche players, uniquely qualified to assist in your company’s migration planning and execution. They’ll offer you and your company much more customized and consistent attention. Plus, you’ll probably be working with ex-IBMers anyway, who’ve turned in their blue nametags to find greater opportunities working within the business partner network.

There are plenty of things to consider when migrating your software from outdated versions to more current versions.

TxMQ is a premier IBM business partner that works with customers to oversee and manage software migration and upgrade planning. TxMQ subject matter experts are uniquely positioned with relevant experience, allowing them to help a wide range of customers determine the best solution for their migration needs.
Get in touch with us today to discuss your migration and back-level support options. It’s never too late to begin planning and executing your version upgrades.

To check on your IBM Software lifecycle, simply search your product name and version on this IBM page, or give TxMQ a call…

How much do you know about Blockchain and is it just hype?

Blockchain Basics: a Primer

Every so often, a technology comes along that promises to turn everything upside down.

Sometimes this happens, sometimes it’s hype.

Think of Yahoo’s search in its early days, later overtaken by Google’s better algorithms and business model (they did, after all, download the entire internet at one time). Then came peer-to-peer networking, popularized by Napster, followed by application (app) stores. Now Blockchain has the valley, and its startups, buzzing.

But is this hype or reality? Perhaps some of both.

Let’s start by defining Blockchain & Bitcoin

So what is Blockchain?

Blockchain is, at its core, a new way to record and transfer assets, particularly where chain of custody matters.

It started with Bitcoin; but what is Bitcoin?

Bitcoin is one of many (though perhaps the best known) of a new type of currency called cryptocurrency (aka digital currency).

The “crypto” references the highly secure, cryptographic nature of it. It is a virtual currency: there is no physical coin, bill, or other instrument. It’s computer code, plain and simple.

Bitcoin was first described in a now-famous white paper authored by a person, or persons, using the pseudonym Satoshi Nakamoto in 2008.

Though by no means the first reference to a digital currency, this paper detailed an innovative peer-to-peer electronic monetary system called Bitcoin that enabled online payments to be transferred directly, without an intermediary: person-to-person, or institution-to-institution.

It was built on what we now call Blockchain.

Though obvious on the surface, one-to-one physical transactions carry dramatic limitations. Person-to-person or entity-to-entity, both parties have to agree on value, typically both parties have to be present, and both have to bring along the goods and currency to be transacted.

In short, this doesn’t scale well.

These limitations drove the evolution from the barter systems of days gone by to the monetary systems established by the Romans and other societies, which set published values for marks, coinage, and other instruments of currency (aka a common currency).

Common currency led to the development of banks and other intermediary systems, like Federal Reserve banks, central banks, and other regulatory bodies, who set established value for mutually agreed upon instruments, whether they be coins, or paper money.

Yet these intermediaries have grown in power, can delay processing, and some feel, do not always add the value to the transaction that they charge to conduct it. These intermediaries do still provide valuable services; they track our funds, lend funds, and clear transactions for us when needed.

Imagine showing up to a house closing with a bag of gold.

No need, a cashier’s check from the bank, against a mortgage (another monetary instrument) is all you need (plus lots of contract paperwork).

So how might we move forward to a system that is both digital and trusted by all, without compromising security?

Let’s track back to where we began: Bitcoin vs Blockchain

Bitcoin is a digital currency built on a technology we call Blockchain. Blockchain is a distributed ledger technology; distributed meaning that multiple systems across the internet store identical information about an asset or a data file.

Technically, these are redundant data files, kept in synchronization, that can be stored on the public internet, or in a closed, secured system, or both.

In the current, traditional banking system, our accounts are stored in a single centralized database. If a person transacts business (think of using an ATM, or cashing a check) at a bank where they don’t hold an account, the system being accessed must link to, and validate against, that person’s home bank, where the account records live.

When a digital transaction is carried out on a currency built on Blockchain (Bitcoin is but one of many digital currencies), it is grouped in a cryptographically protected block with other transactions that have occurred within a set window of time (typically ten minutes) and sent out to the entire network. From there, miners (members of the network with high levels of computing power, basically powerful, specially constructed servers) compete to validate the transactions by solving complex cryptographic problems. The first miner to solve the problem and validate the block receives a reward. (In the Bitcoin Blockchain network, for example, the miner receives bitcoins.)

The validated block of transactions is then timestamped and added to a chain in a linear, chronological order. New blocks of validated transactions are linked to older blocks, making a chain of blocks that show every transaction made in the history of that blockchain. The entire chain is continually synched with each instance so that every ledger in the network is the same, giving each member the ability to prove who owns what at any given time.
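The chaining mechanic described above can be illustrated with a short, simplified sketch. This is a toy model only (real networks add proof-of-work, Merkle trees, and peer synchronization), but it shows why each block’s link to its predecessor makes history tamper-evident:

```python
# Toy model of a blockchain: each block stores the hash of its
# predecessor, so tampering with history invalidates every later block.
import hashlib
import json

def make_block(transactions, prev_hash):
    """Bundle transactions with the previous block's hash, then hash the bundle."""
    body = {"transactions": transactions, "prev_hash": prev_hash}
    payload = json.dumps(body, sort_keys=True).encode()
    body["hash"] = hashlib.sha256(payload).hexdigest()
    return body

genesis = make_block(["coinbase -> alice: 50"], prev_hash="0" * 64)
block1 = make_block(["alice -> bob: 10"], prev_hash=genesis["hash"])

# Any change to genesis would change its hash and break the link:
assert block1["prev_hash"] == genesis["hash"]
```

Altering even one transaction in `genesis` would produce a different hash, which would no longer match `block1["prev_hash"]`, and so on down the chain.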

The Evolution of Blockchain

One example of the evolution and broad application of blockchain, beyond digital currency, is the development of the Ethereum public blockchain, which is providing a way to execute peer-to-peer contracts.

It is this decentralized, open, and secure attribute that allows for trust between parties and eliminates the need for intermediaries. It’s also important to note that traditional hacking attacks would struggle against such a widespread system: an attacker would need to compromise many independent systems and files almost simultaneously, so the likelihood and feasibility of success hovers somewhere right around zero.

So is this hype or reality?

It is real enough that TxMQ has committed to building a Center of Excellence focused on building Blockchain solutions for our customers.

Already, use cases are being evaluated in industries as widespread as airlines, global logistics, pharmaceuticals, banking and finance, personal health records, auto manufacturing, and real estate transactions. Imagine following the custody chain of drugs from point of manufacture to ultimate consumption or destruction. Imagine the value to a car manufacturer of knowing the precise ownership and chain of custody of not just every vehicle manufactured, but of each and every aftermarket part produced.

So our money is on reality more than hype.

Certainly, not all coding need be done using blockchain; yet the ability to digitize assets, and track the precise chain of custody, is game changing.

There are countless millions globally who are counted among the unbanked, whether they live in areas too rural to be served or are simply too distrustful of these systems. Yet most of the population has a cell phone, and with that instrument can access this newfound democratization of society. We’re already seeing companies like Apple hint at introducing peer-to-peer payment into their future operating systems.

For more information on Blockchain and Blockchain solutions, feel free to call or email us.

I’m always interested in hearing about new startup ventures, or talking with other cutting edge thinkers interested in Blockchain, digital currency mining, and other cutting edge ways of solving today’s challenges. Look me up at, my personal blog at, or find me on LinkedIn.

POODLE Vulnerability In SSLv3 Affects IBM WebSphere MQ

Secure Socket Layer version 3 (SSLv3) is largely obsolete, but some software does occasionally fall back to this version of SSL protocol. The bad news is that SSLv3 contains a vulnerability that exposes systems to a potential attack. The vulnerability is nicknamed POODLE, which stands for Padding Oracle On Downgraded Legacy Encryption.

The vulnerability does affect IBM WebSphere MQ because SSLv3 is enabled by default in MQ.
IBM describes the vulnerability like this: “IBM WebSphere MQ could allow a remote attacker to obtain sensitive information, caused by a design error when using the SSLv3 protocol. A remote user with the ability to conduct a man-in-the-middle attack could exploit this vulnerability via a POODLE (Padding Oracle On Downgraded Legacy Encryption) attack to decrypt SSL sessions and access the plaintext of encrypted connections.”

The vulnerability affects all versions and releases of IBM WebSphere MQ, IBM WebSphere MQ Internet Pass-Thru and IBM Mobile Messaging and M2M Client Pack.

To harden against the vulnerability, users should disable SSLv3 on all WebSphere MQ servers and clients and instead use the TLS protocol. More specifically, WebSphere MQ channels select either the SSL or TLS protocol from the channel cipherspec; any channel using a cipherspec associated with the SSLv3 protocol should be changed to use a TLS cipherspec.

On UNIX, Linux, Windows and z/OS platforms, FIPS 140-2 compliance mode enforces the use of TLS protocol. A summary of MQ cipherspecs, protocols and FIPS compliance status can be found here.
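The same hardening principle applies well beyond MQ. As a purely illustrative sketch (generic Python, not MQ configuration), a client can refuse SSLv3 downgrades by pinning a minimum protocol version; note that recent Python builds have dropped SSLv3 support from the `ssl` module entirely:

```python
# Illustration of the principle: pin the minimum protocol version so an
# SSLv3 (or early-TLS) downgrade attempt is refused during the handshake.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 and early TLS
print(context.minimum_version.name)  # prints "TLSv1_2"
```

Whatever the platform, the pattern is the same: configure the endpoint so that the weakest acceptable protocol is a modern TLS version, not SSLv3.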

On the IBM i platform, use of the SSLv3 protocol can be disabled at a system level by altering the QSSLPCL system value. Use Change System Value (CHGSYSVAL) to modify the QSSLPCL value, changing the default value of *OPSYS to a list that excludes *SSLV3. For example: *TLSV1.2, *TLSV1.1, *TLSV1.

TxMQ is an IBM Premier Business Partner and “MQ” is part of our name. For additional information about this vulnerability and all WebSphere-related matters, contact president Chuck Fried: 716-636-0070 x222,

TxMQ recently introduced its MQ Capacity Planner – a new solution developed for performance-metrics analysis of enterprise-wide WebSphere MQ (now IBM MQ) infrastructure. TxMQ’s innovative technology enables MQ administrators to measure usage and capacity of an entire MQ infrastructure with one comprehensive tool.

Discussion: The Four Categories Of Technology Decisions and Strategy

Authored by Chuck Fried, President – TxMQ, Inc.

Project Description

There are countless challenges facing IT decision-makers today. Loosely, we can break them down into four categories: Cost control, revenue generation, security and compliance, and new initiatives.
There’s obvious overlap between these categories: Can’t cost control be closely related to security considerations when evaluating how much to invest in the latter? Aren’t new initiatives evaluated based on their ability to drive revenues, or alternatively to control costs? Yet these categories will at least allow us to focus and organize the discussion below.

Cost Control

Cost control has long been a leading driver in IT-spend decisions. From the early days of IT in the ’50s and ’60s, technology typically had a corporate champion who argued that a failure to invest in tomorrow would lead to customer attrition and market-share erosion. A loss of competitive advantage and other fear-mongering were common arguments used to steer leadership toward investments in something the leadership couldn’t fully grasp. Think of the recent arc on AMC’s Mad Men featuring an early investment in an IBM mainframe at a 1960s ad agency – championed by few, understood by even fewer.

From these early days, IT often became a black box in which leadership saw a growing line of cost, with little understanding attached to what was happening inside that box, and no map of how to control those growing expenditures.

Eventually, these costs became baked-in. Companies had to contend with the reality that IT investment was a necessary, if little-understood reality. Baseline budgets were set, and each new request from IT needed to bring with it a strong ROI model and cost justification.

In more recent years – and in a somewhat misdirected attempt to control these costs – we witnessed the new option to outsource and offshore technology. It was all an attempt to reduce baked-in IT costs by hiring cheaper labor (offshoring), or wrapping the IT costs into line-item work efforts managed by third parties (outsourcing). Both these efforts continue to meet with somewhat mixed reviews. The jury, in short, remains out.

IT costs come from five primary sources – hardware, software, support (maintenance and renewals), consulting and people. Note that a new line-item might be called “cloud,” but let’s keep that rolled into hardware costs for now. Whether cloud options change the investment equation or not isn’t the point of this paper. For now, we’ll treat them as just another way to acquire technology. There’s little, though growing, evidence that cloud solutions work as cost reducers. Instead, they more often alter the cash-flow equation.

Hardware costs are the most readily understood and encompass the servers, desktops, printers – the physical assets that run the technology.

Software, also relatively well understood, represents the systems and applications that the company has purchased (or invested in if homegrown) that run the business, and run on the hardware.
Support refers to the dollars charged by software and/or hardware companies to continue to run licensed software and systems.

Consulting means dollars invested in outside companies to assist in both the running of the systems, as well as in strategic guidance around technology.

People, of course, refers to the staff tasked with running and maintaining the company’s technology. And it’s the people costs that are perhaps the most sensitive. While a company might want to downsize, no one wants the publicity of massive layoffs to besmirch a brand or name. Yet one reality of a well-built, well-designed and well-managed IT infrastructure should be a reduction in the headcount required to run those systems. If a company doesn’t see this trend in place, consider it a red flag worthy of study.

How should a company control IT costs? Unless a company is starting from scratch, there’s going to be some level of fixed infrastructure in place, as well as a defined skills map of in-place personnel. A company with a heavy reliance on Microsoft systems and technology, with a well-staffed IT department of Microsoft-centric skills, should think long and hard before bringing in systems that require a lot of Oracle or open-source capabilities.

Companies must understand their current realities in order to make decisions about technology investment, as well as cost control. If a company relies on homegrown applications, it can’t readily fire all of its developers. If a company relies mostly on purchased applications, it might be able to run a leaner in-house development group.

Similarly, companies can’t operate in a vacuum, nor can IT departments. Employees must be encouraged, and challenged, to attain certifications, attend conferences and network with peers outside the company. Someone, somewhere has already solved the problem you’re facing today. Don’t reinvent the wheel. Education is NOT an area we’d encourage companies to cut back on.

At the same time, consulting with the right partner can produce dramatic, measurable results. In the 21st century, it’s rather easy to vet consulting companies – to pre-determine capability and suitability of fit both skills-wise, as well as culturally. The right consulting partner can solve problems quicker than in-house teams, and also present an objective outsider’s view of what you’re doing right, and what, perhaps, you could do better. “We’ve always done it this way” isn’t an appropriate reason to do anything. A consulting company can ask questions oftentimes deemed too sensitive even for leadership. I’ve been brought in to several consulting situations where leadership already knew what decision would be made, but needed an outside party to make it and present the option to leadership so things would appear less politically motivated.

Similarly, consulting companies can offer opinions on best practices and best-product options when required. A software company, on the other hand, won’t recommend a competitor’s product. A consulting company that represents many software lines, or even none at all, can make a far more objective recommendation.

In point of fact: One of the primary roles for an experienced consulting company is to be the outside partner that makes the painful, outspoken recommendations leadership knows are necessary, but can’t effectively broadcast or argue for within the boardroom.

Innovation As Cost Control

There are several areas of technology spend with legitimately demonstrated ROI. Among these are BPM, SOA, process improvement (through various methodologies and approaches, from Six Sigma to Agile development among many others), as well as some cloud offerings previously touched on.

BPM, or business-process management, broadly describes a category of software whereby a company’s processes, or even lines of business, can be automated to reduce complexity, introduce uniformity and consistency of delivery, and yes, reduce the headcount needed to support the process.

We’re not talking about pure automation. Loosely, BPM refers to a process that cannot be fully automated, but can be partially automated while still requiring some human interaction or involvement. Think loan application or insurance claim. Many of the steps can be automated, yet a person still has to be involved. At least for now, a person must appraise a damaged car or give final loan approval.

SOA, discussed elsewhere in this paper, means taking advantage of a loosely coupled infrastructure to allow a more nimble response to business realities, and greater cost control around application development and spend. Process improvement, of course, means becoming leaner and better aligning business process with business realities, as well as ensuring greater consistency of delivery and less stress on internal staff due to poor process control.

Big Data As Cost Control

Big data, discussed later in this paper, can also be used, or some might say misused, to drive down costs. An insurance company can use data analysis to determine who might be too costly to cover any longer. A bank might decide certain neighborhoods, or even cities, are too risky for loans. There’s a dark side to big data, and at times unintended consequences result. It’s important for human oversight to remain closely aligned with system-generated decisions in these nascent days of big data. While Google may have figured out how to automate the driving process, companies today are still, one hopes, more than a few years away from 100% automated decisioning. One hopes society, as much as government oversight, will help ensure this remains the case for the foreseeable future.

Integration And Service Oriented Architecture As Cost Control

Too often companies rely on disparate systems with limited, if any ability to interact. How often have you been logged on to your bank’s online system, only to have to log on a second time to access other accounts at the same institution?

Companies must be quick to recognize that consumers today expect seamless, complete integration at all points of their interaction. Similarly, suppliers and trading partners are ever more expectant of smooth onboarding and ease-of-business transacting. Faxing purchase orders is yesterday. Real-time tracking and reordering of dwindling products in the supply chain is the new normal.

At the same time, companies must recognize that to be nimble and adaptable against the ever-changing reality of their businesses, applications must be designed and implemented differently. It’s one thing if a company has five different systems and no more. A simple 1980s-era point-to-point architecture might suffice. Yet other than that limited example, enterprises show an ever-changing array of systems and needs. Hardcoding applications to each other yields an inflexible and challenging infrastructure that’s cost-prohibitive to change.

An Enterprise Service Bus, or ESB, is what most enterprises and midmarket companies today recognize as a model architecture. Decoupling applications and integrating them loosely and asynchronously enables rapid application design and more nimble business decision-making. At the same time, an SOA-enabled environment also allows for the rapid adoption of new products and services, as well as the rapid rollout of new offerings to customers and trading partners. Yes, this does require a changing of attitudes at the development, QA and production-support levels, but it’s a small price to pay for long-term corporate success.
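In miniature, the decoupling an ESB provides looks something like the sketch below, where producer and consumer share only a queue and never call each other directly. (Python’s in-process `queue` module stands in for a real message bus here; this is a teaching sketch, not an ESB implementation.)

```python
# Minimal sketch of asynchronous, loosely coupled integration:
# the producer knows only the queue, never the consumer.
import queue
import threading

bus = queue.Queue()
results = []

def consumer():
    """Drain messages from the bus until a shutdown sentinel arrives."""
    while True:
        msg = bus.get()
        if msg is None:          # shutdown sentinel
            break
        results.append(f"processed: {msg}")
        bus.task_done()

t = threading.Thread(target=consumer)
t.start()
for order in ["order-1", "order-2"]:
    bus.put(order)               # producer never calls the consumer directly
bus.put(None)
t.join()
print(results)
```

Because neither side holds a reference to the other, either can be replaced, scaled out, or taken offline for maintenance without rewriting its counterpart, which is precisely the agility argument made above.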

Revenue Generation

The idea of IT helping companies generate new revenue streams isn’t new. It’s as old as the technology itself. Technology has always played a role in driving revenue – from airline-reservation systems, to point-of-sale systems, to automated teller machines (allowing banks to charge fees). Yet these systems didn’t create new revenues (with the possible exception of the ATM example) – they simply automated previously existing systems and processes.

Airlines were booking flights before the Internet made this possible for the mass public, and restaurants have, of course, been serving people since the dawn of civilization. Technology improved these processes – it didn’t create them.

But until recently there were no app stores and certainly no online videogame communities. Not to mention the creation of entire companies whose revenue streams weren’t even theoretically possible years ago – think Netflix or Square or Uber. So new revenue streams are as likely to be the sole source of revenue for a company as they are to supplement an already in-place business model.

Perhaps one of the more exciting new-revenue developments has come to be called the API Economy. An API, or Application Programming Interface, is a set of rules defining how a software component should interact with other software or components. Many platforms, for example, publish APIs that define how other applications can integrate with them.

This new ecosystem is based on what grew out of web-browser cookies years ago. Cookies are bits of data on individuals’ personal systems used to identify returning visitors to websites. Today, the API economy describes how and why our browsers look the way they do, why we see which ads where, and much more. A visit today to most websites involves that site looking at your profile and comparing it to data that particular company has on you from other APIs it might own or subscribe to. Facebook, Google, Amazon and others allow companies to target-market to the visitor based on preferences, other purchases and the like. Similarly, one can opt in to some of these capabilities to allow far greater personalization of experiences. A user might opt in to a feed from Starbucks, as well as a mall he or she frequents, so upon entry to the mall, Starbucks sends a coupon if the guest doesn’t come in the store on that particular visit.

Big Data As Revenue Generator

Taking the API example above a step further, companies can combine data from multiple sources to run even more targeted research or marketing campaigns. Simply put, Big Data today combines incredible computing power with the ever-increasing amount of publicly (and, when individuals allow, privately) available data to drive certain outcomes, or to uncover previously unknown trends and information.

There are a variety of dramatic examples recently cited in this space. Among them, a realization that by combining information on insurance claims, water usage, vacancy rates, ownership and more, some municipalities have been able to predict what properties are more likely than others to experience failure due to theft or fire. What makes this possible is technology’s nearly unimaginable ability to crunch inconceivably large data sets that no longer have to live, or even be moved, to a common system or platform.

In the past, to study data one had to build a repository or data warehouse to store the data to be studied. One had to use tools (typically ETL) and convoluted twists and orchestrations of steps to get data into one place to be studied. At the same time, one had to look at subsets of data. One couldn’t study ALL the data – it just wasn’t an option. Now it is.

In another dramatic and oft-cited example, it was discovered that by looking at the pattern of Google searches on “cold and flu remedies,” Google was better able than even the CDC to predict the pattern of influenza outbreaks. Google had access to ALL the data. The data set equaled ALL. The CDC only had reports, or a subset of the data to look at. The larger the data set, oftentimes the more unexpected the result. It’s counter-intuitive, but true.

How and where revenue models work within this space is evolving, but anyone who’s noticed more and more digital and mobile ads that are more and more appropriate to their likes (public or private) is experiencing Big Data at work, combined with the API Economy.

Mobile As A Revenue Generator

Mobile First. It’s a mantra we hear today that simply means companies must realize that their customers and employees are increasingly likely to access the company’s systems using a mobile platform first, ahead of a desktop or laptop device. It used to be an afterthought, but today a company must think first about deploying its applications to support the mobile-first reality. But how can mobile become a unique revenue driver?

Outside of apps in the Android and Apple App stores, companies struggle with use cases for mobile platforms. This author has seen the implementation of many creative use cases, and there are countless others yet to be imagined. Many retailers have followed Apple’s model of moving away from fixed checkout stations to enable more of their employees to check people out from a mobile device anywhere in the store. Similarly, many retailers have moved to tablets to enable their floor employees to remain engaged with consumers and to show the customers some catalog items perhaps not on display. The mobile platform can even be used to communicate with a backroom clerk to bring out a shoe to try on while remaining with the guest. It’s a great way to reduce the risk of an early departure. A strong search application might also allow the retailer to show the customer what other guests who looked at the item in question also looked at (think Amazon’s famous “others who bought this also bought these” feature). We’ll discuss mobile further later in this paper.

Security and Compliance

Not enough can be said about security. All major vendors have introduced products and solutions with robust new security enhancements, from perimeter hardening to intrusion detection. Yet security begins at home. Companies must begin with a complete assessment of their unique needs, capabilities, strengths and weaknesses. There’s no such thing as too much security, but there is such a thing as too much friction. A bank might know it can require fingerprint verification, photo ID and more at the point of sale prior to opening a new account. Yet that same bank might also acknowledge that too high a barrier could turn away the very customers it needs to attract. It’s a delicate tightrope to walk.

All experts agree that the primary security threats are internal. More theft happens from within an organization than from without. Proper procedures and controls are critical steps all enterprises must take.

The following areas can be loosely grouped under the Security and Compliance umbrella. While not all of them fit tightly, it is the most useful grouping for our purposes.

Mobile Device Diversity And Management

Mobile’s everywhere. Who among us doesn’t have at least one, if not several, mobile devices within arm’s reach at all hours of the day? It’s often the first thing we check when we rise in the morning and the last thing we touch as we lie down at night. How can companies today hope to manage their employees’ desire to bring their own devices to work and plug into internal systems? Then there’s the concern of choosing which platforms can and should be supported to allow for required customer and trading-partner interaction, not to mention securing and managing these devices.

A thorough study of this topic would require volumes and far greater depth than is intended here. Yet a comprehensive device strategy is requisite today for any midmarket-to-enterprise company, and failure to recognize that fact will lead only to disaster. A laptop left in a cab or a cell phone forgotten in a restaurant can lead to a very costly data breach if the device isn’t both locked down and enabled with a remote-management capability that allows immediate data erasure.
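The lost-device flow just described reduces to simple decision logic: if the device is enrolled for remote management, queue an erase; if not, fall back on encryption; if neither protection exists, treat it as a potential breach. Everything in this sketch, the `Device` record, the status values, and the queued erase command, is hypothetical and stands in for a real MDM platform’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    device_id: str
    encrypted: bool          # data at rest is locked down
    mdm_enrolled: bool       # device can receive remote commands
    status: str = "active"   # "active" or "lost"
    pending_commands: list = field(default_factory=list)

def report_lost(device: Device) -> str:
    """Mark a device lost and queue an immediate remote wipe if possible."""
    device.status = "lost"
    if device.mdm_enrolled:
        # The wipe executes the next time the device phones home.
        device.pending_commands.append("erase_all_data")
        return "wipe queued; executes on next check-in"
    if device.encrypted:
        return "cannot reach device; data protected by encryption only"
    return "UNPROTECTED: potential data breach, escalate to security team"

laptop = Device("laptop-042", encrypted=True, mdm_enrolled=True)
outcome = report_lost(laptop)
```

Note that the third branch is the costly one: a device that is neither enrolled nor encrypted is exactly the laptop-in-a-cab scenario described above.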
As to enabling customer and partner access: As stated earlier, companies are made up of people, and people are more likely to access a system today from a mobile device than from a desktop. It’s crucial that companies engineer for this reality. A robust mobile application isn’t enough. Customers demand a seamless, fully integrated front-end-to-back-end experience on mobile that mirrors desktop quality.


Cloud

So what is a cloud, and why is everyone talking about it? As most realize by now, cloud is a loose term for any application or system NOT hosted, run or managed in the more traditional on-premises way. In the past, a mainframe ran an application accessed by dedicated terminals. This evolved to smaller (yet still large) systems, also with dedicated “slaved” terminals, and later to client-server architecture in which the client could run its own single-user systems (think the MS Office suite) as well as access back-end applications.

Cloud applications are most easily thought of as entire systems that are simply run elsewhere. Apple’s App Store, online banking and examples too numerous to mention are typical. Companies today can have their traditional systems hosted, run and/or managed in the cloud, but can also architect on-premises clouds, or even hybrid on- and off-premises clouds.

The advantages are numerous, but so are the pitfalls. Leveraging someone else’s expertise to manage systems can produce great returns. Yet companies often fail to properly negotiate quality-of-service guarantees and SLAs appropriate to their unique needs, then complain of poor response times, occasional outages and the like. We can’t even begin to address here the security implications of the cloud, including what customer or patient data can, and cannot, be stored off premises.
There are countless consulting firms that can guide companies through the messy landscape of cloud options and providers.
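One concrete item worth pinning down in any SLA negotiation is what the headline uptime percentage actually permits. The arithmetic is simple and worth doing before signing; this small sketch converts an uptime percentage into allowed downtime per year:

```python
# Translate an SLA uptime percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes (ignoring leap years)

def allowed_downtime_minutes(uptime_pct: float) -> float:
    """Minutes per year a provider may be down and still meet the SLA."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    minutes = allowed_downtime_minutes(pct)
    print(f"{pct}% uptime allows about {minutes:.0f} minutes of downtime per year")
```

The jump matters: a 99% SLA allows roughly three and a half days of outage per year, while 99.9% allows under nine hours. Companies that “fail to properly negotiate,” as noted above, often discover this gap only after the first outage.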

New Initiatives:

Internet Of Things (IoT)

A web-enabled refrigerator? A thermostat that can set itself? A garage door that can send an email indicating it was left open? GPS in cars, watches and phones? These are just a few of the ideas we once thought silly, or simply couldn’t conceive of. But they’re a reality today. Right now we’re like the 1950s science-fiction filmmaker: unable to foresee where all of this will end up, and what we can comprehend today will probably seem silly and simple a decade from now.
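The garage-door example above boils down to one trivial rule: if the door has been open longer than some threshold, notify the owner. Here is a toy sketch of that rule; the sensor values, threshold, and `send_email` stub are all invented for illustration:

```python
import time

OPEN_ALERT_SECONDS = 15 * 60  # alert if the door is open longer than 15 minutes

def should_alert(is_open: bool, opened_at: float, now: float) -> bool:
    """Return True if the door has been open past the alert threshold."""
    return is_open and (now - opened_at) >= OPEN_ALERT_SECONDS

def send_email(message: str) -> None:
    # Stand-in for a real notification hook (SMTP, push service, etc.).
    print("EMAIL:", message)

# Simulated sensor reading: the door opened 20 minutes ago and is still open.
now = time.time()
if should_alert(is_open=True, opened_at=now - 20 * 60, now=now):
    send_email("Garage door has been open for more than 15 minutes.")
```

Nearly every consumer IoT alert, from thermostats to refrigerators, is some variation of this sense-compare-notify loop; the hard parts are the device fleet, connectivity and security around it.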

CIOs and IT decision-makers today face a never-ending list of things to worry about and think about. Today the IoT is a complex maze of what-ifs in which reality seems at times to grow murkier, not clearer. What is clear, though, is that just because something can be done doesn’t mean it should be done. Many of these interesting innovations will lead to evolutionary dead ends, just as surely as the world moved away from VCRs and Walkman cassette players. Tomorrow’s technology will get here, and the path will likely be as interesting as the one we’ve all followed to make it this far.

CIOs must continue to educate and re-educate themselves on technology and consumer trends. This author found himself at a technology conference recently with executives from several global technology companies. Also in attendance was the author’s 17-year-old son, who quickly found himself surrounded by IT leaders asking his opinion on everything from mobile-device preferences to purchasing decisions in malls. When the dust settled, the teenager turned to the author and said rather decisively: “They all have it mostly wrong.”

Cloud, discussed earlier, as well as newer mobile offerings, can also fit under this category of discussion.

Personnel And Staffing

More has likely been written on the topic of personnel and staffing in recent years than on all the above topics combined, and thus we left it as a unique item for final thought. Companies struggle with identifying talent to fill all of their vacant positions. At the same time, the populace complains that there aren’t enough jobs. Where’s the disconnect?
The United States and most democratic nations are at a societal inflection point. As we move beyond the Industrial Revolution into this new Internet or Information Age, there will be pain.
Just as the United States struggled to adjust to the move from an agrarian society to an industrial one, we now struggle yet again.

We must be careful to not make pendulum-like moves without a careful study of the consequences – both intended and otherwise. It’s very difficult to change the momentum of a pendulum, and some policy decisions are difficult to undo.

There is an income gap. Few economists would dispute the point. Wealth continues to accumulate at the top, leaving a growing gap between the haves and the have-nots. Some of this is expected and will settle out over time, yet most requires broad-based policy decisions by lawmakers and corporate leaders alike.

Education reform and immigration reform are the tip of the iceberg we must begin to tackle. Yet we must also tackle corporate resistance to a more mobile workforce. Too often this author hears of a critical technology position that remains vacant because the employer refuses to hire a remote worker who could readily do the job, and perhaps at lower cost, given many candidates’ willingness to accept less in exchange for working from home.

While technology leaders like IBM, Oracle, HP, Microsoft and others have moved quickly to adopt this paradigm, the rest of corporate America has moved rather more slowly.
Wherever we arrive in the future, it will be the result of decisions made today by leaders facing a rather unique set of challenges. Yet never before have we had access to so much information with which to make those decisions.

With careful thought and proper insight, the future looks rather exciting.