How Big Data Crowdsource Strategies Aim To Improve Navigation Charts

Some might be quick to pooh-pooh the fishing and outdoor-recreation industry – especially when it comes to technology. Too bad, because this vertical’s a ripe testbed for technological innovation and application. I’ll repeat an axiom I put forth a few days ago: Major technological advances are driven by two factors – war and entertainment. So it’s no surprise to me that cartography has undergone a recent revolution, led by the manufacturers of recreational fishing and boating electronics and their customers.
The new buzzword in this vertical is crowdsource charting. It’s a big-data project in which the public supplies sonar charting data, which is uploaded and integrated into a master map, then served back to the public as the sum of the community’s edits and additions.
It’s been done before in other forms – Yelp, Google, iTunes and so many other apps and platforms crowdsource reviews, tips, photos and public/government data. But crowdsource cartography is different because it deals with water depths and features – information that’s just as rare, valuable and changeable today as it was 300 years ago, when Blackbeard had to pick his way through to Ocracoke Inlet.
The power of the crowdsource strategy lies in its promise to develop pinpoint depth accuracy fed by near-real-time updates to changing water depths, sandbars and hazards. Most navigation charts were sounded decades ago. In the case of reservoirs, the navigation charts may have simply been created using topographic maps that were surveyed years prior to fill.
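To make the idea concrete, here’s a minimal sketch (in Python, with made-up coordinates, grid size and field names – not any vendor’s actual method) of how a server might fold crowdsourced soundings into a master depth grid:

from collections import defaultdict
from statistics import median

# Hypothetical sounding records: (latitude, longitude, depth in meters, timestamp)
soundings = [
    (35.1132, -75.9871, 3.4, "2014-06-01T10:15:00"),
    (35.1133, -75.9872, 3.1, "2014-07-12T14:02:00"),
    (35.1201, -75.9950, 6.8, "2014-07-12T14:05:00"),
]

GRID = 0.001  # cell size in degrees (roughly 100 meters); a real chart engine would use a finer, projected grid

def cell(lat, lon):
    """Snap a position to a grid cell so soundings from many boats can be merged."""
    return (round(lat / GRID), round(lon / GRID))

# Group every reported depth by grid cell, then take the median per cell to damp
# out bad transducer readings before the cell is folded into the master chart.
by_cell = defaultdict(list)
for lat, lon, depth, timestamp in soundings:
    by_cell[cell(lat, lon)].append(depth)

master_chart = {c: round(median(depths), 1) for c, depths in by_cell.items()}
print(master_chart)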
The first marine electronics company to embrace crowdsource technology was Navionics, which manufactures third-party upgrades and add-ons for all popular electronics platforms. The Navionics app has been downloaded more than 1.5 million times. And now, the Navionics SonarCharts project allows boaters and anglers to record soundings throughout their day, then upload them to a central server for more accurate charts.
Lowrance, a division of Navico, recently launched its Insight Genesis project, which follows a similar strategy, with the difference that Insight Genesis is only compatible with Navico products (Lowrance, Simrad, B&G). Another interesting feature of the Insight Genesis project: Users can upload and use maps for free, but they need to pay a premium to keep them private. That’s a nice bonus option for secretive anglers.
Interestingly, the other major electronics player, Humminbird, hasn’t embraced crowdsource mapping. Its AutoChart program allows users to generate private charts only. But given the fact that Humminbird is geared nearly 100% toward the angling market, the privacy play makes sense.
I think the major takeaway at this point is that crowdsource marine charting is here to stay, and the companies involved will soon possess troves of valuable big data that will only grow in worth over the coming decade as new platforms and businesses find new ways to leverage and monetize it.
Interested in big data? Want to know how to implement big-data architecture and strategy in your enterprise? TxMQ can help. Contact TxMQ president Chuck Fried for a free and confidential consultation: (716) 636-0070 x222, [email protected].

Mobile Data: What It Means To 'Engage Customers In Context'

Here’s a stat to get you thinking:
Only 21% of marketers actively use mobile, but 81% of mobile leaders say that mobile has fundamentally changed their businesses.
Bottom line: If your business touches the public, and you’re not using mobile, then your business is immobile.
The world of mobile-data analytics and marketing is undergoing a revolution. It’s driving new revenue and forging new connections to the public. And it allows businesses to engage customers in context.
What does it mean to engage customers in context? In the simplest terms, it means the ability to serve customers content and experiences that they want within certain surroundings or as events or experiences unfold.
Wimbledon’s a great example. Only a few hundred thousand people can attend the event, and television coverage is often limited to choice matches at inconvenient viewing times. IBM developed the Wimbledon app and used streaming analytics and big data to deliver real-time info on every point in every match on every one of Wimbledon’s 19 courts.
The data involved 101,778 tennis points across 660 matches, corresponding to 852,752 data points. A team of 48 statisticians – all of them high-quality tennis players – provided contextual data (like speed of the serve) to enrich the machine-collected data. All data was combined with historical performance data and live data from the web and social networks, then fed into an advanced set of analytical tools to provide real-time insight to sports analysts, TV presenters and the global audience.
Impressive? Absolutely. But the same tools can be employed within any business that engages customers. Instead of data about serves and historical matches and points totals, businesses can directly engage customers in the act of shopping, or searching, or traveling, or vacationing with information that incorporates social-media activity, preferred brands, coupons, recent purchases, weather forecasts and so on. Businesses that provide value, or important information, or community – in other words, businesses that engage their customers in context – realize a much greater ROI on their marketing spends.
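As a toy illustration only (the rules, field names and offers below are invented – this is not IBM’s Wimbledon stack or any real product), engaging a customer in context can be as simple as matching what you know about the person against what’s happening around them right now:

from datetime import datetime

def pick_offer(customer, context):
    """Pick what to push to a mobile customer based on who they are and what is
    happening around them right now (a toy rules engine, not a real product)."""
    if context.get("weather") == "rain" and "umbrella" not in customer["recent_purchases"]:
        return "20% off umbrellas at the store you just walked past"
    if context.get("location") == "airport" and customer["loyalty_tier"] == "gold":
        return "Complimentary lounge pass for today's trip"
    if datetime.now().hour >= 17 and "coffee" in customer["preferred_brands"]:
        return "Evening decaf coupon for your favorite roaster"
    return "Standard weekly newsletter"

customer = {"recent_purchases": ["sneakers"], "loyalty_tier": "gold",
            "preferred_brands": ["coffee"]}
context = {"weather": "rain", "location": "downtown"}
print(pick_offer(customer, context))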
That’s what it means to engage customers in context, and that’s why it’s so important.
It’s not automatic though. Efforts to engage in context take foresight, solid application integration and a business climate ready to embrace change within the new mobile landscape.
Interested in mobile development, deployment or integration? TxMQ can help. Initial consultations are free and communications are always confidential. Contact vice president Miles Roty for more information: (716) 636-0070 x228, [email protected].

IBM Watson & Why I Believe In The Goodness Of Technology

Count me as genuinely excited about IBM’s announcement that researchers are now able to use its Watson cognitive computer for medical research. This is the computer that dusted the all-time human Jeopardy champs in a real-time game. The announcement came a few days after I toured the Computer History Museum in Mountain View, Calif., and stood at the podium of the actual Jeopardy set used for the Watson game.
I want to see cancer gone. I have family members surviving it, and I’ve lost family members to it over the past two years. And although we’ve improved some treatments, it just seems we’re nowhere nearer a cure. A computer like Watson can help. It can essentially synthesize all the world’s data on the disease. It can fairly quickly scan and distill tens or even hundreds of thousands of journal articles about a single topic, whereas a researcher is lucky to be able to read one or two articles a day.
IBM calls the new cloud-based Watson service “Discovery Advisor” – a nod toward a conviction I share, that technology combined with human curiosity and passion is what drives exploration, discovery and advancement.
The fact that we can all now essentially tap into the most powerful computer in the world – a computer unlike any built before – is a comforting light in a world that suddenly seems to be turning darker, with armies on the march that want nothing more than to destroy technology and launch a second Dark Ages.
Here’s a great retrospective video of Watson’s Jeopardy victory, in case you missed it the first time around.

(Photo courtesy of IBM)

IBM's Big Spend: $3 Billion To Reach 7 Nanometers

I get excited when I hear about major new R&D, backed by major investment, all for a major goal. Like this one: IBM’s long-term goal to build a neurosynaptic system with ten billion neurons and a hundred trillion synapses, all while consuming only one kilowatt of power and occupying less than two liters of volume.
As a step toward that goal, IBM is committing $3 billion over the next five years to R&D that pushes the limits of chip technology. Cloud computing and big-data systems pose new demands in memory bandwidth, high-speed communication and power consumption, which in turn demand more horsepower. IBM wants to breed the ultimate thoroughbred, so it’s using the $3 billion spend to push chip technology to smaller and more powerful scales. The R&D teams will include IBM research scientists from Albany and Yorktown, N.Y., Almaden, Calif., and Europe.
What’s really interesting is the semiconductor threshold: IBM says it wants to use the $3 billion to pave the way toward the 7 nanometer plateau (10,000 times thinner than a strand of human hair). IBM researchers and other semiconductor experts predict that while challenging, semiconductors show promise to scale from today’s 22-nanometer standard down to 14 and then 10 nanometers in the next several years. However, scaling to 7 nanometers (and perhaps below) by the end of the decade will require significant investment and innovation in semiconductor architectures as well as invention of new tools and techniques for manufacturing.
What happens beyond 7 nanometers? Then it’s time to ditch silicon and move to potential alternatives like carbon nanotubes or non-traditional computational approaches such as neuromorphic computing, cognitive computing, machine-learning techniques and quantum computing. So the quicker we get to 7 nanometers, the quicker we break into the promise of, say, quantum computing. And the quicker we break into the next computing revolution, the quicker we reach defining milestones of human history like interstellar travel and the end of disease. I firmly believe that.
(Image: IBM chip timeline)

Latest News And Musings From IBM's iSeries 5 (i5)

With the advent of IBM’s Power Systems, the traditional iSeries (AS/400) and pSeries (AIX) lines have merged into a single hardware platform. The AS/400 – iSeries – i5/OS (whatever you want to call it) is now 25 years old, and that’s a long run in the technology world. Experts have predicted the demise of the iSeries with every new flavor to hit the market. A post from a couple of years ago by the Info-Tech Research Group dispelled several myths surrounding that supposed demise: Is IBM i a dying platform, or still going strong?
The points made there remain true and grow stronger with each passing release and technology update. The “green screen” still exists, but IBM and business-partner ISVs have been expanding the delivery methods, which now include:

  • WebSphere
  • Tomcat
  • Apache web server
  • Lotus suite
  • DB2/i Web Query
  • iSeries Navigator for the web
  • Development tools including Java, C/C++, PHP, etc.
  • ERP software advancements: SAP, Oracle/PeopleSoft/JD Edwards, etc.

Several items on this list have been part of the iSeries for many years, yet many people never knew about those capabilities. With the latest technology release, IBM has upgraded and improved many web-development areas – Java, ARE (Application Runtime Expert), free-form RPG coding – plus several hardware features, including:

  • Smaller form factor for SSDs, in both 387GB and 775GB capacities
  • New 1.2TB 10K RPM SAS drive
  • Higher levels of cache on SAS RAID adapters
  • Continued SQL improvements for DB2/i.

The future of technology is the ever-changing presentation of information to consumers – iPhones, Android devices, tablets, etc. – by way of the ability to process data, analyze it, turn it into information, and then present it in whatever “omni-channel” the consumer chooses. IBM and the iSeries will continue to flourish and grow with more integration, more big-data processing, maximum uptime and one of the lowest total costs of ownership in the marketplace.
TxMQ can help support your hardware needs. Look to us as a full-service solutions provider, from Power Systems and iSeries sales to support. Call Miles Roty at (716) 636-0070 x228, or email [email protected] for more information.

What Is Big Data?

Big Data.
We’re seeing the term tossed around today the way ‘e-commerce’ was tossed around in the late ’90s. So what is Big Data, and what’s all the fuss about? Some history is in order to set the stage.
Statistics evolved as a science based on using samples of data to draw conclusions about a larger population. As an example, a survey company might poll a group of people shopping in a mall on a given day to ask what they’re buying, how long they plan to be in the mall, or what brought them out that day. The company would decide it needed answers from a sample equal to some percentage of the total foot traffic it expected in the mall that day. From that sample, it could extrapolate what the answers would have been had it theoretically polled 100% of all mall shoppers.
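For illustration, here’s roughly what that sample-and-extrapolate arithmetic looks like (the shopper counts and buying rate below are invented):

import random

# Hypothetical mall survey: poll 200 shoppers out of an expected 8,000,
# then extrapolate the sample proportion to the whole population.
population = 8000
sample_size = 200

# Simulated answers: did the shopper buy something today?
answers = [random.random() < 0.35 for _ in range(sample_size)]

sample_rate = sum(answers) / sample_size
estimated_buyers = round(sample_rate * population)
print(f"{sample_rate:.0%} of the sample bought something; "
      f"roughly {estimated_buyers} buyers estimated mall-wide")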
For most of history, this was the only way to look at data. Analyzing ever-larger sample sizes wasn’t feasible given the tabulating ability, and later the computational ability, of the systems of the day.

Enter Big Data

Big Data is simply a data set where the sample size (n) equals all. There is no sampling. All, or nearly all, of the data is analyzed. What we find is remarkable: in addition to far more detailed information, correlations appear where none would have been visible in the past.
In one interesting example, Walmart looked at some of its purchase data a few years ago.
Walmart has captured and stored essentially 100% of its customer transactions for years. In a study of what products people purchased in the lead-up to major forecast storms (hurricanes, tornadoes, etc.), it found the usual items one would expect: water, batteries and so on. What it also found, unexpectedly, was Pop-Tarts. Purchasers of storm-related items were far more likely than chance would suggest to also buy Pop-Tarts.
No effort was made to study the ‘why’ of this data point, just the what. Big Data can’t tell us why something is, only that it is. Walmart began reconfiguring its stores in storm paths to place end caps of Pop-Tarts near the other supplies, in addition to increasing its stock of those items, and sales soared.
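As a rough sketch of the kind of full-scan analysis that can surface a Pop-Tarts-style correlation (the baskets below are invented), a basic co-occurrence ‘lift’ calculation over every transaction might look like this:

from collections import Counter
from itertools import combinations

# Hypothetical full transaction log ("n = all"): every basket, not a sample.
baskets = [
    {"water", "batteries", "pop-tarts"},
    {"water", "pop-tarts"},
    {"bread", "milk"},
    {"water", "batteries", "flashlight", "pop-tarts"},
    {"milk", "pop-tarts"},
]

item_counts = Counter()
pair_counts = Counter()
for basket in baskets:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

n = len(baskets)
# Lift above 1.0 means two items appear together more often than chance predicts:
# the kind of unexpected link (storm supplies plus Pop-Tarts) that only surfaces
# when every transaction is scanned instead of a sample.
for (a, b), together in pair_counts.most_common(5):
    lift = (together / n) / ((item_counts[a] / n) * (item_counts[b] / n))
    print(f"{a} + {b}: lift {lift:.2f}")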
Coming up next…the truth is in the noise…working with messy data.
TxMQ has a Business Intelligence practice that helps companies work with and manage large data sets to derive actionable information from them. Contact an account executive or [email protected] today for a free initial consultation.
Chuck