Do You Have the Right Analytics Platform?

If you’re at the point of determining the optimum IoT analytics framework for your business unit, then you’re like the vast majority of unit managers today. You’ve already made one major decision; now you’re grappling with the other.

You’ve already determined that a cloud-based IoT analytics strategy is best – what many are calling the Cloud of Things. Now you’re investigating the best option for your cloud-based platform: build in-house, buy a cloud solution for your business unit, or invest in an organization-wide analytics architecture.

You can start by looking at a big-picture truism: that deployments of the Internet of Things, of big data, and of cloud computing are still in their infancy. What will the emergence of solid-state battery technology mean to the auto industry, for instance? How much and what types of data will a fully equipped smart city generate in 10 years? Twenty?

For that matter, what will machine learning and robotics look like in the next decade? We can only guess at the answers. And we can only guess at how analytic applications will evolve to keep pace.

Then, another truism: that the basic principles of software design will not change. Remember the “spaghetti-code” problems of the past, when developers tried to heap code upon code, new upon old, in order to keep old applications up to date? As programming languages grew in sophistication, businesses were hard-put to find developers who were still familiar with the old constructs.

A similar problem could arise with IoT analytics, as businesses bring in analytic applications to solve specific problems, then add other applications to tackle new problems. These applications will likely be cloud-based, but those clouds won’t necessarily predict fair weather. They’ll risk growing into unwieldy mixtures of public, private and hybrid architectures, and they’ll portend additional layers of management services.

Needed: An Open Framework
These are some of the risks you’ll take by turning to off-the-shelf or in-house-built “point” analytic applications. For the majority of applications, an open framework that can support standards-based as well as custom components will be the best way forward.

What should it look like? For handling data sources and targets, such a framework will connect with a variety of standard database technologies, from SAP HANA to Hadoop. It will provide hooks for custom databases too, since all of these together will be needed to make up the data lakes that will hold any and all types of structured, unstructured and binary data. The framework will support other standards, too, from EDI and XML for commerce data to REST and SOA for cloud and enterprise applications.
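To make the connector idea concrete, here is a minimal sketch of how such an open framework might expose one common interface to both standards-based and custom data stores. All class, method and function names here are hypothetical illustrations, not any vendor’s actual API.

```python
from abc import ABC, abstractmethod


class DataSource(ABC):
    """Common interface an open framework could expose, so standard and
    custom stores plug into the same data-lake pipeline. Illustrative only."""

    @abstractmethod
    def read(self, query: str) -> list:
        ...


class HanaSource(DataSource):
    """Stand-in for a standards-based connector (e.g. SAP HANA via SQL)."""

    def __init__(self, rows):
        self._rows = rows

    def read(self, query):
        # A real connector would execute `query` against the database.
        return self._rows


class CustomSource(DataSource):
    """Hook for a proprietary store: same interface, custom transport."""

    def __init__(self, fetch_fn):
        self._fetch = fetch_fn

    def read(self, query):
        return self._fetch(query)


def ingest(sources, query):
    """Pull from every registered source into one combined result set."""
    lake = []
    for src in sources:
        lake.extend(src.read(query))
    return lake
```

Because every source honors the same `read` contract, the framework can mix SAP HANA, Hadoop and one-off custom stores in a single ingestion pass:

```python
rows = ingest(
    [HanaSource([{"id": 1}]), CustomSource(lambda q: [{"id": 2}])],
    "SELECT 1",
)
```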

Needed: A Robust Development Environment
An ideal application development environment will be one that facilitates teamwork among business users, data scientists and developers, and that is open-ended enough to leave room for a future of near-infinite innovation.

The environment should be compatible with – and capable of generating – popular IoT analytics technologies such as Spark and Hive; it should employ dashboards and wizards, and it should support building-block design to help business-analyst teams assemble functional blocks to create, say, complete KPI data streams. It should encompass wide-ranging modeling capabilities and be able to generate the massive time-series data cubes that are integral to building past-present-future analytics. And it should possess robust visualization capabilities, including visual flow models to depict drill-ins, drill-downs and roll-ups, as well as highly complex geospatial and chart overlays.
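The building-block idea above can be sketched in a few lines: each block is just a function over a stream of records, so chaining blocks yields a complete KPI series. This is a toy illustration under assumed record shapes; the block names (`parse_ts`, `filter_device`, `rollup_hourly`) are invented for this example.

```python
from collections import defaultdict
from datetime import datetime


def parse_ts(records):
    """Block 1: parse ISO timestamps into datetime objects."""
    for r in records:
        yield {**r, "ts": datetime.fromisoformat(r["ts"])}


def filter_device(records, device):
    """Block 2: keep only the readings for one device."""
    return (r for r in records if r["device"] == device)


def rollup_hourly(records):
    """Block 3: roll readings up into an hourly average -- a one-dimensional
    slice of the kind of time-series cube described in the text."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r["ts"].replace(minute=0, second=0)].append(r["value"])
    return {hour: sum(v) / len(v) for hour, v in sorted(buckets.items())}


readings = [
    {"ts": "2024-01-01T10:05:00", "device": "pump-1", "value": 10.0},
    {"ts": "2024-01-01T10:35:00", "device": "pump-1", "value": 20.0},
    {"ts": "2024-01-01T11:05:00", "device": "pump-2", "value": 99.0},
]

# Chain the blocks: one KPI point per hour for pump-1.
kpi = rollup_hourly(filter_device(parse_ts(readings), "pump-1"))
```

The point of the design is that analysts rearrange blocks rather than rewrite logic: swapping `rollup_hourly` for a daily rollup, or `filter_device` for a site filter, changes the KPI without touching the pipeline.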

Needed: A Production-Quality Analytics Engine
This is the elephant in the room, as it were, the one thing that can’t, or shouldn’t, be missed. The framework engine should be blazingly fast, to handle event-time processing and real-time analytics, not to mention the compute-heavy machine learning models of the future. It should employ specially optimized modules for handling specific high-cycle functions, with modules dedicated to KPIs, live collection, or predictive analytics, for example. And it should be able to translate everything into action – into automated processes – instantly. That, plus being able to do it at scale, is what “production quality” analytics will demand.
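As a rough sketch of what one such dedicated module might do, the snippet below keeps a sliding window of recent readings and fires an automated action the moment the windowed average crosses a limit – real-time analytics translated directly into process automation. The class and its parameters are hypothetical, and a production engine would of course do this across millions of streams in parallel.

```python
from collections import deque


class ThresholdModule:
    """A toy 'high-cycle' module: sliding-window average with an
    automated action triggered when the average exceeds a limit."""

    def __init__(self, window=5, limit=80.0, action=print):
        self.window = deque(maxlen=window)  # only the newest readings count
        self.limit = limit
        self.action = action  # the automated process to invoke

    def process(self, value):
        self.window.append(value)
        avg = sum(self.window) / len(self.window)
        if avg > self.limit:
            self.action(avg)  # translate the insight into action instantly
        return avg


# Feed in a stream of readings; collect every triggered action.
alerts = []
mod = ThresholdModule(window=3, limit=50.0, action=alerts.append)
for v in [10, 40, 90, 95]:
    mod.process(v)
# The alert fires only once the three-reading average crosses 50.
```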

Check out Vitria’s WHAT IF white paper to learn how service operations performance can benefit from an advanced analytic solution.
