Epic Application Failure

how to liberate our data

by Glen A. Allmendinger, Founder and President of Harbor Research

Systems and Politics

Prior to the Iowa Caucus of 2020, Democratic Party operatives decided to build a phone app for collecting voting results from around the state and sending them back to headquarters. The protagonists of this story knew about the hard-won best practices for designing and building software. They just chose to ignore them.

They did no research into the people who would be using their tool. They paid no attention to security issues. They did no real-world testing. In fact, they kept the app secret from its users during development, then sprang it on them at the last minute. When users didn’t know what to do with it, they were told to “play around with it.”

It seems unbelievable, but it happened. Not surprisingly, the app failed spectacularly and the Iowa Caucus became an embarrassment for many people and institutions. The one we’re most concerned about is technology itself. Within hours of the news, many voices around the Net were saying the same thing: You can’t bring elections online. Our elections are sacred. That environment can’t be trusted.

On one level, it’s nonsense. Our banking system and securities markets are fully digital, online and automated. The same is true for most other operations, from factories to weapons systems. There’s no reason that digital systems can’t be used in our elections.

But on another level, we understand the misgivings of the common person. They understand that digital reality is too small and fast for human perception, that there’s no real basis for trusting it. Our elections are sacred, and major corporations lose control of our data all the time. That’s bad enough when a person’s online identity is compromised. Imagine if we couldn’t know whether or not our elections were valid.

Information Islands

In the course of the last two decades, the world has become so dependent upon the existing ways computing is organized that most people, inside IT and out, cannot bring themselves to think about it with any critical detachment. Even in sophisticated discussions, today’s key enabling information technologies are usually viewed as utterly inevitable and unquestionable.

The client-server model underlying today’s computing systems greatly compounds the problem. Regardless of data structure, information in today’s computing systems is machine-centric because its life is tied to the life of a physical machine and can easily become extinct.

All of this adds up to a huge collection of information-islands whether on your servers, your service provider’s servers or anywhere else. Even assuming the islands remain reliably in existence, they are still fundamentally incapable of truly interoperating with other information-islands.

This is the issue with all of the so-called IoT platforms that have flooded the market—they are really “data traps” and information islands. We can create bridges between them, but islands they remain, because that’s what they were designed to be.

From our perspective, this description of today’s IoT systems architecture looks very familiar: it is largely organized like client-server based computer systems. This should come as no surprise; those architectures were designed in the 1990s.

Peer-to-Peer

However, we can now begin to imagine an application environment with widely diverse operational technology (OT) computing devices running applications dispersed across sensors, actuators and other intelligent devices, each sharing and leveraging the compute power of the whole “herd.” In a smart building application, for example, the processor in an occupancy sensor can be used to turn the lights on, change the heating or cooling profile or alert security. In this evolving architecture, the network essentially flattens until the end-point devices are merely peers and a variety of applications reside on one or more OT computing devices.
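To make the smart building example concrete, here is a minimal sketch of what such peer-to-peer coordination might look like. The event bus, topic name and device handlers are illustrative assumptions rather than a description of any particular product; the point is simply that the occupancy sensor’s event drives lighting, HVAC and security logic running on peer devices, with no central server in the loop.

```python
# Minimal sketch: peer devices coordinating through a shared event channel,
# with no central cloud service in the loop. All names are illustrative.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class PeerBus:
    """In-process stand-in for peer-to-peer messaging between OT devices."""
    subscribers: Dict[str, List[Callable[[dict], None]]] = field(default_factory=dict)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self.subscribers.get(topic, []):
            handler(event)


bus = PeerBus()

# Each "device" registers its own local application logic as a peer.
def lighting_controller(event: dict) -> None:
    print(f"lights: turning on in zone {event['zone']}")

def hvac_controller(event: dict) -> None:
    print(f"hvac: switching zone {event['zone']} to occupied profile")

def security_monitor(event: dict) -> None:
    if event.get("after_hours"):
        print(f"security: after-hours occupancy in zone {event['zone']}")

bus.subscribe("occupancy", lighting_controller)
bus.subscribe("occupancy", hvac_controller)
bus.subscribe("occupancy", security_monitor)

# The occupancy sensor's own processor publishes the event to its peers.
bus.publish("occupancy", {"zone": "3F-east", "after_hours": True})
```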

From our view the movement towards peer-to-peer, and the view that many people hold that this is somehow novel, is ironic given that the Internet was originally designed for peer-to-peer interactions. We seem to be heading “back to the future.”

We’ve been saying this for years. Today’s computing systems were not really designed for a world driven by pervasive information flow and are falling far short of enabling adaptable real-time intelligence. This intelligence is based on data, relationships and interactions. If we mapped these elements we would find that today’s applications are so monolithic that they force organizations to recreate the same data, relationships and interactions over and over again, creating huge redundancies and vast replication because of the closed nature of how data is used today.

Instead of Liberating Data We Trap It

When telephones first came into existence, all calls were routed through switchboards and had to be connected by a live operator. It was forecast that if telephone traffic continued to grow in this way, everybody in the world would have to become a switchboard operator. That didn’t happen because we automated the systems that handled common tasks like connecting calls.

The tools we are working with today to make products “smart” were not designed to handle the diversity of device data types, the scope of interactions, and the massive volume of data-points generated from devices. Each new device requires too much customization and maintenance just to perform the same basic tasks. These challenges are diluting the ability of organizations to efficiently and effectively manage development.
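One common way to contain that per-device customization, sketched below, is to normalize each vendor’s payload into a shared reading structure once, at the point of ingestion, so downstream applications are written against one model instead of one per device. The vendor formats and field names here are invented for illustration.

```python
# Sketch: mapping heterogeneous, vendor-specific payloads into one normalized
# reading so downstream applications are written once. Vendor formats invented.
from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class Reading:
    device_id: str
    quantity: str      # e.g. "temperature"
    value: float
    unit: str          # canonical unit, e.g. "celsius"


def from_vendor_a(payload: Dict[str, Any]) -> Reading:
    # Vendor A reports Fahrenheit under "tempF".
    return Reading(payload["id"], "temperature", (payload["tempF"] - 32) * 5 / 9, "celsius")


def from_vendor_b(payload: Dict[str, Any]) -> Reading:
    # Vendor B already reports Celsius, but nests it differently.
    return Reading(payload["device"]["serial"], "temperature", payload["data"]["temp_c"], "celsius")


ADAPTERS: Dict[str, Callable[[Dict[str, Any]], Reading]] = {
    "vendor_a": from_vendor_a,
    "vendor_b": from_vendor_b,
}

raw_messages = [
    ("vendor_a", {"id": "a-17", "tempF": 72.5}),
    ("vendor_b", {"device": {"serial": "b-03"}, "data": {"temp_c": 21.9}}),
]

for vendor, payload in raw_messages:
    print(ADAPTERS[vendor](payload))
```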

This is the move we’ve been waiting for, the transition to a truly distributed architecture. Today’s systems will not be able to scale and interact effectively where there are billions of nodes involved. The notion that all these “things” and devices will produce streaming data that has to be processed in some cloud will simply not work. It makes more sense structurally and economically to execute these interactions in a more distributed architecture near the sensors and actuators where the application context prevails.

Dispersed computing devices will become unified application platforms from which to provide services to devices and users where the applications actually run—where the data is turned into information, where storage takes place, where the browsing of information ultimately takes place too—not in some server farm in a cloud data center. Even the mobile handsets we admire so much today are but a tiny class of user interface and communications devices in an Internet of Things world where there will be 100 times more “things” than humans.
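As a rough illustration of turning data into information near where it is produced, the sketch below keeps a short rolling window on the edge node itself and forwards only alerts and periodic summaries upstream, rather than streaming every raw datapoint to a cloud. The window size, threshold and message shapes are assumptions chosen for illustration.

```python
# Sketch: processing raw sensor data at the edge node and forwarding only
# summaries and alerts. Thresholds and window size are illustrative.
from collections import deque
from statistics import mean
from typing import Optional


class EdgeAggregator:
    """Keeps a short rolling window locally; forwards only what is worth sending."""

    def __init__(self, window: int = 60, alert_threshold: float = 85.0):
        self.samples = deque(maxlen=window)
        self.alert_threshold = alert_threshold

    def ingest(self, value: float) -> Optional[dict]:
        self.samples.append(value)
        if value > self.alert_threshold:
            return {"type": "alert", "value": value}            # forward immediately
        if len(self.samples) == self.samples.maxlen:
            summary = {"type": "summary", "mean": round(mean(self.samples), 2)}
            self.samples.clear()
            return summary                                      # forward a periodic digest
        return None                                             # keep the raw point local


agg = EdgeAggregator(window=5)
for reading in [70.1, 71.0, 90.2, 69.8, 70.4, 70.0, 71.2]:
    outbound = agg.ingest(reading)
    if outbound:
        print("send upstream:", outbound)
```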

Future systems require a different approach to data

The Future Is Trying to Break Out

Machine learning, artificial intelligence and the Internet of Things are all in some way trying to break from today’s computing paradigms to enable intelligent, real-world, physical systems. As these devices and systems become more and more intelligent, they and the data they produce will begin to act like neurons in a brain, ants in an anthill, or human beings in a society: information devices connected to each other.

The many “data nodes” of a network may not be very “smart” in themselves, but if they are networked in a way that allows them to connect effortlessly and interoperate seamlessly, they begin to give rise to complex, system-wide behavior. That is, an entirely new order of intelligence “emerges” from the system as a whole—an intelligence that could not have been predicted by looking at any of the nodes individually. There’s a distinct magic to emergence, but it happens only if the network’s nodes and data are free to share information and processing power. ◆


February 6, 2020