
The Internet of Trustworthy Things: Can We Trust Emerging Technologies?

Blog: The Tibco Blog

As first appeared on Tech Target’s IoT Agenda

With the expansion of the Internet of things, objects that were once “dumb” now have the capacity for “intelligence,” and they are becoming integral to our everyday lives, whether we like it or not. But what happens if those things can’t be trusted?

In January 2016, Google-owned Nest Labs fell under scrutiny when its self-learning thermostats stopped working during one of the coldest times of the year, leaving thousands of users without heat in their own homes. Then, in July of the same year, Nest thermostats stopped working again, this time during a widespread heatwave across the United States.

In December 2016, police in Bentonville, Ark., served Amazon with a warrant demanding any audio recordings from an Echo customer, believing the device might have overheard and recorded evidence of a homicide (or at least of foul play leading up to the death of the homeowner’s friend).

Despite the growing number of devices now available for in-home use, the Internet of things doesn’t end when we leave our homes. In fact, more and more objects around us will continue to be embedded with tiny sensors and radios that allow them to connect to the Internet and to one another. For example, the cars we drive today are joining the IoT movement and are beginning to communicate with the cloud, other cars, and even objects in the environment such as street signs, traffic lights and the roads themselves.

Most of us are likely familiar with the smart car company Tesla Motors and its fleet of all-electric vehicles. What many people aren’t familiar with is the Autopilot feature now available in many of its models. Tesla enabled this feature by remotely issuing an over-the-air update that the cars downloaded and installed directly. Autopilot allows drivers to more or less sit back and relax while their cars drive themselves; similar technology is also being developed by BMW, Audi, Volvo, and Google, among others.

Traditionally, computers and other electronics must follow a predefined set of instructions provided by their creators. If a function or task isn’t included by programmers in that predefined list of instructions, the “thing” isn’t aware of it and therefore can’t perform it. This makes it possible for us to trust that our electronics will do only what they have been programmed to do and nothing more. After all, computers can’t lie—or can they?
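To make that contrast concrete, here is a toy sketch of the traditional paradigm in Python. The function name and temperature thresholds are hypothetical, invented purely for illustration; the point is simply that every behavior the device can exhibit has been enumerated in advance by a programmer.

# A toy illustration of the traditional paradigm: the device can act
# only according to rules spelled out ahead of time. The names and
# thresholds here are hypothetical, not any real thermostat's firmware.
def thermostat_step(current_temp_f: float, target_temp_f: float) -> str:
    # Every behavior is enumerated in advance; if no rule matches a
    # situation, the device has no concept of it and does nothing new.
    if current_temp_f < target_temp_f - 1.0:
        return "heat_on"
    if current_temp_f > target_temp_f + 1.0:
        return "heat_off"
    return "hold"

print(thermostat_step(64.0, 70.0))  # -> heat_on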

For things to truly become “smart,” technologies (including those mentioned above) now employ a type of artificial intelligence called machine learning. This allows them to adapt to unforeseen circumstances and, more or less, to evolve beyond their initially programmed capabilities. For example, there is no way for auto manufacturers to identify and program for every possible scenario a self-driving car might encounter. Instead, they apply machine learning, training algorithmic models on the data they do have (such as what a street sign looks like compared with what a person on a bicycle looks like). They feed these algorithms as much known data as possible, then test the resulting models on additional data the algorithms were not trained on, to determine how well they perform on unfamiliar input, such as that found in the real world.
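As a rough illustration of that train-then-test workflow (and emphatically not any manufacturer’s actual pipeline), the Python sketch below fits a model on labeled data it is given, then evaluates it on held-out data it has never seen. The features and labels are synthetic stand-ins for real sensor data such as camera frames.

# A minimal sketch of the train-then-test workflow described above,
# using scikit-learn. The feature vectors are made-up stand-ins for
# features extracted from real sensor data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical labeled data: each row is a feature vector from one
# camera frame; label 0 = street sign, label 1 = person on a bicycle.
X = rng.normal(size=(1000, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic ground truth

# Hold out data the model never trains on, to estimate how it will
# behave on unfamiliar, real-world input.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

The held-out score is only an estimate: it says how well the model generalizes to data drawn from the same distribution it was trained on, which is exactly why behavior on truly novel real-world input remains the open question the rest of this post worries about.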

This doesn’t mean that computers and other electronics now have the ability to rise up and take over the world like pop culture wants us to believe. But it does mean that those things are now capable of doing more than they can under traditional programming paradigms. It also means that those things are now capable of doing something completely unexpected and unintended—something programmers haven’t planned for.

As an example, what happens if a self-driving car learns to be aggressive and to take whatever actions are necessary to avoid damage to itself? Since there are no hardcoded rules in self-driving cars that specifically define what a street sign looks like versus a person on a bicycle, it is up to the car itself to learn the difference and to treat them accordingly (i.e., the street sign is not expected to move, but the person on the bicycle is). What happens if the car learns that damage to itself is minimized by hitting the person on the bicycle rather than the sign when there are no other possible actions to take? Or what happens if the car cannot distinguish between the two at all and doesn’t stop or swerve to avoid hitting either the moving bicyclist or the sign? Who would be held accountable in this situation? The bicyclist? The car manufacturer? The cloud service provider who developed and trained the algorithmic models? All of the above? Typically, the driver of the car could be held accountable, but there is no human driver in this picture. And what if the human passenger doesn’t own the vehicle (as with a taxi or an Uber)? What if there are no passengers in the car at all?

Going back to the cases where IoT thermostats failed during harsh weather, who should be held accountable if such failures lead to injury or even death? And since many manufacturers now build devices that depend on an Internet connection to function at all, who bears responsibility for a malfunction? If the connection fails, rendering the device incapable of performing, should the Internet service provider be held accountable?

As exciting as emerging technologies appear on paper, there is still an underlying concern about whether we can trust them. For example, during commercial flights, pilots will sometimes engage the autopilot system, and it never concerns us for two reasons: first, we don’t even know when the autopilot has been engaged, since we can’t see into the flight deck; and second, we feel comfortable knowing and trusting that there is a (hopefully) well-trained human pilot ready to take over in the event something goes awry. In the Tesla Autopilot example, the driver can likewise take control of the car at any time if it seems to be performing erroneously. However, not all scenarios will include human failsafe measures.

Both autopilot-enabled aircraft and autonomous cars are required to go through rigorous development and testing procedures before the public can trust them to operate safely. Nevertheless, systems can and do fail. As more things become dependent on the cloud and are equipped with the ability to think and plan for themselves, we must continue to question whether they can be trusted. It is therefore up to us to put pressure on product manufacturers and service providers to make sure the “things” we are bringing into our homes and trusting with our lives will do the right thing, or at least continue to remember what the right thing is. Questions such as who should be held responsible when something goes wrong are continuously up for debate. We all need to voice our concerns, demand transparency and require accountability so that every iteration is better than the last.

Whether it is thermostats that learn the patterns of their owners and know when to turn on and off and at what temperatures, or home assistants such as the Amazon Echo or Google Assistant that literally listen to every word we say, the things we assume will simplify our lives are becoming smart and, therefore, a bit creepy. With respect to privacy, security, and dependability, the manufacturers of these devices tell us there is nothing to worry about and that we should “just trust them.” But does it really have to be that way? Should we take the word of device manufacturers and service providers at face value? Should we “just trust” that our things are doing only what their manufacturers say they do? Should we “just trust” that our cars won’t learn that it is better to hit a pedestrian than to damage themselves? Should we “just trust” that our private data won’t somehow be used against us? Food for thought.

Even though technologies will continue to improve as time progresses, it is still up to us to provide feedback, hold people accountable and contribute to the safe operation of those technologies. It is also up to us to help answer the ethical and legal questions with respect to their use, misuse, and abuse. It is up to us to make sure the “machine uprising” doesn’t happen and that our things continue to enrich our lives—and not destroy them.
