
Augmented artificial intelligence: Will it work?

Blog: Capgemini CTO Blog

One of the ways to use artificial intelligence (AI) is to augment the user: AI assists the human worker rather than replacing them. Augmented intelligence seems to be the way to push the AI revolution forward. But will it work, or is it just marketing talk to gain acceptance and trust?

There’s nothing that prescribes that AI can only be used in an automated way, replacing humans. AI itself is a mechanized way of simulating human thought. As Wikipedia puts it: “Colloquially, the term ‘artificial intelligence’ is applied when a machine mimics ‘cognitive’ functions that humans associate with other human minds, such as ‘learning’ and ‘problem-solving’.” For me, augmented intelligence is not something that opposes artificial intelligence; augmentation is a way we’re going to use AI in real-life situations. As Ben Dickson (software engineer and founder of TechTalks) puts it: “Augmented intelligence refers to the result of combining human and machine intelligence…”

So, what are the reasons to go for augmented intelligence as the desired way of using AI?

Errors and quirks

AI is imperfect, like humans. There are many causes for this imperfection: bad data, bad domain models and biases, incorrect interpretations of outcomes, and so forth. But in the end, the statistical algorithms on which AI is based will always have a mathematical chance of producing erroneous outcomes: a kind of margin of error. The claim is not that AI will be perfect; the claim is that AI will be better than humans. Self-driving cars won’t stop accidents, they will cause fewer accidents, and the accident rate will go down as the AI learns from its errors and successes. Therefore, human supervision is still required, for now.

Empathy and trust

AI’s lack of empathy is regarded as an important issue. Some say that we should first add a layer of empathy to these systems. Why do we need empathic systems? From a design-thinking perspective, it’s a no-brainer. Larry Greenemeier (Scientific American): “We make decisions not just based on rational thinking but also values, ethics, morality, empathy and a sense of right and wrong — all things that machines don’t inherently have.” So, we need humans to add that personal, social, or emotional touch to the decisions. As a side effect, we trust an AI system more when we know that a human is supervising it. We’re humans ourselves, after all.

Exception handling

Most AI systems, like any computer system, are good at bulk handling. But, though exceptions can be detected, they cannot be properly handled. Because the data this technology uses contains insufficient information about these exceptions, it cannot make well-informed decisions. So, in the end, exceptions should be handled by humans. Self-driving cars cannot operate very well in bad weather, so driver involvement is necessary. The question is – does the average driver have enough experience driving in adverse weather conditions to cope with the situations in which AI will fail? I’ll discuss this point later.
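The “bulk versus exceptions” split described above is often implemented as confidence-based routing: the model handles the cases it is sure about, and escalates the rest to a person. A minimal sketch, with a hypothetical `route` function and an assumed confidence threshold (both are illustrative, not from any specific product):

```python
# Human-in-the-loop routing sketch. The threshold and function names are
# assumptions for illustration; real systems tune the cut-off per domain.

CONFIDENCE_THRESHOLD = 0.90  # assumed cut-off for automatic handling

def route(prediction: str, confidence: float) -> str:
    """Return 'auto' for high-confidence bulk cases, 'human' for exceptions.

    The prediction itself is passed through untouched; only the confidence
    decides who acts on it.
    """
    return "auto" if confidence >= CONFIDENCE_THRESHOLD else "human"

# Bulk case: the model has seen thousands of similar examples.
print(route("approve", 0.97))  # prints: auto
# Exception: sparse training data, low confidence, so a human decides.
print(route("approve", 0.55))  # prints: human
```

The design choice here is that the AI never silently handles an exception: anything below the threshold lands in a human queue, which is exactly where the augmentation argument says it should go.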

There are plenty of reasons to choose augmented AI. But will augmented AI work? Will organizations adopt this implementation model? Besides the drivers that support augmented AI, there are forces working against the move. Here are some:

ROI is king

As Tom Rikert (partner at Next World Capital) clearly showed, only using AI to fully automate a process will get you substantial efficiency gains. In my opinion, those gains are what deliver the revenues and savings you need to earn back your investment. Is it worthwhile to spend money on building the system, curating the data, training the AI, and continuously improving the AI system if it can only be used as an assistant? I’m afraid that many beautiful prototypes developed in research centers around the world won’t hit the market because the ROI is just too meager.

Slowing down the process

One of the advantages of using AI-based chatbots is that the time it takes to resolve issues drops sharply. Computers are very fast machines, and that means decisions made by computers are very speedy indeed. For example, most trading on the stock exchange is done by fast computer systems. But they initiate about 12 mini flash crashes a day. Danielle Wiener-Bronner (CNN): “One of the culprits of the Flash Crash was high-frequency trading, where computers are programmed to trade a lot of stocks incredibly fast. It was a bizarre domino effect kicked off by rapid trading algorithms.” Humans are just too slow to see flash crashes coming, let alone prevent them.

Computers are also very good at processing large quantities of data – quantities we cannot even comprehend. So how can a human check a decision based on vast amounts of data? With augmented AI, the human supervisor becomes the bottleneck. Are you willing to sacrifice efficiency gains for human supervision?
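The bottleneck is easy to quantify with a back-of-the-envelope calculation. All the rates below are assumptions chosen purely for illustration, but the shape of the result holds for any realistic numbers: even reviewing a small fraction of AI decisions can swamp the human side.

```python
# Supervision-bottleneck sketch; every rate here is an assumed example figure.
ai_decisions_per_hour = 10_000   # assumed AI throughput
human_reviews_per_hour = 30      # assumed throughput of one human reviewer
review_fraction = 0.05           # suppose only 5% of decisions need a check

reviews_needed = ai_decisions_per_hour * review_fraction       # 500 per hour
reviewers_required = reviews_needed / human_reviews_per_hour

print(round(reviewers_required, 1))  # prints: 16.7
```

Under these assumptions, sampling just one decision in twenty already requires roughly seventeen full-time reviewers per hour of AI output, which is the efficiency trade-off the question above is pointing at.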

Empowering the humans

This is one of my major issues with AI systems. Are you willing to empower your users to overrule a decision or advice from a multi-million-dollar system? With IBM Watson Health, highly trained and experienced doctors can overrule the system when they deem something wrong, but do your clerks have that power? Probably not. As Lynda Blackwell (former architect at the Financial Conduct Authority) says: “Bigger mortgage companies are operating a tick-box mentality, showing no flexibility…” I’m afraid that in these types of company cultures there’s no value in augmentation, because no one is allowed to question the AI’s outcomes. “Computer says no!” remains “computer says no!”

Brain-drain

We not only have to be empowered to challenge the advice and decisions made by AI-based systems but also qualified to do so. We should have enough knowledge and experience with the domain this technology is helping us with. If we lack such knowledge, we’re not the right counterpart for the AI system.

I like to boast about my sense of direction. I’ve learned how to use paper maps, the environment, the sun, to find my way around. I understand that people who lack these skills – and there are a lot of them – rely heavily on satnav systems. But young people, who don’t necessarily lack a sense of direction, seem lost without navigation on their mobiles. Why? Because they haven’t been trained to move around without augmented directions.

Changing our behavior

On the other hand, there is a more cultural aspect to the use of AI. Technological innovation changes the way people perceive their environment and has profound effects on their behavior and attitudes. When you implement AI-based systems in your organization, your employees will behave differently. I don’t mean to imply that they will obstruct or abuse the AI. I mean that they will adapt to the new AI assistant. Maybe they’ll simply trust the decisions it makes because you, in your wisdom, thought it would benefit their work. What’s the value of augmentation when people blindly follow the advice from the AI system? Automation bias is a known issue with automated systems, and it will negatively influence people’s ability to critically judge the decisions and advice given by AI systems.

So, when AI takes over our reasoning and decision making, we lose experience and knowledge. When we are only there to supervise and handle exceptions, we’re like operators in a nuclear plant: humans step in and overrule the automated systems only when something goes wrong. But operators are trained specifically for this task; drills and simulations help them prepare for eventualities. As the learning capabilities of AI make the system better and better, can we keep up with it?

We have to think about the role of the human in augmented processes. Is the human there to check the decisions? To oversee the ethical, moral, and other consequences of computer-based decisions? To handle only the exceptions? Or is the human there just to add empathy when interacting with clients or other users? When we implement AI for the augmentation of humans, we need to think about how the human tasks will change, how humans will react to that change, and how we can train them to take on a new supervisory role. Otherwise, augmentation will only be marketing-speak for systems that won’t be controlled.

Photo: Artificial Intelligence by Nick Youngson CC BY-SA 3.0 Alpha Stock Images
