
Artificial Intelligence (AI) ethics: 5 questions CIOs should ask

Liability, privacy, and other ethical quagmires await IT leaders applying artificial intelligence tools. Let’s examine five areas that are sparking AI debates – and deserve special attention

Blog: The Enterprise Project - Enterprise Technology

You may not realize it, but artificial intelligence (AI) already enhances our lives in a multitude of ways. AI systems staff our call centers, drive our cars, and take orders through kiosks at local fast food restaurants. In the days ahead, AI and machine learning will become even more prominent, disrupting industries and extracting tedium from our everyday lives. As we hand over larger chunks of our lives to the machines, we need to lift the hood to see what kind of ethics are driving them, and who is defining the rules of the road.

Many CIOs have begun experimenting with AI in areas that may not be very visible to end users, such as warehouse automation. But particularly as CIOs look to expand their use of AI into more customer-facing areas, they must be aware of the ethical questions that remain unanswered, or they risk shouldering the liability themselves.

[ Do you understand the main types of AI? Read also: 5 artificial intelligence (AI) types, defined. ]

Here are five open questions about how ethics influences AI systems, along with the reasons CIOs need to be aware of these debates now.

1. Can we hold a machine liable?

In 2018, Elaine Herzberg was struck and killed while crossing the road by an autonomous SUV operated by Uber. The car didn’t recognize her as a pedestrian because she wasn’t near a crosswalk. Human drivers know that people jaywalk; we don’t assume people only cross the street where they’re supposed to. Who was liable for this oversight?

An Arizona court found the “safety driver” was negligent, but there will come a day when no one is behind the wheel. In that case, is Uber responsible? Is it the maker of the lidar technology (which measures distances using laser light and sensors)? Is it the caffeine-fueled programmer who inexplicably forgot to account for jaywalkers?

Even though autonomous automobiles are expected to be much safer than tired, drunk, or distracted drivers, they will kill people. How many? There were 1.16 fatalities per 100 million miles driven in 2017, according to the U.S. Department of Transportation’s National Highway Traffic Safety Administration. In contrast, Waymo has logged 10 million miles since launching in 2009. Without billions of miles under their belts, there is no way of knowing how safe autonomous automobiles will be.
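To see why that mileage gap matters, here’s a back-of-the-envelope calculation using the figures above (a minimal sketch, not a rigorous safety analysis):

```python
# Back-of-the-envelope math on the figures cited above: the 2017 NHTSA
# human fatality rate versus Waymo's cumulative autonomous mileage.

human_rate = 1.16 / 100_000_000   # fatalities per mile driven (NHTSA, 2017)
waymo_miles = 10_000_000          # Waymo's cumulative miles

expected_fatalities = human_rate * waymo_miles
print(f"Expected fatalities at the human rate: {expected_fatalities:.2f}")
# ~0.12: even a spotless 10-million-mile record cannot statistically
# separate "as safe as humans" from "far safer"; hence the need for billions of miles.
```

At human rates, fewer than one fatality is expected over Waymo’s entire mileage, so a clean record so far tells us very little either way.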

Every death at the hands of an autonomous vehicle is sure to be litigated to pin blame on the responsible party. Will every rideshare and automobile company get drawn into the wave of lawsuits that is sure to come?

Why CIOs should ask this question: With AI, the liability model could shift from the consumer to the producer. For those deploying these solutions, we need to explore how this shift could impact our businesses so we can adequately prepare for it.

2. Can we explain the unexplainable?

Artificial intelligence and machine learning models are being tasked with making critical decisions that have a sizable impact on people’s lives. They can tell us which job candidate to hire. They can recommend a jail sentence for a convicted criminal. They can decide which target to bomb. The problem is we can’t always explain why a given decision was made.

Deep learning models are built on neural networks: layers of interconnected nodes that transform inputs, step by weighted step, into a result. We can’t easily trace through that web of connections to find out how a decision was made. That’s a problem when we can’t explain why one inmate received a two-month sentence while another received one year for the same crime.
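To make that opacity concrete, here is a minimal sketch of a toy two-layer network; the weights and inputs are made-up stand-ins for a trained model. Even at this tiny scale, the output is just a cascade of weighted sums with no human-readable rule to point to:

```python
import numpy as np

# A toy two-layer network. The random weights stand in for a trained model.
rng = np.random.default_rng(seed=0)
W1 = rng.normal(size=(4, 8))     # 4 input features -> 8 hidden units
W2 = rng.normal(size=(8, 1))     # 8 hidden units -> 1 output score

def forward(x):
    hidden = np.tanh(x @ W1)     # 8 intermediate activations, none individually meaningful
    return (hidden @ W2).item()  # a single score, e.g. a risk estimate

x = np.array([0.2, -1.0, 0.5, 0.8])  # four anonymous input features
print(forward(x))  # the "decision" emerges, but no single weight explains it
```

Scale this up to millions of weights across dozens of layers and the tracing problem becomes obvious.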

We already fail at this in the real world, as racial bias creeps into human decision-making in the judicial process. That bias can be passed along, unnoticed, through our datasets and taint our models. Accountability is key.

Why CIOs should ask this question: Everyone wants to know how software landed on its conclusion – especially when that decision looks wrong. Neural networks make those questions hard to answer. We can feed our model inputs and observe the outputs we get back, but that leaves humans interpreting what the machine is doing rather than knowing with certainty. At present, this is a limitation we have to accept.
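Here’s a minimal sketch of that kind of black-box probing, with a hypothetical stand-in model and features (not any real sentencing or hiring system): we nudge one input at a time and watch the score move.

```python
# Black-box probing: vary one input feature at a time and observe
# how the model's output shifts. The model below is a hypothetical
# stand-in for a trained network we cannot inspect directly.

def opaque_model(income, prior_offenses, age):
    # Stand-in scoring function; a real system would be a neural network.
    return 0.30 * prior_offenses - 0.00001 * income - 0.01 * age

baseline = {"income": 40_000, "prior_offenses": 2, "age": 30}
base_score = opaque_model(**baseline)

for feature in baseline:
    probe = dict(baseline)
    probe[feature] = probe[feature] * 1.10   # bump this feature up 10%
    delta = opaque_model(**probe) - base_score
    print(f"{feature}: +10% input -> score changes by {delta:+.4f}")
```

The deltas hint at which inputs the model is sensitive to, but they don’t reveal why the model weighs them that way; that interpretive gap is exactly the limitation described above.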

[ Explainable AI tools tackle this issue: they help humans understand the path an IT system took to reach a decision. Read also: What is explainable AI? ]

3. Is there privacy in an AI world?

One day you’ll be strolling through the grocery store, and your refrigerator will send a reminder that you need milk. You’ll pass through the cookie aisle, and a virtual billboard will suddenly announce that Oreos are on sale. As you scan the shelves, nutritional information will pop up beside each item. What once seemed the province of sci-fi films like Minority Report will soon become reality.

While this explosion of information and convenience is welcome, the tools required to get us there are controversial. Most of us realize we surrender a measure of privacy every time we turn on our cell phones. AI accelerates this erosion, and people are worried: A research report by Genpact found that 71 percent of consumers were concerned AI will continue to erode their privacy. Facial recognition software is becoming more advanced and can pluck individual faces from a crowd. China is already deploying facial recognition to track and control its population of 11 million Uighurs, just one facet of the country’s sprawling surveillance state.
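To illustrate how commoditized this capability has become, here’s a minimal sketch using the open-source face_recognition Python library; the image file names are hypothetical:

```python
# pip install face_recognition
# A minimal sketch: find every face in a crowd photo and check each one
# against a known face. File names are hypothetical placeholders.
import face_recognition

# Detect faces in the crowd and compute a matchable encoding for each.
crowd = face_recognition.load_image_file("crowd.jpg")
locations = face_recognition.face_locations(crowd)
encodings = face_recognition.face_encodings(crowd, locations)

# Encode a single known face to search for.
person = face_recognition.load_image_file("person_of_interest.jpg")
target = face_recognition.face_encodings(person)[0]

# Compare the known face against every face found in the crowd.
for (top, right, bottom, left), enc in zip(locations, encodings):
    if face_recognition.compare_faces([target], enc)[0]:
        print(f"Possible match at ({left}, {top})-({right}, {bottom})")
```

A dozen lines of freely available code can already scan a crowd for one person; the ethical debate is no longer about whether the technology works.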

While the technology has firm opponents, we don’t necessarily know which companies or governments are actively using facial recognition. Some technology companies are steering clear of the controversy, while others, like Clearview AI, sell facial recognition software to U.S. law enforcement agencies.

Why CIOs should ask this question: Facebook and Google have run into trouble over data privacy, and regulators are increasingly cracking down on lax data privacy standards. As we compile volumes of data on our customers and use tools like AI and machine learning to extract more value from that data, we need to keep customer privacy at the forefront of our thinking. Otherwise, we risk a PR nightmare: becoming the poster child for everything that is wrong with technology.

4. Can we rein in bad actors?

Regardless of a technology’s noble intentions, some will always attempt to exploit it for personal gain. Artificial intelligence is no exception. Hackers are already using it to craft sophisticated phishing attacks and to mount vicious cyber offensives against unsuspecting organizations.

Bots are actively trying to influence our upcoming election by spreading false and misleading information across social media networks. These disinformation campaigns were so effective during the 2016 presidential election that some question whether they swayed the result. Social media companies have pledged to step up policing of their networks, and hopefully the day will come when misinformation is removed or flagged in real time, before it can infect a wide audience.

Some are even using artificial intelligence to alter our perception of reality. Deepfakes are AI-generated video or audio recordings that show someone saying something they never said. The technology could be weaponized: a fabricated video of Elon Musk commenting that Tesla isn’t going to meet quarterly projections could drive the stock price down, or a political candidate could be shown revealing a bombshell on the eve of an election. While deepfakes remain largely confined to porn, their numbers are estimated to be doubling every six months, according to Sensity, a visual threat intelligence company that tracks the deepfake landscape. It won’t be long before they’re mainstream. Deepfakes will have devastating consequences if we can’t develop forensic techniques to defuse them.

Why CIOs should ask this question: Cybersecurity is an increasingly complex beast, and artificial intelligence gives hackers a frightening new set of tools. From keeping our networks safe to spotting impersonations of our CEO, we need to understand these threats to combat them properly.

5. Who is responsible for AI ethics?

A common thread runs through all these questions. Who is responsible for establishing and enforcing the ethical standards for artificial intelligence systems?

Tech giants like Google and Microsoft say governments should step in to craft laws that properly regulate AI. Reaching a consensus won’t be easy, and it will require input from a wide range of stakeholders to ensure the problems baked into society aren’t passed along to our AI models. And laws are only as good as their enforcement; thus far, that responsibility has fallen to outside watchdogs and to employees within tech companies who speak up. Google, for example, axed its military drone AI work on Project Maven after months of protests by employees.

Many companies are appointing chief ethics officers to help guide business units through this new terrain.

Why CIOs should ask this question: Artificial intelligence regulation at the national and international levels is still a ways off. Until it arrives, we must police our own use of AI. Controversial AI implementations will only stay hidden for so long; questionable practices will eventually be brought to light. When they are, companies that haven’t proactively set and enforced an ethically sound AI policy will be forced to react.

We have a long way to go before ethics becomes one with artificial intelligence. By asking the tough questions, we can determine the role we want AI to play in our lives and organizations. There is no time like the present: AI is being tasked with solving more important problems every day, which is why we find ourselves wrestling with these philosophical questions. The answers are out there if we are willing to ask and explore.

[ How can automation free up staff time for innovation? Get the free eBook: Managing IT with Automation. ] 
