AI Expert Joanna Bryson Dishes on Due Diligence and Rooting Out AI Bias, Part 2
Blog: Appian Insight
How was that credit application declined? Why was that person denied parole? How were disability benefits cut for those constituents? And why do computers learning from humans automatically see certain occupational words as masculine and others as feminine?
It’s hard to explain how the most advanced algorithms make decisions. But as predictive systems proliferate, there are signs we’ve become more wary of their use in making critical decisions. For example, more than half of global consumers (53%) say we need more education on how to ensure AI lives up to our ethical expectations. But here’s the good news:
Many of the variables that impact the ethical use of AI are often directly related to the choices we make in developing the technology itself.
So says AI expert and scholar Joanna Bryson (@j2bryson), Professor of Ethics and Technology at the Hertie School of Governance in Berlin. In the first installment of this two-part blog, Bryson unpacked the importance of due diligence in the fight against bias in software development. In this final installment, she explains why machines won’t take over the world and gives us a decoder ring for AI success:
Appian: I want to revisit something you said earlier. You mentioned the smoke and mirrors surrounding some of the arguments against regulating AI.
Bryson: I think it’s important to recognize that we can track and document whether or not you follow due diligence in AI development. The process is no different than any other industry. It’s just that AI has been flying under the radar.
If we can get through to companies that this is the expectation, that this is just normal life…(For example,) when I was in the UK, we didn’t think we needed any new legislation (for AI). We just needed to help people understand how to apply existing legislation to the software industry.
Appian: Do you think the European approach of GDPR (General Data Protection Regulation) is a good policy model for us to follow?
Bryson: Nothing is perfect, but I think the GDPR is leading the way in AI policy. You can always improve it. But if we’re not going to improve it, we should just adopt it (laughter).
Appian: Let’s switch gears and talk about another hot topic—algorithmic bias. What’s driving bias in AI?
Bryson: Machine learning…machine learning will pick up the same human biases that psychologists call implicit biases.
Appian: Can you give an example of that?
Bryson: Relatively speaking, women’s names are more closely associated with domestic terms, and men’s names are more closely associated with career-oriented terms. That’s the implicit association test that psychologists have done.
It’s also a really good example of how, if you’re training AI by machine learning, you’re going to wind up with the same prejudices that we already have. And that’s just one of the ways you can get biases into AI.
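The association Bryson describes can be illustrated with a toy version of the word-embedding association tests used in this line of research. The vectors below are tiny hand-made stand-ins, not real embeddings; in practice you would load pretrained vectors (e.g., GloVe) trained on human-written text, which is where the bias comes from.

```python
import math

# Toy 3-dimensional "embeddings" -- hand-made stand-ins for real
# pretrained vectors, constructed to mimic the bias real vectors exhibit.
vectors = {
    "he":     (1.0, 0.1, 0.0),
    "she":    (-1.0, 0.1, 0.0),
    "career": (0.7, 0.6, 0.1),
    "family": (-0.7, 0.6, 0.1),
}

def cosine(a, b):
    """Cosine similarity: how close two word vectors point."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def association(word, target_a="he", target_b="she"):
    """Positive => the word sits closer to 'he'; negative => closer to 'she'."""
    v = vectors[word]
    return cosine(v, vectors[target_a]) - cosine(v, vectors[target_b])

print(association("career"))  # positive: "career" leans toward "he"
print(association("family"))  # negative: "family" leans toward "she"
```

A model trained on these vectors inherits the skew automatically, which is exactly Bryson’s point: the prejudice is in the training data, not in any malicious line of code.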
When AI Breaks Bad
Appian: How much should we worry about bias in AI?
Bryson: I’m very worried about it. One of my favorite stories about AI bias has to do with soap dispensers that won’t give you soap if you don’t have a certain skin tone. [This happens when infrared sensors aren’t designed to detect darker skin tones.]
In other words, none of the people who tested these dispensers were Asian. They were all incredibly Caucasian (laughter). But these are the easiest kinds of biases to fix. And that’s actually one of the good things about AI.
When you’re talking about human implicit biases, it’s harder to tell what’s behind them. But with AI, as with accidents involving self-driving vehicles, you can go out and look at the data logs and see what the AI was perceiving and figure out why the AI did what it did.
Appian: What about situations where bias is deliberately built into AI?
Bryson: …This is the one (kind of AI bias) that I think people are missing—where you can deliberately build bias into your process. It’s not about the algorithm being evil. There was a bizarre case in the U.S. where a state built an algorithm for allocating disability benefits.
But the formula caused disability benefits to suddenly drop for many people, in some cases by as much as 42%. When beneficiaries complained, state officials declined to disclose the formula, claiming it was IP (intellectual property).
Appian: So, what happened?
Bryson: The beneficiaries prevailed in court on due process and forced the state to reveal its formula for allocating benefits. We see the same thing happening with some recidivism programs.
Appian: In what way? Can you give an example of how that plays out?
Bryson: Some judges, for example, are using AI software that claims to predict the likelihood that a person will re-offend. The software behind these recidivism programs performs worse than anything academics can build. We can’t figure out how they are doing such a bad job of prediction.
This is why the most important thing [with AI] is accountability through logging.
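One lightweight way to implement the accountability-through-logging Bryson calls for is to record every automated decision together with the inputs and model version that produced it, so the decision can be reconstructed in an audit. The field names below are illustrative, not a standard.

```python
import datetime
import io
import json

def log_decision(model_version, inputs, score, decision, log_file):
    """Append one audit record per prediction, so an auditor can later
    see exactly what the model perceived and why it did what it did."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,       # the features the model actually saw
        "score": score,         # the raw model output
        "decision": decision,   # the action taken on that score
    }
    log_file.write(json.dumps(record) + "\n")

# Example: logging a hypothetical credit decision to an in-memory log
log = io.StringIO()
log_decision("risk-model-2.3",
             {"income": 42000, "tenure_months": 18},
             0.31, "declined", log)
print(log.getvalue())
```

With records like these, the disability-benefits case above becomes answerable in court: the formula, the inputs, and the outcome are all on file.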
AI Won’t Take Over the World
Appian: So, we’ve seen tremendous progress with intelligent automation in recent years. We’ve reached the point where machines are using sophisticated algorithms to mimic human behavior. But does that make them intelligent?
Bryson: To me, a thermostat is intelligent. If you want to define intelligence as being human, what is the purpose of that? There are lots of different ways to be intelligent. But I think what people really, really care about is moral agency and moral patiency.
Appian: Moral agency? Moral patiency? What does that mean?
Bryson: Moral agency is about who or what is responsible for the actions an agent takes. Moral patiency is about the “who” or “what” society is responsible for.
Now that we can have that conversation, the two things that people care about most are: Is AI going to be like us? Do we have to worry about it taking over the world?
Appian: So, is AI taking over the world?
Bryson: I don’t believe any one machine can take over the world. The world is a pretty big place. Cooperatively though, humanity is doing a very good job of taking over the entire ecosystem. We are the ones that are changing society by using AI. So, how should we regulate that? How should we change the laws to protect people, now that we have big data and we know all these things about them?
AI and Social Fragmentation
Appian: You’ve also argued that one of the problems with the evolution of AI is something you call social fragmentation. What did you mean by that?
Bryson: Think about how different our communities would be if everyone came out and talked to each other. The fragmentation problem came about because of communications technology. And it’s going to get worse with the rise of AI.
Appian: In all of the conversations you’ve had with business and public policy leaders, what would you say is the biggest misconception about AI?
Bryson: There are several things. One goes back to a point I made earlier in our conversation.
One is the fear that you’ll lose the magic if you regulate AI. No, you can regulate AI, and you can regulate it on performance. Another misconception is that you’ll lose IP or innovation if you regulate AI.
But medicine is heavily regulated, and it has 10x the IP of the tech industry. A lot of the resistance to regulation is coming from people who’re unwilling to change. They don’t realize that regulation can actually help them.
So, when I talk to really big companies, the main thing I want to communicate is the importance of accountability and getting on top of their software development process. And machine learning is just another tool in the toolbox. Which means you need to be doing your systems engineering more carefully. You need to know where your libraries came from.
Whether you’re talking about software libraries that you’re linking to, or data libraries that you’re training from, you need to know where they came from, and who has access to them.
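Knowing where your training data came from can be made concrete by recording a cryptographic fingerprint of every data file used, pinning the trained model to an exact dataset version in the audit trail. A minimal sketch (the file name is illustrative):

```python
import hashlib

def fingerprint(path):
    """SHA-256 hash of a file, read in chunks so large datasets
    don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Example with a throwaway file standing in for a training dataset
with open("training_data.csv", "w") as f:
    f.write("age,income,label\n34,42000,0\n")

print(fingerprint("training_data.csv"))
```

If the dataset changes by even one byte, the fingerprint changes, so "which data was this model trained on?" always has a checkable answer.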
Integrating AI into Our Lives
Appian: Finally, what are your expectations for AI in the new decade?
Bryson: I think it’s important to understand that AI is everywhere. And the biggest challenges that we’re facing right now are the political, economic and social consequences of how it affects us. We’ve made this huge leap in AI capabilities because we have more data and we’ve gotten better at machine learning. In the long term, I think that this will accelerate our rate of progress.
So, now is the best time to figure out how to integrate AI into our lives.
(PS: If you missed the first part of this two-part post, you can read the first installment here. To learn how you can leverage AI as a productivity multiplier in your organization, check out this link.)