We Built Them, But We Don’t Understand Them
Blog: Decision Management Community
We design machines to act the way we do: they help drive our cars, fly our airplanes, route our packages, approve our loans, screen our messages, recommend our entertainment, suggest potential romantic partners, and enable our doctors to diagnose what ails us. Because they act like us, it would be reasonable to imagine that they think like us too. But the reality is that they don't think like us at all; at some deep level, we don't even really understand how they produce the behavior we observe. This is the essence of their incomprehensibility.
Does it matter? Should we worry that we're building systems whose increasingly accurate decisions rest on incomprehensible foundations? See how this interesting paper addresses these questions.