A Guide to Making AI Explainable – Yes, It’s Possible!
Blog: Enterprise Decision Management Blog
The possibilities of artificial intelligence are endless. AI helps businesses create tremendous efficiencies through automation, while enhancing an organization's ability to make more effective business decisions. However, it's no surprise that companies are beginning to be held accountable for the outcomes of their AI-based decisions. From the proliferation of fake news to, most recently, the deliberate creation of the AI psychopath Norman, we're beginning to understand and experience the potential negative outcomes of AI.
While AI, machine learning, and deep learning have been deemed 'black box' technologies, unable to provide any information or explanation of their actions, this inability to explain AI will no longer be acceptable to consumers, regulators, and other stakeholders. For example, with the General Data Protection Regulation in effect, companies are now required to provide consumers with an explanation for AI-based decisions.
FICO has been pioneering explainable AI (xAI) for more than 25 years and is at the forefront of helping people understand and open up the AI black box. As you move forward with your AI journey, we've curated a list of blogs that uncover the importance of, and trends leading to, xAI.
GDPR and Other Regulations Demand Explainable AI
Under GDPR, customers are entitled to clear-cut reasons when they are adversely impacted by a decision. But what happens when your model was built with AI? This blog post uncovers the requirement of making AI explainable.
Explainable AI Breaks Out of the Black Box
AI comes with many challenges, including trying to decipher what these models have learned, and thus their decision criteria. This blog lists ways to explain AI when used in a risk or regulatory context based on FICO’s experience.
How to Build Credit Risk Models Using AI and Machine Learning
Ready to make AI explainable? This post illustrates how you can achieve better performance and explainability by combining machine learning and scorecard approaches.
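To make the scorecard side of that combination concrete, here is a minimal illustrative sketch (a toy example of the general points-based scorecard idea, not FICO's actual modeling methodology): each characteristic's value falls into a bin that carries points, and the score is simply the sum, so every contribution is directly inspectable.

```python
# Illustrative toy scorecard, NOT FICO's methodology: each
# characteristic's bins carry points; the total score is their sum,
# which makes every feature's contribution directly inspectable.

SCORECARD = {
    # feature: list of (exclusive upper bound, points awarded)
    "age": [(25, 10), (40, 25), (float("inf"), 35)],
    "utilization": [(0.3, 40), (0.7, 20), (float("inf"), 5)],
}

def score(applicant):
    """Return the total score and the per-characteristic point breakdown."""
    breakdown = {}
    for feature, bins in SCORECARD.items():
        value = applicant[feature]
        for upper, points in bins:
            if value < upper:
                breakdown[feature] = points
                break
    return sum(breakdown.values()), breakdown

total, parts = score({"age": 32, "utilization": 0.25})
print(total, parts)  # 65 {'age': 25, 'utilization': 40}
```

In practice, machine learning can inform how the bins and points are chosen, while the final model keeps this transparent additive form.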
Explainable AI in Fraud Detection – A Back to the Future Story
In 1996 we filed a patent for Reason Reporter—indicative of how long, in fact, FICO has been working with explainable AI. Simply enough, Reason Reporter provides reasons associated with the neural network scores Falcon produces. The not-so-simple part? This post demonstrates how we utilize the Reason Reporter algorithm during model training.
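The post itself covers the details; as a rough illustration of the general idea of attaching reasons to a score (a generic stand-in, not FICO's patented Reason Reporter algorithm), one common approach is to rank inputs by how much the score drops when each one is replaced by a neutral baseline value:

```python
# Generic illustrative sketch, NOT FICO's Reason Reporter algorithm:
# rank each input by how much the model score falls when that input
# is replaced by a neutral baseline value.

def reason_codes(model, x, baseline, top_n=2):
    base_score = model(x)
    impacts = []
    for feature in x:
        # Neutralize one feature and measure the score change.
        perturbed = dict(x, **{feature: baseline[feature]})
        impacts.append((feature, base_score - model(perturbed)))
    impacts.sort(key=lambda fi: fi[1], reverse=True)
    return [feature for feature, _ in impacts[:top_n]]

# Toy "fraud score" model: a weighted sum standing in for a neural network.
weights = {"amount": 0.6, "velocity": 0.3, "hour": 0.1}
model = lambda x: sum(weights[f] * x[f] for f in x)

reasons = reason_codes(
    model,
    {"amount": 9.0, "velocity": 5.0, "hour": 2.0},   # transaction inputs
    {"amount": 1.0, "velocity": 1.0, "hour": 1.0},   # neutral baseline
)
print(reasons)  # ['amount', 'velocity']
```

The returned features become the human-readable reasons accompanying the score, which is the kind of output an analyst or regulator can act on.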