
A Guide to Making AI Explainable – Yes, It’s Possible!

Blog: Enterprise Decision Management Blog


The possibilities of artificial intelligence are endless. AI helps businesses create tremendous efficiencies through automation while enhancing an organization's ability to make more effective business decisions. However, it's no surprise that companies are beginning to be held accountable for the outcomes of their AI-based decisions. From the proliferation of fake news to, most recently, the deliberate creation of Norman, the "AI psychopath," we're beginning to understand and experience the potential negative outcomes of AI.

While AI, machine learning, and deep learning have been deemed 'black box' technologies, unable to provide any information or explanation of their actions, this inability to explain AI will no longer be acceptable to consumers, regulators, and other stakeholders. For example, with the General Data Protection Regulation (GDPR) in effect, companies are now required to provide consumers with an explanation for AI-based decisions.

FICO has been pioneering explainable AI (xAI) for more than 25 years and is at the cutting edge of helping organizations understand and open up the AI black box. As you move forward on your AI journey, we've curated a list of blog posts that uncover the importance of, and the trends leading to, xAI.

GDPR and Other Regulations Demand Explainable AI

Under GDPR, customers must be given clear-cut reasons when they are adversely impacted by a decision. But what happens when your model was built with AI? This blog post uncovers the requirement to make AI explainable.

Explainable AI Breaks Out of the Black Box

AI comes with many challenges, including trying to decipher what these models have learned, and thus their decision criteria. This blog lists ways to explain AI when used in a risk or regulatory context based on FICO’s experience.

How to Build Credit Risk Models Using AI and Machine Learning

Ready to make AI explainable? This post illustrates how you can achieve better performance and explainability by combining machine learning and scorecard approaches.
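To make the scorecard idea concrete, here is a minimal, purely illustrative sketch (not FICO's actual model; the attributes, bin boundaries, and point values are invented for the example). A scorecard assigns points to value ranges ("bins") of each attribute, and the final score is simply the sum of points, which is what makes it inherently explainable.

```python
# Hypothetical scorecard: each attribute maps to (upper_bound, points) bins.
# All numbers here are invented for illustration.
SCORECARD = {
    "utilization": [(0.30, 55), (0.70, 40), (float("inf"), 20)],
    "months_on_file": [(24, 15), (84, 30), (float("inf"), 45)],
}

def score(applicant: dict) -> int:
    """Sum the points of the bin each attribute value falls into."""
    total = 0
    for attr, bins in SCORECARD.items():
        value = applicant[attr]
        for upper, points in bins:
            if value <= upper:
                total += points
                break
    return total

print(score({"utilization": 0.25, "months_on_file": 36}))  # 55 + 30 = 85
```

Because every point contribution is visible, the same table that produces the score also explains it; in the hybrid approaches the post describes, machine learning is used to inform how such bins and points are derived.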

Explainable AI in Fraud Detection – A Back to the Future Story

In 1996 we filed a patent for Reason Reporter, an indication of just how long FICO has been working with explainable AI. Simply enough, Reason Reporter provides reasons associated with the neural network scores Falcon produces. The not-so-simple part? This post demonstrates how we use the Reason Reporter algorithm during model training.
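The exact Reason Reporter algorithm is proprietary, but one common way to attach reasons to a model's score can be sketched as follows (an assumption-laden illustration, not FICO's method): substitute each input with a "neutral" value, re-score, and report the inputs whose substitution lowers the score the most.

```python
def reason_codes(model, inputs, neutral, top_n=2):
    """Rank inputs by how much replacing them with a neutral value drops the score."""
    base = model(inputs)
    impacts = []
    for name in inputs:
        probe = dict(inputs, **{name: neutral[name]})  # neutralize one input
        impacts.append((name, base - model(probe)))     # score drop = contribution
    impacts.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, drop in impacts[:top_n]]

# Toy fraud "model": a weighted sum standing in for a neural network score.
model = lambda x: 0.8 * x["amount_zscore"] + 0.2 * x["merchant_risk"]
inputs = {"amount_zscore": 3.0, "merchant_risk": 1.0}
neutral = {"amount_zscore": 0.0, "merchant_risk": 0.0}
print(reason_codes(model, inputs, neutral, top_n=1))  # ['amount_zscore']
```

The top-ranked inputs become the human-readable reasons accompanying a high fraud score, regardless of how complex the underlying model is.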

The post A Guide to Making AI Explainable – Yes, It’s Possible! appeared first on FICO.
