AI Decision Transparency

There was an interesting article in New Scientist last month highlighting new regulations in the UK related to AI decision transparency. Organizations there could face multi-million pound fines if they cannot adequately explain why their models make the decisions they do. There are similar regulations in the EU. This points to a problem with the way many AI models are built: a deep learning model acts as a big black box. You feed inputs into the model and it spits out a result at the end. The “why” is missing, because you have no idea where the result came from or why the model arrived at it.

A lack of transparency is fine for some classification models and other use cases. However, when it is critically important to prove fairness, to provide best-of-care solutions, or to explain why a certain decision was made, black-box AI will not suffice. Does this mean that machine learning cannot be applied? Not at all, but it will change your approach.

There are a few different approaches you could employ to create models that can be understood and explained, and all of them use the same underlying machine learning technology. The first thing to understand is that there are a good number of machine learning modeling techniques. Experts in the field will debate how to classify these various models, but any of the following model types are regularly advertised as machine learning algorithms: regression, instance-based methods, regularization, decision trees, Bayesian methods, clustering, association rules, artificial neural networks, deep learning, and the list goes on.

Fundamentally, to explain how an algorithm arrives at a conclusion, you need to answer the question “if this, then that.” The best algorithms for answering that question are decision trees. You might use other algorithms to calculate inputs to a decision tree (regression, Bayesian methods), but the decision tree will give you the ultimate explanation of how your model arrived at a conclusion. A decision tree might end up being slightly less “accurate” than a deep learning model, but it will be able to explain itself.
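To make “if this, then that” concrete, here is a tiny, purely illustrative sketch in Python of the form such rules take; the loan-style inputs and thresholds are invented for the example and would, in practice, come out of a trained decision tree.

# Purely illustrative: the inputs and thresholds are invented for the example.
def loan_decision(credit_score, debt_ratio, annual_income):
    if credit_score >= 700 and debt_ratio <= 0.35:
        return "approve"                  # each branch is a readable reason
    if annual_income >= 85_000 and debt_ratio <= 0.25:
        return "approve"
    if credit_score < 580:
        return "decline"
    return "refer to underwriter"

Every outcome traces back to a short chain of explicit conditions, which is exactly the kind of explanation the regulations described above ask for.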

Decision Trees

Using a decision tree model results in a series of “if/then” rules that determine your model outputs. These rules can then be created (or imported) into a rules engine like the Decisions platform, which lets you explain how the model arrived at a given conclusion from a given set of inputs. It also lets you look closely at the rules and clean up or modify any that are spurious and don’t make sense. Spurious rules indicate that your model is likely overtrained and is homing in on a specific data point that doesn’t match a real-world scenario.
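As a rough sketch of how such a rule set can be produced, the snippet below assumes scikit-learn and a tabular training set; export_text prints the fitted tree as nested if/then conditions that a human can review before recreating them in a rules engine. The synthetic data and feature names are placeholders for illustration only.

# A minimal sketch, assuming scikit-learn and a tabular dataset (X, y).
# The synthetic data and feature names below are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # stand-in training data
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in labels

# A shallow tree keeps the rule set small enough for a human to review.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the tree as nested "if feature <= threshold" rules.
print(export_text(tree, feature_names=["income", "tenure", "utilization"]))

Rules that split on oddly specific thresholds or tiny groups of records are the spurious ones described above, and they are usually a sign of overtraining.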

Sensitivity Analysis

Another method is to create a neural network or deep learning model and then run a sensitivity analysis on it. In a sensitivity analysis, you sweep each input from its minimum to its maximum and watch how sensitive the output is to that input. Looking at sensitivity scores can give great insight into which inputs matter and help you create rules based on what the model has taught you.
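A minimal one-at-a-time sweep might look like the sketch below. It assumes you already have a fitted model object with a predict method and a NumPy feature matrix X; every other input is held at its median while one input is swept from its minimum to its maximum, and the swing in the output is recorded as that input’s sensitivity score.

# One-at-a-time sensitivity sweep (assumes a fitted `model` with a .predict
# method and a NumPy feature matrix X; both are placeholders here).
import numpy as np

def sensitivity_scores(model, X, n_steps=50):
    base = np.median(X, axis=0)          # hold the other inputs at their medians
    scores = {}
    for j in range(X.shape[1]):
        grid = np.tile(base, (n_steps, 1))
        grid[:, j] = np.linspace(X[:, j].min(), X[:, j].max(), n_steps)
        preds = model.predict(grid)      # for classifiers, predict_proba gives a smoother signal
        scores[j] = float(preds.max() - preds.min())   # output swing for input j
    return scores

Inputs with a large swing are the ones worth turning into explicit rules; inputs with almost no effect on the output can usually be ignored.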

Model to Decision Tree

A final method is to model your model. This sounds a bit confusing, but a decision tree trained on the outputs of a deep learning model can sometimes generate better rules than a decision tree trained on the initial data alone. In this approach, you feed your model’s inputs and outputs into a second, decision tree model. Here again, you end up with a set of “if this, then that” rules that can explain the decision-making process.
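A rough sketch of the idea, again assuming scikit-learn plus a trained black-box model (here called black_box) and its original inputs X: the second tree, often called a surrogate model, is trained on the black box’s own predictions rather than on the original labels.

# Surrogate-model sketch: fit a decision tree to the black-box model's
# predictions (`black_box` and X are assumed to exist already).
from sklearn.tree import DecisionTreeClassifier, export_text

y_surrogate = black_box.predict(X)               # what the black box actually decides

surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X, y_surrogate)

# Fidelity: how closely do the extracted rules reproduce the black box?
print("fidelity:", surrogate.score(X, y_surrogate))
print(export_text(surrogate))

A high fidelity score suggests the readable rules are a faithful stand-in for the black box; a low score means the surrogate needs more depth or better features before its explanations can be trusted.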

To conclude, there are many industries and applications where it is critically important to understand, and be able to explain, why a machine learning model has arrived at a particular result or conclusion. This doesn’t mean you can’t use “black box” models in some form or fashion, but it will dictate the final model choice. Implementing decision tree models in a rules engine like Decisions not only lets you employ machine learning models but also lets you explain them.

If you would like to talk about your specific model or use case, please feel free to reach out to us at sales@decisions.com. We love talking about rules and models.
