Managing AI Decision-Making Part 4: Human out of the Loop
Continuing our series on AI management options (kicked off by the HBR article Managing AI Decision-Making Tools), the final option is Human out of the loop (HOOTL). In these systems, not only must the decisions be made autonomously, but it is also not practical or desirable for humans to inject themselves directly into the decision-making approach being used. Sometimes the decision-making must be launched into the world where it will be out of touch for long periods – think of autonomous vehicles and ships such as the Mayflower Autonomous Ship. Sometimes the decision-making involves too many players and parameters for a human to really understand what’s going on – automated bidding systems, for instance, can be like this.
In these circumstances, the design of the decision-making system must be tied to a set of objectives that the human can understand and change, even though they may not understand exactly how the decision-making will change in response.
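To make that idea a little more concrete, here is a minimal sketch in Python of how such a separation might look for an automated bidding scenario. The names (Objectives, AutonomousBidder) and the numbers are hypothetical, not taken from any particular product – the point is only that the operator edits objectives, while the bidding decisions themselves are made without a human in the loop.

```python
from dataclasses import dataclass


@dataclass
class Objectives:
    """Human-editable goals and guardrails; the only surface the operator touches."""
    target_roi: float = 1.5        # desired return per unit of spend
    max_bid: float = 2.00          # hard ceiling on any single bid
    daily_budget: float = 10_000   # total spend allowed per day


class AutonomousBidder:
    """Hypothetical HOOTL decision-maker: chooses bids autonomously against the objectives."""

    def __init__(self, objectives: Objectives):
        self.objectives = objectives
        self.spent_today = 0.0

    def decide_bid(self, predicted_value: float) -> float:
        """Pick a bid for one auction; no human reviews individual decisions."""
        if self.spent_today >= self.objectives.daily_budget:
            return 0.0  # the objectives, not a person, stop the spending
        bid = min(predicted_value / self.objectives.target_roi,
                  self.objectives.max_bid)
        return max(bid, 0.0)

    def record_outcome(self, price_paid: float) -> None:
        """Track spend so the budget objective can be enforced."""
        self.spent_today += price_paid


# The operator changes objectives, never the bidding logic itself.
objectives = Objectives(target_roi=2.0, max_bid=1.50, daily_budget=5_000)
bidder = AutonomousBidder(objectives)
print(bidder.decide_bid(predicted_value=3.0))  # bids are derived from the objectives
```

The design choice the sketch illustrates is the separation of concerns: the objectives are the human-facing contract, while the decision-making behind them can adapt, be retrained, or be reconfigured without anyone stepping into the loop.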
To be honest, these systems are not common in the kind of large, established and often regulated companies that make up our client base. They remain largely in the realm of technology companies and specialists.
Check out our previous posts discussing the other options: Human in the Loop (HITL), Human in the Loop for Exceptions (HITLFE), and Human on the Loop (HOTL).
We hope you enjoyed learning about some of the options around AI management. If you have any additional questions on this article or other related topics, drop us a line – we’d love to connect.