Artificial Intelligence risks losing trust if it doesn’t offer explanations

Published on April 14, 2020

Over the past few decades, AI has gone from science fiction to an integral part of everyday business operations. According to a recent report, “62% of organisations in India have implemented AI in some form, a figure which is not so far from the global figure (65%).”

Looking out a bit further on the horizon, a report predicts that by 2023, 40% of infrastructure and operations teams will use AI-augmented automation in enterprises, resulting in higher IT productivity. As companies proceed from narrow AI to general AI — and begin automating not only processes, but also decisions — it’s vital that AI tools explain their behaviour.

The importance of explainable artificial intelligence cannot be overstated: AI tools must justify their decisions with clear explanations. If a tool cannot show how it reached a given decision, users may lose faith in it altogether.

When bringing AI tools into your business, fit them into your existing workflows rather than rebuilding processes around them. Once processes are successfully automated, you can begin to automate decisions as well. Even if you have a 100-member team specialising in anomaly detection, computer vision, natural language processing (NLP), and other AI techniques, every AI decision should require approval from a human, at least until the process is fully honed. Ideally, your AI tools should be accurate at least 80% of the time, and every automated decision should come with an explanation as well as confidence intervals.
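As a rough illustration of that human-in-the-loop gate, here is a minimal Python sketch. The Decision structure, its field names, and the 0.80 threshold are assumptions made for the example, not a reference to any particular product.

```python
from dataclasses import dataclass

# Hypothetical structure for an automated decision produced by an AI tool.
@dataclass
class Decision:
    action: str            # e.g. "restart service X"
    explanation: str       # human-readable reason for the suggestion
    confidence: float      # model confidence in the range [0, 1]

# Assumed cut-off mirroring the "accurate at least 80% of the time" rule of thumb.
APPROVAL_THRESHOLD = 0.80

def requires_human_approval(decision: Decision) -> bool:
    """Route low-confidence or unexplained decisions to a human reviewer."""
    return decision.confidence < APPROVAL_THRESHOLD or not decision.explanation

# Example usage
d = Decision(action="scale up web tier",
             explanation="CPU load exceeded the weekly baseline for 3 intervals",
             confidence=0.72)
if requires_human_approval(d):
    print(f"Escalating to operator: {d.action} ({d.explanation})")
else:
    print(f"Auto-approved: {d.action}")
```

The point of the gate is simply that nothing is auto-approved without both a confidence score above the threshold and an explanation attached.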

Why is explainable AI so important?

In a recent report, Forrester notes that “45% of AI decision-makers say trusting the AI system is either challenging or very challenging.” There is therefore a need for transparent, easily understandable AI models, and every decision made by AI needs a readily available explanation.

Acknowledging this, you should offer pre-built explanations for all of your AI decisions. For example, perhaps you’re utilising NLP and chatbots to streamline processes for technicians; if a particular request is raised and directed to the same sysadmin at the same time every week, the AI can recognise this pattern, automate the process, and explain why it did so.
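One simple way to surface such a pattern, sketched below in Python under assumed data (the ticket log, request types, and assignee names are all illustrative; a real implementation would read these from the service desk’s API):

```python
from collections import Counter
from datetime import datetime

# Hypothetical ticket log: (request type, assignee, timestamp).
tickets = [
    ("password_reset", "admin_a", datetime(2020, 3, 2, 9, 5)),
    ("password_reset", "admin_a", datetime(2020, 3, 9, 9, 10)),
    ("password_reset", "admin_a", datetime(2020, 3, 16, 9, 3)),
    ("vpn_access", "admin_b", datetime(2020, 3, 4, 14, 0)),
]

def recurring_patterns(tickets, min_occurrences=3):
    """Group tickets by (type, assignee, weekday, hour) and flag repeats."""
    counts = Counter(
        (req, who, ts.weekday(), ts.hour) for req, who, ts in tickets
    )
    for (req, who, weekday, hour), n in counts.items():
        if n >= min_occurrences:
            yield {
                "request": req,
                "assignee": who,
                "explanation": (
                    f"'{req}' was routed to {who} {n} times, "
                    f"each on weekday {weekday} around {hour}:00"
                ),
            }

for pattern in recurring_patterns(tickets):
    print(pattern["explanation"])
```

Notice that the explanation is generated alongside the pattern itself, so whatever automation follows can carry the reason with it.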

Through “explanation-ready” AI features, you can effectively assist IT teams with a host of security concerns, including log management, insider threat analysis, user behaviour analysis, and alert fatigue management. And through AI monitoring tools, it’s easier than ever to predict anomalies (including combinatorial anomalies), outages, and their root causes. For every such automated discovery and decision, an explanation for the course of action must be provided, along with confidence intervals.

Robust DevOps and IT operations solutions can use AI tools to assess past user behaviour and then determine whether a given action is anomalous. By accounting for seasonality, changes in schedules and processes, and time of day, these AI tools predict anomalies and outages effectively, ultimately saving your IT teams a great deal of time and energy.
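A minimal, seasonality-aware sketch of that idea follows, again with assumed data and thresholds (a per-weekday-and-hour baseline, a z-score cut-off of 3, and a simple two-standard-deviation interval as the “usual range”):

```python
import statistics
from collections import defaultdict

# Hypothetical metric history keyed by (weekday, hour), e.g. logins per hour.
history = defaultdict(list)

def record(weekday: int, hour: int, value: float) -> None:
    history[(weekday, hour)].append(value)

def check(weekday: int, hour: int, value: float, z_threshold: float = 3.0):
    """Compare a new observation against the same weekday/hour baseline,
    returning an explanation and a simple confidence interval if anomalous."""
    samples = history[(weekday, hour)]
    if len(samples) < 5:            # not enough data to judge seasonality
        return None
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples) or 1e-9
    z = (value - mean) / stdev
    if abs(z) < z_threshold:
        return None
    low, high = mean - 2 * stdev, mean + 2 * stdev
    return (f"value {value:.1f} is {z:.1f} standard deviations from the "
            f"usual range [{low:.1f}, {high:.1f}] for this weekday and hour")

# Example usage with synthetic data
for week in range(6):
    record(weekday=0, hour=9, value=100 + week)   # typical Monday 9am load
print(check(weekday=0, hour=9, value=300))        # flags an anomaly with a reason
```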

As an example, perhaps your website monitoring tool notes that a web page loads slowly at the same time each week when it is accessed from a certain location. AI tools can recognise this pattern and automatically raise a ticket with the web manager via your service desk software. By integrating with multiple tools, AI automates processes, saves time, and improves productivity.
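The hand-off to the service desk could look roughly like the sketch below. The endpoint URL and payload fields are placeholders; real field names and authentication depend on the service desk product in use.

```python
import requests

# Placeholder endpoint for a generic service desk REST API.
SERVICE_DESK_URL = "https://servicedesk.example.com/api/v3/requests"

def raise_slow_page_ticket(page: str, location: str, weekday: str, hour: int,
                           p95_ms: float, baseline_ms: float) -> None:
    """File a ticket that carries the detected pattern and its explanation."""
    explanation = (
        f"{page} loaded slowly from {location} every {weekday} around {hour}:00 "
        f"(p95 {p95_ms:.0f} ms vs. usual {baseline_ms:.0f} ms)."
    )
    payload = {
        "subject": f"Recurring slow load: {page}",
        "description": explanation,   # the explanation travels with the ticket
        "group": "Web operations",
    }
    response = requests.post(SERVICE_DESK_URL, json=payload, timeout=10)
    response.raise_for_status()

# Example usage (commented out because the endpoint above is a placeholder):
# raise_slow_page_ticket("/checkout", "Chennai", "Monday", 9, 4200, 900)
```

Whatever the integration looks like, the ticket should carry the detected pattern and the reason it was raised, not just an alert.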

Again, the important point to drive home is that the AI must be explainable. AI tools can suggest certain decisions; however, if these decisions don’t come with pre-built explanations, people will lose faith in the tools.

Disclosure: This article was originally published in Analytics India Magazine
