
Data & Analytics Trend 4 – Explainable AI

This is the fourth part of our “Diving deep into Gartner’s Top 10 Data and Analytics Technology Trends for 2019” blog series, in which we explore the top 10 Data & Analytics trends for 2019 identified by Gartner. If you haven’t yet seen the third post, on Continuous Intelligence, check it out here.

In this post, we cover Explainable AI (XAI), a technology whose main aim is to enhance an already existing and very widespread use case: machine learning-powered applications.

Definition

Machine learning techniques that make it possible for human users to understand, appropriately trust, and effectively manage AI.

Harvard Business Review Analytic Services

As it stands right now, the results that machine learning-powered applications generate are delivered to the user without the context needed to explain why the result is what it is. This may be less of an issue when AI is used as an additional analytical input for business intelligence, but it becomes a much more salient concern when AI powers systems such as autonomous driving.

Essentially, the majority of AI processing occurs in a “black box” from which outputs are generated. An end-user may understand the mechanics behind the machine learning techniques most commonly employed, such as random forests, neural networks, and k-nearest neighbors, but in the moment of working with an AI/ML-backed system they do not get the full context of the machine’s decision.

For clarity, the focus of this post revolves around XAI as it relates to machine learning (ML). ML is a subset of artificial intelligence that enables computer programs to learn from data without that “learning” being explicitly programmed in. For example, you could feed an image-processing ML program 1,000 pictures that contain trees, labeled as such, for the program to be “trained” on. Afterward, you could input 1,000 unlabeled photos into that same program and ask it to find which pictures contain a tree.
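As a rough illustration of that train-then-predict workflow, here is a minimal sketch using scikit-learn. It is simplified to synthetic tabular feature vectors standing in for image features, and the library choice is ours; the post itself does not prescribe any particular tooling. Note how the predictions at the end arrive as bare labels, with no explanation attached, which is exactly the “black box” problem described above.

```python
# A minimal sketch of supervised learning: train on labeled examples,
# then predict labels for new, unlabeled examples.
# Synthetic feature vectors stand in for image features (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# 1,000 labeled training examples: each row is a feature vector,
# each label is 1 ("contains a tree") or 0 ("no tree").
X_train = rng.normal(size=(1000, 20))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# 1,000 new, unlabeled examples: the model predicts which "contain a tree",
# but the raw output carries no context about *why* each label was chosen.
X_new = rng.normal(size=(1000, 20))
predictions = model.predict(X_new)
print(predictions[:10])  # e.g. [1 0 1 ...] - labels only, no explanation
```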

Why It Matters

Photo by Fabio Comparelli on Unsplash

The power of AI lies in its ability to out-process the human brain once it is given a model and an algorithm to ground its analysis.

The major stumbling block in this process is that the end-user who is supposed to take action based on the machine learning output will most likely be at a loss when it comes to assessing the validity of the answer. Bridging the gap between human and machine becomes very difficult, perhaps practically impossible for a person to do unaided.

This puts end-users in a difficult position. Most likely, assuming they have some experience in machine learning, they will understand which inputs or training data sets can be altered in order to drive outputs in a certain direction.

Additionally, they can most likely explain, in the abstract, how a particular AI-backed application runs its machine learning operations.

However, when it comes to giving a full picture of why the answer is what it is, what potential errors to watch out for, and other questions of that sort, they will be in the dark.

As convoluted as it sounds, XAI uses AI to explain AI. In doing so, it empowers end-users of all technical skill levels to make better sense of the results their AI-powered applications produce. It also frees up your data scientists to focus on harder problems: improving both the computational capabilities of the ML models that drive the output and the related models that provide the XAI context.
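To make that concrete, here is a minimal sketch of one common explanation technique, permutation feature importance, applied to the classifier from the earlier sketch. This is an illustrative choice on our part; the post does not prescribe a specific XAI method, and commercial XAI tools use their own approaches.

```python
# A minimal sketch of one explanation technique: permutation feature importance.
# It estimates how much the model relies on each input feature by shuffling
# that feature and measuring the resulting drop in score.
# Assumes `model`, `X_train`, and `y_train` from the earlier training sketch;
# in practice you would evaluate on held-out data rather than the training set.
from sklearn.inspection import permutation_importance

result = permutation_importance(
    model, X_train, y_train, n_repeats=10, random_state=0
)

# Rank features by how much the model depends on them - a simple form of
# "context" an end-user can inspect alongside the raw predictions.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.3f}")
```

Libraries such as LIME and SHAP take this idea further by explaining individual predictions rather than the model as a whole, which is closer to the per-decision context end-users ultimately need.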

Applications

There are currently few applications on the market that provide XAI. The only one we are aware of is simMachines. If this is something you’re interested in, we recommend requesting a demo, but be aware that this industry is very young and the full capabilities of XAI are yet to be realized.

Conclusion

As artificial intelligence becomes more powerful, the insights we derive from it will become harder to understand. Explainable AI is going to be a necessary part of the puzzle for modern organizations and their teams. XAI will also be one of the main drivers in establishing greater trust in machine learning applications as they spread within companies.

If you’re interested in learning more about this topic, you can look at the bundle of links here: http://leaves.anant.us/topic/xai.

Photo by Rock’n Roll Monkey on Unsplash