If explainability is the understanding of AI, explain on!


Artificial Intelligence makes predictions based on the data that organisations feed into it. The problem is that AI is largely what is called a black box, meaning we can’t see inside the AI to understand why it made a particular prediction. We know the first part, the data we feed in, and the last part, the outcome it produces from that data. What we don’t know is the reasoning the AI followed to arrive at that outcome.

To help understand that journey, organisations are starting to utilise what is called Explainable AI, sometimes referred to as XAI. Simply put, explainable AI is a technique that monitors the AI black box and attempts to show the reasoning behind the prediction. 

One of the reasons that AI explainability is becoming more important is that organisations want to be able to trust the outcomes that the AI is giving. Imagine if a stranger gave you advice without explaining how they came to their conclusion or why you should listen to them. If your way of thinking disagrees with the stranger’s advice, you probably won’t listen to them as you don’t have any reason to believe their advice to be correct. Why should you trust them?

By opening up the AI’s black box, you see the reasoning behind the prediction: which data has been selected and weighted in the decision-making process. This not only helps create trust, but also improves the quality of the predictions, as developers can adjust the algorithms based on which data the AI is emphasising.
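To make that concrete, here is a minimal sketch of one common, model-agnostic explainability technique: permutation importance, which measures how much a black-box model’s accuracy drops when each input is shuffled. The feature names are purely illustrative and not taken from any particular product.

```python
# Minimal sketch: surfacing which inputs a black-box model relies on,
# using permutation importance from scikit-learn (one common XAI technique).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic "loan" data: 5 features, only some of them informative.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
feature_names = ["income", "debt_ratio", "age", "postcode_score", "tenure"]  # illustrative names

model = RandomForestClassifier(random_state=0).fit(X, y)  # the "black box"

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:15s} importance: {score:.3f}")
```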

Transparency around the decisions that AI makes is becoming more and more important in regulated industries, such as financial services. To take an example from banking: if an individual applies for a small loan, an area that is becoming increasingly automated, and is then denied, some believe that the individual should have a right to explanation, meaning they are entitled to know the reasoning behind their rejection. Although a right to explanation has not yet been implemented, it is bound to become a bigger discussion with the rise of automation.

With the right to explanation gathering momentum, it may be important for organisations to future-proof their AI algorithms by building in explainability from the start. To aid in this explanation of AI, below is a short list of some interesting companies we’ve come across.


Arthur

Founded in 2018, $18.3m raised, 25–50 employees, US-based

Client snapshot:

Humana, Truebill

What do they do?

Arthur offers a centralised monitoring platform for ML models that are either about to go into production or have been running for some time. Their platform helps improve model performance and detect problems before they cause harm, enabling black-box models currently in production to become explainable. By aggregating inference data, they configure thresholds to detect unwanted bias that affects consumer decisions.

Why we’re interested

Arthur is interesting to us because they focus on removing the biases that become embedded in ML models while they are in production. Biases creep into ML models when the algorithm places more emphasis on a factor than it should, which can lead to the model reinforcing untrue stereotypes and producing false predictions. Arthur’s platform gives organisations a better ability to flag these biases, understand where they come from, and improve their current models.
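As an illustration only (this is not Arthur’s actual API), a bias check of this kind can be as simple as comparing approval rates across a protected group in aggregated inference data and alerting when the gap crosses a configured threshold:

```python
# Minimal sketch (not Arthur's API): flagging unwanted bias in production
# predictions by comparing positive-prediction rates across a protected group
# and alerting when the gap exceeds a configured threshold.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rate between two groups (0/1)."""
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

# Simulated inference data: model approvals and a protected attribute.
rng = np.random.default_rng(0)
preds = rng.integers(0, 2, size=5000)
group = rng.integers(0, 2, size=5000)

THRESHOLD = 0.10  # illustrative threshold, set per use case in practice
gap = demographic_parity_gap(preds, group)
if gap > THRESHOLD:
    print(f"ALERT: approval-rate gap {gap:.2f} exceeds threshold {THRESHOLD:.2f}")
else:
    print(f"OK: approval-rate gap {gap:.2f} within threshold")
```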


DarwinAI

Founded in 2017, $3.9m raised, 50–100 employees, Canada-based

Client snapshot:

BMW, Honeywell, Lockheed Martin

What do they do?

DarwinAI’s product, GenSynth, allows for explainability, accelerates deep learning design and makes it easier for developers to interpret and quantify the inner workings of deep neural networks. GenSynth ingests models built for virtually any AI task, such as computer vision, natural language processing, or speech recognition, and outputs a highly optimised, compact version of the network, which can be stored either on-premise or in the cloud. The compact neural network is then run through their explainer tool to show how the model came to its decision.
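As a rough illustration of the compaction step (a generic sketch, not DarwinAI’s GenSynth), one common way to shrink a trained network is to prune its low-magnitude weights:

```python
# Minimal sketch: compacting a trained network by pruning low-magnitude
# weights with PyTorch's built-in pruning utilities; one generic way to
# produce a smaller model from a larger one (not GenSynth's method).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Zero out 50% of the smallest-magnitude weights in each linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"Overall weight sparsity after pruning: {zeros / total:.0%}")
```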

Why we’re interested

Where the other platforms focus on machine learning models more broadly, DarwinAI’s product helps to explain why deep learning neural networks have given a particular prediction. Their platform allows development teams to see the link between the data and the model, identify the factors that led to the model’s decision, explain why that decision was made, and communicate the model’s reasoning intuitively. Essentially, DarwinAI provides a platform that brings clarity to deep learning models, which are notoriously difficult to understand. In highly regulated industries, such as banking, where the reasoning behind a decision needs to be explained, DarwinAI could allow deep learning to be adopted much sooner.


Fiddler

Founded in 2018, $13.2m raised, 25–50 employees, US-based

Client snapshot:

Facebook, Intel, Hired

What do they do?

Fiddler develops a platform designed to make AI services transparent, explainable and understandable. The platform uses an explainable AI engine to provide business and statistical metrics, performance monitoring and security services, enabling businesses to analyse, manage and deploy their machine learning models at scale.

Why we’re interested

Fiddler places more emphasis on actual model performance than the other explainability platforms in this list. Their dashboard detects data drift as it happens, helping ensure that predictions remain accurate.
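To illustrate the idea (a generic sketch, not Fiddler’s API), data drift can be detected by comparing the distribution of a feature at training time against live inference traffic, for example with a two-sample Kolmogorov-Smirnov test:

```python
# Minimal sketch (not Fiddler's API): detecting data drift by comparing a
# feature's training-time distribution against live inference traffic
# with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_income = rng.normal(loc=50_000, scale=10_000, size=10_000)  # reference data
live_income = rng.normal(loc=56_000, scale=10_000, size=2_000)       # drifted traffic

statistic, p_value = ks_2samp(training_income, live_income)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {statistic:.3f}); consider review or retraining.")
else:
    print("No significant drift detected.")
```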


XaiPient

Founded in 2019, funding undisclosed, 1–10 employees, US-based

Client snapshot:

Undisclosed

What do they do?

XaiPient is developing an explainable marketing intelligence engine. Through their platform, organisations can connect marketing data and view a dashboard of potential outcomes of their marketing campaigns. Most importantly, their platform is entirely explainable, with the reasoning behind each prediction showcased directly on the dashboard. 

Why we’re interested

Unlike the other companies, XaiPient is at a much earlier stage and does not focus on explaining machine learning or deep learning models that the customer organisation has built itself. Instead, its engine comes pre-built, allowing for immediate deployment. One of the interesting ways XaiPient’s explainable AI can be used is for planning across multiple scenarios.

Get in touch