Why We Need a More Transparent AI


As AI adoption has grown, so has skepticism about the ‘why’ and ‘what’ of the technology. Maintaining transparency in AI operations has therefore become an important initiative. Jayant Murty, CTO (Americas), and Sanket Sinha, product owner at Digitate, explain the importance of AI transparency.

It’s a natural tendency for people to distrust things that aren’t transparent. As humans, we like to know the “why” behind the “what” before we can trust the “what.” This leads people to question AI’s decision-making capabilities. An AI solution can produce an answer, but how did it arrive at that answer? What criteria did it use? Does it operate by the same values that humans do?

These questions have become more pronounced with the advent of complex machine learning models and ensembles, which cannot be analyzed the way simpler linear models and decision trees can. These models deliver high accuracy on tasks once considered difficult for machine intelligence but are less amenable to “explanation” or “transparency” in the conventional sense. While the ultimate question of why machines “think” the way they do remains a tough one, huge strides have been made in improving our understanding of their results.

It’s also worth acknowledging that many human decisions don’t have “transparent” explanations either, whether by way of rules or closed-form solutions. In some cases, the accuracy of the output may outweigh the need for an explanation. Let’s take a closer look at the concept of transparent AI and some approaches that help achieve transparency.


What Is AI Transparency? 

If we equate transparency in AI with the explainability or interpretability of an answer provided by a machine learning model, then there are at least two definitions of what it actually means: one pertains to correlation and the other to causation. Correlation is the easier problem for simpler machine learning models, but both correlation and causation are challenging for more complex models, such as deep neural network-based approaches.

An AI solution must provide a certain degree of confidence in the path it took to arrive at an answer while maintaining a high level of accuracy; an explanation from a poorly performing model is of little use. Transparent AI systems rely on a multitude of techniques to understand why the system arrived at a particular result. This partly addresses the “black box” syndrome, in which an AI system doesn’t expose adequate and relevant correlations (if not causation) between its input features and its output.

Why Is AI Transparency Important? 

Visibility and trust are major factors in the adoption and sustained use of any AI platform in an enterprise. Analyzing years’ worth of historical data to achieve a high degree of predictive accuracy without a convincing interpretation of the recommendation won’t earn points with decision-makers. This lack of transparency is one of the key barriers to the adoption of autonomous AI today. There are three key reasons why AI transparency is crucial:

  1. To provide a view of the directional and quantitative impact of features on the result, so that the logical consistency of the reasoning can be evaluated, particularly in mission-critical situations. In other words, transparency helps us understand how large a feature’s impact is and whether that impact is positive or negative (see the sketch after this list).
  2. To gain insights that can help improve the model and prevent biases from infiltrating production systems.
  3. To support the growing need for a right to explanation under recent regulations such as GDPR and other data privacy laws. (The EU’s GDPR, for instance, requires that entities handling citizens’ personal data ensure fair and transparent processing by giving them access to “meaningful information about the logic involved” in certain automated decision-making systems.)
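
To make the first point concrete, here is a minimal sketch in Python (assuming scikit-learn is available; the feature names and synthetic data are purely illustrative, not taken from any particular product) of one simple way to surface both the direction and the relative magnitude of each feature’s influence: read the signed coefficients of a standardized linear model.

```python
# Minimal sketch: exposing the direction and magnitude of each feature's
# influence with an interpretable linear model (scikit-learn assumed).
# Feature names and synthetic data are hypothetical stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for enterprise data; real pipelines load their own features.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
feature_names = ["queue_depth", "error_rate", "cpu_load", "retry_count"]  # hypothetical

# Standardize so coefficient magnitudes are comparable across features.
X_scaled = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_scaled, y)

# The sign of each coefficient gives the direction of its impact on the outcome;
# the absolute value gives its relative strength.
for name, coef in sorted(zip(feature_names, model.coef_[0]),
                         key=lambda pair: abs(pair[1]), reverse=True):
    direction = "increases" if coef > 0 else "decreases"
    print(f"{name}: {direction} the predicted risk (weight {coef:+.2f})")
```

A readout like this lets a reviewer check whether the model’s reasoning is directionally consistent with domain knowledge before trusting its recommendations.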

Organizations need AI. However, today’s enterprises are extremely complex environments that generate terabytes of data each day. It’s not humanly possible to analyze all the data and make fully informed decisions in a timely manner. The hiring cycle can rarely keep up with this rate of expansion. So, many organizations have started depending on some form of AI to automate and enhance decision-making and action-taking. Complex AI models have enabled the analysis of large volumes of multidimensional structured and unstructured data.

The importance of confidence in AI’s decision-making processes cannot be overemphasized. Only with this understanding and transparency will organizations adopt AI at higher rates, and only by doing so will they continue to reap the benefits of AI in the future.

For example, imagine that a retail store’s AI system has sent an alert informing the retailer they won’t receive a warehouse shipment in time for the store opening, which is a service level agreement (SLA) breach. While this is useful information, the alert also needs to include enough evidence about the root cause so that business teams can address the SLA breach, proactively fix the root causes underlying the problem, and prioritize such alerts in the future.

Transparency in AI also enables users to analyze biases before releasing a system into production and to prevent them from creeping in during production runs. The media has been quick to point out the flaws in past AI initiatives, for example, the Tay chatbot fiasco, which led people to question whether AI models were thoroughly tested for bias and accuracy.


How Can Organizations Enable Transparency in AI? 

Organizations need to make clear choices and define success criteria when embarking on their AI journey. In cases that are mission-critical or have a large impact on revenue or cost, it’s important to have some form of interpretability of the results provided by AI.

It is also imperative to understand that not all machine learning models provide an equal degree of interpretability. Simpler models are generally less accurate for complex decision-making but are fairly open to scrutiny. For complex models, surrogate methods are being developed; however, most of these provide local explanations rather than global ones.
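
To illustrate what a local surrogate explanation looks like in practice, here is a minimal sketch (illustrative Python, assuming scikit-learn; the random forest and synthetic data are stand-ins, not a specific vendor’s implementation) in the spirit of LIME-style techniques: a complex model is approximated around a single prediction by a weighted linear model.

```python
# Minimal sketch of a local surrogate explanation: approximate a complex
# model around one instance with a proximity-weighted linear model.
# scikit-learn is assumed; the data and model choices are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)  # the opaque model

instance = X[0]                                   # the prediction we want to explain
rng = np.random.default_rng(0)
neighborhood = instance + rng.normal(scale=0.5, size=(500, X.shape[1]))  # local perturbations

# Query the black box across the neighborhood and weight samples by proximity.
probs = black_box.predict_proba(neighborhood)[:, 1]
distances = np.linalg.norm(neighborhood - instance, axis=1)
weights = np.exp(-(distances ** 2) / 2.0)         # closer points matter more

# The surrogate's coefficients explain the black box only *near* this instance.
surrogate = Ridge(alpha=1.0).fit(neighborhood, probs, sample_weight=weights)
for i, coef in enumerate(surrogate.coef_):
    print(f"feature_{i}: local weight {coef:+.3f}")
```

The surrogate’s weights are only trustworthy near the chosen instance, which is exactly why such methods yield local rather than global explanations.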

In other cases, a high degree of accuracy with limited insight may still be acceptable. Consider a production setup, for example: if a highly accurate image-processing model that matches or exceeds human performance is used to identify defects, it may be sufficient to accept its results with only limited quality checks on faulty and defect-free parts.

Closing Thoughts

AI will eventually undergird many aspects of daily life and business: autonomous cars, medical diagnoses, smart cities, supply chain forecasts, etc. It’s critical to improve the acceptance of AI by interpreting results and minimizing any bias or logical inconsistencies. Much of state-of-the-art AI is not inherently transparent or explainable, but it can become so. The above-mentioned best practices and tools can assist in correctly setting up AI initiatives.

Let us know if you liked this article on LinkedIn, Twitter, or Facebook. We would love to hear from you!
