A Call for Governance and Responsibility in AI


Why is responsible AI required? In this article, Sri Ambati, co-founder and CEO, H2O.ai, takes an example of a company in the real estate industry to illustrate the need for responsible AI and the human element in decision-making.

The online real estate site Zillow recently announced it would shutter its iBuying business, in which it used artificial intelligence (AI) to identify cosmetic fixers that could be bought, renovated, and then quickly put back on the market for a profit. News accounts reported that the company stands to lose more than half a billion dollars due to inaccurate forecasts of the profit it could make by flipping the properties after renovation. More than 25% of the company’s employees will be laid off, and the company is selling more than 2,000 units to an investment firm to cut its losses.

Multiple reasons likely contributed to the failure, and many have wondered how a company renowned for the technology behind its home valuation model could make such erroneous decisions. At the center, however, was a lack of adequate governance around its decision-making algorithms.


What Is AI Governance?

At its core, AI governance refers to the guardrails put in place to ensure that machine learning models are researched, developed, and deployed in a way that a human can evaluate and explain. It also defines how to identify and handle a model’s biases and limitations, and when human oversight or intervention is required to ensure that the model is making the right business decisions.

Today, regulated industries such as financial services, insurance, and healthcare are required to establish AI governance policies; the vast majority of other industries do not have these steps in place. Companies adhering to AI governance policies need to guard against bias and limitations not only while a model is in development but throughout its lifetime. The best AI models can be easily explained by someone with domain expertise – in Zillow’s case, a local real estate agent. The same principle holds for insurance actuaries, medical specialists, and financial analysts.

As in every other industry, COVID-19 upended the real estate market on multiple levels: supply chains came to a screeching halt, causing shortages of core materials such as plywood and sheetrock, which in turn caused prices to spike. Contractors were in high demand as homeowners looked to make the houses they were now living, working, and learning in 24/7 more liveable. Urban residents able to work remotely flocked to suburban and rural areas for more space and less density. If not adjusted to account for these new conditions, any algorithm would have made decisions based on outdated assumptions. If local real estate agents aware of the impact of COVID-19 in real time had been involved in every decision, could Zillow have changed course and avoided a $500+ million loss?


A Call for Responsible AI

Responsible AI is a human-centered approach to artificial intelligence, which means every element of an AI solution has a “human in the loop,” from the annotation of data and the development, validation, and testing of machine learning models to the identification of bias, limitations, and anomalies of the model. 

Regardless of regulatory requirements, it is in an enterprise’s best business interest to ensure that a human can clearly understand and explain its models and be involved in the eventual decision-making process. Typically, industries are regulated to ensure that consumers are not harmed by their products and services. Digital real estate platforms are not regulated, and no harm is known to have affected Zillow buyers or sellers. In this case, however, following responsible AI and AI governance practices could have prevented, or at least limited, the adverse effect that inaccurate algorithms had on the business.

When each of these elements is in place, AI can be a significant driver of a business’s success. Responsible AI is not just a technology practice; it relies heavily on people and processes within the organization to monitor for significant shifts in data, accuracy, bias, explanations, and decision-making.
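To make the monitoring idea concrete, here is a minimal, hypothetical sketch of one common technique for detecting the kind of data shift described above: the population stability index (PSI), which compares a feature’s current distribution against the distribution the model was trained on. The feature, numbers, and alert threshold are illustrative assumptions, not anything from Zillow’s actual system.

```python
import math
import random

def psi(baseline, current, bins=10, eps=1e-4):
    """Population Stability Index between two samples of one numeric feature.

    A common rule of thumb (illustrative, not a hard standard):
    PSI < 0.1 -> stable, 0.1-0.25 -> moderate shift, > 0.25 -> significant shift.
    """
    sorted_base = sorted(baseline)
    # Bin edges taken at baseline quantiles, so each bin holds ~equal baseline mass
    edges = [sorted_base[int(len(sorted_base) * i / bins)] for i in range(1, bins)]

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(1 for e in edges if x >= e)] += 1
        # Floor each fraction at eps to avoid log(0) for empty bins
        return [max(c / len(sample), eps) for c in counts]

    p, q = bin_fractions(baseline), bin_fractions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Illustrative usage with synthetic sale prices (in $k): a pre-shock baseline
# versus a market that has shifted upward and become more volatile.
random.seed(0)
baseline = [random.gauss(300, 50) for _ in range(5000)]
shifted = [random.gauss(360, 70) for _ in range(5000)]

print(f"PSI, no shift:   {psi(baseline, baseline):.3f}")
print(f"PSI, with shift: {psi(baseline, shifted):.3f}")
```

In a governed deployment, a PSI above the chosen threshold would not retrain or halt anything automatically; it would flag the model for exactly the kind of human review this article argues for.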
