Why a Risk-Based Approach to AI Regulation Is Critical for Future Implementations


The European Commission’s proposal to regulate the use and supply of artificial intelligence (AI) has set in motion a first-of-its-kind legal process to make AI solutions more accountable and fair. Its draft proposes a risk-based approach that lays ground rules for high-risk AI practices. It also recommends hefty fines of up to €30 million or 6% of global annual turnover for companies found at fault.

The European Commission has been at the forefront of some of the landmark regulations written to make tech companies more accountable. GDPR has become a model for privacy laws in several countries outside the EU. Like GDPR, the AI rules will have extraterritorial reach: all AI solutions used within the European Union will fall under their purview, regardless of the location of the solution provider.

The emphasis on regulating AI stems from growing frustration with recurring patterns of bias and unfairness in AI solutions. Among the critical issues is a lack of transparency about algorithms and how they are written. Many believe some applications of AI, such as self-driving cars or AI-powered weapons, increase the risk of physical harm. Though some big tech companies, including Google, made a conscientious decision not to develop AI for weapons, no rule enforces a blanket ban on others. Similarly, the use of AI for surveillance in law enforcement is viewed as a violation of citizens’ privacy and fundamental rights.

The biggest concern with AI is bias leading to discrimination based on race and gender. For instance, in 2019, a risk-prediction algorithm used on more than 200 million patients in the U.S. reportedly demonstrated racial bias, favoring white patients. The researchers found that the bias had erroneously reduced the number of black patients eligible for extra care by more than half. Several such incidents have been reported in other sectors too.
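
Audits like the one above typically start by comparing outcome rates across demographic groups. As a minimal, hypothetical sketch (the records, group labels, and "four-fifths" threshold below are illustrative, not drawn from the cited study), a disparate impact check might look like this:

```python
from collections import defaultdict

def selection_rates(records):
    """Per-group rate of positive outcomes (e.g., flagged for extra care)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(records, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.
    Values below 0.8 are a common red flag (the 'four-fifths rule')."""
    rates = selection_rates(records)
    return rates[protected] / rates[reference]

# Illustrative records: (group, was_flagged_for_extra_care)
records = [("black", True), ("black", False), ("black", False), ("black", False),
           ("white", True), ("white", True), ("white", False), ("white", False)]

print(f"{disparate_impact_ratio(records, 'black', 'white'):.2f}")  # 0.25 / 0.50 = 0.50
```

A ratio well below 0.8 does not prove discrimination on its own, but it is a common trigger for a deeper audit of the model and its training data.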

Experts believe that, if left unchecked, these biases can cause harm and reinforce many existing inequities instead of addressing them. According to Gartner’s estimates, by 2022, 85% of AI projects will deliver inaccurate results due to bias in data, algorithms, or the teams managing them.

“Recent years have witnessed an exponential growth in AI and related disciplines, with society reaping benefits across multiple industries from health to finance to real estate. With these benefits, however, come concerns. It is generally understood that AI technologies designed to replace or augment human decision-making processes create risks with unforeseeable consequences,” says Brandon Loudermilk, Director of Data Science & Engineering at Spiceworks Ziff Davis.

See more: Why Machine Learning Accuracy Matters and Top Tools to Supercharge It

What Is the Best Approach to Regulate AI?

Experts believe that to regulate AI, it is essential to build a general consensus on what constitutes AI and what risks general AI poses outside of specific industries and domains. 

Ritu Jyoti, Group Vice President, Worldwide AI and Automation Research at IDC, rues: “In the U.S., regulatory efforts remain fragmented.”

Jyoti has a point. In March, a group of five federal financial regulators issued a request for information on financial institutions’ use of AI in services to customers and business operations, a precursor to potential “clarifications” of existing laws and regulations. Shortly after that, the Federal Trade Commission released guidance on what it considers “unfair,” and thus unlawful, use of AI from a consumer protection perspective. More recently, the U.S. National Institute of Standards and Technology (NIST) proposed an approach to identifying and managing AI bias across the AI life cycle as part of a broader risk management framework for trustworthy and responsible AI.

“These activities and more emerging on the national level, combined with efforts by individual states and cities to introduce their own laws and regulations around AI education, research, development, and use, have created an increasingly complex and unpredictable regulatory climate in the U.S.,” says Jyoti. 

Though AI regulation aims to ensure oversight and adequate risk mitigation in the larger societal interest, the primary arguments against it center on increased costs and stifled R&D innovation. However, Loudermilk feels that posing the question of regulation as a binary yes/no choice is a false dilemma. In his view, given the current state of maturity of AI and adjacent disciplines, general AI regulation is unwarranted and misguided.

“Rather, regulation should be seen as a matter of degree. What is required is an approach to AI regulation that takes the middle ground by identifying how much regulation is required, what type of regulation is required, and what coverage domain the regulation addresses. Taking this flexible, pragmatic approach to AI regulation helps society safeguard the common good in cases where the risk is greatest while continuing to support general R&D innovation in AI by avoiding extensive regulatory efforts,” he adds.

Jyoti agrees that broad-stroke regulation of the use and supply of AI is not required. “I think a risk-based approach to regulation, similar to what the European Commission is proposing, makes sense. Having said that, I think federal governments and national regulatory authorities must find a sweet spot between too much and too little regulation,” she adds.
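
To make the risk-based idea concrete, such a regime sorts AI use cases into tiers with escalating obligations. The sketch below is a loose, illustrative model of the tiers in the Commission’s draft; the use-case-to-tier mapping is hypothetical, not the legal text:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, documentation, human oversight"
    LIMITED = "transparency obligations, e.g., disclosing AI interaction"
    MINIMAL = "no additional obligations"

# Hypothetical mapping of use cases to tiers; actual classification
# depends on the final legal text, not this sketch.
USE_CASE_TIERS = {
    "social_scoring_by_government": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "resume_screening": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Look up the tier for a use case, defaulting to minimal risk."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

for use_case in USE_CASE_TIERS:
    print(obligations(use_case))
```

The design point is that most applications land in the minimal tier and face no new burden, while the heaviest obligations concentrate on a narrow set of high-risk uses.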

See more: Getting the Data and AI Implementation Right for Your Organization

Will Regulation Throttle the Adoption of AI?

AI generates value, but it also presents significant business risks and potential adverse impacts if steps are not taken to address unfairness and inaccuracies. Technology suppliers need to step up efforts to develop and deploy fair, explainable, robust, and transparent AI.

“AI is at the heart of digital disruption and is no longer a nice-to-have option. With trustworthiness emerging as a dominant prerequisite for AI, more investment in trustworthy AI tools and technologies will help manage AI/ML risks,” says Jyoti.

According to IDC’s research, revenue from applications that would fall under the EU’s proposed AI regulation is below 20% of the global AI market. As per Paul Nemitz, Principal Advisor on Justice Policy, European Commission, the proposed EU rules may be adopted by July 2022, followed by a transition period of two years, which means they would come into effect only by July 2024.

Some sector-specific attempts have been made in the U.S. to regulate AI. A case in point is the U.S. Food and Drug Administration’s (FDA) proposal to create a framework to regulate AI-powered medical devices. The framework would support good ML practices, review AI algorithms, and establish processes to evaluate and enhance their accuracy in detecting and diagnosing diseases.

However, not everyone is happy with top-down regulation. Some industry experts believe AI regulation will stifle breakthroughs in AI by weighing down organizations with the burden of compliance.

“Regulation may slow down AI growth and adoption. Instead of focusing on technology that can be improved, companies will need to spend time complying with the regulator’s rules to avoid getting fined. Some regulations can become economically disadvantageous, where instead of breakthrough technologies, the industry ends up with run-of-the-mill solutions,” says Alexandra Murzina, Machine Learning Engineer/Data Scientist, Advanced Technologies at Positive Technologies.

Though the use of AI in security has not yet reached a level where regulation would be warranted, Murzina feels it might help battle the rising menace of deepfakes. “Deepfake technologies are now being actively used by attackers. This includes face swapping on video and voice changing. It’s always difficult to regulate any technology. But some protocol is needed here that addresses how people can verify that the source producing the video or voice is credible,” she adds.
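
One technical direction for the verification protocol Murzina describes is cryptographic provenance: a publisher signs the media it releases, and viewers verify the signature against the publisher’s public key. The sketch below uses Ed25519 signatures via the third-party cryptography package; the workflow and stand-in video bytes are illustrative (real provenance efforts such as C2PA embed signed metadata inside the media itself):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a key pair once, sign every file released.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # distributed to viewers out of band

video_bytes = b"...raw bytes of the published video..."  # illustrative stand-in
signature = private_key.sign(video_bytes)

# Viewer side: check the received bytes against the publisher's public key.
def is_authentic(data: bytes, sig: bytes) -> bool:
    """True only if the data is byte-for-byte what the publisher signed."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(video_bytes, signature))         # True: untampered
print(is_authentic(b"deepfaked bytes", signature))  # False: altered, or not from this publisher
```

A signature cannot detect a deepfake as such; it only proves whether a clip came unaltered from a claimed source, which is exactly the "credible source" question Murzina raises.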

Murzina feels this is more of a technical problem than a legislative one. However, regulations can put pressure on the industry to collaborate more proactively and develop technical standards to address such issues.

See more: How Synthetic Data Can Disrupt Machine Learning at Scale

How Can Organizations Prepare for Tough AI Regulations?

Even though the rollout of new rules will take time, the draft to regulate AI in the EU will likely trigger a wave of similar proposals across the world. Like data privacy, AI is a universal subject that touches everyone’s life in some form. Advance preparation for the impending regulations will reduce the burden of compliance and save millions of dollars in penalties.

Addressing the inherent issues in AI will also help organizations avoid potential negative business impacts. Not long ago, Facebook reportedly shut down an AI experiment after two chatbots developed their own shorthand language without human input and started using it to communicate.

Trustworthiness is emerging as a dominant prerequisite for AI, and businesses must take a proactive stance. 

A good start would be to have a broad risk management plan. “AI cannot be managed as a one-off project. Systematic approaches to model development and management over the model lifecycle are needed. Organizations need to assess their AI/ML business risks proactively and build high-risk mitigation plans while integrating third parties’ (partners’ and suppliers’) commitment and adherence to their standards,” suggests Jyoti.
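
What such a systematic, lifecycle-wide plan might look like in practice is a standing risk register entry per model, reviewed on a schedule rather than once at launch. This is a minimal, hypothetical sketch; the fields, names, and 90-day audit window are illustrative, not an IDC framework:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRiskRecord:
    """One entry in a model risk register, revisited across the lifecycle."""
    model_name: str
    owner: str
    risk_level: str                      # e.g., "high" triggers a mitigation plan
    training_data_sources: list
    last_bias_audit: date
    third_party_components: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

    def audit_overdue(self, today: date, max_days: int = 90) -> bool:
        """Under such a plan, high-risk models might be re-audited quarterly."""
        return (today - self.last_bias_audit).days > max_days

record = ModelRiskRecord(
    model_name="claims-triage-v3",               # hypothetical model
    owner="risk-team@example.com",
    risk_level="high",
    training_data_sources=["claims_2018_2021"],
    last_bias_audit=date(2021, 3, 1),
    third_party_components=["vendor-nlp-sdk"],   # suppliers held to the same standards
    mitigations=["reweighted training data", "human review of denials"],
)
print(record.audit_overdue(today=date(2021, 8, 1)))  # True: audit is past due
```

Tracking third-party components alongside the organization’s own models mirrors Jyoti’s point that partners and suppliers must adhere to the same standards.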

Jyoti further adds, “Organizations also need to educate their workforce on the societal, legal, and ethical implications of working alongside AI. Company leaders need to integrate their corporate core values, provide a clear stance on ethics and AI, and ensure their AI use complies with corresponding laws.”

In addition to regularly examining their vendor ecosystem across AI products and services, organizations should also focus on leveraging other emerging technologies to validate, monitor, and analyze AI systems for trust and ethics. 
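
On the monitoring side, even a simple statistical check on live predictions can flag when a deployed model drifts from the behavior that was validated. A minimal, hypothetical sketch (the baseline rate, tolerance, and data are illustrative):

```python
def drift_alert(baseline_rate: float, live_predictions: list, tolerance: float = 0.10) -> bool:
    """Flag when the live positive-prediction rate moves beyond the tolerance
    of the rate observed during validation; a trigger for human review."""
    live_rate = sum(live_predictions) / len(live_predictions)
    return abs(live_rate - baseline_rate) > tolerance

# Validated at a 30% approval rate; production now approves 45% of cases.
live = [1] * 45 + [0] * 55
print(drift_alert(baseline_rate=0.30, live_predictions=live))  # True -> investigate
```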

See more: Cognitive Computing vs. AI: 3 Key Differences and Why They Matter

Final Thoughts

AI presents tremendous growth opportunities for businesses as well as society at large. However, to ensure that it benefits both, checks and balances are needed to keep AI implementations fair and unbiased towards all. Regulation can create a framework that makes AI implementations more accountable, fair, and transparent. Yet, as Loudermilk notes, regulators should not lose sight of the fact that AI is a fairly young discipline, and given its maturity, a general approach to AI regulation can be misguided. What is needed is a “pragmatic approach for selective regulation in well-understood, at-risk domains.”

Do you think a risk-based approach with a focus on domain-specific issues is the best way to regulate AI? Comment below or let us know on LinkedIn, Twitter, or Facebook. We’d love to hear from you!