A Guide to the Regulation of Artificial Intelligence


In the future, artificial intelligence may facilitate decision-making, underpin financial instruments and even conclude contracts. However, its actual power to affect these and other activities is difficult to predict.

That’s why various leading experts and organizations are pushing for the regulation of AI at many levels, from consumer products all the way to international markets. But it remains to be seen how AI use within companies can be regulated, what form this regulation will take and whether it can be truly effective.

Why Regulate AI?

Prominent public figures, from SpaceX CEO Elon Musk to the late physicist Stephen Hawking, have argued in favour of AI regulation. Hawking worried that people would come to entrust AI with a disproportionate share of critical responsibilities on a global scale – for example, the existing reliance on Google’s increasingly powerful AI capabilities to answer everyday questions.

By contrast, Musk appears to fear that autonomous machines will take control of the world – a fear that misses the definition of AI relevant to the regulation debate. AI consists of software-level processes capable of performing functions such as sifting through large quantities of data and reaching conclusions in line with their programmed parameters; its main application lies in sparing similarly capable humans from lengthy and tedious tasks. In addition, AI can present options derived from increasingly sophisticated decision-making processes.

So when it comes to business intelligence, regulating AI is not, as some participants in the debate argue, tantamount to curbing the biggest wave of human innovation since the development of the Internet. Like the Internet, AI is a tool open to an ever-increasing number of people and organizations.

However, the Internet is not in and of itself intelligent, nor is it capable of performing functions designed to influence commercial decisions on its own. It’s also easy to forget that the effectiveness of AI depends largely on the humans programming it and the information it can access in order to fulfil its purpose.

This means AI deployment and provision are open to potential abuse – for example, a research suite designed to identify new vendors that turns out to provide information only on the provider’s own commercial partners.

It follows that AI products must be fit for their given purpose and have access to the data necessary to fulfil it. Any regulatory framework should therefore be capable of ensuring that the quality standards required by particular industries and applications are implemented and followed.

Adequate and appropriate regulation may mitigate many of these risks, just as bodies such as the US Food and Drug Administration and the European Medicines Agency were created to safeguard the standards to which the quality, safety and risk-to-benefit ratios of human medicines are held.

How to Regulate AI

The capacity of AI to outperform humans at certain tasks means it is being deployed to handle a growing set of them, in areas including trading on international financial markets. However, AI engines do not necessarily have to reveal their inner operations to third parties, even those with vested interests in their output, such as investors.

One of the first explicit efforts to regulate AI is the European Union’s second Markets in Financial Instruments Directive (MiFID II), which incorporates a set of rules governing AI tools – those that use algorithms, data processing or high-throughput processing, or a combination of these, to reach decisions impacting investment or trading.

The legislation requires firms seeking authorization to offer investment services throughout the EU to disclose the information they use, including the inputs to their AI operations, as well as their use of algorithms and the parameters, strategies and contingencies associated with each one.

It also requires the implementation of appropriate risk controls in conjunction with, or integrated into, each company’s AI. However, the main mechanism for AI control under the European legislation is to confine the information and output to regulated markets and platforms, where the continent’s regulatory authorities can monitor them.

It’s not always clear whether such a sector-specific approach is the best, although advocates argue that different forms of AI require different forms of oversight and regulatory language. At present, the most prominent aspect of AI legislation under consideration by the US Congress concerns the software that is being deployed to operate driverless cars. The results of this process could well prompt a broader examination of oversight of AI throughout the wider transport and logistics field.

In addition, the FDA has recently moved toward frameworks for regulating AI in healthcare, prompted by the emerging trend of incorporating AI into products that determine health-related actions or report on the status of the user. For example, clinical decision support software, which uses custom-designed AI to help doctors and other healthcare providers draw up treatment plans, may incorporate sensitive patient data, making it liable to strict regulation and restrictions.

Such use of AI could be regulated under the 21st Century Cures Act, enacted in 2016. However, the legislation stipulates that many forms of AI-enhanced products, including clinical decision support, may be exempted from FDA regulation under certain conditions – for instance, if users are capable of independently reviewing the basis of the recommendations the software generates.

Another example of the FDA’s approach to AI that may fall under its jurisdiction is the Digital Health Software Precertification (Pre-Cert) Pilot Program, designed to enable “pre-market” oversight of health-tracking devices from companies such as Fitbit and Under Armour. As these devices will probably be powered by AI in the future, and already have extensive access to customers’ personal data, they may well become targets as AI regulation gathers pace.

When Will Regulation Come?

In these respects, regulation of AI has already arrived, but it does not yet extend to every industry or application in which AI technology can be used or is already in operation. Extending the scope of regulatory oversight is likely to prove a complex task, and the cost of monitoring will initially be borne by the industry in question – and ultimately by its customers.

Nevertheless, in addition to front-end algorithm development, the oversight of companies’ AI applications will provide at least some opportunities for service providers, including consultants, researchers and others active in the authorization and approval process – an example of how the deployment of AI is likely to create new jobs even as it eliminates existing ones.

The use of AI in many sectors is intertwined with issues relating to ethical standards, quality and the safety of users and others, software experts note. As a result, it may become subject to a degree of regulation akin to that customary in areas such as financial products and services.

However, comprehensive oversight of AI may require new and unusual technical skills, as well as unprecedented access to the code behind many high-throughput data processing operations. There are strong arguments for regulating AI to protect purchasers, decision-makers and investors, but how regulation will impact AI’s business applications remains to be seen.