Why We Need Better AI Regulation: Q&A With Access Partnership’s Michael Clauser


“Doing this requires a risk-based mindset. What I mean is that if governments wait until AI is perfect and free of flaws, they’ll never enjoy the benefits AI can bring to the table.”

Access Partnership’s head of data and trust, Michael Clauser, shares his views on what the C-suite’s approach to AI as a business driver should be. He discusses the guidelines IT leadership can put in place to tackle bias and the lack of diversity in AI initiatives.

As a global tech policy consultant, Clauser explains that companies and governments need to come together to review the ethical implications of AI. At Access Partnership, he leads teams that manage the confluence of data and trust between corporations, governments, and consumers in emerging technologies such as AI. Clauser lends this insight to this exclusive conversation with Toolbox, where he discusses the biggest data security risks to enterprise-size companies, how CTOs can offer insights on AI and ethics to policymakers, and more.

Key Takeaways From This Tech Talk Interview on the Ethics of AI:

  • Top tips for CTOs on how to leverage AI
  • Best practices to reduce bias and diversity issues in AI initiatives
  • Top trends to follow in AI regulation and ethics in 2020 and beyond

Here’s the Edited Transcript of the Interview with Clauser and His Views on the Ethics of AI:

Michael, to set the stage, tell us about your career path so far and what your role at Access Partnership entails.

In 2005, I came to Washington, DC from a small town in Pennsylvania angry about September 11th and determined to work in national security to do my damnedest to make sure that such an attack on the US homeland never happened again.

Through a variety of Forrest Gump-esque opportunities, I had the privilege to work towards my goal in uniform both ashore and at sea, at a management consultancy, as a Presidential appointee in the Pentagon, and as national security staff in the House of Representatives, not in that order.

But around 2013, like many of my 9/11-era national security peers in Washington, I migrated into the tech sector via the cybersecurity and data privacy debates after the Snowden revelations. I think Federalist No. 3 predicts something about Americans always following the big national issues. I’ve watched that bear out in real time. Though I’m privileged to still get to serve in uniform.

How have you seen the C-suite’s approach to data as a business driver develop over the last several years?

A couple of things happened. I think at some point all those MBA classes preaching the virtue of metrics and measurability in the 1980s and 1990s caught up with the tech sector’s ability to deliver tools over the Internet that enable both data input and analytics, with ease of use and affordability, at scale.

At the same time, that cohort of cynical Gen X-ers was ascending into the C-Suite, open to iconoclastic ideas, while hiring smart “Digital Native” millennials straight out of college. Not only did this provide businesses a human capital surge of computer-literate, data-driven professionals to deliver value, but it empowered these newly cashed-up, tech-savvy young professionals as an emerging market demographic with buying power.

One that rewards the efficiency that only data-driven business can provide. Voila: Uber and Grab for on-call transport, Netflix for streaming, GPS on demand with Google Maps. Thus, the C-Suite transition was more revolutionary than evolutionary; less causal, more coup.

What are the 3 biggest data security risks to enterprise-size companies or even nation states?

You mean other than China? I think the first remains a mix of apathy and complacency by companies in their information assurance plans. It’s just amazing to me that there are still companies that don’t use basic tools like two-factor authentication or encryption by default.

A close second, of course, is cyber hygiene and user error. But I think we’re all getting tired of the experts blaming the victim for broken tech. Thirdly, did I mention China?

Learn More: How to Detect a Data Breach and Resolve it: An Interview with Michael Bruemmer of Experian

What are the top 3 mistakes you see IT teams making in the area of data security?

One common mistake is not approaching cybersecurity and data protection with systems thinking in mind. Sure, the answer to a CISO’s problem could be buying this box from Palo Alto Networks, that software from McAfee, these hard drives from Seagate, and that threat consulting from FireEye.

But only if they are woven into a beautifully architected, fully integrated system governed by equally thoughtful corporate data governance policies. If you’re bolting on security ad hoc as an afterthought, you’re better off burning your shareholders’ money in a dumpster. Bad guys will find the seams. Or your employees will.

Speaking of corporate data policies, another mistake is layering on too many user-unfriendly policies that incentivize workarounds. If you mandate a fifteen-character password with at least two upper case, two lower case, two numbers, two special characters, all non-consecutive, and can’t be the same as your last eight passwords, guess what?

It’s going to be written down on a sticky note next to the computer. Or if your webmail system strips out URLs or requires a finicky smart card, guess what? Your users will revert to Gmail.
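That hypothetical password policy can be made concrete to show just how much friction it imposes. A minimal sketch, purely illustrative: the function name is invented, and since “non-consecutive” is ambiguous, it is read here as “no character repeated back-to-back” (an assumption, not anything stated in the interview).

```python
import re

def meets_policy(password, previous=()):
    """Check a password against the onerous policy described above:
    15+ characters, 2 upper, 2 lower, 2 digits, 2 specials,
    no character repeated back-to-back, not among the last 8 passwords."""
    checks = [
        len(password) >= 15,
        len(re.findall(r"[A-Z]", password)) >= 2,
        len(re.findall(r"[a-z]", password)) >= 2,
        len(re.findall(r"[0-9]", password)) >= 2,
        len(re.findall(r"[^A-Za-z0-9]", password)) >= 2,
        all(a != b for a, b in zip(password, password[1:])),
        password not in list(previous)[-8:],
    ]
    return all(checks)
```

The point of writing it out is that nearly every memorable password fails, which is exactly why such rules end up on sticky notes.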

Biometrics is an emerging solution, but only if the right data privacy guidelines are in place, otherwise it can get real creepy real fast.

The third mistake is only possible if the global CIO thinks that rolling out the same IT system for the subsidiary in China as for the rest of the company is a good idea, because, “hey, we’re a global company.”

What are your top 3 tips for CTOs to leverage AI to reduce the risk of data center downtime?

I think AI can be gainfully used by CTOs to reduce data center downtime risk. The first application is, of course, AI-enabled threat detection and active defense. The assumption there being that the driver of downtime is cyber-enabled malicious activity.

The next is using AI to predict and detect opportunities for data backup and arbitrage across various servers or centers. Finally, AI should be a part of any diagnostics, repair, resilience, and remediation strategy to get data centers back up.
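The “predict and detect” idea above can be sketched at its simplest as anomaly detection over a server health metric. This is a toy rolling z-score detector, not Clauser’s or anyone’s production method; the function name, window size, and threshold are all assumptions for illustration.

```python
from statistics import mean, stdev

def anomalies(readings, window=10, threshold=3.0):
    """Flag indices where a metric (e.g. disk temperature or error rate)
    deviates sharply from its rolling baseline over the prior `window`
    readings. Real systems would use far richer models than a z-score."""
    flagged = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged
```

A spike flagged early enough is the trigger for the backup, failover, or remediation steps the answer describes.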

Learn More: How to Prevent Cyber-Attacks: Q&A With David Ferbrache of KPMG

Some AI initiatives have been called out for bias and lack of diversity. What kind of guidelines can IT leadership put in place that make sense for organizations across the public and private sector?

This is the question of the day. The good news is that there seems to be an international consensus converging. Just this week the US Chamber of Commerce released a set of principles for AI. The Japanese government was among the first out of the gate, releasing its set for AI research and development a few years ago.

The White House has weighed in recently. The EU is active in this space. Multilateral groups like the World Economic Forum and the OECD have working groups, as do civil society groups like the ACLU and standards consortia like the IEEE. All these frameworks seem to rhyme.

Ultimately, when it comes to bias and diversity issues, in the US legal context, the ultimate test will be the application of Equal Opportunity Laws with a sprinkling of First and Fourth Amendment cases in the courts. Which I think triggers a similarly interesting question about liability. Does it lie upstream with the AI creator or downstream with the AI user?

Can you give an overview of what governments should do (and avoid) as they regulate AI? What insights can CTOs offer to policymakers around AI and ethics?

Governments should start adopting AI today. And if they’re queasy about rolling out AI enterprise-wide, they should do so in back-office pilot programs or a defined “sandbox” to test the capability. For example, the FBI’s personnel department is using AI to try to predict the retirement of serving Agents in order to determine how many new entry-level positions it needs to generate each year. This is especially important when you think about the lead time necessary to walk an Agent-recruit through the background investigation.
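The FBI example reduces to a simple workforce-planning calculation once a model has produced per-agent retirement probabilities. A hypothetical sketch, with every number, name, and the attrition buffer invented for illustration (a real system would fit these from personnel data):

```python
def positions_to_open(retirement_probs, attrition_buffer=1.1):
    """Expected retirements this cycle = sum of per-agent probabilities
    (e.g. from a model over age, tenure, and pension eligibility).
    A small buffer covers other attrition."""
    expected = sum(retirement_probs)
    return round(expected * attrition_buffer)

def recruiting_start_month(target_onboard_month, background_check_months=12):
    """Start the pipeline early enough to clear the background
    investigation before the vacancies actually open."""
    return target_onboard_month - background_check_months
```

The second function is the lead-time point: the prediction is only useful if it arrives far enough ahead of the vacancy to cover the background investigation.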

Doing this requires a risk-based mindset. What I mean is that if governments wait until AI is perfect and free of flaws, they’ll never enjoy the benefits AI can bring to the table.

Listen, humans aren’t perfect.

And governments don’t wait until human nature perfects itself in order to govern. Ditto for AI. I say, “put it in the game, coach.” Though I will concede that a risk-based model is more culturally American and aligned to our ex post system of harms-based regulation. That is, we let citizens experiment and try things out and only regulate when there is an articulable harm.

The Europeans and many Asian governments like to bake their regulations in up front (ex-ante). But I think if we’re honest, this is probably a contributing factor to why the USA is leading in AI and private investment is flowing to former apple orchards in California.

Learn More: Tech Talk Interview with Lars Selsås of Boost.ai on Conversational AI

Tell us about the upcoming projects in AI regulation and ethics at Access Partnership that you are excited about. Which trends are you tracking in this space as we approach 2020?

For the last six years or so, since the Snowden disclosures, the only companies actively following data policy issues were Big Tech companies. Think: Microsoft, Google, Facebook, and Salesforce. These sorts of companies were really the only serious corporate players on the field for data privacy and cross-border data flow. As such, they were and remain our clientele.

Today, two things have changed. The first is technical: every company is a tech company now. The entire economy is data-reliant and tech-enabled. The second is geopolitical: GDPR is now law, Britain is poised to crash out of the EU, and China is being pushed out of the global value chain.

The confluence of these two macrotrends is waking companies of all stripes up to the intersection of tech policy and geopolitics, or what my friend Paul Triolo at a rival consultancy calls “GeoTech.” Think about Detroit’s foray into autonomous driving.

Big Ag using drones to monitor fields or Big Oil monitoring offshore infrastructure. Wall Street experimenting with Blockchain and Cryptocurrency. Or airlines getting fined for violations of GDPR. The trend is that Silicon Valley has and will continue to permeate traditional business with political, policy, legal, and reputational implications for companies who haven’t had to watch this space before.

Admittedly, there is a flash-to-bang lag for the C-Suite to communicate these new and emerging public policy needs down to their Washington offices, let alone offices in the Three B’s: Boston, Brussels, and Beijing. But when the SVPs for Global Policy in food, pharma, oil, commodities, utilities, chemicals, cars, financial services, and retail get the call from their C-Suite, Access Partnership will be there to help our new clients.

Neha: Thank you, Michael, for sharing your invaluable insights on the ethics of AI. We hope to talk to you again soon.

About Michael Clauser:

Michael A. Clauser is Head of Data & Trust at Access Partnership, the world’s leading global tech policy consultancy, where he works to bridge the trust gap between corporations, governments, and consumers in emerging technologies.

About Access Partnership:

Access Partnership is the world’s leading public policy firm that provides market access for technology. They monitor and analyze global trends for the risks and opportunities they create for technology businesses and identify strategies to mitigate those risks and drive the opportunities to the clients’ advantage. Their team uniquely mixes policy and technical expertise to optimize outcomes for companies operating at the intersection of technology, data and connectivity.

About Tech Talk:

Tech Talk is a Toolbox Interview Series with notable CTOs from around the world. Join us to share your insights and research on where technology and data are heading in the future. This interview series focuses on integrated solutions, research and best practices in the day-to-day work of the tech world.

Would you like to share your thoughts about the future of AI regulation? Find us on Twitter, Facebook, and LinkedIn. We’d love to hear from you!