Can Mice Detect Deepfake Videos?

Researchers are taking aim at “deepfake” videos, which could threaten elections, and believe that mice can help stem their spread.

Deepfakes get their name from the deep learning algorithms that map the movements and expressions of public figures onto actors in videos. Audio is then spliced in to give voice to the words mouthed by the superimposed images.

At the recent Black Hat cybersecurity event in Las Vegas, a team of corporate data scientists and academic researchers pointed to mouse physiology as a potential template for training next-generation neural networks to spot the machine-generated blends of images and speech.

The rodents possess an innate ability to distinguish sounds replicated by artificial intelligence, the researchers said. Creating spoof detectors that account for subtle differences in deepfake audio could better alert us to the phony content distributed on Facebook, Twitter and other social media sites.

Easier to Make, Harder to Detect

A staple in Hollywood films for decades, deepfakes exploded over the last year as the technology used to create them became commonplace. For as little as $100 in cloud-driven computer processing, hoaxers can create a deepfake by manipulating authentic recordings with readily available post-production tools and machine-learning models.

Politicians are a frequent target. One infamous example of a deepfake is an altered video of House Speaker Nancy Pelosi that made it look like she was drunkenly stumbling over her words.

The video, circulated in May, was viewed two million times in its first 24 hours on a Facebook page. Facebook later deemed the video a fake but didn’t delete it, saying, “we don’t have a policy that stipulates the information you post on Facebook must be true.”

In Belgium, a political party published a deepfake on Facebook that showed President Trump criticizing Belgium’s progressive stance on climate change. It provoked hundreds of online comments expressing outrage that Trump would interfere in Belgium’s internal affairs.

The Potential of Mouse-Inspired Detectors

Better spoof detectors are clearly needed to tamp down the fakes.

Working with data scientists from Bloomberg, the financial information and software company, and AI specialist GSI Technology, researchers at the University of Oregon’s Institute for Neuroscience believe mice offer potential in building better detectors.

They propose mapping the rodents’ brain patterns as a precursor to creating more robust algorithms for parsing the volume of digital data uploaded to social media.

That’s because mice can be trained to detect subtle differences in phonemes, the basic sound units of spoken words. And because mice can’t derive meaning from speech, they aren’t susceptible to the cognitive illusions the human brain creates as it tries to understand what’s being said.

Indeed, humans outscored mice for accuracy in identifying phonemes. And computers trained with artificial intelligence outperform humans at spotting deepfakes. But those neural networks rely overwhelmingly on visual cues, such as disparities in pixel density, unnatural head movements and the blinking of eyes. They’re less adept at parsing irregularities in pitch and tone of the underlying audio.
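Those audio irregularities are what mouse-inspired detectors would target. As a purely illustrative sketch (not the researchers’ method), the toy Python below scores two signals by pitch jitter: natural speech drifts in pitch from moment to moment, while a naive synthetic voice can be suspiciously steady. Every signal and parameter here is invented for the demo.

```python
import numpy as np

def pitch_jitter(signal, frame_len, sample_rate):
    """Standard deviation of each frame's dominant frequency: a crude jitter score."""
    peaks = []
    for start in range(0, len(signal) - frame_len, frame_len):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
        peaks.append(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin
    return float(np.std(peaks))

rate = 16000
t = np.arange(rate) / rate  # one second of audio

# "Natural" voice stand-in: a 220 Hz tone whose pitch wobbles +/- 30 Hz,
# built by integrating instantaneous frequency to get a valid FM phase.
f_inst = 220 + 30 * np.sin(2 * np.pi * 3 * t)
natural = np.sin(2 * np.pi * np.cumsum(f_inst) / rate)

# "Synthetic" stand-in: a perfectly steady 220 Hz tone, no jitter at all.
synthetic = np.sin(2 * np.pi * 220 * t)

print(pitch_jitter(natural, 1024, rate), pitch_jitter(synthetic, 1024, rate))
```

A real audio spoof detector would use far richer phoneme-level features, but the principle is the same: flag audio whose micro-variation in pitch and tone falls outside the range of natural speech.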

Calls in Congress to Take Action Now

While the Oregon team says that work remains to map the minds of mice onto neural networks, the project can’t move fast enough for some senators and representatives on both sides of the congressional aisle. Lawmakers began warning about deepfakes in 2017 amid investigations into Russian interference in the 2016 presidential campaign.

In the House, Rep. Adam Schiff, the Intelligence Committee chairman, expressed concerns during a recent hearing that Google, Facebook, and Twitter don’t have a clear plan to deal with the problem. Other lawmakers mentioned the possibility of lifting legal protections for Internet publishers to make them liable for deepfakes.

In the Senate, two powerful members, Democrat Mark Warner of Virginia and Republican Marco Rubio of Florida, demanded that the intelligence community reveal what it knows about foreign-produced deepfakes and US countermeasures, if any.

Rubio warned that deepfakes should be treated as a national security threat. He said the technology could supercharge misinformation campaigns led by foreign powers, singling out Russia.

“I know for a fact that the Russian Federation at the command of Vladimir Putin tried to sow instability and chaos in American politics in 2016,” Rubio said. “They did that through Twitter bots and they did that through a couple of other measures that will increasingly come to light.”

But they didn’t use deepfakes in that election, he said, adding: “Imagine using this now. Imagine injecting this in an election.”