AI’s Growing Role in Cyber Security – And Breaching It


When two Congressional Democrats last week wrote to Amazon founder and CEO Jeff Bezos demanding more information about the sale of the company’s facial-recognition technology to US law enforcement agencies, their concern appeared to be more about political positioning than any expectation of practical impact.

Whatever the fears of Reps. Keith Ellison and Emanuel Cleaver, the civil rights organizations that prompted their protest, or the members of the public who warn that facial recognition could inadvertently increase racial bias in law enforcement, artificial intelligence of the type offered by the online retailer and logistics giant is revolutionizing cybersecurity, and its application is set only to increase.

Responding to the Hackers

A rise in attacks by hackers seeking access to IT systems through the growing number of connected devices is leading companies across the US and worldwide to secure their operations with AI.

They are getting help from vendors such as Amazon, whose Amazon Web Services (AWS) cloud hosting subsidiary offers Rekognition, an AI-driven product suite that can identify people in real time by analyzing vast volumes of images.

While AWS has marketed Rekognition specifically to law enforcement since its launch 18 months ago, prompting critics to charge that it puts privacy at risk, companies are using AI to give employees biometric access to workplaces and systems, as well as to recognize threats and improve response times when breaches occur.
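
By way of illustration, the sketch below shows how such a biometric check might be wired up against Rekognition’s CompareFaces API using the boto3 SDK. The bucket names, image keys and similarity threshold are hypothetical and not drawn from any deployment described here.

```python
# Illustrative sketch: verifying an employee's identity with AWS Rekognition's
# CompareFaces API via boto3. Bucket names, keys and the threshold are hypothetical.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "badge-photos", "Name": "employee-1234.jpg"}},
    TargetImage={"S3Object": {"Bucket": "door-camera", "Name": "entry-frame.jpg"}},
    SimilarityThreshold=90.0,
)

# Rekognition returns one entry per face in the target image that matches the source.
for match in response["FaceMatches"]:
    print(f"Match found with {match['Similarity']:.1f}% similarity")
```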

Vulnerability through Automation

A November report from New York-based P&S Market Research forecasts that global expenditure on AI applications for cybersecurity will rise by more than 30% annually through 2023, up from the $1.2 billion spent in 2016. According to P&S, digital transformation and cloud computing are driving both uptake and delivery.
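
As a rough, back-of-the-envelope check on what that forecast implies (an illustration only, not a figure from the P&S report), compounding the 2016 base at 30% a year through 2023 gives:

```python
# Back-of-the-envelope compounding of the P&S figures: $1.2B in 2016 growing
# at 30% a year through 2023. Illustrative only, not the report's own estimate.
base_2016 = 1.2          # billions of dollars
annual_growth = 0.30
years = 2023 - 2016      # 7 years

implied_2023 = base_2016 * (1 + annual_growth) ** years
print(f"Implied 2023 spend: ${implied_2023:.1f}B")  # roughly $7.5B
```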

Digital transformation leads to increased vulnerability as more processes are automated and as those processes move from on-premises IT infrastructure to cloud environments. Meanwhile, cloud delivery lets cybersecurity packages operate more efficiently, opening the door for small and medium-sized enterprises to buy in at lower cost.

Nevertheless, amid questions about privacy and public safety, AI remains a double-edged sword. Hackers are using the same technology to seek ways into systems that companies and vendors are aiming to protect.

For example, the encryption designed to protect data is providing new avenues for attack. According to Cisco Systems, whose networking hardware underpins enterprise communications, hackers are becoming adept at hiding malware inside encrypted traffic, enabling them to take control of the systems they penetrate while evading detection.

Embedded Malware

In January, France’s Schneider Electric reported that hackers used malware to exploit a bug in the Triconex industrial control system that it sells to utilities, including nuclear power plants, for safety monitoring and emergency shutdown.

Dragos, a Maryland-based cybersecurity consultancy that detected the exploit, says the group responsible for that attack has since penetrated more industrial facilities, threatening human and environmental safety.

On the other hand, NHS Digital, the technology arm of Britain’s National Health Service, used AI to help contain last year’s WannaCry ransomware attack, which spread into NHS IT systems through unpatched Windows machines.

The US government is implementing AI under the provisions of the Modernizing Government Technology Act, which this year saw the creation of a $250 million seed fund for infrastructure upgrades at federal agencies.

Getting Ready for 5G

The vectors for attack are set to multiply with the rollout of the 5G mobile communications standard, due to launch in 2020 in the US and soon after in other developed markets.

Analysts warn that 5G’s wider frequency spectrum will add significant risk to the nearly 21 billion devices – from handheld smartphones to onboard sensors – expected to be connected to the internet at launch.

As a result, efforts to augment AI with machine learning, automating threat assessment and containment and continually refining both as attacks evolve, are gaining pace.
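
As a minimal sketch of what that automation can look like, the example below trains an unsupervised anomaly detector (scikit-learn’s IsolationForest) on a handful of hypothetical network-flow records and flags an outlier for containment. The feature set and numbers are invented for illustration; production systems draw on far richer telemetry.

```python
# Minimal sketch of ML-assisted threat detection: an unsupervised anomaly
# detector trained on baseline network-flow features. All values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, duration_seconds, distinct_ports]
baseline_flows = np.array([
    [5_000, 20_000, 12.0, 2],
    [4_200, 18_500, 10.5, 1],
    [6_100, 22_300, 14.2, 2],
    [5_500, 19_800, 11.8, 1],
])

detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline_flows)

# A flow that is unusually chatty and touches many ports, e.g. a scanning host.
new_flow = np.array([[750_000, 1_200, 3.0, 45]])
if detector.predict(new_flow)[0] == -1:
    print("Flow flagged as anomalous -- escalate for containment")
```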

The hope is that by teaching machines to be more proactive, organizations can reduce the human error that risk management firm Willis Towers Watson says lay behind 66% of the incidents recorded among the 160 US and British companies it polled last year.