AI Security Gone Wrong: When ML Is Not the Panacea for ‘Noise’


The artificial intelligence market for the Security Operations Center (SOC) is heating up. From $8.8 billion last year, analysts predict it will reach $38.2 billion by 2026. That’s a massive jump, but does the hype match the reality? Do machine learning-based SOC solutions live up to their promise, or should enterprises avoid these technologies until they mature and stick with more tried-and-tested tools? 

The answer lies somewhere in between. 

The Drivers for Using Machine Learning in Security Operations Centers

Security Operations Centers deal with huge streams of incoming data every day, and the volume continues to grow with evolving cyber threats. AI and ML scan data streams to tease out anomalies, flag unusual behavior, and automatically filter out malicious traffic if needed. 
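
To make that concrete, here is a minimal sketch of the kind of unsupervised anomaly detection such tooling relies on, written in Python with scikit-learn’s IsolationForest. The flow features, synthetic data, and contamination rate are illustrative assumptions, not taken from any particular SOC product.

```python
# Minimal sketch: unsupervised anomaly detection over network-flow features.
# Feature choice, synthetic data, and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for per-flow features a SOC pipeline might extract:
# [bytes_sent, bytes_received, duration_seconds, distinct_ports]
normal_flows = rng.normal(loc=[5_000, 20_000, 30, 3],
                          scale=[1_000, 5_000, 10, 1], size=(1_000, 4))
odd_flows = rng.normal(loc=[500_000, 1_000, 600, 40],
                       scale=[50_000, 500, 60, 5], size=(5, 4))
flows = np.vstack([normal_flows, odd_flows])

# 'contamination' encodes how much of the traffic we expect to be anomalous.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(flows)            # -1 = anomalous, 1 = normal
flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(flows)} flows for analyst review")
```

In a real pipeline, the flagged flows would become alerts or candidates for automated blocking rather than a print statement.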

Machine Learning is particularly effective at protecting against “lazy” attack strategies, where the algorithm is already familiar with the threat pattern. “Hackers usually refer to older attacks and slightly alter them or build on them to create new ones. Artificial Intelligence, along with Machine Learning, leverages information related to past attacks and quickly spots potential risks that could emerge in the same style,” said Vishal Salvi, CISO & Head of Cyber Security Practice, Infosys. 

He also noted that ML is integral to analyzing changing patterns of user behavior. For example, in a work-from-home world, enterprises can configure ML algorithms to adapt and scan for behavioral tics in this brand-new context. “Machine Learning is an automated function. The attack surface is significantly reduced while saving security analysts from conducting large manual checks. AI/ML provides the capability to analyze and spot anomalies in user and entity behavior, or in network traffic,” he added. 
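
As a rough illustration of that user-behavior angle, the sketch below baselines each user’s typical login hour and flags strong deviations. The data, the z-score threshold, and the choice of login hour as the behavioral signal are all assumptions made purely for the example.

```python
# Minimal UEBA-style sketch: baseline each user's typical login hour and flag large deviations.
# The history data, the z-score threshold, and the "login hour" feature are illustrative assumptions.
from statistics import mean, stdev

login_history = {           # user -> past login hours (24h clock)
    "alice": [9, 9, 10, 8, 9, 10, 9],
    "bob":   [14, 15, 13, 14, 16, 15, 14],
}

def is_anomalous(user: str, login_hour: int, threshold: float = 3.0) -> bool:
    """Flag a login whose hour deviates strongly from the user's own baseline."""
    history = login_history[user]
    mu, sigma = mean(history), stdev(history)
    z = abs(login_hour - mu) / (sigma or 1.0)   # guard against a zero standard deviation
    return z > threshold

print(is_anomalous("alice", 3))   # 3 a.m. login -> True, well outside her baseline
print(is_anomalous("bob", 15))    # typical afternoon login -> False
```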

Apart from this, enterprises are also motivated by: 

  • Licensing costs: The SOC is typically part of a broader security information and event management (SIEM) platform. Most SIEM providers price the platform based on the volume of data ingested. Overlaying ML on top of the SIEM lets you filter out low-value data streams before they are ingested, keeping SIEM costs in check (see the sketch after this list). 
  • Easier scalability: Once you integrate a trained ML layer into the SOC, it can handle growing event volumes with little added manual effort. This makes your detection library repeatable, reduces response time for any number of events, and scales productivity by shrinking the number of manually actionable alerts. 
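
A minimal sketch of that pre-filtering idea, assuming a hypothetical `score_event` model and a made-up event format: events scoring below the threshold simply never reach the SIEM, which is where the ingestion savings come from.

```python
# Minimal sketch of an ML pre-filter in front of a SIEM.
# 'score_event' stands in for any trained model; the threshold and event fields are assumptions.
from typing import Iterable, Iterator

def score_event(event: dict) -> float:
    """Hypothetical risk score in [0, 1]; in practice this would be a trained model."""
    return 0.9 if event.get("failed_logins", 0) > 5 else 0.1

def filter_for_siem(events: Iterable[dict], threshold: float = 0.5) -> Iterator[dict]:
    """Forward only events whose risk score clears the threshold; the rest are never ingested."""
    for event in events:
        if score_event(event) >= threshold:
            yield event

raw_events = [
    {"user": "alice", "failed_logins": 1},
    {"user": "mallory", "failed_logins": 12},
]
forwarded = list(filter_for_siem(raw_events))
print(f"Forwarding {len(forwarded)} of {len(raw_events)} events to the SIEM")
```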

But this doesn’t mean you should embrace an ML-led SOC with blinders on. A more cautious approach can make the difference between high-ROI security intelligence and IT clutter that leads to investment leakage. 

Learn More: How Automation Can Bolster a Business’s Post-Pandemic Recovery

What You Should Know Before Adopting Machine Learning Solutions

One of the biggest worries around adopting ML in the SOC is false positives. In an attempt to dramatically reduce actionable alerts, the algorithm can end up flagging harmless traffic as malicious, generating false positives. 
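
The trade-off is easy to see with a toy example: a low alert threshold catches every threat but floods analysts with benign traffic, while a higher one cleans up precision at the cost of missed detections. The scores and labels below are fabricated purely to illustrate the effect.

```python
# Toy illustration of how the alert threshold trades false positives against missed threats.
# Scores and labels are fabricated; 1 means the event was genuinely malicious.
y_true = [0, 0, 0, 0, 0, 1, 0, 1, 1, 1]
scores = [0.10, 0.20, 0.30, 0.35, 0.45, 0.50, 0.55, 0.70, 0.85, 0.95]

for threshold in (0.4, 0.6, 0.8):
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p and t for p, t in zip(preds, y_true))          # true positives
    fp = sum(p and not t for p, t in zip(preds, y_true))      # benign traffic flagged
    fn = sum((not p) and t for p, t in zip(preds, y_true))    # threats missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    print(f"threshold={threshold}: precision={precision:.2f} "
          f"recall={recall:.2f} false_positives={fp}")
```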

In many cases, a company’s culture compounds the problem. If most users in a company complain about spam, it is easy for the security team to become overzealous and go down the “better safe than sorry” route, which is rife with false positives. Dave Baggett, founder and CEO of INKY Technology Corporation, highlighted this issue in a recent workshop on AI & ML in cybersecurity. 

To address it, keep these factors in mind:  

1. Understand the benefits first 

Given the hype around AI and ML, it is easy to jump on the bandwagon without a detailed outcome framework in place. Some vendors (particularly those with less mature solutions) use ML as a marketing buzzword when the tool actually relies on basic automation. That’s why it is so important to identify outcomes and benefits at the outset and make them part of your SLAs. 

Is the tool reducing my SIEM data ingestion needs? Are my security operators able to respond to security issues faster? How many FTE hours have I saved by implementing ML? 

These are some of the measurable indicators to look at. “The main question security teams should ask themselves is not ‘do we have ML’, which is so generic as to be almost meaningless, but rather: Is the ML technology in my network giving me the benefits I want? When evaluating solutions that promise AI and ML, we need to again look for the BENEFIT,” said Avi Chesla, Founder & CEO of empow. 
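
One way to make those questions measurable is to baseline the SOC before the ML rollout and compare afterwards. The figures below are entirely hypothetical; the point is the shape of the comparison, not the numbers.

```python
# Minimal sketch of outcome metrics worth baking into SLAs; all figures are hypothetical.
before = {"daily_siem_gb": 800, "mean_time_to_respond_min": 95, "manual_alerts_per_day": 4_200}
after  = {"daily_siem_gb": 520, "mean_time_to_respond_min": 60, "manual_alerts_per_day": 1_100}

def pct_reduction(key: str) -> float:
    """Percentage improvement for a single metric between the two snapshots."""
    return 100 * (before[key] - after[key]) / before[key]

for key in before:
    print(f"{key}: {pct_reduction(key):.0f}% reduction after ML rollout")
```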

2. Replace tools, don’t add to them 

A SOC will have multiple bought and homegrown technologies in place, and ML only adds to the clutter. 

Large companies with robust datasets are better positioned to build ML security algorithms than smaller enterprises and startups, argued Fred Chang from Southern Methodist University at the workshop. To bridge the gap, smaller players turn to prebuilt ML tools and solutions, adding to their technology clutter without significantly reducing manual effort.

“Cybersecurity operations and analytics is a chaotic environment plagued by too many tools and a lack of the right amount of adequately skilled staff who face complex, often manual processes while tackling high-priority tactical activities that take too much time,” noted Jon Oltsik and Jack Poller, analysts at ESG. Instead, enterprises should focus on cutting down the clutter by: 

  • Consolidating security tools, frameworks, and processes into a Security Operations and Analytics Platform Architecture (SOAPA) 
  • Replacing manual responsibilities and grunt work with ML while avoiding duplicated effort 

3. Deploy augmented intelligence 

There are several names for this approach: human-in-the-loop, human-machine teaming, or augmented intelligence. All of them combine the best of both worlds to keep false positives down. For example, the security platform CrowdStrike ensures that human analysts oversee threat detection in the field. These analysts can use the tool’s features to find additional specimens, over and above ML-labeled data, pointed out Sven Krasser, chief scientist at CrowdStrike. 
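
A generic sketch of that routing pattern (not CrowdStrike’s or any other vendor’s actual pipeline): confident verdicts are automated, and the uncertain middle is queued for a human analyst. The probability cut-offs are arbitrary assumptions.

```python
# Generic human-in-the-loop sketch (not any vendor's actual pipeline):
# auto-handle confident verdicts, queue uncertain ones for an analyst.
from dataclasses import dataclass, field

@dataclass
class Triage:
    auto_blocked: list = field(default_factory=list)
    auto_allowed: list = field(default_factory=list)
    analyst_queue: list = field(default_factory=list)

def route(alert_id: str, malicious_probability: float, triage: Triage,
          block_at: float = 0.9, allow_below: float = 0.1) -> None:
    """Only the confident extremes are automated; the gray zone goes to a human."""
    if malicious_probability >= block_at:
        triage.auto_blocked.append(alert_id)
    elif malicious_probability < allow_below:
        triage.auto_allowed.append(alert_id)
    else:
        triage.analyst_queue.append(alert_id)

triage = Triage()
for alert_id, prob in [("a-101", 0.97), ("a-102", 0.04), ("a-103", 0.55)]:
    route(alert_id, prob, triage)
print(f"{len(triage.analyst_queue)} alert(s) escalated to human analysts")
```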

Learn More: Top Vendors Pushing the Boundaries of SIEM 

The Bottom Line: ML Is Only a Means to an End 

In all the hype around AI and ML, we often overlook their limitations and even their disadvantages. In many ways, ML in the SOC is yet to mature. “The algorithms that are embedded in some security products could, at best, be called narrow (or weak) AI. They perform highly specialized tasks in a single field and have been trained on large volumes of data, specific to a single domain. This is a far cry from general AI,” noted Dr. Leila Powell. 

For now, it’s a good idea to begin at the beginning and apply ML to log analysis, phishing prevention, and other areas of repeatable, iterative efforts. A clear outcome roadmap, a pragmatic understanding of vendor capabilities, and the knowledge of what ML is (and isn’t) will keep you on the right track. 

What are your thoughts on today’s booming ML-based SOC tools market? Comment below to let us know on LinkedIn, Twitter, or Facebook. We would love to hear from you!