Why AI Phishing is Code Red for Businesses in 2023


ChatGPT is all the rage, even causing upset among teachers and academics, but with this facile way of writing term papers comes yet another weapon in the hacker toolset. Stu Sjouwerman, CEO of KnowBe4, explains how cybercriminals are adopting AI to create phishing emails and ways organizations can protect themselves from AI-generated scams.

As innovations in AI accelerate, security researchers continue to sound alarms about how cybercriminals can exploit AI to advance their nefarious activities. One recent example is ChatGPT, an AI chatbot based on large language models (LLMs) that gained a million users in a week thanks to its ability to answer complex questions, write essays and social media posts, and even generate or debug code. Now threat actors are using this publicly available AI to create highly sophisticated and targeted spear-phishing attacks.

How Can AI Be Used For Phishing?

One of the easiest ways to spot a phishing scam is to look for grammatical and spelling mistakes, because phishers aren't always the best copywriters and may be non-native English speakers. But with access to an AI tool like ChatGPT, phishers can churn out emails with flawless grammar, quickly and at scale.

An interesting video by Marcus Hutchins demonstrates, in the simplest terms, how ChatGPT can be abused to craft sophisticated phishing emails. Studies show that ChatGPT can also be used to create full infection flows, reverse engineer code, and generate malware and ransomware on demand.

What’s more, researchers believe that a chatbot with advanced natural language processing (NLP) capabilities can do much more than draft phishing emails. Analysts believe that future bots will communicate with victims in natural language, much like a sentient being, convincing them to carry out specific actions or share sensitive information.

Last year, threat actors used AI bots like SMSRanger and BloodOTPbot to launch a credential harvesting attack where the bot automatically follows up with victims to nab their multi-factor authentication (MFA) codes. 

AI chatbots aren’t the only AI tool that phishers will use. AI can produce hyper-realistic digital personas of people (synthetic audio, video or images, a.k.a. deepfakes), which can also be used for phishing, cyberattacks and other fraudulent activities. For example, phishers cloned the voice of a bank director and convinced bank employees to initiate transfers worth $35 million.

In another instance, scammers used an AI hologram on a Zoom call to impersonate a key executive and con a crypto exchange into transferring all of its liquid funds. Gartner predicts that in 2023, 20% of all successful account takeover attacks will use deepfakes as part of their modus operandi.


How Can Businesses Protect Themselves from AI Phishing? 

Everything necessary for AI phishing to go mainstream is already in place. The technology exists in the public domain (many AI tools are open source). Non-technical users can interact with and explore tools like ChatGPT in natural language. Plenty of high-quality videos, images and audio of well-known people can be used to train AI generators and create fake personas. Experts warn that tools like ChatGPT will make cybercrime even easier.

So how can businesses protect themselves from AI phishing? The answer does not lie in tools or technology alone but in culture and the secure behavior of users. Here are some best practices that can help: 

    • Run frequent security awareness training programs so that employees understand security do’s and don’ts, best practices and expectations. 
    • Send phishing simulations using defanged real attacks so that employees get first-hand experience of how sophisticated real-life phishing scams look and work.
    • Enable users to report suspicious activity to security teams with a Phish Alert Button. Reward or promote such behavior instead of reprimanding people. 
    • Teach everyone to develop a healthy dose of skepticism and not take everything at face value. To spot deepfakes, watch for visual cues such as distortions or inconsistencies in images and video, unusual head and torso movements, and syncing issues between face, lips and audio.
    • Train employees to validate the authenticity of requests using a different communications channel, especially when a request is unusual or comes with sudden pressure or urgency to do something involving large transfers of money.
    • Instruct employees to stick to company policies and best practices (use of strong passwords, responsible use of social media, secure browsing, etc.).
    • Use technologies like phishing-resistant MFA and zero-trust to lower the risk of account takeover and identity fraud.
    • Get senior management to actively advocate cybersecurity. Remember, culture eats strategy for breakfast and is always top-down.
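The "phishing-resistant MFA" point above rests on origin binding: with standards like WebAuthn/FIDO2, the browser itself records which website the user is actually signing in to, so a code or assertion captured through a look-alike phishing domain fails verification at the real site. Below is a minimal, illustrative sketch of that origin check in Python. The field names follow the WebAuthn clientDataJSON format, but this is a simplified teaching example, not a complete relying-party implementation (real verification also checks the cryptographic signature, authenticator data and base64url-encoded challenge):

```python
import json

def verify_client_data(client_data_json: bytes,
                       expected_origin: str,
                       expected_challenge: str) -> bool:
    """Check the origin and challenge recorded by the browser.

    In WebAuthn, the browser (not the page) writes the actual origin
    into clientDataJSON, so a phishing proxy cannot forge it.
    """
    data = json.loads(client_data_json)
    return (
        data.get("type") == "webauthn.get"
        and data.get("origin") == expected_origin
        and data.get("challenge") == expected_challenge
    )

# A login relayed through a look-alike phishing domain fails, because
# the browser records the domain the victim actually visited:
legit = json.dumps({"type": "webauthn.get",
                    "origin": "https://bank.example",
                    "challenge": "abc123"}).encode()
phished = json.dumps({"type": "webauthn.get",
                      "origin": "https://bank-example.phish.tld",
                      "challenge": "abc123"}).encode()

print(verify_client_data(legit, "https://bank.example", "abc123"))    # True
print(verify_client_data(phished, "https://bank.example", "abc123"))  # False
```

This is why phishing-resistant MFA blocks the SMSRanger-style OTP relay attacks described earlier: there is no user-typed code for a bot to harvest, and a stolen assertion is useless outside its bound origin.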

It’s not hard to imagine that AI-generated phishing will become common and will be far more damaging than today’s social engineering attacks. Organizations must take this threat seriously and invest in building a strong security culture because whether one accepts it or not, secure behavior is the last line of defense against targeted and sophisticated phishing attacks.

How can enterprises handle AI-driven phishing attacks? Share your thoughts with us on Facebook, Twitter, and LinkedIn.
