How Anxiety Beat The AI $10 Billion Startup Deepfake Hackers



    The use of AI-generated deepfakes, whether video, audio or both, is on the rise in hacking attempts. The technology needed to power such highly targeted phishing campaigns has advanced so far, and the resources and cost required to deploy them have fallen so much, that cyber criminals can now justify using what was until fairly recently the territory of state-sponsored threat groups. However, no matter how much effort goes into producing a deepfake phishing attack, and no matter how advanced the AI used to generate the bait, sometimes the most unexpected of turns can bring the facade crashing down. Such is the case for the CEO of $10 billion startup Wiz.


    When AI Attacks A Cybersecurity Startup You Might Expect Things To Go Wrong—They Did, But Not Why You Think

    A cybersecurity consultant recently explained, in a story published here at Forbes that quickly went viral, how he was targeted by a sophisticated and convincing AI-generated phishing attack. In that case, Sam Mitrovic, a Microsoft solutions consultant, was very nearly fooled until he spotted a couple of flaws in an otherwise polished and believable exploit involving a deepfake Google support call.

    In this latest example, which came to light during a discussion at the TechCrunch Disrupt event in San Francisco, Wiz co-founder and CEO, Assaf Rappaport, explained how the attackers made one mistake that cost them success after targeting his employees with a deepfake version of himself.

    That one mistake: public speaking anxiety.

    How Anxiety Spoiled The AI Deepfake Cyber Attack

    The attempted AI deepfake phishing attack took place two weeks ago, according to Rappaport, who said that dozens of Wiz employees had received a voice message purporting to be from him. The ultimate goal of the attack, like most deepfake campaigns of this type, was to harvest the credentials of at least one employee, giving the attackers a way to breach the targeted network.


    In Scooby-Doo style, they might have got away with it if it hadn’t been for those pesky employees. The mistake the otherwise state-of-the-art attackers made was one they couldn’t have known about: Rappaport suffers from public speaking anxiety. Which leads us to the three reasons the attack failed:

    1. The first mistake was creating the AI deepfake from a recording of a previous conference where Rappaport had been a speaker.
    2. The second was not knowing that Rappaport’s voice changes when he speaks in public because of that anxiety.
    3. The third was targeting a cybersecurity company whose employees were always going to be alert to something like a voicemail from the CEO asking them to hand over credentials, especially when it didn’t sound like the CEO they all knew.

    The moral of this story is that you should always sweat the small stuff when an out-of-the-blue request arrives to click a link, execute a file or do something out of the ordinary, even when it appears to come from your boss. The attackers, unfortunately, were not caught: while Wiz was able to trace where the voice message had originated, it couldn’t determine who carried out the attack itself. “The risk of getting caught is very low,” Rappaport said, which is why such AI phishing attacks are so attractive to cyber criminals.
