02 July 2024

Possible AI-Powered Scams To Watch Out For

AI-Powered Scams
The last few years have seen a huge uptick not just in the quality of generated media, from text to audio to images and video, but also in how cheaply and easily that media can be created. The same type of tool that helps a concept artist cook up some fantasy monsters or spaceships, or lets a non-native speaker improve their business English, can be put to malicious use as well.

Don't expect the Terminator to knock on your door and sell you on a Ponzi scheme — these are the same old scams we've been facing for years, but with a generative AI twist that makes them easier, cheaper, or more convincing.

This is by no means a complete list, just a few of the most obvious tricks that AI can supercharge. We'll be sure to add new ones as they appear in the wild, along with any additional steps you can take to protect yourself.

  1. Voice cloning of family and friends

     Synthetic voices have been around for decades, but it is only in the last year or two that advances in the tech have allowed a new voice to be generated from as little as a few seconds of audio. That means anyone whose voice has ever been broadcast publicly — for instance, in a news report, YouTube video or on social media — is vulnerable to having their voice cloned.

    Scammers can and have used this tech to produce convincing fake versions of loved ones or friends. These can be made to say anything, of course, but in service of a scam, they are most likely to make a voice clip asking for help.

    For instance, a parent might get a voicemail from an unknown number that sounds like their son, saying how their stuff got stolen while traveling, a person let them use their phone, and could Mom or Dad send some money to this address, Venmo recipient, business, etc. One can easily imagine variants with car trouble ("they won't release my car until someone pays them"), medical issues ("this treatment isn't covered by insurance"), and so on.

    This type of scam has already been attempted using U.S. President Joe Biden's voice! The people behind that one were caught, but future scammers will be more careful.
  2. Personalized phishing and spam via email and messaging

     Almost everyone gets spam now and then, but text-generating AI is making it possible to send mass email customized to each individual. And with data breaches happening regularly, a lot of your personal data is out there.

    It's one thing to get one of those low-effort "Click here to see your invoice!" scam emails with obviously suspicious attachments. But with even a little context, these messages suddenly become quite believable, using recent locations, purchases and habits to make them seem like they come from a real person or flag a real problem. Armed with a few personal facts, a language model can customize one of these generic emails for thousands of recipients in a matter of seconds.

    So what once was "Dear Customer, please find your invoice attached" becomes something like "Hi Doris! I'm with Etsy's promotions team. An item you were looking at recently is now 50% off! And shipping to your address in Bellingham is free if you use this link to claim the discount." A simple example, but still. With a real name, shopping habit (easy to find out), general location (ditto) and so on, suddenly the message is a lot less obvious.

    In the end, these are still just spam. But this kind of customized spam once had to be written by poorly paid workers at overseas content farms. Now it can be done at scale by an LLM with better prose skills than many professional writers!
  3. 'Fake you' identity and verification fraud

     Due to the number of data breaches over the last few years (thanks, Equifax!), it's safe to say that almost all of us have a fair amount of personal data floating around the dark web. If you're following good online security practices, a lot of the danger is mitigated because you've changed your passwords, enabled multi-factor authentication and so on. But generative AI could present a new and serious threat in this area.

    With so much of someone's data available online, and for many people even a clip or two of their voice, it's increasingly easy to create an AI persona that sounds like the target and has access to many of the facts used to verify their identity.

    Think about it like this. If you were having issues logging in, couldn't configure your authentication app right, or lost your phone, what would you do? Call customer service, probably — and they would "verify" your identity using some trivial facts like your date of birth, phone number or Social Security number. Even more advanced methods like "take a selfie" are becoming easier to game.

    The customer service agent — for all we know, also an AI! — may very well oblige this fake you and accord it all the privileges you would have if you actually called in. What they can do from that position varies widely, but none of it is good!

    As with the others on this list, the danger is not so much how realistic this fake you would be, but that it is easy for scammers to run this kind of attack widely and repeatedly. Not long ago, this type of impersonation attack was expensive and time-consuming, and as a consequence was limited to high-value targets like rich people and CEOs. Nowadays, you could build a workflow that creates thousands of impersonation agents with minimal oversight, and these agents could autonomously phone up the customer service numbers at all of a person's known accounts — or even create new ones! Only a handful need to succeed to justify the cost of the attack.
  4. AI-generated deepfakes and blackmail

     Perhaps the scariest form of nascent AI scam is the possibility of blackmail using deepfake images of you or a loved one. You can thank the fast-moving world of open image models for this futuristic and terrifying prospect! People interested in certain aspects of cutting-edge image generation have created workflows not just for rendering naked bodies, but attaching them to any face they can get a picture of. I need not elaborate on how it is already being used.

    But one unintended consequence is an extension of the scam commonly called "revenge porn," but more accurately described as nonconsensual distribution of intimate imagery (though like "deepfake," it may be difficult to replace the original term). When someone's private images are released either through hacking or a vengeful ex, they can be used as blackmail by a third party who threatens to publish them widely unless a sum is paid.

    AI enhances this scam by making it so no actual intimate imagery need exist in the first place! Anybody's face can be added to an AI-generated body, and while the results aren't always convincing, it's probably enough to fool you or others if the image is pixelated, low-resolution or otherwise partially obfuscated. And that's all that's needed to scare someone into paying to keep the images secret — though, like most blackmail scams, the first payment is unlikely to be the last.