If your colleague, the person you’d just had a meeting with, were a fake, would you be able to tell? I don’t mean someone who works for your company but who you’d never met; I mean a person you know. They work in a different location, so you don’t usually meet them in person, but you’ve met online many times and see them face to face every few months. If they were an imposter, you’d be able to tell, right? What if I told you some of your co-workers were also in the meeting, and they were all imposters?
You’d know.
Wouldn’t you?
Don’t be so sure. This story is about an employee in Hong Kong who joined an online meeting with their European-based Chief Financial Officer and other co-workers. Everyone’s cameras were on; the employee spoke to these people, recognised them, and, after the meeting, completed the assigned tasks. Only not one of the people the Hong Kong employee met with that day was real. Or, at least, not who they claimed to be.
And it’s not confined to the workplace. You’re at home. It’s late at night when the phone rings. It’s your child, who’s overseas on an international student exchange. They’re utterly distraught. A friend they didn’t know well did something, and now they’re in trouble. They’ve been arrested, and the police are demanding a bribe. ‘Please, send me some money, Mum. I saw where they’re going to put me, who they’ll put me with. I need some money – quick!!’ I’m sure you see where this is going: that isn’t really your child on the phone, but a deep fake generated by AI. Would you be able to tell? This mother couldn’t, but thankfully, she verified that her daughter was actually safe before complying with the scammers’ demands.
Welcome to the deep fake attack.
Scams are a part of modern life, as much in the business environment as the personal. For years, we’ve been exposed to email scams: Nigerian princes, tax debts, undelivered mail, expired bank credentials. If you have an email address or a mobile phone, you’ve almost certainly received one of these common scams. Cybersecurity teams have been teaching us about this for almost as long: don’t click links or open attachments from unexpected emails, look for the red flags, and if in doubt, report the message.
Then came OpenAI, and the battle lines changed.
Generative AI can do amazing things, but as with a lot of technology, it isn’t always used for good. With a surprisingly small sample of audio, video, or static pictures, modern generative AI can create convincing “deep fakes”: audio, video, and pictures of real people saying or doing things they never actually said or did. Deep fakes have been around longer than ChatGPT and the other modern generative AI systems, but until recently they weren’t very convincing and were challenging to produce. Now, almost anyone can produce one, with no technical skills required. Almost as soon as this new technology became available, it started being abused.
Experts have been trying to get the warning out for a while, but it’s only recently started to hit mainstream news. The dangers are strongly illustrated in this awareness campaign out of Germany (warning: this video can be confronting):
The abuse of generative AI is a massive problem our society must tackle, but now that Pandora’s box is open, there isn’t going to be a quick and simple fix. So where does that leave us in the meantime?
Pretty much where we started, just with an expanded scope. We need to apply the good habits cybersecurity teams have taught us to all our remote interactions, including phone calls, online meetings, and social media connections.
So where do we start? With the basics:
If unsure, validate independently: contact the person (or someone with appropriate oversight) directly, using known contact details (even if that means calling head office reception and asking to be put through), and get confirmation of the request.