AI Deepfake Imposter Scams Are Driving a New Wave of Fraud


Scamanomics
Illustration: Jinhwa Jang for Bloomberg Markets

Computer-generated children’s voices so realistic they fool their own parents. Masks created with photos from social media that can penetrate a system protected by face ID. They sound like the stuff of science fiction, but these techniques are already available to criminals preying on everyday consumers.

The proliferation of scam tech has alarmed regulators, police and people at the highest levels of the financial industry. Artificial intelligence in particular is being used to “turbocharge” fraud, US Federal Trade Commission Chair Lina Khan warned in June, calling for increased vigilance from law enforcement.

Even before AI broke loose and became available to anyone with an internet connection, the world was struggling to contain an explosion in financial fraud. In the US alone, consumers lost almost $8.8 billion last year, up 44% from 2021, despite record investment in detection and prevention. Financial crime experts at major banks, including Wells Fargo & Co. and Deutsche Bank AG, say the fraud boom on the horizon is one of the biggest threats facing their industry. On top of paying the cost of fighting scams, the financial industry risks losing the faith of burned customers. “It’s an arms race,” says James Roberts, who heads up fraud management at Commonwealth Bank of Australia, the country’s biggest bank.

Cloning a person’s voice is increasingly easy. Once a scammer obtains a short audio sample—as little as 30 seconds pulled from a social media clip or a voicemail greeting—they can feed it into AI voice-synthesizing tools readily available online to create the content they need.

Public social media accounts make it easy to figure out who a person’s relatives and friends are, not to mention where they live and work and other vital information. Bank bosses stress that scammers, running their operations like businesses, are prepared to be patient, sometimes planning attacks for months.

What fraud teams are seeing so far is only a taste of what AI will make possible, according to Rob Pope, director of New Zealand’s government cybersecurity agency CERT NZ. He points out that AI lets criminals increase both the volume of attacks and how closely each one is tailored to its target.

Read full report: https://www.bloomberg.com/news/articles/2023-08-21/money-scams-deepfakes-ai-will-drive-10-trillion-in-financial-fraud-and-crime#xj4y7vzkg
