
Reported by Trupti Bavalatt
Generative AI is technology that can create fresh content like text, images, audio, and even video, much as a human would. This technology has become incredibly popular and widespread. A report from Europol predicts that 90% of online content could be AI-generated within a few years. Generative AI, however, is a double-edged sword. On the one hand, it holds immense potential across various sectors: it can enhance customer service through intelligent chatbots, assist creative industries by generating art and music, and even aid medical research by simulating complex biological processes. On the other hand, the same technology that drives these innovations can be weaponized by malicious actors.
Recently, there has been a surge in the misuse of this technology for fraudulent activities. As generative models become increasingly sophisticated, so do the methods employed by bad actors to exploit them for malicious purposes. Understanding the various ways in which generative AI is being abused is crucial for developing robust defenses and ensuring the integrity and trustworthiness of digital platforms. While the novel and convoluted schemes of bad actors are only growing more sophisticated, we will explore the types of fraud facilitated by generative AI that victims are already reporting widely.
Financial scams
Advanced AI-powered chatbots can mimic human conversation with uncanny accuracy. Fraudsters launch phishing attacks using AI chatbots to impersonate bank representatives. Customers receive calls and messages from these bots, which convincingly ask for sensitive information under the pretext of resolving account issues. Many unsuspecting individuals fall victim to these scams, resulting in significant financial losses. Fraudsters can also use AI to generate realistic voice recordings of executives to authorize fraudulent transactions, a scheme known as “CEO fraud.” In a notable case, an employee of a Hong Kong multinational firm was tricked into paying out $25 million to fraudsters who used deepfake technology to pose as the company’s chief financial officer in a video conference call. These scams exploit the trust and authority of high-ranking officials, causing significant financial damage to businesses.
Targeted scams using fake voice
Generative AI can copy voices and likenesses, making it possible to make individuals appear to say or do almost anything. This technology is similar to “deepfake” video but applies to voices. AI-generated voices are being used to make scams more convincing. For instance, someone can scrape your voice from the internet and use it to call your grandmother and ask her for money. Threat actors can easily obtain a few seconds of someone’s voice from social media or other audio sources and use generative AI to produce entire scripts of whatever they want that person to say. The targeted person believes the call is from a friend or relative and is fooled into sending money to the fraudster, thinking a loved one is in need.
Romance scams with fake identities
Romance scams are surging: one report found that a staggering 66 percent of individuals in India have fallen prey to deceitful online dating schemes. An entirely fake persona can be fabricated using generative AI. Fraudsters can create fabricated documentation such as passports or Aadhaar cards. They can even create false imagery of themselves, like an image of a fake person posing in front of a nice house and a fancy car. They can make calls in a fake voice, convince you that the AI is a real person, emotionally manipulate you into developing feelings for them, and later extort money or gifts from you. Figure 4 shows an example of an AI-generated image of a fake person, created using the Midjourney bot.
Read full report: https://hackernoon.com/ai-deception-how-generative-ai-is-being-weaponized-for-fraud