5 AI Scams You Need To Be Aware Of In 2025


AI is revolutionizing our lives in terms of productivity, automation, customer service, and more. AI is becoming so important that organizations increased spending on compute and storage hardware infrastructure for AI deployments by 37% year-over-year in the first half of 2024, reaching $31.8 billion.

However, like most technological advances, the good often comes with the bad. While AI has its benefits, there is also a dark side to this technology, with the development of AI scams, used by cybercriminals as a more advanced way to carry out cyberattacks, phishing, and identity theft.

This article explains the most common generative AI scams, how they work, and how to spot them, giving you the information you need to prevent fraud and keep your personal or business information safe from these advanced threats.

Table of contents

The 5 most common AI scams

  1. Deepfakes
  2. Voice cloning
  3. Social media bots
  4. AI phishing
  5. Fake documentation

Real-world examples of AI scams

  1. Victim loses 17k in online romance scam
  2. Woman loses 20k through AI investment scam
  3. $25 million lost in finance deepfake scam

How to prevent AI scams

  1. AI detection tools
  2. Limit or control your social media presence
  3. Be skeptical and validate
  4. Use access controls, encryption, and secure file sharing

The 5 most common AI scams

These are the most prevalent AI scams currently circulating the web. Learning how they work, along with real-life examples that demonstrate their severe economic impact, can help you protect yourself from data or financial loss.

Deepfakes

Deepfake AI scams use artificial intelligence to create highly realistic and convincing audio, videos, and images of real people, usually celebrities, to defraud users.

Deepfake videos require several minutes of high-quality footage of the target individual. Thanks to platforms like YouTube, criminals can easily access the footage to capture the target from various angles and lighting to train the AI model and get the most realistic and convincing deepfake video possible.

Once created, a deepfake video can impersonate a real company executive, instructing employees to transfer money or to hand over access to sensitive accounts, such as email or cloud storage, so criminals can steal confidential files or corporate information.

On top of this, AI deepfakes can be used in misinformation campaigns that convince users to download malware, purchase phony products, or visit scam websites that harvest emails, passwords, or financial details.

Voice cloning

AI voice cloning scams create convincing recordings of a person's voice to manipulate victims into handing over sensitive information or money. The AI analyzes the target voice's tone, pitch, cadence, and speech patterns.

Internxt is a cloud storage service based on encryption and privacy.

Microsoft’s VALL-E, for example, can replicate a voice from a three-second sample. The clone becomes even more realistic with more data, which can easily be sourced from podcasts, video interviews, or social media.

Once the data is collected, the AI creates the voice clone, and the scammer calls the target to carry out the scam, for example by impersonating a distressed family member who needs money.

Social media bots

AI is also used to create automated accounts that mimic real users and will like, comment, and share posts of a scam product to gain more visibility and make it appear more legitimate.

Once these profiles are established and look realistic, they may send phishing messages that lead users to fraudulent websites designed to steal personal information. In other cases, the bots are used to spread misinformation or fake news to influence public opinion.

AI phishing

Another common AI scam is using AI to create phishing campaigns. Cybercriminals gather information about their target from sources readily available online. AI phishing emails can be incredibly convincing because they lack the telltale signs of traditional phishing, such as typos or grammatical mistakes.

Instead, AI can create personalized, error-free messages and produce hundreds of emails within minutes.

These emails can appear to come from a trusted source and request money, account details, or clicks on malicious links. The danger of AI phishing emails is that, due to the advanced nature of AI language models, they can bypass some email filters and trick even the most cautious users.

Internxt VPN lets you browse the web securely and privately.

Fake documentation

Aside from voice, video, and text, AI scams can rely on fake images and documentation. Beyond generating photos for social media profiles, AI can produce copies of legitimate documents, such as IDs or other accreditations, to make a scammer appear more legitimate and gain access to sensitive information, accounts, or files.

Image generation can also create AI images for fake endorsements of products or other posts to try and gain more traction and spread the scam further.

Real-world examples of AI scams

Here are some recent examples of AI scams that used deepfake videos and other social engineering techniques to steal thousands and millions of dollars from unsuspecting victims.

Victim loses 17k in online romance scam

After meeting someone in an online chat group, 77-year-old Nikki MacLeod thought she had found love online with a woman named Alla Morgan, who claimed to be working on an oil rig in the North Sea.

Although initially skeptical, Nikki relaxed when she received videos from “Alla” working on the oil rig; it later turned out they were deepfake videos created by AI. During the romance scam, Nikki lost £17,000 through PayPal payments, bank transfers, and gift cards.

The scammer also sent fake AI-generated documents, posing as the HR department of Alla's employer, asking for £2,500 to pay for a helicopter to fly her to Scotland for a visit.

Internxt Dark Web Monitor checks if your email has leaked online

Eventually, Nikki’s bank identified the transfers as fraudulent, and she received £7,000 back; most of the money was unrecoverable because she had sent it as “friends and family” personal payments on PayPal.

Woman loses 20k through AI investment scam

Ann Jensen from Salisbury lost £20,000 in a scam after being convinced by a deepfake video of UK Prime Minister Sir Keir Starmer to invest an initial £200 in a cryptocurrency trading opportunity.

Once the initial payment was made, she was contacted and told that her investment had grown to over £2,000, but that she had to take out a £20,000 loan to prove her financial liquidity, after which she would supposedly earn back even more through trading. Once she did, the scammers never contacted her again.

Unfortunately, the bank told Ms Jensen that she was liable for the loan and would not receive any money back. Instead, she agreed to repay the bank £23,000 spread over 27 years.

$25 million lost in finance deepfake scam

A finance worker from Hong Kong lost $25 million to another deepfake AI scam during a video call they believed to be with the company’s CFO and other colleagues.

Although the worker was initially skeptical, having already rejected a phishing email, they were unfortunately fooled by the deepfake footage during the video call and authorized a transfer of $25 million to the scammers.

How to identify AI scams

Despite how advanced AI has become, there are still ways to identify whether content is AI-generated. Although this may change as AI improves, these are the current signs that an AI scam may be targeting you.

Visual clues

Dr Lynsay Shepherd, a cybersecurity and human-computer interaction expert from Abertay University, says deepfake videos used in AI scams may appear legitimate if you don’t know what to look for.

She recommends watching the eye movement and blinking in the video, as these movements often look unnatural or fall out of sync. When the person is talking, watch around the jawline, where the filter or app tends to break down and look unnatural.

Audio clues

Alongside visual clues, audio irregularities can give away an AI scam. One of the biggest clues is audio that is out of sync with the speaker's lip movements.

Other signs, such as a highly processed, almost robotic voice lacking natural speech cadence, may indicate the audio was created by an AI model. Finally, if response times lag or replies seem too generic, this can also be a sign of a deepfake voice.


Unexpected behaviour

If you’re unexpectedly contacted by a high-level executive, celebrity, or anybody else, be very cautious if you decide to proceed. Never click on any links sent from an unexpected email or message.

Like traditional phishing scams, AI scams use a sense of urgency, emotional manipulation, and pressure to act quickly by sending money, giving account details, or providing access to high-level files or documentation.

It is likely a scam if someone contacts you out of the blue with a promise of free, easy, or quick money, or asks for your help to get out of financial trouble.

How to prevent AI scams

Fortunately, there are ways to identify a scam if you suspect it involves deepfake content. There are also tools available to help you detect AI-generated material and keep your personal data protected, so AI models can't be trained on your content or data.

AI detection tools

AI detection tools look for the unnatural movements or speech patterns commonly associated with deepfake videos or audio, such as subtle inconsistencies between video frames, unnatural blinking, or odd skin texture, and assess whether the content is AI-generated.

Although these tools are not 100% foolproof, they are a good start. For an extra layer of security against unauthorized transactions or account access, use two-factor or biometric authentication on all your accounts.

Limit or control your social media presence

Many AI language models, such as Grok AI, scan text from social media platforms, which scammers can exploit to impersonate users and create personalized phishing emails. These models can also draw on people's videos, images, and other social media activity.

To prevent this, review the privacy settings on your social media accounts to limit who can see your content, and always verify an account if you are ever messaged with an unsolicited offer.

Be skeptical and validate

If you are contacted by colleagues or high-level executives in the workplace, always verify the request with the actual person through a separate, trusted channel; this will also help prevent other employees from falling for the same scam.

For dating scams, you can conduct a reverse image search on profile photos, and if you suspect the person is using deepfake content, use AI detection tools to check for the telltale patterns described above.

Use access controls, encryption, and secure file sharing

To prevent information such as emails or other personal details from leaking online, where it could be exploited by AI-powered scams, always store, share, and encrypt your data using end-to-end encrypted cloud storage, like Internxt Drive.

Internxt Drive for Business provides access controls and activity information that let account managers monitor who accesses their accounts and from where, preventing users from reaching sensitive information without proper authorization.


Build an online life of privacy with Internxt

Internxt Drive for both personal and business use relies on zero-knowledge encryption, so only you hold the encryption keys to your files. Whenever you store or share files with Internxt, only you can access them.

By using services like Internxt, you can reduce the risk of data breaches that could feed AI training and keep your personal information private, as Internxt never collects, monitors, or shares your data, unlike other services such as Google.

Get started with Internxt annual or lifetime plans and get the best data protection with its cloud storage and VPN products.