
Bottom Line Up Front

  • Scammers are using AI to create fake videos, clone voices and write convincing messages. They look and sound like they’re from people and companies you trust.
  • You can protect yourself by slowing down, verifying who you’re talking to and never clicking links in unexpected messages.
  • If something feels urgent or too good to be true, that’s your sign to pause and double-check before taking action. 

Time to Read

8 minutes

February 17, 2026

Artificial intelligence (AI) has quickly become part of many people’s everyday lives. Unfortunately, that includes scammers. In 2026, they’re using AI to create fake videos that look real, clone voices that sound like people you know and write text messages that seem legitimate. These AI-enabled scams are sophisticated and difficult to identify—and they’re becoming more common.

The good news? You don’t need to be a tech expert to protect yourself. Whether it’s a deepfake video, a voice clone or a suspicious message, there are clear signs that something’s not right. Once you know what to watch for, you’ll be able to spot and avoid these scams more easily.

What are the most common AI scams?

The AI tools that help you write emails or edit photos are the same ones scammers are using to impersonate people and create fake content. They’re taking technology that’s meant to be helpful and twisting it into something harmful. Here are the main ways they’re doing it:

  • Deepfake videos and images. Bad actors create fake videos or photos of celebrities, public figures or even people you know. These might show someone endorsing a fake investment, asking for money or saying things they never actually said.
  • Voice cloning scams. Using just a few seconds of audio from social media or voicemail, scammers can clone someone’s voice. They’ll call pretending to be your family member or friend in an emergency, asking you to send money right away.
  • AI-powered phishing. Scammers use AI to write emails and text messages that sound natural and convincing. These messages often look like they’re from your bank, a government agency or a company you trust. They’re designed to get you to click a link or share sensitive information about your financial accounts.
  • AI chatbot scams. Some scammers set up fake customer service chatbots that seem helpful but are actually designed to steal your login credentials or personal details.

How much are AI scams costing victims?

AI-driven scams are making criminals a lot of money. The Global Anti-Scam Alliance (GASA) estimates people lost more than $1 trillion to scams in 2024. In the United States, the average loss per person was $3,520—and the true figure could be even higher, since experts estimate that 70% of victims never report AI scams.

9 tips to recognize and avoid AI scams

AI scams often work because they catch you off guard and pressure you to act fast. But scammers have a weakness: they need you to skip the basics. When you slow down, ask questions and double-check what you’re seeing or hearing, their schemes often fall apart. Here’s what you can do to stay safe:

  1. Pause and verify. When you get an urgent message—especially one asking for money or personal information—stop and take a breath. Scammers want you to act fast without thinking. Instead, contact the person or company directly using a phone number or website you already know and trust. Don’t use any contact info from the message.
  2. Use a code word. Set up a secret word or phrase that only you and your family members know. If someone calls claiming to be your grandson in trouble, ask for the code word. Real family members will know it. Scammers won’t. This simple step can help stop voice cloning scams in their tracks.
  3. Avoid requests for immediate or risky payments. Pay attention to red flags, such as anyone demanding you send money right away, especially through wire transfer, cryptocurrency, gift cards or payment apps. Legitimate organizations give you time to think and offer multiple payment options.
  4. Inspect the sender. Look closely at email addresses and phone numbers. Scammers often use addresses that look almost right but have small changes. If something looks off, don’t engage.
  5. Check the media. AI-generated videos and audio can have telltale signs. Look for unnatural facial movements, weird lighting, voices that sound slightly robotic or backgrounds that don’t quite match up. If you’re on a video call and something feels off—like the person’s mouth doesn’t sync with their words—trust your gut.
  6. Don’t click. Never click links in unexpected emails or texts, even if they look legitimate. Instead, open your browser and type in the company’s website yourself. Or, use a phone number you’ve saved or found on an official site. This can help protect you from fake websites designed to steal your login information.
  7. Lock down your accounts. Turn on two-factor authentication for your important accounts—especially banking, email and social media. This adds an extra security step that makes it much harder for scammers to break in, even if they have your password. Use strong, unique passwords for each account.
  8. Limit your footprint. Be careful what you share on social media. Voice cloning scams work because criminals find audio clips of you or your loved ones online. Check your privacy settings and think twice before posting videos with clear audio. The less scammers know about you, the harder it is for them to target you.
  9. Document and report. If you spot a scam, report it to the Federal Trade Commission (FTC) at ReportFraud.ftc.gov and contact your financial institution immediately. Save any emails, texts or voicemails as evidence. Reporting helps protect others and can assist law enforcement in tracking down scammers. Even if you didn’t lose money, reporting matters!

How to spot deepfake scams

Deepfake videos and images look more real than ever, but they’re not perfect. Artificial intelligence struggles with the small details that make humans look natural—how we blink, how light hits our face or how our voice matches our lip movements. Once you know what these flaws look like, you can spot them. Here’s what to watch for:

  • Unnatural eye movements or no blinking at all
  • Blurry edges around the face, especially near hair or jawline
  • Audio that doesn’t sync with lip movements
  • Lighting or shadows that don’t match the rest of the video
  • Facial expressions that look stiff or robotic

The biggest giveaway is often the context. If you see a video of a celebrity endorsing a cryptocurrency investment or a public figure making a shocking announcement, ask yourself: Would they really do this? Real endorsements happen through official channels, not random social media posts.

How AI voice cloning scams work

Voice cloning scams are unsettling because they can sound so real. Scammers only need a few seconds of audio—pulled from a social media video, voicemail or even a public speech—to create a clone that sounds just like someone you know. Then they call you pretending to be that person in an emergency.

The scam usually follows the same pattern. You get a call from what sounds like your grandchild, child or close friend. They’re crying or panicked, saying they’ve been in an accident, arrested or stranded somewhere. They need money wired, sent through a payment app or loaded onto gift cards. The person’s voice sounds so real that your instinct is to help right away—and that’s exactly what scammers are counting on. 

If something feels urgent and emotional, that’s your sign to slow down and verify. Here’s how to protect yourself against phone scams:

  • Ask a question only the real person would know.
  • Use your family code word if you’ve set one up.
  • Contact another family member to verify the story.
  • Never send money before confirming it’s really them.
  • Hang up and call the person back on their real number.
  • Don’t trust caller ID—scammers can fake phone numbers.

Phishing is getting smarter

AI has made phishing emails and text messages harder to spot. Where old scams had obvious spelling errors and awkward wording, today’s AI-generated messages look professional and sound natural. They use your name, reference recent activity and even match the writing style of companies you trust.

These messages are designed to create urgency. The goal is to get you to click a link and enter your login credentials or personal information on a fake website that looks identical to the real one. The key to staying safe is recognizing the tactics scammers use, no matter how good the writing looks. Here’s what phishing messages often include:

  • Urgent language ("act now" or "your account will be closed")
  • Requests for sensitive information like passwords or Social Security numbers
  • Links that don’t match the company’s real website
  • Attachments you weren’t expecting or that look suspicious
  • Offers that seem too good to be true (refunds, prizes, deals)
  • Threats or pressure to respond immediately

Real vs. fake: Can you spot the scam?

  • Real message: Hi Jane, we noticed unusual activity on your account ending in 1234. Please sign in online to review and confirm these transactions. If you have questions, call the number on the back of your card.
  • Fake phishing message: ALERT: Suspicious activity detected on your account! Click to immediately verify your identity or your account will be locked within 24 hours.

Tip: The fake message creates panic, pushes a suspicious link and threatens to lock your account. The real message gives you control, directs you to sign in on your own and provides a phone number you can verify yourself.

How to protect yourself from AI scams

If you think you’ve been targeted by an AI scam—or if you think you’ve already fallen for one—don’t panic. There are steps you can take right now to limit the damage and protect yourself going forward.

What to do if you’ve been targeted

Stop all contact with the scammer immediately. If you shared financial information or sent money, contact your financial institution right away. The faster you act, the better chance you have of recovering your funds. Change your passwords, run a security scan on your device and report the scam. Keep copies of all messages and transaction records as evidence.

Secure your accounts and devices

Turn on two-factor authentication for your important accounts—especially email, banking and social media. Use strong, unique passwords for each account and consider using a password manager to keep track of them. Keep your devices updated with the latest security patches and only download apps from official sources.

Take action and stay informed

Review your financial statements regularly and watch for transactions you don’t recognize. Sign up for fraud alerts from the FTC. Talk to your family about AI scams and set up that code word if you haven’t already. The more people who know these tactics, the harder it is for scammers to succeed.

Navy Federal Credit Union helps you stay ahead of AI scams

AI scams are sophisticated, but you don’t have to handle them on your own. Navy Federal is here to help you spot and stop fraud before it happens. If you ever have questions about a message claiming to be from us—or if something just doesn’t feel right—contact us directly. Sign in through navyfederal.org, use the Navy Federal app* or call the number on the back of your card. 

We offer resources to help you protect yourself from the latest threats. For more tips on staying safe online, explore our MakingCents cybersecurity articles. We also invite you to check out our security watch services to stay alert to new scams. The best defense against AI scams and cybercriminals is staying informed—and taking action before scammers can target you.


Disclosures

This content is intended to provide general information and should not be considered legal, tax or financial advice. It is always a good idea to consult a tax or financial advisor for specific information on how certain laws apply to your situation and about your individual financial situation.