Detecting and Disrupting AI-Driven Pig Butchering Scams

Merkle Science
June 18, 2025

As AI-enhanced scams like pig butchering become more sophisticated, crypto crime investigators must approach detection from multiple angles. There are three key intervention points: identifying the initial AI-generated content used to lure victims; behavioral transaction monitoring while the scam is still unfolding; and tracing the laundering trail after the crime has occurred. 

Each of these phases offers unique opportunities to stop fraud in its tracks. This article will discuss red flag indicators for AI-driven scams, and how behavioral monitoring and blockchain analytics can help investigators stay ahead of these fast-evolving threats.

Red Flag Indicators for AI-Driven Scams

Because AI-driven scams are powered by artificial intelligence, they tend to leave behind familiar signals that reveal AI-generated content, and potential victims and investigators can still rely on these. These warning signs vary across media types—text, images, and real-time audio or video—but often follow consistent patterns.

Text

Most people can intuitively sense when text is AI-generated due to subtle yet recognizable patterns in language. Word frequency studies have identified terms commonly overused by models like ChatGPT, including words such as “crucial”, “navigate”, “array”, “conclusion”, and “difference”. Beyond word choice, AI-generated text tends to be too polished. In contrast, human communication—especially in chat contexts—often includes incomplete sentences, slang, typos, and inconsistent grammar. If someone you’re chatting with uses flawless grammar and unnaturally polished sentence structure, it may be a red flag.
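The word-frequency heuristic described above can be sketched in a few lines of Python. This is a minimal illustration, not a production detector: it uses only the handful of words cited in this article, whereas real classifiers rely on much larger lexicons and additional stylometric features.

```python
import re

# Words that frequency studies have flagged as overused by LLMs.
# This tiny list is just the examples cited above; real detectors
# use far larger lexicons plus stylometric signals.
OVERUSED_WORDS = {"crucial", "navigate", "array", "conclusion", "difference"}

def overused_word_density(text: str) -> float:
    """Return the fraction of tokens that appear in the overused-word list."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in OVERUSED_WORDS)
    return hits / len(tokens)

msg = "It is crucial that we navigate this difference toward a conclusion."
print(round(overused_word_density(msg), 2))  # 4 of 11 tokens are flagged
```

A high density on a short chat message would only be one weak signal; in practice it would be combined with the other red flags discussed here.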

In some cases, scammers have even mistakenly pasted direct outputs from large language models (e.g., “As a language model…”), giving away the ruse. While these errors may seem ridiculous, they’re revealing—many perpetrators come from non-English-speaking regions in Southeast Asia, making it more likely they rely on AI tools for fluent communication.

Images

Scammers increasingly use generative AI tools to create fake profile pictures or lifestyle photos showcasing wealth, travel, and luxury. There are two primary ways to detect whether an image is AI-generated. First, investigators can check the file's metadata using tools like Metadata2Go or by right-clicking the image and viewing its properties. Many AI-generated images will contain metadata tags that reference the specific model or tool used. If metadata is missing or stripped, the second approach is to visually analyze the image. AI-generated images often contain visual artifacts: extra fingers, warped backgrounds, asymmetrical facial features, or repetitive patterns that appear unnatural.
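The metadata check described above can be automated with a simple byte scan. The sketch below is a rough first-pass triage tool under stated assumptions: the marker strings are examples of tags that generative tools have been reported to embed (for instance, some Stable Diffusion frontends write a “parameters” text chunk into PNGs), and a dedicated metadata parser or a service like Metadata2Go would do a far more thorough job.

```python
import tempfile

# Example marker strings reportedly embedded by some generative-AI tools.
# This list is illustrative, not exhaustive or authoritative.
AI_METADATA_MARKERS = [
    b"Stable Diffusion",
    b"Midjourney",
    b"DALL-E",
    b"parameters",  # PNG tEXt key used by some Stable Diffusion frontends
]

def find_ai_markers(path: str) -> list:
    """Scan a file's raw bytes for known AI-tool metadata strings."""
    with open(path, "rb") as f:
        data = f.read()
    return [m.decode() for m in AI_METADATA_MARKERS if m in data]

# Demo: a fake PNG-like file containing a Midjourney tag
demo = tempfile.NamedTemporaryFile(delete=False, suffix=".png")
demo.write(b"\x89PNG\r\n\x1a\n ... Midjourney ...")
demo.close()
print(find_ai_markers(demo.name))
```

A clean scan proves nothing, since metadata is easily stripped, which is exactly why the visual-artifact check remains the necessary second step.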

Real-Time Video and Audio Cloning

Like images, deepfaked video and voice often carry detectable flaws. Signs of a manipulated video include unnatural blinking, blurring or flickering around the edges of the face, and poor lip-syncing—where mouth movement doesn’t align with the audio. 

Fortunately, real-time detection is more feasible with live calls. A concerned person can ask the caller to perform simple tasks that challenge the deepfake system, such as touching their face (which may cause a glitch) or switching to a different platform or app. Most fraudsters cannot run real-time video and audio cloning across multiple platforms seamlessly, exposing the limits of their tools. Refusal to perform these actions is itself a warning sign, suggesting the person may be hiding behind synthetic media.

Importance of Behavioral Transaction Monitoring

Relying solely on red flags—especially those tied to AI-generated content—is not a victim-centric approach. Many scam victims are emotionally compromised and psychologically manipulated to the point that they overlook or deny even the most obvious warning signs. 

Some are convinced they’re in relationships with celebrities or prominent figures, illustrating just how deeply these delusions can run. The emotional grip of these scams is further intensified by AI-enabled A/B testing and psychological profiling, which allow scammers to tailor messages for maximum persuasion. In such cases, AI-related cues—like flawless grammar or inconsistent imagery—often go unnoticed or are rationalized away.

This is why AI red flags are often more useful to investigators than to victims. Unfortunately, by the time an investigator is analyzing AI signals, the scam has typically already occurred. To intervene before damage is done, investigators need to rely on more proactive tools—most notably, behavioral transaction monitoring.

Behavioral transaction monitoring can be used to both protect victims and identify criminals. For example, if an Australian bank observes that a retired customer has suddenly begun sending large, recurring cross-border remittances to Cambodia—behavior that starkly deviates from their historical profile—it can raise an internal alert and trigger an intervention.
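The kind of deviation described in the bank example can be expressed as a simple statistical baseline check. The sketch below is a minimal illustration using a z-score against a customer's transfer history; real monitoring systems score many features (counterparty, geography, frequency) rather than amount alone.

```python
from statistics import mean, stdev

def is_anomalous(history, new_amount, z_threshold=3.0):
    """Flag a transfer whose size deviates sharply from the customer's history.

    history: past transfer amounts for this customer
    new_amount: the transfer being screened
    """
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > z_threshold

# A retiree's typical monthly transfers vs. a sudden large remittance
history = [120.0, 95.0, 140.0, 110.0, 130.0]
print(is_anomalous(history, 25_000.0))  # the outlier is flagged
print(is_anomalous(history, 125.0))     # an in-pattern transfer is not
```

In a production setting the alert would not block the customer outright but would trigger the kind of internal review and intervention described above.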

Some jurisdictions have recognized the need for this level of oversight. In early 2025, Singapore enacted a law that empowers banks to temporarily restrict a customer’s account if they suspect the customer is unknowingly sending money to scammers. These restriction orders give authorities time to intervene and protect individuals before they suffer irreversible losses.

Of course, not all countries will adopt this level of intervention. Some regulators may argue that customers should retain full autonomy—even if it means the freedom to make costly mistakes. In such environments, the most universally accepted and effective application of behavioral transaction monitoring is on the criminal side.

Take, for instance, a custodial wallet platform that observes an account consistently receiving large deposits and rapidly transferring the funds to a coin mixer, often within minutes. This behavior is a classic indicator of money laundering, where the swift movement of funds is designed to obscure their origin and ownership, reducing the risk of detection or seizure.
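The rapid deposit-then-mixer pattern in this example can be captured with a straightforward time-window rule. The sketch below is a simplified illustration: the mixer address label set and record shapes are hypothetical, and in practice the labels would come from a blockchain analytics provider.

```python
from datetime import datetime, timedelta

# Hypothetical label set; in practice these come from an analytics provider.
MIXER_ADDRESSES = {"mixer_addr_1"}

def rapid_passthrough_alerts(deposits, withdrawals, window=timedelta(minutes=10)):
    """Pair each deposit with any mixer-bound withdrawal inside the time window.

    deposits, withdrawals: lists of (timestamp, counterparty, amount) tuples.
    """
    alerts = []
    for d_time, _src, _d_amt in deposits:
        for w_time, w_dest, w_amt in withdrawals:
            if w_dest in MIXER_ADDRESSES and timedelta(0) <= w_time - d_time <= window:
                alerts.append((d_time, w_time, w_amt))
    return alerts

deposits = [(datetime(2025, 1, 1, 12, 0), "victim_wallet", 5.0)]
withdrawals = [(datetime(2025, 1, 1, 12, 4), "mixer_addr_1", 4.9)]
print(rapid_passthrough_alerts(deposits, withdrawals))  # one alert fires
```

The short window is what distinguishes deliberate layering from ordinary account activity, since legitimate users rarely forward deposits within minutes.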

To stay ahead, investigators must leverage behavioral transaction monitoring tools. Solutions like Merkle Science’s Compass can detect anomalous transaction patterns that traditional blacklist-driven systems might miss. By focusing on behavioral indicators rather than static lists, these platforms enable faster identification of bad actors, helping to disrupt illicit activity and prevent more victims from being targeted.

How Blockchain Analytics Detects Patterns in AI-Boosted Scams

The laundering patterns in AI-boosted fraud schemes are typically no different from those seen in traditional financial crimes. A recent example is the U.S. Department of Justice’s prosecution of a $263 million crypto fraud case involving twelve individuals who initially met through online gaming.

Although the DOJ did not explicitly confirm whether AI tools were used, the core of their operation relied on social engineering. Some members specialized in impersonation calls to extract sensitive personal and financial information from victims. 

The stolen funds were funneled to their ringleader, Kunal Mehta, who used classic laundering tactics—including peel chains, coin mixers, and virtual private networks (VPNs)—to conceal the flow of illicit assets. These are the same obfuscation methods used by ransomware groups and state-sponsored threat actors alike.
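Of the tactics listed above, peel chains have a distinctive on-chain shape: a small slice of value is shed at each hop while most of the balance rolls forward to a fresh address. The sketch below is a toy heuristic for that shape, with illustrative thresholds; real analytics platforms apply far richer graph analysis.

```python
def looks_like_peel_chain(hops, peel_fraction=0.2, min_length=4):
    """Heuristic check for a peel-chain shape.

    hops: list of (peeled_amount, remaining_amount) per transaction.
    A peel chain sheds only a small slice at each hop while most
    value continues down the chain.
    """
    if len(hops) < min_length:
        return False
    return all(
        peeled / (peeled + remaining) <= peel_fraction
        for peeled, remaining in hops
    )

# Each hop peels roughly 10% off and forwards the rest
chain = [(1.0, 9.0), (0.9, 8.1), (0.8, 7.3), (0.7, 6.6)]
print(looks_like_peel_chain(chain))  # matches the peel-chain shape
```

Thresholds like the 20% peel fraction and minimum length here are arbitrary placeholders; tuning them against known laundering cases is where an analytics platform adds value.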

In short, investigators don’t need a specific tool to follow the money in AI-driven scams. What they do need is a best-in-class blockchain analytics platform that supports multiple chains, enables intuitive fund-flow visualization even through complex obfuscation, and facilitates real-time collaboration across agencies.

Merkle Science’s Tracker offers exactly that. In-house investigators using Tracker were able to connect the $1.4 billion Bybit theft to previous Lazarus Group-linked incidents, such as the WazirX, BingX, and Poloniex breaches. Similarly, Tracker can support investigators in tracing the laundering trails of AI-enhanced scams, helping identify perpetrators, disrupt criminal networks, and recover stolen assets.

Staying Ahead of AI-Powered Fraud 

AI-driven scams are becoming more organized, targeted, and difficult to detect, especially as scammers adopt advanced tools to manipulate victims and move funds quickly. While red flags in AI-generated content can offer early clues, they are often missed by emotionally compromised individuals. Real-time intervention through behavioral transaction monitoring provides a critical second line of defense, one that can stop scams in motion. 

Finally, tracing illicit funds through blockchain analytics remains essential for attribution and asset recovery. To stay ahead, law enforcement and compliance teams need powerful tools built for this reality. Merkle Science’s Compass offers advanced behavioral monitoring to detect anomalies early, while Tracker enables cross-chain, collaborative investigations to follow the money—even through complex laundering schemes. Reach out to Merkle Science for a free demo.