How AI Is Supercharging Pig Butchering Crypto Scams

Merkle Science
May 28, 2025

Pig butchering scams are being rapidly optimized by large language models (LLMs), transforming them from labor-intensive cons into scalable, high-tech fraud operations. Once wholly dependent on human operators, these scams are now supercharged by deepfakes, voice cloning, and AI chatbots like ChatGPT. With generative AI, scammers can convincingly pose as romantic partners or friends, often in real time.

This article will examine how AI is transforming pig butchering scams, focusing on deepfake-driven grooming, the use of LLMs to facilitate chatting, the automation of multi-language scripts, and what law enforcement and compliance teams need to know to respond effectively.

Deepfake Videos and Voice Cloning to Build Trust

In February 2024, a finance employee at Arup—a British design and engineering firm—participated in a video call with individuals he believed were the company’s Chief Financial Officer and other senior colleagues. Over the course of 15 transactions, he was persuaded to transfer $25 million. There was just one problem: the people on the call were deepfake impersonations. They looked and sounded like his coworkers—but none of them were real.

While this may sound like science fiction, deepfake video and voice cloning are rapidly becoming go-to tools for cybercriminals to fabricate trust. One of the greatest challenges lies in the private, real-time nature of these interactions. Platforms like Facebook now tag AI-generated content in public posts—but in private video calls or voice chats, no such safeguards exist. The quality of today’s synthetic media is often so high that even close colleagues or romantic partners can’t reliably tell the difference.

This problem is magnified when decisions must be made in the moment, without time to cross-check through trusted channels. As in the Arup case, victims are often isolated and unaware they’re being manipulated.

These tactics are already bleeding into pig butchering scams, where operators use deepfake face-swapping tools to hold real-time video calls while posing as attractive, successful romantic partners. Their aim: to build emotional rapport and steer victims toward crypto-based investment schemes, including fraudulent trading platforms or fake mining operations.

Worryingly, these tools are being openly marketed. According to a 2024 report by the United Nations Office on Drugs and Crime (UNODC), mentions of deepfake-related services targeting criminal groups rose by 600% between February and July 2024. In parallel, face-swap injection attacks surged by 704% in the second half of 2023 compared to the first, underscoring the dramatic rise of synthetic media as a tool for fraud.

ChatGPT-Like Romance Bots for Daily Contact

While deepfake video and voice cloning are increasingly used in pig butchering scams to build trust and simulate authenticity, the core mechanism remains text-based communication. Scammers must maintain regular contact with their targets—sending daily pings, engaging in extended conversations, and crafting emotionally resonant messages to nurture the illusion of a genuine relationship.

This process is labor-intensive and methodical. Scammers often create online personas from scratch, complete with curated profile pictures, fabricated careers, and consistent backstories. Although they are typically provided with playbooks or template scripts, effective execution requires improvisation and personalization—mimicking the tone and responsiveness of a romantic partner or trusted friend.

At one known scam center, operators were forced to work 14-hour shifts, seven days a week. Most alarmingly, much of this labor is coerced: of the estimated 200,000 individuals working in scam compounds across Southeast Asia, many are victims of human trafficking, held under threat and with no means of escape.

In a 2023 investigation, cybersecurity firm Sophos revealed that pig butchering operators had begun using LLMs like ChatGPT to automate and scale their communications. In one case, a victim received a pasted message that inadvertently acknowledged it had been created by an LLM. Despite such slip-ups, the use of LLMs allows scammers to shift from one-to-one to one-to-many targeting, dramatically increasing their reach.

Even more concerning, these tools are enabling scammers to hyper-personalize their messaging. With AI’s ability to adapt tone, language, and emotional cues to each individual, victims may become even more susceptible when the conversation eventually pivots to fraudulent investment requests.
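Slip-ups like the one Sophos documented point to one inexpensive screening signal for investigators: LLM self-disclosure phrases pasted verbatim into a chat. The Python sketch below illustrates the idea; the phrase list and chat log are hypothetical examples of ours, not drawn from any specific case or investigation tool.

```python
import re

# Hypothetical self-disclosure artifacts that can leak into scam chats
# when an operator pastes raw LLM output without editing it first.
ARTIFACT_PATTERNS = [
    r"as a language model",
    r"as an ai(?: assistant)?",
    r"i (?:do not|don't) have (?:feelings|emotions|personal experiences)",
]
COMPILED = [re.compile(p, re.IGNORECASE) for p in ARTIFACT_PATTERNS]

def flag_llm_artifacts(messages):
    """Return (index, matched pattern) pairs for messages that contain
    a known LLM self-disclosure phrase."""
    hits = []
    for i, msg in enumerate(messages):
        for pattern in COMPILED:
            if pattern.search(msg):
                hits.append((i, pattern.pattern))
                break
    return hits

# Fabricated chat log for illustration.
chat = [
    "Good morning! Did you sleep well?",
    "Thank you for your kindness! As a language model, I don't have feelings or emotions like humans do.",
]
for idx, pat in flag_llm_artifacts(chat):
    print(f"message {idx} matched: {pat}")
```

String matching of this kind only catches the clumsiest mistakes; as operators sanitize LLM output more carefully, such artifacts will become rarer.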

Social Engineering Scripts Auto-Generated in Multiple Languages

Before the advent of AI, language barriers posed a major constraint for pig butchering operations. Early scams primarily targeted ethnic Chinese victims, as most scam center agents were fluent in Mandarin or Cantonese. This origin is reflected in the scheme’s very name: “pig butchering” is a direct translation of the Chinese phrase “Sha Zhu Pan” (杀猪盘), which likens victims to pigs fattened before slaughter.

As the scams matured, operators began expanding their targets globally, especially toward English-speaking populations, prompting formal warnings from the FBI, the CFTC, and agencies in other countries. However, there was a significant hurdle: with the exception of the Philippines, English proficiency in the Southeast Asian countries where these scam centers are based, such as Cambodia, Laos, and Myanmar, is often limited. Scammers’ awkward phrasing or poor grammar could break the illusion of a wealthy, well-educated, cosmopolitan investor or romantic partner.

The rise of LLMs has effectively eliminated the language fluency barrier in pig butchering scams. AI chatbots like ChatGPT, along with illicit variants such as WormGPT or FraudGPT, now enable agents with limited English skills to produce grammatically correct, idiomatically natural, and culturally fluent messages. What once required a high level of language proficiency can now be accomplished by simply prompting an AI tool. This shift allows scam operators to convincingly pose as sophisticated, educated individuals, overcoming the linguistic tells that once exposed the ruse.

More alarmingly, language models now allow agents to operate in languages they have no familiarity with at all. Scammers can feed incoming messages into an LLM for translation, then auto-generate convincing replies in the same language—no manual fluency required. This capability enables pig butchering operations to expand beyond major global languages like English, Mandarin, and Cantonese, and begin targeting the long tail of high-value but under-defended markets—such as German, Dutch, Japanese, Korean, and Swedish. 

These languages correspond to affluent markets where awareness of pig butchering tactics may still be low, making individuals more susceptible to sophisticated, well-written outreach. By breaking the language barrier, LLMs unlock entirely new pools of victims who may not expect to be targeted, and may not recognize the red flags when they are.
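One practical implication for compliance teams: machine-translated conversations can still betray operational sloppiness, such as a message sent in the wrong language mid-conversation. The sketch below, which assumes the third-party langdetect library and a fabricated chat log, shows a simple per-message language check that might surface such anomalies; it is an illustrative heuristic, not a production detector.

```python
# Per-message language check for chat logs. Requires the third-party
# langdetect package (pip install langdetect).
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make langdetect deterministic across runs

def language_drift(messages):
    """Detect each message's language and return those that differ
    from the conversation's dominant language."""
    langs = [detect(m) for m in messages]
    dominant = max(set(langs), key=langs.count)
    return [
        (i, lang, msg)
        for i, (lang, msg) in enumerate(zip(langs, messages))
        if lang != dominant
    ]

# Fabricated example: a mostly German conversation with one stray
# English message, as might happen when an operator pastes the wrong
# clipboard contents.
chat = [
    "Guten Morgen! Wie war dein Wochenende?",
    "Ich habe eine spannende Investitionsmöglichkeit für dich gefunden.",
    "Sure, here is the translated reply for the German client.",
]
for idx, lang, msg in language_drift(chat):
    print(f"message {idx} detected as '{lang}': {msg}")
```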

Tracking Pig Butchering Scams in the Age of AI

While blockchain analytics is often associated with tracing high-profile hacks and multimillion-dollar breaches, it remains just as effective in uncovering the money trails behind smaller-scale scams like pig butchering. As these fraud schemes scale through the use of generative AI, their volume—not just their value—will grow exponentially. That makes timely detection and tracking more critical than ever. 
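To make the tracing idea concrete, here is a minimal sketch, assuming a fabricated in-memory list of transfer records rather than live chain data: a breadth-first walk outward from a victim’s wallet to enumerate the downstream addresses that received the funds. Real investigations layer address clustering, exchange attribution, and typology detection on top of this kind of traversal.

```python
from collections import deque

# Fabricated transfer records: (from_address, to_address, amount).
# In practice these would be parsed from on-chain transaction data.
transfers = [
    ("victim_wallet", "scam_deposit_1", 5_000),
    ("scam_deposit_1", "aggregator_A", 5_000),
    ("scam_deposit_2", "aggregator_A", 12_000),
    ("aggregator_A", "exchange_hot_wallet", 17_000),
]

def trace_downstream(transfers, source, max_hops=5):
    """Breadth-first trace of addresses reachable from `source`,
    returning each address with the hop count at which funds arrived."""
    outgoing = {}
    for sender, receiver, _ in transfers:
        outgoing.setdefault(sender, []).append(receiver)

    reached = {source: 0}
    queue = deque([source])
    while queue:
        addr = queue.popleft()
        if reached[addr] >= max_hops:
            continue
        for nxt in outgoing.get(addr, []):
            if nxt not in reached:
                reached[nxt] = reached[addr] + 1
                queue.append(nxt)
    return reached

for addr, hops in trace_downstream(transfers, "victim_wallet").items():
    print(f"{addr}: {hops} hop(s) from victim")
```

Note that the unrelated deposit address in the sample data is never reached: the traversal only surfaces addresses actually downstream of the victim, which is what keeps the money trail focused.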

Merkle Science’s Tracker, our advanced crypto investigation tool, is designed to detect suspicious flows across chains and typologies, including pig butchering operations. Contact us today for a free demo and see how Tracker can strengthen your investigations across all types of crypto crime.