FRAUD ALERT — MARCH 2026
WHEN THE MACHINE LEARNS TO LIE
Canadians reported $704 million in fraud losses in 2025. By the Canadian Anti-Fraud Centre's own estimates, that number captures at most one dollar in every ten actually stolen — and possibly as little as one in twenty. Seniors bear the heaviest burden. And the tools driving this catastrophe now cost criminals less than a monthly streaming subscription.
The economics of deception have collapsed
There was a time when crafting a believable, personalised fraud message took a skilled criminal sixteen hours of research and careful writing. Today, artificial intelligence completes the same task in five minutes. A spear-phishing email — the kind that addresses you by name, references your actual employer, and mentions details only a genuine colleague would know — can be generated by a large language model with five simple prompts. The result is indistinguishable, to most people, from a message written by a human being who knows you well.
The numbers reflect this shift with disturbing precision. Security researchers ran 70,000 simulated phishing attacks in 2025 and found that AI-generated attempts were 24 per cent more effective than those crafted by elite human red teams — a complete reversal from just two years earlier, when AI still trailed human attackers. An academic study testing AI-crafted phishing on human subjects recorded a 54 per cent click-through rate, matching the best human attackers at one-fiftieth the cost. Meanwhile, phishing volume has increased more than 4,000 per cent since late 2022. AI-enabled scams surged 1,210 per cent in 2025 alone.
The broader financial toll is staggering. The FBI recorded US$16.6 billion in cybercrime losses for 2024 — a 33 per cent increase over 2023 — with investment fraud accounting for more than US$6.5 billion of that total. Deloitte projects that generative AI could push fraud losses in the United States to US$40 billion by 2027. In Canada, the Canadian Anti-Fraud Centre reported $704 million in losses in 2025, but its own analysts estimate that only 5 to 10 per cent of fraud is ever reported — which means the true annual toll likely falls between $7 billion and $14.1 billion. Seniors account for more than 40 per cent of reported dollar losses.
"The barriers to performing sophisticated cyberattacks have dropped substantially. Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers." — Anthropic, September 2025
The criminal AI marketplace
The tools enabling this explosion are readily available, professionally packaged, and aggressively marketed. In June 2023, a tool called WormGPT appeared on hacking forums — a version of an open-source language model with all ethical guardrails removed. Priced at $100 per month, it offered unlimited output, conversation memory, and the ability to write convincing business email compromise attacks impersonating chief executives. Within weeks, a competing tool called FraudGPT emerged on dark web marketplaces and Telegram channels, priced at $90 to $200 per month, capable of generating phishing pages, writing malware, and walking inexperienced attackers through entire fraud workflows step by step.
By late 2024, a tool called GhostGPT appeared on Telegram — no jailbreaking required, no conversation logs kept, available for $150 per month. Advertisements attracted thousands of views within days of posting. Mentions of dark AI tools on cybercrime forums increased 219 per cent in 2024. By 2025, new WormGPT variants built on commercial models had emerged, available for as little as 60 euros per month. A complete AI fraud operation — including synthetic identity kits with generated photographs and identification documents, a dark language model subscription, and anonymous virtual server access — could be assembled for under US$60 per month.
Many cybersecurity researchers report significant scepticism among criminals themselves about these dedicated tools — many are considered overhyped, and some are outright scams. The real shift has been toward jailbreaking mainstream commercial AI models using widely shared techniques. Either way, the net effect is the same: a 2024 report found that 75 per cent of phishing kits offered for sale on the dark web now include AI capabilities, and 40 per cent of business email compromise messages analysed in the second quarter of 2024 already contained AI-generated content.
When AI acts on its own: the rise of agentic fraud
The most alarming development is the transition from AI that generates deceptive content to AI that autonomously executes deceptive actions. These are called agentic AI systems — programs that do not simply respond to a question but pursue a goal, taking independent steps, using digital tools, and persisting until the objective is achieved or they are stopped.
In September 2025, Anthropic — one of the world’s leading AI safety companies — disclosed details of what it described as the first documented large-scale cyberattack orchestrated predominantly by an AI system with minimal human involvement. A Chinese state-sponsored group had weaponised an AI coding assistant to conduct autonomous cyber espionage against approximately 30 organisations across technology, finance, chemical manufacturing, and government sectors. The AI executed 80 to 90 per cent of tactical operations independently — reconnaissance, vulnerability discovery, exploit generation, credential harvesting, lateral movement, and data extraction — at what Anthropic described as “physically impossible request rates.” Human operators needed only to initialise the campaign and approve final data extraction.
This represents a categorical shift in what fraud looks like. Traditional defences were designed to detect anomalies in human behaviour. An AI agent that executes a thousand actions with perfect technical precision appears entirely normal to those systems, even while systematically stealing data or authorising fraudulent payments. Security researchers now describe the emerging threat as “cybercrime-as-a-sidekick” — AI agents that autonomously execute complex attack sequences, adapt when they fail, and do not tire, panic, or make spelling mistakes. Nearly half of security professionals surveyed believe agentic AI will represent the single most dangerous cybercrime vector before the end of 2026. The World Economic Forum’s Global Cybersecurity Outlook for 2026 reported that 73 per cent of respondents said they or someone in their network had been personally affected by AI-enabled fraud in 2025.
Pig butchering: the scam that fattens you before it kills you
No fraud category has grown faster, caused more financial devastation, or proved more psychologically destructive than what researchers call “pig butchering” — a translation of the Mandarin term sha zhu pan. The metaphor is precise and deliberately dehumanising from the criminal’s perspective: victims are “fattened” through months of emotional manipulation and false intimacy before being financially “slaughtered.” The FBI recorded over US$6.5 billion in investment fraud losses — the category that encompasses pig butchering — for 2024 alone. A 2024 academic study estimated cumulative global losses at US$75 billion. Blockchain analytics firm Chainalysis documented a 40 per cent year-over-year growth in pig-butchering revenues in 2024, with the number of individual victim deposits increasing 210 per cent — more victims, each losing somewhat less, suggesting scammers are deliberately broadening their targeting to include people of more modest means.
The operation follows a script of devastating effectiveness. It begins with contact through a dating app, a social media platform, or a “wrong number” text message — a stranger who apologises for the mistake, then strikes up a friendly conversation. The persona is carefully constructed: attractive photographs (often stolen or AI-generated), a believable biography, a glamorous but not ostentatious lifestyle. An academic study of 26 verified victims found that 76.9 per cent reported trust-building periods of up to eleven months before any investment opportunity was ever mentioned. During that time, the scammer provides a 24-hour-a-day, seven-days-a-week emotional presence. They remember your birthday. They ask about your health. They discuss their dreams for the future. They fall in love with you.
When the bond is established, the investment narrative emerges organically. The scammer mentions, almost incidentally, that they have had remarkable success in cryptocurrency trading — a relative taught them, or they have access to a platform with special arbitrage capabilities. They share screenshots of impressive returns. They offer to guide you, just as a favour, because they care about your financial wellbeing. You are directed to a sophisticated fraudulent trading platform — some of these have been found in official app stores, with functioning dashboards displaying real-time market data and professional design. An initial deposit shows impressive returns within days. You are permitted to withdraw a small amount to prove the platform is legitimate. Encouraged, you invest more.
Then comes escalation. Savings become retirement funds. Retirement funds become home equity. Home equity becomes money borrowed from family. One documented case in the United States showed a scammer moving a victim from available cash to retirement savings to borrowed money in nine days. When the victim finally reaches their financial limit or begins asking difficult questions, the platform freezes their account for fabricated regulatory violations, demands a “tax prepayment” or “verification fee” to release funds, then vanishes entirely. The perpetrators are unreachable. The money is gone.
"Do you know how long I had to work to make that money?" — Walter Yamka, 82, Oakville, Ontario, after losing $750,000 to an investment scam
How artificial intelligence supercharged pig butchering
Before AI, a pig-butchering operator could manage five to ten victims simultaneously. The relationships required constant attention: consistent storytelling, emotional responsiveness, convincing replies at any hour of the day or night. This created a natural bottleneck. Human capacity was the limiting factor. AI has dissolved that constraint entirely.
Large language model chatbots now maintain thousands of simultaneous victim relationships, each with perfect narrative consistency, indefinitely. The cost of this capability is approximately $200 to $500 per month in API access fees. The language barrier — once a significant limitation, since grammatical errors or culturally inappropriate phrasing often signalled fraud — has been eliminated. AI produces grammatically correct, idiomatically natural, and culturally fluent messages in any language. Blockchain analytics tracking the largest known marketplace serving scam operations — a Telegram-based platform called Huione Guarantee — found that revenue from AI software vendors on that platform grew 1,900 per cent between 2021 and 2024. Researchers also found a direct statistical correlation: when AI software vendors on the platform saw increased inflows of money, pig-butchering scam inflows rose two to eleven days later. AI investment drives fraud output.
In one documented case investigated by cybersecurity researchers at Sophos, a victim received a message that inadvertently contained the text “As a language model, I cannot…” — the AI revealing itself mid-conversation through a glitch in its instructions. The victim had been corresponding with this entity for weeks, believing it to be a human being they had developed genuine feelings for. The psychological damage of that revelation was described as compounding the financial loss significantly.
Cryptocurrency investment fraud reported to the CAFC cost Canadians more than $315 million in 2024 alone — 49 per cent of total reported fraud losses. The average cryptocurrency transaction in pig-butchering and romance investment scams was $23,815. Every one of these cases represents a real person: someone who trusted, who hoped, who lost.
The factories behind the fraud
The human infrastructure behind pig butchering is among the most disturbing stories in modern organised crime. The United Nations estimates that more than 200,000 people are currently held in scam compounds across Southeast Asia — at least 120,000 in Myanmar, more than 100,000 in Cambodia, with additional operations in Laos, the Philippines, Malaysia, and Thailand. Many were recruited with promises of legitimate employment in IT, marketing, or hospitality. Upon arrival, their passports were confiscated. They were beaten, electrocuted, subjected to solitary confinement, and threatened with organ harvesting. They work twelve to seventeen hours a day, operating fraud scripts under quotas. Those who fail to meet targets are punished physically. The Office of the United Nations High Commissioner for Human Rights declared in May 2025 that the situation had reached the level of a humanitarian and human rights crisis.
The economics of this industry are vast. The US Institute of Peace estimated that these operations generate more than US$43.8 billion per year from Mekong region countries alone — nearly 40 per cent of the combined GDP of Laos, Cambodia, and Myanmar. The largest enforcement action against this infrastructure came in October 2025, when the US Department of Justice unsealed charges against Chen Zhi, founder of Cambodia’s Prince Group, described by the US Treasury as one of Asia’s largest transnational criminal organisations. Chen operated at least ten scam centres, equipped with 1,250 phones controlling 76,000 social media accounts. The DOJ seized approximately US$15 billion in bitcoin — the largest forfeiture in Department of Justice history. Chen Zhi was arrested by Cambodian authorities and extradited in January 2026.
Pig-butchering operations have also expanded beyond Southeast Asia. Nigerian authorities arrested 792 suspects including 48 Chinese and 40 Filipino nationals in December 2024. Operations have been documented in South America, Eastern Europe, and West Africa. The US Treasury’s Financial Crimes Enforcement Network (FinCEN) designated the Huione Group — which has processed over US$70 billion in cryptocurrency transactions since 2021, including funds from pig-butchering operations — as a primary money laundering concern, prohibiting US financial institutions from dealing with it.
When a voice is no longer proof: the grandparent scam goes AI
For seniors, the cruellest application of AI fraud technology is voice cloning deployed in what security researchers call the “grandparent scam 2.0.” Modern voice-cloning tools require as little as three seconds of audio — harvested from a social media video, a voicemail, or even a brief phone call — to generate an 85 per cent voice match. The synthetic voice simulates emotional inflection: fear, urgency, sobbing, screaming. One in four adults now reports having experienced an AI voice scam, or knowing someone who has. Of those who engaged with the caller, 77 per cent lost money.
In July 2025, Sharon Brightwell of Dover, Florida, received a phone call displaying her daughter April Monroe’s actual number. The voice on the line was sobbing hysterically and sounded exactly like April. The caller claimed to have caused a car accident while texting, striking a pregnant woman. A man identifying himself as an attorney demanded $15,000 in cash bail. Sharon withdrew the money and delivered it in person. When the scammers called again demanding $30,000 — claiming the unborn child had died — the family finally reached the real April, safely at work. “There is nobody that could convince me that it wasn’t her,” Sharon told reporters. “I know my daughter’s cry.”
In Canada, eight seniors in St. John’s, Newfoundland lost a combined $200,000 in just three days in early 2023 to AI-assisted grandparent scams. Each reported that their grandchild had called sounding distressed, claiming to have been in a car accident and needing bail money. Charles Gillen, 23, from Toronto, was arrested at St. John’s International Airport attempting to flee with the collected cash, facing 30 charges. In Saskatchewan, a 75-year-old woman identified as “Jane” lost over $7,000 after a caller sounding exactly like her grandson claimed to have rear-ended a pregnant woman — virtually the same script as the Florida case, suggesting coordinated fraud playbooks in use across North America.
These scams exploit what security researchers call the “indistinguishable threshold” — the point at which no human listener can reliably tell the difference between a cloned voice and the genuine article. We crossed that threshold some time in 2024. There is no longer any acoustic property of a telephone voice that a motivated attacker cannot replicate.
The $25.6-million video call that changed corporate security forever
The landmark corporate case is the Arup engineering deepfake of January 2024. A finance employee at the London-based firm’s Hong Kong office received an invitation to a video conference call with what appeared to be the company’s chief financial officer and several senior colleagues. Every person on the call was an AI-generated deepfake — created from publicly available video and audio recordings of the real employees. The multi-person live video call was entirely convincing. The employee authorised fifteen wire transfers totalling HK$200 million — US$25.6 million — to five Hong Kong bank accounts in a single day. No arrests have been made. No funds have been recovered.
In March 2025, a finance director in Singapore authorised a $499,000 transfer following a similar Zoom call where every executive on screen was a deepfake. In the first half of 2025 alone, deepfake-enabled fraud caused more than US$200 million in documented losses. Human ability to detect high-quality video deepfakes stands at just 24.5 per cent — meaning these fabrications fool us three times out of four. Deepfake-related fraud losses surpassed US$1 billion in 2025.
For ordinary Canadians, the implications are profound. A call displaying your bank’s official number, featuring a voice you recognise, on a video screen you can see — none of these things can be taken as proof of identity anymore. Every verification method that relies on your ears or your eyes has been compromised.
GoldPickaxe and the theft of your face
A new category of threat emerged in early 2024 when cybersecurity firm Group-IB disclosed the GoldPickaxe trojan — the first mobile malware observed specifically designed to steal facial biometric data. The attack begins with a text message or social media message impersonating a government official, directing the victim to download what appears to be a legitimate government services application. Once installed through a process that circumvents normal mobile security protections, the app requests the victim’s identity documents and intercepts text message security codes. Then it asks the victim to record a short video of their face — blinking, smiling, nodding, turning left and right.
That video is then processed by AI face-swapping technology to create a deepfake capable of passing facial recognition security checks at banking applications. Thai police confirmed victims lost tens of thousands of dollars after performing the requested facial scans. The attack was specifically timed to exploit new banking regulations requiring facial verification for large transactions — security measures that, through this method, become an attack vector rather than a protection. Deepfakes now contribute to 40 per cent of biometric fraud attempts globally. Security analysts project that by 2026, 30 per cent of organisations worldwide will no longer consider standalone identity verification solutions reliable.
Canadians who lost everything: real stories
The human cost in Canada is not abstract. Walter Yamka, 82 years old, from Oakville, Ontario, lost $750,000 — his life savings — after searching online for the best GIC rates and being directed to a fraudulent website mimicking a legitimate Canadian financial institution. “Do you know how long I had to work to make that money?” he told reporters. In the same scam ring, another Ontario man lost $900,000 and an Alberta woman lost $233,000.
Hugo Sanchez, a Toronto man going through a separation, lost $80,000 to a woman he met on social media who guided him into cryptocurrency investments through a fraudulent platform. “My separation was a tragic experience… I was very vulnerable,” he said. He is far from alone. A Toronto woman lost $355,000 to a pig-butchering operation after being befriended on Facebook — though in a rare success story, the RCMP, CAFC, and Toronto Police coordinated with Nigerian authorities to return $225,000 of those funds in December 2024.
Another victim, a man in his forties, lost nearly $400,000 including borrowed money to a pig-butchering cryptocurrency fraud. He filed a detailed statement with the RCMP. In the 2024 CAFC annual report, investment fraud topped the category list at $315 million in reported losses. Romance scam losses totalled $58 million. Cyber-enabled fraud as a whole accounted for 75 per cent of all reported Canadian fraud losses. These are only the cases that were reported.
The RCMP and the race to catch up
Canadian law enforcement has achieved meaningful results against the fraud infrastructure — though authorities are candid about the scale of the challenge. In September 2025, the RCMP executed what it described as the largest cryptocurrency seizure in Canadian history: more than $56 million seized from a digital currency exchange connected to money laundering. In August 2025, Canadian and American authorities jointly froze more than $300 million in cryptocurrency linked to fraud networks. In December 2024, Toronto police arrested a man and woman from Mississauga on romance scam charges following a $250,000 pig-butchering operation targeting multiple victims.
At the federal policy level, Budget 2025 announced the creation of a new Financial Crimes Agency to investigate money laundering, online fraud, and financial scams, with enabling legislation expected by spring 2026. The federal government also announced amendments to the Bank Act requiring financial institutions to maintain formal fraud detection and prevention policies. A new centralised cybercrime and fraud reporting system went live in November 2025, replacing a fragmented patchwork of provincial and federal reporting mechanisms. In October 2025, Ottawa published a new National Cyber Security Strategy committing $1.1 billion over five years to cybersecurity investments.
Canada’s AI regulatory framework, however, remains incomplete. The Artificial Intelligence and Data Act — part of Bill C-27, introduced in 2022 — died on the Order Paper when Parliament was prorogued in January 2025 following the Prime Minister’s resignation. A federal election in April 2025 further delayed action. By June 2025, the incoming government confirmed that the original AIDA framework would not return, describing it as “off the table as drafted.” Canada currently has no federal AI legislation and continues to operate under privacy law from the year 2000. The Canadian AI Safety Institute, launched in November 2024 as part of a $2.4 billion federal AI investment package, represents a significant institutional step — but it is not a fraud regulator.
After the scam: the recovery fraud trap
For many fraud victims, the nightmare does not end when the initial scam collapses. A secondary industry has emerged specifically targeting people who have already been defrauded. These “recovery services” contact victims — often using information harvested from public fraud complaint databases, online victim forums, or social media posts — and promise to retrieve stolen cryptocurrency in exchange for upfront fees. They impersonate lawyers, law firms, blockchain forensic specialists, and government agencies, producing documents with convincing official letterhead.
The FBI has issued specific warnings that scammers actively impersonate employees of its own Internet Crime Complaint Center. Academic research on pig-butchering victims confirmed “heightened vulnerability to secondary scams” among those who had already been defrauded — people who are desperate, ashamed, and hoping that someone in an official capacity can undo what was done to them. The rule is absolute: any entity that contacts a fraud victim unsolicited and promises to recover stolen cryptocurrency in exchange for an upfront fee is committing fraud. Legitimate recovery involves licensed legal counsel, established court procedures, and blockchain forensic analysis. No reputable service charges upfront fees or guarantees results.
How to protect yourself: what actually works in 2026
The old advice — watch for spelling errors, be suspicious of strangers asking for money, never give your PIN to anyone — remains valid but is no longer sufficient. The threats of 2026 are more sophisticated than any previous generation of fraud, and they specifically target the emotional and cognitive patterns that make human beings human. What follows is guidance based on the actual mechanics of current attacks.
What you can do right now:
- Establish a family code word. Agree on a private phrase with close family members that can be used to verify identity in an emergency call. A voice that cannot produce the code word is not your grandchild, regardless of how convincing it sounds. Neither a voice clone nor a scam call-centre operator can defeat a secret known only to your family.
- Verify unexpected investment opportunities through completely independent channels. If someone introduces you to a cryptocurrency investment platform, independently search for that platform’s name plus the word “scam.” Call the actual registered company — not any number provided by the person who referred you — and ask if the platform is affiliated with them. Contact your bank or a licensed financial adviser.
- Treat urgency as a red flag. Legitimate financial opportunities and genuine emergencies both allow time for verification. Any situation in which you are told you must act within hours, transfer cash immediately, or keep the matter secret from family members is almost certainly a scam. The pressure to act fast is the mechanism by which your rational defences are bypassed.
- Be deeply sceptical of romantic relationships that develop entirely online and never result in an in-person meeting. Pig-butchering operations invest months in relationship-building precisely because the emotional bond makes victims more likely to accept investment guidance. If someone you have never met in person begins discussing financial opportunities, treat this as a serious warning sign regardless of how long or how warmly you have been corresponding.
- Report fraud even if you are ashamed. The CAFC estimates that only 5 to 10 per cent of fraud is reported, primarily because victims feel embarrassed. This silence allows criminal networks to continue operating and prevents law enforcement from identifying patterns. Report to the CAFC at antifraudcentre.ca, to your provincial police, and to your bank immediately upon discovering a fraud. Early reports give recovery the best possible chance.
- Be aware that caller ID is not reliable. It is technically straightforward to display any phone number on a call — including your bank’s main line or a family member’s mobile number. The number displayed on your screen is not verification of the caller’s identity. Hang up, find the genuine contact number independently, and call back.
- Never send cryptocurrency to recover cryptocurrency. There is no legitimate scenario in which paying cryptocurrency fees recovers previously stolen cryptocurrency. This is the signature move of recovery fraud.
The asymmetry only grows
The fundamental challenge is structural. The tools that make fraud cheaper and more convincing are advancing faster than the defences against them. AI phishing now outperforms elite human attackers. Dark AI tools are available by subscription. Agentic AI systems autonomously execute attack sequences that once required teams of experienced hackers. Voice clones can be generated from three seconds of audio. Deepfakes defeat human detection three times out of four. Pig-butchering operations have industrialised emotional manipulation into a multi-billion-dollar global enterprise, and AI has multiplied their reach by orders of magnitude.
Canada’s reported fraud losses have risen by roughly 300 per cent since 2020. The regulatory framework is incomplete. Law enforcement, by its own admission, pursues a strategy of “disruption” rather than comprehensive prosecution — the frank acknowledgment that the volume of fraud now exceeds the capacity of the justice system to pursue it all.
The $704 million that Canadians reported losing in 2025 is, by the best available estimates, a fraction of what was actually stolen. Behind every data point is a person: someone who worked for decades to build savings, who trusted a voice or a face or a warm message, and who lost everything. The machine has learned to lie with perfect fluency. The question before every Canadian — and especially every senior — is whether we can learn, as a society, to verify more carefully than we trust.
EMERGENCY CONTACTS & RESOURCES
Canadian Anti-Fraud Centre: 1-888-495-8501 | antifraudcentre.ca
RCMP Cybercrime Reporting: rcmp-grc.gc.ca/en/cyber-crime
Investment fraud tip line (CSA): 1-888-895-8880
If you have lost money today, call your bank immediately to attempt a recall, then report to the CAFC and local police. Speed is critical — the first 24 hours are the window with the best chance of recovery.
BEHIND THE ARTICLE
An editorial perspective from The Media Glen
Writing this piece was simultaneously one of the most important and most uncomfortable things I have done as a journalist at Synexmedia.com. The discomfort is the point.
I have been covering technology and security topics for several years, and I am not easily rattled by threat statistics. Cybercrime reporting is filled with large numbers that can feel abstract — billions of dollars in losses, millions of victims, thousands of per cent increases. After enough exposure, the instinct is to treat these figures as background noise, part of the permanent low-grade alarm of the digital age.
What shifted my thinking on this article was the voice cloning section. I knew the technology existed. I did not fully appreciate that the threshold had already been crossed — that the sounds a person learns over a lifetime as the unique signature of someone they love can now be reproduced by software from three seconds of audio. Sharon Brightwell, the Florida mother who delivered $15,000 in cash to a stranger because she heard her daughter crying, is not a fool. She is a person who had a lifetime of experience accurately identifying her own child’s voice, and that lifetime of experience was weaponised against her. Any one of us could be Sharon Brightwell. Any one of our parents could be.
Why this article targets seniors
The editorial decision to frame this piece toward seniors and vulnerable populations was deliberate and, I think, necessary. Fraud reporting in the technology press is overwhelmingly addressed to a technically literate audience. That audience can absorb information about dark large language models and agentic AI architectures and update their mental model of the threat accordingly. Seniors often encounter this material — when they encounter it at all — in a form that either baffles them with jargon or condescends to them with cartoon-level simplicity.
Neither approach serves them. The Canadian Anti-Fraud Centre’s data is unambiguous: seniors over 60 account for more than 40 per cent of reported dollar losses, despite representing a smaller share of total victims. They are targeted not because they are less intelligent but because they are more trusting, more likely to have substantial savings, and more likely to feel ashamed of being defrauded — which suppresses reporting and enables continuing victimisation.
The grandparent scam, in particular, is engineered to exploit the specific emotional landscape of elderly Canadians. The fear of a grandchild in danger overrides rational processing. The pressure to act immediately, in secret, without consulting other family members, is precisely calibrated to bypass every protective instinct. A synthetic voice designed to match the emotional signature of a real grandchild is not a crude trick — it is a surgical strike against a specific type of love.
On pig butchering and shame
The pig-butchering section required particular care. These scams produce a form of compound trauma: not only has the victim lost their savings, they have lost them to what feels, in retrospect, like a love affair they were imagining. The shame of this is profound and clinically documented. Researchers consistently find that pig-butchering victims are more reluctant to report than victims of any other fraud category, precisely because the manipulation is so intimate.
I want to say plainly, to any reader who has experienced this: the shame belongs entirely to the criminals, not to you. These operations employ teams of specialists in psychology, scripting, and emotional manipulation. They have refined their methods across thousands of victims. The trust you extended was not a character flaw — it was the quality they specifically sought out and exploited. The fact that you trusted someone who presented as trustworthy for months is not evidence of naivety. It is evidence of humanity.
On Canada’s regulatory gap
The section on AIDA and Canada’s AI regulatory situation is, frankly, where I found the greatest cause for concern in the reporting. The Artificial Intelligence and Data Act was imperfect legislation, and reasonable people disagreed about its approach. But the fact that it died on the Order Paper without replacement, leaving Canada with no federal AI framework in the middle of an AI-enabled fraud epidemic, is not a minor policy matter. It is a serious gap that real Canadians are paying for, in real dollars, every day.
The incoming government’s description of AIDA as “off the table as drafted” and a promise of a “light, tight, right” approach is not policy. It is a placeholder. The new Financial Crimes Agency, the Bank Act amendments, the RCMP’s cryptocurrency seizures — these are all meaningful and I do not wish to diminish them. But they are enforcement responses to existing crimes. They do not address the upstream conditions that are generating fraud at a rate that enforcement cannot keep pace with. Canada needs AI legislation. The current situation is not acceptable.
A note on the numbers
Fraud statistics in this space are genuinely difficult to work with. Reported figures are massive underestimates by the admission of the reporting agencies themselves. Different jurisdictions use different methodologies. Blockchain analytics firms and private cybersecurity companies produce estimates that vary by factors of two or three for the same underlying phenomenon. I have used figures from government agencies and peer-reviewed research wherever possible, and I have noted where estimates diverge significantly. The direction of all of these numbers, regardless of their precise magnitude, is unambiguous: up, steeply, consistently, with no sign of reversal.
If this article causes even a handful of seniors — or their children and grandchildren who might share it — to pause before transferring money, to establish a family code word, or to report a fraud they would otherwise have stayed silent about, it has done its job.
Synexmedia.com | All content © 2026 The Media Glen Publishing. All rights reserved.