Your phone buzzes. A text. Six words on the screen, give or take, and they look perfectly ordinary. Something about a package. Maybe your address couldn't be verified, maybe there's a customs fee of $2.17, maybe delivery failed and you need to reschedule. The logo looks right. The sender ID looks right. There's a link, and if you're being honest with yourself you've tapped links like it a hundred times before without thinking twice. That's the whole trick. That right there. The moment between reading a text and deciding what to do about it has shrunk to almost nothing in the smartphone era, and the people engineering these messages know your thumb moves faster than your suspicion.

This is smishing. SMS phishing. And it has, in the space of roughly four years, grown from a minor irritant into one of the most profitable criminal enterprises on the planet. The Federal Trade Commission says Americans lost $470 million to text scams in 2024. The Canadian Anti-Fraud Centre puts total reported fraud losses at $643 million for the same year. Both agencies will tell you, if you ask, that somewhere between 90 and 95% of victims never report anything at all.

So those numbers are a floor. The ceiling is somewhere you don't want to look at too closely.

What follows is the story of how we got here. Not who did it. How it was done. The techniques, the tools, the vulnerabilities in your phone and your telecom provider and your own hardwired human behaviour that make this possible. Some of these techniques have been partially blocked. Some of them work better today than they did a year ago. And one of them, the one involving your digital wallet, is so elegant in its viciousness that reading about it might make you want to pay for everything in cash for the rest of your life.

Before the Flood: 2019 to Mid-2021

To understand why your phone is drowning in scam texts, you first need to understand what happened to scam phone calls.

For years, the dominant fraud channel was voice. Robocalls. Billions of them per month in the United States alone, spoofed caller IDs making it look like your bank or the tax office or the local police were ringing you up. The technology behind spoofing was cheap and the regulatory framework was, to put it politely, not keeping pace. In Canada, the CRTC was fielding complaints by the truckload. In the U.S., the FCC was catching political heat.

The fix arrived in the form of something called STIR/SHAKEN. Not a cocktail. A pair of nested acronyms (Secure Telephone Identity Revisited / Signature-based Handling of Asserted information using toKENs) that basically mean: a system for cryptographically verifying that a phone call actually originates from the number it claims to originate from. The FCC adopted the mandate in March 2020 under the TRACED Act. Major U.S. carriers had until June 30, 2021, to comply. Canada's CRTC imposed a similar deadline.

It worked. Within a year of enforcement, robocalls dropped by nearly 47%, according to the U.S. PIRG Education Fund citing telecom analytics data. Carriers were flagging and blocking spoofed numbers. The fraud call pipeline, which had been running at about 2.1 billion scam calls per month, got cut roughly in half.

Here's the thing, though. Criminals are not stupid. They're frequently creative, occasionally brilliant, and always paying attention to which door just closed so they can find the window that's still open. And the window, it turned out, was the one sitting right there in your text messages.

The Migration: Mid-2021 Through 2022

The shift happened fast. Disturbingly fast. A leading spam-blocking analytics firm tracked it in real time. In July 2021, the month STIR/SHAKEN enforcement began, Americans were receiving roughly 1 billion spam text messages per month. One year later, July 2022, that number had hit 12 billion. A twelvefold increase in twelve months. By the end of 2022, that same firm's annual report counted 225 billion fraudulent texts for the year. That number surpassed robocalls for the first time in history.

Why did the text channel work so well? Three reasons, and they're all still true.

First, people open texts. The industry-standard estimate, from a widely cited research benchmark, puts the open rate for SMS at 98%. Email sits around 20%. You might ignore a dozen emails before lunch. You almost certainly read every text that arrives on your phone, even the ones from numbers you don't recognise. It's a reflex. The buzz, the glance, the tap. Your thumb does the work before your brain catches up.

Second, there was almost no filtering. In 2021, SMS spam detection was primitive compared to what existed for email and voice. Carriers had poured resources into blocking spoofed calls because that's where the regulatory pressure was. Text messages? Not yet. The FCC wouldn't adopt its first rules specifically targeting scam texts until March 16, 2023, nearly two full years after the fraud migration began.

Third, and this one matters more than people think. A text message feels intimate. It feels like it comes from a person. An email from your bank goes into an inbox full of newsletters and promotions and that coupon from the pizza place. A text from your bank sits in the same thread as messages from your spouse, your kids, your doctor's office. It occupies trusted space. The fraudsters understood this instinctively.

The Lure: Anatomy of a Smishing Text

Every smishing message is built around the same psychological architecture. Urgency, authority, and a single call to action. That's it. The genius, if you want to call it that, is in how thin the message is. Not how elaborate. How thin.

A typical text reads something like: "Canada Post: Your package could not be delivered. Please confirm your address." Or: "USPS: A package is being held due to an unpaid shipping fee of $1.65." Or: "CRA: Your tax refund of $847.00 has been approved. Claim it here." Sometimes it's a bank alert. Your account has been locked. Unusual activity detected. Verify your identity immediately. There's always a link. Usually a shortened URL, sometimes a domain that looks close to the real thing but isn't.

The dollar amounts in these messages are worth paying attention to. They're almost always small. $1.65 for a shipping fee. $2.17 for customs. $4.35 for an unpaid toll. That's deliberate. The amount needs to be low enough that paying it seems easier than investigating it. You're not going to call your bank over two dollars. You're going to tap the link and type in your card number, because you've got fourteen other things to deal with today and this seems like the fastest way to make the problem go away. That speed, that tiny friction-free moment where you just want the notification to stop, is exactly where the money is.

Where does the link go? In the early days, a basic phishing clone. Someone had copied the login page of, say, a major bank, hosted it on a cheap server, and waited for credentials to roll in. Username, password, maybe a security question. Then the attacker logged into your real account with the real answers.

That was 2021. By 2023, the technique had evolved into something considerably worse.

The Factory: Phishing-as-a-Service, 2023 to 2024

At some point, and researchers place it around mid-to-late 2023, the smishing economy crossed a threshold. It industrialised. Individual operators gave way to platforms. Subscription services. A criminal could now rent access to a fully built phishing operation the way you might subscribe to a streaming service, and the comparison is not as absurd as it sounds.

These platforms are called PhaaS. Phishing-as-a-Service. Several have been identified and documented by cybersecurity research firms. They provide everything a would-be text-message fraudster needs: pre-built phishing website templates mimicking postal services, banks, toll agencies, and tax offices in over 100 countries. Built-in SMS delivery tools. Real-time dashboards showing how many victims have clicked, how many have entered data, how many have surrendered card numbers. Customer support, sometimes. Subscription tiers. One documented platform charges as little as $88 per week for basic access, up to $1,588 per year for the premium package.

The templates are good. Uncomfortably good. They replicate the real websites down to the favicon, the font spacing, the mobile-responsive layout that adjusts to your screen size. A victim clicking through on a phone, which is where nearly all smishing happens, would have difficulty distinguishing the fake from the real thing without checking the URL character by character. And nobody checks the URL character by character. Not when they're standing in line at the grocery store with a text saying their package is stuck.

But the real innovation wasn't the website clones. It was the delivery mechanism. Because by 2024, these platforms had figured out how to bypass the SMS spam filters that carriers had finally started deploying.

Through the Side Door: iMessage, RCS, and the Encryption Problem

Here's something most people don't know. When a scam text arrives on your iPhone via iMessage, or on your Android phone via RCS, it has not passed through your carrier's spam filters at all. Not even a little. And that's not a bug. It's the architecture working exactly as designed.

Traditional SMS travels through your cellular carrier's network. The carrier can inspect it, scan it for known spam patterns, check the sender against blocklists, and flag or kill the message before it reaches you. Carriers have gotten better at this. But iMessage and RCS are different. These are internet-based messaging protocols. iMessage is encrypted end-to-end, and RCS is encrypted in transit (end-to-end between supporting apps), which is wonderful for your privacy, genuinely wonderful, but it also means the carrier cannot see the content. The messages travel over data connections, not the traditional SMS pipeline. No inspection. No filtering. Straight to your screen.

The PhaaS platforms figured this out early. They began routing smishing messages through iMessage using temporary accounts with impersonated display names. On Android, they used device farms and emulators to push messages through RCS. One documented sending system could blast two million messages in a single day through these channels. The carrier never sees a thing.

Apple, to its credit, built a defence. If an iMessage arrives from someone not in your contacts, iOS disables the link. You can see the URL but you can't tap it. Smart. Except the attackers found a workaround that is, in its simplicity, almost admirable. The phishing text now includes an instruction: "Please reply Y to activate this link." Or: "Reply 1 to confirm." The moment you reply, even with a single character, iOS re-enables the link for that sender. You've told your phone this person is someone you want to hear from. The gate opens.
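The reply trick exploits a simple piece of state. The gate can be sketched as a toy model; everything below is illustrative, the class and method names are invented, and Apple's actual implementation is private:

```python
# Toy model of the iMessage link-gating behaviour described above.
# Illustrative only: these names are invented, not Apple's real logic.

class MessageThread:
    def __init__(self, sender_in_contacts: bool):
        self.sender_in_contacts = sender_in_contacts
        self.recipient_has_replied = False

    def links_tappable(self) -> bool:
        # Links are disabled for unknown senders -- until you reply.
        return self.sender_in_contacts or self.recipient_has_replied

    def reply(self, text: str) -> None:
        # Any reply at all ("Y", "1", "stop") marks the sender as wanted.
        self.recipient_has_replied = True

thread = MessageThread(sender_in_contacts=False)
print(thread.links_tappable())  # False: the URL is shown but not tappable
thread.reply("Y")               # the "reply Y to activate" trick
print(thread.links_tappable())  # True: the gate has opened
```

The model makes the asymmetry plain: the check is about whether you have ever engaged with the sender, not about whether the sender is trustworthy, which is exactly the property the "reply Y" instruction abuses.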

This technique is still active. It worked in 2024. It works right now, in 2026.

The Wallet Trick: 2024's Nastiest Innovation

Everything described so far is bad. What comes next is worse.

Credential harvesting, the old-school approach of stealing usernames and passwords, has a shelf life. Banks started deploying better fraud detection. Stolen passwords only work until the victim notices and changes them, which might be hours. Sometimes minutes. The return on a stolen credential was shrinking.

So the technique evolved. Instead of stealing your password to log into your bank account, the newer phishing sites steal your card number to load it into someone else's mobile payment wallet. The kind built into every modern smartphone. And once it's in there, the card works at any tap-to-pay terminal on the planet for as long as the card remains active.

How it works, step by step. You receive a text. Package undeliverable, toll unpaid, whatever. You click the link. The site asks for your card number, expiry, and CVV to pay the small fee. You enter them. What happens next is the key: the phishing site, in real time, uses your card details to initiate an enrolment in a mobile payment wallet on a phone controlled by the attacker. Your bank, doing its job, sends a one-time verification code to your phone via SMS. The phishing site then displays a new screen asking you to enter that code. You do. The code completes the wallet enrolment. Your card is now loaded into the attacker's phone.
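The pivotal detail in those steps is that the one-time code proves the cardholder received it, while saying nothing about whose device the wallet enrolment targets. That can be made concrete with a toy simulation. Every class and name here is invented for illustration; no real banking or wallet API works this way verbatim:

```python
import secrets

# Toy simulation of the wallet-enrolment relay described above.
# All actors and methods are invented; no real payment APIs involved.

class Bank:
    def __init__(self):
        self.pending = {}  # card -> (otp, destination device)

    def start_wallet_enrolment(self, card, device):
        otp = f"{secrets.randbelow(10**6):06d}"
        self.pending[card] = (otp, device)
        return otp  # in reality, sent to the *cardholder's* phone via SMS

    def confirm(self, card, otp_entered):
        otp, device = self.pending[card]
        if otp_entered == otp:
            return f"card {card} enrolled in wallet on {device}"
        return "enrolment rejected"

bank = Bank()
# The victim types card details into the phishing page; the attacker
# immediately starts an enrolment on *their* device...
otp_sent_to_victim = bank.start_wallet_enrolment("victim-card", "attacker-phone")
# ...then the page asks the victim for "the code we just sent you"
# and relays whatever they type straight back to the bank.
print(bank.confirm("victim-card", otp_sent_to_victim))
```

The simulation shows why the bank's check passes: the correct code comes back, so possession of the victim's phone is "proven", even though the enrolment destination was the attacker's device all along.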

And then it gets worse still. Relay applications, available for a few hundred dollars a month, let an attacker transmit the NFC signal from that digital wallet over the internet to a collaborator standing at a point-of-sale terminal. Anywhere. A different city. A different country. The collaborator holds a phone up to the payment terminal, the relay app transmits the NFC handshake from the enrolled card, and the terminal processes the payment as if the real cardholder were standing right there. No PIN required for contactless transactions under the limit. In Canada, that limit is $250. In many U.S. retailers, it's $100 to $200.

This technique was documented by independent security researchers and reported by cybersecurity journalists in early 2025. It is, as of this writing, still operational. The fundamental vulnerability has not been patched because it is not a software bug. It's a design feature of contactless payment systems being exploited through social engineering. Your bank cannot distinguish between you tapping your phone at a coffee shop and someone halfway around the world relaying your card's NFC credentials through an intermediary device.

The Domain Blizzard: Scale and Disposability

One of the reasons these operations are so hard to shut down is speed. The phishing domains, the fake websites where victims land after tapping the link, are designed from the ground up to be disposable. A major cybersecurity research team published a technical analysis in October 2025 that quantified the problem. They counted 194,345 unique domain names registered for smishing purposes since January 2024, spread across 136,933 root domains. About 68% were registered through a single Hong Kong-based registrar.

Most of these domains never see their first birthday. Never see their first month. Roughly 71% are active for less than one week. Another 12% survive a second week before being abandoned. By day fifteen, 83% are gone. Threat intelligence researchers estimate that at any given moment, about 25,000 of these phishing domains are live and operational.
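Those survival figures can be plugged into a back-of-envelope model using Little's law: live domains ≈ registrations per day × mean lifetime in days. The bucket midpoints and the registration rate below are assumptions chosen for illustration, not figures from the research, but they show how quickly a steady-state population in the tens of thousands emerges:

```python
# Back-of-envelope model of the domain churn described above, via
# Little's law: live_domains ≈ registrations_per_day * mean_lifetime.
# Lifetime midpoints and the registration rate are assumptions.

lifetime_buckets = [
    (0.71, 3.5),   # ~71% die within the first week (midpoint ~3.5 days)
    (0.12, 10.5),  # ~12% die during the second week
    (0.17, 30.0),  # remainder: assume roughly a month on average
]
mean_lifetime = sum(p * d for p, d in lifetime_buckets)

registrations_per_day = 2500  # hypothetical steady registration rate
live_estimate = registrations_per_day * mean_lifetime

print(f"mean lifetime ≈ {mean_lifetime:.1f} days")
print(f"steady-state live domains ≈ {live_estimate:,.0f}")
```

With these assumed inputs the mean lifetime works out to under nine days and the steady-state population to roughly twenty thousand, the same order of magnitude as the researchers' estimate.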

The rotation serves two purposes. It outruns blocklists, because by the time a domain gets flagged and added to a browser's safe-browsing database or a carrier's URL filter, the operation has already moved to a fresh domain. And it makes forensic investigation exponentially harder. Each domain exists for such a short window that collecting evidence, issuing takedown requests, and tracing hosting infrastructure becomes a game of whack-a-mole played at continental scale.

The domains are cheap. A few dollars each. The registrars are, in many cases, not asking hard questions. When your business model depends on keeping roughly 25,000 disposable domains live at any moment and replacing most of them every week, the per-unit cost needs to be trivial, and it is.

The AI Upgrade: Spring 2025

In April 2025, cybersecurity researchers identified something that should worry anyone paying attention. One of the major phishing-as-a-service platforms had integrated generative AI into its website-building tools. A subscriber could now type a description of what they wanted to impersonate, in plain language, and the platform would auto-generate a phishing site complete with branding, form fields, payment processing, and mobile-responsive design. No coding knowledge required. No web development experience. Just a description and a credit card.

The barrier to entry, which was already low, essentially disappeared. Before this, you needed to either build your own phishing kit or subscribe to a platform that provided pre-made templates. Templates are limited. They cover major brands in major countries, but if you wanted to target, say, a regional credit union in New Brunswick or a municipal parking authority in Ohio, you were out of luck unless you could code. The AI tools removed that constraint entirely.

This matters because localisation is the next frontier. A text about an unpaid highway toll means nothing to someone in Saskatchewan. But a text about an overdue property tax payment from your actual municipality? That hits differently. The AI tools make hyper-targeted phishing possible for operators who couldn't previously build a custom site. Whether this capability has been widely deployed against Canadian targets specifically is, at the time of writing, not yet documented in published research. But the tool exists. And there is no technical barrier to its use.

The Counterattack: What's Changed, What Hasn't

It's not all bleak. Parts of the defence have improved, genuinely and measurably, since the crisis began.

One major platform provider reported in October 2025 that its AI-powered scam detection now blocks over 10 billion suspected malicious calls and messages per month combined on Android devices, using on-device AI processing deployed on flagship phones. The AI analyses message content locally, on your phone, without sending the text to external servers. When it identifies a likely scam, it flags or quarantines the message before you see it.
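As a deliberately naive illustration of the kinds of signals such a filter weighs (production systems use trained on-device models, not hand-written keyword lists, and every threshold and pattern below is invented), a toy scorer might look like:

```python
import re

# Naive heuristic scorer for SMS spam, illustrating the kinds of
# signals a filter weighs. Real on-device filters use trained models.

URGENCY = ("immediately", "suspended", "verify", "held", "unpaid", "final notice")
SMALL_FEE = re.compile(r"\$\d\.\d{2}")          # tiny dollar amounts like $1.65
SHORTENER = re.compile(r"https?://\S*\b(bit\.ly|tinyurl|t\.co)\b", re.I)
ANY_URL = re.compile(r"https?://\S+")

def score(text: str) -> int:
    t = text.lower()
    s = 0
    s += 2 * sum(w in t for w in URGENCY)       # urgency/authority language
    s += 3 if SMALL_FEE.search(text) else 0     # the "small fee" lure
    s += 3 if SHORTENER.search(text) else 0     # shortened links hide the domain
    s += 1 if ANY_URL.search(text) else 0
    return s

msg = "USPS: A package is held due to an unpaid fee of $1.65. Pay immediately: https://bit.ly/x9z"
print(score(msg))  # high score -> flag or quarantine
```

Notice how the lure anatomy described earlier (urgency words, a small fee, a shortened link) is exactly what pushes the score up, which is why the real filters and the real scammers are locked in an arms race over phrasing.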

On the iPhone side, the response has been slower but meaningful. iOS has long offered a "Filter Unknown Senders" toggle that routes texts from numbers not in your contacts to a separate tab. The limitation is that it doesn't scan content. A scam text from an unknown number gets separated, yes, but so does a legitimate delivery notification or a text from your dentist's new number. iOS 26, released in September 2025, added a dedicated "Filter Spam" feature that uses on-device analysis to identify probable junk and routes it to a Spam folder with no notification. That's a significant improvement. But the iMessage reply-to-activate-link vulnerability described earlier remains unpatched as a design limitation rather than a bug, and attackers continue to exploit it.

Carriers have gotten better at traditional SMS filtering. One major U.S. carrier's scam-blocking system updates its threat intelligence every six minutes and blocked 19.8 billion scam calls in 2023 alone. The major Canadian carriers have similarly ramped up filtering. But carrier filters only work on messages that pass through the carrier's SMS infrastructure. iMessage and RCS bypass it entirely. That's the gap. And until the messaging protocols themselves implement sender verification at the protocol level, that gap will remain.

In November 2025, a major technology company filed a RICO lawsuit against operators of one of the largest PhaaS platforms. The suit alleged the platform had generated over $1 billion in fraud losses across more than 100 countries. Following the legal action, the targeted platform's cloud servers were blocked and its communication channels disrupted. Whether this produces lasting damage or merely forces migration to a new platform remains to be seen. If the disposable-domain model taught us anything, it's that these operations are built to be rebuilt.

What Still Works Against You Right Now

So let's be specific. As of early 2026, here is what still works.

Phishing texts delivered via iMessage and RCS still bypass carrier spam filters. The iMessage reply trick that re-enables disabled links still works. Digital wallet enrolment fraud via one-time codes is still operational. NFC relay apps are still available for purchase. PhaaS subscription platforms are still running. AI-generated phishing sites can be created in minutes. And the fundamental human vulnerability, the one where you read a text about a package and tap the link because you're busy and it costs two dollars and your brain classified it as harmless before your frontal cortex had time to object, that one is permanent. That one doesn't get a software update.

Fake package delivery texts remain the single most common smishing category. The FTC confirmed it was the number-one reported text scam in the United States for 2024. Tax-season scams impersonating the CRA and the IRS spike annually between February and April. Bank impersonation texts are growing fastest, with reporting volumes up nearly twentyfold since 2019, and they carry the highest median loss per victim at $3,000.

The volume has roughly tripled since 2021. Spam analytics data, cited by the U.S. PIRG Education Fund, shows monthly robotext volume climbing from approximately 7 billion in 2021 to 19 billion in 2024. In Canada, reported fraud losses have more than tripled since 2020. The CAFC puts cumulative reported losses since 2021 at over $2 billion. And remember: the CAFC estimates that only 5 to 10% of incidents are reported.

What You Can Actually Do

There is a hierarchy to this, and if you take one thing from this article, let it be the first item on the list. Everything else is supplementary. This is the one that matters.

Stop using text messages as your second factor for authentication. Full stop. If your bank sends you a six-digit code by text when you log in, and that is the only second factor protecting your account, you are using a system that CISA and the FBI jointly recommended against in December 2024 guidance. NIST, the U.S. government's standards body for cybersecurity, classified SMS as a "restricted" authenticator back in 2017. The reason is straightforward: text messages are not encrypted in transit. Anyone with access to telecom infrastructure can intercept them. Switch to an authenticator app. There are several good free ones available for both iPhone and Android. Or better yet, a hardware security key. Then go back into your account settings and disable SMS as a fallback method. If SMS remains an option, it remains an attack surface.
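Part of why authenticator apps are safer is visible in how little they need: the six-digit code is computed locally from a shared secret and the current time, following the open TOTP standard (RFC 6238), so nothing crosses the SMS network at all. A minimal sketch, using only Python's standard library and the RFC's published test secret:

```python
import base64, hashlib, hmac, struct, time

# Minimal TOTP (RFC 6238) sketch: the six-digit codes an authenticator
# app shows are computed locally from a shared secret and the clock.
# Nothing travels over SMS, so there is nothing to intercept in transit.

def totp(secret_b32, at=None, step=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Demo secret: base32 of "12345678901234567890", RFC 6238's test key.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # → "287082"
```

At enrolment, your bank or service stores that same secret and runs the same arithmetic on its end; the codes match because the maths matches, not because anything was transmitted.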

Beyond that: never tap a link in a text message you weren't expecting. Not even if it looks like it's from Canada Post, not even if it says there's a $1.65 fee. If you think it might be real, open your browser and type the organisation's website address yourself. Or call them. Use a phone number you find independently, not one in the text. This sounds tedious. It is tedious. It is also the single most effective behavioural defence against smishing because it removes the attacker's primary weapon, which is the link.

Enable your phone's spam filters. On iPhone, go to Settings, Messages, and turn on Filter Unknown Senders. If you're running iOS 26, enable Filter Spam as well. On Android with the default messaging app, spam protection should be on by default, but check Settings, Spam Protection, and confirm. These won't catch everything. But they will catch a lot.

Forward scam texts to 7726. That's the short code for SPAM on your phone keypad, and it works on every major carrier in Canada and the United States. Free to use. The message gets routed to your carrier's spam analysis team and into industry-wide threat intelligence databases. It's not glamorous. But it feeds the filters that protect the next person.

Report to the authorities. In Canada, the Canadian Anti-Fraud Centre can be reached at 1-888-495-8501. In the United States, file with the FTC at ReportFraud.ftc.gov and with the FBI's IC3 at ic3.gov. These reports are how enforcement agencies track trends, identify large-scale campaigns, and build cases. You might feel like it won't matter. It does. Each report is a data point, and data points accumulate.

And finally. Never reply to a scam text. Not even to say "stop." Not even to tell them off. Any reply confirms that your number is active and monitored by a real human being. That confirmation has value. It gets your number moved to a premium list, shared across platforms, targeted more frequently. The correct response to a smishing text is silence, a forward to 7726, and deletion. In that order.

Your phone buzzes. A text. Six words. Something about a package. The logo looks right. There's a link. And now you know what lives on the other side of it. Not a shipping update. Not a customs form. A machine built to strip your wallet, clone your credentials, and enrol your card into someone else's pocket before you've finished reading the notification. Four years of engineering went into making that text look boring. Making you tap without thinking. Making the two-dollar fee seem like the path of least resistance.

Don't tap the link. Type the address yourself. And for the love of everything, stop using text messages for your banking codes.

Behind the Story

The question that started this piece was simple enough: how, exactly, does a scam text message turn into money in someone else's pocket? The technical pipeline from SMS delivery through credential harvesting to digital wallet enrolment and NFC relay fraud had been documented in pieces across cybersecurity research, but not assembled chronologically for a general audience. That assembly was the goal.

Primary sources examined directly included the FTC's April 2025 Data Spotlight on text scams and its June 2023 predecessor, the FBI IC3 2024 Annual Report, the CAFC 2024 Annual Statistical Report accessed via open.canada.ca, the USPS Office of Inspector General's December 2020 management alert, CRA scam advisories published at canada.ca, official fraud awareness pages from Canada Post and the major Canadian telecom carriers, the CRTC's published enforcement actions under CASL, CISA and FBI joint guidance on mobile communications best practices dated December 18, 2024, and NIST Special Publication 800-63B in both its 2016 draft and 2017 final forms. Cybersecurity industry research came from multiple independent firms and journalists specialising in threat intelligence, phishing infrastructure analysis, and digital fraud documentation.

Cross-referencing methodology was strict. Every statistic was traced to its originating source before inclusion. Where a figure appeared only in secondary reporting, it was either verified against the primary publication or excluded. The FTC's year-over-year loss figures were checked against their own footnoted data. IC3 complaint counts were pulled from the 2024 annual report PDF directly. PhaaS platform capabilities were confirmed across at least two independent research sources. The digital wallet fraud pipeline was verified against independent reporting from multiple cybersecurity researchers and investigative journalists.

Several deliberate editorial decisions shaped the piece. No perpetrators are named. The focus is exclusively on technique, because techniques are what matter to a reader who needs to know whether a given attack vector can still reach them. The CISA recommendation to abandon SMS-based two-factor authentication is presented with its original context: the guidance was issued jointly by CISA and the FBI, targeted primarily at senior government officials and corporate executives in the context of the Salt Typhoon espionage campaign. The underlying technical rationale, however, applies to everyone. The distinction between the FCC's March 2023 first-order and December 2023 second-order scam text rules is preserved. The difference between legitimate SMS marketing engagement rates and actual smishing attack success rates is maintained throughout.

No real names of private victims appear anywhere in this document. AI tools assisted with research compilation and source retrieval. All claims were independently verified against primary sources. The writing, editorial voice, structure, and conclusions are the author's own.

Questions, corrections, or tips? Contact The Media Glen directly.