Six months ago, maybe eight, you installed something free. An AI image generator you found through a link in a forum, or a PDF converter, or a mod for a game your kid plays. The download bar filled. You clicked run. The little icon tucked itself into the corner of your taskbar and disappeared behind whatever window you happened to have open. You went back to the dog, the laundry, the pot on the stove. Back to the afternoon.
It has not been sleeping.
Every time you open a browser, every time you sign in somewhere, every time you click remember this device, it reads. Quietly. The small encrypted file where your browser keeps your saved passwords, it copies. Any folder on your drive that might hold a configuration file, an API key, a crypto seed phrase, a scrap of plaintext with your Visa on it, all scanned, all read, all bundled. It also grabs your cookies. A cookie is the small token a website drops on your computer when you sign in, the thing that lets you close the tab and come back later without typing your password again. Your computer holds thousands of them. The program reads them all.
Once a week or once a day, depending on how it was built, the whole haul gets zipped and shipped to a server in a place you have never heard of. Moldova. Romania. Some numbered block of a data centre in a country you will never visit and whose name you cannot spell.
This is called an infostealer. The category has been around for years. It was never aimed at you specifically. It was aimed at everybody.
Here is the part of the story that is newer. You have probably, in the past year or two, started signing into an AI service. Maybe several. Maybe without paying attention to it. ChatGPT for a recipe, Claude for a contract review, Gemini because it came bundled with your Google account. Maybe your employer set you up with Microsoft Copilot and you did not quite grasp what it was. And when you told your browser to save the password, because everybody tells their browser to save the password, the infostealer caught that too. Same folder. Same sweep. It did not care what it was grabbing.
You will not know you were hit. That is the thing people struggle with most when they hear about this. The program is designed to be quiet. There is no flashing warning, no popup, no slowdown you would notice. The gap between the day it got onto your machine and the day something visibly bad happens is usually measured in weeks. Often months. By the time anything surfaces, your credentials have been packaged and sold two or three times, and the trail back to the original infection is cold.
Hundreds of thousands of stolen AI credentials now wind up on underground markets every year. A fresh batch each month. Fewer in August, somebody told me once, because even criminals seem to take a week off.
The Log
They call each infected machine a log. The log is a folder, or a zip file, or both. Inside you would find tidy subfolders: Passwords, Cookies, Autofills, Wallets, System Info, each with text dumps pulled from a specific corner of your computer. The system-info file holds your operating system version, your IP address, the name you gave your machine when you first set it up. Which was probably something like Family Room PC, or just your first name and last initial.
The log gets uploaded. The log gets sold.
A run-of-the-mill log goes for a few dollars. Logs from computers whose browsers are signed into interesting places, a developer's source-code account, a corporate email, a cloud console with real billing behind it, go for much more. Logs with AI-service credentials attached have their own little category now, growing.
The buyer does not break your password. They do not need to. Your cookies are the website's own signed promises, each one saying this person has already logged in, let them through. Drop the cookie into a browser on the other side of the planet, point it at the site, and the site waves the buyer in. No multi-factor prompt, because the multi-factor already happened for that session. No alert, because nothing looks strange from the website's end. The session just picks up where it left off. Which, in a sense, it is doing.
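The mechanics are almost disappointingly mundane. Here is a minimal sketch, using only Python's standard library, of what "drop the cookie into a browser" amounts to. The domain and cookie name are illustrative, not any real service; the point is that the jar, like the server, has no idea how the token was obtained.

```python
from http.cookiejar import Cookie, CookieJar
import urllib.request

# A session cookie is a bearer token: whoever presents it is,
# as far as the server can tell, the already-logged-in user.
jar = CookieJar()
jar.set_cookie(Cookie(
    version=0, name="session_id", value="stolen-token-value",
    port=None, port_specified=False,
    domain="example.com", domain_specified=True, domain_initial_dot=False,
    path="/", path_specified=True,
    secure=False, expires=None, discard=True,
    comment=None, comment_url=None, rest={},
))

# Attaching the cookie to an outgoing request is exactly what a
# browser does on every page load. No password, no MFA prompt.
req = urllib.request.Request("https://example.com/account")
jar.add_cookie_header(req)
print(req.get_header("Cookie"))  # session_id=stolen-token-value
```

Nothing in that exchange identifies the person at the keyboard. That is the whole design of session cookies, and the whole reason a stolen one works.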
If your chat history with an AI assistant is still there, and it usually is because that is the default, the buyer gets to read it. Everything you have ever pasted in. And you have pasted more in than you remember. Proprietary code. The rough draft of an email you were not sure you should send. A contract a colleague forwarded because you said you would look at it. Once, a password.
If you used an API key with billing attached, the buyer spends your money. The usage runs up fast when there is no adult supervising the room. People have woken to bills ballooning overnight into the tens of thousands of dollars, because somewhere on the planet, a person they will never meet is running a small side business on their credentials.
The Bundle
None of this requires a home computer. A phone is a computer now. Phone-based infostealers are a smaller ecosystem than the desktop one, but a growing one, and the AI apps that most people actually use for day-to-day questions live on phones. Anything that can read app data and exfiltrate it is in scope. So is anything that asks for accessibility permissions and then uses them to lift what flashes across your screen.
For a long time, that was where the story ended. Stolen credentials, credential reuse, reuse turning into small-dollar fraud or a credit-card buy, and the log retired quietly to a graveyard of spent credentials. Bad, but bounded.
Except the same infection that took your AI login took your work email. The work email is worth more than the AI login. The work email has colleagues who trust it, an accounts department that pays invoices that come from it, suppliers who answer wire-transfer instructions sent from it. The person who bought your log does not sit down and laugh at your ChatGPT history. The person who bought your log uses your Outlook to talk a junior accounts clerk into changing a bank routing number on a real invoice, and by the time anyone notices, the money is in three countries, none of which will return it. Or the buyer sells the bundle on to somebody whose specialty is finding a way deeper in, and three weeks later, a ransomware crew is inside the network and every file has an extension you have never seen before.
The AI credential is rarely the whole prize. The AI credential is the foot in the door.
The Agent
Then the agents arrived.
Not everyone has one yet. Most people reading this probably do not. But if you do, or if your employer has rolled one out, or if the bright-looking tool you signed up for last month actually does more than just chat, this next part matters.
An AI agent is not a chatbot. A chatbot talks. An agent does things. It can send email in your name, open files, run commands, reach into the tools you have connected it to, your calendar, your cloud storage, your messaging apps, your code repositories, and act. Not every action is drastic. Most of them are useful. An agent will draft the replies, book the meeting, reorganize the project files, bump the pull request. Things you would have done yourself if you had the minutes.
An agent with your credentials is not a window into your life. It is a pair of hands. Your hands, operating without you.
A short phrase has begun to circulate among the people who study this stuff for a living. A compromised agent is an insider threat. It means that a stolen login to a plain chatbot is just a stolen login. A stolen login to an agent wired into your company file-sharing and your team chat and your private source code? That is a person who, as far as the systems can tell, works where you work. Has your badge. Has your face. Sits in the chair you sat in yesterday.
You might think, well, surely all of this has been sorted out by now.
It has not.
It has been partially sorted. Law enforcement has pulled down some of the biggest infostealer operations, the marketplaces that fed them, the servers that received the logs. Browsers have added new protections that make cookie theft harder than it used to be. AI companies have gotten better at spotting suspicious sign-ins hitting their login pages. Some things are genuinely more difficult today than they were two years ago.
But the economy is stubborn in a way that is hard to appreciate until you have watched it work. A marketplace goes down and a nearly identical one appears three weeks later with a slightly different name. A malware family gets disrupted and its source code leaks and five copies appear, each rented out by a different operator for a few hundred dollars a month, each one a little better than the one that got caught. The commodity is cheap to make. The commodity is cheap to buy. The thing the commodity steals is access, and access has never been worth more.
Meanwhile, the attack surface keeps widening. Every new AI tool is another credential worth stealing. Every new AI agent is another pair of hands worth hijacking. Every configuration file, every stored API token, every remember this device cookie for a service that lets you actually do something, is a fresh line on the stealer's shopping list. A whole new category of credentials, the keys that belong to AI agents themselves, living right there on your own machine, has only just started to be priced on the underground. Nobody yet knows what one of those is really worth. We will find out.
The Delivery Truck
There is one more turn worth mentioning. The attackers are using AI too. To write better phishing emails in cleaner English than they could manage on their own. To generate the help-desk tone of a poisoned search result. To build the small conversational scripts that sit on the other end of a fake customer-support chat. The same technology that is the target is also, now, the weapon. The phishing email that slips past your filter next month will be grammatically perfect in a way that it would not have been three years ago.
And some of the newer campaigns are clever in a way that does not need you to install anything sketchy at all. They poison the search results. You type a question, how do I free up disk space on my Mac, and the top result is a link that looks like a helpful shared conversation with a legitimate AI chatbot on a familiar, legitimate domain. You click. You read. The polite little assistant tells you to paste one short command into your terminal for a quick cleanup. You paste. The command is not the assistant's. The command is the attacker's. The assistant was the delivery truck. You did not download anything from a shady site. You followed advice from a search result and an interface you trusted. Fifteen seconds later your credentials and your cookies and whatever else was on your machine are on their way to somebody else's server.
So, yes. It can still be done today. Variations on it are being done today. To more people than you would guess.
The Exposed
A detail worth staying with. The readers of this story who are most exposed are not the big companies with dedicated security teams and quarterly risk reviews. They are the independent contractors, the two-person accounting offices, the clinic that runs on one receptionist and a shared laptop, the family-owned print shop that signed up for a ChatGPT Plus account on somebody's personal card because that was simpler. The people who have the least security infrastructure are also the people who increasingly rely on AI tools to punch above their weight. And they sign in on the same machine they use for everything else. Their AI credentials and their bookkeeping credentials and their payroll credentials sit in the same browser profile. One infection is the whole business.
The Canadian Centre for Cyber Security has been flagging AI-amplified cybercrime as its top-line trend for the current threat assessment window. The Cyber Centre does not name infostealers specifically in its top-line summary, but the category is named repeatedly in the supporting guidance it has put out for small businesses. The advice is consistent with what any competent security professional would tell you, and it is worth reading even if, perhaps especially if, you have never considered yourself the kind of person who would be targeted. You are exactly the kind of person who would be targeted. That is the model.
What To Do
There is no way to make yourself impossible to hit. You can make yourself a harder target.
Turn on multi-factor authentication everywhere it is offered. It does not stop cookie theft, but it stops password reuse, which is most of what the underground economy is built on. Use a password manager that is itself protected with its own second factor, and read the emails it sends you when a new device signs in. Stop letting your browser save passwords for anything that actually matters, your banking, your work email, your AI services. Sign out of the services that matter now and then; a session that has been properly ended is worthless to whoever copied its cookie. Be suspicious, more suspicious than feels reasonable, of any search result or AI chat that tells you to paste a command into your terminal or run a script you do not understand. Treat API keys like cash.
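Why the second factor helps is easy to see in code. The six-digit codes an authenticator app shows are not stored anywhere a stealer can sweep; each one is derived fresh from a shared secret and the current time, per RFC 6238. A sketch follows, using the RFC's own published test secret rather than any real one:

```python
import base64
import hashlib
import hmac
import struct
import time

# Sketch of RFC 6238 TOTP: the rotating code an authenticator shows.
# A stolen password is useless without the device holding the secret.
def totp(secret_b32, t=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)                      # 8-byte counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" (base32 below),
# time 59 seconds, 8 digits, SHA-1.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # 94287082
```

The code changes every thirty seconds, which is why a log full of saved passwords, on its own, cannot open an account that demands one.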
If you use AI agents, ask what you have hooked them into. Whether they really need the permissions you have given them. What would happen if someone who was not you were sitting at the keyboard. What your agent knows about you that you would not want read aloud in front of a judge.
If any of this makes you wonder whether you might already be infected, do not spend the weekend worrying. Pick up your machine and act. Run a full scan with current anti-malware, not the scanner that came with the operating system and has not been updated in eighteen months. Change every important password from a different device, the one you trust most, not the one you are suspicious of. Sign out of every AI service you use, everywhere, and sign back in, which forces old session cookies to expire. Revoke any API keys you have issued and generate new ones. Watch your bank and your credit card for anything small and unfamiliar, because the first fraudulent charge is usually tiny, a test run before the big one. If you run a business, tell your accounts department to hold any routing-number changes until they are verbally confirmed by the person who supposedly requested them. That last step alone has saved people in this country a lot of money.
The thing in the corner of your taskbar does not care about any of that. It will keep doing its small, patient, awful work until somebody notices. Or until you do.
Notice.