There is a moment — quick, almost imperceptible — when you click a button on a website and a small window pops up, floating over the page. Maybe it says "Sign in with your email account." Maybe it's asking for your cloud service credentials. The window looks right. The little padlock icon is there. The web address in the bar reads exactly what you'd expect it to read. You've seen this window a hundred times.
You type your password.
That's when it gets you.
The attack has a name. Browser-in-the-Browser. BitB for short. It works by building a fake browser window — the whole thing, address bar and padlock and title and all — entirely out of the same basic code that makes up every web page you've ever visited. No malware. No virus. No file you downloaded by accident. Just HTML, CSS, and a few lines of JavaScript, arranged so cleverly that what you're looking at is not a window at all. It's a painting of a window. And paintings don't open onto anything real.
The Researchers Saw It First
The idea didn't come from criminals. It came from researchers.
In February 2007, a team from a major technology company's research division published a paper titled "An Evaluation of Extended Validation and Picture-in-Picture Phishing Attacks." Their finding was not encouraging. A fake browser window, rendered inside a real webpage, fooled people as effectively as the most sophisticated URL-spoofing attacks of the era. Participants in the study couldn't tell the difference.
The paper landed in academic archives and largely stayed there. Browser makers didn't act on it. The web kept growing. Users kept trusting the little padlock.
Twelve years later, that dormant idea woke up.
The First Time It Happened for Real
In December 2019, gamers started losing their online gaming accounts. Not because of malware. Not because of weak passwords. Because websites were showing them a fake login window — a convincing replica of a gaming platform's own pop-up authentication box — and asking them to type in their credentials. The window looked exactly right. The address bar read exactly the right thing. The people running these sites had figured out, without any published guide, how to build a fake browser chrome out of raw web code.
By February 2020, a cybersecurity research team had analysed the campaign in enough detail to describe how it worked. Over 200 malicious domains. Anti-debugging code baked in to prevent security researchers from examining the JavaScript too easily. A fake chatbox on the site, pre-loaded with convincing-looking messages, to sell the illusion further.
Nobody called it Browser-in-the-Browser yet. That name was still two years away.
The Day It Got a Name
On March 15, 2022, a security researcher posted a technical writeup that put a name to the technique, documented exactly how it was built, and — in what turned out to be a decision with real consequences — released a full set of ready-to-use templates on a public code-sharing platform.
The templates were polished. They covered popular browsers on both major desktop operating systems, in both light and dark mode. They replicated each operating system's own visual language down to the pixel. One version used the exact hex colour codes for the coloured window-control buttons found on macOS. Another precisely matched the close-button hover colour used by the browser on Windows. There was even a variant with an animation library-powered fade-in, so the fake window would appear with the same slight delay you'd expect from a real pop-up.
The address bar in all of them was not an address bar. It was a styled text element. The padlock icon was not a browser security indicator. It was a small image file called ssl.svg. The whole structure — window, toolbar, content area — was a single <div> tag containing nested layers of carefully arranged markup.
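As a sketch of what that structure amounts to, the whole fake window can be assembled as a string of ordinary markup. The function and class names here are illustrative assumptions, not taken from the released templates:

```javascript
// Illustrative sketch only: a BitB "window" is just page content.
// buildFakeWindow, fake-addressbar, etc. are invented names for illustration.
function buildFakeWindow(displayedUrl, formUrl) {
  // One <div> plays the part of an entire browser pop-up.
  return [
    '<div class="fake-window">',
    '  <div class="fake-titlebar">Sign in</div>',
    '  <div class="fake-addressbar">',
    '    <img src="ssl.svg" alt="">',       // the "padlock" is an image file
    `    <span>${displayedUrl}</span>`,     // the "address" is styled text
    '  </div>',
    `  <iframe src="${formUrl}"></iframe>`, // replica login form
    '</div>',
  ].join('\n');
}
```

Nothing in that string touches the browser's own interface. It is page content, as editable by the page's author as any paragraph of text.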
Combine that with an iframe pointing to a fake login page, the researcher wrote, and it's basically indistinguishable.
He was right.
It's Not a Window. It Never Was.
Picture the real thing first.
When you click "sign in through a third-party account" on a website, your browser opens a genuine pop-up window. Separate. Independent. The browser itself controls it. It has its own address bar showing the authentication service's actual domain, its own padlock confirmed by your browser's security layer, its own entry in the operating system's taskbar. It lives outside the original webpage entirely.
Now picture the fake.
The attacker's webpage runs a small JavaScript function the moment you click that login button. The function doesn't open a new window. It makes a hidden <div> element visible. That element is built to look exactly like a browser pop-up. Fake title bar. Fake address bar showing whatever domain the attacker typed into a placeholder variable. Fake padlock image. And buried inside it all, an iframe loading a replica login form designed to collect whatever you type.
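That trigger can be sketched in a couple of lines. The argument here stands in for the hidden element, and every name is an illustrative assumption rather than code from any real kit:

```javascript
// Hedged sketch of the trigger: clicking "sign in" opens nothing.
// It only un-hides a <div> that was painted to look like a pop-up window.
function onLoginClick(fakeWindow) {
  fakeWindow.style.display = 'block'; // the "window" appears
  return fakeWindow;
}
```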
The address in that fake bar can say anything. The attacker chose it. It costs them nothing to display a well-known authentication service's address. The browser has no say in the matter. You're not looking at something your browser made. You're looking at something a website made, and websites can draw anything they like on your screen.
When you submit your credentials through that form, they go to the attacker's server. You get redirected to a legitimate-looking page. Nothing feels wrong.
The Padlock Was Always the Lie
For years, every security awareness campaign said the same thing. Check the address bar. Look for the padlock. If those things look right, you're safe.
BitB inverts that completely. The address bar and the padlock are the lie. They're the attack, not the protection.
The timing sharpens it considerably. The fake window typically appears when you're already in the middle of doing something — logging into a service, completing a transaction, joining a game. You're focused on the task. The pop-up appears at exactly the moment you'd expect it. Your guard is already down because everything up to that point felt normal.
The victim isn't being fooled by a suspicious email or a misspelled domain. They're being fooled by the thing that was supposed to prove the page was legitimate.
The Tells — If You Know What to Look For
The fake window does have weaknesses. Not many. But they exist.
The most reliable test is also the simplest. Try to drag the window outside the browser. A real pop-up behaves like any other window on your operating system. You can drag it off the visible webpage, past the browser's own controls, onto another part of your screen. The fake window can't do that. It's trapped inside the browser viewport, because it is part of the webpage. Drag it toward the edge, and it stops. That's not how real windows behave.
A good password manager will also give it away, though the warning is silent. Password managers autofill based on the actual domain of the page you're on. If you're on a fraudulent site but the fake address bar shows a legitimate service's address, your password manager sees the real domain. It won't offer to fill in your credentials. It just sits there, quiet. If you notice that silence — if you wonder why your password manager isn't offering anything — that's your signal.
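The check a manager effectively performs can be sketched in one function. This is simplified; real managers also weigh subdomains, schemes, and ports:

```javascript
// Simplified sketch of a password manager's autofill decision.
function shouldAutofill(actualPageOrigin, savedCredentialOrigin) {
  // The decision keys off the page's true origin, never any text the page
  // draws on screen; a BitB fake address bar is invisible to this check.
  return actualPageOrigin === savedCredentialOrigin;
}
```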
Try pressing Ctrl+W or Cmd+W while the fake window is open. Those shortcuts close a browser tab. If the entire page disappears instead of just the pop-up, the window wasn't real. A real pop-up has its own existence and closes independently.
Right-click on the address bar of the fake window. You'll get the web browser's standard context menu for web content, not the special menu your browser shows when you right-click a real address bar.
Most people won't try any of these things. Most people, most of the time, see the familiar shape of a login window, fill it in, and move on.
The Weapon Is Just a Webpage
There's no malware here. No exploit. The attack runs entirely on web technologies that every browser in the world supports, because they're the same technologies that power legitimate websites.
The fake window is built from <div> elements and CSS properties. That floating-above-the-page illusion comes from a single CSS declaration: box-shadow: rgba(0, 0, 0, 0.35) 0px 5px 15px. The drag functionality comes from a standard JavaScript library. The close button changes colour when you hover over it because of a standard CSS hover rule.
Traditional security software doesn't flag this. Endpoint detection tools scan for malicious processes, suspicious file behaviour, unusual network calls. A fake element that looks like a browser window doesn't trigger any of those detections. The page loads over HTTPS. The traffic looks normal. Nothing trips the alarm because there's nothing the alarm knows to look for.
The whole mechanism fits in a few hundred lines of code. Anyone with basic web development skills can read the original templates and understand them completely in an afternoon.
Then They Made It Worse
The original technique had one notable limitation. The fake content area — the login form sitting inside the fake window — lived in an iframe. Major identity providers set HTTP headers specifically designed to block their pages from loading inside iframes on third-party sites. These frame-busting protections make the browser refuse to render the legitimate authentication page inside any third-party iframe.
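A simplified sketch of that refusal, covering only the common header values involved (real browsers implement a richer algorithm, including full CSP source-list parsing and origin comparison):

```javascript
// Simplified sketch of the browser's frame-blocking decision based on the
// X-Frame-Options and Content-Security-Policy response headers.
function framingBlocked(headers, framerIsSameOrigin = false) {
  const xfo = (headers['x-frame-options'] || '').trim().toUpperCase();
  const csp = headers['content-security-policy'] || '';
  if (csp.includes("frame-ancestors 'none'")) return true; // CSP refuses all framing
  if (xfo === 'DENY') return true;                         // never frameable
  if (xfo === 'SAMEORIGIN' && !framerIsSameOrigin) return true;
  return false;
}
```

A phishing site is, by definition, never same-origin with the identity provider, so any of these headers is enough to keep the genuine login page out of the attacker's iframe.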
Attackers solved this by not using the real login page at all. They built replicas — visually identical forms that mirrored the real thing and sent captured credentials to their own servers. It worked. But it also introduced a maintenance burden. The attacker had to keep a convincing copy of the login page current as the real provider kept making changes.
A later refinement removed the iframe entirely. A reverse proxy sits between the victim and the real identity provider. Injected overlay code draws the fake browser chrome on top of the proxied real page. The genuine login page loads and functions correctly while the surrounding fake window, fake address bar, and fake padlock are being drawn on top by attacker-injected code. The victim interacts with a real login form. The proxy reads everything passing through it, including session tokens that let attackers bypass two-factor authentication entirely.
By 2025, You Could Subscribe to It
Phishing-as-a-service platforms — subscription-based criminal infrastructure that lets people run sophisticated credential-theft campaigns without building anything from scratch — began incorporating BitB as a standard feature. One documented kit added automatic detection of the victim's operating system and browser, then adapted the fake window's appearance in real time to match. Victims on one operating system saw that system's window style. Victims on another saw the other. The kit used a commercial bot-detection service as a gate to filter out security researchers. Code obfuscation techniques broke up recognisable text strings using invisible HTML elements.
The technique had gone from a researcher's proof-of-concept to a subscription feature in a criminal marketplace. That trajectory — academic finding to wild-west exploitation to commoditised service — took less than four years.
What the Math Knows That Your Eyes Don't
The most effective protection isn't behavioural. It doesn't require users to know about drag tests or password manager silences.
FIDO2, WebAuthn, and passkeys work at the protocol level. When these authentication systems are in use, the browser generates a cryptographic response tied to the actual domain of the page being loaded — not the domain displayed in any address bar, real or fake. A BitB fake window is sitting inside a fraudulent domain. The cryptographic challenge goes to that fraudulent domain. The real identity provider's servers, expecting a response bound to their own legitimate domain, reject it. The fake window can display whatever address it likes and it won't matter. The math says no.
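From the identity provider's side, the decisive comparison can be sketched like this. It is heavily simplified: real WebAuthn verification also checks the challenge, the RP ID hash, and the assertion signature. What matters is that the origin field is written by the browser, not by the page:

```javascript
// Simplified sketch of why BitB fails against WebAuthn: the browser embeds
// the page's true origin in clientDataJSON before it is covered by the
// authenticator's signature, and the server compares it to its own origin.
function originAccepted(clientDataJSON, expectedOrigin) {
  const clientData = JSON.parse(clientDataJSON);
  // A credential harvested on the attacker's site carries the attacker's
  // origin here, whatever the fake address bar displayed.
  return clientData.type === 'webauthn.get' && clientData.origin === expectedOrigin;
}
```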
Password managers, used consistently, provide secondary defence. A manager that won't autofill on a mismatched domain is a manager that catches exactly what BitB depends on.
Browser extensions designed to detect anomalous iframe behaviour can catch some variants. Security platforms that inspect the DOM structure of loaded pages — looking for fake browser chrome elements hiding inside the page — can flag the attack before credentials are entered.
None of these protections are automatic for most users. Passkeys are still in the middle of a slow rollout. Password managers require setup and habit. DOM-analysis extensions require installation and awareness.
For most people browsing the web today, the protection that was always supposed to work — looking at the address bar, trusting the padlock — is the exact thing this attack targets.
It Still Works
Yes.
The technique is older now and more widely known in security circles. Defenders have written about it. Some organisations have deployed browser security tools specifically designed to detect it. Identity providers have continued expanding their support for passkeys and FIDO2.
But the attack surface hasn't closed. Most websites still use password-based authentication. Most users haven't switched to passkeys. Most browsers don't natively alert you when a page is drawing an element that resembles browser chrome. The core deception — a fake window made of web code, showing an address bar you have no reason to doubt — still works in any browser, on any operating system, for any target who doesn't know to look for the signs.
Criminal phishing kits still ship with it as a feature. New variants keep appearing. Those researchers who documented the problem in 2007 put it plainly. Users trust the visual representation of security. That visual representation can be forged. Nothing fundamental has changed.
The window that lies is still open.
This article is part of The Media Glen's cybercrime series, which examines the tools and techniques used against ordinary people online.
Behind the Story
This piece started with a simple question. Can a phishing attack fool someone who already knows to check the address bar? The answer, as it turned out, was yes — and the mechanism behind it had been sitting in a published academic paper since 2007.
Research for this article drew on primary technical sources throughout. The original 2007 academic paper was read in full, not summarised from secondary reporting. The open-source attack templates released in 2022 were examined directly — the HTML, the CSS, the JavaScript — so that the technical descriptions here reflect what the code actually does rather than what someone else said it does. Hex colour codes, file names, CSS property values, and the names of DOM elements cited in this piece all come from that direct examination.
From there, the research followed the attack's development forward in time. Published threat intelligence reports from multiple independent security research teams were cross-referenced against one another. Where claims appeared in only one source, they were treated with appropriate caution and either verified elsewhere or set aside. The timeline was reconstructed from contemporaneous publication dates, not from retrospective summaries, which have a way of getting things wrong.
The decision to remove all researcher and company names was deliberate. The technique is the story. It doesn't belong to any one discoverer, and understanding how it works — and how to defend against it — doesn't require knowing who built the first clean template or which firm published the first threat advisory. What matters is what the attack does to ordinary people who have done nothing wrong except trust a padlock icon.
The claim that this attack still works today is not conjecture. Documented phishing kits incorporating the technique were still active and commercially available as of late 2025. The authentication infrastructure that would neutralise it — passkeys and FIDO2 deployed universally — does not yet exist at scale. Both of those facts were verified before this article was published.
Nothing in this piece was generated or drafted by an AI writing tool. Research assistance was used for source retrieval and cross-referencing. Every sentence was written and reviewed by a human editor.
If you've encountered an attack that resembles what's described here, or if you have documentation of a variant not covered, The Media Glen welcomes contact through its website.