How the Algorithms That Run Your Life Were Rigged Before You Ever Applied
A Synexmedia Investigative Feature
March 2026
• • •
Imagine you walk into a bank. You are dressed well, you have a steady job, and your bills are paid on time. You sit across from a loan officer, hand over your paperwork, and wait. But there is no loan officer. There is no human being reviewing your file at all. Instead, a machine—a piece of software you will never see, built by engineers you will never meet, trained on data you never consented to share—has already decided your fate. In the time it took you to sit down, an algorithm scanned your postal code, your browsing habits, the formatting of your phone’s contact list, and hundreds of other data points you didn’t even know existed. The answer is no. And nobody can tell you why.
Across the industrialised world, automated decision-making systems—commonly called algorithms or artificial intelligence (AI)—have moved from the margins of tech laboratories into the absolute centre of modern life. These systems now decide who gets a mortgage, who gets a job interview, who receives urgent medical care, and who goes to prison. They operate at breathtaking speed, processing millions of decisions per hour with a confidence that human beings could never match. And they are presented to the public as something better than human judgement: objective, data-driven, free from prejudice.
The trouble is, they are none of those things.
A growing mountain of evidence—from academic researchers, investigative journalists, civil rights organisations, and even the courts—reveals that these supposedly neutral systems are riddled with hidden biases. They do not eliminate discrimination. They automate it. They take the racism, sexism, and class prejudice baked into decades of historical data, and they apply those patterns as ironclad rules for the future, at a scale and speed no human bigot could ever achieve. The technical term is algorithmic bias: repeatable errors in a computer system that produce unfair or discriminatory outcomes. But the human term is simpler. It is injustice, running on autopilot.
This article is a comprehensive guide to understanding how that injustice works. It is written for readers who have no technical background whatsoever—you do not need to know what an algorithm is, or how machine learning functions, or what a neural network looks like. By the end, you will understand exactly how automated systems absorb the prejudices of the past, how a shadowy industry of data brokers feeds those systems, how real people’s lives have been damaged in healthcare, housing, finance, and employment, and what governments around the world are doing—or failing to do—about it.
• • •
Part One: What Exactly Is an Algorithm, and Why Should You Care?
Before we go any further, let us strip away the jargon. An algorithm is simply a set of step-by-step instructions for solving a problem. A recipe is an algorithm. The directions on your GPS are an algorithm. When you sort your laundry by colour, you are running a very basic algorithm in your head.
In the world of technology, algorithms are sets of mathematical instructions that computers follow to analyse information and make predictions or decisions. When people talk about “artificial intelligence” or “machine learning,” they are usually talking about a specific type of algorithm that can improve itself over time by studying enormous quantities of data. Think of it this way: instead of a programmer writing every single rule by hand, the computer looks at millions of past examples and figures out the patterns on its own.
Here is a simple example. Suppose a bank wants to predict which loan applicants are most likely to repay. A traditional approach would be for a human underwriter to review each application. A machine-learning approach feeds the computer thousands of past loan records—who repaid, who defaulted, and every detail about those borrowers—and the computer learns to spot patterns that predict repayment. It might discover that applicants with longer employment histories and lower debt ratios tend to repay more reliably. So far, so reasonable.
But what if the historical data contains a pattern that is not reasonable at all? What if, because of decades of discriminatory lending, the data shows that people living in predominantly Black neighbourhoods defaulted more often—not because of personal irresponsibility, but because they were systematically denied the financial support that white borrowers received? The computer does not understand history. It does not understand racism. It sees a statistical pattern, and it applies it as a rule. Residents of those neighbourhoods are now penalised again, not by a prejudiced loan officer, but by a machine that learned prejudice from the data it was given.
That, in its simplest form, is algorithmic bias. The machine is not making an error in the mathematical sense. It is finding a real pattern in the data. But the data itself was shaped by injustice, so the pattern the machine learns is an echo of that injustice—projected forward into the future with mechanical precision.
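For readers who want to see the mechanics for themselves, here is a deliberately simplified sketch in Python. It fabricates a small, synthetic loan history in which one neighbourhood was systematically denied support, then trains a standard statistical model on that history. The data, the variable names, and the numbers are invented for illustration, and no real lender's system works exactly this way; the point is only to show how a pattern in past data becomes a rule.

```python
# A minimal sketch, using synthetic data: a model trained on a biased loan
# history learns to penalise an address.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two applicant features: income (in thousands) and a flag for the
# historically redlined neighbourhood.
income = rng.normal(50, 15, n)
redlined_area = rng.integers(0, 2, n)

# Synthetic "history": repayment depends on income AND on the area flag,
# because residents of the redlined area were denied the support that would
# have helped them repay. The injustice is baked into the outcomes.
p_repay = 1 / (1 + np.exp(-(0.08 * (income - 50) - 1.2 * redlined_area + 1.0)))
repaid = rng.random(n) < p_repay

# Train an ordinary model on that history.
X = np.column_stack([income, redlined_area])
model = LogisticRegression().fit(X, repaid)

print(f"learned weight on income:        {model.coef_[0][0]:+.2f}")
print(f"learned weight on redlined area: {model.coef_[0][1]:+.2f}")
# The second weight comes out strongly negative: the model has "learned" to
# penalise an address, faithfully reproducing the pattern it was shown.
```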
“Algorithmic bias is not a glitch. It is history, encoded into mathematics and deployed at scale.”
• • •
Part Two: The Invisible Industry That Knows Everything About You
Every algorithm needs fuel, and that fuel is data—your data. The sheer volume of personal information required to power these systems has created one of the most lucrative and least understood industries on the planet: data brokerage. These are companies whose entire business model is collecting, packaging, and selling information about you to anyone willing to pay.
To appreciate the scale, consider a company called Acxiom LLC. Most people have never heard of it. Yet Acxiom maintains detailed profiles on approximately 2.5 billion consumers across 62 countries—roughly 68 percent of the world’s internet-connected population. The company gathers information from public records, commercial transactions, loyalty programmes, social media activity, and dozens of other sources. It anonymises this data (or claims to), then sells it to banks, retailers, insurance companies, and governments.
Acxiom is not alone. Oracle America, Inc. has built what industry observers describe as a data powerhouse for the generative AI age, acquiring smaller brokers and assembling a database that can construct detailed consumer profiles using over 50 pre-built attributes extracted by dozens of AI models. Epsilon, another giant, processes data for major marketing campaigns worldwide. LexisNexis, originally a legal research service, now functions as a massive aggregator of personal and public records used in everything from insurance underwriting to law enforcement background checks.
Then there are the companies that operate in more specialised—and arguably more consequential—domains. CoreLogic provides analytics for the mortgage industry, holding records on hundreds of millions of property transactions. The “Big Three” consumer credit reporting agencies—Equifax, Experian, and TransUnion—have evolved far beyond simple credit scoring. They are, in practice, enormous data brokers that collect identifying and financial information to help businesses assess whether a person is a worthwhile risk. Historically, these agencies have been criticised for selling data on deeply personal behaviours, including sexual orientation, under the theory that such characteristics could predict loan repayment likelihood.
How They Get Your Data
The collection methods are astonishingly varied. Cookies—tiny tracking files placed on your web browser—allow advertisers to follow your activity across the entire internet, building a portrait of your interests, habits, and purchasing patterns. This practice was pioneered by early online advertising networks and is now virtually universal. But cookies are only the beginning.
A particularly insidious method involves Software Development Kits, or SDKs. These are free code libraries that data brokers offer to app developers. When a developer includes an SDK in their app, it dramatically speeds up the development process and reduces costs. In return, the SDK quietly transmits user data—location, device information, usage patterns—back to the broker. You download a free flashlight app, and a company you have never heard of starts tracking where you go, when you sleep, and how often you visit the doctor’s office.
This creates what researchers call a “privacy paradox.” People say they value their privacy, but in practice, they accept invasive tracking for trivial conveniences—a free app, a small discount, a simpler sign-up process. The result is an ocean of personal data flowing into systems that may, at some future date, be used to deny you a loan, raise your insurance premium, or flag you as a security risk.
• • •
Part Three: The Four Doors Through Which Bias Enters
Understanding algorithmic bias requires knowing that it does not come from a single source. Researchers have identified four major families of bias, each entering the system at a different stage. Think of it as four doors, each one an opportunity for prejudice to sneak inside.
Door One: Historical Bias
This is the most fundamental form. Historical bias occurs when the data used to train an algorithm reflects existing social inequalities. If the past was unfair, the data from the past is unfair, and any model trained on that data will learn to replicate that unfairness.
A vivid example comes from large language models—the AI systems behind tools like chatbots and text generators. These models are trained on billions of words from the internet. Because the internet reflects (and often amplifies) societal stereotypes, the models absorb those stereotypes as statistical truths. Studies have found that in these models, words like “programmer” and “pilot” cluster near male-associated terms, while words like “homemaker” and “maid” cluster near female-associated terms. The model is not expressing an opinion. It is reflecting the biased language of the world it was trained on.
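Researchers probe for these associations with simple geometry: in such a model, every word is represented as a list of numbers, and words used in similar contexts end up close together. The short Python sketch below assumes the freely downloadable GloVe word vectors accessed through the gensim library; the exact figures depend on which embedding you load, but the lopsided pattern described above is what probes of this kind typically reveal.

```python
# A small sketch of how researchers measure gender associations in word
# embeddings. Assumes the public "glove-wiki-gigaword-100" vectors available
# through gensim's downloader (roughly a 130 MB download).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

male_terms = ["he", "man", "male"]
female_terms = ["she", "woman", "female"]

def gender_lean(word):
    """Average similarity to male terms minus average similarity to female terms."""
    to_male = sum(vectors.similarity(word, m) for m in male_terms) / len(male_terms)
    to_female = sum(vectors.similarity(word, f) for f in female_terms) / len(female_terms)
    return to_male - to_female

# Positive values lean "male", negative values lean "female".
for occupation in ["programmer", "pilot", "engineer", "homemaker", "maid", "nurse"]:
    print(f"{occupation:12s} {gender_lean(occupation):+.3f}")
```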
Door Two: Measurement Bias
This occurs when the tools used to collect data are themselves flawed. A striking real-world example comes from photography and medical technology. For decades, camera sensors and film stocks were calibrated primarily for lighter skin tones. This was not an accident; it was a design choice rooted in the demographics of the consumer market. The consequences ripple forward into AI. Facial recognition systems trained on images captured by these biased sensors are dramatically less accurate for people with darker skin—studies have shown error rates up to ten times higher. In law enforcement, these failures have led to wrongful arrests when facial recognition misidentifies a suspect.
Door Three: Proxy Bias
Even when designers explicitly remove protected characteristics—race, gender, religion—from an algorithm’s inputs, the system can still discriminate by using proxies: variables that are closely correlated with those protected traits. The most common example is postal code. Due to the long history of residential segregation in countries like the United States and Canada, a person’s postal code is often a reliable predictor of their racial background. If an algorithm uses postal code to estimate financial risk, it may effectively be using race, without ever mentioning race. This process has been called “digital redlining,” a modern echo of the explicitly racist housing policies that once drew literal red lines around minority neighbourhoods to deny them services.
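A simple way to demonstrate proxy bias is to ask whether the protected trait can be reconstructed from the supposedly neutral inputs. The sketch below uses entirely synthetic data: a made-up city with segregated postal zones and a model, built with the scikit-learn library, that is never shown anyone's group membership yet recovers it from postal code alone.

```python
# Illustrative sketch with synthetic data: a "race-blind" model can often
# reconstruct the protected trait from a correlated proxy such as postal code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 10_000

# A made-up city: 20 postal zones, each with a heavily skewed mix of groups,
# mimicking the effects of residential segregation.
postal_zone = rng.integers(0, 20, n)
zone_mix = rng.beta(0.5, 0.5, 20)                      # most zones skew strongly one way
protected_group = rng.random(n) < zone_mix[postal_zone]

# The "neutral" features a risk model might actually receive:
# postal zone plus a few unrelated noise variables.
X = np.column_stack([postal_zone, rng.normal(size=(n, 3))])

X_train, X_test, y_train, y_test = train_test_split(X, protected_group, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

acc = accuracy_score(y_test, clf.predict(X_test))
print(f"accuracy recovering group membership from 'neutral' features: {acc:.0%}")
# Well above the roughly 50% expected by chance: the postal code leaks the
# protected trait, so a model that leans on it can discriminate by proxy.
```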
Door Four: Feedback Loops
Perhaps the most dangerous form of bias is the self-reinforcing feedback loop. Consider a predictive policing algorithm that directs officers to neighbourhoods with high historical arrest rates. More officers in those neighbourhoods inevitably leads to more arrests—not necessarily because more crime is being committed there, but because more police are watching. Those new arrests feed back into the system as fresh data, “confirming” the algorithm’s original prediction and intensifying the cycle. The algorithm appears to be getting more accurate, when in reality it is manufacturing its own justification.
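The dynamic is easy to reproduce in a few lines of code. The toy simulation below, with invented numbers, gives two districts identical underlying crime but a slightly skewed historical record, then lets patrol allocation follow the data.

```python
# A toy feedback loop: two districts with the same real crime, but District A
# starts with a few more recorded arrests. Patrols follow the data, and arrests
# are only recorded where officers are actually sent.
true_crime = {"A": 100, "B": 100}        # the same real level of crime in both
history = {"A": 55.0, "B": 45.0}         # a small historical imbalance in the records
detection_rate = 0.01                    # recorded arrests per unit of crime, per patrol

for year in range(1, 6):
    hot_spot = max(history, key=history.get)                       # the "high-crime" district, per the data
    patrols = {d: (70 if d == hot_spot else 30) for d in history}  # patrols follow the prediction
    for d in history:
        history[d] += true_crime[d] * patrols[d] * detection_rate  # fresh data "confirms" it
    share_a = history["A"] / (history["A"] + history["B"])
    print(f"year {year}: District A's share of recorded arrests = {share_a:.0%}")
# The share drifts from 55% toward 70%, and the system grows ever more
# confident about a difference in crime that never existed.
```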
• • •
Part Four: When the Algorithm Plays Doctor
The healthcare sector offers some of the most alarming case studies of algorithmic bias, because the stakes are literally life and death.
The Optum Scandal
One of the most widely cited examples involves a commercial algorithm developed by Optum and used by major American health insurers to identify patients who would benefit from intensive care management programmes. The algorithm’s job was straightforward: find the sickest patients so that they could receive extra support. But instead of measuring actual illness, the algorithm used a proxy: healthcare spending. It assumed that patients who cost the healthcare system more money were sicker.
The problem was devastating in its simplicity. Because of systemic barriers—including unequal access to care, implicit provider bias, and economic disadvantage—Black patients in the United States have historically spent less on healthcare than white patients with the same conditions. At the hospital studied, Black patients cost on average $1,800 less per year than white patients with identical numbers of chronic illnesses. The algorithm interpreted this spending gap as a health gap. It concluded that Black patients were healthier and needed less help. In reality, they were just as sick but had been historically underserved. The machine took that history of neglect and turned it into a rule for continued neglect.
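The logic of the failure can be captured in a few lines. The sketch below uses hypothetical patients and invented figures, not the actual Optum model; it simply shows how ranking people by past spending and ranking them by actual illness burden can put the same two patients in opposite order.

```python
# Hypothetical numbers, not the real model: two patients, one from a group whose
# access to care (and therefore spending) has been suppressed.
patients = [
    # (label, number of chronic conditions, last year's healthcare spending in $)
    ("Patient 1, well-served group", 3, 9_800),
    ("Patient 2, underserved group", 5, 8_000),   # sicker, yet roughly $1,800 less was spent on their care
]

# The flawed proxy: score "future need" from past spending.
ranked_by_cost = sorted(patients, key=lambda p: p[2], reverse=True)
# A needs-based alternative: score from the actual illness burden.
ranked_by_need = sorted(patients, key=lambda p: p[1], reverse=True)

print("Ranked by past spending :", [p[0] for p in ranked_by_cost])
print("Ranked by illness burden:", [p[0] for p in ranked_by_need])
# The cost-based ranking sends the healthier, better-served patient to the top
# of the list for extra care, and pushes the sicker, underserved patient down it.
```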
Medical Devices That Cannot See You
The bias extends to the physical tools of medicine. Pulse oximeters—those small clips placed on a fingertip to measure blood oxygen levels—have been shown to systematically overestimate oxygen saturation in patients with dark skin. This is not a minor calibration issue. During the COVID-19 pandemic, inaccurate oxygen readings meant that some Black patients appeared healthier than they were, delaying critical interventions like supplemental oxygen or ventilation. Studies in the United States linked this measurement bias to delayed treatment and higher mortality rates.
AI-based diagnostic tools exhibit similar problems. Systems designed to detect skin cancer, for example, are trained overwhelmingly on images of lighter-skinned patients. In one widely referenced dataset containing over 100,000 dermatological images, only a single image depicted dark brown or black skin. A system trained on such data will inevitably perform poorly when examining patients whose skin tones were barely represented in its training.
The Global Dimension
These biases are amplified in the Global South. In India, digital health initiatives built around smartphone access effectively exclude large portions of women, older adults, and rural populations who lack devices or reliable internet. During the COVID-19 pandemic, India’s Aarogya Setu contact tracing app failed to reach rural communities, creating an uneven public health shield that left the most vulnerable populations most exposed. In Brazil, AI models trained on urban hospital data failed to capture disease patterns in rural areas, missing epidemics because the environmental and socioeconomic factors unique to those communities were absent from the training data.
• • •
Part Five: The Gatekeepers of Money and Home
Credit Scoring: The Score You Cannot Challenge
For millions of people, access to economic opportunity begins with a credit score. Traditionally, these scores were calculated using relatively straightforward financial data: payment history, outstanding debts, length of credit history. But the rise of “big data” has transformed credit scoring into something far more opaque. Modern algorithms may incorporate alternative data sources—how you capitalise words in your text messages, the formatting of your smartphone’s contact list, your social media activity—to assess creditworthiness.
A 2025 academic review found that female applicants consistently received credit scores six to eight points lower than male counterparts with virtually identical financial profiles. When researchers tested large language models on loan evaluation tasks, the AI systems regularly recommended charging higher interest rates to Black applicants while approving white applicants with the same qualifications.
The most troubling aspect may not be the bias itself, but the impossibility of fighting it. Most of the widely used credit algorithms cannot explain how they reached a specific decision. This is the infamous “black box” problem. A person denied credit has no way to know which of the hundreds of data points tipped the scale against them, no way to challenge the logic, and no meaningful avenue for appeal. The consequences of a bad score cascade through a person’s life for years, affecting everything from rent applications to car insurance rates.
Housing: The New Redlining
The rental housing market has become another frontier of algorithmic gatekeeping. Landlords increasingly rely on automated tenant screening services that aggregate credit scores, eviction records, and criminal background checks into a single accept-or-reject recommendation. These systems are fast, cheap, and often dangerously inaccurate.
A behavioural study found that when landlords used automated screening, they relied almost exclusively on the system’s recommendation rather than reviewing the underlying data—even when that data contained crucial context, such as a dismissed criminal charge or an eviction lawsuit that was decided in the tenant’s favour. The score flattens nuance into a single number, and the human decision-maker trusts the number.
SafeRent, a major tenant screening company, faced a lawsuit alleging that its algorithm assigned disproportionately lower scores to Black and Hispanic applicants compared to white applicants. The algorithm ignored housing vouchers and spotless rental records, focusing instead on credit and court records that were themselves shaped by decades of segregation and inequality. Meanwhile, AI-driven rent pricing tools have been accused of enabling collusion between landlords, pushing rents higher and squeezing tenants out of their communities.
• • •
Part Six: The Machine That Throws Away Your Résumé
The automation of hiring has introduced a new category of risk. When a company receives thousands of applications for a single position, it is tempting to let a machine do the initial sorting. But that machine learns from past hiring decisions—and if those past decisions were biased, the machine inherits the bias.
The most famous example is Amazon’s experimental recruitment AI, which the company developed internally and eventually scrapped. The system was trained on a decade of résumé data that reflected the company’s historically male-dominated hiring. It learned that male applicants were more likely to be hired, and it began penalising résumés that contained the word “women’s”—as in “women’s chess club captain”—and downgrading graduates of all-women’s colleges.
The Workday Lawsuit: A Turning Point
The legal landscape shifted dramatically with Mobley v. Workday, widely described as the most closely watched algorithmic employment case of 2025 and 2026. Derek Mobley, a Black applicant over the age of 40 with a disability, alleged that Workday’s automated résumé screening tool systematically rejected his applications across more than 100 different companies that used the software. His argument was that the AI produced illegal disparate impacts—meaning it disproportionately harmed people based on race, age, and disability—through the score thresholds it applied.
Workday’s initial defence was remarkable: the company argued that it was merely a software provider, not an employer, and therefore could not be held responsible for discriminatory outcomes. A federal judge disagreed. The court ruled that Workday could be treated as an “agent” of the employers who used its tool, and allowed the case to proceed as a class action. The judge ordered discovery into features deployed after Workday’s 2024 acquisition of HiredScore, an AI screening company, and emphasised that the speed of automated rejections left virtually no room for human review or override.
The case has sent shockwaves through the HR technology industry. Companies that purchase automated screening tools are now demanding bias audit clauses in their contracts, and financial analysts predict a significant rise in compliance spending as new state laws require annual algorithmic assessments.
• • •
Part Seven: The Laws Trying to Catch Up
Europe Leads: The EU AI Act
The European Union adopted the world’s first comprehensive AI law in June 2024. Known as the EU AI Act, it establishes a risk-based framework that classifies AI systems according to the danger they pose to fundamental rights. Systems used in healthcare, employment, education, and law enforcement are designated “high-risk” and must undergo rigorous assessments covering data quality, transparency, human oversight, and bias testing before they can be deployed. At the highest end of the scale, “unacceptable risk” systems—such as government-operated social scoring—are banned outright.
America’s Fragmented Response
The United States has taken a fundamentally different approach. Rather than passing a single national law, the U.S. relies on a patchwork of sector-specific regulations and voluntary commitments. President Biden’s 2023 Executive Order 14110 laid out principles for algorithmic fairness, but the subsequent Executive Order 14179, issued in 2025 under President Trump and titled “Removing Barriers to American Leadership in Artificial Intelligence,” signalled a shift toward lighter regulation in the name of innovation.
Into this federal vacuum, individual states have stepped. New York City’s Local Law 144 mandates annual bias audits for automated employment tools. The Colorado AI Act requires annual impact assessments for high-risk systems. These local laws often rely on a legal concept called “disparate impact”—meaning a system can be held liable if it disproportionately harms a protected group, even if no one intended it to discriminate.
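In practice, disparate impact is usually quantified by comparing selection rates across groups. One common benchmark, borrowed from long-standing US employment guidance, is the "four-fifths rule": if one group's selection rate falls below 80 percent of the most favoured group's rate, the disparity is flagged for scrutiny. The short sketch below, with invented numbers, shows the arithmetic such a bias audit performs.

```python
# Invented numbers for illustration: how a bias audit quantifies disparate impact.
applicants = {
    # group: (number screened, number advanced past the automated screen)
    "Group A": (1_000, 300),
    "Group B": (1_000, 150),
}

rates = {g: advanced / screened for g, (screened, advanced) in applicants.items()}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    verdict = "potential adverse impact" if impact_ratio < 0.8 else "within the benchmark"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({verdict})")
# Group B's impact ratio is 0.50, far below the four-fifths (0.8) benchmark,
# which is exactly the kind of disparity an annual bias audit is meant to surface.
```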
The Right to an Explanation
Europe’s General Data Protection Regulation (GDPR) includes a provision—Article 22—that grants individuals the right not to be subject to significant decisions made entirely by automated systems. In February 2025, the Court of Justice of the European Union issued a landmark ruling clarifying that companies must provide intelligible information about the logic behind automated decisions. Crucially, the court held that trade secrecy cannot be used as a blanket excuse to withhold this information. If a credit score “decisively influences” a lender’s decision, the entity that generated that score bears responsibility under Article 22.
• • •
Part Eight: Fighting Back — Solutions, Safeguards, and the Long Road Ahead
Technical Fixes
Researchers have developed a growing toolkit of technical interventions. Data cleaning and screening involves auditing training datasets for errors, gaps, and historical distortions before any model is built. Adversarial debiasing pits the main model against a second model that tries to guess a person’s protected characteristics from the main model’s predictions; the main model is then trained until that guessing game fails, stripping much of the bias out of its outputs. Explainable AI (XAI) techniques help auditors see which variables are driving a given decision, while fairness auditing toolkits such as IBM’s AI Fairness 360 and Aequitas measure how a system’s outcomes differ across demographic groups. And participatory design brings affected communities into the design process, ensuring that the people most likely to be harmed have a voice in how the system is built.
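To make the first of these concrete, here is a minimal, hand-rolled version of the screening step, run on synthetic data. Real toolkits such as AI Fairness 360 and Aequitas automate far richer versions of these checks; this sketch only shows the kind of warning sign they look for before any model is trained.

```python
# A minimal data-screening sketch on synthetic training data: check group
# representation and whether the historical labels already differ by group.
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Which group each past applicant belonged to, and whether the historical
# process approved them.
group = rng.choice(["majority", "minority"], size=n, p=[0.9, 0.1])
approved = np.where(group == "majority", rng.random(n) < 0.50, rng.random(n) < 0.30)

for g in ("majority", "minority"):
    mask = group == g
    print(f"{g:8s}: {mask.mean():5.1%} of the training data, "
          f"historical approval rate {approved[mask].mean():.1%}")
# Two warning signs before a single model is built: the minority group is barely
# represented, and the "ground truth" a model would learn from is already skewed.
```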
The Human Oversight Problem
One of the most commonly proposed safeguards is “a human in the loop”—the idea that a real person should review and approve every consequential algorithmic decision. In theory, this sounds like common sense. In practice, it is deeply unreliable. Research on “automation bias” shows that when humans work alongside automated systems, they tend to trust the machine’s judgement over their own, even when they have evidence the machine is wrong. Over time, the human reviewer becomes a rubber stamp. To make human oversight meaningful, organisations must give reviewers genuine authority to override the system, expose them to the system’s failures during training, and invest in AI literacy so they understand what the machine is actually doing.
Mandatory Independent Audits
Perhaps the single most important reform is the requirement for third-party audits of high-stakes algorithms. These audits must go beyond surface-level testing. They need access to the algorithm’s training data, its decision logic, and its real-world outcomes across different demographic groups. Without independent scrutiny, organisations are simply grading their own homework.
• • •
Part Nine: What Happens If We Do Nothing
Algorithmic Inheritance
A 2025 study from the MIT Media Lab uncovered a phenomenon researchers call “Algorithmic Inheritance.” The study found that surnames associated with historical wealth and power—think old-money family names—consistently boosted a person’s chances in AI-driven hiring, lending, and leadership selection, regardless of their actual qualifications. The AI had learned that certain names correlated with success, and it applied that correlation as a filter. In effect, the algorithm was extending the privileges of the past into the future, creating a cycle where social status becomes a self-fulfilling prophecy passed down through generations.
Environmental Injustice by Algorithm
Algorithmic bias does not only affect individuals. In the growing field of sustainability AI, biased models risk misallocating resources on a planetary scale. Environmental monitoring data tends to be denser and higher-quality in wealthy areas, simply because that is where the sensors are. An AI model trained on this data may underestimate pollution in poor communities while overestimating problems in affluent ones. The result is a distorted picture that directs cleanup resources toward the wrong places, deepening the environmental inequality that already exists.
The Alignment Question
Looking further ahead, the rapid advance of AI capabilities raises what researchers call the “alignment problem”: the challenge of ensuring that increasingly powerful AI systems remain aligned with human values and interests. If bias is already embedded in today’s relatively simple systems, the risks multiply as AI becomes more autonomous and more capable. The concern is not merely that biased AI will make bad decisions, but that it will make bad decisions with such speed and scale that human correction becomes impossible. The window for building fairness into the architecture of these systems is now. Once the foundations are set, they become extraordinarily difficult to change.
“The goal of just automation is not to eliminate all error. It is to ensure that the errors that do occur do not systematically fall on the shoulders of those already marginalised by history.”
• • •
The Verdict
The data brokerage industry is the invisible engine of the automated economy, and its lack of transparency is a fundamental barrier to fairness. Algorithmic bias is not a software bug—it is a mirror reflecting the societies that built these systems, and fixing the code without addressing the underlying data and the historical injustices encoded within it is like treating a symptom while ignoring the disease. The legal landscape is shifting, as the Workday class action and the European Court’s transparency ruling demonstrate, signalling that the era of “software provider immunity” may be drawing to a close.
But the most important conclusion is this: algorithmic bias is not somebody else’s problem. If you have applied for a loan, rented an apartment, submitted a résumé, visited a hospital, or simply browsed the internet, you have already been profiled, scored, and sorted by systems you did not choose, cannot see, and were never asked to consent to. The invisible judges are already sitting in their courtrooms. The question is whether we will demand accountability before the gavel falls—or after.
BEHIND THE STORY
An editorial note on how this piece was researched and written
Why We Wrote This
This article began with a simple question that kept nagging at us: if algorithms are making so many of the decisions that shape people’s lives, why do so few people understand how they work? The academic literature on algorithmic bias is extensive and growing, but it is written for specialists. The news coverage, while improving, tends to focus on individual scandals rather than the systemic architecture that connects them. We wanted to build a single, comprehensive guide that a reader with no technical background could pick up and, by the end, genuinely understand what is at stake.
The Research
We drew on more than 50 primary sources for this piece, spanning peer-reviewed academic papers, investigative journalism, legal filings, regulatory documents, and reports from civil rights organisations. The legal analysis of the Workday litigation is based on court filings and analyses from Fisher Phillips, FairNow, and Inside Tech Law. The healthcare findings draw primarily from research published in the Proceedings of the National Academy of Sciences, a comprehensive analysis of bias in public health AI from the National Institutes of Health’s PubMed Central database, and clinical studies on pulse oximeter accuracy conducted at major U.S. hospitals.
The data brokerage section relied heavily on a transnational interdisciplinary review published in Internet Policy Review, supplemented by investigative databases maintained by Privacy Bee and analysis from Bernard Marr’s consumer data tracking reports. The financial and housing sections drew on research from Women’s World Banking, a 2025 credit scoring case study from Real World Data Science, Georgetown Law’s analysis of tenant screening programmes, and Columbia University’s review of AI-driven rent pricing tools.
What Surprised Us
Several findings during the research process genuinely startled us. The MIT Media Lab’s discovery that surnames alone could influence AI-driven hiring and lending outcomes was one. It is one thing to know abstractly that historical privilege perpetuates itself; it is another to see a controlled study demonstrate that your family name—a characteristic you did not choose and cannot change—can measurably alter your economic future in an algorithmic system.
The Optum healthcare algorithm was another revelation, not because of the bias itself, but because of its elegant simplicity. Using healthcare spending as a proxy for illness severity seems perfectly reasonable on the surface. It took careful analysis to reveal that the proxy was deeply contaminated by systemic racism, and that the algorithm was essentially automating the consequences of decades of unequal access to care. It is a cautionary tale about how seemingly neutral design choices can encode profound injustice.
We were also struck by the behavioural research on landlord decision-making. The finding that landlords almost never looked past the automated screening score—even when critical context was readily available—speaks to a broader human tendency that makes algorithmic bias so dangerous: once a number is presented with the authority of technology behind it, people stop asking questions.
What We Left Out
No single article can cover every dimension of this topic. We made deliberate choices to focus on the sectors where algorithmic bias has the most immediate impact on everyday people: healthcare, housing, finance, and employment. We touched on criminal justice and predictive policing only briefly, though that field warrants an investigation of its own. We also did not explore military applications of biased AI, algorithmic content moderation on social media platforms, or the specific challenges facing Indigenous communities in Canada, all of which are urgent and deserving of dedicated coverage.
Where the Story Goes Next
The regulatory landscape is evolving rapidly. As the EU AI Act’s provisions continue to take effect and the Workday litigation moves through the courts, the legal framework for algorithmic accountability is being written in real time. We intend to follow these developments closely. We are also tracking the emergence of mandatory bias audit requirements at the state level in the United States and the growing call for international harmonisation of AI governance standards.
If you have experienced what you believe to be algorithmic discrimination—a loan denial that did not make sense, a résumé that seemed to vanish into a black hole, a medical decision that felt wrong—your story matters. The pattern only becomes visible when individual experiences are connected.
— The Synexmedia Editorial Team, March 2026