Apr 08, 2026

Untitled Post


3. Bias, Discrimination, and Broken Fairness

AI systems don’t emerge from a vacuum. They reflect, and often amplify, the biases embedded in their training data and their designers’ choices. The mechanisms are technical (gradient descent optimizing on skewed proxies), but the consequences fall on human beings suffering real harm.

Algorithmic bias refers to systematic, repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over another. In AI, this typically happens when systems are trained on biased data, producing discrimination against minority groups in hiring, finance, and criminal justice. AI inherits the human prejudices present in its training data and reproduces them as discriminatory outcomes.
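
To make the mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data: the protected attribute is never fed to the model, yet a correlated proxy feature lets the model reconstruct the disparity baked into its historical labels.

```python
# Minimal sketch (synthetic, made-up data): a model trained on biased
# historical labels reproduces the bias through a correlated proxy
# feature, even though the protected attribute is never a model input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # protected attribute (0/1); excluded from the model
skill = rng.normal(0, 1, n)                # true qualification, independent of group
zip_proxy = group + rng.normal(0, 0.5, n)  # proxy feature correlated with group

# Historical labels encode human bias: group 1 was approved less often
# at the same skill level.
label = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, zip_proxy])    # group itself is dropped; the proxy is not
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```

Dropping the sensitive column isn’t enough; the bias rides in on whatever correlates with it.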

The COMPAS controversy set the template. In 2016, ProPublica reported that the COMPAS recidivism algorithm used in U.S. criminal courts falsely flagged Black defendants who did not go on to reoffend as high risk nearly twice as often as white defendants (roughly 45% versus 23%). The algorithm didn’t take race as an explicit input, but it learned racial disparities from historical data that reflected decades of unequal policing and sentencing.
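
The disparity ProPublica measured is a gap in false positive rates: among people who never reoffended, how often was each group still flagged high risk? A toy sketch of that metric, with illustrative arrays rather than COMPAS data:

```python
# Per-group false positive rate: how often people who did NOT reoffend
# were still flagged high risk. The arrays below are illustrative.
import numpy as np

def false_positive_rate(flagged_high_risk, reoffended):
    """Share of non-reoffenders who were flagged high risk."""
    did_not_reoffend = ~reoffended
    return flagged_high_risk[did_not_reoffend].mean()

# Hypothetical toy data: one entry per defendant.
flagged = np.array([1, 1, 0, 1, 0, 0, 1, 0], dtype=bool)
reoffended = np.array([0, 1, 0, 0, 0, 1, 0, 0], dtype=bool)
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in ("A", "B"):
    m = group == g
    print(g, false_positive_rate(flagged[m], reoffended[m]))
```

A model can have identical overall accuracy for both groups and still have wildly different false positive rates, which is exactly the pattern ProPublica found.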

Facial recognition errors follow predictable patterns. The 2018 Gender Shades study out of the MIT Media Lab found gender-classification error rates of up to 34.7% for darker-skinned women versus 0.8% for lighter-skinned men. These aren’t abstract statistics. Robert Williams spent 30 hours in a Detroit jail in 2020 after a facial recognition match misidentified him as a shoplifting suspect. Nijeer Parks spent 10 days in jail in 2019 over a false match tied to a store theft in Woodbridge, New Jersey.

Hiring algorithms discriminate too. Amazon scrapped an AI recruiting tool it had developed between 2014 and 2017 after internal reviews found it downgraded resumes containing the word “women’s,” as in “women’s chess club captain.” The system had learned from historical hiring data dominated by male applicants. A 2019 academic study found that Facebook’s ad-delivery algorithms skewed housing and employment ads along race and gender lines without any explicit instruction to do so.
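
Audits like Amazon’s internal one often start by inspecting what a model has actually learned. The sketch below, with toy resumes and fabricated hiring outcomes (not Amazon’s system or data), shows how skewed historical labels attach negative weight to a token like “womens,” which a coefficient audit can surface:

```python
# Illustrative sketch (toy data): a text model trained on historically
# skewed hiring decisions learns a negative weight for gendered tokens,
# visible when the coefficients are ranked.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain chess club software engineer",
    "womens chess club software engineer",
    "intern software engineer python",
    "womens soccer team python developer",
]
hired = [1, 0, 1, 0]   # biased historical outcomes, not ground-truth merit

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# Rank tokens by learned weight; strongly negative ones deserve scrutiny.
for token, weight in sorted(
    zip(vec.get_feature_names_out(), clf.coef_[0]), key=lambda t: t[1]
):
    print(f"{token:12s} {weight:+.2f}")
```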

Healthcare bias carries life-or-death stakes. A 2019 paper in Science revealed that an algorithm sold by Optum and used by hospitals to prioritize patients for extra care systematically underestimated Black patients’ needs. The flaw: it used historical healthcare spending as a proxy for health need, but Black patients historically had less access to care, and thus lower spending, even at equivalent illness severity. The researchers estimated the bias cut the number of Black patients flagged for extra care by more than half.
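
The proxy flaw is easy to reproduce with synthetic numbers: when spending rather than sickness is the prediction target, any group whose access to care is suppressed gets under-ranked at equal need. A sketch, with every figure invented for illustration:

```python
# Sketch of the proxy flaw with synthetic numbers: spending, not health
# need, is the target, and access barriers suppress spending for group B
# even at identical need.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group_b = rng.integers(0, 2, n).astype(bool)  # group with less access to care
need = rng.gamma(2.0, 1.0, n)                 # true health need; same distribution for both groups

# Spending tracks need, but access barriers suppress it for group B.
spending = need * np.where(group_b, 0.6, 1.0)

# Policy: enroll the top 20% by (perfectly predicted) spending in extra care.
enrolled = spending >= np.quantile(spending, 0.80)
sickest = need >= np.quantile(need, 0.80)     # the genuinely sickest 20%

for name, mask in (("group A", ~group_b), ("group B", group_b)):
    rate = enrolled[mask & sickest].mean()
    print(f"{name}: enrollment rate among the sickest {rate:.2f}")
```

Even a perfectly accurate spending predictor inherits the access gap, because the target itself is biased.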

These AI algorithms often function as black boxes: neural networks with billions of parameters that defy human audit. Appeals and corrections become nearly impossible when a system can’t explain its reasoning. And despite the proliferation of “responsible AI” frameworks, MIT’s 2025 study found that 95% of corporate AI projects fail to properly integrate fairness testing, prioritizing speed and profit over explainable-AI principles.
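
Fairness testing of the kind those frameworks prescribe doesn’t have to be exotic. Below is a minimal sketch of such a check; the function name, metrics, and threshold are illustrative choices, not a standard API. It compares selection and error rates across groups on held-out data and reports any gap above a tolerance:

```python
# Minimal fairness audit sketch: per-group selection rate, false positive
# rate, and false negative rate, plus the list of metrics whose
# between-group gap exceeds a tolerance. Names and thresholds are
# illustrative, not a standard.
import numpy as np

def fairness_audit(pred, truth, group, max_gap=0.10):
    report = {}
    for g in np.unique(group):
        m = group == g
        report[g] = {
            "selection_rate": pred[m].mean(),
            "fpr": pred[m & ~truth].mean(),    # flagged among actual negatives
            "fnr": (~pred)[m & truth].mean(),  # missed among actual positives
        }
    gaps = {
        metric: max(r[metric] for r in report.values())
              - min(r[metric] for r in report.values())
        for metric in ("selection_rate", "fpr", "fnr")
    }
    violations = [metric for metric, gap in gaps.items() if gap > max_gap]
    return report, violations

# Toy usage on random data; in practice this runs on a held-out test set.
rng = np.random.default_rng(2)
truth = rng.random(2000) > 0.5
pred = rng.random(2000) > 0.5
group = np.where(rng.random(2000) > 0.5, "A", "B")
report, violations = fairness_audit(pred, truth, group)
print(report, "violations:", violations)
```

Wired into a deployment pipeline the way unit tests gate a release, even a check this simple would catch the grossest disparities before they reach users.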