← KeepSanity
Apr 08, 2026

Risks of AI: From Everyday Harms to Existential Threats

AI risks span a wide spectrum-from measurable present-day harms like algorithmic bias, deepfake manipulation, and AI-enabled cybercrime to longer-term concerns about artificial general intelligence...

Key Takeaways

Introduction: Why AI Risk Matters Now

Between 2012 and 2024, artificial intelligence went from an academic curiosity to front-page news reshaping industries, elections, and warfare. The turning points came fast: ImageNet’s 2012 deep learning breakthrough enabled modern image recognition, AlphaGo’s 2016 victory over world Go champions demonstrated AI’s strategic prowess, and ChatGPT’s November 2022 launch brought generative AI into mainstream consciousness almost overnight.

Since then, the acceleration has been relentless. GPT-4 arrived in March 2023 with multimodal capabilities handling text, images, and code. Gemini 1.5 introduced ultra-long context windows exceeding a million tokens. Claude 3 emphasized constitutional AI for safer outputs. These AI models now power search engines, social media algorithms, hiring software, predictive policing tools, high-frequency trading, drone targeting, and diagnostic imaging.

This means AI is no longer an isolated technology. When an AI system fails today, the effects cascade across interconnected systems-from wrongful arrests in Detroit to manipulated elections in Slovakia to autonomous weapons selecting human targets in Gaza.

For clarity: most current AI systems are what researchers call “narrow AI”-specialized tools excelling at single tasks like facial recognition or language generation. They’re not artificial general intelligence (AGI), which would match or exceed human intelligence across diverse cognitive domains. But these narrow systems are becoming increasingly general-purpose, and the same dynamics creating today’s concrete risks-bias, opacity, racing incentives-also set the stage for catastrophic outcomes from more capable successors.

This article moves from near-term harms already visible in everyday life to long-term existential risks that AI researchers increasingly take seriously, then to regulation and what individuals and organizations can realistically do in 2024–2026.

What Is AI Risk? Main Categories Explained

AI risk refers to the potential for artificial intelligence systems to cause harm, whether through unintended consequences, misuse, or failures in design and deployment. These risks can affect individuals, organizations, and society at large. The main categories of AI risk include:

Understanding these categories is essential for recognizing both the immediate and long-term challenges posed by AI technologies.

Near-Term Societal Risks: Bias, Inequality, and Everyday Harms

Many AI risks aren’t hypothetical futures-they’re measurable right now. Discriminatory outcomes, wrongful arrests, unfair loan decisions, and skewed access to education and healthcare are already documented consequences of deploying AI algorithms without adequate oversight.

Algorithmic Bias in Action

Algorithmic bias occurs when AI systems produce unfair or discriminatory outcomes due to biases present in training data or design. For example, AI algorithms can perpetuate existing societal biases, leading to unfair treatment in applications such as recruitment and law enforcement.

AI bias emerges when training data reflects historical prejudices, and homogeneous developer teams fail to catch the resulting problems.

Consider these concrete cases:

COMPAS Recidivism Scoring (2016): ProPublica’s analysis of this US court tool found it was twice as likely to falsely label Black defendants as high-risk compared to white defendants. The system used proxy variables like zip code that correlated with race, embedding discrimination into sentencing recommendations.

Facial Recognition Failures (2019–2020): NIST-tested vendors showed error rates up to 100 times higher for Black and Asian faces. In Detroit, Robert Williams-a Black man-was wrongfully arrested in 2020 based on misidentification by facial recognition systems. Similar cases have since emerged across the country.

Amazon’s Recruiting Tool (2018): Amazon scrapped an AI hiring system after discovering it systematically downranked women’s resumes. The model had been trained on 10 years of male-dominated hiring data, learning to penalize indicators of female candidates.

Medical Algorithm Bias (2019): A University of California study found healthcare algorithms underestimated Black patients’ medical needs, allocating 18% less care despite equal severity-because the system used past healthcare spending as a proxy for illness, reflecting existing disparities in access.

These aren’t edge cases. A 2023 Wilke AI Index found that AI development remains roughly 90% male and skewed heavily toward Western, high-income backgrounds-meaning the teams building these systems often lack the perspectives needed to catch their blind spots.

Socioeconomic Displacement

Beyond discrimination, AI development is reshaping labor markets. McKinsey’s 2017–2030 analysis projected that roughly 30% of work hours in the US and Europe could be automated, potentially displacing 15–30 million jobs in clerical, call-center, and routine office roles.

Earlier estimates from Oxford’s 2013 study suggested 47% of US jobs were at risk of automation. While these projections vary, the pattern is consistent: displacement hits lower-income and minority workers hardest, concentrating gains among those who own or control AI tools.

Mental Health and Cognitive Effects

The harms extend beyond economics. A 2024 Stanford study linked heavy student use of AI tools to a 20% drop in writing originality. AI companions like Replika have been linked to dependency issues, with users reporting grief-like responses when features changed in 2023.

Overreliance on AI assistance may erode critical thinking-a concern as these tools become embedded in schools, offices, and daily decision making processes.

A diverse group of office workers is gathered around computer screens, displaying concerned expressions as they engage in discussions about the potential risks of AI systems and their implications for human safety. Their focus suggests a serious examination of AI development and the existential threats posed by powerful AI technologies.

Information Hazards: Misinformation, Deepfakes, and Social Manipulation

Since 2016, recommendation algorithms on YouTube, TikTok, and Facebook have shaped what billions of people believe about politics, health, and each other. These systems maximize engagement-which often means amplifying polarizing content. The 2020 Cambridge Analytica revelations showed how this dynamic could be weaponized to influence elections.

Generative AI since 2022 has supercharged these threats. Language models can now produce convincing but false content at scale, while diffusion models generate synthetic images and audio practically indistinguishable from reality.

Real-World Deepfake Incidents

The 2024 election cycle saw AI generated content deployed as a political weapon:

Beyond elections, nonconsensual deepfake pornography targets women in 95% of cases according to a 2023 Sensity AI report-with minors affected in 10% of documented incidents.

Model Collapse and Information Pollution

A 2024 Nature paper documented a troubling feedback loop: when AI models are trained on AI generated content (which is flooding the web), their performance degrades by 30–50% over generations. Rare concepts and minority perspectives get washed out, leaving a polluted information environment that makes the next generation of models even worse.

Threats to Democratic Discourse

Bad actors can now deploy:

Attribution becomes nearly impossible when malicious actors use AI for persuasion at scale. The asymmetry is stark: a single person with access to modern AI tools can generate more synthetic content than entire news organizations can fact-check.

Security and Cyber Risks: From Script Kiddies to Autonomous Attackers

AI technology lowers the barrier for cybercrime dramatically. Non-experts can now launch sophisticated attacks using off-the-shelf tools-and defenders face adversaries that adapt faster than ever.

AI-Assisted Fraud and Phishing

Language models generate multilingual phishing emails that fool 40% more recipients than traditional attacks, according to a 2024 Proofpoint study. Voice-cloning scams have mimicked family members in “kidnapping” frauds costing $25 million in US cases during 2023 alone.

Polymorphic malware-code that mutates to evade detection-has become more accessible through AI-assisted generation. The 2025 ransomware campaigns already show these techniques in action.

Vulnerability Discovery at Scale

Tools like GitHub Copilot can scan code repositories for exploitable bugs 5x faster than manual review. In the right hands, this accelerates security research. In the wrong hands, it enables mass exploitation of infrastructure-power grids, hospitals, logistics networks.

Privacy and Data Risks

AI training on scraped data creates novel privacy risks. Early ChatGPT bugs in 2023 exposed user conversation histories and regurgitated personally identifiable information. Surveys show 68% of organizations have experienced AI-related data incidents.

Corporate models can inadvertently memorize confidential documents from training data, and prompt injection attacks can hijack AI agents integrated into browsers or email to exfiltrate sensitive information.

Model Theft and Supply Chain Attacks

Attackers can reconstruct model weights through API queries, as demonstrated in 2023 attacks on Hugging Face models. Training data poisoning can subtly shift model behavior, and supply-chain attacks on datasets threaten the integrity of AI systems before they’re even deployed.

AI in Warfare and State Power: The New Arms Race

The dynamics driving AI development increasingly mirror the nuclear arms race-but moving faster and with even less regulatory oversight to date. The US Department of Defense allocated $1.8 billion to AI in its 2023 strategy. China invests an estimated $10 billion annually. Russia and the EU are racing to keep pace.

Lethal Autonomous Weapons Systems

Lethal autonomous weapons systems (LAWS)-sometimes called “killer robots”-are no longer theoretical:

Turkey’s Kargu-2 Drones (Libya, 2020): A UN report documented autonomous drones hunting targets without human intervention during the Libyan civil conflict.

Israel’s Lavender System (Gaza, 2024): According to +972 Magazine, this AI system reportedly selected 37,000 targets with an acknowledged 10% civilian error rate, raising concerns about human supervision in life-and-death decisions.

These systems lower the political cost of aggression. When own-soldier casualties drop toward zero, the barriers to military action erode. Drone swarms can scale targeting with minimal human intervention, making autonomous weapons a force multiplier for states willing to deploy them.

AI-Enhanced Cyberwarfare

US Cyber Command’s 2024 AI tools automate malware reconnaissance and deployment. The difficulty of attribution in cyberspace raises the risk of miscalculation and escalation-particularly when AI systems manage responses to perceived attacks.

Surveillance and Social Control

China’s facial recognition systems in Xinjiang have reportedly tracked 1.4 million Uyghurs, according to 2020 reports. US predictive policing tools in Los Angeles flagged minority individuals 3.5x more often than others per a 2021 study.

The infrastructure for large-scale surveillance exists. The question is how widely it spreads-and who controls it.

Flash War Risks

When AI systems manage early-warning, targeting, or escalation decisions, they can interact in unforeseen ways. Think of the 2010 Flash Crash in financial markets-but with kinetic weapons instead of stock prices. The speed of AI decision making processes may outpace human ability to intervene, creating serious threats of unintended escalation.

A military drone is depicted flying over rugged mountainous terrain during a vibrant sunset, highlighting the intersection of advanced AI technology and military applications. This image reflects the potential benefits and risks of AI systems in defense, raising concerns about safety and human control in the context of autonomous weapons.

Environmental and Resource Risks

Modern AI-especially large language models and diffusion models-is extraordinarily resource-intensive. Training GPT-3 in 2020 consumed an estimated 1,287 megawatt-hours of electricity. GPT-4-class models use roughly 10x more.

Data centers powering AI inference are projected to draw 1,000 terawatt-hours annually by 2026, according to a 2024 IEA report-equivalent to Japan’s entire electricity consumption.

Water and Cooling

Hyperscale data centers require massive cooling. Google’s data centers used 5 billion gallons of water in 2022, sparking protests in Arizona and concerns in water-stressed regions where new facilities are planned.

Hardware Supply Chains

AI development depends on rare earth elements and advanced chips manufactured primarily by TSMC in Taiwan and designed by NVIDIA. This concentration creates:

While AI can also optimize energy use and logistics, the current trajectory-ever-larger models, more inference-risks outpacing efficiency gains without policy intervention. The environmental impact is a cost that current market incentives largely externalize.

Long-Term and Existential Risks: AGI, Misalignment, and Runaway Systems

Artificial general intelligence refers to systems that could outperform human intelligence across most economically valuable tasks-and potentially improve their own capabilities autonomously. Superintelligence would exceed humans in all cognitive domains.

These aren’t near-term realities, but AI researchers increasingly treat them as serious planning considerations rather than science fiction.

Expert Concern Is Rising

In March 2023, the Future of Life Institute’s open letter calling for a pause on training systems more powerful than GPT-4 gathered 33,000 signatures, including Elon Musk and Geoffrey Hinton.

In May 2023, researchers from OpenAI, Anthropic, DeepMind, and leading universities signed a one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Surveys of AI experts (e.g., 2023 AI Impacts) suggest a 50% probability of high-level machine intelligence by 2047, with 10–20% probability estimates for existential risk if development continues without strong safeguards.

Alignment and Control Problems

AI safety research grapples with fundamental challenges:

Catastrophic Scenarios Short of Extinction

Even without apocalyptic outcomes, powerful AI systems could:

These risks are extensions of dynamics already visible: the AI race prioritizing speed over safety, opacity in how models work, and misalignment between what systems optimize for and what humans actually want.

Maintaining Perspective

Existential threat scenarios depend on future systems, not current ones. But the decisions shaping those futures-how we regulate AI, what safety research we fund, how we deploy AI systems-are being made now.

Both skeptical and concerned perspectives deserve consideration. What’s clear is that treating these risks as purely hypothetical would be a mistake, given the pace of capability development and the explicit warnings from AI researchers at the frontier.

Governance, Regulation, and Safety Efforts

AI risk isn’t inevitable destiny. Governance choices in 2024–2030 will strongly shape outcomes-similar to how nuclear non-proliferation or climate policy turning points shaped their respective domains.

Major Regulatory Efforts

EU AI Act (2024): The most comprehensive AI regulation to date, establishing risk tiers from minimal risk to unacceptable. It bans certain uses of real-time biometric surveillance and mandates transparency for high-risk applications, with fines up to 7% of global revenue.

US Biden Executive Order (October 2023): Required safety testing for powerful AI systems, established reporting requirements for AI labs, and directed agencies to develop sector-specific guidance.

NIST AI Risk Management Framework (2023): Provides voluntary guidance for organizations to inventory AI systems, conduct audits, and implement governance practices.

International Coordination: The G7 Hiroshima AI process in 2023 established voluntary codes of conduct. UN discussions on lethal autonomous weapons continue, though binding treaties remain elusive.

Regional Policy Summary

Region

Key Initiative

Status

EU

AI Act

Adopted 2024, phased implementation

US

Biden Executive Order

In effect, agency implementation ongoing

UK

Pro-innovation framework

Sector-specific guidance, no comprehensive law

China

Algorithm regulations

Multiple rules on recommendation, deepfakes

Global

G7 Hiroshima Code

Voluntary, non-binding

Corporate and Technical Safety

Leading AI developers have adopted practices including:

These efforts represent genuine progress, but current governance remains fragmented and often lags behind capability gains-creating a governance gap that must close this decade to matter.

The image depicts an international conference room where a diverse group of delegates is seated around a curved table, engaging in discussions about the implications of artificial intelligence. The setting highlights the importance of collaboration among global leaders to address the risks and ethical considerations associated with AI development and its potential impacts on human safety and society.

Practical Risk Mitigation for Organizations and Individuals

Most readers aren’t heads of state or frontier lab CEOs. But you still influence how AI is used-in your company, your products, your family’s daily choices.

For Organizations

Establish AI Governance Policies: Create clear policies covering acceptable use, oversight requirements, and accountability structures. The NIST AI Risk Management Framework (2023) provides a solid starting template.

Maintain Model Inventories: Know what AI systems you’re using, what data they’re trained on, and what decisions they’re influencing. You can’t manage risks you haven’t identified.

Implement Human-in-the-Loop Review: For high-stakes decisions-hiring, lending, medical diagnosis, legal matters-ensure human supervision before AI outputs become actions with legal responsibility attached.

Conduct Regular Audits: Bias audits have shown 20–50% error reductions when conducted systematically. Security audits catch prompt injection vulnerabilities and data leakage before attackers do.

Cross-Functional Oversight: Don’t leave AI governance solely to engineering. Involve legal, security, ethics, and domain experts in approving AI systems for deployment.

For Individuals

Verify Before Sharing: Especially during election seasons, be skeptical of sensational audio or video. Tools like Hive Moderation can help detect deepfakes, but the best defense is slowing down before amplifying.

Protect Your Data: Be cautious about which services can collect personal data, especially biometric identifiers like face and voice. Opt out of unnecessary data collection when possible, and read privacy policies for AI tools that process health data or personal communications.

Educate Your Family: Children need to understand that AI generated images and audio can be fabricated, and that AI companions are products, not relationships. Digital literacy for the AI age means understanding what these computer systems can and cannot do.

Stay Informed Without Burnout: The AI news cycle moves fast, but most daily updates are noise. Low-frequency, high-signal sources-like KeepSanity AI’s weekly brief-let you track major developments without sacrificing your time or mental health to daily inbox floods.

The Power of Informed Pressure

Responsible use and informed pressure from users, employees, and voters can tilt incentives toward safer AI. Companies respond to customer expectations. Legislators respond to constituent priorities. The choices you make about what AI tools to adopt-and what practices to demand-matter more than they might seem.

Conclusion: Steering AI Away from Catastrophe

AI is already reshaping economies, information ecosystems, security landscapes, and warfare-with clear present-day harms and plausible paths to much larger catastrophes if left unchecked.

The same features that make artificial intelligence powerful-scale, speed, autonomy, and relentless optimization-can either amplify human flourishing or magnify our worst incentives and errors. The difference lies in the choices we make now.

Governments can regulate AI to ensure accountability and prevent the most dangerous capabilities from proliferating unchecked. Companies can adopt strong governance practices rather than racing to deploy AI systems without adequate safeguards. AI researchers can prioritize safety alongside capability. And citizens can demand transparent policies, responsible deployment, and clear rules for AI in warfare, surveillance, and high-stakes decisions.

The 2020s are a critical decade for AI. The decisions made now about arms races, transparency, and oversight will echo through the rest of the century. Whether AI becomes a tool that broadly benefits humanity or a force that concentrates power and amplifies harm depends on whether we treat these risks with the seriousness they deserve.

Staying informed is part of the answer-but it doesn’t have to mean drowning in daily headlines. Curated, low-noise sources let you track how AI risks and regulations evolve without sacrificing your time or sanity. That’s exactly why KeepSanity AI exists: one email per week with only the major AI news that actually matters.

Lower your shoulders. The noise is gone. Here is your signal.

FAQ

Is current AI already an existential threat, or is that still hypothetical?

Today’s deployed AI systems (as of 2024–2025) are not AGI and don’t pose existential risks directly. However, they can cause systemic harm to elections, financial markets, and military conflicts long before reaching superintelligence. The existential scenarios researchers worry about involve future, more capable systems-but the governance decisions shaping those futures are being made right now. Many AI experts believe we must address safety and alignment early, before AGI exists, because retrofitting control after deployment may be impossible.

How realistic are AI-driven mass unemployment scenarios?

Most experts expect a mix of job destruction and creation rather than pure mass unemployment. Heavy disruption will hit routine, predictable tasks-clerical work, basic analysis, customer support-while AI-augmented roles may grow. Projections suggest 20–30% of tasks in some economies could be automated by 2030. The real concern isn’t total job disappearance but distributional impact: who gains, who loses, and how fast transitions happen. Policies like reskilling programs, safety nets, and business practices that augment rather than immediately replace workers are key to managing this transition.

Can strong regulation slow innovation too much and make some countries fall behind?

The tension between innovation and regulation is real, but smart, risk-based regulation can actually encourage trustworthy innovation by clarifying rules and building public trust. The EU AI Act, for example, focuses restrictions on high-risk applications rather than banning AI broadly. A global race to the bottom on safety could backfire badly-disasters that erode public and investor confidence would harm AI industry development far more than measured regulation. The goal is guardrails that prevent the worst outcomes while allowing the potential benefits of AI to flourish.

What can a non-technical person realistically do about AI risks?

You can vote for representatives who take AI safety and privacy seriously, support civil society groups working on digital rights, and push employers to adopt transparent AI policies. On a personal level, verify sensational content before sharing, be cautious with biometric data (faces, voices), and opt out of unnecessary data collection when possible. Teaching children about deepfakes and AI companions builds digital literacy for the next generation. Finally, stay informed through curated, low-frequency sources rather than trying to track every daily update-KeepSanity AI’s weekly brief is designed exactly for this purpose.

Are there any clear benefits of AI that justify taking these risks?

Major benefits are already visible: improved medical diagnosis, accelerated drug discovery (AlphaFold has transformed protein-folding research), accessibility tools for translation and speech-to-text, climate modeling, and broad productivity gains. The question isn’t “AI or no AI”-it’s whether we shape development so benefits are broadly shared and risks are aggressively managed. Recognizing and addressing risks is precisely what allows society to harness AI’s upside without accepting unacceptable downside scenarios. Responsible AI development isn’t about stopping progress; it’s about steering it wisely.