Apr 08, 2026

Who Invented AI? (And Why the Answer Isn’t One Person)

No single person “invented AI.” It emerged from decades of work by many scientists, with John McCarthy often called the “father of AI” for coining the term and organizing the 1956 Dartmouth Conference.

Key Takeaways

  - No single person invented AI; the field emerged from over a century of work in mathematics, logic, engineering, and computer science.
  - John McCarthy coined the term “artificial intelligence” in 1955 and organized the 1956 Dartmouth workshop that founded the field.
  - Alan Turing supplied the theoretical bedrock: universal computation (1936) and the Turing test (1950).
  - Modern deep learning has its own “inventors”: Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, recognized with the 2018 Turing Award.
  - Artificial general intelligence (AGI) has not been invented yet; today’s systems are narrow AI.

Introduction: Can You Really Say Someone “Invented” AI?

Ask “who invented AI” and you’ll get a dozen different answers depending on who you ask and what they mean by “AI.” That’s because artificial intelligence isn’t a single gadget someone built in a garage; it’s a sprawling field that emerged from mathematics, philosophy, engineering, and computer science over more than a century.

John McCarthy is widely known as the “father of AI” because he coined the term “artificial intelligence” in 1955 and organized the famous Dartmouth workshop where the field got its name. But McCarthy himself would be the first to tell you he didn’t invent AI alone.

Here’s why the question is tricky: “AI” can refer to the underlying theory, the name and institution, the algorithms, or today’s products, and each has different originators.

Before AI Had a Name: Early Ideas of Thinking Machines

The question of who invented AI starts centuries before digital computers existed. Humans have imagined artificial beings and intelligent machines since ancient times, laying cultural and mechanical groundwork for what would eventually become AI research.

Ancient myths of artificial beings, mechanical automata, and early calculating machines all planted seeds for what followed.

These developments established the core assumption behind AI: that human thought might be mechanized and simulated in hardware and software. The stage was set for the formal logic and computing power that would turn speculation into science.

[Image: an intricate brass clockwork mechanism, evoking the interlocking components of intelligent systems.]

Foundations of Machine Intelligence (1943–1955)

The 1940s and early 1950s created the mathematical and computational toolkit that modern AI still relies on today. This period saw the birth of neural network theory, the formalization of machine intelligence, and the earliest experiments in teaching machines to reason.

McCulloch & Pitts (1943): The First Neural Network Model

Warren McCulloch and Walter Pitts published “A Logical Calculus of the Ideas Immanent in Nervous Activity,” modeling biological neurons as simple binary threshold logic units. Their insight was profound: networks of these units could compute any logical function, including all Boolean operations.

This was the first artificial neural network model capable, in principle, of universal computation, though it was limited to feedforward structures without learning. It connected the human brain’s architecture to formal logic, suggesting intelligent behavior could emerge from simple components.
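As a rough illustration (not the paper’s original notation), a McCulloch-Pitts unit can be sketched in a few lines of Python; the function name and toy gates below are hypothetical:

```python
# Hypothetical sketch of a McCulloch-Pitts unit: inputs and weights are
# fixed, and the unit "fires" when the weighted sum reaches a threshold.

def mp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted input sum meets the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Basic Boolean gates as single units:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

# Composing units yields further Boolean functions; XOR needs two layers:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))
```

Networks of such units can compute any Boolean function, which is the sense in which the 1943 model was universal.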

Alan Turing: The Theoretical Bedrock

Alan Turing provided the theoretical foundation for everything that followed:

| Year | Contribution | Impact |
| --- | --- | --- |
| 1936 | “On Computable Numbers” paper introducing the Turing machine | Proved the halting problem undecidable and established universal computation |
| 1950 | “Computing Machinery and Intelligence” | Proposed the Turing test (originally called the imitation game) as an operational definition of machine intelligence |

Turing’s 1950 paper asked the question that launched a field: “Can machines think?” His answer was to sidestep philosophical debates and propose a practical test: if a human judge couldn’t distinguish a computer program from a human being in text-based conversation, the machine could be said to exhibit intelligent behavior.

Norbert Wiener and Cybernetics

Norbert Wiener’s 1948 work on cybernetics synthesized feedback control across machines and organisms. His book sold over 50,000 copies by 1950, introducing concepts like feedback loops and information entropy that became precursors to AI control systems and later reinforcement learning.

Early Experiments in Machine Intelligence

Before the field had a name, researchers were already building early learning and game-playing programs.

This period accumulated the tools (logic networks, computability theory, feedback, and proto-programs) that the Dartmouth conference would unify into a named discipline.

The Dartmouth Conference: Where “Artificial Intelligence” Was Born (1955–1956)

If you must pick one event where AI was “invented” as a field, it is the 1956 Dartmouth Summer Research Project on Artificial Intelligence. This workshop didn’t solve AI, but it created AI as a funded, named research discipline that persists to this day.

John McCarthy’s Central Role

John McCarthy, then a 29-year-old mathematics professor at Dartmouth, drove the vision. In 1955, he drafted (with Marvin Minsky, Nathaniel Rochester, and Claude Shannon) the proposal that coined the term “artificial intelligence,” choosing it for neutrality and sidestepping both “cybernetics” (associated with Norbert Wiener’s analog focus) and “automata” (too narrow).

McCarthy secured funding from the Rockefeller Foundation for the workshop (the proposal requested $13,500). He later called his proposal a “flag to the mast” at the AI@50 conference in 2006, marking the ambition that unified disparate ideas into a coherent field.

Key Organizers and Participants

| Participant | Affiliation | Key Contribution |
| --- | --- | --- |
| John McCarthy | Dartmouth | Coined “AI,” later created Lisp, pioneered time-sharing |
| Marvin Minsky | Harvard | Cognitive modeling, later co-founded MIT AI Lab |
| Claude Shannon | Bell Labs | Information theory, entropy, cryptography |
| Nathaniel Rochester | IBM | IBM 704 designer, early pattern recognition |
| Allen Newell | RAND/Carnegie | Logic Theorist co-creator, problem-solving research |
| Herbert Simon | Carnegie | Logic Theorist co-creator, cognitive psychology pioneer |

The workshop ran from June to August 1956 in Hanover, New Hampshire, with about 11 core attendees and additional visitors such as Ray Solomonoff (induction work) and Oliver Selfridge.

The Workshop’s Ambition

The proposal’s manifesto-like language set the field’s agenda:

“Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Specific targets included natural language processing, early neural networks, abstraction, problem solving, and self-improvement-ideas that remain central to AI today.

Early Successes Emerging Around Dartmouth

McCarthy later admitted at the 2006 reunion that collaboration at Dartmouth was imperfect: attendees arrived at different times and pursued individual agendas. But the workshop formalized AI as fundable science, sparking decades of progress through booms and winters.

[Image: researchers gathered around early computing equipment in a 1950s academic setting.]

Who Is the “Father of AI”? McCarthy and His Contemporaries

Calling John McCarthy the “father of AI” is common, but the field has multiple “parents” with distinct contributions. Think of AI’s invention as a relay race rather than a solo sprint.

John McCarthy (1927–2011)

McCarthy earns the title for institutionalizing the field: he named it, convened the Dartmouth workshop, created Lisp (1958), and pioneered time-sharing and logical AI.

Marvin Minsky (1927–2016)

Minsky co-founded the MIT AI Lab with McCarthy in 1959 and advanced cognitive modeling and the theory of neural networks, including the influential Minsky-Papert analysis of perceptrons.

Allen Newell (1927–1992) & Herbert A. Simon (1916–2001)

This duo demonstrated that digital computers could engage in symbolic reasoning, most famously with the Logic Theorist, an early theorem-proving program.

They shared the 1975 Turing Award for their foundational contributions to artificial intelligence and cognitive science.

Arthur Samuel (1901–1990)

Samuel coined the term “machine learning” in 1959 for his IBM 704 checkers program (1952–1959). Using minimax search with alpha-beta pruning and an early form of temporal-difference learning, the program reached expert play through more than 200,000 self-play games, and by 1962 it was famously beating strong human players.
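To make the search idea concrete, here is a minimal, hypothetical Python sketch of minimax with alpha-beta pruning over a toy game tree; Samuel’s actual program also used a learned board-evaluation function and checkers-specific move generation:

```python
# Minimax with alpha-beta pruning: branches that cannot affect the final
# decision are cut off, drastically shrinking the search tree.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Return the minimax value of `node`."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: opponent would never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff
                break
        return value

# Toy tree: leaves hold scores; internal nodes are tuples of children.
tree = ((3, 5), (6, 9))
children = lambda n: n if isinstance(n, tuple) else []
evaluate = lambda n: n
best = alphabeta(tree, 2, float("-inf"), float("inf"), True, children, evaluate)
```

In self-play, the program repeatedly applied such a search while tuning its evaluation function from the outcomes, which is where the learning came in.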

Frank Rosenblatt (1928–1971)

Rosenblatt’s 1957 Perceptron was the first trainable neural network for pattern recognition. The Mark I hardware sensed images through a 20×20 grid of 400 photocells and stored its weights in motor-driven potentiometers, learning binary classifications via the delta rule (a precursor to backpropagation). Despite limitations exposed by the Minsky-Papert critique, the Perceptron pioneered the approach that would dominate modern AI.
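The delta rule itself is simple enough to sketch. The following hypothetical Python snippet trains a single perceptron on the OR function (linearly separable, unlike the XOR case highlighted by Minsky and Papert):

```python
# Perceptron learning rule: nudge the weights toward each misclassified
# example until the (linearly separable) data is classified correctly.

def train_perceptron(samples, lr=0.1, epochs=50):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = target - pred  # the "delta": 0 when correct, +/-1 otherwise
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical OR, a linearly separable function:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
```

The convergence theorem guarantees this loop finds a separating boundary whenever one exists, which is exactly what XOR lacks for a single unit.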

Turing: The Theoretical Father

Some authors call Alan Turing the “father of theoretical AI” for formalizing computation and the intelligence test. But historically, McCarthy gets the title for founding AI as an institution-naming it, convening its first workshop, and building its core programming language.

AI was a collaborative invention spanning theory, algorithms, languages, and hardware. No single computer scientist created it alone.

AI Grows Up: Early Booms, Winters, and Key Inventions (1957–1990)

AI’s first decades followed a pattern that would repeat: rapid optimism, overpromising, disillusionment, funding cuts, and eventual recovery. Understanding this history explains why the field seems to be “reinvented” every decade.

The Optimistic 1950s–1960s: Symbolic AI Flourishes

Early AI research focused on symbolic reasoning: manipulating symbols and rules to mimic human intelligence.

Researchers predicted that machines would match human intelligence within a generation. Herbert Simon famously predicted in 1965 that “machines will be capable, within twenty years, of doing any work a man can do.”

The First AI Winter (1973–Late 1970s)

Reality hit hard:

| Event | Year | Impact |
| --- | --- | --- |
| ALPAC Report (US) | 1966 | Concluded machine translation was uneconomic after a $20M investment |
| Lighthill Report (UK) | 1973 | Criticized AI for “toy problems” that didn’t scale; triggered sweeping UK funding cuts |

Early systems worked only on constrained toy problems. Combinatorial explosion (chess has an average branching factor of about 35 moves per position) made scaling impossible with the available computing power. Government funding dried up, and AI research contracted.
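A quick back-of-the-envelope calculation shows why. With a branching factor of 35, the number of positions grows as 35 to the power of the search depth; the figures below assume naive exhaustive search with no pruning:

```python
# Illustration of combinatorial explosion in game-tree search.
BRANCHING = 35  # average legal moves per chess position

for plies in (2, 4, 6, 8):
    print(f"{plies} plies ahead: about {BRANCHING ** plies:,} positions")
```

Eight plies (four moves by each side) already exceeds two trillion positions, far beyond 1970s hardware.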

The 1980s Expert Systems Boom

Expert systems, programs that encoded a human specialist’s knowledge as if-then rules, revived commercial interest.

Corporations invested heavily, believing intelligent systems would revolutionize business.

Sub-Symbolic and Probabilistic Inventions

Meanwhile, alternatives to symbolic AI emerged, including neural networks and probabilistic reasoning.

The Second AI Winter (Late 1980s–Early 1990s)

Expectations again outpaced delivery. Lisp machine companies like Symbolics collapsed (losing $100 million), and expert systems proved brittle outside narrow domains. The AI winter returned as corporate and government funding retreated.

But the tools developed during both winters-neural networks, probabilistic reasoning, faster hardware-would fuel the next wave.

From Machine Learning to Deep Learning: New “Inventors” of Modern AI (1990–2016)

“Who invented AI” changes meaning in this era. The focus shifts from symbolic programs to data-driven learning and deep neural networks, requiring new heroes and new infrastructure.

The 1990s Recovery: Statistical Machine Learning

AI research regrouped around statistical methods such as support-vector machines and probabilistic models.

Big data became crucial. The AI community realized that algorithms were often less important than having massive labeled datasets.

Infrastructure Breakthroughs

| Year | Development | Significance |
| --- | --- | --- |
| 2004–2006 | Face Recognition Grand Challenge | Showed large-scale benchmark-driven progress |
| 2006 | Hinton’s deep belief networks | Revived deep learning via unsupervised pretraining |
| 2007–2009 | Fei-Fei Li’s ImageNet project | 14 million labeled images; the ILSVRC benchmark uses a 1,000-class subset |
| 2009 | GPU-accelerated training (Raina, Ng) | Roughly 60x speedup versus CPUs |

ImageNet became the benchmark that would define progress in image recognition and computer vision for a decade.

The AlexNet Moment (2012)

The AI boom of the 2010s traces directly to one paper.

AlexNet (Krizhevsky, Sutskever, Hinton) won the 2012 ImageNet competition by cutting top-5 error from 26% to 15%. The architecture combined deep convolutional layers, ReLU activations, dropout regularization, and training on GPUs.

By 2015, ImageNet top-5 error had dropped below 5%, better than typical human performance. Deep learning had arrived.

Visible Milestones That Made AI Feel “Newly Invented”

Public moments such as IBM Watson winning Jeopardy! in 2011 and DeepMind’s AlphaGo defeating Go champion Lee Sedol in 2016 made AI feel newly invented to a mass audience.

The “Godfathers of Deep Learning”

Geoffrey Hinton, Yann LeCun, and Yoshua Bengio share the 2018 Turing Award for their foundational contributions to deep learning: Hinton for backpropagation and deep belief networks, LeCun for convolutional networks, and Bengio for sequence modeling and representation learning.

Just as McCarthy was the father of symbolic AI, these three are widely seen as the central “inventors” of modern deep learning.

[Image: rows of server racks in a modern data center, the computing backbone of today’s AI.]

Transformers, Generative AI, and Today’s AI “Inventors” (2017–Present)

The most visible “AI” to the public today (ChatGPT-style tools, image generators, virtual assistants that actually work) rests on the transformer architecture and massive-scale training that would have been unthinkable at Dartmouth.

The Transformer Revolution (2017)

Google researchers published “Attention Is All You Need” in 2017, introducing self-attention mechanisms that let every token in a sequence attend to every other token in parallel, removing the sequential bottleneck of recurrent networks.

This single architectural innovation enabled the leap from BERT (2018, 340M parameters) to GPT-3 (2020, 175B parameters) and beyond.
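At its core, self-attention is a weighted mixing of token vectors. The hypothetical Python sketch below implements single-head scaled dot-product attention without the learned query/key/value projections, multiple heads, or positional encodings of a real transformer:

```python
# Minimal scaled dot-product self-attention: each token scores every other
# token, and the softmax of those scores weights a sum of value vectors.
import math

def self_attention(x):
    """x: list of token vectors. Here Q = K = V = x for simplicity."""
    d = len(x[0])
    out = []
    for q in x:
        # similarity of this token to every token, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in x]
        # softmax turns scores into attention weights that sum to 1
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # each output is an attention-weighted mix of all value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, x))
                    for j in range(d)])
    return out

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = self_attention(tokens)
```

Because every pairwise score can be computed at once, this operation parallelizes across an entire sequence, which is what made trillion-token training runs practical.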

Major Generative AI Milestones

| Year | System | Significance |
| --- | --- | --- |
| 2014 | GANs (Goodfellow) | Min-max adversarial training enabled photorealistic synthetic images |
| 2020 | GPT-3 | 175B parameters; strong few-shot learning across dozens of NLP benchmarks |
| 2020 | Diffusion models (Ho et al.) | Iterative denoising for high-quality image generation |
| 2022 | ChatGPT launch | 100 million users in 2 months; conversational AI goes mainstream |
| 2023 | DALL·E 3, Midjourney V5 | Text-to-image reaches near-photorealistic quality |

Generative AI transformed from research curiosity to consumer product in under three years.

The Major Labs Shaping Current AI

The “inventors” of today’s AI are increasingly large research teams with billion-dollar budgets at labs such as OpenAI, Google, and DeepMind.

AI Is No Longer a Garage Invention

Modern AI development involves massive datasets, specialized hardware in dedicated data centers, and research teams numbering in the hundreds.

The question “who invented AI” now has an answer more like “who invented the internet”: it’s an ecosystem, not a single genius.

Because breakthroughs now land weekly, professionals rely on curated weekly briefings instead of trying to track every minor model release. KeepSanity AI filters the signal from the noise: one email per week with only the major developments that actually matter.

Did Anyone Invent Artificial General Intelligence (AGI) Yet?

AGI, meaning systems that match or exceed broad human-level intelligence across diverse tasks, has not been invented yet, despite what marketing materials might suggest. Current systems, however impressive, are narrow AI optimized for specific complex tasks.

What AGI Actually Means

AGI is distinct from narrow AI (today’s task-specific systems) and from hypothetical superintelligence (systems far beyond human ability).

Early AGI thinking traces back to the field’s founding ambitions, including the Dartmouth proposal’s claim that every feature of intelligence could in principle be simulated.

Current AGI Research and Debates

Labs like OpenAI and DeepMind explicitly target more general systems.

But significant gaps remain. LLMs show “emergence” (in-context learning appearing at 10B+ parameters) but still struggle with robust reasoning, long-horizon planning, and reliability.

Safety, Ethics, and Governance

The potential for AGI raises alignment challenges: ensuring that increasingly capable systems pursue the goals their designers and users actually intend.

Emerging regulation, such as the EU AI Act, aims to govern how high-risk AI systems are built and deployed.

Whoever “invents” AGI in the future will be standing on a century of prior AI inventions-from Turing’s formalism to McCulloch-Pitts neurons to transformers-not starting from zero.

FAQ: Common Questions About Who Invented AI

These FAQs cover related questions not fully addressed above. For quick reference, here are the answers to what readers commonly ask about AI’s origins.

Was John McCarthy really the person who invented AI?

John McCarthy did not single-handedly create all AI techniques, but he uniquely positioned himself as the field’s founder through three contributions:

  1. Coined “artificial intelligence” in the 1955 Dartmouth proposal, giving the field its name

  2. Organized the 1956 Dartmouth Summer Research Project, bringing together the earliest AI researchers and formalizing AI as an academic discipline

  3. Developed Lisp (1958) and foundational concepts in logical AI that powered decades of research

This combination of naming, convening, and technical innovation is why the AAAI and most historians call him the “father of AI.” But McCarthy himself acknowledged AI was a collaborative effort-he organized the wedding, but many people built the marriage.

What role did Alan Turing play in inventing AI?

Turing provided the theoretical bedrock rather than the institutional founding. His contributions include the Turing machine (1936), a formal model of universal computation, and the Turing test (1950), an operational criterion for machine intelligence.

Some historians call Turing the “father of theoretical computer science and AI.” His 1950 paper, “Computing Machinery and Intelligence,” remains foundational. But McCarthy is more specifically tied to AI as a named field because he created the term and organized its founding workshop.

When did AI first appear in real products people could buy?

AI’s path from lab to consumer was gradual: techniques matured in research for decades before quietly appearing in everyday products.

AI often “disappears” into everyday products once it works reliably. The language-translation feature in your browser, the speech recognition in your phone, the medical diagnosis support in hospitals: all use AI, but we stop calling them “AI” once they’re normal.

Who invented generative AI specifically?

Generative AI is a branch of AI with multiple milestones rather than a single inventor:

| Era | Development | Inventor(s) |
| --- | --- | --- |
| 1960s | ELIZA chatbot | Joseph Weizenbaum |
| 2014 | GANs enabling realistic images | Ian Goodfellow |
| 2017 | Transformers enabling fluent text | Vaswani et al. (Google) |
| 2020–22 | GPT-3, DALL·E, ChatGPT | OpenAI research teams |

Generative AI results from decades of progress in neural networks, optimization algorithms, massive data collection (big data), and exponentially growing computing power. It’s a symphony, not a solo.

Why does AI seem to be “reinvented” every few years?

AI progresses in waves, each bringing new excitement and eventual consolidation:

  1. Symbolic reasoning (1950s–70s): Logic, theorem proving, expert rules

  2. Expert systems (1980s): Knowledge-encoded business applications

  3. Statistical ML (1990s): SVMs, probabilistic models, data-driven approaches

  4. Deep learning (2010s): Neural networks at scale, image recognition breakthroughs

  5. Transformers and agents (late 2010s–2020s): Large language models, autonomous systems

Each wave gets hyped as revolutionary, faces partial disillusionment, then consolidates into real products. The pattern repeats because each “invention” builds on previous foundations while introducing genuinely new capabilities.

Staying informed without burning out means tracking only the truly major shifts. KeepSanity AI delivers exactly this: one weekly email with the signal, zero daily filler to waste your time.


AI wasn’t invented by a single genius; it’s a century-long relay race of ideas passed from Turing to McCarthy to Minsky to Hinton and beyond. Every breakthrough stands on foundations laid by previous generations of researchers, engineers, and dreamers who imagined creating thinking machines.

The next major AI development could come from an established lab, an open-source community, or an unexpected direction entirely. That’s what makes tracking the field both exciting and exhausting.

If you need to stay informed but refuse to let newsletters steal your sanity, relax. The noise is gone. Here is your signal.