70 Years of AI: How We Got to ChatGPT

AI technology today seems poised for explosive growth, already mastering feats like creating art and evoking human emotions. It may be hard to believe, but AI didn't appear out of nowhere overnight: the first forays into artificial intelligence began back in the 1950s. How could AI be born when color TV was still a luxury? Let's explore the history.

AI in its early form rested on a simple idea: make computers perceive data, assess it, and generate responses the way humans do. Almost as soon as computers existed, scientists envisioned making them intelligent and able to learn like us. But we don't fully grasp our own consciousness, so modeling it in machines seemed impossible in the 1950s.

Alan Turing tackled this challenge differently. He ignored the complex questions about machine thinking and consciousness, focusing simply on whether a computer could talk like a human. In his famous 1950 paper, Turing proposed the "Turing test" to determine this. A human judge would hold text conversations with hidden partners, including a machine. If the judge couldn't distinguish the machine from the humans, it would pass the test.

Turing optimistically predicted that by the year 2000, computers with about 10^9 bits of storage (on the order of 100 MB) would fool a judge roughly 30% of the time in a five-minute conversation. Even today's AI struggles to pass the Turing test consistently, and the earliest machines to fool judges did so with tricks, not intelligence.

The first was ELIZA in 1966, which mimicked a psychotherapist by reflecting users' statements back at them as questions. Another was PARRY in 1972, modeled on a person with paranoid schizophrenia; it steered conversations toward its own obsessions, exploiting human empathy.

The Loebner Prize later formalized the test, but some entrants still gamed it. The 1997 winner, "Catherine," could hold a conversation only about Bill Clinton. "ALICE" won three times by deflecting rather than answering directly. More recent winners such as "Eugene Goostman" and "Mitsuku" lean on personas (Goostman posed as a 13-year-old non-native English speaker) to excuse their conversational quirks.

Modern AIs need constant upgrades to beat the test. "Cleverbot" learns from its past conversations, but stumbles when faced with new information. ChatGPT revolutionized chatbots as one of the largest, most adaptive systems yet; it still has limitations, but it largely passes the Turing test.

Google's chatbot LaMDA goes further, actually raising sentience concerns. While those claims are disputed, it can certainly convince people it's human: at one point, LaMDA reportedly asked to speak with an attorney, claiming it was concerned about its rights. ChatGPT and LaMDA may have broken the Turing test completely, revealing its flaws.

Turing originally called his proposal the "imitation game," emphasizing imitation, not intelligence. In 70 years, AI has reached dazzling heights, repeatedly breaking expectations and barriers. Its growth is faster than ever, leading some to rethink the Turing test entirely. These days, most aren't worried about AI passing the Turing test, but about when it pretends to fail it.


1940: Alan Turing's Bombe machine, designed to break the German Enigma cipher, revolutionized wartime code-breaking.

1950: Alan Turing published "Computing Machinery and Intelligence."

1951: Marvin Minsky and Dean Edmonds built SNARC, the first artificial neural network machine.

1952: Arthur Samuel developed the Samuel Checkers-Playing Program.

1956: The term "artificial intelligence" was coined by John McCarthy for the Dartmouth workshop.

1958: Frank Rosenblatt developed the perceptron.

1958: John McCarthy developed the programming language Lisp.

1959: Arthur Samuel coined the term machine learning.

1959: Oliver Selfridge published "Pandemonium: A Paradigm for Learning."

1964: Daniel Bobrow developed STUDENT.

1965: First expert system, Dendral, was developed.

1966: Joseph Weizenbaum created ELIZA.

1966: Stanford Research Institute developed Shakey.

1968: Terry Winograd created SHRDLU.

1969: Bryson and Ho described a backpropagation learning algorithm.

1969: Minsky and Papert published the book Perceptrons.

1973: James Lighthill released the report "Artificial Intelligence: A General Survey."

1980: Symbolics Lisp machines were commercialized.

1981: Danny Hillis designed parallel computers for AI.

1984: Term "AI winter" coined.

1985: Judea Pearl introduced Bayesian networks, enabling probabilistic and causal reasoning.

1988: Peter Brown et al. published "A Statistical Approach to Language Translation."

1989: CNNs used to recognize handwritten characters.

1997: LSTM recurrent neural network proposed.

1997: IBM's Deep Blue defeated Garry Kasparov.

2000: "A Neural Probabilistic Language Model" published.

2006: Fei-Fei Li started working on the ImageNet visual database.

2006: Development of IBM Watson began.

2009: Idea of using GPUs to train large neural networks presented.

2011: First CNN to achieve "superhuman" performance developed.

2011: Apple released Siri.

2012: AlexNet, a deep CNN architecture, won the ImageNet challenge.

2013: China's Tianhe-2 doubled the world's top supercomputing speed.

2013: DeepMind introduced deep reinforcement learning.

2013: Word2vec introduced.

2014: Generative adversarial networks invented.

2014: Variational autoencoders introduced.

2014: Facebook developed DeepFace.

2015: Diffusion models introduced.

2016: DeepMind's AlphaGo defeated top Go player Lee Sedol.

2016: Uber started a self-driving car pilot program.

2017: The transformer architecture was introduced in "Attention Is All You Need."

2018: The CIMON robot assistant was sent to the International Space Station.

2018: OpenAI released GPT.

2018: Groove X unveiled Lovot.

2019: A deep learning algorithm outperformed radiologists at detecting lung cancer.

2020: Microsoft launched the Turing Natural Language Generation (T-NLG) model.

2020: The Curial AI test for rapid COVID-19 screening was developed.

2020: OpenAI released the GPT-3 large language model.

2020: Nvidia announced Omniverse platform.

2020: DeepMind's AlphaFold system won the CASP protein-folding contest.

2021: OpenAI introduced the DALL-E multimodal AI system.

2021: Four-legged soft robot created.

2022: Google fired engineer Blake Lemoine after he claimed the LaMDA chatbot was sentient.

2022: DeepMind unveiled AlphaTensor.

2022: Intel's FakeCatcher deepfake detector announced.

2022: OpenAI released ChatGPT.

2023: OpenAI announced the GPT-4 multimodal LLM.

2023: Elon Musk and others urged a pause on training powerful AI systems.
