Discarded.AI

The Eight Googlers Who Accidentally Invented the Future

In 2017, eight researchers bet their careers on killing the most successful AI technique in history. They won and accidentally created the architecture behind every AI you use today.

Alan Robertson
Nov 11, 2025

In 2017, eight researchers at Google published a paper with a deceptively simple title: “Attention Is All You Need.” They couldn’t have known they were writing the origin story of the AI revolution we’re living through today.

This paper laid the foundation for the Transformer architecture, the backbone of generative AI (genAI) and the technology ChatGPT is built on.


Before the Revolution: AI’s Dark Ages

For decades, AI lived in two separate worlds that barely spoke to each other.

First came symbolic AI, the dream of the 1950s through the 1980s. Researchers painstakingly hand-coded rules and logic into machines: “IF patient has fever AND cough, THEN diagnose flu”, plain if-this-then-that logic. It was tedious, brittle, and laughably unable to handle the messy reality of human language. Ask it to understand a metaphor? Forget it.
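To see how rigid that was, here is a minimal sketch in Python of what such a hand-coded rule system amounted to; the symptoms and rules are invented purely for illustration:

```python
# A minimal sketch of a hand-coded symbolic rule system.
# The symptoms and rules here are invented for illustration only.

def diagnose(symptoms: set[str]) -> str:
    # Each rule is an explicit IF ... THEN ... written by a human expert.
    if "fever" in symptoms and "cough" in symptoms:
        return "flu"
    if "sneezing" in symptoms and "runny nose" in symptoms:
        return "cold"
    # Anything outside the hand-written rules simply falls through.
    return "unknown"

print(diagnose({"fever", "cough"}))       # -> flu
print(diagnose({"sarcasm", "metaphor"}))  # -> unknown: the brittleness problem
```

Every situation the system can handle has to be anticipated and typed in by hand, which is exactly why this approach collapsed on open-ended language.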

Then came probabilistic reasoning and early neural networks in the 1990s and 2000s. Instead of rigid rules, these systems learned patterns from data using statistics and probability. Better, but still primitive. The breakthrough that seemed to matter most was recurrent neural networks (RNNs) and their fancier cousin, LSTMs (Long Short-Term Memory networks). These could finally process sequences: sentences, time series, music. They were the state of the art by 2016.
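For intuition, here is a minimal sketch of the recurrence idea behind an RNN, written in plain NumPy with toy sizes and random, untrained weights (not any particular library's API): each element of the sequence updates a hidden state that is carried forward one step at a time.

```python
import numpy as np

# A toy recurrent cell: read a sequence one step at a time,
# carrying a hidden state forward. Sizes and weights are arbitrary.
rng = np.random.default_rng(0)
hidden_size, input_size = 4, 3
W_x = rng.normal(size=(hidden_size, input_size))   # input-to-hidden weights
W_h = rng.normal(size=(hidden_size, hidden_size))  # hidden-to-hidden weights
b = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # Classic recurrence: the new state depends on the current input
    # and on the previous state.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

sequence = [rng.normal(size=input_size) for _ in range(5)]  # e.g. 5 word vectors
h = np.zeros(hidden_size)
for x_t in sequence:
    h = rnn_step(x_t, h)  # information flows through every step, in order

print(h)  # the final state summarizes the whole sequence
```

The key design choice is that everything the model knows about the sequence has to squeeze through that single hidden state, one step after another.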
