History of deep learning (and some videos)

  • 1943, "A Logical Calculus of the Ideas Immanent in Nervous Activity" (Warren McCulloch and Walter Pitts)
  • 1969, Perceptrons: "basic perceptrons were incapable of processing the exclusive-or (XOR) circuit" and "computers didn't have enough processing power to handle the work required by large neural networks" (Marvin Minsky and Seymour Papert)
  • 1986, neural networks are cool again: the back-propagation algorithm (Rumelhart, Hinton, Williams) propagates measures of the errors produced by the network’s guesses backwards through its neurons, starting with those directly connected to the outputs, allowing networks with intermediate “hidden” neurons between input and output layers to learn efficiently (see the sketch after this list)
  • 1989, "Backpropagation Applied to Handwritten Zip Code Recognition" (LeCun et al.)
  • 1997, IBM's Deep Blue beats Garry Kasparov
  • 2000, "A Neural Probabilistic Language Model" (Bengio et al.), which introduced learned word embeddings as a representation of word meaning and led to a major shift in machine translation and natural language understanding systems
  • 2006, Geoffrey Hinton and colleagues popularize the term "deep learning" with their work on deep belief nets; see also the review "Deep Learning" (LeCun, Bengio, Hinton, Nature, 2015)
  • 2011, IBM's Watson wins Jeopardy! against champions Ken Jennings and Brad Rutter
  • 2016, DeepMind's AlphaGo beats Go world champion Lee Sedol
  • 2018, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun receive the Turing Award "for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing"
  • 2020, DeepMind's AlphaFold, a solution to a 50-year-old grand challenge in biology (protein structure prediction)
  • 2022, Meta AI's ESMFold, a protein language model with 16B parameters for end-to-end (sequence-to-structure) prediction
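
The 1969 XOR objection and the 1986 back-propagation answer fit in a few lines of code. Below is a minimal sketch (not taken from any of the works cited above): a network with one hidden layer of sigmoid units, trained by hand-written back-propagation, learns XOR, which a single-layer perceptron cannot represent. The hidden size, learning rate, iteration count, and random seed are arbitrary illustrative choices.

```python
# Minimal back-propagation sketch: one hidden layer learns XOR.
# Hyperparameters (hidden size, learning rate, steps, seed) are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets: not linearly separable, so no single-layer
# perceptron can fit them (Minsky & Papert's 1969 objection).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 4 hidden units -> 1 output, small random initial weights.
W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros((1, 1))

lr = 1.0
for _ in range(20_000):
    # Forward pass: compute the network's "guesses".
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the
    # hidden layer (chain rule on the squared error).
    d_out = (out - y) * out * (1 - out)       # error signal at the output
    d_hid = (d_out @ W2.T) * h * (1 - h)      # error signal at the hidden layer

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

# Predictions should approach [[0], [1], [1], [0]].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

The backward pass is exactly the idea in the 1986 entry: the error signal is computed first at the output units and then pushed back to the hidden units, which is what lets the hidden layer learn the intermediate features XOR needs.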