Mike Young @mikeyoung44

Why Deep Learning Works: Neural Network Success Explained

Neural networks generalize well despite theoretical limitations, thanks to "soft inductive bias" and the compression of information. Five frameworks explain how simplicity and training shape deep learning's success.

This is a Plain English Papers summary of a research paper called Why Deep Learning Works: The Simple Science Behind Neural Network Success. If you like these kinds of analyses, you should join AImodels.fyi or follow us on Twitter.

  
  
Overview

- Neural networks generalize well despite theoretical limitations
- The concept of "soft inductive bias" explains this phenomenon (see the sketch after this list)
- Generalization depends on how networks compress information
- Five different theoretical frameworks help explain neural network generalization
- Training simplicity and compression are key to understanding deep learning...
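To make the "soft inductive bias" idea concrete, here is a minimal sketch (not taken from the paper) of the usual contrast between a hard constraint and a soft preference: instead of shrinking the hypothesis class, an L2 weight penalty keeps an over-parameterized network fully expressive while nudging training toward low-norm, simpler fits. The architecture, learning rate, and penalty strength below are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Toy 1-D regression problem: a smooth signal plus noise.
torch.manual_seed(0)
x = torch.linspace(-1, 1, 64).unsqueeze(1)
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)

# Deliberately over-parameterized model: expressive enough to fit the noise.
model = nn.Sequential(nn.Linear(1, 256), nn.ReLU(), nn.Linear(256, 1))

# A *hard* inductive bias would shrink the hypothesis class itself
# (fewer units, a fixed functional form). A *soft* inductive bias keeps
# the class large but expresses a preference for simple solutions --
# here, an L2 penalty on the weights via `weight_decay`.
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# The network can still represent complex functions if the data demand it,
# but training is nudged toward low-norm (smoother) fits, which is one way
# a large model can favor compression over memorization.
print(f"final training loss: {loss.item():.4f}")
```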