Mike Young @mikeyoung44

Blockwise Pretraining Rivals Backpropagation Performance On ImageNet

Deep learning models can be trained efficiently with blockwise pretraining using self-supervised learning, rivaling backpropagation performance on the ImageNet dataset.

This is a Plain English Papers summary of a research paper called Self-Supervised Blockwise Pretraining Rivals Backpropagation Performance on ImageNet. If you like these kinds of analyses, you should join AImodels.fyi or follow me on Twitter.

  
  
Overview

Current deep learning models rely heavily on backpropagation, a powerful but computationally intensive training technique.
This paper explores alternative "blockwise" learning rules that can train different sections of a deep neural network independently.
The researchers show that a blockwise pretraining approach using self-supervised learning rivals backpropagation performance on ImageNet.
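To make the idea of blockwise training concrete, here is a minimal sketch in NumPy. It is not the paper's method: the `Block` class, the tied-weight reconstruction objective, and all hyperparameters are illustrative assumptions standing in for the paper's self-supervised loss. The key property it demonstrates is that each block is optimized only against a local objective on the frozen output of the previous block, so no gradient ever crosses block boundaries.

```python
import numpy as np

rng = np.random.default_rng(0)


class Block:
    """One linear block trained with a purely local loss.

    Hypothetical stand-in for a network section in blockwise pretraining:
    the local objective here is tied-weight reconstruction of the block's
    input, a simple proxy for a self-supervised loss.
    """

    def __init__(self, d_in, d_out):
        self.W = rng.normal(0.0, 0.1, (d_in, d_out))

    def forward(self, x):
        return x @ self.W

    def local_step(self, x, lr=0.01):
        # Local loss L = ||x W W^T - x||^2_F; its gradient w.r.t. W is
        # 2 (x^T E W + E^T x W) with E = x W W^T - x.
        err = x @ self.W @ self.W.T - x
        grad = 2.0 * (x.T @ err @ self.W + err.T @ x @ self.W)
        self.W -= lr * grad / len(x)


def blockwise_pretrain(blocks, X, epochs=50):
    """Train blocks one at a time on frozen features from the previous block."""
    inp = X
    for block in blocks:
        for _ in range(epochs):
            block.local_step(inp)       # only this block's weights update
        inp = block.forward(inp)        # frozen features; no backprop across blocks
    return inp
```

Each call to `local_step` updates only that block's weights, which is what makes the scheme parallelizable and memory-light compared to end-to-end backpropagation.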