Mike Young @mikeyoung44

New Attack Method Breaks Security Of Brain-Inspired AI Networks

SNNs were thought to be resistant to adversarial attacks, but researchers found a new 'BIS' attack that breaks them using hidden training backdoors

This is a Plain English Papers summary of a research paper called New Attack Method Breaks Security of Brain-Inspired AI Networks Using Hidden Training Backdoors. If you like these kinds of analyses, you should join AImodels.fyi or follow us on Twitter.

Overview

Spiking Neural Networks (SNNs) can resist adversarial attacks better than traditional neural networks
Researchers discovered surrogate gradients make SNNs vulnerable to attacks
The new "BIS" attack exploits these normally invisible surrogate gradients
BIS attack is more effective and uses fewer perturbations than existing methods
The atta...
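To make the surrogate-gradient idea concrete: an SNN neuron emits a binary spike when its membrane potential crosses a threshold, and since that step function has a zero derivative almost everywhere, training substitutes a smooth "surrogate" derivative in the backward pass. The sketch below is a minimal, generic illustration of that mechanism (a Heaviside forward pass with a fast-sigmoid surrogate); the function names, the `beta` sharpness parameter, and the specific surrogate shape are illustrative assumptions, not details from the paper.

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    # Forward pass: Heaviside step. The neuron fires (outputs 1)
    # only when membrane potential v reaches the threshold.
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    # Backward pass: fast-sigmoid surrogate for the step function's
    # derivative (which is zero almost everywhere). It peaks at the
    # threshold, so gradient signal flows through spike events.
    # beta controls how sharply the surrogate is concentrated.
    return beta / (beta * np.abs(v - threshold) + 1.0) ** 2

# Example membrane potentials for four neurons.
v = np.array([0.2, 0.9, 1.1, 2.0])
spikes = spike_forward(v)   # binary spike train: [0., 0., 1., 1.]
grads = surrogate_grad(v)   # largest for potentials near the threshold
```

It is this learned smoothing, invisible in the deployed network's spiking behavior, that the BIS attack targets during training.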