Analog substrates for temporal and local event-based computation
Author: Melika Payvand
Presentation type: Invited talk at SNUFA 2023 online workshop (7-8 Nov 2023)
Locality and sparsity are two key properties for reducing the energy footprint of neural network accelerators. Analog SNN hardware exploits these properties by (i) reducing data movement through co-locating memory and computing units, and (ii) sparsifying the number of times the memory is accessed. Such an “event-based in-memory computing” architecture has been realized by exploiting the physics of novel resistive random-access memories (RRAMs). In this talk, I will introduce two architectural solutions that further exploit RRAMs for efficient computation: temporal processing using delays, and reduced wiring through enforced local communication. The analog nature of RRAMs gives rise to physical heterogeneity, which we exploit for hardware-aware training of SNNs that map onto these architectures.
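As a minimal sketch of the heterogeneity idea mentioned above (not the speaker's actual method), the code below simulates a leaky integrate-and-fire layer whose per-neuron membrane time constants are drawn from a spread distribution, loosely mimicking RRAM device variability; hardware-aware training would treat these fixed, mismatched time constants as part of the model rather than as noise. All parameter values and the log-normal variability model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical device-variability model: per-neuron membrane time constants
# drawn from a log-normal spread around 20 ms (illustrative, not measured RRAM data).
n_neurons = 8
tau = rng.lognormal(mean=np.log(20e-3), sigma=0.3, size=n_neurons)  # seconds

dt = 1e-3                      # simulation time step (1 ms)
decay = np.exp(-dt / tau)      # per-neuron leak factor -- heterogeneous across the layer
v_th = 1.0                     # firing threshold

def lif_step(v, input_current):
    """One Euler step of a leaky integrate-and-fire layer with
    per-neuron (heterogeneous) membrane decay."""
    v = decay * v + input_current
    spikes = (v >= v_th).astype(float)
    v = v * (1.0 - spikes)     # reset membrane to 0 on spike
    return v, spikes

# Drive all neurons with the same constant input: the mismatched time
# constants alone produce different firing rates across the layer.
v = np.zeros(n_neurons)
spike_counts = np.zeros(n_neurons)
for _ in range(200):
    v, s = lif_step(v, 0.12)
    spike_counts += s
```

Under identical input, the spike counts differ purely because of the per-neuron time constants; training that is aware of this fixed heterogeneity can exploit it as a free source of temporal diversity instead of calibrating it away.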
Watch it on YouTube