
Weight Sparsity Complements Activity Sparsity in Neuromorphic Language Models

Authors: Rishav Mukherji, Mark Schöne, Khaleelulla Khan Nazeer, Christian Mayr, David Kappel, Anand Subramoney

Presenting author: Mark Schöne

Presentation type: Poster at SNUFA 2024 online workshop (5-6 Nov 2024)

Abstract

Activity and parameter sparsity are two standard methods for improving the computational efficiency of neural networks. Event-based architectures such as spiking neural networks (SNNs) naturally exhibit activity sparsity, and many methods exist to sparsify their connectivity by pruning weights. While the effect of weight pruning on feed-forward SNNs has previously been studied for computer vision tasks, its effect on complex sequence tasks like language modeling is less well studied, since SNNs have traditionally struggled to achieve meaningful performance on such tasks. Recent machine learning work shows that linear time-invariant dynamics are less suitable for recalling information from prior time steps than linear time-variant dynamics. Recall is fundamentally necessary for understanding language, and gating mechanisms, as employed by Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, mitigate this limitation of Leaky Integrate-and-Fire neurons. Using a recently published event-based GRU architecture that performs well on small-scale language modeling, we study the effects of weight pruning combined with activity sparsity. Specifically, we study the trade-off between the multiplicative efficiency gains the combination affords and its effect on task performance for language modeling. To dissect the effects of the two sparsities, we conduct a comparative analysis between densely activated models and sparsely activated event-based models across varying degrees of connectivity sparsity. We demonstrate that sparse activity and sparse connectivity complement each other without a proportional drop in task performance for an event-based neural network trained on the Penn Treebank and WikiText-2 language modeling datasets. Our results suggest that sparsely connected event-based neural networks are promising candidates for effective and efficient sequence modeling.
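To illustrate how the two forms of sparsity compose multiplicatively, the sketch below combines a thresholded GRU cell (activity sparsity) with magnitude-based weight pruning (connectivity sparsity). This is a minimal illustration under assumptions, not the event-based GRU implementation studied in the paper: the class and function names, the activity threshold, and the pruning ratio are chosen purely for clarity.

```python
# Minimal sketch (assumed names and hyperparameters, not the authors' EGRU code):
# a GRU cell whose sub-threshold outputs are zeroed (activity sparsity), with
# magnitude-based weight pruning applied on top (connectivity sparsity).
import torch
import torch.nn as nn


class ThresholdedGRUCell(nn.Module):
    def __init__(self, input_size, hidden_size, threshold=0.1):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.threshold = threshold

    def forward(self, x, h):
        h_new = self.cell(x, h)
        # Activity sparsity: units below the threshold emit no event, so
        # downstream matrix multiplies can skip the corresponding columns.
        mask = (h_new.abs() > self.threshold).float()
        return h_new * mask


def magnitude_prune_(module, sparsity=0.9):
    """Connectivity sparsity: zero the smallest-magnitude weights in place."""
    for name, param in module.named_parameters():
        if "weight" in name:
            k = int(sparsity * param.numel())
            if k == 0:
                continue
            thresh = param.abs().flatten().kthvalue(k).values
            param.data.mul_((param.abs() > thresh).float())


# If a fraction a of activations and a fraction w of weights are nonzero,
# a dense matrix-vector product needs roughly a * w of its multiply-adds.
cell = ThresholdedGRUCell(input_size=64, hidden_size=128, threshold=0.1)
magnitude_prune_(cell, sparsity=0.9)

x = torch.randn(8, 64)
h = torch.zeros(8, 128)
h = cell(x, h)

activity_density = (h != 0).float().mean().item()
weight_density = (cell.cell.weight_hh != 0).float().mean().item()
print(f"active units: {activity_density:.2f}, nonzero weights: {weight_density:.2f}, "
      f"effective op fraction ~ {activity_density * weight_density:.3f}")
```

The printed "effective op fraction" is only a back-of-the-envelope estimate of the multiplicative efficiency gain; the paper's actual operation counts depend on the specific event-based architecture and hardware assumptions.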