
Evolving Interpretable Plasticity Rules on Accelerated Neuromorphic Hardware

Authors: Maryeme Ouafoudi, Philipp Spilger, Eric Müller, Mihai A. Petrovici, Jakob Jordan

Presentation type: Poster at SNUFA 2024 online workshop (5-6 Nov 2024)

Abstract

Genetic Programming (GP) has recently emerged as a novel approach to uncovering mechanistic models of synaptic plasticity in spiking networks (Jordan et al., 2021). This approach, however, is compute-hungry, which limits its practical applicability. Here, we present initial steps toward leveraging the acceleration of mixed-signal neuromorphic devices to speed up the evolutionary search for plasticity rules. We consider the following task: an input projects onto a "student" and a "teacher" neuron. The synaptic weight to the teacher is fixed, while plasticity should adapt the student's weight so that its membrane potential matches the teacher's. We use Cartesian GP to search a space of mathematical expressions for plasticity rules that are functions of the average input rate and of the teacher's and student's membrane potentials. The fitness of a rule is determined by the difference between the neurons' membrane potentials and by the variance of its weight changes. While the evolutionary algorithm is executed on a host computer, neuron and weight dynamics are emulated (Billaudelle et al., 2020) on BrainScaleS-2 (BSS-2) – a versatile accelerated spiking mixed-signal neuromorphic system-on-chip (SOC) (Pehle et al., 2022). The SOC contains two general-purpose embedded processors, which can execute arbitrary learning rules. Here, we extend our PyNN (Davison, 2008) modeling front-end for BSS-2 (Müller et al., 2022) to support just-in-time compilation of rules generated by our evolutionary algorithm for execution on the hardware. The evolutionary search successfully rediscovers previously known, efficient error-correcting learning rules, in particular variants of the delta rule (Widrow & Hoff, 1988). Next, we will consider more complex tasks and hierarchical networks. The neuromorphic system's ability to emulate neuron dynamics roughly 1000x faster than biological real time, and to maintain this speed regardless of network size, promises significant improvements over software simulations for complex models.
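
To make the task and fitness structure concrete, the following Python sketch reduces the setup described above to leaky integrators driven by a shared input: a fixed teacher weight, a plastic student weight updated by a candidate rule of the average input rate and the two membrane potentials, and a fitness that penalizes both membrane-potential mismatch and the variance of the weight updates. This is an illustrative stand-in for the hardware emulation, not the authors' implementation; names and parameters such as emulate_trial, eta, tau, and variance_penalty are assumptions made for the example.

```python
# Minimal sketch (assumed simplification, not the authors' code) of the
# teacher-student task and the fitness used to score candidate plasticity rules.
import numpy as np


def emulate_trial(rule, w_teacher=0.8, w_student_init=0.1,
                  rate=20.0, steps=500, dt=1e-3, tau=20e-3, eta=1e-3, seed=0):
    """Run one trial; return (mean squared membrane mismatch, weight-change variance)."""
    rng = np.random.default_rng(seed)
    v_teacher, v_student, w_student = 0.0, 0.0, w_student_init
    errors, dws = [], []
    for _ in range(steps):
        # Noisy input sample around the average rate (stands in for input spikes).
        x = rate * (1.0 + 0.1 * rng.standard_normal())
        # Leaky-integrator membrane dynamics for teacher and student.
        v_teacher += dt / tau * (-v_teacher + w_teacher * x)
        v_student += dt / tau * (-v_student + w_student * x)
        # Candidate plasticity rule: a function of (rate, v_teacher, v_student).
        dw = eta * rule(rate, v_teacher, v_student)
        w_student += dw
        errors.append((v_teacher - v_student) ** 2)
        dws.append(dw)
    return float(np.mean(errors)), float(np.var(dws))


def fitness(rule, variance_penalty=10.0):
    """Higher is better: penalize membrane mismatch and noisy weight updates."""
    mse, dw_var = emulate_trial(rule)
    return -(mse + variance_penalty * dw_var)


# A delta-rule-like candidate of the kind the evolutionary search rediscovers:
# the weight change is proportional to the teacher-student error, scaled by the input rate.
delta_like = lambda rate, v_t, v_s: (v_t - v_s) * rate

print(f"fitness of delta-rule-like candidate: {fitness(delta_like):.4f}")
```

In the actual setup, the host-side evolutionary algorithm would evaluate many such candidate expressions, with the trial dynamics emulated on BSS-2 and the rule compiled just in time for the embedded processors rather than simulated in software as above.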