Laminar Organization and Information Integration in Spiking Recurrent Neural Networks
Authors: Aidai Kazybekova, Fatemeh Hadaeghi, Michael Winklhofer, Claus C. Hilgetag
Presentation type: Poster at SNUFA 2023 online workshop (7-8 Nov 2023)
Abstract
Biological recurrent neural networks exhibit an intriguing organization across spatial scales, sparking our curiosity regarding their origins, structural and functional constraints, and their role in information processing and computation. Particularly fascinating is the characteristic laminar organization found in cortical microcircuits, for instance, in the visual cortical system. Building upon Haeusler and Maass’s pioneering work (2007), we investigated the roles of layer-specific connectivity patterns in aggregating and integrating generic spike inputs into layers 4 and 2/3. We constructed a data-driven cortical microcircuit template as the spiking substrate of a reservoir computing framework, featuring leaky integrate-and-fire (LIF) neurons with dynamic synapses. We formulated a memory capacity (MC) task, in which the model classified whether the input streams to layers 4 and 2/3 originated from high-frequency or low-frequency Poisson spike generator populations in the previous time segment. Additionally, a delayed XOR task was designed such that the system had to determine whether, in the previous segment, the input streams presented to layers 4 and 2/3 originated from the same source. We also introduced three random control cases by implementing systematic random between-layer connectivity. Our findings in the MC task consistently revealed that individual layers within structured networks outperformed their counterparts in random networks. Interestingly, even though input stream 2 was received exclusively by layer 2/3, layer 5 demonstrated remarkable memory capacity in this data-driven circuit design. The superiority of structured networks was especially pronounced in the XOR task, where they outperformed random networks regardless of the readout site. Furthermore, we observed that the representations generated in layer 2/3 and layer 5 were particularly well-suited for the XOR task, which necessitates the integration of information from two input streams.
Additionally, a series of modifications we introduced to Haeusler and Maass’s implementation resulted in enhanced memory capacity and improved XOR performance.
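The task setup described above can be sketched in a few lines of Python. The following is a minimal illustration, not the authors' implementation: segment length, time step, and the two Poisson rates are hypothetical placeholder values, and the actual study drove a spiking LIF reservoir rather than reading labels directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (illustrative only, not values from the study)
SEGMENT_MS = 50                 # duration of one input segment
DT_MS = 1.0                     # simulation time step
RATE_HZ = {0: 5.0, 1: 40.0}     # 0 = low-frequency source, 1 = high-frequency source
N_SEGMENTS = 200

def poisson_segment(rate_hz, n_steps, dt_ms, rng):
    """Binary spike train: one spike per step with probability rate * dt."""
    p = rate_hz * dt_ms / 1000.0
    return (rng.random(n_steps) < p).astype(np.int8)

n_steps = int(SEGMENT_MS / DT_MS)

# For each segment, choose the source (low/high) of each input stream.
src_l4 = rng.integers(0, 2, N_SEGMENTS)    # stream delivered to layer 4
src_l23 = rng.integers(0, 2, N_SEGMENTS)   # stream delivered to layer 2/3

# Build the spike trains segment by segment.
stream_l4 = np.concatenate(
    [poisson_segment(RATE_HZ[s], n_steps, DT_MS, rng) for s in src_l4])
stream_l23 = np.concatenate(
    [poisson_segment(RATE_HZ[s], n_steps, DT_MS, rng) for s in src_l23])

# Targets are defined on the *previous* segment (hence the one-segment delay):
#   MC task : was the layer-4 stream high- or low-frequency?
#   XOR task: did the two streams originate from different sources?
mc_target = src_l4[:-1]
xor_target = (src_l4[:-1] != src_l23[:-1]).astype(int)
```

In the actual model, a readout trained on the reservoir state at the end of each segment would have to recover `mc_target` or `xor_target` from the network dynamics alone; the XOR target in particular requires integrating information across both input streams.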