
Training spiking trisynaptic circuit models to perform spatial-memory dependent navigation using reinforcement learning

Authors: Christopher Earl

Presentation type: Poster at SNUFA 2023 online workshop (7-8 Nov 2023)

Abstract

Artificial Neural Networks (ANNs) have hit performance plateaus in recent years, due in large part to the vanishing gradient problem. Non-gradient-based learning and neuromorphic-compatible models such as Spiking Neural Networks (SNNs) may offer a solution, provided they can be designed and trained effectively. Traditional ANN architectures and learning methods like Backpropagation Through Time are non-biological and known for poor task generalizability once trained. One advantage of SNNs over ANNs is the abundance of successful, highly generalizable biological computers already in existence. We believe that practical, generalizable SNNs require a mixture of biologically inspired circuitry and machine learning.

To test this, we replicated circuits and cell populations from the mammalian navigation cortex, chosen for its well-documented nature and the ease of testing it in simulated environments. For this project, we mimicked the Trisynaptic Circuit (TC) to solve tasks that require temporal and spatial reasoning. The TC is strongly associated with spatial reasoning and describes the flow of information from the external environment to the Hippocampus. The circuit is populated with many known cell types, including Grid Cells, which display hexagonally arranged receptive fields tied to the organism's location in its environment, and Place Cells, which fire in irregularly arranged receptive fields.

Grid Cells and their unique firing patterns have been replicated in prior research using SNNs and spike-timing-dependent reinforcement learning, but while these models solved simple tasks, they lacked architectural support for spatial memory and reasoning. To remedy this, we implemented Place Cells and replicated the connectivity from Grid to Place Cell populations observed in the Trisynaptic Circuit. These cells and their connections use a mixture of trained and static parameters that allow for consistent learning and generalizability.
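As an illustration of the hexagonally arranged receptive fields mentioned above (not the spiking model used in this work), a Grid Cell's firing-rate map is commonly idealized as the sum of three cosine gratings oriented 60° apart; the `spacing`, `orientation`, and `phase` parameters here are illustrative assumptions, not values from the authors' model.

```python
import numpy as np

def grid_cell_rate(x, y, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y).

    Summing three cosine gratings 60 degrees apart produces a
    hexagonal lattice of firing fields. All parameter names and
    constants are illustrative, not taken from the abstract's model.
    """
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number setting field spacing
    total = 0.0
    for i in range(3):
        theta = orientation + i * np.pi / 3  # gratings at 0, 60, 120 degrees
        total += np.cos(k * ((x - phase[0]) * np.cos(theta)
                             + (y - phase[1]) * np.sin(theta)))
    # the sum ranges over [-1.5, 3]; rescale to a rate in [0, 1]
    return (total + 1.5) / 4.5

# Peak firing at the phase origin of the three gratings:
print(grid_cell_rate(0.0, 0.0))  # → 1.0
```

Evaluating this function over a 2D grid of positions reproduces the familiar hexagonal lattice of firing fields seen in Grid Cell rate maps.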
Our goal is to engineer as few of these parameters as possible, while still benefiting from the biologically inspired architecture.