Comparison of synthetically trained non-spiking and spiking networks for event-based optical flow prediction
Authors: Jad Mansour, Hayat Rajani, Rafael Garcia, Nuno Gracias
Presentation type: Poster at SNUFA 2024 online workshop (5-6 Nov 2024)
Abstract
The joint use of event-based vision and spiking neural networks (SNNs) is expected to have a large impact on robotics in the near future, in tasks such as visual odometry and obstacle avoidance. While researchers have used real-world event datasets for optical flow prediction (mostly captured with unmanned aerial vehicles, UAVs), these datasets are limited in diversity and scalability and are challenging to collect. In particular, the complete lack of labeled event datasets for underwater applications hinders progress in combining event-based vision with autonomous underwater vehicles (AUVs). Synthetic datasets thus offer a scalable alternative that bridges the gap between simulation and reality. In this work, we develop and compare five UNet-based neural networks for event-based optical flow prediction. We implement two non-spiking networks: ConvGRU, which relies on convolutional GRUs as memory components, and Conv3D, which instead uses 3D convolutions. We also implement three hybrid spiking networks: ConvSpike, which uses the neuron's membrane potential as its memory component; ConvRSpike, which adds a recurrent component to its spiking layers; and ConvGRUSpike, which replaces the continuous activation function of ConvGRU with a spiking alternative. Moreover, we address the lack of datasets by introducing eWiz, a comprehensive library for processing event-based data. It includes tools for data loading, augmentation, visualization, encoding, and training-data generation, along with loss functions and performance metrics. We further present two synthetic event-based optical flow datasets and data generation pipelines built on top of eWiz: eCARLA-scenes, which uses the CARLA simulator for self-driving cars, and eStonefish-scenes, which uses the Stonefish simulator for underwater vehicles, depicting diverse scenarios. Lastly, we demonstrate eWiz by comparing the generalization performance of our spiking and non-spiking networks, trained on synthetic data and evaluated on real-world data. The ultimate goal is to lay a foundation for advancing event-based camera applications in autonomous navigation for AUVs using SNNs on neuromorphic hardware such as the Intel Loihi.
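To make the memory mechanism of the spiking variants concrete, the following PyTorch sketch shows how a leaky integrate-and-fire (LIF) neuron's membrane potential can act as the recurrent state of a convolutional layer, the idea behind ConvSpike. This is a minimal illustration under stated assumptions, not the authors' implementation; the class name, leak factor, and threshold values are all assumed.

    import torch
    import torch.nn as nn

    class ConvLIFCell(nn.Module):
        """Convolutional leaky integrate-and-fire cell (illustrative sketch).
        The membrane potential `mem` carries temporal context across event
        frames, playing the role a hidden state plays in a ConvGRU."""
        def __init__(self, in_ch, out_ch, beta=0.9, threshold=1.0):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            self.beta = beta            # membrane leak factor (assumed value)
            self.threshold = threshold  # firing threshold (assumed value)

        def forward(self, x, mem=None):
            cur = self.conv(x)                     # synaptic input current
            if mem is None:
                mem = torch.zeros_like(cur)
            mem = self.beta * mem + cur            # leaky integration over time
            spk = (mem >= self.threshold).float()  # Heaviside spike; training
                                                   # requires a surrogate gradient
            mem = mem - spk * self.threshold       # soft reset after a spike
            return spk, mem

Threading `mem` through successive event frames gives the spiking layer memory without a dedicated gating unit, which is what distinguishes ConvSpike from the GRU-based variants; ConvRSpike would additionally feed the layer's own spikes back as input.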
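Likewise, since eWiz's API is not shown in this abstract, the snippet below sketches one common event encoding (a time-bilinear voxel grid, widely used in event-based optical flow work) rather than eWiz's actual interface; the function name and signature are hypothetical.

    import numpy as np

    def events_to_voxel_grid(xs, ys, ts, ps, num_bins, height, width):
        """Accumulate events (pixel coords xs/ys, timestamps ts, polarities ps)
        into a (num_bins, H, W) grid, splitting each event between its two
        nearest temporal bins. A common encoding, not necessarily eWiz's."""
        xs, ys = xs.astype(int), ys.astype(int)
        voxel = np.zeros((num_bins, height, width), dtype=np.float32)
        # Normalize timestamps to [0, num_bins - 1].
        t = (ts - ts[0]) / max(ts[-1] - ts[0], 1e-9) * (num_bins - 1)
        t0 = np.floor(t).astype(int)
        dt = t - t0
        pol = np.where(ps > 0, 1.0, -1.0)
        # Bilinear interpolation along the time axis.
        np.add.at(voxel, (t0, ys, xs), pol * (1.0 - dt))
        np.add.at(voxel, (np.clip(t0 + 1, 0, num_bins - 1), ys, xs), pol * dt)
        return voxel

An encoding of this kind turns an asynchronous event stream into a dense tensor that both the non-spiking and spiking UNet variants can consume frame by frame.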