From Chaos Comes Order: Ordering Event Representations for Object Detection

04/26/2023
by   Nikola Zubić, et al.

Today, state-of-the-art deep neural networks that process events first convert them into dense, grid-like input representations before using an off-the-shelf network. However, selecting the appropriate representation for the task traditionally requires training a neural network for each candidate representation and picking the best one based on the validation score, which is very time-consuming. In this work, we eliminate this bottleneck by selecting the best representation based on the Gromov-Wasserstein Discrepancy (GWD) between the raw events and their representation. It is approximately 200 times faster to compute than training a neural network and preserves the task performance ranking of event representations across multiple representations, network backbones, and datasets. This means that finding a representation with a high task score is equivalent to finding a representation with a low GWD. We use this insight to, for the first time, perform a hyperparameter search over a large family of event representations, revealing new and powerful representations that exceed the state-of-the-art. On object detection, our optimized representation outperforms existing representations by 1.9% mAP on the 1 Mpx dataset and 8.6% mAP on the Gen1 dataset, and even outperforms the state-of-the-art by 1.8% mAP on the 1 Mpx dataset and 6.0% mAP on the Gen1 dataset, opening the unexplored field of explicit representation optimization for event-based learning methods.
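The pipeline the abstract describes has two steps: first, raw events (x, y, t, p) are binned into a dense, grid-like tensor; second, a discrepancy between the raw event cloud and the gridded representation is used to rank candidate representations without training. The sketch below illustrates both steps under loud assumptions: `events_to_voxel_grid` is one common grid-like representation (not necessarily the family searched in the paper), and `gw_proxy` is a cheap distance-distribution proxy in the spirit of GW lower bounds, not the exact Gromov-Wasserstein solver the authors use.

```python
import numpy as np


def events_to_voxel_grid(events, num_bins, height, width):
    """Bin raw events (x, y, t, p) into a dense (num_bins, H, W) voxel grid.

    This is one standard grid-like event representation: the time axis is
    split into `num_bins` slices and signed polarities are accumulated
    per pixel. (Illustrative choice, not the paper's exact family.)
    """
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    grid = np.zeros((num_bins, height, width), dtype=np.float64)
    # Normalize timestamps into [0, num_bins) and take the bin index.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1e-9)
    b = t_norm.astype(int)
    # Scatter-add each event's polarity into its (bin, y, x) cell.
    np.add.at(grid, (b, y, x), p)
    return grid


def gw_proxy(points_a, points_b, n_quantiles=256):
    """A crude proxy for the Gromov-Wasserstein discrepancy (assumption:
    this follows the distance-distribution lower-bound idea, comparing the
    1D distributions of within-cloud pairwise distances via quantiles,
    rather than solving the full GW optimal-transport problem)."""
    def dist_quantiles(pts):
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        vals = d[np.triu_indices(len(pts), k=1)]  # upper-triangle distances
        return np.quantile(vals, np.linspace(0.0, 1.0, n_quantiles))

    # Mean absolute gap between matched distance quantiles; 0 means the
    # two clouds have identical internal distance structure.
    return np.abs(dist_quantiles(points_a) - dist_quantiles(points_b)).mean()
```

A representation with many hyperparameter settings (number of bins, accumulation function, etc.) could then be ranked by evaluating `gw_proxy` between the raw event cloud and the point cloud of active voxel coordinates for each setting, keeping the lowest-discrepancy one, instead of training one detector per setting.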
