This paper presents a Dynamic Vision Sensor (DVS) based system for reasoning about high-speed motion. The solution takes the event stream from a DVS and encodes the temporal events with a set of causal exponential filters across multiple time scales. We couple these filters with a Convolutional Neural Network (CNN) to efficiently extract relevant spatiotemporal features. The combined network learns to output both the expected time to collision of the object and the predicted collision point on a discretized polar grid. These critical estimates are computed by the network with minimal delay in order to react appropriately to the incoming object. We highlight the results of our system on a toy dart moving at 23.4 m/s, with a 24.73° error in $\theta$, an 18.4 mm average discretized radius prediction error, and a 25.03% median time-to-collision error.
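The causal exponential filtering described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: events are assumed to arrive as (x, y, t, polarity) tuples, and the time constants `taus` and the dense per-event decay are illustrative choices.

```python
# Sketch of a causal exponential filter bank over a DVS event stream.
# Each time scale maintains its own decayed activity surface; slower
# time scales retain older events longer, giving the downstream CNN
# multi-scale temporal context. Hypothetical example, not from the paper.
import numpy as np

def exponential_surfaces(events, height, width, taus):
    """Integrate events into one exponentially decayed surface per time scale."""
    surfaces = np.zeros((len(taus), height, width))
    last_t = 0.0
    for x, y, t, p in events:
        dt = t - last_t
        for i, tau in enumerate(taus):
            surfaces[i] *= np.exp(-dt / tau)   # causal decay since last event
            surfaces[i, y, x] += p             # accumulate event polarity
        last_t = t
    return surfaces

# Three synthetic events on a 4x4 sensor, with a fast and a slow time scale.
events = [(1, 1, 0.000, 1), (1, 2, 0.005, 1), (2, 2, 0.010, -1)]
surf = exponential_surfaces(events, 4, 4, taus=[0.01, 0.1])
print(surf.shape)  # (2, 4, 4)
```

The slower time scale (larger tau) decays less between events, so the earliest event contributes more to its surface than to the fast-scale surface; stacking the surfaces channel-wise yields a natural CNN input.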

Authors: Anthony Bisulco, Fernando Cladera Ojeda, Volkan Isler, Daniel D. Lee

Links: PDF - Abstract

Code:

Keywords: time - network - collision - neural - spatiotemporal
