A paper presents a perception framework that fuses visual and tactile feedback to predict the expected motion of objects in dynamic scenes. It uses a novel See-Through-your-Skin sensor that captures both the visual appearance and the tactile properties of objects. The perceptual system can be used to infer the outcome of future physical interactions, which the authors validate through simulated and real-world experiments in which the resting state of an object is predicted from given initial conditions. They interpret the dual-stream signals from the sensor using a Multimodal Variational Autoencoder (MVAE), allowing them to capture both modalities of contacting objects and to develop a mapping from visual to tactile interaction and vice versa. They also use the MVAE to predict the future interaction of objects when they come into contact with the sensor.
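To make the multimodal fusion concrete, below is a minimal sketch of a Multimodal Variational Autoencoder that combines a visual and a tactile encoder through a product-of-experts posterior (in the spirit of Wu and Goodman's MVAE). The layer sizes, names, and the product-of-experts fusion rule are illustrative assumptions, not the paper's exact architecture; the key property it demonstrates is that either modality can be missing at inference time, which is what enables the visual-to-tactile mapping and vice versa.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps one modality (flattened image or tactile map) to Gaussian parameters."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

def product_of_experts(mus, logvars):
    """Fuse per-modality Gaussians with a standard-normal prior expert."""
    # Precision-weighted combination: precision = 1 / variance.
    precisions = [torch.ones_like(mus[0])] + [torch.exp(-lv) for lv in logvars]
    means = [torch.zeros_like(mus[0])] + list(mus)
    total_prec = sum(precisions)
    mu = sum(m * p for m, p in zip(means, precisions)) / total_prec
    return mu, torch.log(1.0 / total_prec)

class MVAE(nn.Module):
    # Dimensions below are placeholder assumptions for a flattened input.
    def __init__(self, vis_dim=1024, tac_dim=1024, z_dim=32):
        super().__init__()
        self.enc_vis = Encoder(vis_dim, z_dim)
        self.enc_tac = Encoder(tac_dim, z_dim)
        self.dec_vis = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, vis_dim))
        self.dec_tac = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, tac_dim))

    def forward(self, vis=None, tac=None):
        # Either modality may be absent; the product of experts fuses
        # whatever evidence is available into a single latent posterior.
        mus, logvars = [], []
        if vis is not None:
            m, lv = self.enc_vis(vis); mus.append(m); logvars.append(lv)
        if tac is not None:
            m, lv = self.enc_tac(tac); mus.append(m); logvars.append(lv)
        assert mus, "at least one modality is required"
        mu, logvar = product_of_experts(mus, logvars)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec_vis(z), self.dec_tac(z), mu, logvar

Under these assumptions, calling the model with only the visual stream (model(vis=x)) still returns a tactile reconstruction, since both decoders read from the shared latent code; training would pair a reconstruction loss on each modality with the usual KL term against the standard-normal prior.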

Authors: Sahand Rezaei-Shoshtari, Francois Robert Hogan, Michael Jenkin, David Meger, Gregory Dudek

Links: PDF - Abstract

Code:

https://github.com/doty-k/world_models


Keywords: objects - visual - tactile - sensor - MVAE
