Vision and language tasks such as Visual Relation Detection and Visual Question Answering benefit from semantic features that afford proper grounding of language. The 3D depth of objects depicted in 2D images is one such feature. However, it is very difficult to obtain accurate depth information without learning the appropriate features, which are scene dependent. The state of the art in this area consists of complex neural network models trained on stereo image data to predict depth per pixel. Fortunately, in some tasks only the relative depth between objects is required. In this paper, the extent to which semantic features can predict coarse relative depth is investigated. The results are compared to those obtained from averaging the output of the monodepth neural network model, which represents the state of the art. An overall increase of 14% in relative depth accuracy over relative depth computed from the monodepth model is achieved.
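The monodepth-based baseline described above can be sketched as follows: average a per-pixel depth map inside each object's bounding box and compare the two means to obtain a coarse relative-depth label. This is a minimal illustration, not the authors' implementation; the function names, the box format `(x1, y1, x2, y2)`, and the tolerance for "roughly equal depth" are all assumptions.

```python
import numpy as np

def mean_box_depth(depth_map, box):
    """Average predicted depth inside a bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return float(depth_map[y1:y2, x1:x2].mean())

def coarse_relative_depth(depth_map, box_a, box_b, tol=0.05):
    """Classify two objects' relative depth from a per-pixel depth map:
    -1 if A is closer, +1 if B is closer, 0 if roughly equal
    (within a relative tolerance `tol` -- an illustrative choice)."""
    da = mean_box_depth(depth_map, box_a)
    db = mean_box_depth(depth_map, box_b)
    if abs(da - db) <= tol * max(da, db):
        return 0
    return -1 if da < db else 1

# Toy depth map: left half near (depth 2.0), right half far (depth 10.0).
depth = np.full((100, 100), 2.0)
depth[:, 50:] = 10.0
print(coarse_relative_depth(depth, (0, 0, 40, 100), (60, 0, 100, 100)))  # → -1
```

The paper's contribution is to predict this same three-way label directly from semantic features instead of a learned depth map, reporting a 14% accuracy gain over this kind of averaging baseline.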

Author(s) : Stefan Cassar, Adrian Muscat, Dylan Seychell

Links : PDF - Abstract

Code :

https://github.com/oktantod/RoboND-DeepLearning-Project


Keywords : depth - features - relative - objects - neural
