Vision and language tasks such as Visual Relation Detection and Visual Question Answering benefit from semantic features that afford proper grounding of language. The 3D depth of objects depicted in 2D images is one such feature. However, it is very difficult to obtain accurate depth information without learning the appropriate features, which are scene dependent. The state of the art in this area consists of complex neural network models trained on stereo image data to predict depth per pixel. Fortunately, in some tasks only the relative depth between objects is required. In this paper, the extent to which semantic features can predict coarse relative depth is investigated. The results are compared to those obtained from averaging the output of the monodepth neural network model, which represents the state of the art. An overall increase of 14% in relative depth accuracy over relative depth computed from the monodepth-derived results is achieved.
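The monodepth baseline described above can be sketched as follows: given a per-pixel depth map predicted by a monocular depth network, average the depth inside each object's bounding box and compare the two averages to obtain a coarse relative-depth label. This is a minimal illustration, not the authors' implementation; the function name, box format, and the convention that smaller depth means closer are all assumptions.

```python
import numpy as np

def relative_depth(depth_map, box_a, box_b):
    """Coarse relative depth between two objects, baseline style:
    average the per-pixel depth predictions inside each bounding box.
    Boxes are (x0, y0, x1, y1) in pixel coordinates; smaller mean
    depth is taken to mean the object is closer to the camera."""
    def mean_depth(box):
        x0, y0, x1, y1 = box
        return depth_map[y0:y1, x0:x1].mean()

    da, db = mean_depth(box_a), mean_depth(box_b)
    if da < db:
        return "a_closer"
    elif da > db:
        return "b_closer"
    return "same"
```

In practice the depth map would come from a model such as monodepth; here any 2D array of per-pixel depths will do, which also makes the averaging step easy to test in isolation.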
Authors: Stefan Cassar, Adrian Muscat, Dylan Seychell
Code :
https://github.com/oktantod/RoboND-DeepLearning-Project
Keywords: depth, features, relative, objects, neural