Learned networks in the domain of visual recognition and cognition impress in part because they are trained with datasets many orders of magnitude smaller than the full population of possible images. Here, we try a novel approach in the tradition of the thought experiment. We run this thought experiment on a real domain of visual objects that we can fully characterize, and we examine specific gaps in training data and their impact on performance requirements. Our Thought Experiment Probe approach, coupled with the resulting Bias Breakdown, can be very informative toward understanding the impact of biases. Our thought experiment points to three conclusions: first, that generalization behavior depends on how sufficiently the particular dimensions of the domain are represented during training; second, that the utility of any generalization is completely dependent on the acceptable system error; and third, that specific visual features of objects, such as pose orientations out of the imaging plane or colours, may not be recoverable if they are not sufficiently represented in a training set.
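The third conclusion can be made concrete with a toy simulation (our own illustrative construction, not the authors' probe): a classifier trained only on near-frontal poses can fail on objects seen at steep out-of-plane rotations, because the projected appearance of an unseen pose falls outside the training distribution. The object lengths, angle ranges, and 1-nearest-neighbour classifier below are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(length, angle):
    # apparent length of a rod of true `length` rotated `angle` out of the imaging plane
    return length * np.cos(angle)

def make_set(angles, n):
    # class 0: short object (length 1), class 1: long object (length 3)
    X, y = [], []
    for _ in range(n):
        a = rng.choice(angles)
        label = int(rng.integers(0, 2))
        length = 1.0 if label == 0 else 3.0
        X.append([project(length, a) + rng.normal(0, 0.05)])
        y.append(label)
    return np.array(X), np.array(y)

train_angles = np.linspace(0, np.pi / 6, 10)           # near-frontal poses only
test_angles = np.linspace(np.pi / 3, 1.55, 10)         # steep, unseen poses

Xtr, ytr = make_set(train_angles, 500)
Xte, yte = make_set(test_angles, 500)

def predict(X):
    # 1-nearest-neighbour on projected length, plain NumPy
    d = np.abs(X - Xtr.T)                              # (n_query, n_train) distances
    return ytr[np.argmin(d, axis=1)]

acc_tr = (predict(Xtr) == ytr).mean()
acc_te = (predict(Xte) == yte).mean()
print(f"seen poses:   {acc_tr:.2f}")
print(f"unseen poses: {acc_te:.2f}")
```

At steep angles the long object's projection shrinks into the range the classifier learned for the short object, so accuracy on unseen poses collapses toward chance while accuracy on the seen pose range stays near perfect.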

Author(s) : John K. Tsotsos, Jun Luo

Links : PDF - Abstract

Code :

https://github.com/oktantod/RoboND-DeepLearning-Project



Keywords : training - experiment - data - thought - visual
