Catastrophic forgetting remains a severe hindrance to the broad application of artificial neural networks, yet it continues to be a poorly understood phenomenon. We argue that it is still unclear how exactly the phenomenon should be quantified. We recommend that inter-task forgetting in supervised learning be measured with both retention and relearning metrics concurrently, and that forgetting in reinforcement learning be measured with pairwise interference. In many instances, classical algorithms such as vanilla SGD experience less catastrophic forgetting than more modern algorithms like Adam. We show that the degree to which a learning system experiences catastrophic forgetting is sufficiently sensitive to the metric used that a change from one principled metric to another is enough to change the conclusions of a study dramatically. Based on these results, we recommend a much more rigorous experimental methodology when studying catastrophic forgetting.
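
The abstract names three measurement ideas: retention, relearning, and pairwise interference. The sketch below is a minimal illustration of how such quantities might be computed; it is not the authors' code, and the function names and the `evaluate`, `train_one_epoch`, `loss`, and `apply_update` callables are hypothetical placeholders.

```python
# Minimal sketch (assumed interfaces, not the paper's implementation) of the
# three measurement ideas named in the abstract.

def retention(evaluate, task_a, acc_before_interference):
    """Retention: accuracy still held on task A after training on task B,
    relative to the accuracy measured before the interfering task."""
    return evaluate(task_a) / acc_before_interference


def relearning_time(train_one_epoch, evaluate, task_a, target_accuracy, max_epochs=100):
    """Relearning: how many epochs of retraining on task A are needed to
    recover the accuracy it had before the interfering task was learned."""
    for epoch in range(1, max_epochs + 1):
        train_one_epoch(task_a)
        if evaluate(task_a) >= target_accuracy:
            return epoch
    return max_epochs  # accuracy never recovered within the budget


def pairwise_interference(loss, apply_update, sample_i, sample_j):
    """Pairwise interference: how much a single update on sample j changes the
    loss on sample i (positive values indicate interference, negative transfer)."""
    loss_before = loss(sample_i)
    apply_update(sample_j)  # one optimizer step on sample j
    loss_after = loss(sample_i)
    return loss_after - loss_before
```

Under these assumed interfaces, the two supervised-learning quantities would be tracked together for every pair of tasks, while pairwise interference would be averaged over sampled pairs of updates during training.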

Author(s) : Dylan R. Ashley, Sina Ghiassian, Richard S. Sutton

Links : PDF - Abstract

Code :

https://github.com/oktantod/RoboND-DeepLearning-Project


Keywords : forgetting - catastrophic - inter-task - cases
