In standard neural networks, the amount of computation used grows with the size of the inputs, but not with the complexity of the problem being learnt. To overcome this limitation, PonderNet learns end-to-end the number of computational steps needed to achieve an effective compromise between training prediction accuracy, computational cost, and generalization. On a complex synthetic problem, PonderNet dramatically improves performance over previous adaptive-computation methods and succeeds at extrapolation tests where traditional neural networks fail. The method also matched the current state-of-the-art results on a real-world question-answering dataset while using less compute. Finally, PonderNet reached state-of-the-art results on a complex task designed to test the reasoning capabilities of neural networks.
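The core mechanism in the paper is a learned halting distribution: at each step n the network emits a halting probability λ_n, and the probability of stopping exactly at step n is p_n = λ_n · Π_{j<n}(1 − λ_j). Training minimizes the expected prediction loss under this distribution plus a KL regularizer toward a geometric prior over the number of steps. The PyTorch sketch below illustrates that mechanism; the module and function names (PonderStep, ponder_forward, ponder_loss) and the hyperparameter values (max_steps, lambda_p, beta) are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of PonderNet-style halting in PyTorch. Names and
# hyperparameters are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class PonderStep(nn.Module):
    """One pondering step: update the hidden state, emit a prediction
    y_n and a scalar halting probability lambda_n."""

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.cell = nn.GRUCell(in_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, out_dim)
        self.halt = nn.Linear(hidden_dim, 1)

    def forward(self, x, h):
        h = self.cell(x, h)
        y = self.out(h)                      # step prediction y_n
        lam = torch.sigmoid(self.halt(h))    # halting probability lambda_n
        return y, lam.squeeze(-1), h


def ponder_forward(step, x, h, max_steps=20):
    """Unroll the step network, collecting per-step predictions and the
    probability of halting exactly at step n:
        p_n = lambda_n * prod_{j<n} (1 - lambda_j)."""
    ys, ps = [], []
    still_running = torch.ones(x.size(0), device=x.device)  # prod (1 - lambda_j)
    for n in range(max_steps):
        y, lam, h = step(x, h)
        if n == max_steps - 1:
            lam = torch.ones_like(lam)       # force halting at the final step
        ps.append(still_running * lam)
        ys.append(y)
        still_running = still_running * (1.0 - lam)
    return torch.stack(ys), torch.stack(ps)  # (steps, batch, out), (steps, batch)


def ponder_loss(ys, ps, target, task_loss, lambda_p=0.2, beta=0.01):
    """Expected task loss under the halting distribution, plus a KL
    regularizer toward a truncated geometric prior with parameter
    lambda_p (hyperparameter values here are assumed)."""
    steps = ps.size(0)
    # Expected prediction loss: sum_n p_n * L(y_n, target), per example.
    rec = sum(ps[n] * task_loss(ys[n], target) for n in range(steps)).mean()
    # Truncated geometric prior over halting steps, renormalized.
    n = torch.arange(1, steps + 1, dtype=ps.dtype, device=ps.device)
    prior = lambda_p * (1.0 - lambda_p) ** (n - 1)
    prior = prior / prior.sum()
    # KL(p || prior), summed over steps, averaged over the batch.
    kl = (ps * (ps.clamp_min(1e-8) / prior.view(-1, 1)).log()).sum(0).mean()
    return rec + beta * kl
```

Here task_loss is any per-example loss that returns one value per batch element (e.g. a reduction='none' loss averaged over output dimensions). At evaluation time the network can stop as soon as a sampled Bernoulli(λ_n) fires rather than unrolling all steps, which is where the compute savings over a fixed-depth network come from.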

Authors: Andrea Banino, Jan Balaguer, Charles Blundell

Links: PDF - Abstract


Keywords: pondernet - results - neural networks - state-of-the-art
