Conventional DNN training paradigms typically rely on one training set and one validation set. The validation set can hardly guarantee an unbiased estimate of generalization performance due to potential mismatch with the test data. Moreover, training a DNN corresponds to solving a complex optimization problem, which is prone to getting trapped in inferior local optima and thus leads to undesired training results. To address these issues, we propose a novel DNN training framework. It generates multiple pairs of training and validation sets from the gross training set via random splitting, trains a DNN model of a pre-specified structure on each pair while allowing the useful knowledge (e.g., promising network parameters) obtained from one model-training process to be transferred to the other model-training processes via multi-task optimization, and outputs the model that achieves the best overall performance across all validation sets. The framework can also improve generalization performance via implicit regularization. We implement the proposed framework, parallelize the implementation on a GPU cluster, and apply it to train several widely used DNN models.
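To make the described workflow concrete, below is a minimal PyTorch sketch of the pipeline: random splitting of the gross training set into several train/validation pairs, one model of the same architecture per pair, a periodic knowledge-transfer step between the models, and selection of the model with the best average validation performance. The toy data, architecture, split sizes, and the simple parameter-averaging transfer step are illustrative assumptions, not the paper's actual multi-task optimization scheme.

```python
import copy
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

# Hypothetical toy data and architecture; replace with the real dataset/model.
X, y = torch.randn(1000, 20), torch.randint(0, 2, (1000,))
gross_set = TensorDataset(X, y)

def make_model():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

n_pairs = 3  # number of training/validation pairs drawn from the gross set
pairs = [random_split(gross_set, [800, 200]) for _ in range(n_pairs)]
models = [make_model() for _ in range(n_pairs)]
opts = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in models]
loss_fn = nn.CrossEntropyLoss()

def validate(model, val_set):
    """Accuracy of `model` on one validation split."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for xb, yb in DataLoader(val_set, batch_size=256):
            correct += (model(xb).argmax(1) == yb).sum().item()
            total += len(yb)
    return correct / total

for epoch in range(20):
    # Train each model on its own training split.
    for model, opt, (train_set, _) in zip(models, opts, pairs):
        model.train()
        for xb, yb in DataLoader(train_set, batch_size=64, shuffle=True):
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()

    # Assumed knowledge-transfer step: softly pull every model's parameters
    # toward those of the currently best model. The paper's multi-task
    # optimization is more sophisticated; this is only a stand-in.
    scores = [validate(m, val) for m, (_, val) in zip(models, pairs)]
    best = models[max(range(n_pairs), key=lambda i: scores[i])]
    with torch.no_grad():
        for m in models:
            if m is best:
                continue
            for p, p_best in zip(m.parameters(), best.parameters()):
                p.mul_(0.9).add_(p_best, alpha=0.1)

# Output the model with the best average accuracy across all validation sets.
avg_scores = [sum(validate(m, val) for _, val in pairs) / n_pairs for m in models]
final_model = copy.deepcopy(models[max(range(n_pairs), key=lambda i: avg_scores[i])])
```

In a parallelized setting such as the GPU cluster mentioned above, each of the per-pair training loops would run on its own device, with only the transfer step requiring communication between workers.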


Code: None

Keywords: training - performance - dnn - pairs - framework
