Optical flow is widely adopted in video style transfer despite its high computational complexity, e.g., occupying over 97% of inference time. This paper proposes to learn a lightweight video style transfer network via a knowledge distillation paradigm. Two teacher networks are adopted, one of which takes optical flow during inference while the other does not. The output difference between these two teacher networks highlights the improvements made by optical flow, which is then adopted to distill the target student network. Furthermore, a low-rank distillation loss is employed to stabilize the output of the student network by mimicking the rank of the input videos. Extensive experiments demonstrate that our student network, without an optical flow module, still generates stable videos and runs much faster than the teacher network.
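The two losses described above can be sketched roughly as follows. This is an illustrative assumption of how such losses might look, not the paper's exact formulation: the teacher-difference weighting and the nuclear norm (a common convex surrogate for matrix rank) are stand-ins for whatever the authors actually use, and all function and variable names are hypothetical.

```python
import numpy as np

def distillation_losses(student, teacher_flow, teacher_noflow, video):
    """Illustrative sketch of the two losses in the abstract (not the
    paper's exact definitions).

    student, teacher_flow, teacher_noflow : stylized outputs, shape (T, H, W)
    video : input frames, shape (T, H, W)
    """
    # The gap between the two teachers marks where optical flow improved
    # the output; use it to weight how strongly the student must mimic
    # the flow-based teacher at those locations.
    improvement = np.abs(teacher_flow - teacher_noflow)
    distill_loss = np.mean(improvement * (student - teacher_flow) ** 2)

    # Low-rank loss: flatten each frame into a row of a (T, H*W) matrix and
    # compare nuclear norms (sum of singular values, a convex surrogate for
    # rank) so the student's output mimics the rank of the input video.
    def nuclear_norm(frames):
        mat = frames.reshape(frames.shape[0], -1)
        return np.linalg.svd(mat, compute_uv=False).sum()

    lowrank_loss = abs(nuclear_norm(student) - nuclear_norm(video))
    return distill_loss, lowrank_loss
```

Both terms are scalars, so they could be combined with the usual style-transfer objectives by weighted summation.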

Links: PDF - Abstract

Code :

None

Keywords: network - flow - optical - student - distillation
