Video-based person re-identification (Re-ID) aims to retrieve video sequences of the same person across non-overlapping cameras. Previous methods usually focus on limited views, such as the spatial, temporal, or spatial-temporal view. To capture richer perceptions and extract more comprehensive video representations, in this paper we propose a novel framework named Trigeminal Transformers (TMT) for video-based person Re-ID. The experimental results indicate that our approach achieves better performance than other state-of-the-art approaches on public Re-ID benchmarks. We will release the code for model reproduction.
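The abstract mentions three complementary views of a video: spatial, temporal, and spatial-temporal. As a rough illustration of what such a split could look like (not the authors' actual TMT architecture — the pooling choices, shapes, and function name below are hypothetical), one might form three token sequences from a single video feature tensor:

```python
import numpy as np

def trigeminal_views(video):
    """Split a (T, H, W, C) video feature tensor into three illustrative views.

    Hypothetical sketch only; TMT's real feature extractor is more involved.
    """
    T, H, W, C = video.shape
    # Spatial view: average over time, keep one token per spatial location.
    spatial = video.mean(axis=0).reshape(H * W, C)          # (H*W, C)
    # Temporal view: average over space, keep one token per frame.
    temporal = video.mean(axis=(1, 2))                      # (T, C)
    # Spatial-temporal view: keep every frame-location token jointly.
    spatio_temporal = video.reshape(T * H * W, C)           # (T*H*W, C)
    return spatial, temporal, spatio_temporal

# Example: an 8-frame tracklet with a 4x2 feature grid and 16 channels.
video = np.random.rand(8, 4, 2, 16)
s, t, st = trigeminal_views(video)
print(s.shape, t.shape, st.shape)  # (8, 16) (8, 16) (64, 16)
```

Each of the three sequences could then feed a separate (self-view) transformer branch, with a further module mixing information across views.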

Author(s) : Xuehu Liu, Pingping Zhang, Chenyang Yu, Huchuan Lu, Xuesheng Qian, Xiaoyun Yang

Links : PDF - Abstract

Code :

Keywords : video - model - person - re-id - transformers
