The recent success of deep learning has led to various effective representation learning methods for videos. However, current approaches to video representation require large amounts of human-labeled data for effective learning. We present an unsupervised representation learning framework to encode scene dynamics in videos captured from multiple viewpoints. The proposed framework has two main components: a Representation Learning Network (RL-NET), which learns a representation with the help of a Blending Network (BL-NET), and a Video Rendering Network (VR-NET), which is used for video synthesis. The framework takes as input video clips from different viewpoints and times, learns an internal representation, and uses this representation to render a video clip from an arbitrary given viewpoint and time. The ability of the proposed network to render video frames from arbitrary viewpoints and times enables it to learn a meaningful and robust representation of the scene dynamics. We demonstrate the effectiveness of the proposed method in rendering view-aware as well as time-aware video clips on two real-world datasets, UCF-101 and NTU-RGB+D. To further validate the effectiveness of the learned representation, we use it for the task of view-invariant activity classification, where we observe a significant improvement (∼26%) in performance on the NTU-RGB+D dataset compared to existing state-of-the-art methods.

Figure 1: An overview of the proposed video rendering framework. An activity is captured from different viewpoints (v1, v2, and v3), providing observations (o1, o2, and o3). Video clips from two of these viewpoints (v1 and v2) at arbitrary times (t1 and t2) are used to learn a scene and dynamics representation (r) for this activity, employing the proposed RL-NET. The learned representation (r) is then used to render a video from an arbitrary query viewpoint (v3) and time (t3) using the proposed VR-NET.
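The pipeline described above (encode each observed clip with RL-NET, blend per-observation representations into a single scene representation r via BL-NET, then decode a query viewpoint and time with VR-NET) can be sketched as follows. This is a minimal illustration of the data flow only: the network names follow the abstract, but the internal computations here (random projections and mean pooling) are placeholder assumptions, not the paper's actual architectures.

```python
# Hypothetical sketch of the RL-NET / BL-NET / VR-NET data flow.
# The function bodies are stand-ins; only the inputs/outputs mirror the paper.
import numpy as np

REPR_DIM = 64  # assumed size of the scene representation r


def rl_net(clip, viewpoint, time):
    """Encode one observed clip plus its viewpoint and time into a
    per-observation representation. Placeholder: fixed random projection."""
    rng = np.random.default_rng(0)
    x = np.concatenate([clip.ravel(), viewpoint.ravel(), [time]])
    w = rng.standard_normal((x.size, REPR_DIM))
    return x @ w


def bl_net(per_obs_reprs):
    """Blend per-observation representations into one scene-and-dynamics
    representation r. Placeholder: mean pooling over observations."""
    return np.mean(per_obs_reprs, axis=0)


def vr_net(r, query_viewpoint, query_time, frame_shape=(8, 8)):
    """Render a frame for an arbitrary query viewpoint and time from r.
    Placeholder: fixed random decoding back to pixel space."""
    rng = np.random.default_rng(1)
    x = np.concatenate([r, query_viewpoint.ravel(), [query_time]])
    w = rng.standard_normal((x.size, int(np.prod(frame_shape))))
    return (x @ w).reshape(frame_shape)


# Two observed clips (4 frames of 8x8) from viewpoints v1, v2 at times t1, t2.
clips = [np.ones((4, 8, 8)), np.zeros((4, 8, 8))]
views = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
times = [0.0, 1.0]

# r aggregates all observations; the render queries an unseen viewpoint/time.
r = bl_net(np.stack([rl_net(c, v, t) for c, v, t in zip(clips, views, times)]))
frame = vr_net(r, query_viewpoint=np.array([0.5, 0.5]), query_time=2.0)
print(r.shape, frame.shape)  # (64,) (8, 8)
```

The key property this sketch preserves is that r is a single fixed-size vector independent of the number of observed viewpoints, so the renderer can be queried for viewpoints and times never seen during encoding.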