Fig. 1. The setup and I/O of our system. (a) System setup: we attach an additional standard camera (DSLR) to a light field camera (Lytro ILLUM) using a tripod screw, so they can be easily carried together. (b) System input: a standard 30 fps video and a 3 fps light field sequence. (c) System output: our system generates a 30 fps light field video, which can be used for a number of applications such as interactive refocusing and changing viewpoints as the video plays.

Light field cameras have many advantages over traditional cameras, as they allow the user to change various camera settings after capture. However, capturing light fields requires a huge bandwidth to record the data: a modern light field camera can only take three images per second. This prevents current consumer light field cameras from capturing light field videos. Temporal interpolation at such an extreme scale (10x, from 3 fps to 30 fps) is infeasible, as too much information would be entirely missing between adjacent frames. Instead, we develop a hybrid imaging system, adding another standard video camera to capture the temporal information. Given a 3 fps light field sequence and a standard 30 fps 2D video, our system can then generate a full light field video at 30 fps. We adopt a learning-based approach, which can be decomposed into two steps: spatio-temporal flow estimation and appearance estimation. The flow estimation propagates the angular information from the light field sequence to the 2D video, so we can warp input images to the target view. The appearance estimation then combines these warped images to output the final pixels. The whole process is trained end-to-end using convolutional neural networks.
Experimental results demonstrate that our algorithm outperforms current video interpolation methods, enabling consumer light field videography, and making applications such as refocusing and parallax view generation achievable on videos for the first time. Code and data are available at https://cseweb.ucsd.edu/~viscomp/projects/LF/papers/SIG17/lfv/.
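The two-step pipeline described above (flow estimation to warp the sources to the target view, then appearance estimation to blend them) can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the actual method learns both steps end-to-end with CNNs, whereas here the flows and blending weights are assumed given, warping uses nearest-neighbor sampling, and all function names (`warp`, `synthesize_view`) are hypothetical.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp `image` by a per-pixel `flow` field of shape (H, W, 2).

    Nearest-neighbor sampling for brevity; a real system would use
    bilinear interpolation.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return image[src_y, src_x]

def synthesize_view(lf_keyframe_view, video_frame, flow_lf, flow_video, weights):
    """Sketch of the warp-then-blend idea for one target light field view.

    The light field keyframe supplies angular information; the 2D video
    frame supplies temporal information. `weights` in [0, 1] plays the
    role of the learned appearance-estimation step.
    """
    cand_lf = warp(lf_keyframe_view, flow_lf)    # warped light field view
    cand_vid = warp(video_frame, flow_video)     # warped 2D video frame
    w = weights[..., None]                       # broadcast over channels
    return w * cand_lf + (1.0 - w) * cand_vid
```

With zero flow and uniform weights of 0.5, the output is simply the per-pixel average of the two warped candidates.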