Mobile devices are now equipped with multiple cameras, inertial measurement units (IMUs), and even Time-of-Flight (ToF) depth sensors. Data captured by these different sensors may help us better constrain the ill-posed nature of several computational photography problems. This project aims to develop computational photography and videography methods that fuse multi-modal data and incorporate information from computer vision cues such as depth and semantics.
Huawei