T1 / SfM-based 3D/3D matching
Task leader: InSo Kweon (RCVLab, KAIST, South Korea)
The main difficulty of this project is the lack of synchronization between the cameras and laser scanners. To deal with this problem, we propose to treat the N camera/laser sensors independently. Each camera-laser sensor package can be localized and used to reconstruct the environment. The approach comprises the development of a camera-laser calibration method, the robust reconstruction of rigid and non-rigid (e.g., pedestrians) objects from the images and the 3D depth data provided by the laser scanners, and the formulation of a novel camera-laser SfM-SLAM problem; it then performs a 3D merging of the N reconstructions obtained by the N camera-laser hybrid sensors. Nevertheless, in our case, the cameras of interest (fisheye) exhibit significant distortion. These distortions will have to be taken into account in order to develop algorithms adapted to these conditions.
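To illustrate the distortion issue, the sketch below shows a minimal fisheye projection under an assumed equidistant (Kannala-Brandt-style) model; the intrinsics and the distortion coefficients k1..k4 are hypothetical placeholders, not calibrated values from the project's sensors, and any algorithm developed in this task would have to account for such a model rather than a pinhole one.

```python
# Minimal sketch of an equidistant fisheye projection (Kannala-Brandt style).
# All intrinsics and distortion coefficients are illustrative placeholders.
import numpy as np

def project_fisheye(X_cam, fx, fy, cx, cy, k=(0.0, 0.0, 0.0, 0.0)):
    """Project 3D points (N, 3), given in the camera frame, onto the image plane."""
    X_cam = np.asarray(X_cam, dtype=float)
    x, y, z = X_cam[:, 0], X_cam[:, 1], X_cam[:, 2]
    r = np.sqrt(x**2 + y**2)              # radial distance in the normalized plane
    theta = np.arctan2(r, z)              # angle between the ray and the optical axis
    # Polynomial distortion of the angle (equidistant model: image radius ~ f * theta_d)
    theta_d = theta * (1 + k[0]*theta**2 + k[1]*theta**4 + k[2]*theta**6 + k[3]*theta**8)
    scale = np.where(r > 1e-9, theta_d / r, 1.0)
    u = fx * x * scale + cx
    v = fy * y * scale + cy
    return np.stack([u, v], axis=1)
```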
This task thus has two goals. The first is to develop general algorithms that work with several sensors and are robust to the illumination sensitivity of the cameras. The second is to develop a fast 3D merging method that produces a localization and a reconstruction as accurate as possible from the 3D models obtained by the N camera-laser hybrid sensors. This approach has the advantage of avoiding the synchronization problem. However, it does not exploit the constraint that all the camera-laser hybrid sensors undergo the same motion (they are rigidly attached).
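To make the merging step concrete, the following is a minimal sketch of how two of the N reconstructions could be rigidly aligned once putative 3D/3D correspondences are available, using a standard SVD-based least-squares fit (Kabsch); the matched point sets and the choice of a purely rigid, scale-free transform are assumptions, motivated by the fact that the laser data makes each individual reconstruction metric.

```python
# Minimal sketch: rigid alignment of two 3D reconstructions from matched points.
# Assumes the camera-laser reconstructions are metric, so no scale is estimated.
import numpy as np

def rigid_align(P, Q):
    """Return R (3x3), t (3,) minimizing sum_i ||R @ P_i + t - Q_i||^2.

    P, Q: (N, 3) arrays of corresponding 3D points from two reconstructions.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)        # centroids
    H = (P - cP).T @ (Q - cQ)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Usage sketch: bring reconstruction P into the frame of reconstruction Q.
# R, t = rigid_align(P_matched, Q_matched)
# P_in_Q = (R @ P_all.T).T + t
```

In practice the correspondences would come from feature matching between the reconstructions (or an ICP-style refinement), and a robust estimator would wrap this closed-form fit to reject outliers.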
This task is divided into two sub-tasks:
- T1.1 Structure from motion for camera-laser hybrid sensors
- T1.2 3D map registration