T2 / SfM-based 2D/3D matching
Task leader: Cédric Demonceaux (Le2i, UB, France)
In this task, we exploit the assumption that the cameras and lasers undergo a rigid motion (i.e. they are rigidly attached to the vehicle). The cameras and lasers mounted at the front of the vehicle will be in charge of estimating a rough structure and vehicle position. This first estimate will then be used by the sensors that cannot yet observe this part of the environment to initialize and optimize their own localization and reconstruction, knowing their own motion. The system will thus have to update the structure of the environment by refining the result obtained from the front-mounted cameras and lasers. For this, it will be necessary to build the correspondence between the 2D image points and the 3D structure already computed from the camera and laser data. Two main scientific difficulties arise. First, we will have to associate a 2D pixel with a 3D point from a photometric point of view, which requires invariance to the different sensitivities of the cameras, as discussed previously. Then, as soon as the position of the point in space is known, it will have to be updated using a priori knowledge of the camera/laser motion. This procedure could be carried out with a dedicated error measure to be optimized, taking into account both the likelihood of the 3D position of the point and the camera/laser motion, in order to obtain more accurate results while considering the previous reconstructions.
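The second difficulty above, updating a known 3D point from a new 2D observation while exploiting prior knowledge, can be sketched as a small maximum-a-posteriori refinement: the reprojection error of the new observation is minimized jointly with a Mahalanobis prior centered on the earlier camera/laser reconstruction. This is an illustrative sketch under assumed pinhole geometry, not the project's actual estimator; the intrinsics, pose, prior covariance, and the Gauss-Newton solver are all hypothetical choices.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D point X into pixels with pose (R, t) and intrinsics K."""
    uvw = K @ (R @ X + t)
    return uvw[:2] / uvw[2]

def refine_point(X0, P0, z, K, R, t, sigma_px=1.0, iters=10):
    """MAP refinement of a 3D point.

    Minimizes the reprojection error of observation z in the new camera plus a
    Mahalanobis prior (X - X0)^T P0^{-1} (X - X0) encoding the likelihood of the
    3D position from the previous reconstruction. Gauss-Newton with a numeric
    Jacobian; all parameter values are illustrative assumptions.
    """
    X = X0.copy()
    P0_inv = np.linalg.inv(P0)           # prior information matrix
    W = np.eye(2) / sigma_px**2          # pixel-noise information matrix
    for _ in range(iters):
        r = project(K, R, t, X) - z      # 2-vector reprojection residual
        # numeric Jacobian of the projection w.r.t. the 3D point
        J = np.zeros((2, 3))
        eps = 1e-6
        for k in range(3):
            dX = np.zeros(3); dX[k] = eps
            J[:, k] = (project(K, R, t, X + dX) - project(K, R, t, X)) / eps
        # normal equations of the regularized least-squares problem
        H = J.T @ W @ J + P0_inv
        g = J.T @ W @ r + P0_inv @ (X - X0)
        X = X - np.linalg.solve(H, g)
    return X
```

With a single observation the depth is only weakly constrained, which is exactly where the prior from the front sensors helps: the refined point reduces the reprojection residual without drifting arbitrarily along the viewing ray.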
This task is divided into three sub-tasks:
- T 2.1 Structure from motion for monocular fish-eye camera
- T 2.2 Illumination and distortion robust 2D feature matching
- T 2.3 3D/2D feature prediction and verification for SfM
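For the illumination-robust matching targeted by T 2.2, a classical baseline is zero-mean normalized cross-correlation (ZNCC), which is invariant to affine intensity changes (gain and offset) between cameras, a simple proxy for the differing sensor sensitivities mentioned above. A minimal sketch; the patch shapes and usage are illustrative assumptions, not the project's chosen descriptor:

```python
import numpy as np

def zncc(p, q):
    """Zero-mean normalized cross-correlation of two equally sized patches.

    Returns a score in [-1, 1]; it is unchanged when one patch undergoes a
    positive affine intensity transform a*I + b, which makes it robust to
    per-camera gain and offset differences.
    """
    p = p.astype(float).ravel()
    q = q.astype(float).ravel()
    p -= p.mean()
    q -= q.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(q)
    if denom == 0.0:           # flat patch: correlation undefined
        return 0.0
    return float(p @ q / denom)
```

In practice, such a score would be evaluated between a patch around the 2D candidate pixel and the patch predicted from the 3D structure, keeping the best-scoring correspondence.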