T3 / SfM-based 2D/2D matching
Task leader: Pascal Vasseur (LITIS, UR, France)
In this approach, we propose to exploit the properties inherent to a sensor cluster. The idea is to use all the sensors, in a global framework, to obtain both the 3D reconstruction and the vehicle localization. For this, we rely on the classical techniques of multi-view geometry for a calibrated camera cluster and for the camera/laser combination. Beyond the problem of robust feature matching between cameras with strong distortion (fisheye), which is common to the three tasks, a new problem has to be solved here: the lack of synchronization between the sensors prevents the use of classical tools. It is therefore necessary to verify that features matched between two cameras were observed at the same time, and that laser points are correctly back-projected into the image despite the time offset. For this, we propose a novel camera/laser resynchronization method. In the targeted application, since the vehicle drives in an urban environment where numerous structures have standard sizes (traffic signs, white lane markings, ...), considerable a priori information can be exploited for the 3D reconstruction of the scene. Because the scene moves with respect to the sensors, if the two images, or the image and laser data, under consideration were not acquired at the same time, the resulting reconstruction distorts the true size of the known objects in the scene. There is thus a strong link between the reconstruction of moving objects of known size and sensor synchronization. That is why we also propose a novel bundle adjustment method based on 2D primitives that includes a temporal parameter modeling the camera/laser time offset, in order to obtain better results.
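The link between a temporal offset parameter and 2D reprojection residuals can be illustrated with a minimal sketch (all values and function names below are hypothetical, not the proposed implementation): a camera translating at constant velocity observes a point whose 3D position is known a priori (e.g. a structure of standard size), and the unknown sensor time offset is recovered by least-squares minimization of the reprojection error, in the spirit of a bundle adjustment augmented with a temporal parameter.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative synthetic setup (values are assumptions, not project data):
# a camera translating at constant velocity v along x, simple pinhole
# model with focal length f, observing a fixed 3D point X known a priori.
f = 500.0                        # focal length in pixels
v = np.array([2.0, 0.0, 0.0])    # camera velocity (m/s)
X = np.array([1.0, 0.5, 10.0])   # known 3D point in the world frame
true_dt = 0.04                   # unknown sensor time offset (s)

def project(X, cam_t):
    """Pinhole projection of X seen from a camera at position cam_t
    (identity rotation, world -> camera is a pure translation)."""
    Xc = X - cam_t
    return f * Xc[:2] / Xc[2]

# Observations carry nominal timestamps t_i, but were actually captured
# at t_i + true_dt because the sensors are unsynchronized.
t_obs = np.linspace(0.0, 1.0, 20)
uv_obs = np.array([project(X, (t + true_dt) * v) for t in t_obs])

def residuals(params):
    """Stacked 2D reprojection errors as a function of the candidate
    time offset dt, which shifts the interpolated camera pose."""
    dt = params[0]
    r = [project(X, (t + dt) * v) - uv for t, uv in zip(t_obs, uv_obs)]
    return np.concatenate(r)

# Least-squares refinement of the temporal parameter alone.
sol = least_squares(residuals, x0=[0.0])
print(f"estimated dt = {sol.x[0]:.4f} s (true {true_dt} s)")
```

Note that the 3D point is held fixed here: if both the offset and the structure were free, shifting dt could be exactly compensated by translating the point along the motion direction, so the problem would be degenerate. This mirrors the link stated above between objects of known size and sensor synchronization.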
This task is divided into three sub-tasks:
- T 3.1 Illumination and distortion robust 2D feature matching
- T 3.2 Image/laser matching
- T 3.3 SfM and camera network resynchronization