The VCA group, TU/e's leading group for 3-D reconstruction, has contributed to the PANORAMA project with 3-D modeling and 3-D image-processing algorithms for various sensors (Figure 1). The contribution consists of the design and implementation of several algorithms, including distance-aware weighting strategies, real-time edge detection and planar segmentation of depth images, the Real-time RGB-D registration Pipeline (R3P), 3-D reconstruction applications for large-scale environments, and multi-modal fusion of 3-D models obtained from intrinsically different sensors. The following paragraphs give a brief overview of each of these contributions; interested readers can obtain detailed information via the provided links.
PANORAMA is a research project of the ENIAC Joint Undertaking (JU) and is co-funded by grants from Belgium, Italy, France, the Netherlands, and the United Kingdom.
Figure 2. From left to right: snapshots of the Desk region of the final 3-D meshes obtained by the original KinFu, DA, and DASS methods, respectively.
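The distance-aware (DA) idea can be illustrated with a short sketch. The exact weighting used in the DA and DASS methods is not reproduced here; the sketch below assumes a common quadratic depth-noise model (the `sigma0`, `sigma_slope`, and `ref` constants are illustrative) and plugs the resulting weight into a KinFu-style running TSDF average.

```python
import numpy as np

def distance_aware_weight(depth_m, sigma0=0.0012, sigma_slope=0.0019, ref=0.4):
    """Hypothetical distance-aware weight: inverse of the expected depth-noise
    variance, with noise growing quadratically with measurement distance."""
    sigma = sigma0 + sigma_slope * (depth_m - ref) ** 2
    return 1.0 / (sigma ** 2)

def fuse_tsdf(tsdf, weight, new_tsdf, new_depth, w_max=100.0):
    """KinFu-style running weighted average of TSDF samples, except that the
    per-sample weight is scaled by the distance-aware weight above."""
    w_new = distance_aware_weight(new_depth)
    fused = (tsdf * weight + new_tsdf * w_new) / (weight + w_new)
    weight = np.minimum(weight + w_new, w_max)  # cap the accumulated weight
    return fused, weight
```

The effect is that nearby (low-noise) measurements dominate the fused surface, while far-away (high-noise) measurements contribute less.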
Figure 3. Five examples of datasets containing various types of 3-D edges and the corresponding outcomes: (a) color images of each scene, (b) extracted 3-D edges, and (c) planar surfaces. (Note that the merging and size-validation phases have not been incorporated into the visual results.)
Figure 4. Applying the planar-segmentation algorithm to depth images with and without noise reduction of 3-D edges based on the proposed Solidarity filter.
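The interplay between 3-D edge detection and planar segmentation can be sketched in a few lines. The actual edge classification and the proposed Solidarity filter are not reproduced here; the code below is a minimal stand-in that detects depth-discontinuity (jump) edges with a hypothetical threshold and then labels the connected edge-free regions as segments.

```python
import numpy as np
from scipy import ndimage

def jump_edges(depth, thresh=0.05):
    """Mark pixels whose depth differs from a 4-neighbour by more than
    `thresh` metres: a minimal depth-discontinuity (jump) edge detector."""
    edges = np.zeros(depth.shape, dtype=bool)
    for axis in (0, 1):
        big = np.abs(np.diff(depth, axis=axis)) > thresh
        if axis == 0:
            edges[:-1, :] |= big; edges[1:, :] |= big
        else:
            edges[:, :-1] |= big; edges[:, 1:] |= big
    return edges

def segment_by_edges(depth, thresh=0.05):
    """Label connected non-edge regions: a crude stand-in for planar
    segmentation bounded by 3-D edges."""
    edges = jump_edges(depth, thresh)
    labels, n = ndimage.label(~edges)
    return labels, n
```

Noisy edge pixels fragment the non-edge mask, which is why filtering the edges first (as the Solidarity filter does in Figure 4) directly improves the segmentation.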
Figure 5. Samples taken by the FARO Focus 3D laser scanner: (a) the 3-D model of the VCA office and laboratory at Eindhoven University of Technology (The Netherlands) consisting of 16 registered scans and (b) the 3-D model of a supermarket in Nice (France) including 64 registered scans.
Figure 6. Registered 3-D chain of FlexiFusion aligned to the FARO-based 3-D model, with a focus on alignment quality. The colored point clouds are not shown, to allow readers a clearer inspection of the alignment, since coloring a point cloud can hide part of the misalignments.
Figure 7. FlexiFusion architecture illustrating the CPU/GPU threads and data structures with the internal/external communications: the system receives raw 3-D data as input and generates 3-D chains as output. The focus of FlexiFusion is to deliver the highest possible level of adjustability, which is achieved via the proposed 3-D chain model.
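The producer/consumer structure of such a pipeline can be sketched in miniature. This toy version does not reflect the actual CPU/GPU thread layout or registration logic of FlexiFusion; it only illustrates the data flow of Figure 7, where raw frames stream in on one thread and another thread groups them into small, independently handled segments ("3-D chains"). The `chunk` parameter is illustrative.

```python
import queue
import threading

def run_pipeline(frames, chunk=3):
    """Toy two-thread pipeline: a feeder thread streams raw frames into a
    queue, a fuser thread groups them into fixed-size chain segments."""
    q = queue.Queue()
    chains = []

    def feeder():
        for f in frames:
            q.put(f)
        q.put(None)                      # poison pill: end of stream

    def fuser():
        segment = []
        while True:
            f = q.get()
            if f is None:
                break
            segment.append(f)
            if len(segment) == chunk:    # close the current chain segment
                chains.append(segment)
                segment = []
        if segment:                      # flush the final partial segment
            chains.append(segment)

    t1 = threading.Thread(target=feeder)
    t2 = threading.Thread(target=fuser)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return chains
```

Because each segment is self-contained, segments can later be re-registered or adjusted independently, which is the adjustability the 3-D chain model is after.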
Figure 8. Snapshots of the FlexiFusion registered 3-D chain as a unified 3-D model: (partly) colored on the left and with adjusted 3-D boxes on the right.
Figure 9. R3P architecture: the 2-D and 3-D algorithms are separated into two main groups of layers.
Figure 10. R3P pipeline: (1) feeding color images to the 2-D phase layers, (2) obtaining 2-D key points, (3) adding depth information to the 2-D key points, (4) feeding the 2-D key points with corresponding depth information to the 3-D phase layers, (5) estimating the transformation based on the key-point-clouds, (6) key-point-clouds represented as bold dots, and (7) the complete point clouds well aligned based on the transformation obtained for the key-point-clouds.
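Steps (3) to (5) of the pipeline can be sketched as follows. The sketch assumes a pinhole camera model (the intrinsics are illustrative, not the sensor's actual calibration) and uses the standard SVD-based (Kabsch/Umeyama) least-squares rigid alignment between matched key-point-clouds; the 2-D key-point detection and matching of steps (1) and (2) are taken as given.

```python
import numpy as np

def back_project(uv, z, fx, fy, cx, cy):
    """Lift matched 2-D key points (pixel coordinates) with their depth
    values z to 3-D points in the camera frame (pinhole model)."""
    u, v = uv[:, 0], uv[:, 1]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def estimate_rigid(src, dst):
    """Least-squares rigid transform dst ~ R @ src + t between matched
    key-point-clouds, via SVD of the cross-covariance (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

The transformation estimated from the sparse key-point-clouds is then applied to the complete point clouds, which is what makes step (7) cheap: the dense data never enters the optimization.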