In networks of multiple cameras, it is essential to know the position and orientation of each camera. Only with this information is it possible to perform global image processing and analysis on the multiple streams observing the same scene. Examples of such global analysis are people tracking and re-identification across multiple camera views, and 3D reconstruction of the scene. Currently, camera calibration is done manually, which requires 2-8 hours of work. We want to automate the process so that re-calibration can be performed at any point in time at no cost.
The project focuses on the research and development of an innovative method for automated calibration of multiple cameras with overlapping views. One idea for identifying the orientation and location of each camera is to detect dominant lines (edges of roads, buildings, etc.) in the image of each camera and then use 3D geometry to match and register (align) the lines visible in different cameras. Registering these lines brings the camera calibration parameters into a global coordinate system. Another idea is to detect and identify the trajectories of people seen by these cameras. Aligning (registering) these trajectories likewise yields the camera calibration parameters in the global coordinate system.
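As a rough sketch of the trajectory idea: if the same person's trajectory is observed by two cameras and both observations are projected onto a common ground plane (an assumption made here for simplicity; the function and variable names are illustrative, not part of any existing system), the rigid transform relating the two camera frames can be estimated in closed form with the Kabsch/Procrustes method:

```python
import numpy as np

def align_trajectories(src, dst):
    """Estimate rotation R and translation t mapping src points onto dst
    (least-squares rigid alignment via the Kabsch method)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic check: camera B sees a rotated/translated copy of the
# trajectory that camera A sees on the ground plane.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, -1.0])
traj_a = np.array([[0, 0], [1, 0.2], [2, 0.5], [3, 1.1], [4, 2.0]])
traj_b = traj_a @ R_true.T + t_true
R, t = align_trajectories(traj_a, traj_b)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

In a real deployment the correspondences would be noisy and partly wrong, so such a closed-form fit would typically sit inside a robust loop (e.g. RANSAC over candidate trajectory matches).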
There are interesting challenges in this topic:
- Reliable detection of dominant lines and robust identification of people trajectories
- Minimizing error in alignment of lines (trajectories) obtained from different cameras
- How to calibrate cameras when only a few lines are detected
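To make the first challenge concrete, dominant straight lines can be found by voting in a Hough parameter space. The sketch below is a minimal pure-numpy illustration on a synthetic edge image; a production system would more likely use an off-the-shelf detector such as OpenCV's Hough line functions, and all names here are illustrative:

```python
import numpy as np

def strongest_line(edges, n_theta=180):
    """Hough voting in (rho, theta) space; return the strongest line.
    edges: binary 2D array of edge pixels (e.g. output of an edge detector)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for j, theta in enumerate(thetas):
        # Each edge pixel votes for the line rho = x*cos(theta) + y*sin(theta)
        rhos = (xs * np.cos(theta) + ys * np.sin(theta)).round().astype(int)
        np.add.at(acc, (rhos + diag, j), 1)
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - diag, thetas[theta_idx]

# Synthetic edge image containing a single vertical line at x = 20:
img = np.zeros((50, 50), dtype=bool)
img[:, 20] = True
rho, theta = strongest_line(img)
print(rho, theta)  # vertical line: rho = 20, theta = 0.0
```

Reliability in real footage hinges on what happens before the voting step: edge quality, suppression of short or curved segments, and merging of near-duplicate peaks in the accumulator.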