Computer Vision for Intelligent Vehicles

Summary: 
Vision-based free-space detection for Advanced Driver Assist Systems (ADAS), in which a simple color-based classifier is fine-tuned online, in a self-supervised fashion, using weak labels generated automatically by a disparity-based stixel segmentation that runs in the background at a lower frame rate.

Project Description: 
In recent years, much research has been dedicated to developing vision-based Advanced Driver Assist Systems (ADAS). These systems help drivers control their vehicle by, for instance, warning against lane departure, hazardous obstacles in the vehicle path, or an insufficient distance to the preceding vehicle. As these systems evolve toward more advanced technology and higher robustness, they are expected to increase traffic safety and comfort. A key component of ADAS is free-space detection, which provides information about the surrounding drivable space. Since traffic scenes come in a wide variety (urban versus rural, highway versus city center), under varying imaging conditions (good versus bad weather, day versus night), ADAS have to be both flexible and robust.

One potential strategy is to train many different classifiers and select the one that is most relevant at any given moment (for instance, based on time and geographical location), or to train a single complex classifier that handles all cases. In contrast, our research shows that it is feasible to fine-tune a relatively simple, single classifier in an online and self-supervised fashion. If training labels can be generated automatically and in real time, the amount of supervised training data becomes practically unlimited. To this end, we apply a disparity-based segmentation, which can run in the background, outside the critical system path, and at a lower frame rate than the color-based algorithm.

Figure 1: Proposed system design, in which the classifier is updated online in pursuit of robustness and flexibility.

This algorithm is not perfect, but it is sufficiently accurate to generate weak training labels. We show that the new classifier, trained with these weak labels, outperforms the traditional algorithm. Our system diagram is shown in Figure 1; additional example results are shown in Figure 2.

Figure 2: Input images, their disparity signal, the stixel labeling and the free space as detected by our algorithm.
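
To make the online self-supervised scheme concrete, the sketch below pairs a greatly simplified disparity-based labeler with a per-pixel color classifier that is updated on the fly. It is only an illustration under stated assumptions: the logistic-regression model, the linear ground-plane disparity model, and all names and parameters (weak_labels_from_disparity, OnlineColorClassifier, the slope 0.05) are hypothetical stand-ins for the actual segmentation and classifier used in the project.

import numpy as np

def weak_labels_from_disparity(disparity, tol=1.0):
    # Hypothetical stand-in for the disparity-based segmentation: pixels
    # whose disparity matches an assumed linear ground-plane model are
    # marked as free space. A real system would fit the plane per frame.
    h, w = disparity.shape
    expected = 0.05 * np.arange(h, dtype=np.float32)[:, None]  # assumed slope
    return np.abs(disparity - expected) < tol  # True = free space

class OnlineColorClassifier:
    # Minimal logistic-regression pixel classifier on RGB features,
    # updated online with weak labels (a sketch, not the paper's model).
    def __init__(self, lr=0.01):
        self.w = np.zeros(4, dtype=np.float32)  # 3 color channels + bias
        self.lr = lr

    def _features(self, image):
        x = image.reshape(-1, 3).astype(np.float32) / 255.0
        return np.hstack([x, np.ones((x.shape[0], 1), np.float32)])

    def predict(self, image):
        p = 1.0 / (1.0 + np.exp(-self._features(image) @ self.w))
        return (p > 0.5).reshape(image.shape[:2])

    def update(self, image, weak_mask):
        # One SGD step on the weak labels from the disparity channel.
        x = self._features(image)
        y = weak_mask.reshape(-1).astype(np.float32)
        p = 1.0 / (1.0 + np.exp(-x @ self.w))
        self.w -= self.lr * (x.T @ (p - y)) / x.shape[0]

# Online loop: the disparity-based labels refresh the color classifier
# in the background, while the color model handles every frame.
clf = OnlineColorClassifier()
rng = np.random.default_rng(0)
for frame_idx in range(5):
    image = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
    disparity = rng.random((120, 160), dtype=np.float32) * 6.0
    free_space = clf.predict(image)          # fast, color-based path
    print(f"frame {frame_idx}: free-space fraction {free_space.mean():.2f}")
    if frame_idx % 2 == 0:                   # slower, disparity-based path
        clf.update(image, weak_labels_from_disparity(disparity))

Note the division of labor this sketch mirrors: the cheap color path runs on every frame, while the more expensive disparity-based labeling updates the classifier only on a subset of frames, keeping it out of the critical system path.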

The stixel labeling algorithm organizes depth signals (e.g., obtained from a stereo camera or from a laser/radar sensor) into vertical columns, called stixels, as visible in the third column of each subfigure pair in Figure 2. We have developed a fast algorithm for depth stixel processing and mapped it onto a GPU, achieving real-time performance. The algorithm provides a depth profile per column, so that specific structures such as the road surface or trees along the road can be identified very rapidly and combined with further spatial processing, such as texture and color analysis. This type of processing has been incorporated into the Counter-IED project (described elsewhere on this website), which requires real-time object detection well ahead of the vehicle. The mapping of this real-time algorithm onto the GPU was carried out in cooperation with ViNotion.
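
Because each stixel column is processed independently, the per-column idea can be illustrated with a small sketch. The fragment below is a strong simplification under assumed parameters: instead of computing a full per-column depth profile, it only finds, per column, the row where the disparity stops following a hypothetical linear ground-plane model, which marks the base of the free space. The function name, slope, and tolerance are illustrative, not the project's actual values.

import numpy as np

def column_free_space(disparity, row_slope=0.05, tol=1.0):
    # Expected ground-plane disparity per row (hypothetical linear model).
    h, w = disparity.shape
    expected = row_slope * np.arange(h, dtype=np.float32)
    boundary = np.zeros(w, dtype=np.int32)   # 0 means the whole column is free
    for col in range(w):                     # columns are independent -> GPU-friendly
        for row in range(h - 1, -1, -1):     # scan from the bottom row upward
            if abs(disparity[row, col] - expected[row]) > tol:
                boundary[col] = row + 1      # first road row above the obstacle
                break
    return boundary  # rows boundary[col]..h-1 are drivable in column col

# Example: a synthetic ground plane with an obstacle in columns 40-59.
disp = 0.05 * np.arange(120, dtype=np.float32)[:, None] * np.ones((1, 160), np.float32)
disp[30:80, 40:60] = 5.0  # constant-disparity block = vertical obstacle
print(column_free_space(disp)[35:65])

Since every column is scanned independently, the computation maps naturally onto one GPU thread (or thread block) per column, which is what makes a real-time GPU implementation attainable.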
Application Area: 
Automotive
Video/Imaging Discipline: 
Content Analysis
3D Processing
Partners: 

Eindhoven University of Technology SPS-VCA and ViNotion