Abstract:
Using a head-up display (HUD) for augmented reality requires an
accurate internal model of the image generation process, so that 3D content
can be visualized perspectively correctly from the viewpoint of the user. We
present a generic and cost-effective camera-based calibration for an
automotive HUD which uses the windshield as a combiner. Our proposed
calibration model encompasses the view-independent spatial geometry, i.e.,
the exact location, orientation, and scale of the virtual image plane, and a
view-dependent image warping transformation for correcting the distortions
caused by the optics and the irregularly curved windshield. View-dependency
is achieved by extending the classical polynomial distortion model for
cameras and projectors to a generic five-variate mapping that takes the
viewer's head position as an additional input (sketched below). The
calibration involves capturing an image sequence from varying viewpoints
while displaying a
known target pattern on the HUD. An accurate registration of the camera path
is recovered with state-of-the-art vision-based tracking. As all necessary
data is acquired directly from the images, no external tracking equipment
needs to be installed. After calibration, the HUD can be used together with a
head tracker to form a head-coupled display that ensures perspectively
correct rendering of any 3D object in vehicle coordinates from a large range
of possible viewpoints. We evaluate the accuracy of our model quantitatively
and qualitatively.
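
To make the view-dependent warp concrete, consider the following minimal
notational sketch (the symbols, the monomial basis, and the polynomial degree
d are our assumptions, not taken from the paper): the classical two-variate
polynomial distortion model is extended to map undistorted image coordinates
(x, y) together with the head position (h_x, h_y, h_z) to warped display
coordinates,

\[
\mathbf{w}(x, y, h_x, h_y, h_z) \;=\; \sum_{i+j+k+l+m \,\le\, d}
\mathbf{c}_{ijklm}\, x^{i} y^{j} h_x^{k} h_y^{l} h_z^{m},
\]

where the coefficient vectors \(\mathbf{c}_{ijklm} \in \mathbb{R}^2\) are
estimated from the calibration images. For a fixed head position the mapping
collapses to an ordinary two-variate polynomial distortion, recovering the
classical camera/projector model as a special case.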