Abstract:
Recognizing previously observed actions in video sequences enables Augmented Reality manuals that (1) automatically follow the user's progress and (2) can be authored from video examples of the workflow.
Modeling is challenging: the environment may change drastically due to user interaction, and camera motion may not provide sufficient translation to estimate scene geometry robustly.
We propose a piecewise homographic transform that projects the input video onto a series of distinct planar subsets of the scene.
These subsets are selected by segmenting the largest image region that is consistent with a homographic model and contains a given region of interest.
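As a concrete illustration (a minimal sketch of one way such a selection could be realized, not the authors' implementation): fitting a single homography to feature matches between frames with RANSAC yields an inlier set that approximates a planar, homography-consistent region, and the region of interest can then be required to lie inside it. The OpenCV-based helper below, including the name largest_homography_consistent_region, its ORB feature budget, and its 3-pixel reprojection threshold, is hypothetical.

    import cv2
    import numpy as np

    def largest_homography_consistent_region(img_a, img_b, roi_mask=None):
        # Hypothetical helper (not the paper's code): approximate a
        # homography-consistent region by the RANSAC inliers of a single
        # homography fitted to ORB feature matches between two frames.
        orb = cv2.ORB_create(2000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_a, des_b)
        pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
        # RANSAC separates points consistent with one planar homography
        # from the rest of the (possibly changing) scene.
        H, inlier_mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
        inliers = pts_a[inlier_mask.ravel() == 1]
        if roi_mask is not None:
            # Keep only inliers whose plane also covers the region of interest.
            xs = inliers[:, 0].astype(int)
            ys = inliers[:, 1].astype(int)
            inliers = inliers[roi_mask[ys, xs] > 0]
        return H, inliers

In this simplified setting, such a helper would be applied per frame to select the planar subset containing the given region of interest.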
We are then able to model the state of the environment and user actions using simple 2D region descriptors.
The model elegantly handles estimation errors due to incomplete observation and is robust to occlusions, e.g., by the user's hands.
We demonstrate the effectiveness of our approach quantitatively and compare it to the current state of the art.
Further, we show how we apply the approach to visualize automatically assessed correctness criteria at run time.