Doctoral Consortium
Attendees
- Mikel Salazar Univ of Deusto
A Human-Computer Interaction Paradigm for Augmented Reality Systems
In this short paper, the PhD candidate Mikel Salazar presents his research project aimed at the construction of a Human-Computer Interaction (HCI) paradigm that takes advantage of the Augmented Reality (AR) capabilities of current mobile devices to create Three-Dimensional User Interfaces (3DUIs) integrated into the real world. The document discusses the evolution and limitations of present HCI paradigms while analysing new research directions. The author then describes his current research, stating the main hypothesis, objectives, and completion plan, and detailing the methodology and the subprojects currently in development.
- João Paulo Lima CIn-UFPE
Object Detection and Pose Estimation from Natural Features Using Consumer RGB-D Sensors: Applications in Augmented Reality
The work proposed in this document aims to investigate the use of consumer RGB-D sensors for object detection and pose estimation from natural features, with the purpose of using such techniques for developing augmented reality applications. Two methods based on depth-assisted rectification are proposed, which transform features extracted from the color image to a canonical view using depth data in order to obtain a representation invariant to rotation, scale, and perspective distortions. While one method is suitable for textured objects, either planar or non-planar, the other focuses on texture-less planar objects. Qualitative and quantitative evaluations of the proposed methods are performed, comparing them to existing methods for object detection and pose estimation.
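A minimal sketch of one way the depth-assisted rectification step could look, assuming the goal is to warp the neighborhood of a keypoint to a fronto-parallel (canonical) view before computing a descriptor. The function name `rectify_patch`, its parameters, and the neighborhood/plane-fitting details are illustrative assumptions, not the authors' actual pipeline; only the perspective part of the invariance is shown here.

```python
import numpy as np
import cv2

def rectify_patch(gray, depth, K, kp, half=15, out_size=32):
    """Warp the patch around keypoint `kp` = (x, y) to a canonical, fronto-parallel view
    using the depth map and camera intrinsics K (hypothetical interface)."""
    x, y = int(kp[0]), int(kp[1])
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    # Back-project the depth neighborhood of the keypoint to 3D camera coordinates.
    ys, xs = np.mgrid[y - half:y + half + 1, x - half:x + half + 1]
    z = depth[ys, xs].astype(np.float32)
    valid = z > 0
    X = np.stack([(xs - cx) * z / fx, (ys - cy) * z / fy, z], axis=-1)[valid]

    # Fit a local plane: the normal is the right singular vector with the smallest
    # singular value of the centered 3D points.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    n = Vt[-1]
    if n[2] > 0:                       # make the normal face the camera
        n = -n

    # Build a rotation that maps the plane normal to the optical axis, so the plane
    # appears fronto-parallel in the (virtually rotated) view.
    z_axis = -n
    x_axis = np.cross([0.0, 1.0, 0.0], z_axis)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)
    R = np.stack([x_axis, y_axis, z_axis])

    # Pure-rotation homography H = K R K^-1 synthesizes the rotated view.
    H = K @ R @ np.linalg.inv(K)
    warped = cv2.warpPerspective(gray, H, (gray.shape[1], gray.shape[0]))

    # Crop a fixed-size canonical patch around the keypoint's warped location.
    p = H @ np.array([x, y, 1.0])
    p /= p[2]
    return cv2.getRectSubPix(warped, (out_size, out_size), (float(p[0]), float(p[1])))
```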
- Feng Zheng UNC
Closed-Loop Registration of Physical and Virtual Objects in Spatial Augmented Reality
Spatial Augmented Reality (SAR) brings physical objects to life with projected imagery. Accurate registration of projected virtual objects and physical objects is crucial for effective augmentation. The typical registration process in SAR consists of static calibration, dynamic tracking, and graphics projection, which is analogous to an open-loop system. However, no matter how carefully calibration and pose measurement are done, there will almost certainly be errors in the displayed images. Such registration errors can be reduced by integrating a dynamic correction loop into the registration process, forming a closed-loop system. This thesis focuses on closed-loop registration, which makes use of the registration errors themselves. More specifically, we propose to “close the loop” on the displayed appearance for both static and dynamic scenes using custom physical-virtual fiducials or virtual fiducials alone. For physical-virtual fiducials, the goal is to design pairs of fiducials comprising (1) physical fiducials affixed directly onto the physical objects and (2) virtual fiducials projected to lie on top of the physical fiducials, such that the combined appearance of the physical and virtual fiducials provides an optical signal that directly indicates the error in the relative pose between the physical and virtual objects. As a further step, custom virtual fiducials alone, adapted to the physical attributes of the objects, can also produce a similar misregistration-responsive, correcting optical signal. This optical error signal could comprise geometry and/or surface characteristics, including physical curvature and edges, color, texture frequencies and gradients, reflectance properties, and material transparency and translucency, arranged in easily identifiable patterns. With such fiducials, SAR systems are capable of continuously and robustly sensing and correcting misregistration. By interpreting the combination of the physical and the virtual in simulation rather than optically, such fiducials can also be designed for other AR paradigms such as video-based AR.
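A toy sketch of the closed-loop idea as read from the abstract, not the author's implementation: assume the misregistration can be measured each frame as a 2D image offset between a detected physical fiducial and its projected virtual counterpart, and that a proportional correction is fed back into the projection so the residual error decays instead of persisting, as it would in an open-loop calibrate-then-project pipeline. The function and detector names are hypothetical.

```python
import numpy as np

def closed_loop_step(projected_xy, detect_physical, detect_virtual, gain=0.5):
    """One feedback iteration: measure misregistration in the camera image and
    nudge the projected position toward the physical fiducial."""
    error = detect_physical() - detect_virtual(projected_xy)   # 2D misregistration signal
    return projected_xy + gain * error, np.linalg.norm(error)

# Simulated run: the physical fiducial sits at (320, 240) in camera coordinates,
# the initial calibration is off by ~15 px, and detection adds a little noise.
rng = np.random.default_rng(0)
physical = np.array([320.0, 240.0])
pos = physical + np.array([15.0, -10.0])          # initial open-loop misregistration
for frame in range(10):
    pos, err = closed_loop_step(
        pos,
        detect_physical=lambda: physical + rng.normal(0, 0.3, 2),
        detect_virtual=lambda p: p + rng.normal(0, 0.3, 2),
        gain=0.5)
    print(f"frame {frame}: residual error {err:.2f} px")
```

In the real system the correction would act on the estimated pose rather than on a single 2D point, but the feedback structure is the same.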
- Steffen Gauglitz UCSB
Interactive Remote Collaboration using Augmented Reality
We describe a framework to support remote collaboration on tasks that involve the physical environment. The framework builds upon current as well as to-be-developed computer vision and augmented reality technologies to enable more immersive and more direct interaction with the remote environment than is possible with today’s tools. The completed and proposed work comprises contributions in visual tracking and modeling of the environment, user interfaces for collaboration, remote pointing and gesturing, proof-of-concept implementations, and user studies of various aspects of remote interaction.
- Weiquan Lu NUS
Subtle Cueing in Augmented Reality
Visual search in augmented reality environments is an important task that can be facilitated through different cueing methods. Current methods rely on explicit cues, which can themselves reduce visual search performance. In contrast, my dissertation work proposes a subtle cueing method that improves visual search performance while remaining clutter-neutral. The proposal involves a series of rigorous human-vision experiments to uncover the parameters and attributes of subtle cueing. The results of these experiments will then inform the design and implementation of a prototype AR system, with the aim of evaluating subtle cueing for enhancing visual search in AR environments.
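For illustration only, one plausible form a subtle cue could take: a faint, smoothly feathered brightness modulation over the target region, as opposed to an explicit overlay such as an arrow or outline. The function name and parameters below are assumptions; the actual cue strength, extent, and modulation type are exactly what the proposed experiments are meant to determine.

```python
import numpy as np

def apply_subtle_cue(image, center, sigma=40.0, strength=0.08):
    """Raise luminance of a color frame around `center` = (x, y) by at most `strength`
    (fraction of full scale), feathered with a Gaussian so no hard edge adds clutter."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / (2 * sigma ** 2))
    cued = image.astype(np.float32) + strength * 255.0 * mask[..., None]
    return np.clip(cued, 0, 255).astype(np.uint8)

# Example: cue a region centered at (200, 150) in a synthetic 640x480 frame.
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
cued_frame = apply_subtle_cue(frame, center=(200, 150))
```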
- Yiyan Xiong UCF
Two-way Inside Interaction between the Real Person and the Virtual World in a Mixed Reality System
Any disconnection between real and virtual objects in a Mixed Reality (MR) system negatively influences users’ sense of presence and thereby limits the effectiveness of the intended MR experience. In this paper, we propose a novel solution to the problem of interaction between real people and virtual objects in an MR space. We create a 3D human model for each person in the MR scene, so that each has both a real representative captured by the camera and a virtual representative rendered by the computer. Using this scheme, a user tracked by a Kinect or similar device can manipulate virtual objects. Furthermore, the virtual objects can affect the virtual representative of the user, providing visual feedback of these interactions.
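A minimal sketch of the two-way interaction loop as read from the abstract, with assumed interfaces rather than the authors' code: a tracked skeleton joint drives the user's virtual representative, whose hand can push a virtual ball, and the ball reacts only through that virtual representative. The class, joint, and method names are hypothetical.

```python
import numpy as np

class VirtualBall:
    def __init__(self, position, radius=0.1):
        self.position = np.asarray(position, dtype=float)
        self.radius = radius

    def react_to_hand(self, hand_pos, hand_radius=0.08, push=0.05):
        """Sphere-sphere test between the avatar's hand and the ball; push the
        ball away along the contact normal when they intersect."""
        offset = self.position - hand_pos
        dist = np.linalg.norm(offset)
        if 1e-6 < dist < self.radius + hand_radius:
            self.position += push * offset / dist
            return True                     # visual feedback could be triggered here
        return False

# One simulated frame: a hypothetical tracker returns the right-hand joint in
# scene coordinates (in a real system this would come from Kinect skeleton data).
ball = VirtualBall(position=[0.0, 1.2, 2.0])
tracked_right_hand = np.array([0.05, 1.15, 2.0])
if ball.react_to_hand(tracked_right_hand):
    print("virtual ball pushed to", ball.position)
```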