
ISMAR 2013 Doctoral Consortium


PROGRAM

Tuesday Oct 1, 2013

Time | DC Student | Research Topic | Mentors
9:00-9:05am | Opening | |
9:05-10:00am | Ky Waegel (UNC) | Filling the Gaps: Hybrid Vision and Inertial Tracking | Prof. Tobias Hollerer (UCSB)
10:00-11:00am | Rodney Berry (UTS) | Representational Systems with Tangible and Graphical Elements | Christopher Stapleton (UCF)
11:00-12:00pm | Chris Sweeney (UCSB) | Improved Outdoor Augmented Reality through “Globalization” | Gerhard Reitmayr (Qualcomm Research), Prof. Henry Fuchs (UNC)
12:00-1:00pm | LUNCH | |
1:00-2:00pm | Jason Orlosky (Osaka University) | Management and Manipulation of Text in Dynamic Mixed Reality Workspaces | Prof. Steve Feiner (Columbia University), Raphael Grasset (TU Graz)
2:00-3:00pm | Ulrich Eck (UniSA) | Visuo-Haptic Augmented Reality Runtime Environment for Medical Training | Prof. Tobias Hollerer (UCSB), Weiquan Lu (NUS)
3:00-4:00pm | Youngkyoon Jang (KAIST) | Unified Visual Perception Model for Context-aware and Wearable Augmented Reality | A/Prof. Kiyoshi Kiyokawa (Osaka University), Takeshi Kurata (AIST), Weiquan Lu (NUS)
4:00-5:00pm | Neven A. M. Elsayed (UniSA) | Visual Analytics in Augmented Reality | Prof. Dieter Schmalstieg (TU Graz), A/Prof. Kiyoshi Kiyokawa (Osaka University)
5:00-5:05pm | Closing | |

Submission Abstracts


Representational Systems with Tangible and Graphical Elements

Rodney Berry (Thesis Supervisors: Ernest Edmonds and Andrew Johnston)

This research centres on the development of a number of prototype interactive systems, each of which uses a tangible means of representing and manipulating musical elements in musical composition. Data gathered through collaborative prototyping and user studies is analysed using grounded theory methods. The resulting contribution to knowledge includes theory, design criteria, and guidelines specific to tangible representations of music. This knowledge will be useful for the future design of systems that use tangible representations, particularly for making music. The prototypes themselves also serve as a form of knowledge and as creative works.


Visuo-Haptic Augmented Reality Runtime Environment for Medical Training

Ulrich Eck (Thesis Supervisors: Christian Sandor and Hamid Laga)

During the last decade, Visuo-Haptic Augmented Reality (VHAR) systems have emerged that enable users to see and touch digital information embedded in the real world. They pose unique problems to developers, including the need for precise augmentations, accurate co-location of haptic devices, and efficient concurrent processing of multiple real-time sensor inputs to achieve low latency. We think that this complexity is one of the main reasons why VHAR technology has been used in only a few user interface research projects. The proposed project's main objective is to pioneer the development of a widely applicable VHAR runtime environment that meets the requirements of real-time, low-latency operation with precise co-location, haptic interaction with deformable bodies, and realistic rendering, while reducing the overall cost and complexity for developers. A further objective is to evaluate the benefits of VHAR user interfaces with a focus on medical training applications, so that creators of future medical simulators and other haptic applications recognize the potential of VHAR.


Visual Analytics in Augmented Reality

Neven A. M. ElSayed (Thesis Supervisors: Christian Sandor and Hamid Laga)

In the last decade, Augmented Reality has matured and is widely adopted on mobile devices. Exploring the available information about a user's environment is one of its key applications. However, current mobile Augmented Reality interfaces are very limited compared to the recently emerging big-data exploration tools for desktop computers. Our vision is to bring powerful Visual Analytics tools to mobile Augmented Reality. Challenges for this approach include the limited input capabilities of mobile platforms, limited display capabilities (resolution, field of view), and the dynamically changing environment the user is in. To address these problems, we propose to create mobile Augmented Reality visualization techniques based on Visual Analytics tools. We will demonstrate these techniques in a food-shopping assistant that can show the containment relations of ingredients between different products. We are currently working on the visualization design; next, we will implement a prototype and evaluate alternative visualizations. Our research will impact the Augmented Reality community as well as the Visual Analytics community.


Unified Visual Perception Model for Context-aware Wearable AR

Youngkyoon Jang (Thesis Supervisor: Woontack Woo)

We propose the Unified Visual Perception Model (UVPM), which imitates the human visual perception process, for the stable object recognition required by augmented reality (AR) in the field. The proposed model is designed on theoretical foundations from cognitive informatics, brain research, and psychological science. It consists of Working Memory (WM), in charge of low-level processing (in a bottom-up manner), and Long-Term Memory (LTM) and Short-Term Memory (STM), which are in charge of high-level processing (in a top-down manner). WM and LTM/STM are mutually complementary, increasing recognition accuracy. By implementing an initial prototype of each component of the model, we found that the proposed model works for stable object recognition. The proposed model can support context-aware AR with optical see-through HMDs.
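As a rough sketch of the model structure described above (a hypothetical illustration only; the classes, scores, and priors are assumptions, not the author's implementation), bottom-up candidates from Working Memory could be re-ranked by top-down priors held in Short- and Long-Term Memory:

```python
from collections import Counter

class WorkingMemory:
    """Bottom-up processing: propose (label, score) candidates from low-level features."""
    def extract_candidates(self, frame):
        # Stub: a real system would run feature extraction/matching on the frame.
        return [("mug", 0.40), ("phone", 0.35), ("book", 0.25)]

class ShortTermMemory:
    """Top-down context: objects seen recently in this session are boosted."""
    def __init__(self):
        self.recent = Counter()
    def prior(self, label):
        return 1.0 + 0.1 * self.recent[label]
    def update(self, label):
        self.recent[label] += 1

class LongTermMemory:
    """Top-down knowledge: long-run likelihood of each object label."""
    def __init__(self, learned_priors):
        self.learned = learned_priors  # label -> prior probability
    def prior(self, label):
        return self.learned.get(label, 0.1)

def recognize(frame, wm, stm, ltm):
    """Re-weight bottom-up scores with top-down priors; the winner reinforces STM."""
    scored = [(label, score * stm.prior(label) * ltm.prior(label))
              for label, score in wm.extract_candidates(frame)]
    best = max(scored, key=lambda item: item[1])[0]
    stm.update(best)
    return best

# Example: LTM knows mugs are common in this context, so ties break toward "mug".
result = recognize(frame=None,
                   wm=WorkingMemory(),
                   stm=ShortTermMemory(),
                   ltm=LongTermMemory({"mug": 0.6, "phone": 0.3, "book": 0.1}))
```

The property illustrated is the mutual complementarity described in the abstract: the bottom-up winner updates STM, which in turn biases future top-down re-ranking.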


Management and Manipulation of Text in Dynamic Mixed Reality Workspaces

Jason Orlosky (Thesis Supervisors: Kiyoshi Kiyokawa and Haruo Takemura)

Viewing and interacting with text-based content safely and easily while mobile has been an issue with see-through displays for many years. For example, to effectively use optical see-through Head Mounted Displays (HMDs) in constantly changing dynamic environments, variables like lighting conditions, human or vehicular obstructions in a user's path, and scene variation must be dealt with effectively. My PhD research focuses on answering the following questions: 1) What are appropriate methods to intelligently move digital content such as e-mail, SMS messages, and news articles throughout the real world? 2) Once a user stops moving, in what way should the dynamics of the current workspace change when migrated to a new, static environment? 3) Lastly, how can users manipulate mobile content using the fewest possible interactions? My strategy for developing solutions to these problems primarily involves automatic or semi-automatic movement of digital content throughout the real world using camera tracking. I have already developed an intelligent text management system that actively manages the movement of text in a user's field of view while mobile. I am optimizing and expanding on this type of management system, developing appropriate interaction methodology, and conducting experiments to verify effectiveness, usability, and safety when used with an HMD in various environments.


Improved Outdoor Augmented Reality through “Globalization”

Chris Sweeney (Thesis Supervisors: Tobias Hollerer and Matthew Turk)

Despite the major interest in live tracking and mapping (e.g., SLAM), the field of augmented reality has yet to truly make use of the rich data provided by large-scale reconstructions generated by structure from motion (SfM). This dissertation focuses on extensible tracking and mapping for large-scale reconstructions, enabling SfM and SLAM to operate cooperatively and mutually enhance each other's performance. We describe a multi-user, collaborative augmented reality system that will collectively extend and enhance reconstructions of urban environments at city scale. In contrast to current outdoor augmented reality systems, this system is capable of continuous tracking through previously modeled areas as well as new, undiscovered areas. Further, we describe a new process called globalization that propagates new visual information back to the global model. Globalization allows continuous updating of the 3D models with visual data from live users, filling coverage gaps that are common in 3D reconstructions and providing the most current view of an environment as it changes over time. The proposed research is a crucial step toward enabling users to augment urban environments with location-specific information at any location in the world for a truly global augmented reality.
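As a rough sketch of the globalization step described above (the data structures and the similarity-transform alignment are assumptions for illustration, not the dissertation's actual pipeline), a user's locally reconstructed points could be aligned to the global frame and merged into the shared model:

```python
import numpy as np

def apply_sim3(scale, R, t, points):
    """Map Nx3 local points into the global frame with a similarity transform."""
    return scale * (points @ R.T) + t

class GlobalModel:
    """Shared city-scale reconstruction that live users extend over time."""
    def __init__(self):
        self.points = np.empty((0, 3))  # global 3D point cloud

    def globalize(self, local_points, scale, R, t):
        """Propagate a user's newly mapped points back into the global model."""
        aligned = apply_sim3(scale, R, t, np.asarray(local_points, dtype=float))
        self.points = np.vstack([self.points, aligned])

# Example: merge a small local map that is rotated 90 degrees about z
# and offset from the global origin.
model = GlobalModel()
R_z90 = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
model.globalize([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]],
                scale=1.0, R=R_z90, t=np.array([10.0, 5.0, 0.0]))
```

A real system would also deduplicate points and refine the merged model, for example with bundle adjustment, before serving it back to other users.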


Filling the Gaps: Hybrid Vision and Inertial Tracking

Ky Waegel (Thesis Supervisor: Frederick P. Brooks, Jr.)

Existing head-tracking systems all suffer from various limitations, such as latency, cost, accuracy, or drift. I propose to address these limitations by using depth cameras and existing 3D reconstruction algorithms to simultaneously localize the camera position and build a map of the environment, providing stable and drift-free tracking. This method is enabled by the recent proliferation of lightweight, inexpensive depth cameras. Because these cameras have a relatively slow frame rate, I combine this technique with a low-latency inertial measurement unit to estimate movement between frames. Using the generated environment model, I further propose a collision avoidance system for use with real walking.
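A minimal sketch of the kind of hybrid tracking described above, assuming a slow depth-camera pose stream fused with fast IMU samples (the class and variable names are illustrative, not Waegel's implementation):

```python
import numpy as np

class HybridTracker:
    """Fuse a slow depth-camera pose stream with fast IMU samples."""
    def __init__(self):
        self.position = np.zeros(3)   # metres, world frame
        self.velocity = np.zeros(3)   # m/s, world frame
        self.orientation = np.eye(3)  # rotation matrix, body -> world (z-up world assumed)

    def on_imu(self, gyro, accel, dt):
        """Propagate the pose with one IMU sample (gyro in rad/s, accel in m/s^2)."""
        # First-order integration of the body-frame angular rate.
        wx, wy, wz = np.asarray(gyro) * dt
        d_rot = np.array([[1.0, -wz,  wy],
                          [ wz, 1.0, -wx],
                          [-wy,  wx, 1.0]])
        self.orientation = self.orientation @ d_rot
        # Rotate specific force into the world frame, remove gravity, integrate twice.
        a_world = self.orientation @ np.asarray(accel) - np.array([0.0, 0.0, 9.81])
        self.velocity += a_world * dt
        self.position += self.velocity * dt

    def on_camera_pose(self, position, orientation):
        """A new depth-camera localization arrives: discard accumulated IMU drift."""
        self.position = np.asarray(position, dtype=float)
        self.orientation = np.asarray(orientation, dtype=float)
        self.velocity[:] = 0.0  # crude reset; a real fusion filter would estimate velocity
```

A full system would blend the camera correction with the IMU prediction (for example with an extended Kalman filter) rather than overwriting it, but the division of labour is the same: the IMU fills the gaps between depth-camera frames.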