Re*Public
BC Biermann, Jordan Seiler, Ean Mering, MOMO, Camila Mercado
A collaboration between The Heavy Projects (LA) and the Public Ad Campaign (NYC), Re*Public is one of the first AR applications to use building architecture as a natural feature tracking trigger. Re*Public is an experimental mobile device application that digitally resurfaces, or "skins," physical buildings in urban centers by overlaying 3D content onto the physical environment. Creatively harnessing augmented reality technology, we have turned architecture into locations for meaningful cultural exchange and, in doing so, provided the user with a new way of seeing public space as a more open and democratic media environment. Re*Public represents an innovative way to create and display urban art. Blurring the lines of private property boundaries, our application allows artists to create digital works of art in public spaces and place them on buildings in ways previously impossible within the constraints of the physical environment. Specifically, Re*Public has placed works of virtual urban art (by MOMO) on buildings in NYC and LA and created a virtual history of the Houston and Bowery wall (NYC). In addition, we are engaging in an experimental urban art / augmented reality deployment designed to integrate mural art and mobile device AR. We have created interactive and animated murals that aim to engage public urban space by using AR to encourage individuals to peer beyond the surface image.
Augmented Reality for Underground Infrastructure and for Construction
Stéphane Côté, Philippe Trudel, Rob Snyder, Renaud Gervais
This demo is in two parts.
Part I: Enhanced Virtual Excavation. The purpose of the demo is to display underground pipes and 3D ground penetrating radar data inside an interactive virtual excavation that can be enlarged and moved along the surface of the road. The demo is not quite augmented reality, but rather augmented panoramas; it is still very convincing and offers high-quality (jitter-free, accurate and interactive) augmentation. This demo improves on last year’s by incorporating new features such as a slicing plane that eases measurement of vertical distances between pipes, and augmentation using 3D ground penetrating radar data. More details can be found at: http://bit.ly/Kw8gCc
Part II: An Augmented Reality Tool for Facilitating On-Site Interpretation of 2D Construction Drawings. The purpose of this demo is to enable a user to augment a building with 2D construction drawings. It displays stable (jitter-free), high-accuracy augmentation of panoramic images of the building. By displaying 2D construction drawings in their physical-world context, the system is designed to help construction workers more easily find the physical location in the building that a drawing represents. We show two augmentation techniques: a 2D drawing inserted inside the building as a sliding plane, and a combined display showing a 3D model, a 2D drawing, and a panoramic image simultaneously. As in Part I, the demo is not quite augmented reality, but rather augmented panoramas; it is still very convincing and offers high-quality (jitter-free, accurate and interactive) augmentation. More details can be found at: http://bit.ly/QJXmGW
Constrained-SLAM: a flexible framework for 3D object tracking
Steve Bourgeois, Mohamed Tamaazousti, Vincent Gay-Bellile, Sylvie Naudet Collette
In this demo, we will demonstrate accurate and robust visual tracking of textureless 3D objects on a mobile platform.
To show the wide range of applications that our tracking framework can handle, the demonstration will focus on different kinds of objects.
First, we will show a demonstration of the “car customization” application introduced in our paper “A mobile markerless Augmented Reality system for the automotive field” from the Tracking Methods and Applications workshop. It consists of realistically changing the color of the car body in real time. For convenience, the demonstration will be performed on a toy car.
A second demo will demonstrate the ability of our technology to handle industrial objects, as introduced in our ISMAR 2012 paper “An Interactive Augmented Reality System: a prototype for industrial maintenance training applications”. This demonstration consists of adding virtual information (e.g., a maintenance procedure) or virtual elements over an industrial metallic object, while also showing the quality of the registration by superimposing the CAD model on the real object.
Quick Response (QR) & Natural Feature Tracking (NFT) based Mobile Augmented Reality (AR) Framework
Jian Gu, Daniel Tan, Zhiying Zhou
NFT has been widely used in AR applications for tracking purposes. However, when the number of NFT images to be tracked increases, the computation needed to identify the NFT images increases drastically. This limits the usage of NFT tracking when computation power is constrained, especially on mobile devices. This research demo describes an implementation of a novel “QR Code + NFT” mobile Augmented Reality (AR) framework, which 1) uses a QR code for identification and 2) uses the decoded ID to retrieve the corresponding AR content and the NFT image to be tracked. The framework consists of five modules: 1) decoding the QR code information; 2) retrieving the corresponding NFT image and 3D content or videos from the server; 3) analyzing natural images provided by the user to determine whether the image is suitable for NFT; 4) real-time tracking of the NFT image/QR code and 3D pose computation; 5) high-quality augmentation of 3D content using a game-engine renderer with the computed 3D pose.
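A rough sketch of how these five modules might fit together is shown below. It is illustrative only: the content-server endpoint and the tracker/renderer interfaces are hypothetical placeholders rather than the authors' implementation; only the OpenCV QR decoding and corner-detection calls are real APIs.

```python
# Illustrative sketch of the QR-triggered NFT pipeline (not the authors' code).
# Assumes OpenCV for QR decoding; the server endpoint, tracker and renderer are hypothetical.
import cv2
import requests

QR_DETECTOR = cv2.QRCodeDetector()
CONTENT_SERVER = "http://example.com/ar"   # hypothetical endpoint

def decode_qr(frame):
    """Module 1: decode the QR code to obtain a content ID."""
    content_id, points, _ = QR_DETECTOR.detectAndDecode(frame)
    return content_id or None

def fetch_content(content_id):
    """Module 2: retrieve the NFT reference image and 3D content/video for this ID."""
    meta = requests.get(f"{CONTENT_SERVER}/{content_id}").json()
    return meta["nft_image_url"], meta["model_url"]

def is_trackable(gray_image, min_features=100):
    """Module 3: a crude suitability check -- enough corner features for NFT."""
    corners = cv2.goodFeaturesToTrack(gray_image, maxCorners=min_features,
                                      qualityLevel=0.01, minDistance=8)
    return corners is not None and len(corners) >= min_features

def main_loop(camera, tracker, renderer):
    """Modules 4-5: track the NFT target and render content with the computed pose."""
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        content_id = decode_qr(frame)
        if content_id and not tracker.has_target(content_id):
            nft_url, model_url = fetch_content(content_id)
            tracker.load_target(content_id, nft_url)      # hypothetical tracker API
            renderer.load_model(content_id, model_url)    # hypothetical renderer API
        pose = tracker.estimate_pose(frame)               # 6-DoF pose or None
        renderer.draw(frame, pose)
```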
Mass Market AR Games by Ogmento
Ori Inbar
A mobile augmented reality learning game that parents play with their children, anywhere, anytime. Parents hide virtual characters (baby animals) in real spots around the house (no markers required!). Children follow a detective story and use hints to find the missing animals and reunite them with their families.
Technology: The game is based on Ogmento’s unique template-based instant tracking technology. It allows users to acquire low-feature-density targets in under a second and make them available for immediate detection. This unmatched technology has been two years in the making and, because it does not require a predefined target, the game is the first to truly enable mass-market augmented reality gaming.
Target age group: 5-9 years old
Benefits of the Game:
- Engage parents in their children’s education through a fun game
By investing just a few minutes, parents participate in the creation of case files for their kids – and prepare a unique gaming experience tailored to their home.
- Develop spatial orientation, and motor skills
Use simple 2D maps and visual clues to identify where virtual characters are hidden around the home while operating a detective gadget (iPod)
- Combine a compelling video game with physical real world exploration
Unlike traditional video games where children spend hours in front of a screen, Pet Detective uses proven game mechanics to encourage movement and exploration, and makes children more aware of their immediate surroundings.
- Learn about animal families and their special names
e.g. Reunite the Kitten with his fesnyng of ferrets – parents: jill & hob.
- Develop problem solving skills
Solving case files by collecting clues and deducing solutions.
The prototype was tested with a group of children ages 3-6 and demonstrated huge potential for learning games that interact with the real environment.
Interactive & Adaptive Story-Telling in Museums and Science Centers with Augmented Reality on Mobile Devices
Jens Keil, Timo Engelke, Harald Wuest, Folker Wientapper
In this showcase we present early results from the EU-funded CHESS project, which aims to enrich cultural heritage experiences with mobile and mixed reality technologies. The demonstrators show how Augmented Reality virtually re-colors ancient statues and playfully educates by illustrating scientific processes and extending objects with digital information in an interactive and narration-oriented fashion, enhancing the whole experience through motion- and camera-controlled interaction techniques.
In order to do so, we employ our instantAR framework, an augmented reality system for mobile devices that has evolved over recent years between industrial and applied research. Based on standards and web technologies, the framework eases handheld AR development and is capable of employing rich media visualizations and various tracking techniques, such as image recognition and arbitrary 2D and 3D target tracking.
ARBlocks: Augmenting Education
Rafael A. Roberto, Daniel Q. de Freitas, Veronica Teichrieb, Manoela M. O. da Silva
This demonstration will allow visitors to use different applications built for ARBlocks, a dynamic blocks platform aimed at early childhood educational activities. The platform and its applications were designed to increase the teaching possibilities for educators as well as to motivate children and help them develop important abilities. The tool is based on projective augmented reality and tangible user interfaces. The content is displayed by projectors, which exhibit the required information only on the blocks’ surfaces using an automatic projector calibration technique, allowing teachers to use the tool easily.
Wide-area Scene Mapping for Mobile Visual Tracking
Jonathan Ventura, Tobias Höllerer
We demonstrate our large-scale mapping and tracking system which is easily usable by non-experts. With our system, a single person can quickly and easily prepare a large outdoor scene for visual tracking in augmented reality applications. Usage is very simple: all that is required of the user is to walk through the environment with an omnidirectional camera (such as the cheap & small Sony Bloggie). After short automatic processing of the captured video on a server, vision-based AR can be experienced in a large environment. Our results and evaluation validate the ease-of-use, accuracy and range of the system. For the demo session we will demonstrate visual tracking on an Apple iPad 2 with an indoor scene, from a pre-built point cloud model.
Kaiju Kazoo and Mechanice: Creating AR Games out of Simple AR Toys
Brian Schrank, Ted Molinski, David Bayzer, David Laskey, Paul Brom, Logan Wright, Brian Gabor Jr., Joe Stramaglia, Brian von Kuster, Majdi Badri, Phil Tibitoski, Michael Langley, Joe Elsey, Ray Tan, Dan Rose, Robert Polzin, Joe Scalzo
It's daunting to develop AR games that are intuitive to play and leverage the unique affordances of AR. This demo includes two handheld AR games, Kaiju Kazoo and Mechanice, that accomplish both. Each game was developed through a highly iterative process to avoid the common development trap of adding elements to AR games that ultimately feel broken. We rapidly prototyped a dozen ideas to discover the core “game feel” that the player would perform and experience most often. From those, we crafted two simple toys that were innately fun to tinker with. Finally, we expanded those experiences into games by building up challenges, rewards, and emotional closures of winning/losing.
In Kaiju Kazoo you run a Godzilla-like monster around an elliptical kazoo world, smashing buildings and brain police. The image marker is a magnetic kazoo-shaped spinner. You indirectly control the Kaiju by tilting the kazoo downward in the direction you want to run. The Kaiju builds momentum, making it easier to control while avoiding fast, twitch-like gameplay. You tap anywhere on screen to spin attack. Kaiju Kazoo emerged from a virtual "magnetic ball" toy that rolls around a marker. That simple indirect control and intuitive movement provided the core mechanic and hook we developed into Kaiju Kazoo. The high fidelity of rotation and haptic feel of the spinner leverages a unique affordance of AR.
The second game in the demo is Mechanice. The physical interface is a cube with virtual cubes stacking off every side. Aim and tap to push rings of virtual cubes around, trying to match the colors on each side to win and explode the puzzle as a reward. Mechanice emerged from a toy that enabled players to transform the configuration of virtual cubes around a physical cube, which provides a strong sense of agency/discovery in AR.
Interactive Augmented Reality Exposure Therapy
Sam Corbett-Davies, Andreas Dünser, Adrian Clark
In this demonstration we show an augmented reality (AR) system we are developing for the exposure treatment of arachnophobia (the fear of spiders). AR has great potential for phobia treatment because virtual fear stimuli can be shown in the real world, and the client can see their own body and interact naturally with the stimuli. Our system uses an overhead Kinect camera to obtain 3D information about the therapy environment. Objects in the scene are tracked and computer representations of them are developed to enable virtual spiders to interact with the real world in a very realistic fashion. While virtual exposure therapy is not new, no previous system has achieved the level of interactive realism that our work does. Our system aims to give the therapist much more control over the stimulus compared to traditional (non-virtual) exposure therapy. The size, speed, appearance and number of the spiders can be adjusted. The virtual spider can walk up, around, or behind real objects and can be carried, prodded and occluded by the user. We invite feedback and discussion on the technical aspects of the system and how they could be developed.
Handheld AR/AV system using PDR localization and image based localization with virtualized reality models
Koji Makita, Masakatsu Kourogi, Thomas Vincent, Takashi Okuma, Jun Nishida, Tomoya Ishikawa, Laurence Nigay, Takeshi Kurata
Our demo will show a handheld AR/AV (Augmented Virtuality) system for indoor navigation to destinations and for displaying detailed instructions about target objects with contextual interaction. The system's localization method is based on two crucial functions: PDR (Pedestrian Dead Reckoning) localization and image-based localization.
The main feature of the demo is the complementary use of PDR and an image-based method with virtualized reality models. PDR is realized with the built-in sensors (3-axis accelerometers, gyroscopes and magnetometers) of a waist-mounted device for estimating position and direction on a 2D map. The accuracy of the PDR localization is improved with map matching and image-based localization. Maps of the environment for map matching are automatically created from the virtualized reality models. Image-based localization is realized with a matching phase and a tracking phase for estimating 6-DoF (degrees of freedom) extrinsic camera parameters. In the matching phase, correspondences between reference images included in the virtualized reality models and images from the camera of the handheld device are used; the output of the PDR localization is used for efficient searching of reference images. In the tracking phase, interest-point tracking on images from the camera is used for relative motion estimation.
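A much-simplified sketch of this complementary scheme is given below; the blending weight and the pose interface are assumptions for illustration, not the authors' formulation. As described above, the PDR estimate also narrows the set of reference images searched in the matching phase.

```python
# Simplified sketch of complementary PDR / image-based localization (not the authors' code).
# PDR advances a 2D position from step events; an occasional image-based 6-DoF fix,
# assumed to be reduced to (x, y, yaw) in map coordinates, corrects the accumulated drift.
import math

class FusedLocalizer:
    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y = x, y            # position on the 2D floor map (metres)
        self.heading = heading           # walking direction (radians)

    def pdr_step(self, step_length, heading):
        """Dead-reckoning update from one detected step (waist-mounted sensors)."""
        self.heading = heading
        self.x += step_length * math.cos(heading)
        self.y += step_length * math.sin(heading)

    def image_fix(self, cam_pose, blend=0.8):
        """Correct drift with an image-based pose; `cam_pose` (hypothetical interface)
        provides x, y and yaw already projected onto the 2D map."""
        self.x = blend * cam_pose.x + (1 - blend) * self.x
        self.y = blend * cam_pose.y + (1 - blend) * self.y
        self.heading = cam_pose.yaw
```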
Layar Creator - AR content authoring made easy
Jens de Smit, Ronald van der Lingen
Layar has been working on tracking not only entire pictures but also smaller parts of them, allowing users to zoom in on particular details of a scene. This technology is integrated in our Layar Creator authoring environment for easy augmentation of print publications. In this demo you can see prototypes of how the same technology of cutting up a reference image also makes it possible to track an object from multiple angles, creating more complex and immersive scenes than ever before possible with Layar. The demo will feature some technical insights into the inner workings of this solution not normally visible in the market version of the Layar app. Participants can also bring their own reference images and try the Creator out for themselves.
Outdoor AR Library – A Component-based Framework for Mobile Outdoor AR
Gun A. Lee, Leigh Beattie, Robert W. Lindeman, Raphaël Grasset, Mark Billinghurst
The Outdoor AR Library (http://www.hitlabnz.org/mobileAR) is a software development framework for easily building outdoor AR applications on mobile platforms. By mixing and matching the ready-to-use framework components, developers can focus on designing and developing the domain content, logic and user interface, and spend less time implementing basic functionality. The demonstration will outline the basic structure and development process using the Outdoor AR Library. The demonstration will also show a number of mobile outdoor AR applications developed using the library, including highly interactive games, running on tablet devices and smartphones. The demonstrated mobile outdoor applications will include:
- CityViewAR: showing 3D buildings of Christchurch that were demolished due to the earthquake.
- CCDU 3D: 3D and AR visualization of the new city plan for Christchurch.
- GeoBoids: an outdoor AR exer-game with multi-modal interaction.
- Courier AR: an outdoor AR application for couriers.
Photometric Registration from Arbitrary Geometry Demo
Lukas Gruber, Dieter Schmalstieg
In our demo we show recent advances in photometric registration for AR. We will present the technology from the paper “Real-time Photometric Registration from Arbitrary Geometry”, which will be published at ISMAR 2012. The novelty of this technology is a non-invasive photometric registration approach from arbitrary geometry: it does not use additionally inserted light probes such as reflective mirror balls, but instead estimates the environment light from observations of the current scene. We will demonstrate a narrative and interactive setup based on an AR rendering system which supports lighting of virtual content from our photometric registration algorithm. We show AR lighting effects such as the application of real-world lighting onto virtual objects, virtual shadows on virtual objects, virtual shadows on real-world geometry and real-world shadows on virtual objects. The system is based on real-time reconstruction and hence supports dynamic scenes as well.
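The abstract states that the environment light is estimated from observations of the current scene's arbitrary, reconstructed geometry. One common way to realize such an estimate for diffuse surfaces is a low-order spherical-harmonics fit, sketched below purely as an illustration; the paper's actual estimator may differ.

```python
# Hedged sketch: estimate low-order spherical-harmonics (SH) lighting from observed
# intensities of surface points with known normals (arbitrary scene geometry).
# Illustrates the general idea only; not necessarily the paper's formulation.
import numpy as np

def sh_basis(normals):
    """First 9 real SH basis functions evaluated at unit normals (N x 3)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),                          # Y00
        0.488603 * y, 0.488603 * z, 0.488603 * x,            # Y1-1, Y10, Y11
        1.092548 * x * y, 1.092548 * y * z,                  # Y2-2, Y2-1
        0.315392 * (3.0 * z * z - 1.0),                      # Y20
        1.092548 * x * z, 0.546274 * (x * x - y * y),        # Y21, Y22
    ], axis=1)

def estimate_lighting(normals, intensities):
    """Least-squares fit of 9 SH coefficients to observed diffuse intensities.
    The coefficients can then be reused to shade virtual content consistently."""
    B = sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return coeffs
```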
SnapAR: Quick Viewpoint Switching for Manipulating Virtual Objects in Hand-Held Augmented Reality using Stored Snapshots
Mengu Sukan, Steve Feiner, Barbara Tversky, Semih Energin
SnapAR is a magic-lens-based hand-held augmented reality application that allows its user to store snapshots of a scene and revisit them virtually at a later time. By storing a still image of the unaugmented background along with the 6DOF camera pose, this approach allows augmentations to remain dynamic and interactive. This makes it possible for the user to quickly switch between vantage points at different locations from which to view and manipulate virtual objects, without the overhead of physically traveling between those locations.
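The central data structure can be pictured as a stored background frame paired with the 6DOF camera pose at capture time; a minimal sketch follows, with a hypothetical renderer interface standing in for the actual rendering pipeline.

```python
# Minimal sketch of the snapshot idea (illustrative, not the authors' implementation):
# store the unaugmented background frame plus the 6DOF camera pose, so that the
# *current* virtual scene can later be re-rendered -- still live and interactive --
# over the stored image.
from dataclasses import dataclass
import numpy as np

@dataclass
class Snapshot:
    background: np.ndarray     # still image of the unaugmented scene
    pose: np.ndarray           # 4x4 camera-to-world transform at capture time

class SnapshotStore:
    def __init__(self):
        self.snapshots = []

    def capture(self, frame, camera_pose):
        self.snapshots.append(Snapshot(frame.copy(), camera_pose.copy()))

    def view(self, index, renderer, virtual_scene):
        """Re-render the current virtual scene from the stored viewpoint."""
        snap = self.snapshots[index]
        return renderer.compose(snap.background, virtual_scene, snap.pose)  # hypothetical renderer
```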
In this demo, visitors can use SnapAR to explore a virtual furniture layout application using a scaled-down table-top living room model. Visitors are given hand-held tablet computers running SnapAR with a set of pre-stored snapshots that they can select and view. They can walk around a physical table, on which the model resides, to change their viewpoint and compare that to being able to change their viewpoint virtually using SnapAR. Finally, users can also experiment with rearranging virtual furniture using SnapAR’s manipulation controls in both live and snapshot modes.
Recreating the parallax effect associated with Fishtank VR in a real-time telepresence system using head-tracking and a robotic camera
Christian Heinrichs, Andrew McPherson
This demonstration extends the concept of Fishtank VR to real environments by mapping head movements to real camera movements. A mechanical camera mount is used to move the camera, and head-coupled perspective projection is emulated by cropping the video feed depending on the position of the head in relation to the screen. While head-to-camera movement mappings are kept proportional, the size of the scene is comparatively smaller, resulting in a magnified view.
The effect is that of a virtual window into a smaller space, allowing detailed remote examination of artifacts. Since the camera position and stage dimensions are known, a further benefit of this approach is the potential to overlay virtual 3D content without the use of markers.
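A hedged sketch of the cropping step is given below: the crop window inside the wide camera frame shifts opposite to the viewer's head offset and grows as the head approaches the screen, mimicking a window onto the remote stage. The gains and limits are illustrative assumptions, not the system's calibration.

```python
# Hedged sketch of cropping-based head-coupled perspective.
# Parameters (base_crop, gain, ref_z) are illustrative assumptions.

def crop_window(frame_w, frame_h, head_x, head_y, head_z,
                base_crop=0.6, gain=0.3, ref_z=0.5):
    """head_x/head_y: lateral head offset from the screen centre (metres);
    head_z: head-to-screen distance (metres). Returns (x0, y0, w, h) in pixels."""
    # Closer head -> larger crop (wider apparent field of view through the "window").
    scale = min(1.0, max(0.3, base_crop * ref_z / head_z))
    w, h = int(frame_w * scale), int(frame_h * scale)
    # The window shifts opposite to head motion, as when looking through a real window.
    cx = frame_w / 2 - gain * head_x * frame_w
    cy = frame_h / 2 + gain * head_y * frame_h
    x0 = int(min(max(cx - w / 2, 0), frame_w - w))
    y0 = int(min(max(cy - h / 2, 0), frame_h - h))
    return x0, y0, w, h
```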
VRCodes: Unobtrusive and Active Visual Codes
Grace Woo and Szymon Jakubczak
VRCodes is a novel visible-light-based communication architecture based on undetectable codes embedded in the picture that are easily resolved by an inexpensive camera. VRCodes convert unobtrusive display surfaces into general digital interfaces which transmit both pictures and machine-compatible data. They also encode relative orientation and position. This technology will facilitate development of public-private interfaces in which people use their personal devices to interact with public infrastructure.
We demonstrate the potential of VRCodes through LipSync, an interactive broadcast of famous monologues available in many different languages to multiple users at the same time. Users direct their smartphones to "tune in" to their desired language, either audio or closed captioning, by simply pointing them at the relevant part of the screen. This allows intuitive selection of the speaker and language preference. The precise synchronization between the video and the audio streams creates a seamless experience, where the user's natural motions give voice to moving lips.
This demo installation will also allow spectators to appreciate the technical effect underlying VRCodes as described in the main ISMAR paper. Using provided peripherals or their own camera-enabled portable devices, they can easily observe how the eye works differently from the camera. VRCodes at the MIT Media Lab have most recently been featured on CBS and Engadget. We hope LipSync also gets people talking.
VENTURI City game
Selim BenHimane
The demo presents a multi-player Augmented Reality game that takes place on a 0.7 m x 0.7 m table-top miniature city. The players have to accomplish a set of missions in the city (deliveries in a limited time, fire extinguishing, etc.). The real-time visual localization employed in this demo is based on the tracking partly described in the accepted long paper Kurz et al., “Representative Feature Descriptor Sets for Robust Handheld Camera Localization”. The demo runs on experimental prototype mobile platforms from ST Ericsson. The system showcases sensor-aided, markerless, feature-based tracking of a 3D platform composed of different complex structures. The platform has been geometrically reconstructed in an offline stage to correctly handle occlusions. Continuous communication between the mobile platforms allows multi-player interaction.
The demo is part of the first year’s demo use case of the European project VENTURI. VENTURI is a three-year collaborative European project targeting the shortcomings of current Augmented Reality design, bringing together the forces of mobile platform manufacturers, technology providers, content creators, and researchers in the field. It aims to place engaging, innovative and useful mixed reality experiences into the hands of ordinary people by co-evolving next-generation AR platforms and algorithms. Finally, it plans to create a seamless and optimal user experience through a thorough analysis and evolution of the AR technology chain, spanning device hardware capabilities to user satisfaction. This demo is the result of the project’s first step, which consists of integrating the technology, software and hardware to showcase a multi-player game.
Distributed Visual Processing for Augmented Reality
Winston Yii, Wai Ho Li, Tom Drummond
This is a demonstration of the research in the paper with the same name. It shows AR using visual tracking distributed between smartphone clients and a server PC connected to a Microsoft Kinect that dynamically models the operating environment. The Kinect sees the same scene as the smartphones. The attached PC builds and indexes a trackable model of the world for each RGBD frame from the Kinect. Image processing and pose computation are distributed so as to minimise the computational load on the smartphones and the bandwidth requirements of the wireless link. The server transforms the Kinect data into the viewpoint of each smartphone, thus turning it into a virtual Kinect. This enables correct rendering of virtual content on the smartphones with occlusions in dynamic scenes.
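The "virtual Kinect" idea can be sketched as re-projecting the Kinect's depth data into each phone's estimated viewpoint to obtain a depth buffer for occlusion handling. The sketch below assumes pinhole intrinsics and 4x4 poses; it is an illustration, not the paper's actual pipeline (which also distributes image processing and model indexing).

```python
# Hedged sketch of the "virtual Kinect": back-project the Kinect depth image to 3D,
# transform it into a smartphone's viewpoint, and re-project it to get a per-pixel
# depth map in the phone's image for occlusion-correct rendering.
import numpy as np

def backproject(depth, K):
    """Depth image (H x W, metres) + pinhole intrinsics K -> N x 3 points (Kinect frame)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)[valid]

def virtual_depth(points_kinect, T_phone_from_kinect, K_phone, width, height):
    """Render a depth buffer as seen from the phone (nearest depth wins per pixel)."""
    P = (T_phone_from_kinect[:3, :3] @ points_kinect.T).T + T_phone_from_kinect[:3, 3]
    P = P[P[:, 2] > 0]                                   # keep points in front of the phone
    u = np.round(K_phone[0, 0] * P[:, 0] / P[:, 2] + K_phone[0, 2]).astype(int)
    v = np.round(K_phone[1, 1] * P[:, 1] / P[:, 2] + K_phone[1, 2]).astype(int)
    depth = np.full((height, width), np.inf)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    np.minimum.at(depth, (v[ok], u[ok]), P[ok, 2])       # closest surface per pixel
    return depth
```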
Vantage – Mobile phone navigation application with augmented reality directing arrow
Wang Yuan, Mandi Lee Jieying, Dr. Henry Been-Lirn Duh
This demonstration showcases a mobile phone (iPhone) navigation application with a real-time rendered directing arrow displayed through augmented reality (AR) technology. Vantage provides a new way to display directional instructions in the 3-dimensional urban environment. It aims to reduce the mental and physical effort required of the user to translate directional information on a 2D map into the 3D physical urban environment. The most intuitive way to guide users to their destination is not through maps but by leading them. A guide who is present in the same environment and leads another person directly has quite a few advantages over a map: the user does not have to match landmarks on the map with landmarks in the environment to know which direction they are heading. Even though digital maps nowadays can mark the user’s position, some users still need to check the surrounding landmarks or move in a direction to find out whether they are heading the right way. Vantage works within the limitations of AR to provide a more intuitive way to lead users with directional instructions placed in the urban environment.
Move it there: Image-Driven View Management for MAR
Raphael Grasset, Tobias Langlotz, Denis Kalkofen, Markus Tatzgern, Dieter Schmalstieg
This demonstration will show a novel view management technique to improve label layout and representation for mobile augmented reality applications. We will demonstrate, in the context of an AR browser, how traditional placement methods for annotations can be refined based on the content of the image. In the demo, participants will see how labels visible in the AR view adapt their position (“move”) and change their representation to improve understanding of the AR scene. The demonstration will be done on a tablet (Android or iOS) using pre-recorded situated content as well as live content of geo-located labels placed in the ISMAR demo room. This project is rather timely as the public rapidly adopts mobile AR browsers, driving the need for this type of technique and underlining the importance of this topic for the AR community.
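As a toy illustration of image-driven placement (not the authors' actual saliency or layout model), candidate label positions around an anchor can be scored by how much image structure the label box would cover, and the least cluttered position chosen:

```python
# Minimal illustration of image-driven label placement: score a few candidate positions
# around the anchor by the edge density the label box would hide, and keep the cheapest.
# Candidate offsets and the edge-density proxy are illustrative assumptions.
import numpy as np

def edge_density(gray):
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def place_label(gray, anchor_xy, label_w, label_h,
                offsets=((20, -40), (20, 20), (-120, -40), (-120, 20))):
    edges = edge_density(gray)
    h, w = gray.shape
    best, best_cost = None, np.inf
    for dx, dy in offsets:                      # candidate positions relative to the anchor
        x = int(np.clip(anchor_xy[0] + dx, 0, w - label_w))
        y = int(np.clip(anchor_xy[1] + dy, 0, h - label_h))
        cost = edges[y:y + label_h, x:x + label_w].sum()
        if cost < best_cost:
            best, best_cost = (x, y), cost
    return best                                 # top-left corner for the label box
```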
High-Quality Reflections, Refractions, and Caustics in Augmented Reality
Peter Kán and Hannes Kaufmann
High quality rendering plays an important role in achieving visual coherence between real and virtual objects in Augmented Reality scenarios. Many application areas can benefit from the photorealistic synthesis of videos of real and virtual worlds. In this demonstration we present a novel ray-tracing based rendering system, which produces high-quality output of specular global illumination effects and their composition with real video while achieving interactive framerates.
We demonstrate our novel rendering method which combines and improves ray-tracing based algorithms from computer graphics with compositing methods to simulate high-quality light transport between the virtual and real worlds in Augmented Reality. Our system exploits the power of modern parallel GPU architectures and shows how correct reflections and refractions on virtual objects can be rendered in an interactive AR setup. The caustics created by virtual specular objects are rendered interactively. Our system allows dynamic changes of light, materials, and geometry. We demonstrate changing the index of refraction of virtual objects, which affects the refracted background and the created caustics.
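At the core of such specular effects are the per-ray reflection and refraction directions, given by the law of reflection and the vector form of Snell's law; a small sketch follows (the full GPU ray tracer and video compositing are of course far beyond this snippet).

```python
# Per-ray building blocks of specular effects: reflection and refraction directions
# in vector form. Illustrative sketch only; the actual system is a GPU ray tracer
# with compositing against the real video.
import numpy as np

def reflect(d, n):
    """Reflect unit direction d about unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (assumed to point
    against d) and ratio eta = n1/n2. Returns None on total internal reflection."""
    cos_i = -np.dot(n, d)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                              # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n
```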
The presented method, together with its implementation, can be useful in all AR applications where realistic rendering results are important. It could help increase the efficiency of medical AR systems or improve the production workflow in the movie industry.
Real-Time Surface Light-field Capture for Augmentation of Planar Specular Surfaces
Jan Jachnik, Richard A. Newcombe, Andrew J. Davison
Most real-time AR systems are only able to achieve a convincing effect when placing augmentations on real-world surfaces with Lambertian reflectance. We demonstrate an easy to use system which allows artificial objects to be added realistically and rapidly to planar surfaces with general, specular reflectance characteristics.
Our approach requires only a single hand-held camera connected to a computer, with all the main data storage and processing carried out by a commodity GPU. The core step is the capture of the light-field emanating from the planar surface of interest, using a surface light-field representation which looks forward to a future generalisation to non-planar scenes. During capture, the camera's 6DOF pose is tracked using PTAM (Klein and Murray, ISMAR 2007), and the user's camera movements are guided by a real-time capture coverage display.
In a modest capture period of under 30 seconds, the recovered light-field is not fully complete, but sufficient to enable some interesting applications. The diffuse and specular components of the reflection at each surface element can be separated to form a diffuse mosaic and, with the assumption of distant illumination, a convincing hemispherical environment map. A virtual object can now be added to the specular surface as the user continues to browse the scene with the moving camera in real time. The light-field is used to compute the rendering steps which give a convincing effect: object relighting using a multi-point-light simplification of the environment map; the casting of a diffuse shadow by the object on the surface; a reflection of the object in the specular surface; and, most importantly, the occlusion of specular reflections by the object, these occlusions being filled in with diffuse texture.
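The diffuse/specular separation can be illustrated roughly as follows: for each surface element, observations from many viewpoints are reduced to a view-independent diffuse estimate, and the per-view residual is treated as specular. This is a simplified sketch under stated assumptions, not the paper's implementation.

```python
# Simplified illustration of splitting a captured surface light-field into diffuse and
# specular parts. The robust low-percentile reduction is an assumption for illustration.
import numpy as np

def separate_light_field(samples):
    """samples: array (num_views, num_surface_elements, 3) of RGB observations,
    with NaN where a view did not observe an element."""
    diffuse = np.nanpercentile(samples, 10, axis=0)        # view-independent part
    specular = np.clip(samples - diffuse, 0.0, None)       # view-dependent residual
    return diffuse, specular

# The diffuse part forms the diffuse mosaic; binning the specular residuals by the
# mirrored viewing direction (assuming distant illumination) yields an environment map.
```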
Texture-Less Planar Object Detection and Pose Estimation Using Depth-Assisted Rectification of Contours
João Paulo Lima, Hideaki Uchiyama, Veronica Teichrieb, Eric Marchand
This demo presents a method named Depth-Assisted Rectification of Contours (DARC) for detection and pose estimation of texture-less planar objects using RGB-D cameras. In the demonstration, the DARC method is used to perform real-time detection, pose estimation and augmentation of texture-less planar objects such as a logo, a traffic sign and a map. DARC’s ability to discern objects with the same shape but different sizes, thanks to the use of depth data, is also illustrated. In addition, visitors are able to interactively register their own texture-less planar objects. More details on the DARC technique can be found in the poster session of the S&T track of this year’s ISMAR.
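The rectification step can be pictured as fitting a plane to a contour's 3D points (from the depth map) and rotating them into a fronto-parallel, metric canonical frame, which is what lets same-shape objects of different physical sizes be told apart. The sketch below illustrates that idea only and is not the authors' code.

```python
# Hedged sketch of depth-assisted rectification of a contour: fit a plane to its 3D
# points and rotate them into a fronto-parallel, metric canonical frame.
import numpy as np

def rectify_contour(points3d):
    """points3d: N x 3 contour points in camera coordinates (from the depth map)."""
    centred = points3d - points3d.mean(axis=0)
    # Plane normal = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[2]
    # Rotation taking the plane normal onto the z-axis (fronto-parallel view).
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z)
    c = float(np.dot(normal, z))
    if np.linalg.norm(v) < 1e-8:
        R = np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    else:
        vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        R = np.eye(3) + vx + vx @ vx * ((1 - c) / (np.linalg.norm(v) ** 2))
    rectified = (R @ centred.T).T
    return rectified[:, :2]        # metric 2D contour in the canonical plane
```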
The augmented painting
Wim van Eck, Yolande Kolstee
The AR Lab, a collaboration between the Royal Academy of Art The Hague, Delft University of Technology and Leiden University, will demonstrate their latest augmented reality project named ‘The Augmented Painting’. Working together with the Van Gogh Museum (Amsterdam) the AR Lab realised three installations which all offer playful interaction with multi-spectral captures of paintings by Van Gogh. These captures (x-ray, infrared, ultraviolet etc.) are usually not accessible to the museum visitor, even though they offer valuable information about the painting.
We will demonstrate the latest version of our project, which tracks the painting ‘The Bedroom’ using natural feature tracking. Via an iPad the user can see not only all the multi-spectral captures overlaid live on the painting, but also captures of the back of the painting and a height map. Touching visual hotspots triggers pre-recorded verbal narration, which is used to give extra information about specific parts of the painting. Other hotspots allow you to zoom in on a specific part of the painting, revealing for example small cracks in the paint.
For more information about our research, please read our article ‘The Augmented Painting: Playful Interaction with Multi-Spectral Images’.
Interactive 4D Overview and Detail Visualization in Augmented Reality
Stefanie Zollmann, Denis Kalkofen, Christof Hoppe, Stefan Kluckner, Horst Bischof, Gerhard Reitmayr
The visualization of time-oriented 3D data in the real-world context poses special challenges compared to the visualization of arbitrary virtual objects. Data from different points in time may occlude each other, which makes it hard to compare multiple datasets. In particular, 4D data sets are often a representation of the current scene as it was in the past, which means that the data may occlude a large part of the current scene. Furthermore, the complexity of the data makes it difficult to compare 4D data in the real-world context and to detect changes.
In this demonstration we present an approach for visualizing time-oriented data of dynamic scenes in an AR view. To provide a comprehensible visualization, we introduce a visualization concept that uses overview and detail techniques to present 4D data at different levels of detail. These levels provide, first, an overview of the 4D scene; second, information about the 4D change of a single object; and third, detailed information about object appearance and geometry at specific points in time. By combining the three levels with interactive transitions such as magic lenses or distorted viewing techniques, we enable the user to navigate through the levels and understand the relationships between them. For the demonstration we will show how to apply this concept to construction site documentation and visualize different stages of a construction process.
Florence AR interactive tourist map (Bird’s view AR)
Giovanni Landi, Giacomo Chegia, Nicola Pireddu
In traditional bird’s eye view maps the spatial relations between spots and locations are perceived in a unique and very natural way, making it easy and intuitive to orient oneself. In many tourist maps, this perspective survives, assisted by other useful graphical elements. Augmented reality allows the development of interactive tourist maps where the user can observe a detailed 3D reconstruction of the city from an ideal bird’s eye perspective, finding monuments and points of interest in an easy and intuitive way. The traditional tourist city map could be distributed together with a specific AR app that uses the paper map as an AR marker, adding a new layer of extra tourist information (updated regularly), a 3D model of the city and its monuments, plus all the location-based services available with GPS support.
TineMelk AR - augmenting 100 000 breakfast tables with talking cows
Kim Baumann Larsen, Tuck Siver and David Jones
Augmented Reality in marketing offers many creative and technical opportunities that are only recently beginning to be understood by advertisers. Placebo Effects and Labrat’s work with TineMelk AR gave us the opportunity to develop an appreciation of these opportunities in a real-world application.
The TineMelk AR application for Android and iOS ran in Norway from January 2012 for about 4 months nationwide. An AR marker was printed on the back of more than 50 million milk cartons. The app was part of a campaign to raise awareness of locally produced and distributed milk and was built on an existing marketing concept of cows talking like humans when unobserved.
The AR app placed two small animated cows on the user’s table playing out a funny mise en scène around the milk carton in a different Norwegian dialect depending on which region the milk was from. The story ends when the cows are surprised to ‘discover’ the user and then clumsily return to ‘playing’ cow.
The app was highly successful. In a country of 5 million people it reached 110 000 downloads in 3 months, averaged 4 stars on the App Store, and helped the campaign reach its goal of communicating that TineMelk’s milk was locally produced. The app was shortlisted for two Cannes Lions awards, the world's biggest annual awards show and festival for the creative communications industry.
There were several technical challenges we found unique solutions for in creating this app, such as allowing for multiple regions in a single app and tracking the user’s location and passing that information onto an animation blended character. Other challenges were the significant offset of the scene from the marker, and the tracking of 30 non-optimal near-similar markers resulting from a locked-down carton design.
http://www.pfx.no/media/TineMelkAR/TineMelk_AR_demo_subs.mov
Virtual Interactive Podium (VIPodium)
Inga Nakhmanson, Aleksey Streltsov, David Esayan, Den Ivanov
Fitting Reality's VIPodium is a virtual fitting room based on augmented reality and MS Kinect. VIPodium is a cloud-based, cross-platform system that allows customers to virtually try on clothes.
For customers this means a controllable and interactive shopping experience that solves their FIT/SUIT dilemma. Record your own ShapeID profile, get measured once, and then use it for shopping in any online store. For fashion brands and retailers this means a unique opportunity not only to boost profit margins and competitiveness, but also to challenge the very basics of the retail industry by creating new standards for tomorrow’s fashion retail.
AR Marker Hiding Based on Image Inpainting and Reflection of Illumination Changes
Norihiko Kawai, Masayoshi Yamasaki, Tomokazu Sato, Naokazu Yokoya
This demo shows marker-based AR applications such as games and furniture arrangement simulation in which AR markers are removed from the user's view image in real time. Users can enjoy AR applications without noticing the existence of the markers. To achieve natural marker hiding, and assuming that the area around a marker is locally planar, the marker area in the first frame is inpainted using the rectified image to achieve high-quality inpainting. The inpainted texture is then overlaid on the marker region in each frame according to the camera motion for geometric consistency. Both global and local luminance changes around the marker are separately detected and reflected in the inpainted texture for photometric consistency. In our system, the positions of markers need to be fixed while the markers are hidden; however, if the markers are moved, the marker regions can quickly be re-inpainted on request.
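The per-frame overlay can be sketched as warping the once-inpainted, rectified texture into the current frame via the homography implied by the tracked pose and scaling it by a global luminance ratio measured around the marker. The sketch below is simplified (the paper additionally handles local luminance changes), and the masks and reference values are assumptions.

```python
# Simplified sketch of the per-frame overlay for marker hiding (illustrative only).
import cv2
import numpy as np

def overlay_inpainted(frame, inpainted_tex, H, marker_mask, ring_mask, ref_ring_mean):
    """H: 3x3 homography mapping the rectified texture into the current frame.
    marker_mask / ring_mask: uint8 masks of the marker region and a ring around it.
    ref_ring_mean: mean ring intensity at inpainting time (photometric reference)."""
    warped = cv2.warpPerspective(inpainted_tex, H, (frame.shape[1], frame.shape[0]))
    # Global luminance change = current ring brightness relative to the reference.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cur_ring_mean = cv2.mean(gray, mask=ring_mask)[0]
    gain = cur_ring_mean / max(ref_ring_mean, 1e-3)
    adjusted = np.clip(warped.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    out = frame.copy()
    out[marker_mask > 0] = adjusted[marker_mask > 0]
    return out
```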
Depth Perception Control by Hiding Displayed Images Based on Car Vibration for Monocular Head-up Display
Tsuyoshi Tasaki, Akihisa Moriya, Aira Hotta, Takashi Sasaki, Haruhiko Okumura
We have developed a monocular HUD and an accurate depth perception control method for navigation, which we call the dynamic perspective method. The monocular HUD has no binocular parallax problems because the user observes images with only one eye through the optical system of the monocular HUD. The dynamic perspective method uses the size and position of an object image as depth cues. An example of an object image is an arrow for navigation. When we want users to perceive a near position, the object image on the monocular HUD is displayed bigger and lower. In simulation, the dynamic perspective method achieved a perceived depth of 120 m within an error of 30%.
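The size and position cues follow simple perspective geometry: an arrow meant to lie on the road at distance d appears smaller and closer to the horizon as d grows. A hedged sketch under pinhole assumptions follows; the parameter values are illustrative, not the HUD's actual optical calibration.

```python
# Hedged sketch of the size/position depth cues under a pinhole model: an arrow on the
# road at distance d scales with 1/d and sits lower (further below the horizon) as d
# shrinks. All parameter defaults are illustrative assumptions.

def arrow_on_hud(d, arrow_width=1.0, eye_height=1.2, focal_px=800, horizon_y=240):
    """d: intended perceived distance (m); returns (pixel_width, pixel_y) of the arrow."""
    pixel_width = focal_px * arrow_width / d          # nearer -> bigger
    pixel_y = horizon_y + focal_px * eye_height / d   # nearer -> lower in the image
    return pixel_width, pixel_y
```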
Multisensor-driven Adaptive Augmented Reality in the Cultural Heritage Context
Yongchun Xu, Ljiljana Stojanovic, Nenad Stojanovic, Tobias Schuchert
In this demo we will show the ARtSENSE system, which enables a personalized experience for every individual visitor by adapting the content presented through an augmented reality museum guidance system to the psychological state of the visitor. The system is unique due to its multi-sensing and multi-technology nature.
To estimate the interest and engagement of the visitor and to adapt the content provided by the guide accordingly, the following sensors are used: (i) see-through glasses with an integrated camera for tracking the gaze of the visitors and for displaying the augmented reality content to them; (ii) acoustic sensors for sensing the acoustic information surrounding visitors, such as environment noise or the content that visitors are listening to; and (iii) bio-sensors for observing a set of the visitor’s physiological parameters, like heart rate or skin conductance, to extract her/his mental engagement.
These sensors provide rich, continuous data about the visitor's state. However, since individual metrics carry very little meaning on their own, a combination of sensor data is needed to infer the visitor's interest or intent. To integrate isolated sensor data, the ARtSENSE system is based on (i) semantic technologies for the correlation of sensor data via modelling of attention situations, as well as (ii) complex event processing for recognizing these interesting patterns in the sensor data streams.
The sensors are read proactively, i.e. events are generated when new sensor data is available, when data changes, etc. In this way the number of interactions offered to a visitor is dramatically constrained and made to correspond to the user's expectations. To support a visitor at the right moment, the delay between collection of the data and the corresponding notification is kept as close as possible to real time (considering network latency).
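As a toy illustration of why isolated metrics are combined, a naive event-driven fusion might look like the sketch below; the feature names, thresholds and weights are entirely hypothetical, and ARtSENSE itself relies on semantic models and complex event processing rather than a fixed weighted sum.

```python
# Toy illustration of fusing isolated sensor events into an engagement estimate.
# Event names, features and weights are hypothetical.

def engagement_score(gaze_dwell_s, ambient_noise_db, heart_rate_delta):
    """Longer gaze dwell and a heart-rate increase raise the score; noise penalises it."""
    gaze_term = min(gaze_dwell_s / 5.0, 1.0)                 # saturate at 5 s of dwell
    arousal_term = max(min(heart_rate_delta / 10.0, 1.0), -1.0)
    noise_penalty = 0.3 if ambient_noise_db > 70 else 0.0
    return max(0.0, 0.6 * gaze_term + 0.4 * arousal_term - noise_penalty)

def on_sensor_event(state, event):
    """Event-driven update: recompute only when new sensor data arrives."""
    state.update(event)          # e.g. {'gaze_dwell_s': 3.2} or {'heart_rate_delta': 6}
    return engagement_score(state.get('gaze_dwell_s', 0.0),
                            state.get('ambient_noise_db', 40.0),
                            state.get('heart_rate_delta', 0.0))
```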
The ARtSENSE system has been developed within the EU project ARtSENSE (http://www.artsense.eu/).
Distance-based modeling and manipulation techniques using Ultrasonic Gloves
Thuong N Hoang & Bruce H Thomas
We present a set of distance-based interaction techniques for modeling and manipulation, enabled by a new input device called the ultrasonic gloves. The ultrasonic gloves are built upon the original design of the pinch glove device for virtual reality systems, with a tilt sensor and a pair of ultrasonic transducers in the palms of the gloves. The transducers are distance-ranging sensors that allow the user to specify a range of distances with natural gestures, such as facing the palms towards each other or towards other surfaces. The user is able to create virtual models of physical objects by specifying their dimensions with hand gestures. We combine the reported distance with the tilt orientation data to construct virtual models. We also map the distance data to a set of affine transformation techniques, including relative and fixed scaling, translation, and rotation. Our techniques can be generalized to different sensor technologies.
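A hedged sketch of how the distance and tilt readings might be mapped to modeling and scaling operations is shown below; the distinction between relative and fixed scaling follows the abstract, while the formulas, reference values and single-axis rotation are illustrative assumptions.

```python
# Hedged sketch of mapping ultrasonic distance readings to modelling/manipulation.
# Specific formulas and reference values are illustrative assumptions.
import numpy as np

def box_from_gestures(width_m, depth_m, height_m, tilt_rad):
    """Create a virtual box whose dimensions were 'measured' with palm gestures and
    whose orientation comes from the tilt sensor (shown here as a single-axis rotation)."""
    c, s = np.cos(tilt_rad), np.sin(tilt_rad)
    rotation = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return {"size": (width_m, depth_m, height_m), "rotation": rotation}

def relative_scale(current_dist, previous_dist):
    """Relative scaling: widen/narrow the palms to grow/shrink the selected object."""
    return current_dist / max(previous_dist, 1e-3)

def fixed_scale(current_dist, reference_dist=0.3):
    """Fixed scaling: the scale is read directly against a constant reference distance."""
    return current_dist / reference_dist
```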
LDB: An Ultra-Fast Feature for Scalable Augmented Reality on Mobile Devices
Xin Yang and Kwang-Ting (Tim) Cheng
The efficiency, robustness and distinctiveness of a feature descriptor are critical to the user experience and scalability of a mobile Augmented Reality (AR) system. However, existing descriptors are either too computationally expensive to achieve real-time performance on a mobile device, or not sufficiently robust and distinctive to identify correct matches from a large database. As a result, current mobile AR systems still have only limited capabilities, which greatly restricts their deployment in practice.
We design a new binary descriptor, called Local Difference Binary (LDB), which is highly efficient and more robust and distinctive than the state-of-the-art binary descriptor BRIEF. In this demo, we demonstrate the performance of LDB using a mobile object recognition task. Our database contains over 200 planar objects stored on a mobile tablet. Compared to BRIEF, LDB has similar computational efficiency, while achieving greater accuracy and 5x faster recognition speed due to its better robustness and distinctiveness.
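The core idea of LDB can be sketched as comparing the average intensity and average x/y gradients between pairs of grid cells of a patch, emitting one bit per comparison. The simplified sketch below omits the published descriptor's multiple grid granularities and salient-bit selection.

```python
# Simplified sketch of the Local Difference Binary (LDB) idea: split the patch into an
# n x n grid, compute each cell's mean intensity and mean x/y gradients, and emit one
# bit per (cell pair, feature) comparison. Multi-granularity grids and bit selection
# from the published descriptor are omitted here.
import numpy as np
from itertools import combinations

def ldb_describe(patch, n=3):
    """patch: square grayscale patch (2D array). Returns a packed binary descriptor."""
    h = patch.shape[0] // n
    w = patch.shape[1] // n
    gy, gx = np.gradient(patch.astype(float))
    feats = []
    for i in range(n):
        for j in range(n):
            cell = (slice(i * h, (i + 1) * h), slice(j * w, (j + 1) * w))
            feats.append((patch[cell].mean(), gx[cell].mean(), gy[cell].mean()))
    bits = []
    for a, b in combinations(range(n * n), 2):      # every pair of grid cells
        for k in range(3):                          # intensity, dx, dy
            bits.append(1 if feats[a][k] > feats[b][k] else 0)
    return np.packbits(np.array(bits, dtype=np.uint8))
```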
A component-based approach towards Mobile distributed and Collaborative PTAM
Tim Verbelen, Pieter Simoens, Filip De Turck, Bart Dhoedt
In this demo, we show a component-based implementation of the PTAM algorithm. These components can run on multiple platforms and can be migrated and reconfigured at runtime. Because the components can be distributed and configured at runtime, we can:
– tailor the application to each mobile device's capabilities, by adapting the configuration
– offload CPU-intensive components to nearby resources, to enhance quality
– share components between multiple devices, enabling collaborative applications
Our demo application runs on the Android platform, and the tracking quality can be adapted by changing either the camera resolution or the number of features to track.
The mapping can be offloaded to a laptop, in order to speed up the bundle adjustment. Finally, multiple devices can share the same map, allowing them to collaboratively refine and expand the map.
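A hypothetical illustration of what such a runtime configuration might look like is given below; the component and node names are invented for illustration and do not reflect the framework's actual API.

```python
# Hypothetical illustration of a runtime component configuration for distributed PTAM:
# tracking stays on each phone, mapping/bundle adjustment is offloaded, and the map
# component is shared between devices. Names are illustrative, not the real API.

CONFIG = {
    "camera":     {"node": "phone",  "resolution": (640, 480)},
    "tracker":    {"node": "phone",  "max_features": 300},
    "mapper":     {"node": "laptop"},            # offloaded bundle adjustment
    "shared_map": {"node": "laptop", "shared_by": ["phone_A", "phone_B"]},
}

def reconfigure(config, component, **changes):
    """Runtime reconfiguration, e.g. lowering resolution on a weak device or moving
    the mapper back to the phone when the laptop is unreachable."""
    config[component].update(changes)
    return config

# Example: adapt tracking quality, then bring mapping back on-device.
reconfigure(CONFIG, "tracker", max_features=150)
reconfigure(CONFIG, "mapper", node="phone")
```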
Sphero MR Engine
Ian Bernstein, Fabrizio Polo
Sphero is a robotic ball that is controlled from your smartphone or tablet. It works over Bluetooth, and you can download many different apps to roll Sphero around or play a wide variety of games. In this demonstration we will be showing off Sphero's new MR Engine, which allows us to track Sphero in real time and overlay 3D objects on top of and around Sphero to create gameplay. Tracking a moving, non-image-based fiducial on many different surfaces while constrained to the computing power of a smartphone was the main challenge of this project.
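The abstract does not describe the tracker itself; purely as an illustrative baseline (and not Sphero's MR Engine), a glowing ball without an image marker could be followed by thresholding its colour in HSV and taking the largest blob, as sketched below with assumed threshold values (OpenCV 4 API).

```python
# Illustrative colour-blob baseline for tracking a glowing ball without an image marker.
# HSV threshold values are assumptions; this is not Sphero's MR Engine.
import cv2
import numpy as np

LOWER = np.array([90, 120, 120])    # assumed HSV range for a blue-glowing ball
UPPER = np.array([130, 255, 255])

def track_ball(frame_bgr):
    """Return the (x, y) pixel centre and radius of the ball, or None if not found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return (int(x), int(y)), int(radius)        # anchor for overlaying 3D content
```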