ISMAR 2014 - Sep 10-12 - Munich, Germany

2014 ISMAR Tutorials

AR Development with the Metaio Product Suite: Demonstration of Use Cases in Industry
Date & Time : Monday, September 08 01:00 pm - 05:30 pm
Location : TBA
Organizers:
Frank Angermann, Metaio GmbH; Maximilian Kruschwitz, Metaio GmbH
Description:

Abstract

This tutorial covers the AR creation process from idea to solution in an industrial context. It employs the Metaio AR pipeline with practical demonstrations to show how AR projects can be realized quickly with readily available tools. To illustrate how AR can be employed to solve industrial problems, the tutorial showcases several interesting industrial use cases implemented by Metaio. To conclude, the tutorial provides a brief outlook on newly available tracking technologies and AR for wearable devices. Further, potential challenges for the industrial integration of AR, for example with ERP systems, are pointed out.

The attendees will acquire a solid understanding of developing an AR scenario from idea to solution. Further, the tutorial provides good insight into the Metaio software environment, enabling and hopefully encouraging attendees to implement their own AR projects in industry.

Schedule

  • 14:00 - 14:30 1. Overview of the Metaio software suite (0.5 hours)
  • 14:30 - 15:00 2. Complete AR pipeline from idea to solution (0.5 hours)
  • 15:00 - 17:30 3. Practical AR Use Cases with implementation (2.5 hours)
    Bavarian National Museum use case demonstration
    Volkswagen Marta use case demonstration
    AR at SAP use case demonstration

  • 17:30 - 18:00 4. Outlook (0.5 hours)
    Future corporate backend integration
    Wearable devices in AR
    New tracking approaches
    Metaio ISMAR papers & posters
    Introduction of upcoming InsideAR conference

Form of Presentation

The tutorial will be presented by several speakers with the support of visual aids in the form of PowerPoint slides. Further, there will be demonstrations of software and use cases.

Intended Audience

The target audience includes anyone working, researching, or planning to work in the field of AR. While the tutorial at times will require a high degree of technical understanding, attendees with less technical skill can still benefit from the practical demonstrations of AR use cases.

Instructor Background

Frank Angermann, Metaio GmbH, frank.angermann@metaio.com

After finishing his studies in media engineering, Frank started working at Metaio in developer relations for the AR browser Junaio. Since then he has moved on to project management and is now heading the project development team at Metaio in Munich.

Maximilian Kruschwitz, Metaio GmbH, maximilian.kruschwitz@metaio.com

Maximilian is responsible for Developer Relations at Metaio. He holds an MBA and an MSc Computing degree. Before joining Metaio he worked in business as well as technical positions across different sectors.

Fusing Web Technologies and Augmented Reality
Date & Time : Monday, September 08 09:00 am - 12:30 pm
Location : TBA
Contributor: Ulrich Bockholt, Fraunhofer IGD, Germany
Description:

Abstract

Within the German research project ARVIDA, a large consortium of industrial Virtual and Augmented Reality users, technology-providing companies, and research institutes cooperates on the establishment of a highly flexible, web-based reference architecture for Augmented Reality applications. The use of web technologies is motivated by modern web standards such as WebGL and WebRTC, which support, for example, real-time rendering of 3D content and video streaming within web browsers. The use of web technologies not only makes it possible to develop applications independently of platform and OS, but also facilitates the integration of Augmented Reality into industrial workflows or PDM environments. The developed reference architecture offers RESTful tracking, rendering, and interaction services that foster the combination and exchange of different algorithms, with the aim of fitting the technology to the specific requirements of an AR application in an optimal way.
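
As a rough illustration of the service-oriented idea, the sketch below polls a RESTful tracking resource for the current camera pose from Python. The endpoint URL and the JSON field names are illustrative assumptions made for this description, not part of the actual ARVIDA interfaces.

  # Hypothetical client for a RESTful tracking resource; the URL and the JSON
  # field names are illustrative assumptions, not part of the ARVIDA specification.
  import json
  import urllib.request

  TRACKER_URL = "http://localhost:8080/tracking/pose"  # assumed endpoint

  def fetch_pose(url=TRACKER_URL):
      """Request the current camera pose from the (hypothetical) tracking service."""
      with urllib.request.urlopen(url, timeout=1.0) as response:
          data = json.loads(response.read().decode("utf-8"))
      # Assumed payload: translation in metres, rotation as a unit quaternion.
      return data["translation"], data["rotation"]

  if __name__ == "__main__":
      translation, rotation = fetch_pose()
      print("camera translation:", translation)
      print("camera rotation (quaternion):", rotation)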

The Tutorial will address the following topics:

  • Use of web standards (e.g. WebGL/WebRTC) as base technologies for Augmented Reality systems.
  • Distribution of Rendering/Tracking/Interaction algorithms in client/server configurations.
  • Streaming technologies used in the development of Augmented Reality frameworks.
  • Transcoding services for the integration of Augmented Reality applications into PDM environments.
  • Use of RDF (Resource Description Framework) and semantic wikis for the formulation of tracking/processing/visualisation services available for different resources (e.g. for captured sensor data).
  • Exemplary AR applications developed with the help of Web-based technologies

Form of Presentation

"The outlined topics will be presented in lectures (with Slides) and mostly with the help of many examples developed in this context. We expect 30 to 50 attendees, the tutorial has not been taught before."

Intended Audience

Students, researchers, and industry professionals developing Augmented Reality who are interested in the possibilities and limitations of web technologies in the context of AR.

Instructor Background

Ulrich Bockholt (Dr.-Ing.) received his diploma degree in Mathematics from the University of Mainz (Germany) in 1997 and his doctoral degree from the Darmstadt University of Technology in 2003. From 1997 to 2001 he was a researcher at the Interactive Graphics Systems Group (GRIS) of the Darmstadt University of Technology. Since 2001 he has been working at the Department of Virtual & Augmented Reality at the Fraunhofer Institute for Computer Graphics (Fraunhofer IGD) in Darmstadt (Germany), and since 2002 he has been leading the Augmented Reality Group in that department. Since 2008 he has been heading the department "Virtual and Augmented Reality" with 15 to 21 full-time researchers.

A 'Look Into' Medical Augmented Reality
Date & Time : Tuesday, September 09 09:00 am - 05:30 pm
Location : TBA
Organizers:
Yuji Oyamada, Waseda University, Japan; Pascal Fallavollita, Technische Universität München, Germany
Description:

Abstract

The concept of augmented reality (AR) has been introduced to a variety of fields in the last decade. The recent development of portable devices such as smartphones and tablet PCs offers the community many possible applications for AR systems. Even in the medical field, various AR systems have recently been proposed: systems for education, for pre-planning, and for use in the operating room. The aim of this tutorial is to bridge the expertise of researchers in the ISMAR community and medical doctors so that researchers can contribute their specialties to the medical domain more effectively than is possible today.

Schedule

  • 9:00 - 9:45 Keynote Talk: Prof. Nassir Navab, Advances in Medical Augmented Reality
  • 10:00-10:45 Technical talk 1: System Components of Medical AR technology (Oyamada)
  • 10:45-11:00 coffee break
  • 11:00-12:00 Technical talk 2: Visualization in Medical AR (Schulte zu Berge)
  • 12:00-13:00 Lunch break
  • 13:00-14:00 Keynote talk: Dr. med. Simon Weidert, First experiences of Medical AR in the OR
  • 14:00-14:45 Technical talk 3: User Interfaces in Medical AR (Fallavollita)
  • 14:45-15:00 coffee break
  • 15:00-16:00 Travel to Hospital lab
  • 16:00-17:30 Demo tour at NARVIS Lab

Keynote Speakers

  • Prof. Dr. Nassir Navab Technische Universität München and Johns Hopkins University
  • Dr. med. Simon Weidert Chirurgischen Klinik und Poliklinik-Innenstadt, LMU München, Munich

Intended Audience

This tutorial aims to build a bridge between researchers in the augmented reality field and medical doctors. We target an audience interested in medical augmented reality systems.

Instructor Background

Dr. Yuji Oyamada is currently a junior researcher in the School of Fundamental Science and Engineering, Waseda University, Japan. Dr. Oyamada received all his degrees from Keio University: B.E., M.E., and Ph.D. in Engineering in 2006, 2008, and 2011, respectively. Before joining Waseda University, he was a full-time intern at Microsoft Research Asia from January to June 2010, a postdoctoral researcher at Keio University from October 2011 to March 2013, and a visiting researcher at the Chair for Computer Aided Medical Procedures (CAMP), Technische Universität München (TUM), from March 2012 to March 2013. His research interests span computer vision and its applications, including but not limited to image restoration, image enhancement, camera/object tracking, and augmented reality.

Since November 2010, Dr. Fallavollita has been a senior research scientist at the world-class Chair for Computer Aided Medical Procedures (CAMP) at Technische Universität München. He is managing and leading the research activities within the Navigated Augmented Reality Visualization System laboratory. His responsibilities include advancing research in medical augmented reality for anatomy/rehabilitation learning, medical simulation training, interventional imaging, intraoperative navigation, and surgical devices/systems for therapy delivery. His resume demonstrates that he can engage in partnership with clinicians from different clinical backgrounds, and he has built up solid relationships and research projects in outstanding hospitals in the areas of neurology, cardiology, radiation therapy, and orthopedic and trauma surgery.

Designing Location-Based Experiences
Date & Time : Tuesday, September 09 02:00 pm - 05:30 pm
Location : TBA
Contributor: Mark Melnykowycz, idezo, Zurich
Description:

Abstract

The development of location-based applications will be presented from the perspective of story structure and product design. We present the challenges of developing location-based mobile products from a storytelling perspective, along with tools for integrating user experience into the development process to drive the story structure of new products. Included is a case study focused on the Ghosts of Venice mixed-reality film project, which is centered on an augmented reality mobile application.

Learning objectives of this tutorial are:

 

  • Understand how communication patterns have evolved with new technologies to their present state and how this influences the way stories are told.
  • Understand the design intent behind different AR/MR location-based games from the story and user experience design perspectives.
  • Gain an understanding of how to approach AR/MR projects, which may include storylines distributed across different media.
  • Understand the complexity of creating AR/MR location-based applications and how to address it in app or story development.
  • Gain insight into how to work between writers and the app development (design and coding) team to efficiently translate story concepts into mobile apps.

 

A workshop module is included at the end of the tutorial session, in which participants will design a location-based game experience. This will show, in a project-based learning environment, what the participants have learned from the tutorial.

Schedule

14:00 - 15:00 1. Story structure and communication patterns
Evolution of communications technology and consumption patterns
Linear and nonlinear story structures
Story progression in different media (books, movies, games, etc)

15:00 - 16:00 2. Design of AR/MR mobile apps
Fundamental differences between AR/MR technologies
User experience design tools
Discussion of current location-based AR/MR apps

16:00 - 16:15 Break

16:15 - 17:15 3. Ghosts of Venice Case Study
Value of ghost stories for location app design
Process of translating written stories to location stories
Process of integrating the story with user interaction needs

17:15 - 18:00 4. Workshop
Pick a story and mobile app goal
Break down the design process for this specific case study

Form of Presentation

The main form of the tutorial will be a lecture style with projected slides. Additionally, there will be a Junaio channel set up, which will allow participants to easily access additional information while the tutorial is taking place. The ideal size will be 20-30 participants, which will allow interaction between the speaker and the participants during the presentation. This tutorial has not been presented before, but the idea for it has grown from the experiences Mark has had in organising the Transmedia Zurich meetup group. He has spoken there on topics such as story structure and world building, while other speakers have focused on mobile game design, communication patterns, and location-based games from Gbanga and Rundercover (a new Swiss startup). As an extension of the Transmedia Zurich meetups, Mark has been designing the transmedia toolkit (www.transmediatoolkit.org), which will be an open-source materials package focused on connecting storytelling and technology in the best way to reach and engage with an audience.

Intended Audience

The main target audience includes people interested in how emerging technologies can be used most effectively in developing AR/MR experiences. The tutorial will look at the intersection between story structure, user experience development, and display (or consumption) technologies, and would therefore be useful for developers, designers, product managers, and researchers who are interested in how AR/MR technologies can be integrated into product development and in predicting how they can be used in the future.

Instructor Background

Mark Melnykowycz (mark@idezo.ch, +41 78 693 0831) works in the development of flexible sensors for wearable computing applications and is a co-founder of Lost In Reality (www.lostinreality.net), a location-based storytelling app (currently in development). He joined the Ghosts of Venice project to lead the app development and user experience design. Additionally, through his company idezo in Zurich, he is involved with prototyping interactive apps and exhibits for museum exhibitions. He is a co-organizer of the Transmedia Zurich (www.transmediazh.ch) group, which is focused on discussing topics of technology and storytelling.

Diminished Reality as Challenging Extension of Mixed and Augmented Reality
Date & Time : Tuesday, September 09 09:00 am - 12:30 pm
Location : TBA
Organizers:
Hideyuki Tamura, Ritsumeikan University, Japan; Hideo Saito, Keio University, Japan; Fumihisa Shibata, Ritsumeikan University, Japan; Maki Sugimoto, Keio University, Japan
Description:

Abstract

Diminished Reality (DR) has been considered a sub-technology of Mixed and Augmented Reality (MAR). While MAR refers to technologies that add and/or overlay visual information onto images of the real scene so that users' visual experiences are enhanced by the added/overlaid information, DR aims at similarly enhanced visual experiences by deleting visual information from images of the real scene. Adding and deleting visual information might be considered the same technical issue, but they are actually quite different. In DR, the visual information hidden by the deleted object must be recovered to fill in the deleted area. This recovery of the hidden area is not required for general adding/overlaying-based MAR, but it is one of the central issues in achieving DR. Camera pose estimation and tracking is a typical issue in MAR, but the scene conditions and required performance for DR are not always the same as in MAR. For example, in DR the object to be diminished/removed must be detected and tracked while the camera is moving freely.

In this tutorial, the topics of interest are challenging technical issues for DR, such as recovery of the hidden area, detection and tracking of the object to be removed/diminished, tracking of camera poses, illumination matching and re-lighting, etc. In addition to these technical issues, examples of DR applications, expected futures with DR, and human factors of DR are also included in the topics of interest of this tutorial.
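
To make the "fill in the deleted area" step concrete, the following minimal Python/OpenCV sketch removes a user-defined region from a single image with generic inpainting. The file names and the rectangular mask are assumptions for illustration; real DR systems use considerably more sophisticated, temporally consistent recovery, which is exactly what the tutorial discusses.

  # Minimal single-image illustration of the "recover the hidden area" step in DR.
  # File names and the mask region are assumptions; real DR pipelines also track the
  # object and keep the recovered area consistent over time.
  import cv2
  import numpy as np

  frame = cv2.imread("scene.jpg")                    # image of the real scene (assumed to exist)
  assert frame is not None, "scene.jpg not found"
  mask = np.zeros(frame.shape[:2], dtype=np.uint8)   # region of the object to diminish
  mask[100:220, 150:300] = 255                       # hypothetical bounding region

  # Fill the removed region from the surrounding image content.
  diminished = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
  cv2.imwrite("scene_diminished.jpg", diminished)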

Schedule

9:00 - 9:15 Opening (15 min)
9:15 - 10:00 Keynote Talk: Survey of Diminished Reality
10:00 - 12:00 Contributed talks
12:00 - 13:00 Panel discussion and/or interactive session

Instructor Background

Prof. Hideyuki Tamura received his B.Eng. and doctoral degrees, both in electrical engineering, from Kyoto University, Japan. His professional career, starting in 1972, includes positions as a Senior Research Official at the Electrotechnical Laboratory, MITI, Japan, the Director of the Media Technology Laboratory, Canon Inc., and a member of the executive board of Mixed Reality Systems Laboratory Inc. In 2003, he joined Ritsumeikan University, where he is now an Eminent Professor in the Research Organization of Science and Technology.

His research interests and major achievements are in the areas of pictorial pattern recognition, digital image processing, artificial intelligence, virtual reality, and multimedia systems. Most prominently, he planned and conducted the Key-Technology Research Project on Mixed Reality in Japan from 1997 to 2001. He organized the Special Interest Group on Mixed Reality of the Virtual Reality Society of Japan and founded the basic body of the International Symposium on Mixed and Augmented Reality (ISMAR). He is now an emeritus member of the ISMAR Steering Committee.

Prof. Tamura served on the executive boards of several academic societies in Japan and received several awards from such societies as IEICE and IPSJ. He is (co)author and (co)editor of ten books, all in the field of computer vision, graphics, and multimedia, including "Mixed Reality -- Merging Real and Virtual Worlds" (Ohmsha & Springer, 1999).


Prof. Hideo Saito received his Ph.D. degree in Electrical Engineering from Keio University, Japan, in 1992. Since then, he has been on the Faculty of Science and Technology, Keio University. From 1997 to 1999, he joined the Virtualized Reality project at the Robotics Institute, Carnegie Mellon University, as a visiting researcher. Since 2006, he has been a full Professor in the Department of Information and Computer Science, Keio University. He served as program co-chair of ISMAR (International Symposium on Mixed and Augmented Reality) 2008 and 2009, and is now a steering committee member of ISMAR. He also served as an Area Chair of ACCV (Asian Conference on Computer Vision) 2009, 2010, and 2012. His research interests include computer vision, mixed reality, virtual reality, and 3D video analysis and synthesis.

Prof. Fumihisa Shibata received the M.E. degree in computer science and the Ph.D. degree in engineering from Osaka University, Suita, Osaka, Japan, in 1996 and 1999, respectively. He then became a research associate at the Institute of Scientific and Industrial Research, Osaka University. In 2003, he joined Ritsumeikan University, Kusatsu, Shiga, Japan, as an associate professor at the College of Science and Engineering, and in 2004 he became an associate professor at the College of Information Science and Engineering, Ritsumeikan University. Since 2013, he has been a full professor at Ritsumeikan University. His research interests include mobile computing, augmented/mixed reality, and human–computer interaction.

Prof. Maki Sugimoto received his Doctor of Philosophy in Engineering from The University of Electro-Communications. He was a visiting researcher at NTT Communication Science Laboratories, a research fellow of the Japan Society for the Promotion of Science, and a visiting scholar at the MIT Computer Science and Artificial Intelligence Laboratory. He became a senior assistant professor at the Keio University Graduate School of Media Design in 2008. In 2011, he assumed his current position as an assistant professor in the Department of Information and Computer Science, Keio University Faculty of Science and Technology. He has worked on display-based computing systems, robotic user interfaces, and tele-operation interfaces for search-and-rescue robots at The University of Electro-Communications and Keio University. His research interests include human-robot interfaces and Augmented Reality environments with actuated physical interfaces.

Google Glass, The META and Co. How to calibrate Optical See-Through Head Mounted Displays
Date & Time : Tuesday, September 09 02:00 pm - 05:30 pm
Location : TBA
Organizers:
Jens Grubert, Graz University of Technology, Austria; Yuta Itoh, TU Munich, Germany
Description:

Abstract

Head-mounted displays such as Google Glass and the META have the potential to spur consumer-oriented Optical See-Through Augmented Reality applications. A correct spatial registration of those displays relative to a user’s eye(s) is an essential problem for any HMD-based AR application. We provide an overview of established and novel approaches for the calibration of those displays, including hands-on experience in which participants will calibrate such head-mounted displays.

The following is a tentative list of topics covered during the tutorial.

 

Part 1: Introduction to OST calibration

  • Why is OST calibration important?
  • Differences to Camera Calibration
    • Introduce camera calibration
    • Why is OST calibration hard?
      • The user in the loop - pointing accuracy
      • Slipping, the need for recalibration
  • Principal aspects of OST-HMD calibration
    • overview of data collection
    • confirmation methods
    • optimization
    • mono vs stereo
  • Details of OST calibration
  • Data collection methods: 
    SPAAM, Multi Point collection, stereo methods
  • Confirmation methods
  • Optimization approaches
  • Evaluation: perceptual measures vs. analytic measures
  • State of the art: semi- and fully automatic calibration methods

Part 2: Hands-on calibration

  • SPAAM-based calibration of Epson Moverio / Google Glass with an inside-out marker tracker
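
For orientation, the following numpy sketch shows only the linear (DLT) estimation step that underlies SPAAM-style calibration, assuming the 3D points (in the head/tracker frame) and the 2D screen positions where the user aligned them have already been collected. The data collection, confirmation, and refinement topics above are omitted, and the synthetic data at the bottom is purely illustrative.

  # Linear (DLT) core of SPAAM-style calibration: estimate a 3x4 projection matrix
  # from n >= 6 correspondences between 3D points and aligned 2D screen positions.
  import numpy as np

  def spaam_dlt(points_3d, points_2d):
      """Estimate a 3x4 projection matrix from 2D-3D correspondences via DLT."""
      rows = []
      for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
          rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
          rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
      A = np.asarray(rows, dtype=float)
      _, _, vt = np.linalg.svd(A)
      G = vt[-1].reshape(3, 4)   # right singular vector of the smallest singular value
      return G / G[2, 3]         # remove the arbitrary scale for readability

  if __name__ == "__main__":
      # Synthetic check: project known 3D points with a made-up matrix, then recover it.
      G_true = np.array([[800.0, 0, 320, 10], [0, 800, 240, 5], [0, 0, 1, 1]])
      pts3d = np.random.uniform(-1, 1, (20, 3)) + np.array([0, 0, 3])
      homog = np.c_[pts3d, np.ones(len(pts3d))] @ G_true.T
      pts2d = homog[:, :2] / homog[:, 2:3]
      print(np.round(spaam_dlt(pts3d, pts2d), 3))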

 

Schedule

14:00 Welcome and introduction

Theory
14:15 Introduction to OST Calibration
15:00 coffee break
15:15 Details of OST Calibration
16:15 coffee break

Practice
16:30 Hands on session: calibration of OST HMDs
17:30 Discussion: experiences, feedback
17:50 wrap-up, mailing list
18:00 end of tutorial

Intended Audience

Who should attend: Researchers and engineers in the AR/VR field, who wish to (1) develop AR applications with their OST-HMDs, and/or (2) get an overview of and hands-on experience with calibration methods for OST-HMDs.

Level of expertise: All levels. Basic knowledge of computer vision and linear algebra will be introduced in the introductory part.

Instructor Background

Jens Grubert is a university assistant at Graz University of Technology. He received his Bakkalaureus (2008) and Dipl.-Ing. with distinction (2009) at Otto-von-Guericke University Magdeburg, Germany. As a research manager at Fraunhofer Institute for Factory Operation and Automation IFF, Germany, he implemented various calibration methods for industrial optical see-through head mounted displays and conducted long-term evaluations of those systems until August 2010. He has been involved in several industrial projects such as AVILUS and AVILUSPlus as well as EU FP7 projects such as EXPERIMEDIA and MAGELLAN. He is author of more than 20 peer reviewed publications and published a book about AR development for Android. His current research interests include mobile interfaces for situated media and user evaluations for consumer oriented Augmented Reality interfaces in public spaces.

Yuta Itoh is a research assistant at TU Munich. He holds B.Eng. (2008) and M.Eng. (2011) degrees in computer science from Tokyo Tech, where he studied machine learning. He spent two years as a researcher at the Multimedia Lab. in Toshiba Corp. (2011-2013). His current research aims at developing an OST-HMD system for maintenance assistance. As an outcome of this research, he recently published a paper on an OST-HMD calibration technique. His research is part of the EU FP7 project EDUSAFE. He is a student member of IEEE.

Further Information

http://stctutorial.icg.tugraz.at/

Open and Interoperable Augmented Reality
Date & Time : Tuesday, September 09 09:00 am - 05:30 pm
Location : TBA
Organizers:
Christine Perey, PEREY Research & Consulting and AR Community founder; Rob Manson, BuildAR and MobAR; Marius Preda, Institut MINES-Telecom; Neil Trevett, NVIDIA and Khronos Group; Martin Lechner, Wikitude GmbH; George Percivall, OGC; Timo Engelke, Fraunhofer IGD; Peter Lefkin, MIPI Alliance; Bruce Mahone, SAE International; Mary Lynne Nielsen, IEEE Standards Association
Description:

Abstract

Today an experience developer must choose tools for authoring AR experiences based on many factors, including ease of use, performance across a variety of platforms, reach, discoverability, and cost. The commercially viable options are organized in closed technology silos (beginning with SDKs). A publisher of experiences must choose one or develop for multiple viewing applications, then promote one or more applications to the largest possible audience. Developers of applications must then maintain the customized viewing application over time across multiple platforms, or have the experience (and the application) expire at the end of a campaign.

A user equipped with an AR-ready device, including sensors and appropriate output/display support, must download one or more proprietary applications to detect a target and experience content published using an AR experience authoring platform.

There are alternatives that will foster innovation by providing common interfaces and modular architectures. When available, open and interoperable Augmented Reality systems will provide publishers more flexibility in how they go about reaching the largest audiences. End users will be able to choose the AR-enabled player they prefer without sacrificing a great breadth of potential experiences.

This tutorial provides comprehensive and in-depth information that content publishers and developers need to know when planning to develop and deploy content for AR and designing AR experiences in a manner that supports maximum reach and data portability. The tutorial presenters are experts in the areas of individual standards and open source projects. They will describe open protocols and APIs that can be integrated with existing proprietary technology silos or used as alternatives to reduce delays and cost over the lifetime of AR-assisted system development and ownership.

Schedule

9:00 - 9:30 1. Overview: 0.5 hour
9:30 - 10:15 2. Khronos Group: 0.75 hours
10:15 - 10:45 3. MIPI Alliance: 0.5 hours
10:45 - 11:15 Break
11:15 - 12:00 4. IEEE Standards Association: 0.75 hours
12:00 - 12:45 5. Open Geospatial Consortium 0.75 hours
12:45 - 14:00 Lunch
14:00 - 14:30 6. MPEG 0.50 hours
14:30 - 15:00 7. Web3D Consortium 0.50 hours
15:00 - 15:30 8. SAE International 0.50 hours
15:30 - 16:00 Break
16:00 - 16:30 9. Open Source Web-based AR 0.50 hours
16:30 - 17:15 10. Panel Discussion 0.75 hours

Form of Presentation

The tutorial speakers will present information primarily through PowerPoint slides, using video where appropriate. At least four of the presenters will also have live demonstrations of interoperability using a standard or open source project. There will be two panel discussions: the first will be about low-level standards such as those implemented in hardware (communications, processing, rendering, etc.), and the second will focus on developer tools and APIs.

Intended Audience

This tutorial is appropriate for all segments of the AR ecosystem, from research to technology integrators and AR system users/buyers. No special engineering background or vocabulary is required.

Instructor Background

Christine Perey has worked for more than 20 years in the domain of rich media communications, initially in the area of dynamic media technologies on personal computers and since 2004 in the domain of multimedia technologies for consumer applications on mobile platforms. Since 2006 Perey has studied and assisted research centers and companies to better understand and maximize their opportunities in the domain of Augmented Reality.

Perey is an evangelist for the expansion/adoption of open, standards-based multimedia-rich applications on mobile platforms. She has founded and organizes regular meetings of the Augmented Reality Community, Augmented Reality Meetup groups, and other initiatives dedicated to expansion of the AR market by way of open and interoperable interfaces and technologies.

Rob Manson is CEO & co-founder of buildAR.com, the world’s first web-based Augmented Reality Content Management System. Rob is the Chair of the W3C Augmented Web CG and an Invited Expert with the ISO, W3C and the Khronos Group. He is an active evangelist within the global AR and standards communities and is regularly invited to speak on the topics of the Augmented Web, Augmented Reality, WebRTC and multi-device platforms. Also, Rob’s latest book “Getting Started with WebRTC” is now a 5-star hit on Amazon.

Marius Preda is Associate Professor at Institut MINES-Telecom and Chairman of the 3D Graphics group of ISO’s MPEG (Moving Picture Experts Group). He contributes to various ISO standards with technologies in the fields of 3D graphics, virtual worlds and augmented reality, and has received several ISO Certifications of Appreciation. He leads a research team with a focus on Augmented Reality, Cloud Computing, Games and Interactive Media, and regularly presents results in journals and at speaking engagements worldwide. He serves on the program committees of international conferences and reviews for top-level research journals. Marius consults for national and European research funding agencies and is an expert for the French Ministry of Higher Education and Research, evaluating corporate research programs. Academically, Marius received a Degree in Engineering from Politehnica Bucharest, a PhD in Mathematics and Informatics from University Paris V, and an eMBA from Telecom Business School, Paris.

Neil Trevett is Vice President of Mobile Ecosystems at NVIDIA, where he is responsible for enabling and encouraging advanced applications on smartphones and tablets. Neil is also serving as the elected President of the Khronos Group where he created and chaired the OpenGL ES working group that has defined the industry standard for 3D graphics on mobile devices. At Khronos he also chairs the OpenCL working group for portable, parallel heterogeneous computing, helped initiate the WebGL standard that is bringing interactive 3D graphics to the Web and is now working to help formulate standards for camera, vision and sensor processing.

Martin Lechner co-developed the first version of the Wikitude World Browser on the Android platform, and started building up the team behind the Wikitude technology soon after. He is now the CTO of Wikitude, part of the company management team, and manages a team of 20+ developers. In addition to his day-to-day job at Wikitude, he chairs the Augmented Reality Markup Language (ARML) 2.0 Standards Working Group within the Open Geospatial Consortium (OGC). ARML 2.0 has recently been published as an official OGC Candidate Standard for describing and interacting with AR scenes. Martin initiated and managed the group throughout the entire process, involving more than 50 institutions and individuals, including several universities and multi-national companies. Martin studied both Computer Science and Mathematics at the University of Salzburg, Austria and the Victoria University of Technology in Melbourne, Australia. He holds a PhD in Applied Computer Sciences and a Master in Mathematics. Before joining Wikitude, he was working as a software engineer for Sony DADC.

George Percivall is Chief Engineer of the Open Geospatial Consortium (OGC). He is responsible for the OGC Interoperability Program and the OGC Compliance Program. His roles include articulating OGC standards as a coherent architecture, as well as addressing implications of technology and market trends on the OGC baseline. Prior to joining OGC, Mr. Percivall was Chief Engineer with Hughes Aircraft for NASA's Earth Observing System Data and Information System (EOSDIS) - Landsat/Terra release; Principal engineer for NASA's Digital Earth Office; and represented NASA in OGC, ISO and CEOS. He was Director of the GST's Geospatial Interoperability Group. Previously, he led developments in Intelligent Transportation Systems with the US Automated Highway Consortium and General Motors Systems Engineering including the EV1 program. He began his career with Hughes as a Control System Engineer on GOES/GMS satellites. He holds a BS in Engineering Physics and an MS in Electrical Engineering from the University of Illinois -  Urbana.

Timo Engelke is lead developer of IGD's Augmented Reality framework for mobile devices, which integrates both sophisticated computer vision tracking - which has evolved over more than 10 years of research - and a lightweight programming framework for AR applications based on HTML. He is also chair of the X3DOM AR Standards Working Group. As a freelancer he has worked since 1993 in software and hardware development in the areas of industrial and medical appliances. Since 2003 he has been part of the Fraunhofer family; in the first years he dedicated his research to pervasive gaming, tangible interfaces and mobile device development. In 2008 he moved to Fraunhofer IGD, where he works full time on large-area display color calibration and computer vision research for EU-funded projects such as SKILLS and CHESS. Timo studied General Mechanical Engineering at Technical University Darmstadt.

Peter Lefkin serves as the managing director and secretary of the MIPI Alliance, a position appointed by MIPI’s Board of Directors. He is responsible for Alliance activities and operations from strategy development to implementation. Peter has previously been director of the IEEE conformity assessment program and a marketing and business development executive, as well as COO and CFO of IEEE-ISTO. He has also held positions at Motorola, the American National Standards Institute and the American Arbitration Association. Peter earned his bachelor’s degree from Boston University.

Bruce Mahone has been a leader in aerospace policy and technical issues in Washington, DC since 1988. He has been involved in the development and publication of more than 10,000 aerospace standards during that time. In 2006, Mahone became the Director of Washington Operations, Aerospace for SAE International. In that capacity, he oversees SAE's aerospace-related interaction with the U.S. government and the many standards, educational, and research organizations in the Washington area that affect the global aerospace sector.

Mary Lynne Nielsen is the IEEE-SA Technology Initiatives Director. Mary Lynne oversees the portfolio of new/emerging technology programs, ensuring effective and efficient communications and coordination among all stakeholders, and driving programs and efforts to meet the strategic objectives of the IEEE-SA. Current domains include the Internet of Things, Cloud Computing and Augmented Reality.

The Glass Class: Designing Wearable Interfaces
Date & Time : Tuesday, September 09 09:00 am - 12:30 pm
Location : TBA
Contributor: Mark Billinghurst, The HIT Lab NZ, University of Canterbury, New Zealand
Description:

Abstract

The course will teach how to create compelling user experiences for wearable computers focusing on design guidelines, prototyping tools, research directions, and a hands-on design experience.  These topics will be presented using a number of platforms such as Google Glass, the Recon Jet and Vuzix M-100, although the material will be relevant to other wearable devices.

The class will begin with an overview of almost 50 years of wearable computing, beginning with the casino computers of Ed Thorp, through the pioneering efforts of researchers at CMU and MIT, to the most recent commercial systems. The key technology components of a wearable system will be covered, as well as some of the theoretical underpinnings.

Next, a set of design guidelines for developing wearable user interfaces will be presented. These include lessons learned from using wearables on a daily basis, design patterns from existing wearable interfaces, and relevant results from the research community. These will be presented in enough detail that attendees will be able to use them in their own wearable designs.

The third section of the course will introduce a number of tools that can be used for rapid prototyping of wearable interfaces. These range from screen-building tools such as Glasssim, through templating tools that support limited interactivity, to simple programming tools such as Processing.

This will lead into a section that discusses the technology of wearable systems in more detail: for example, the different types of head-mounted displays for wearables, tracking technology for wearable AR interfaces, input devices, etc.

Finally, we will discuss active areas of research that will affect wearable interfaces over the next few years. This includes technologies such as new display hardware, input devices, body worn sensors, and connectivity.

The course will have the following educational goals:

 

  • Provide an introduction to head mounted wearable computers
  • Give an understanding of current wearable computing technology
  • Describe key design principles/interface metaphors
  • Provide an overview of the relevant human perceptual principles
  • Explain how to use Processing for rapid prototyping
  • Show how to capture and use sensor input
  • Outline active areas of research in wearable computing
  • Provide hands-on demonstrations with Google Glass and other wearable computers

 

Schedule

9:00 - 9:10 Introduction
9:10 - 9:30 Technology Overview
9:30 - 10:00 Design Guidelines
10:00 - 10:15 Demos
10:15 - 10:45 Development/Prototyping Tools
10:45 - 11:00 Demos
11:00 - 11:30 Wearable Technology
11:30 - 11:40 Example Application
11:40 - 12:00 Research Directions and Further Resources

Form of Presentation

The tutorial will be presented through slide presentation material, videos, live demonstrations/coding examples, and hands-on demonstrations with a number of wearable systems and displays. Most of the material presented in the tutorial will also be provided online so that all of the attendees will have access to it later.

Intended Audience

This course is designed for people who would like to learn how to design and develop applications for head-mounted wearable computers. The course assumes that attendees are able to develop their own Android software or have access to developers who can implement their designs. It also assumes familiarity with the basics of the user-centered design process and interaction design. The principles and tools learned should be relevant to a wide range of wearable computers such as Google Glass, the Vuzix M-100, etc.

Instructor Background

Professor Mark Billinghurst is the director of the HIT Lab NZ at the University of Canterbury, one of the leading centers for Augmented Reality research and development. He has nearly 20 years of research experience in wearable and mobile devices, producing over 250 publications and many innovative applications. He has a PhD from the University of Washington and conducts research in Augmented and Virtual Reality, multimodal interaction and mobile interfaces. He has previously worked at ATR Research Labs, British Telecom, Nokia and the MIT Media Laboratory. He was awarded the 2013 IEEE VR Technical Achievement Award for contributions to research and commercialization in Augmented Reality. In 2001 he co-founded ARToolworks, one of the oldest commercial AR companies. In 2012 he was on sabbatical with the Google Glass team.

Training Detectors and Recognizers in Python and OpenCV
Date & Time : Tuesday, September 09 02:00 pm - 05:30 pm
Location : TBA
Contributor: Joseph Howse, Nummist Media, Canada
Description:

Abstract

Monty Python's Flying Circus had a "cat detector van", so in this tutorial we use Python and OpenCV to make our very own cat detector and recognizer. We also cover examples of human face detection and recognition. More generally, we cover a methodology that applies to training a detector (based on Haar cascades) for any class of object and a recognizer (based on LBPH, Fisherfaces, or Eigenfaces) for any unique objects. We build a small GUI app that enables an LBPH-based recognizer to learn new objects interactively in real time. Although this tutorial uses Python, the project could be ported to Android and iOS using OpenCV's Java and C++ bindings.

Attendees will gain experience in using OpenCV to detect and recognize visual subjects, especially human and animal faces. GUI development will also be emphasized. Attendees will be guided toward additional information in books and online. There is no formal evaluation of attendees' work, but attendees are invited to demonstrate their work and discuss the results they have achieved during the session using different detectors, recognizers, and parameters.
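
As a rough preview of what the session builds, the sketch below compresses the pipeline into one loop: grab frames from a camera, detect faces with a prebuilt Haar cascade, and incrementally train and query an LBPH recognizer. It assumes opencv-python plus opencv-contrib-python (for the cv2.face module); on OpenCV 2.4 the factory is cv2.createLBPHFaceRecognizer() instead, and the cascade path may differ. The key handling and the single label are simplified placeholders, not the tutorial's actual GUI.

  # Detect faces with a prebuilt Haar cascade and recognize them with LBPH.
  # Assumes opencv-contrib-python for cv2.face; the cascade path may need adjusting.
  import cv2
  import numpy as np

  cascade = cv2.CascadeClassifier(
      cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
  recognizer = cv2.face.LBPHFaceRecognizer_create()
  samples, labels, trained = [], [], False

  capture = cv2.VideoCapture(0)
  while True:
      ok, frame = capture.read()
      if not ok:
          break
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
      roi = None
      for (x, y, w, h) in faces:
          roi = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
          cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
          if trained:
              label, dist = recognizer.predict(roi)   # lower distance = better match
              cv2.putText(frame, "id %d (%.0f)" % (label, dist), (x, y - 5),
                          cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
      cv2.imshow("detect and recognize", frame)
      key = cv2.waitKey(1) & 0xFF
      if key == ord("t") and roi is not None:         # 't' labels the current face as id 0
          samples.append(roi)
          labels.append(0)
          recognizer.train(samples, np.array(labels))
          trained = True
      elif key == ord("q"):                           # 'q' quits
          break

  capture.release()
  cv2.destroyAllWindows()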

Schedule

  • 14:00 - 14:25 1. Setting up OpenCV and related libraries (25 minutes)
    Windows XP/Vista/7/8
    Mac 10.6+ using MacPorts
    Debian Linux and its derivatives, including Ubuntu
  • 14:25 - 14:50 2. Building a GUI app that processes and displays a live camera feed (25 minutes)
  • 14:50 - 15:10 3. Detecting human faces (and other subjects) using prebuilt Haar cascades (20 minutes)
    Concept of Haar cascades
    Implementation of a Haar-based detector in OpenCV
    Our GUI for detection
  • 15:10 - 15:25 4. Break (15 minutes)
  • 15:25 - 16:15 5. Training a custom Haar cascade to detect cat faces (50 minutes)
    Obtaining annotated training images
    Parsing annotation data and preprocessing the training images
    Using OpenCV's training tools
  • 16:15 - 16:40 6. Recognizing faces of individual humans and individual cats (25 minutes)
    Local binary pattern histograms (LBPH) – concept and OpenCV implementation
    Our GUI for incrementally training and testing an LBPH recognizer
    Fisherfaces – concept and OpenCV implementation
    Eigenfaces – concept and OpenCV implementation
  • 16:40 - 17:00 7. Demos and discussion of attendees' work (optional) or discussion of the project's portability to Android and iOS (20 minutes)

Form of Presentation

The tutorial will include a presentation of PDF slides, videos featuring detection/recognition of real cats, and a live demo featuring detection/recognition of humans and artificial cats. The project and documentation will be available for download during the tutorial. Because of the long processing time required to train a detector, attendees will not be able to fully execute this step during the tutorial. However, a variety of pre-trained detectors will be provided and attendees will be able to parameterize and combine detectors in original ways and train their own recognizers. Training of human recognizers is a good opportunity for attendees to mingle. The tutorial allows time for attendees to demonstrate and discuss their work if they wish.
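
For reference, the custom-cascade training step (item 5 in the schedule) relies on OpenCV's command-line training tools, and the long processing time mentioned above comes from the cascade training itself. The sketch below drives the standard tools from Python; the file names, sample counts, and stage count are illustrative assumptions only.

  # Drive OpenCV's cascade training tools from Python. Tool names are the standard
  # OpenCV utilities; file names and counts are illustrative and training may take hours.
  import subprocess

  # Pack the annotated positive images (listed in positives.txt) into a .vec file.
  subprocess.run(["opencv_createsamples",
                  "-info", "positives.txt", "-vec", "cats.vec",
                  "-num", "1000", "-w", "24", "-h", "24"], check=True)

  # Train the cascade; negatives.txt lists background image paths, one per line,
  # and the output directory given to -data must already exist.
  subprocess.run(["opencv_traincascade",
                  "-data", "cat_cascade",
                  "-vec", "cats.vec", "-bg", "negatives.txt",
                  "-numPos", "900", "-numNeg", "2000",
                  "-numStages", "15", "-w", "24", "-h", "24"], check=True)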

Intended Audience

This tutorial is intended for any developer who wants an introduction to detection and recognition in OpenCV. No expertise in computer vision or algorithms is assumed. Familiarity with Python and shell scripting would be helpful.

Instructor Background

Joseph (Joe) Howse has worked in the AR industry since 2011. He is President of Nummist Media Corporation Limited (http://nummist.com), providing software development and training services to clients worldwide. His publications include OpenCV for Secret Agents (Packt Publishing, forthcoming), OpenCV Application Programming for Android (Packt Publishing, 2013), OpenCV Computer Vision with Python (Packt Publishing, 2013), and “Illusion SDK: An Augmented Reality Engine for Flash 11” (ISMAR Workshop on Authoring Solutions for Augmented Reality, 2012). Joe holds a Master of Computer Science, MA in International Development Studies, and MBA from Dalhousie University. He has no difficulty detecting or recognizing any of his four splendid cats.

Further Information

http://nummist.com/opencv