October 17-21, 2022 | Singapore

Keynote Speakers

Grand Ballroom 1 & 2 – L4 (Hybrid with Zoom)
Day 2: 18 October 2022 (Tuesday) | 08:30am to 09:30am (GMT+8)


Henry Fuchs

Henry Fuchs is the Federico Gil Distinguished Professor of Computer Science at the University of North Carolina at Chapel Hill. He has been active in interactive 3D graphics and related fields since the 1970s, including 3D reconstruction from medical and photographic imagery, rendering algorithms (BSP trees), graphics engines (Pixel-Planes), the Office of the Future, telepresence, and medical applications. He is a member of the US National Academy of Engineering, a fellow of the American Academy of Arts and Sciences, a recipient of the ACM SIGGRAPH Steven Anson Coons Award, and the holder of an honorary doctorate from TU Wien, the Vienna University of Technology.

Title: All-day Augmented Reality Glasses: Promises and Problems

Abstract: Many of us foresee a future in which AR eyeglasses are worn all day, replacing our current prescription eyewear. That future may not arrive for a while, and predicting its benefits and problems may be as premature as early predictions about the use of mobile phones or about “a helicopter in every garage.” Nevertheless, a few sample applications seem both obvious and promising: 1) continuous physiological monitoring for sudden and long-term health changes, and 2) virtually embodied avatars for guidance in navigation, exercise, and training. I will also talk about the rocky history of head-worn AR systems, which, in contrast to the amazing, continuous advances in integrated-circuit fabrication technology, has gone through multiple boom and bust cycles. I will talk about a few of the historic obstacles and how some were overcome and others side-stepped. I will summarize a few of the remaining problems and possible paths to their solution, and describe several of our projects in these areas.

Grand Ballroom 1 & 2 – L4 (Hybrid with Zoom)
Day 3: 19 October 2022 (Wednesday) | 3:00pm to 4:00pm (GMT+8)

Marc Pollefeys

Marc Pollefeys is a Professor of Computer Science at ETH Zurich and the Director of the Microsoft Mixed Reality and AI Lab in Zurich, where he works with a team of scientists and engineers to develop advanced perception capabilities for HoloLens and Mixed Reality. He was elected a Fellow of the IEEE in 2012. He obtained his PhD from KU Leuven in 1999 and was a professor at UNC Chapel Hill before joining ETH Zurich.

He is best known for his work in 3D computer vision, having been the first to develop a software pipeline that automatically turns photographs into 3D models, but he also works on robotics, graphics, and machine learning problems. Other noteworthy projects he has worked on include real-time 3D scanning with mobile devices, a real-time pipeline for 3D reconstruction of cities from vehicle-mounted cameras, camera-based self-driving cars, and the first fully autonomous vision-based drone. Most recently, his academic research has focused on combining 3D reconstruction with semantic scene understanding.

Title: Towards the Industrial Metaverse

Synopsis: While true AR/MR consumer devices are probably still years away, devices like HoloLens 2 already have compelling applications in industry today. In this talk we will review what devices can do today and then present ongoing research expanding those capabilities. We will discuss how egocentric activity recognition can enable devices to better assist users in learning and performing tasks. We will also see how combining edge devices with cloud compute capabilities can provide much more powerful solutions. We’ll briefly look at remote rendering as an option to remove constraints on 3D model complexity. Next, we’ll focus on spatial computing. While mixed reality devices typically build their own 3D map of the environment on device, many high-value scenarios require the ability to reliably share and persist spatially localized information with respect to a common coordinate system. We will see how distributed cloud mapping and localization can enable these types of scenarios. We will present results involving not only HMDs but also robots and 3D reality capture devices. Our goal is to enable seamless collaboration between on-site and remote people, as well as autonomous robots, through mixed reality.