October 17-21 2022 | Singapore

TUTORIALS

T2: Remote Collaboration using Augmented Reality: Development and Evaluation

Virtual Room 3 (Zoom)
Day 1: 17 October 2022 (Monday) | 15:30 to 18:30 (GMT +8)

Presenters:

Bernardo Marques, University of Aveiro
Samuel Silva, University of Aveiro
Paulo Dias, University of Aveiro
Beatriz Sousa Santos, University of Aveiro

Topics:

This tutorial will present essential concepts associated with collaboration and AR technologies from a human-centered perspective, emphasizing what characterizes the collaborative effort (e.g., team, time, task, communication, scene capture, shared context sources, and user actuation) as well as what people need to make remote collaboration effective. This establishes the grounds on which to assess and evolve Collaborative AR technologies.

Afterward, the maturity of the field will be discussed, along with a roadmap of research actions that may help improve the characterization and evaluation of the collaboration process. This is particularly important because the current literature reports that most research efforts have been devoted to creating the enabling technology and overcoming engineering hurdles. Moreover, most studies rely on single-user evaluation methods, which are not suited to collaborative solutions and fall short of retrieving enough data for comprehensive evaluations. This points to minimal support from existing frameworks and a lack of theories and guidelines. With the growing number of prototypes, the path to usable, realistic, and impactful solutions must entail an explicit understanding of how collaboration occurs through AR and how it can contribute to a more effective work effort.

The tutorial will end with a call for action. The evaluation process must move beyond a simple assessment of how the technology works. Conducting thorough evaluations is paramount to retrieving the necessary data and obtaining a comprehensive perspective on the different factors of Collaborative AR: how teams work together, how communication happens, and how AR is used to create common ground, among others. Finally, we intend to have a period for discussion, in which attendees may raise questions and opinions, including interesting topics for future research.

Acknowledgments: This research was developed in the scope of the PhD grant, funded by FCT [SFRH/BD/143276/2019]. It was also supported by IEETA in the context of the project [UIDB/00127/2020] and by the Smart Green Homes Project [POCI-01-0247-FEDER-007678], a co-promotion between Bosch Termotecnologia S.A. and the University of Aveiro.

T3: Build Your Own Social Virtual Reality With Ubiq

Virtual Room 1 (Zoom)
Day 5: 21 October 2022 (Friday) | 15:30 to 18:30 (GMT +8)

Presenters:

Sebastian Friston, University College London
Ben Congdon, University College London
Anthony Steed, University College London

Topics:

One of the most promising applications of consumer virtual reality technology is its use for remote collaboration. A very wide variety of social virtual reality (SVR) applications are now available, from competitive games among small numbers of players through to conference-like setups supporting dozens of visitors. The implementation strategies of different SVR applications are very diverse, with few standards or conventions to follow. There is an urgent need for researchers to be able to develop and deploy test systems so as to facilitate a range of research, from new protocols and interaction techniques for SVRs through to multi-participant experiments on the impact of avatar appearance. This tutorial will explain the key concepts behind SVR software and introduce Ubiq, an open source (Apache licence) platform for developing your own SVR applications.
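
As a flavour of the concepts the tutorial covers, the sketch below illustrates the state replication that underlies most SVR systems: each client periodically publishes its avatar pose to a room and applies the poses it receives from its peers. This is a language-agnostic, illustrative sketch only, not Ubiq's API (Ubiq itself targets the Unity engine); the message shape and the RoomConnection abstraction are assumptions made for this example.

  // Illustrative sketch of avatar-state replication in a social VR system.
  // NOTE: this is not the Ubiq API (Ubiq itself is a Unity library); the message
  // shape and the RoomConnection abstraction are assumptions for this example.

  interface AvatarPose {
    peerId: string;                               // who this pose belongs to
    position: [number, number, number];           // head position in room coordinates
    rotation: [number, number, number, number];   // head orientation as a quaternion
    timestamp: number;                            // for ordering and interpolation
  }

  // Minimal transport abstraction: a room-scoped publish/subscribe channel.
  interface RoomConnection {
    send(message: AvatarPose): void;
    onMessage(handler: (message: AvatarPose) => void): void;
  }

  class AvatarReplicator {
    private remotePoses = new Map<string, AvatarPose>();

    constructor(private room: RoomConnection, private localPeerId: string) {
      // Keep the latest pose received from each remote peer.
      room.onMessage((msg) => {
        if (msg.peerId !== this.localPeerId) {
          this.remotePoses.set(msg.peerId, msg);
        }
      });
    }

    // Called every frame (or at a fixed network tick) with the local pose.
    publishLocalPose(position: [number, number, number],
                     rotation: [number, number, number, number]): void {
      this.room.send({ peerId: this.localPeerId, position, rotation, timestamp: Date.now() });
    }

    // The renderer reads these to draw the remote avatars.
    getRemotePoses(): AvatarPose[] {
      return Array.from(this.remotePoses.values());
    }
  }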

T4: Emotions and Touch in Virtual Reality

Waterfront 2 - Level 2 (Hybrid with Zoom)
Day 5: 21 October 2022 (Friday) | 13:00 to 16:00 (GMT +8)

Presenters:

Darlene Barker, University of Massachusetts Lowell

Topics:

Being able to connect to our environment enriches our experience of the world, whether in real life or in virtual reality (VR). Emotions and touch can be utilized within VR to enhance the experience. Touch is one of the senses that has already been simulated in VR, through virtual tools and haptic devices that manipulate the world, yet this is done while leaving most of the body out of the equation. Sight and hearing are used in diverse ways to compensate for the missing sense of touch and to convince us that we are touching. To bring the human body back into the mix, where it can give a much stronger and more immersive sense of presence in VR, we need touch and emotions in VR. This tutorial covers ongoing research on Thelxinoe, a multisensory collection framework for processing emotions and simulating touch in VR.

Currently we can manipulate the virtual reality (VR) environment and convince ourselves of a sense of presence, but by bringing another of our senses into the mix we can achieve presence in a more realistic manner. Applications of this work are needed at a time when social distancing has become the norm. Along with the need for touch comes the need for broader coverage of the human body, both for collecting data and for applying stimuli: current haptic methods must be extended to cover more of the body for a more immersive experience, including tracking emotions. Our research studies emotions in VR and, more specifically, the implementation of touch in VR: we study how emotions are experienced in day-to-day life, aim to recreate that in VR, and explore how to further implement the experience of touch within VR.

To make a greater impact on social interaction within virtual reality (VR), we need to consider the impact of emotions on our interpersonal communications and how we can express them within VR. This tutorial will present introductory research on the topic, in which we propose using emotions derived from voice, facial expressions, and touch to create the emotional closeness and nonverbal intimacy needed in nonphysical interpersonal communication. Virtual and long-distance communications lack the physical contact that we have in in-person interaction, as well as the nonverbal cues that enhance what a conversation is conveying. The use of haptic devices and tactile sensations can help with the delivery of touch between parties, and machine learning can be used for emotion recognition based on data collected from other sensory devices, all working toward better long-distance communication.

T5: OpenARK — Tackling Augmented Reality Challenges via an Open-Source Software Development Kit

Waterfront 3 - Level 2 (Hybrid with Zoom)
Day 1: 17 October 2022 (Monday) | 13:00 to 16:00 (GMT +8)

Presenters:

Allen Y. Yang, University of California, Berkeley
Mohammad Keshavarzi, University of California, Berkeley
Adam Chang, University of California, Berkeley

Topics:

This tutorial is a revised and updated edition of the OpenARK tutorial presented at ISMAR 2019 and 2020. The aim of this tutorial is to present an open-source augmented reality development kit, called OpenARK. OpenARK was founded at UC Berkeley in 2015. Since then, the project has received high-impact awards and visibility. Currently, OpenARK is being used by several industrial alliances, including HTC Vive, Siemens, Ford, and State Grid. In 2018, OpenARK won the only Mixed Reality Award at the Microsoft Imagine Cup Global Finals. In the same year, OpenARK also won a Gold Medal at the Internet+ Innovation and Entrepreneurship Competition, the largest such competition in China. OpenARK has received funding support from research grants from an Intel RealSense project, the NSF, and the ONR.

OpenARK includes a multitude of core functions critical to AR developers and future products. These functions include multi-modality sensor calibration, depth-based gesture detection, depth-based deformable avatar tracking, and SLAM and 3D reconstruction. In addition to these functionalities, much recent work has gone into developing a real-time, deep-learning-based 3D object tracking module that addresses the Digital Twin problem: overlaying a virtual augmentation on a real object with near-perfect accuracy, which enables a wide variety of AR functionalities. All functions are based on state-of-the-art real-time algorithms and are coded to be efficient on mobile-computing platforms.
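
To make the Digital Twin registration step concrete, here is a minimal sketch of the geometry involved: a tracker estimates the rigid pose of the real object in camera coordinates, and composing that pose with the camera pose yields the transform at which the virtual augmentation must be rendered so that it overlays the object. This is an illustration of the general idea only, not OpenARK code; the pose-estimator parameter stands in for whatever tracking module produces the pose.

  // Illustrative sketch of the Digital Twin registration step (not OpenARK code).
  // A 3D object tracker estimates the rigid transform from object space to camera
  // space; rendering the virtual model with that transform (composed with the
  // camera pose) makes the augmentation overlay the real object.

  type Mat4 = number[];                           // 4x4 transform, column-major, 16 entries
  type Frame = Uint8Array;                        // stand-in for a camera image
  type PoseEstimator = (frame: Frame) => Mat4;    // e.g. a deep-learning object tracker

  // Compose two rigid transforms: result = a * b.
  function multiply(a: Mat4, b: Mat4): Mat4 {
    const out = new Array<number>(16).fill(0);
    for (let col = 0; col < 4; col++) {
      for (let row = 0; row < 4; row++) {
        for (let k = 0; k < 4; k++) {
          out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
        }
      }
    }
    return out;
  }

  // Place the virtual twin so it coincides with the real object: the returned
  // matrix is the model transform handed to the renderer.
  function registerDigitalTwin(frame: Frame, cameraToWorld: Mat4,
                               estimateObjectPose: PoseEstimator): Mat4 {
    const objectToCamera = estimateObjectPose(frame);   // object pose in camera space
    return multiply(cameraToWorld, objectToCamera);     // object pose in world space
  }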

Another core component of OpenARK is its open-source depth perception databases. Currently we have made two unique databases available to the community: one on depth-based gesture detection, and the other on mm-accuracy indoor and outdoor large-scale scene geometry models and AR attribute labeling. We will give an overview of our effort in designing and constructing these databases, which could potentially benefit the community at large.

Finally, we will discuss our effort to make depth-based perception easily accessible to application developers, who may not have, and should not be forced to acquire, a deep understanding of 3D point cloud and reconstruction algorithms. The last core component of OpenARK is an interpreter of 3D scene layouts and their compatible AR attributes, based on generative design principles first developed for creating architectural design layouts. We will discuss the fundamental concepts and algorithms of generative design and how it can be used to interpret common 3D scenes and their attributes for intuitive AR application development.

The teaching material of this tutorial will be drawn from a graduate-level advanced topics course on AR/VR offered at UC Berkeley for the past three years. Our teaching material can be downloaded from two websites:

  1. OpenARK GitHub website: https://github.com/augcog/OpenARK
  2. UC Berkeley Vive Center: https://vivecenter.berkeley.edu

T6: VR for Diversity

Waterfront 3 - Level 2 (Hybrid with Zoom)
Day 1: 17 October 2022 (Monday) | 09:00 to 12:00 (GMT +8)

Presenters:

Mirjam Vosmeer, Amsterdam University of Applied Sciences

Topics:

This tutorial will focus on issues of (gender) diversity and how these have been attended to in Virtual Reality experiences. Gender diversity in video games and the game industry will be discussed briefly, before moving on to VR. Outcomes of the research project VR for Diversity will then be presented, featuring the experience Amelia’s Dream, on sexism and gender (in)equality, and the Virtual Museum Exhibition about LGBTIQ+.

T7: Introduction to AR with Unity

Waterfront 3 - Level 2 (Hybrid with Zoom)
Day 5: 21 October 2022 (Friday) | 13:00 to 16:00 (GMT +8)

Presenters:

Hector Caballero, Unity

Topics:

AR Foundation includes core features from ARKit, ARCore, Magic Leap, and HoloLens, as well as unique Unity features, to build robust apps that are ready to ship to internal stakeholders or to any app store. This framework enables you to take advantage of all of these features in a unified workflow.

T9: Cognitive Aspects of Interaction in Virtual and Augmented Reality Systems (CAIVARS)

Waterfront 3 - Level 2 (Hybrid with Zoom)
Day 5: 21 October 2022 (Friday) | 16:00 to 19:00 (GMT +8)

Presenters:

Manuela Chessa, University of Genoa
Guido Maiello, Justus-Liebig University Gießen

Topics:

In this tutorial, an interdisciplinary team of researchers will describe how to best design and analyze interaction in Virtual and Augmented Reality (VR/AR) systems from different perspectives. Manuela Chessa, an expert in perceptual aspects of human-computer interaction, will discuss how interaction in AR affects our senses, and how misperception issues negatively affect interaction. Several technological solutions for interacting in VR and AR will be discussed and, finally, the challenges and opportunities of mixed reality (MR) systems will be analyzed. Fabio Solari, an expert in biologically inspired computer vision, will focus on foveated visual processing for action tasks and on the geometry and calibration of interactive AR. Guido Maiello, an innovative young neuroscientist, will present the link between our eyes and hands in the real world with the aim of improving the design of interaction techniques in VR and AR. Giovanni Maria Farinella, a leading computer vision expert, will discuss first person vision and egocentric perception. Dimitri Ognibene, whose research combines human-computer interaction and robotics with computational neuroscience and machine learning, will describe how perception is an active process in both humans and machines. Thrishantha Nanayakkara will show recent experiments with human participants and their validation with soft robotic counterparts. Overall, this unique panel of multi-disciplinary researchers will delineate a compelling argument in favor of investigating human cognition and perception in the context of AR/VR.

T10: Interaction Design Patterns in VR

A flexible framework to evaluate and analyze a multitude of use-cases

Virtual Room 3 (Zoom)
Day 5: 21 October 2022 (Friday) | 08:30 to 11:30 (GMT +8)

Presenters:

Rob Dongas, VEIL, University of Sydney
Suzan Oslin, VEIL, Open AR Cloud
Marina Roselli, VEIL, Mejuri
Ke Wang, VEIL, Remind
Andres Leon-Geyer, VEIL, Pontificia Universidad Católica del Perú, Universidad Peruana de Ciencias
Christian Beyle, VEIL, Universidad Católica de Temuco, VICO science

Topics:

The Virtual Experience Interaction Lab (VEIL) is a distributed global team of academic and professional researchers that has developed a methodology for evaluating interaction patterns in VR. They will present the evaluation framework, demonstrate the evaluation process, describe the process for analysis, and provide support for the choices made. They will share the resources they have built to help the team maintain a shared context across time zones and language barriers, including definitions of important terms and testing methodologies. Participants will have the opportunity to run an evaluation of a recorded VR session during the tutorial, as well as explore the collected data to create a sample report.

Discussions will include how to classify interaction patterns, how interactions might differ between use cases, how to manage such a large dataset for meaningful analysis, and considerations for conducting research with a globally distributed team.

https://www.veilab.org/

T11: Developing Situated Analytics Applications with RagRug

Waterfront 3 - Level 2 (Hybrid with Zoom)
Day 1: 17 October 2022 (Monday) | 16:00 to 18:00 (GMT +8)

Presenters:

Dieter Schmalstieg, Graz University of Technology
Philipp Fleck, Graz University of Technology

Topics:

RagRug [1] is the first open-source toolkit [2] dedicated to situated analytics. The abilities of RagRug go beyond previous immersive analytics toolkits by focusing on the specific requirements that emerge when using augmented reality rather than virtual reality. RagRug lets users create visualizations that are (a) embedded with referents (specific physical objects in the environment) and (b) reactive to changes in the real world (both physical changes and changes in the data related to the referents). These capabilities are enabled by an easy-to-learn programming model (“event-driven functional reactive programming”) on top of the Unity game engine, the Node-RED framework for the Internet of Things, and the JavaScript programming language. RagRug ensures these tried-and-tested components work seamlessly together and delivers visualizations that are both expressive and easy to use.

It is important to note that RagRug does not break new ground in terms of the visualizations it can create; instead, it breaks new ground in how it integrates visualizations with referents. This ability comes from RagRug’s support for modeling both the spatial and semantic properties of referents, and from its support for IoT sensors.

The modeling can be performed using a variety of tools, such as CAD modeling or 3D scanning. The results are placed in one or more database back-ends in such a way that an AR client application can use the user’s current location or task description to formulate a meaningful query and retrieve relevant data on the fly, without prior configuration of the AR client.
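
As a rough illustration of what such a query might involve, the sketch below shows a hypothetical referent record combining spatial and semantic properties, and a helper that selects referents by distance to the user and by task-relevant labels. The data model and the queryReferents helper are assumptions made for this example; RagRug's actual schema and back-end interfaces are described in [1].

  // Hypothetical data model for referents and a location-based query.
  // Illustration only; RagRug's actual schema and back-end interfaces differ.

  interface Referent {
    id: string;
    label: string;                          // semantic class, e.g. "valve"
    position: [number, number, number];     // location in the tracked space
    geometryUri: string;                    // CAD model or 3D scan of the object
    sensorTopics: string[];                 // IoT data streams attached to this referent
  }

  // Return the referents close enough to the user to be candidates for situated
  // visualization, optionally filtered by labels relevant to the current task.
  function queryReferents(all: Referent[],
                          userPosition: [number, number, number],
                          maxDistance: number,
                          taskLabels?: string[]): Referent[] {
    const dist = (a: [number, number, number], b: [number, number, number]) =>
      Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
    return all.filter((r) =>
      dist(r.position, userPosition) <= maxDistance &&
      (!taskLabels || taskLabels.includes(r.label)));
  }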

The visualization capabilities of RagRug build on the state of the art in immersive analytics, but extend it by allowing real-time reactions to data streaming from sensors that observe changes in the environment. When new data comes in from the sensors, the situated visualization changes automatically. Programmers do not have to worry about the “how”; they can concentrate on the “what” of situated visualization.
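
This reactive behavior can be pictured as a small event-driven pipeline: sensor events arrive, a pure function maps them to visual properties, and the bound visualization is updated whenever a new value is emitted. The sketch below shows this pattern in a generic TypeScript form; RagRug itself expresses it with Node-RED flows and JavaScript on top of Unity, so the EventStream abstraction and the example threshold are assumptions made for illustration.

  // Generic sketch of the event-driven reactive pattern behind situated
  // visualization: map raw sensor events to visual properties and push updates
  // to the bound visualization whenever new data arrives.
  // Illustration only; RagRug expresses this with Node-RED flows and JavaScript.

  interface SensorEvent {
    referentId: string;    // which physical object the reading belongs to
    value: number;         // e.g. temperature in degrees Celsius
    timestamp: number;
  }

  interface VisualUpdate {
    referentId: string;
    color: string;         // visual encoding derived from the sensor value
    label: string;
  }

  // Minimal event stream with map/subscribe, in the spirit of functional
  // reactive programming.
  class EventStream<T> {
    private handlers: Array<(value: T) => void> = [];
    emit(value: T): void { this.handlers.forEach((h) => h(value)); }
    subscribe(handler: (value: T) => void): void { this.handlers.push(handler); }
    map<U>(fn: (value: T) => U): EventStream<U> {
      const out = new EventStream<U>();
      this.subscribe((v) => out.emit(fn(v)));
      return out;
    }
  }

  // Pure mapping from data to visual properties: the "what", not the "how".
  const toVisual = (e: SensorEvent): VisualUpdate => ({
    referentId: e.referentId,
    color: e.value > 80 ? "red" : "green",
    label: `${e.value.toFixed(1)} °C`,
  });

  // Wiring: sensor stream -> visual stream -> embedded visualization.
  const sensorStream = new EventStream<SensorEvent>();
  sensorStream.map(toVisual).subscribe((update) => {
    // In an AR client this would restyle the visualization attached to the
    // referent; here we just log the new visual state.
    console.log(`${update.referentId}: ${update.color} (${update.label})`);
  });

  // A new reading arrives and the situated visualization reacts automatically.
  sensorStream.emit({ referentId: "boiler-3", value: 86.5, timestamp: Date.now() });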

[1] Philipp Fleck, Aimee Sousa Calepso, Sebastian Hubenschmid, Michael Sedlmair, Dieter Schmalstieg (2022) RagRug: A Toolkit for Situated Analytics. IEEE Transactions on Visualization and Computer Graphics.
[2] RagRug GitHub page: https://github.com/philfleck/ragrug