Title: Context-Aware Support of Dexterity Skills in Cross-Reality Environments
Authors: CEYSSENS, Jeroen 
DI FIORE, Fabian 
Advisors: Di Fiore, Fabian
Luyten, Kris
Issue Date: 2022
Source: Proceedings of the 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), p. 954-957
Abstract: Figure 1: Overview of the developed XR prototypes, including (1) an AR tool for operator assistance, (2) an AR tool providing surface coverage for cleaning cleanrooms, (3) a VR simulation of nuclear environments, and (4) a welding tool using AR passthrough to simulate welding light, seam, heat, and guidance for direction.

ABSTRACT
Within our work, we apply context-awareness to determine how AR/VR technology should adapt instructions based on the context to suit user needs. We focus on situations where the user must carry out a complex manual activity that requires additional information to be present during the activity to achieve the desired result. To this end, the emphasis is on activities that require fine-motor skills and in-depth expertise and training, for which XR is a powerful tool to support and guide users performing these tasks. The contexts we detect include user intentions, environmental conditions, and activity progressions. Our work builds on these contexts with the main focus on determining how XR should adapt for the end user from a usability perspective. The feedback we request from ISMAR consists of input in the detection, usability, and simulation categories, together with how to balance these categories to create real-time and user-friendly systems. The next steps of our work will consider how content should adjust based on cognitive load, activity space, and environmental conditions.

1 RELATED WORKS
"Context awareness" is defined by The Oxford Dictionary of Computing as "The ability of a computer system to sense details of the external world and choose its course of action depending on its findings."¹
Due to the nature of "sensing the details of the external world", context-awareness has been studied extensively in combination with Cross-Reality (XR) technology, such as Augmented Reality (AR) and Virtual Reality (VR), both of which interact heavily with the external world. These topics range from improving assistance instructions by adjusting them to the operation [5, 9, 20], to blending content with the real environment by taking lighting and shadows into account [1, 8, 11], to making content adaptive to situations [10, 13]. Within these topics, Lindlbauer et al. (2019) used context-awareness to automatically change the amount of XR content shown based on the user's cognitive load and knowledge [10]. To further support the use of context-awareness in other XR applications, Chen et al. (2018, 2020) created frameworks for context-aware ubiquitous interaction [3] and for semantic-based, material-aware interaction with the real environment [4]. Similarly, Gattullo et al. (2020) created a context-aware information manager for AR-provided technical documentation [6], and Wang et al. (2020) made a tool for authoring context-aware applications by utilizing programming-by-demonstration of daily activities [17]. Aside from object-based context-awareness, Orlosky et al. (2015) developed a management tool that automatically realigns AR content to avoid occlusion with real people in the environment [13]. All these systems allow content creators to quickly develop new context-aware XR content and interfaces that adjust correctly to different situations. However, to achieve proper immersion in simulations, XR content also needs to adapt its visualization to the changing environment. To achieve this effect, Barreira et al. (2018) studied how to adapt the shadows of virtual objects based on the lighting present in real-life outdoor environments [1]. Meanwhile, Mandl et al. (2017) created a deep learning model for changing the material reflections of digital content to match the lighting learned from the real environment [11]. Kan et al. (2019) expanded further with a deep learning model for estimating light reflections in the real environment to simulate shadows and transparency of digital objects [8]. For our work, we will attempt to simulate outcomes from operations requiring dexterity skills. These simulations must blend in with the real environment as they would during live procedures to acquire correct study results. Aside from simplifying XR content creation and blending real and digital objects, it is also necessary to study how context-aware visualizations should behave in different circumstances. Lampen et al. (2020) studied how context-aware assistance can be provided in the automotive industry by providing an augmented human for support, and found initial benefits in user experience, motivation, and performance [9]. Doughty et al. (2021) built a deep learning model for detecting surgical operations and provided context-aware guidance on those operations to see how they can support

¹ Oxford Dictionary of Computing definition of context awareness:
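The abstract's core idea of adapting XR instruction content to sensed context (cognitive load, environmental conditions, activity progression) can be illustrated as a simple rule-based mapping. The sketch below is purely illustrative: the names (`Context`, `adapt_instructions`), thresholds, and output fields are assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of a context-aware adaptation rule for XR instructions.
# All names and thresholds are hypothetical, chosen only to illustrate the
# context categories named in the abstract.
from dataclasses import dataclass

@dataclass
class Context:
    cognitive_load: float   # estimated load, 0.0 (idle) .. 1.0 (overloaded)
    ambient_light: float    # lux, from an environment sensor
    step_progress: float    # 0.0 .. 1.0 within the current activity step

def adapt_instructions(ctx: Context) -> dict:
    """Map sensed context to presentation choices for XR guidance."""
    # Reduce instruction detail when the user is cognitively loaded.
    detail = "minimal" if ctx.cognitive_load > 0.7 else "full"
    # In bright environments, render overlaid guidance with higher contrast.
    high_contrast = ctx.ambient_light > 500.0
    # Near the end of a step, pre-load the next instruction.
    preload_next = ctx.step_progress > 0.8
    return {"detail": detail,
            "high_contrast": high_contrast,
            "preload_next": preload_next}

print(adapt_instructions(Context(cognitive_load=0.9,
                                 ambient_light=800.0,
                                 step_progress=0.5)))
# → {'detail': 'minimal', 'high_contrast': True, 'preload_next': False}
```

In a real system the thresholds would of course come from sensing and user studies rather than constants; the point is only the shape of the mapping from detected context to presentation decisions.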
Keywords: Human-centered computing; Interaction design; Interaction design process and methods; Activity centered design; Human computer interaction (HCI); Interaction paradigms; Mixed / augmented reality
Document URI:
ISBN: 978-1-6654-5365-3
DOI: 10.1109/ISMAR-Adjunct57072.2022.00214
ISI #: 000918030200203
Rights: 2022 IEEE
Category: C1
Type: Proceedings Paper
Appears in Collections:Research publications

Files in This Item:
Published version (Restricted Access), 397.89 kB, Adobe PDF



