Please use this identifier to cite or link to this item:
http://hdl.handle.net/1942/35409
Title: | Intelligibility and Control for Context-Aware Internet of Things Applications
Authors: | COPPERS, Sven
Advisors: | Luyten, Kris; Coninx, Karin; Vanacken, Davy
Issue Date: | 2021
Abstract: | All types of applications, ranging from recommendation systems such as Computer-Assisted Translation (CAT) tools, critical systems such as cockpit interfaces in commercial aircraft, to ubiquitous environments such as smart homes, are becoming more intelligent and autonomous. Considering their ever-growing variety and complexity, such applications can be hard to understand, which can be frustrating and cause users to lose trust. While low trust is obviously detrimental for any application, trust in an application can also be too high, which can lead to users trusting it blindly, even when it is not behaving as intended. To ensure users have appropriate trust at all times, it is crucial to align their expectations with the application's actual behavior and impact. To achieve this, complexity should not be avoided, but rather tamed with well-designed user interfaces. In this context, intelligibility and scrutability have been identified as crucial user interface properties that help users build a more accurate mental model of the application. These properties are the common thread running through this thesis.
In the first part of the thesis, I explore what information and visualisations make recommendations truly intelligible, and investigate how the feedforward mechanism can be enriched in a systematic manner for GUI widgets. In the second part of this thesis, I explore how the intelligibility of smart homes can be improved, and how users can regain control in situations where unexpected behavior occurs. My user studies show that users appreciate being better informed, but only when the additional information benefits their activities (e.g. during decision making) and is not already part of their readily available knowledge. Based on predictive models, feedforward contributes greatly to intelligibility by informing users beforehand about future system states, and contributes to more appropriate trust.
This dissertation addresses key challenges regarding intelligibility in three domains: (1) representations for recommender systems; (2) feedforward for GUI widgets; and (3) the end-user development paradigm for smart homes. The presented concepts and prototypes offer promising solutions, but come with limitations that highlight interesting opportunities for future work, such as building more accurate and complete predictive models, as well as investigating when to present feedforward and how to manage user attention.
Document URI: | http://hdl.handle.net/1942/35409
Category: | T1
Type: | Theses and Dissertations
Appears in Collections: | Research publications |
Files in This Item:
File | Description | Size | Format
---|---|---|---
2021-08-31-final.pdf (embargoed until 2026-09-15) | | 8.15 MB | Adobe PDF