
dc.contributor.author
Wolf, Julian
dc.contributor.supervisor
Meboldt, Mirko
dc.contributor.supervisor
Fürnstahl, Philipp
dc.date.accessioned
2023-02-27T08:37:51Z
dc.date.available
2022-12-14T09:14:35Z
dc.date.available
2022-12-14T10:22:43Z
dc.date.available
2023-02-27T08:37:51Z
dc.date.issued
2022
dc.identifier.uri
http://hdl.handle.net/20.500.11850/586967
dc.identifier.doi
10.3929/ethz-b-000586967
dc.description.abstract
Procedural tasks are common in many professions, such as maintenance, assembly, or surgery, and are characterized by an operator performing a predefined sequence of steps to achieve a specific goal. Because these tasks often involve complex machines, devices, or even patients, they place the highest demands on correct task execution. Augmented reality (AR) head-mounted displays (HMDs) have been shown to provide effective support during procedural tasks. Compared to conventional information media, where information is often spread across multiple documents (e.g., maintenance) or external screens (e.g., surgery), AR HMDs display contextual information directly in the operator's field of view without occupying the operator's hands. Whereas with conventional AR, displayed information changes only in response to manual user input, context-aware AR promises to further improve support by automatically adapting the displayed information to the operator's current needs and by providing feedback. Understanding the strengths and weaknesses of these two technologies is key to developing support systems that improve the quality of task execution, making procedural tasks safer and improving outcomes. Previous studies on context-aware systems have focused primarily on manual execution without considering an important part of human interaction: perception. Eye tracking makes it possible to measure perception, provides deep insights into cognitive processes, and might therefore bring benefits to context-aware systems that warrant investigation. This work investigates different concepts of how AR and context-aware AR support systems can be designed, how they work, and how they affect operators' task performance. It further aims to advance context-aware AR support by integrating eye tracking and by deriving a suitable system model to describe the relationships between human behavior, AR, and context-aware AR.
Three studies are presented in this work. Study I investigates the benefits of contextual information in AR over traditional information media for providing training instructions. A study was conducted with 21 medical students performing an extracorporeal membrane oxygenation (ECMO) cannulation on a physical simulator setup. The evaluation comprised a detailed error protocol with both a categorization into knowledge- and handling-related errors and an error severity ranking. The results showed clear benefits of AR over conventional instructions while pointing out certain limitations that might be addressed by context-aware AR. Study II investigates effective visualization strategies when real-time feedback is provided continuously. A study was conducted with 4 expert surgeons and 10 surgical residents performing surgical drilling on a physical simulator setup. The results show that continuous performance feedback generally levels task performance between novice and expert operators, reveal clear advantages of and preferences for certain AR visualizations, and give insights into how AR visualizations guide visual attention. In particular, the peripheral field around the area of execution proved promising for displaying information, as the operator can simultaneously perceive feedback and coordinate hand movement. Study III investigates the suitability of eye and hand tracking features for predicting and preventing an operator's erroneous actions. A study was conducted on a memory card game to explore the potential and limitations of this approach. The first experiment, which involved 10 participants, recorded participants' eye and hand movements to derive a method for target prediction. The second experiment, with 12 participants, examined the timeliness and accuracy of the implemented method end-to-end and showed the method to be highly effective in preventing a user's erroneous hand actions.
One of the key conclusions of this work is that context-aware AR support can significantly improve procedural outcomes and even raise the task performance of less experienced operators to the level of experts. In addition, analyzing hand-eye coordination patterns in real time allows for predictive AR support and error prevention, which might eventually provide a safety net for operators performing their first independent task executions. Important directions for future work include integrating and advancing predictive AR support for more complex procedures, investigating effective visualization strategies in environments with multiple dynamic visual stimuli, and identifying effective feedback and support strategies as operators transition from their first training to independent execution and eventually become experts.
en_US
dc.format
application/pdf
en_US
dc.language.iso
en
en_US
dc.publisher
ETH Zurich
en_US
dc.rights.uri
http://rightsstatements.org/page/InC-NC/1.0/
dc.subject
Augmented Reality (AR)
en_US
dc.subject
User Guidance
en_US
dc.subject
Context Awareness
en_US
dc.subject
Eye Tracking
en_US
dc.title
Towards Advanced User Guidance and Context Awareness in Augmented Reality-guided Procedures
en_US
dc.type
Doctoral Thesis
dc.rights.license
In Copyright - Non-Commercial Use Permitted
dc.date.published
2022-12-14
ethz.size
137 p.
en_US
ethz.code.ddc
DDC - DDC::6 - Technology, medicine and applied sciences::600 - Technology (applied sciences)
en_US
ethz.code.ddc
DDC - DDC::6 - Technology, medicine and applied sciences::610 - Medical sciences, medicine
en_US
ethz.code.ddc
DDC - DDC::0 - Computer science, information & general works::000 - Generalities, science
en_US
ethz.identifier.diss
28704
en_US
ethz.publication.place
Zurich
en_US
ethz.publication.status
published
en_US
ethz.leitzahl
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02130 - Dep. Maschinenbau und Verfahrenstechnik / Dep. of Mechanical and Process Eng.::02665 - Inst. f. Design, Mat. und Fabrikation / Inst. of Design, Materials a Fabrication::03943 - Meboldt, Mirko / Meboldt, Mirko
en_US
ethz.leitzahl.certified
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02130 - Dep. Maschinenbau und Verfahrenstechnik / Dep. of Mechanical and Process Eng.::02665 - Inst. f. Design, Mat. und Fabrikation / Inst. of Design, Materials a Fabrication::03943 - Meboldt, Mirko / Meboldt, Mirko
en_US
ethz.tag
User Study
en_US
ethz.tag
Surgical Navigation
en_US
ethz.tag
Hand-Eye Coordination
en_US
ethz.tag
Error Prevention
en_US
ethz.tag
Behavior Prediction
en_US
ethz.tag
Real-time Feedback
en_US
ethz.date.deposited
2022-12-14T09:14:36Z
ethz.source
FORM
ethz.eth
yes
en_US
ethz.availability
Open access
en_US
ethz.rosetta.installDate
2022-12-14T10:23:14Z
ethz.rosetta.lastUpdated
2024-02-02T20:08:36Z
ethz.rosetta.exportRequired
true
ethz.rosetta.versionExported
true
ethz.COinS
ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.atitle=Towards%20Advanced%20User%20Guidance%20and%20Context%20Awareness%20in%20Augmented%20Reality-guided%20Procedures&rft.date=2022&rft.au=Wolf,%20Julian&rft.genre=unknown&rft.btitle=Towards%20Advanced%20User%20Guidance%20and%20Context%20Awareness%20in%20Augmented%20Reality-guided%20Procedures