Augmented and Virtual Reality Research at IBM Watson - overview
IBM Augmented Remote Assist (2020)
Augmented Reality (AR) enhances a user's perception of their surroundings by superimposing graphics and media on top of what they see in the real world. By showing information in the right context and at the right location in the physical environment, AR reduces the cognitive effort needed to relate information to the physical environment (e.g., relating instructions to the hardware), cuts down the number of errors (through visual guidance), and reduces the time required to look up service information.
IBM’s Augmented Remote Assist solution uses the mobile device’s camera and advanced computer vision techniques to recognize the hardware that needs support, and overlays 2D and 3D annotations on the hardware to provide users with real-time, on-the-spot instructions. Augmented Remote Assist, powered by remote support agents, can guide field technicians and users through visual instructions that appear on their mobile devices in real time.
IBM Augmented Remote Assist (iOS)
IBM Augmented Remote Assist (Android)
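A core step in overlaying annotations on recognized hardware is projecting a 3D anchor point (attached to the hardware) into 2D screen coordinates using the tracked camera pose. The source does not describe the implementation, so the following is a minimal sketch of a standard pinhole-camera projection; the intrinsics (fx, fy, cx, cy) and the pose convention are illustrative assumptions, not IBM's actual pipeline.

```python
import numpy as np

def project_to_screen(point_world, camera_pose, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a 3D annotation anchor into 2D pixel coordinates using a
    pinhole camera model. camera_pose is a (rotation, translation) pair
    mapping world coordinates to camera coordinates (illustrative convention)."""
    R, t = camera_pose  # world-to-camera rotation (3x3) and translation (3,)
    p_cam = R @ np.asarray(point_world, dtype=float) + t
    if p_cam[2] <= 0:
        return None  # anchor is behind the camera; annotation not visible
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return (u, v)
```

With an identity pose, a point 2 meters straight ahead lands at the image center; as the tracked pose changes frame to frame, re-running the projection keeps the overlaid annotation "stuck" to the hardware.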
Dynamic Visual Instruction Generation for Augmented Reality-driven Technical Support (2020)
In the hardware technical support domain, scaling technician skills remains a prevalent problem. Given the large portfolio of hardware products service providers need to maintain, it is not possible for every technician to be an expert at repairing every product. Augmented Reality addresses this problem through virtual procedures, which equip technicians with the skills they need to support a wide range of hardware products.
In this project, we present a novel approach to dynamically construct virtual procedures, and we demonstrate the feasibility of our approach through a real-life implementation.
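Dynamically constructing a virtual procedure means turning an existing text-based knowledge article into an ordered, machine-usable sequence of steps that AR content can be attached to. The paper's actual method is not reproduced here; the sketch below is a simplified illustration where the part names, the numbered-list format, and the anchor-matching heuristic are all hypothetical.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProcedureStep:
    index: int
    instruction: str
    anchor: Optional[str] = None  # hypothetical: part name to attach the 3D annotation to

def parse_knowledge_article(text, known_parts=("cover", "fan", "DIMM")):
    """Split a numbered knowledge article into ordered steps and guess an AR
    anchor for each step by matching known part names (illustrative heuristic)."""
    steps = []
    for m in re.finditer(r"^\s*(\d+)\.\s*(.+)$", text, re.MULTILINE):
        instruction = m.group(2).strip()
        anchor = next((p for p in known_parts if p.lower() in instruction.lower()), None)
        steps.append(ProcedureStep(int(m.group(1)), instruction, anchor))
    return steps
```

For example, parsing "1. Remove the cover.\n2. Replace the DIMM." yields two steps anchored to "cover" and "DIMM" respectively, which an AR renderer could then pair with 3D annotations.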
Visual State Detection for Augmented Reality-driven Technical Support (2018-2019)
Augmented Reality is increasingly explored as the new medium for two-way remote collaboration applications to guide participants more effectively and efficiently via visual instructions. As users strive for more natural interaction and automation in augmented reality applications, new visual recognition techniques are needed to enhance the user experience. Although simple object recognition is often used in Augmented Reality towards this goal, most collaboration tasks are too complex for such recognition algorithms to suffice.
In this project, we propose an active and fine-grained visual recognition approach for mobile Augmented Reality, which leverages 2D video frames, 3D feature points, as well as camera pose data to detect various states of an object. We demonstrate the value of our approach through a mobile application designed for hardware support, which automatically detects the state of an object to present the right set of information in the right context.
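One way to combine per-frame recognition with camera pose data, as described above, is to smooth noisy frame-level predictions over time and only accept observations taken while the camera is steady. The project's actual algorithm is not public in this summary, so the following is a minimal sketch under those assumptions; the class names, thresholds, and the majority-vote scheme are illustrative.

```python
from collections import Counter, deque
from dataclasses import dataclass

@dataclass
class Observation:
    frame_label: str          # label from a 2D frame classifier (hypothetical)
    pose_translation: tuple   # camera position reported by AR tracking
    confidence: float         # classifier confidence for this frame

class StateDetector:
    """Majority-vote smoothing over recent observations, gated on camera stability."""
    def __init__(self, window=5, min_conf=0.6, max_motion=0.05):
        self.history = deque(maxlen=window)
        self.min_conf = min_conf
        self.max_motion = max_motion   # max per-axis camera movement (meters)
        self.last_pose = None

    def _camera_stable(self, pose):
        if self.last_pose is None:
            return True
        motion = max(abs(a - b) for a, b in zip(pose, self.last_pose))
        return motion <= self.max_motion

    def update(self, obs):
        """Feed one observation; return the detected object state, or None."""
        stable = self._camera_stable(obs.pose_translation)
        self.last_pose = obs.pose_translation
        if obs.confidence >= self.min_conf and stable:
            self.history.append(obs.frame_label)
        if not self.history:
            return None
        label, count = Counter(self.history).most_common(1)[0]
        return label if count > len(self.history) // 2 else None
```

Gating on pose stability discards motion-blurred frames, and the voting window keeps a single misclassified frame from flipping the detected state, so the right instructions stay on screen.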
AI-driven Augmented Reality Diagnostics (2019)
This project explores how visual recognition, combined with Augmented Reality (AR) tracking, can enable Self-Assist experiences for hardware diagnostics.
In the demonstration video below, the support technician relies on the AR system to diagnose the hardware error through visual recognition, and receives instructions directly from the AR system via virtual procedures, which are interactive 3D visual representations of text-based knowledge articles that describe how to perform step-by-step repair actions.
Virtual Reality for Field Technician Training (2017-2018)
In the technology support domain, effective and efficient training of technicians remains a prevalent problem. In addition to the high cost and inconvenience of having to travel to a training location, limited access to trainers and training equipment further aggravates the problem.
In this project, we explore how Virtual Reality-based experiences can address the pain points of technician training through exploration of 3D visualizations and interaction techniques.
Merged Reality for Field Technician Guidance (2017)
Merged Reality explores real-time background extraction and chroma keying techniques for mobile devices to merge desired physical elements of two remote collaborators' environments, thereby giving a sense of co-location. Unlike Augmented Reality, which overlays virtual elements on top of what we see in the real world, Merged Reality allows us to overlay physical elements from remote environments on top of our reality.
In the context of technical support, Merged Reality provides a more natural interaction between remote experts and field technicians by enabling the remote experts to "point" at areas of interest in the field technician's environment simply by using their hands or other physical elements, such as a screwdriver, ruler, etc., from their environment.
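The chroma keying described above can be reduced to masking out key-colored pixels in the remote expert's frame and compositing the remainder (e.g., a hand or a screwdriver) over the technician's view. The sketch below shows this idea with plain NumPy; the key color, tolerance metric, and function names are illustrative assumptions, not the project's actual implementation, which would run per-frame on a mobile device.

```python
import numpy as np

def chroma_key_mask(frame, key_rgb=(0, 255, 0), tol=80):
    """Boolean mask of pixels close to the key color (hypothetical green screen),
    using an L1 distance in RGB space as a simple similarity measure."""
    diff = frame.astype(np.int32) - np.array(key_rgb, dtype=np.int32)
    return np.abs(diff).sum(axis=-1) < tol

def merge_frames(local, remote, key_rgb=(0, 255, 0)):
    """Composite the remote frame over the local frame: key-colored remote
    pixels become transparent, so the expert's hand or tool (non-key pixels)
    appears overlaid on the technician's view."""
    mask = chroma_key_mask(remote, key_rgb)
    out = remote.copy()
    out[mask] = local[mask]
    return out
```

A production version would additionally handle soft mask edges and color spill, but the core idea is the same: the mask decides, per pixel, which of the two remote environments contributes to the merged view.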