Augmented reality (AR) is a technology that helps solve real-world problems by overlaying computer graphics onto the real world to create immersive experiences. AR can be deployed on various display devices, such as head-mounted displays and retinal display systems, but it is more commonly used to present an augmented video feed to users on monitors or mobile devices.
AR has uses in a variety of fields:
- Entertainment: Graphics overlaid onto live/recorded video feeds
- Medical: Image-guided surgery and pre-operative scans, etc.
- Sports: Strategic placement of advertisements in live sports broadcasts
- Manufacturing & Maintenance: Guidance through a complex piece of machinery
- Shopping: Virtual customer experiences like trying on clothing, makeup, etc.
- Training: Real-life learning experiences for trainees along with annotations by experts
- Customer Service: Guidance through semi-technical tasks like installations and troubleshooting
There are various components to rendering AR on a video feed, namely the Scene Analyzer, Scene Generator, and Tracking System. Each of these components is necessary for accurately applying AR experiences to specific tasks.
The first—the Scene Analyzer—takes stock of the real scene that you are looking to overlay and identifies areas of interest given the parameters of the task. Below, we share an example of how we leverage this step to identify various device ports to provide users with instructions.
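A typical Scene Analyzer wraps a trained detector whose raw output must still be filtered before it is useful. As a minimal sketch (the port-detection CNN itself is assumed; the boxes below are synthetic), non-maximum suppression keeps only the best candidate per region of interest:

```python
# Minimal sketch of a Scene Analyzer post-processing step: filtering a
# detector's raw candidate boxes with non-maximum suppression (NMS).
# Boxes are (x1, y1, x2, y2, score) tuples in pixel coordinates.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, iou_thresh=0.5, score_thresh=0.6):
    """Keep the highest-scoring box in each cluster of overlapping boxes."""
    boxes = sorted((d for d in detections if d[4] >= score_thresh),
                   key=lambda d: d[4], reverse=True)
    kept = []
    for box in boxes:
        if all(iou(box[:4], k[:4]) < iou_thresh for k in kept):
            kept.append(box)
    return kept

# Two overlapping candidates for the same port plus one weak detection:
raw = [(10, 10, 50, 40, 0.95), (12, 11, 52, 42, 0.80), (200, 30, 240, 60, 0.30)]
ports = nms(raw)
print(len(ports))  # 1 box survives: the 0.95-confidence detection
```

The surviving boxes are the "areas of interest" that downstream components anchor instructions to.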
Next, the Scene Generator renders the scene that you wish to create by overlaying graphics on top of the real-life scene. This component is relatively less complex as the quality of the artificial/augmented graphics does not need to be very life-like.
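At its core, this rendering step is compositing: blending a (possibly semi-transparent) graphic onto the live frame. A minimal sketch with synthetic data (the frame and label sizes here are illustrative assumptions):

```python
import numpy as np

# Minimal sketch of a Scene Generator step: alpha-compositing a rendered
# RGBA graphic (e.g. an instruction label) onto an RGB camera frame.

def overlay(frame, graphic_rgba, x, y):
    """Blend an RGBA graphic onto an RGB frame at (x, y), in place."""
    h, w = graphic_rgba.shape[:2]
    roi = frame[y:y + h, x:x + w].astype(np.float32)
    rgb = graphic_rgba[..., :3].astype(np.float32)
    alpha = graphic_rgba[..., 3:4].astype(np.float32) / 255.0
    frame[y:y + h, x:x + w] = (alpha * rgb + (1 - alpha) * roi).astype(np.uint8)
    return frame

frame = np.zeros((120, 160, 3), dtype=np.uint8)    # black "camera frame"
label = np.full((20, 40, 4), 255, dtype=np.uint8)  # white label graphic
label[..., 3] = 128                                # ~50% transparent
out = overlay(frame, label, x=60, y=50)
print(out[55, 70])  # mid-gray pixel where the label was blended in
```

Because the overlay only needs to be legible rather than photorealistic, simple blending like this is usually sufficient.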
The final piece is the Tracking System—the most crucial yet challenging and complex component to get right in AR. Much of a user’s experience is predicated on the Tracking System’s success, as this is the technology that is responsible for keeping the real and virtual scenes properly aligned to ensure the two worlds co-exist harmoniously.
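One common building block of such a tracker is estimating the frame-to-frame camera motion from matched keypoints, then re-anchoring the virtual overlays accordingly. A minimal sketch, assuming the keypoint matching (e.g. feature matching or optical flow) is handled elsewhere and using synthetic correspondences:

```python
import numpy as np

# Minimal sketch of a Tracking System step: least-squares estimation of a
# 2x3 affine transform from matched keypoints between the previous and
# current frames, so virtual overlays stay aligned as the camera moves.

def estimate_affine(src, dst):
    """Fit a 2x3 affine transform mapping src points to dst points."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        A[2 * i] = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = u, v
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

# Synthetic motion: the camera view shifted by (5, -3) pixels between frames.
src = np.array([[10, 10], [100, 20], [50, 80], [30, 60]], dtype=float)
dst = src + np.array([5.0, -3.0])
M = estimate_affine(src, dst)
anchor = M @ np.array([40.0, 40.0, 1.0])  # re-project an overlay anchor point
print(np.round(anchor))  # the anchor follows the motion to (45, 37)
```

Real trackers must also handle mismatched keypoints (e.g. with RANSAC) and full 3D pose, which is what makes this component so hard to get right.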
The underlying technology that drives AR and its components relies on deep learning models developed primarily for vision geometry. These models are essential to many AR applications, such as games, museum guides, and automotive systems.
The key deep learning models that form AR's building blocks all come from the field of computer vision. In particular, convolutional neural networks applied to geometric problems are critical for building AR applications. PackNet and GeoNet, for example, are self-supervised deep networks that estimate scene structure (depth) and camera motion from RGB image sequences.
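The training signal behind such self-supervised networks is a photometric reconstruction loss: a source frame is warped into the target view using the predicted depth and camera motion, and the network is penalized for the intensity difference. As a minimal sketch (the actual view synthesis is assumed; here the "warp" is a synthetic one-pixel shift so the loss computation itself is concrete):

```python
import numpy as np

# Minimal sketch of the self-supervised signal used by networks like
# GeoNet and PackNet: a photometric loss between a target frame and a
# source frame warped into the target view.

def photometric_l1(target, warped_source):
    """Mean absolute intensity difference; low when the warp is correct."""
    return float(np.mean(np.abs(target.astype(np.float32)
                                - warped_source.astype(np.float32))))

rng = np.random.default_rng(0)
target = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

good_warp = target.copy()              # perfectly aligned reconstruction
bad_warp = np.roll(target, 1, axis=1)  # reconstruction misaligned by 1 px

# A correct depth/pose estimate yields a lower reconstruction error:
print(photometric_l1(target, good_warp) < photometric_l1(target, bad_warp))  # True
```

Minimizing this loss over many sequences teaches the network depth and ego-motion without any ground-truth labels.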
Using AR to Improve Customer Experience
Our team’s AR research and implementation have primarily been focused on improving customer service by helping customers navigate through self-service solutions to their problems. This improves the customer experience by increasing both efficiency and resolution speed.
The idea behind our focus is to enable customers to independently perform specific tasks which would typically require help from technicians, either on the phone or through on-premises visits. This provides a major benefit to both the business and the customers, as obtaining a technician’s help comes with a couple of drawbacks—high costs and delays in customer resolutions—that lead to negative impacts on the customer experience.
On the other hand, if customers are armed with the technology that makes it possible for them to navigate some of these tasks, they will achieve their intended objectives faster and at a lower cost to the company.
Two notable, broad areas where this applies are self-installation of new devices and self-troubleshooting of already-installed devices.
Below are a few examples of solutions we have created for these use cases. At a high level, the AR we implemented analyzes the ports of a device and then helps customers understand where each wire must go. To begin, the customer points their mobile camera at the device to receive installation guidance and start troubleshooting.
Fig 1: Identification of port
Fig 2: Instructing the customer
With the help of the tracking system, detailed instructions are delivered to solve customer challenges.
Fig 3: Identification of multiple ports on the device
Fig 4a: Detailed instructions to the customer
Fig 4b: Detailed instructions to the customer
In cases where self-troubleshooting of already installed devices is required, an agent will guide the customer using AR. However, if there is no resolution after these steps, a technician visit is triggered. Nevertheless, having the AR-based self-service options as the first step in the process has the strong potential to prevent these visits from occurring.
With AR technology, the applications that businesses can offer customers in the way of expanded, immersive experiences are endless. For our team, that means taking customer service to the next level. Why? Because all too often, simple troubleshooting and basic issues tax technician resources and end up causing long delays to the customer and high costs to the company. Neither is an ideal scenario.
By leveraging AR to enhance troubleshooting options, eClerx can deliver customer support that results in high-quality customer experiences, streamlined problem-solving, and reduced costs associated with a lower incidence of technician visits. Customers no longer need to wait for help or take the time to call in for a technician. Our virtual-based tools rapidly provide clear, detailed instructions as often as needed without the use of external support, delivering an effective, optimal customer experience.