Next Stage of AR Mapping

Cory Hatten
3 min read · Nov 6, 2020


Simultaneous localization and mapping, also known as SLAM, is a method for creating digitized maps of environments in real time. (For an explanation and example of the uses of SLAM, take a look at Autonomous Navigation.) This is a field that robotics has been working on for quite a while now, and one that augmented reality (AR) is seeking to hone to a razor's edge for accuracy. SLAM requires multiple inputs, algorithms, and a lot of processing power to be fully realized. Some of the common sensory inputs used to map an environment in real time are lidar, RGB-D cameras, infrared sensors, and odometers. Each of these sensory inputs has flaws that hold it back from being the perfect choice for a single AR input source. For instance, an RGB-D camera can only function in adequate lighting, and infrared sensors do not function well in sunlight. Odometers are notorious for being just slightly off in their readings, and lidar, until recently, was too cumbersome to use for mapping.
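To make the odometer problem concrete, here is a minimal, hypothetical sketch (plain Python with NumPy; the step counts and error magnitudes are made up purely for illustration) of how tiny per-step odometry errors compound into a large position drift during dead reckoning:

```python
import numpy as np

# Hypothetical dead-reckoning demo: a robot drives a 20 m straight line in
# 1,000 small steps, but each wheel-odometry reading is off by a tiny,
# random amount in both distance and heading.
rng = np.random.default_rng(0)

steps = 1000
true_step = 0.02        # metres travelled per step (20 m total)
dist_noise = 0.0005     # small error on each distance reading (metres)
heading_noise = 0.002   # small heading error per step (radians)

true_pos = np.array([0.0, 0.0])
est_pos = np.array([0.0, 0.0])
est_heading = 0.0

for _ in range(steps):
    # Ground truth: the robot moves straight along the x-axis.
    true_pos += np.array([true_step, 0.0])

    # Odometry estimate: distance and heading are each slightly wrong.
    noisy_step = true_step + rng.normal(0, dist_noise)
    est_heading += rng.normal(0, heading_noise)
    est_pos += noisy_step * np.array([np.cos(est_heading), np.sin(est_heading)])

drift = np.linalg.norm(true_pos - est_pos)
print(f"True position:      {true_pos}")
print(f"Estimated position: {est_pos}")
print(f"Accumulated drift:  {drift:.2f} m")
```

Even with per-step errors well under a millimetre, the estimate wanders noticeably by the end of the run, which is why SLAM systems fuse odometry with cameras, lidar, or other sensors rather than trusting any single input.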

All of these problems can be rectified for robotics with a bit of work; one example of this is Nvidia's Isaac. However, augmented reality adds a whole new level of complexity to the SLAM equation. Ground robots typically operate at a fixed height and do not need to worry about pitch and roll rotations. For fully realized AR SLAM, virtual elements must be able to interact with and track the real-world environment that is simultaneously being mapped. Gyroscopes become a necessity to keep track of how the user tilts their head, and virtual elements must react accurately to those changes.

One method that aids the accuracy of digital overlays is the use of targets. A target can best be likened to a QR code: it is an optical marker for the camera to focus on, giving it an absolutely defined location at which to place an effect. “The most successful target designs are circular or square shapes. Circular shapes project onto an ellipsoid in the image, while squares project onto a general quadrilateral. Both shapes can easily be detected in the images.” (AR: Principles and Practices) A minimal sketch of that detection step appears below.

The most appealing method of SLAM for AR is a camera- and gyroscope-based system, for its compactness and hardware simplicity. It requires computationally demanding algorithms to map and overlay surfaces, and it also requires the user to be in adequate lighting. Overall, SLAM for AR is still in the works, but it is developing fast: the new Apple devices coming out can detect depth and layer AR elements in front of or behind objects they see in the real world, all in a package that fits comfortably in the palm of your hand.
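The square-target idea from the quoted passage can be sketched with standard OpenCV calls: threshold a camera frame, find contours, and keep the large, convex, four-sided ones as candidate markers. This is only an illustrative sketch, not the pipeline of any particular AR SDK; the threshold and area values are arbitrary assumptions, and it assumes the OpenCV 4.x return signature of findContours.

```python
import cv2

def find_square_targets(frame, min_area=500.0):
    """Return corner quadrilaterals of square-ish targets in a BGR frame.

    A square target photographed at an angle projects onto a general
    quadrilateral, so we look for large, convex, four-sided contours.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Adaptive thresholding copes better with uneven lighting than a fixed cutoff.
    binary = cv2.adaptiveThreshold(
        blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, 11, 2)

    # OpenCV 4.x: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(
        binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    quads = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue  # ignore small speckles
        perimeter = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            quads.append(approx.reshape(4, 2))
    return quads

if __name__ == "__main__":
    # Hypothetical usage: grab a single frame from the default webcam.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if ok:
        targets = find_square_targets(frame)
        print(f"Found {len(targets)} candidate square targets")
```

In a real AR pipeline, those four corners would then feed a pose estimator (for example, a perspective-n-point solve) so the virtual effect can be rendered at the marker's exact position and orientation.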
