How Robots Perceive the World Around Them

Robots can do amazing things, such as work collaboratively with humans in factories, deliver packages quickly within warehouses, and explore the surface of Mars. But despite those feats, we’re only beginning to see robots that can make a decent cup of coffee. For robots, being able to perceive and understand the world around them is critical to integrating smoothly into our lives. Habitual practices, such as turning on the coffee machine, dispensing the beans and finding the milk and sugar, require perceptual abilities that are still out of reach for many machines.

However, this is changing. Several different technologies are being used to help robots better perceive the environments in which they work, including recognizing the objects around them and measuring distances. Below is a sampling of these technologies.

LiDAR: light & laser-based distance sensors

Several companies develop LiDAR (light and laser-based distance measuring and object detection) technologies to help robots and autonomous vehicles perceive surrounding objects. The principle behind LiDAR is simple: shine light at a surface and measure the time it takes for the light to return to its source.

By firing rapid pulses of laser light at a surface in quick succession, the sensor can build a complex “map” of the surface it’s measuring. There are currently three primary types of sensors: single beam sensors, multi-beam sensors and rotational sensors.
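The time-of-flight principle described above reduces to a one-line calculation: a pulse travels to the surface and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (actual sensor interfaces vary by vendor):

```python
# Time-of-flight distance estimation, the principle behind LiDAR.
# The pulse travels to the surface and back, so distance = (c * t) / 2.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to a surface given a pulse's round-trip travel time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after roughly 66.7 nanoseconds indicates a surface
# about 10 meters away.
print(tof_distance(66.7e-9))
```

Because light covers about 30 cm per nanosecond, the sensor electronics must time the return with sub-nanosecond precision to resolve centimeter-scale distances.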

Single beam sensors produce one beam of light, and are typically used to measure distances to large objects, such as walls, floors and ceilings. Their beams can be separated into two kinds: highly collimated beams, similar to those used in laser pointers (that is, the beam remains small throughout the entire range), and LED or pulsed diode beams, which are similar to flashlights (that is, the beam diverges over large distances).
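The difference between a collimated and a diverging beam can be made concrete with a simple cone model: the spot diameter at a given range grows with the beam's divergence angle. This is an illustrative sketch with assumed example values, not figures from any particular sensor:

```python
import math

def spot_diameter(initial_diameter_m: float, range_m: float,
                  divergence_rad: float) -> float:
    """Beam spot diameter at a given range, using a simple cone model:
    spot = initial diameter + 2 * range * tan(divergence / 2)."""
    return initial_diameter_m + 2.0 * range_m * math.tan(divergence_rad / 2.0)

# Collimated laser beam (~0.5 mrad divergence, like a laser pointer):
# stays pencil-thin even at 50 m.
laser = spot_diameter(0.005, 50.0, 0.0005)

# Diverging diode beam (~5 degrees, like a flashlight):
# spreads to several meters across at 50 m.
diode = spot_diameter(0.005, 50.0, math.radians(5.0))

print(laser, diode)
```

This is why collimated sensors suit precise long-range measurement, while diverging beams trade spatial precision for a wider detection footprint.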

Multi-beam sensors simultaneously produce multiple detection beams, and are ideal for object and collision avoidance. Finally, rotational sensors produce a single beam while the device is rotated, and are often used for object detection and avoidance.

Part detection sensors

An important task often assigned to robots, especially within the manufacturing industry, is to pick up objects. More specifically, a robot needs to know where an object is located and whether it’s ready to be picked up. This requires the work of various sensors to help the machine detect the object’s position and orientation. A robot may already have part detection sensors built in, which may serve as an adequate solution if you’re only looking to detect whether or not an object is present.


Part detection sensors are commonly used in industrial robots, and can detect whether or not a part has arrived at a particular location. There are a number of different types of these sensors, each with unique capabilities, including detecting the presence, shape, distance, color and orientation of an object.
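At its simplest, a presence check like the one described above is a boolean signal, but real sensors glitch, so industrial code typically requires several consecutive positive reads before acting. A minimal sketch, where `read_sensor` is a hypothetical callback standing in for vendor-specific digital I/O:

```python
from typing import Callable

def part_present(read_sensor: Callable[[], bool],
                 required_consecutive: int = 3) -> bool:
    """Report a part as present only after several consecutive positive
    reads, filtering out one-off sensor glitches (a simple debounce)."""
    for _ in range(required_consecutive):
        if not read_sensor():
            return False
    return True

# Simulated sensor that always reports a part in position.
print(part_present(lambda: True))
```

In a real cell, a check like this would gate the pick motion: the robot only moves once the part is confirmed at the expected location.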

Robot vision sensors offer several high-tech benefits to collaborative robots across industries. Both 2D and 3D vision allow robots to manipulate different parts without reprogramming, pick up objects of unknown position and orientation, and correct for inaccuracies.

3D vision & the future of robot “senses”

The introduction of robots into more intimate aspects of our lives (such as in our homes) requires a deeper and more nuanced understanding of three-dimensional objects. While robots can certainly “see” objects through cameras and sensors, interpreting what they see from a single glimpse is more difficult. A robot perception algorithm, developed by a Duke University graduate student and his thesis supervisor, can guess what an object is, how it’s oriented and “imagine” any parts of the object that may be out of view.

The algorithm was developed by using 4,000 complete 3D scans of common household objects, including: an assortment of beds, chairs, desks, monitors, dressers, nightstands, tables, bathtubs and sofas. Each scan was then broken down into tens of thousands of voxels, stacked atop one another, to make processing easier. Using probabilistic principal component analysis, the algorithm learned categories of objects, their similarities and differences. This enables it to understand what a new object is without having to sift through its entire catalog for a match.
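The voxel step described above can be sketched briefly: a 3D scan arrives as a cloud of points, which is quantized into a fixed-size occupancy grid that a learning algorithm can process uniformly. This is a generic illustration of voxelization, not the Duke team's actual code, and the grid size is an assumed example value:

```python
import numpy as np

def voxelize(points: np.ndarray, grid: int = 30) -> np.ndarray:
    """Convert an (N, 3) point cloud into a binary occupancy grid of
    grid**3 voxels, the stacked-voxel representation described above."""
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins
    spans[spans == 0] = 1.0  # avoid division by zero on flat axes
    # Map each point to integer voxel coordinates in [0, grid - 1].
    idx = ((points - mins) / spans * (grid - 1)).astype(int)
    vox = np.zeros((grid, grid, grid), dtype=bool)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return vox

# A synthetic "scan" of 1,000 random points.
cloud = np.random.default_rng(0).uniform(size=(1000, 3))
occupancy = voxelize(cloud)
print(occupancy.shape)
```

Once every scan shares the same fixed-size representation, a dimensionality-reduction technique such as probabilistic principal component analysis can learn compact per-category models and match a new scan against them directly, rather than searching the whole catalog.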

While still in its infancy, the implementation of this algorithm (or those of a similar nature) pushes robotics one step further towards working in tandem with humans in settings far less structured and predictable than a lab, factory or manufacturing plant.

The ability to perceive and interact with surrounding objects and environments is critical to robots’ functionality, and to their applications working alongside humans. As the technology advances, there will undoubtedly be a need for increased robotics education and literacy, as well as for robotics technicians.
