Researchers make android child’s face strikingly more expressive
Japan’s affection for robots is no secret. But is the feeling mutual in the country’s remarkable androids? Roboticists are now a step closer to giving androids the expressive faces they need to communicate with humans.
Despite advances, capturing humanlike expressions in a robotic face remains an elusive challenge. Although the system properties of androids have been addressed in general terms, their facial expressions have not been examined in detail. This is owing to factors such as the huge range and asymmetry of natural human facial movements, the restrictions of the materials used in android skin, and the intricate engineering and mathematics driving robots’ movements.
A trio of researchers at Osaka University has now found a method for identifying and quantitatively evaluating facial movements on their child android robot head. Named Affetto, the first-generation model was reported in a 2011 publication. The researchers have since developed a system to make the second-generation Affetto more expressive. Their findings offer a path for androids to express greater ranges of emotion and, ultimately, to interact more deeply with humans.
“Surface deformations are a key issue in controlling android faces,” study co-author Minoru Asada explains. “Movements of their soft facial skin create instability, and this is a big hardware problem we grapple with. We sought a better way to measure and control it.”
The researchers investigated 116 different facial points on Affetto to measure its three-dimensional movement. The facial points were underpinned by so-called deformation units. Each unit comprises a set of mechanisms that create a distinctive facial contortion, such as lowering or raising part of a lip or eyelid. Measurements from these units were then fed into a mathematical model to quantify their surface motion patterns.
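The article does not spell out the mathematical model, so the sketch below is only a rough illustration of the idea: measured 3-D displacements of facial points are related to deformation-unit actuation levels through a simple least-squares linear fit. The unit count, the data shapes, and all names (`U`, `X`, `W_fit`) are assumptions made for the example, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_points = 116   # facial measurement points, as in the study
n_units = 17     # deformation units (an assumed count, not from the paper)
n_trials = 200   # actuation trials

# Each trial: a vector of deformation-unit actuation levels in [0, 1].
U = rng.uniform(0.0, 1.0, size=(n_trials, n_units))

# Simulated ground-truth mapping from unit actuation to stacked 3-D point
# displacements (n_units -> n_points * 3), plus measurement noise.
W_true = rng.normal(size=(n_units, n_points * 3))
X = U @ W_true + rng.normal(scale=0.05, size=(n_trials, n_points * 3))

# Fit a linear surface-motion model X ≈ U @ W by least squares.
W_fit, *_ = np.linalg.lstsq(U, X, rcond=None)

# Predict per-point surface displacement for a new actuation command.
u_cmd = rng.uniform(0.0, 1.0, size=n_units)
pred = (u_cmd @ W_fit).reshape(n_points, 3)
print("per-point displacement shape:", pred.shape)
print("relative fit error:", np.linalg.norm(X - U @ W_fit) / np.linalg.norm(X))
```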
While the researchers encountered challenges in balancing the applied force and in adjusting the synthetic skin, they were able to use their system to tune the deformation units, giving precise control over Affetto’s facial surface motions.
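How that tuning was carried out is not described here. One generic way to adjust unit commands against a fitted linear model is a projected-gradient search that respects actuator limits; everything below (the stand-in model `W`, the [0, 1] command range, the `command_for_target` helper) is a hypothetical illustration, not the authors’ method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_dims = 17, 116 * 3           # assumed unit count; 116 points x 3 axes
W = rng.normal(size=(n_units, n_dims))  # stand-in for a fitted surface-motion model

def command_for_target(W, x_target, n_iters=200, lr=0.5):
    """Projected-gradient search for u in [0, 1]^n_units with u @ W ≈ x_target."""
    u = np.full(W.shape[0], 0.5)            # start from mid-range actuation
    for _ in range(n_iters):
        err = u @ W - x_target              # current surface-motion error
        u -= lr * (err @ W.T) / W.shape[1]  # scaled gradient of 0.5 * ||err||^2
        u = np.clip(u, 0.0, 1.0)            # project back onto actuator limits
    return u

# A reachable target: a deformation produced by some interior command.
x_target = rng.uniform(0.25, 0.75, size=n_units) @ W
u = command_for_target(W, x_target)
print("residual:", np.linalg.norm(u @ W - x_target))
```

In practice the random `W` would be replaced by a model fitted from measurements like those above, with the force-balance and skin constraints the researchers mention folded into the limits.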
“Android robot faces have persisted in being a black box problem: they have been implemented but have only been judged in vague and general terms,” study first author Hisashi Ishihara says. “Our precise findings will let us effectively control android facial movements to introduce more nuanced expressions, such as smiling and frowning.”