Teaching computers to plan for the future | Tech News
As humans, we’ve gotten pretty good at shaping the world around us. We can choose the molecular design of our fruits and vegetables, travel faster and farther and stave off life-threatening diseases with personalized medical care. However, what continues to elude our molding grasp is the airy notion of “time” — how to see further than our present moment, and ultimately how to make the most of it. As it turns out, robots might be the ones that can answer this question.
Computer scientists from the University of Bonn in Germany reported this week that they were able to design software that could predict a sequence of events up to five minutes into the future with accuracy between 15 and 40 percent. These values might not seem like much on paper, but researcher Dr. Juergen Gall says they represent a step toward a new area of machine learning that goes beyond single-step prediction.
Although Gall’s goal of teaching a system to understand a sequence of events is not new (after all, this is a primary focus of the fields of machine learning and computer vision), his approach is. Thus far, research in these fields has focused on interpreting a current action or predicting a single anticipated next action. This was seen recently in the news when Stanford AI researchers reported designing an algorithm that could achieve up to 90 percent accuracy in its predictions regarding end-of-life care.
When researchers provided the algorithm with data from more than two million palliative-care patient records, it was able to analyze patterns in the data and predict with high accuracy when a patient would die. However, unlike Gall’s research, this algorithm focused on a retrospective, single prediction.
Accuracy itself is a contested question in the field of machine learning. While accuracies upwards of 90 percent look impressive on paper, there is debate about the over-inflation of these values through cherry-picking “successful” data, in a process called p-hacking.
In their experiment, Gall and his team used hours of video data demonstrating different cooking actions (e.g., frying an egg or tossing a salad). They presented the software with only a portion of each action and tasked it with predicting the remaining sequence based on what it had “learned.” Through this approach, Gall hopes the field can take a step closer to true human-machine symbiosis.
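The researchers’ software learns from video, which is far beyond the scope of a news article to reproduce. But the underlying idea of completing a partially observed sequence can be illustrated with a toy sketch: assume each video has already been reduced to a list of action labels, learn which action tends to follow which, and greedily extend an observed prefix. This first-order Markov model and the cooking labels below are purely illustrative assumptions, not Gall’s actual method.

```python
from collections import Counter, defaultdict

def train(sequences):
    """Count action-to-action transitions across labeled sequences."""
    transitions = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            transitions[cur][nxt] += 1
    return transitions

def predict_remainder(transitions, observed, steps):
    """Greedily extend an observed prefix with the most likely next actions."""
    current = observed[-1]
    predicted = []
    for _ in range(steps):
        if not transitions[current]:
            break  # no known continuation from this action
        current = transitions[current].most_common(1)[0][0]
        predicted.append(current)
    return predicted

# Hypothetical cooking-action sequences, for illustration only.
data = [
    ["crack_egg", "stir", "fry", "plate"],
    ["crack_egg", "stir", "fry", "plate"],
    ["crack_egg", "stir", "fry", "season", "plate"],
    ["chop", "mix", "season", "plate"],
]
model = train(data)
print(predict_remainder(model, ["crack_egg", "stir"], 3))  # → ['fry', 'plate']
```

Shown part of a sequence (“crack an egg, then stir”), the sketch predicts the most frequent continuation seen in training, stopping when no continuation is known, which mirrors, in miniature, the prefix-in, remainder-out setup of the experiment.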
“[In the industry] people talk about human robot collaboration but in the end there’s still a separation; they’re not really working close together,” says Gall.
Instead of only reacting or anticipating, Gall proposes that, given a proper hardware body, this software could help human workers in industrial settings by intuitively knowing the task and helping them complete it. Gall also sees a purpose for this technology in a domestic setting.
“There are many older people and there’s efforts to have this kind of robot for care at home,” says Gall. “In ten years I’m very convinced that service robots [will] support care at home for the elderly.”
The number of Americans over the age of 65 is approximately 46 million today, according to a Population Reference Bureau report, and is predicted to double by the year 2060. Of that population, roughly 1.4 million live in nursing homes, according to a 2014 CDC report. The impact that intuitive software like Gall’s could have has been explored in Japan, where just over one-fourth of the country’s population is elderly. From PARO, a soft, robotic therapy seal, to the sleek companion robot Pepper from SoftBank Robotics, Japan is beginning to embrace the calm, nurturing assistance of these machines.
With this advance in technology for the elderly also comes the bitter taste that perhaps these technologies will only create a further divide between the generations — outsourcing love and care to a machine. For an industry not yet mature, it is hard to say where this path will conclude, but ultimately that is in the hands of developers to decide, not the software or robots they develop. These machines may be getting better at predicting the future, but even to them, their fates are still being coded.