We should build a baby-brained artificial intelligence

Alison Gopnik’s career began with a psychology experiment she now considers ridiculous. Aiming to understand how 15-month-olds connect words with abstract concepts (daddy = caregiver), she decided to visit nine kids once a week for a year. The then Oxford graduate student would record everything they said as part of her dissertation. “It was absurd for a million reasons,” says Gopnik, holed up on a winter Friday in her office at the University of California at Berkeley, where she is a professor of developmental psychology. “If a child had moved away, if there weren’t any take-aways after the year, or any number of things, all that work would have been gone,” she says, before adding, “I would never allow a student of mine to do anything like that today.”


Though her experiment didn’t solve any language-acquisition mysteries, it did overturn her assumptions about childhood learning, and it altered her career path. Now her research has drawn the interest of scientists who want to adapt her insights to their machine-learning algorithms. What she learned about kids’ smarts while a grad student still holds sway—for her field and possibly theirs. “Instead of thinking about children as these kind of starter adults, I realized they were profoundly different,” says Gopnik, now 62, and with her own grown children and grandchildren. “The way they use words, the meanings they express, the way they express them—none of it matched how adults think or speak.”

Today, Gopnik oversees her own cognitive development lab at UC Berkeley, and is the author of several books on early childhood learning and development. She’s a TED alum, a Wall Street Journal columnist, and has attained that singular intellectual height of crossing over into pop culture, appearing on shows like Good Morning America and The Colbert Report. Gopnik’s message: Adult cognitive primacy is an illusion. Kids, her research shows, are not proto-adults with fruit-fly-like attention spans, but in fact our occasional superiors. “Children, even very young children,” she says, “are in many ways smarter, more inventive, and better at learning than adults.”

The reason: size and shape matter. Research shows that the bulk and structure of a child’s brain confer cognitive strengths and weaknesses. The same goes for adults. For example, a developed prefrontal cortex allows grown-ups to focus, plan, and control their impulses: valuable skills that let us write magazine articles and avoid jail time. But evidence suggests a developed cortex can also make it hard to learn new or surprising concepts and can impede creative thinking. Toddler brains, constantly abuzz with fresh neural connections, are more plastic and adaptive. This makes them bad at remembering to put on pants but surprisingly good at solving abstract puzzles and extracting unlikely principles from extremely small amounts of information.

These are handy skills. It turns out a lot of smart people want to think this way—or want to build machines that do. Artificial-intelligence researchers at places like Google and Uber hope to use this unique understanding of the world’s most powerful neural-learning apparatus—the one between a toddler’s ears—to create smarter self-driving cars. Coders can create software that beats us at board games, but it’s harder to apply those skills to a different task—say, traffic-pattern analysis. Kids, on the other hand, are geniuses at this kind of generalized learning. “It’s not just that they figure out how one game or machine works,” says Gopnik. Once they’ve figured out how your iPhone works, she says, they’re able to take that information and use it to figure out the childproof sliding lock on the front door.


Cracking the codes of these little code breakers wasn’t Gopnik’s original career plan. As an undergrad, she began studying life’s big problems, toiling in the field of analytic philosophy. Back then, none of her peers pondered the thinking of kids. But Gopnik became convinced kids were key to unlocking one of the oldest epistemological queries: How do we know stuff about the world around us? Borrowing the brain-as-computer model, Gopnik sought to ask questions about the software running on this little machine that allows it to perform complicated functions. “Kids are the ones doing more generalized learning than anybody else,” she says, “so why wouldn’t you want to understand why they’re so good at it?”

The advantages of installing a preschool perspective into machines, she says, can be understood by considering two popular, but opposing, strategies: bottom-up and top-down learning. The former works the way you expect: Say you want a computer to learn to recognize a cat. With a bottom-up or “deep-learning” strategy, you’d feed it 50,000 photos of furry felines and let it extract statistics from those examples. A top-down strategy, on the other hand, requires just one example of a cat. A system using this strategy takes that single picture, builds a model of “catness” (whiskers, fur, vertical pupils, etc.), and then uses it to try to identify other cats, revising its cat hypothesis as it goes, much like a scientist would.

Children employ both methods at once. They’re good at figuring out things and extracting statistics, says Gopnik. And they use that data to come up with new theories and structured pictures of the world. Successfully distilling both knowledge-building approaches into algorithms might produce artificial intelligence that can finally do more than just beat us at Go and recognize animals.
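The contrast between the two strategies can be sketched in a few lines of toy Python. This is purely illustrative, assuming a simplified world where animals are described as sets of features; the feature names and the majority-vote threshold are inventions for the sketch, not anything from Gopnik’s research or any company’s actual systems:

```python
def bottom_up_learn(examples):
    """Bottom-up ("deep learning") flavor: extract statistics from many
    labeled examples. Here, a feature counts as part of "catness" if it
    appears in more than half of the cat examples seen."""
    counts = {}
    cats = [feats for feats, is_cat in examples if is_cat]
    for feats in cats:
        for f in feats:
            counts[f] = counts.get(f, 0) + 1
    return {f for f, c in counts.items() if c > len(cats) / 2}


def top_down_learn(first_cat, later_examples):
    """Top-down flavor: build a hypothesis of "catness" from a single
    example, then revise it as new evidence arrives, like a scientist.
    Any feature a later cat lacks is dropped from the hypothesis."""
    hypothesis = set(first_cat)
    for feats, is_cat in later_examples:
        if is_cat:
            hypothesis &= set(feats)
    return hypothesis
```

Both routes can converge on the same concept, but the top-down learner gets a usable (if over-specific) hypothesis after one example and only refines it afterward, which is roughly the scientist-like revision the article describes.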
It might also, Gopnik hopes, change outmoded ideas that we all seem to share about intelligence. “We still tend to think that a 35-year-old male professor is the ultimate goal of human cognition,” she says, “that everything else is just leading up to or deteriorating from that cognitive peak.”

That model doesn’t make sense for a variety of reasons. Studies from fields like evolutionary biology, neuroscience, and developmental psychology suggest we simply have different cognitive strengths and strategies at different stages of our lives. “Children will have one set of ideas about how people and the world work when they’re 2, and then another set when they’re 3, and another set when they’re 5,” says Gopnik. “It’s like they’re actively trying to think up a coherent picture of the world around them, and then constantly changing that picture based on the observations they make.”


That frenetic hypothesis formation—and ongoing reformation—isn’t a bug; it’s a highly desired feature. And if we want our machines to possess anything approximating human intelligence, maybe we should think about giving them a childhood too.

Your brain from cradle to rocking chair

We’re born helpless and dumb. As we mature, experience and schooling teach us useful things, and we get woke. Then, year by year, we slip back into feeblemindedness. That’s the picture most of us have of intelligence. Unfortunately, it’s dumb. Research reveals that each period of cognitive development offers learning strategies as well as trade-offs. It’s that combo of aha and duh that actually makes humans truly intelligent.

Infant: 0-18 months
An infant brain forms 1 million new neural connections each second, helping her to develop emotions, motor skills, attachments, and working memory. At 11 months, she can already form hypotheses about how the world works. At 18 months, she has a sense of self.

Toddler: 2-5 years
When it comes to learning abstract concepts, preschoolers beat adults. At 4 years old, 66 percent of her calories go to her brain—fuel for the exploration and creative thinking that define this period. By the time she finishes preschool, her gray matter has quadrupled in size.

School-age: 6-11 years
The brain of a 6-year-old has reached 90 percent of its adult size. Neural pruning ramps up as the brain discards unused connections. The prefrontal cortex starts to develop more, resulting in longer attention spans, and an increased reliance on language and logic to learn.

Adolescence: 12-24 years
Adolescence marks a return to the neural flexibility and plasticity that characterized her preschool years. But she’s not living in a protected context. A reliance on the amygdala—a center for emotions, impulses, and instinctive behaviors—might result in trademark “risk-taking.”

Adulthood: 25-59 years
By the time she reaches adulthood, prefrontal control is at its peak. A developed frontal lobe helps her plan for the future and control her impulses, but there’s evidence that creativity and cognitive flexibility take a big hit. Learning anything surprising? Also a lot harder.

Senior: 60+ years
Bring on short-term-memory loss, neurodegenerative diseases, and declines in conceptual reasoning. Still, other cognitive abilities continue to grow. Skills involving vocabulary, math, verbal comprehension—what’s known as crystallized intelligence—are among them.

Bryan Gardiner is a contributing editor at Popular Science. He last wrote about ubiquitous computing.

This article was originally published in the Spring 2018 Intelligence issue of Popular Science.
