AI debate raises more questions than answers
“The human race, version 2” — a thought to inspire hope or fear, maybe a little (or a lot) of both. “We today,” says Komazawa University economist Tomohiro Inoue, whose thought it is, “will soon be ‘the former human race.’”
The propulsion forward comes from artificial intelligence, already upon us and infinite in its potential. We’re going somewhere — that much is certain. Where? In what direction? That, there is no knowing.
Inoue’s remark occurs in a conversation with two others, published by the monthly Bungei Shunju (March) as part of a wide-ranging package of articles on the subject. Taken as a whole, the feature raises questions — more questions than answers; understandably enough, at this primitive stage. Clearly, AI can make us better — but how much better do we want to be? At what point does better become worse?
Imagine, Inoue suggests, an AI-enhanced job interview. Maybe the interviewer is a robot, or maybe a human who’ll feed your answers, facial expressions, blood pressure, sweat output, heart rate, who knows what, through AI analysis. Your skills will be measured, your enthusiasm weighed, your compatibility with the company environment computed and the company will end up with (presumably) just the employee it needs — maybe you, maybe someone else. Rejection is sweetened by the knowledge — the common knowledge — that AI does not err. To err is merely human. You’re grateful to have escaped the dead end of a job you’re not suited for. Everybody wins. That’s one way of looking at it. Why, then, does the scenario seem so eerily dreadful? Because it’s inhuman?
Is it inhuman? Isn’t it human to enhance our natural abilities? The first tools hundreds of thousands of years ago, the first machines thousands of years ago, the first factories centuries ago, all did just that.
AI enhances us mentally rather than physically, which seems to make all the difference. Machines and factories changed our lives. AI looks set to change us. In Inoue’s words, “The definition of ‘human’ is changing.”
Advanced intelligence was long a human monopoly, a defining human trait. Early computers could out-calculate us but required human programming. AI, as it develops, will not. It will learn as we do — from experience — only (it seems) faster, better. Inoue dates an “AI boom” to March 2016, when AlphaGo, a go-playing program developed by Alphabet Inc.’s Google DeepMind, trounced a human go champion 4-1 in a five-game match. This was marvelous to all — exhilaratingly so to some, ominously so to others. Visions of machine intelligence elbowing us aside, taking over the world or using us as slaves were fringe fears, maybe, but not lunatic fringe.
It won’t come to that, argues anatomist Takeshi Yoro in an essay for Bungei Shunju. Yoro, best known for his 2004 bestseller “Baka no Kabe” (“The Wall of Fools”), sees no comparison between the infinitely complex human brain on the one hand and, on the other, the digital brain whose neurons are ones and zeros. We’ll never calculate like them, but they’ll never be human, and as for being superhuman — consider, suggests Yoro, this experiment: Two children, one 5, the other 3, stand by as their older sister puts a stuffed animal into a box marked “A.” Big sister closes the box and leaves. Enter the children’s mother, who transfers the toy from box “A” to box “B.” She closes the box and leaves. Re-enter big sister. “Which box,” she asks, “will I open?”
“B,” says the 3-year-old — she knows that’s where the toy is. “A,” says the 5-year-old, taking into account big sister’s ignorance of the change. The point, says Yoro, is that the 5-year-old has learned to consider points of view other than her own. By age 5 we’ve evolved that far; not by age 3.
It’s not a question of intelligence vs. unintelligence but of human vs. pre-, non-, super- or inhuman. It’s thinking as opposed to calculating, and its budding in early childhood — not at birth as instinct, or later on as a skill mastered — points to something uniquely human. From it stem our first steps into a social and moral universe, knit together by concepts like law, the fair exchange of goods, government — ultimately, democracy and respect for the rights and freedoms of others. These are heights to which animals and, by Yoro’s reckoning, artificial intelligence, however artificially intelligent, cannot rise.
To the extent that morality and freedom and democracy are strengths rather than weaknesses, our supremacy over AI would seem assured. Yoro is not without doubts, however.
The danger he sees is not so much AI conquering the world and turning humans into domestic animals as AI gradually infiltrating us — remaking us in its image. Will AI digitalize us?
To some degree we’re already part-digital. This predates AI. Yoro illustrates with a story. He was at his regular bank, on some business or other. Winding up the transaction, the bank officer said, “By the way, would you mind showing me some identification?” It so happened Yoro had nothing on him. He shrugged. “You know who I am,” he said.
“Yes, professor, but …”
What the banker knew, and who Yoro in person was, were quite beside the point. “Without ID,” Yoro sums up, “I may as well not exist.”
That benefits will accrue seems unquestionable. We can imagine enhanced intelligence inventing products now inconceivable, satisfying needs still unfelt, answering questions we haven’t even begun asking — about life, the universe, what have you. It can help us clean the environment, diagnose and cure disease, mind our children, nurse our elderly. Soon it will be driving our cars.
But will an AI-permeated society be democratic? Will it be free? The question Yoro raises implicitly is posed openly by Nobuo Kawakami, chief technology officer of media firm Dwango. As one of the participants in the dialogue with Inoue, he invokes democracy and freedom as core values of Western civilization. Progress and prosperity seemed to flow from them. They triumphed over other ideologies.
The sudden rise of China to superpower status, economically and politically, challenges, says Kawakami, that triumph. China, proudly undemocratic and unfree, is a world leader in AI — far ahead, he and Inoue agree, of democratic Japan. Is democracy holding Japan back?
“I doubt,” Kawakami says, “that the Japanese are so committed to democracy that they would defend it even at the expense of prosperity and national power.”
If it came to that, he fears, the popular conclusion may well be, “Let’s be like China.”
Big in Japan is a weekly column that focuses on issues being discussed by domestic media organizations. Michael Hoffman’s new book, “Fuji, Sinai, Olympos,” is now on sale.