Why Everything Elon Musk Fears About AI Is Wrong
Piero Scaruffi’s LASER events have nothing to do with actual lasers. Instead, a Leonardo Art Science Evening Rendezvous brings together artists, inventors, scientists, scholars, and thinkers for informal presentations and conversations on a wide variety of topics.
A recent LASER at Stanford, for example, touched on the future of journalism, computer vision, millennials building offline lives via the internet, and a chat about whether ranting is ever a good idea.
Refreshingly, it was not a pontificating, podium-heavy lecture. Instead, 100+ academics and thinkers gathered for a lively conversation that continued as we headed out from the lecture hall and into the mellow Northern California night.
Scaruffi himself is eminently quotable and pulls no punches. He’s quick to point out he’s a historian, not a futurist, so he relies on facts not fantasy. Plus, as a cognitive scientist, former visiting scholar at Harvard and Stanford Universities, educator and author of several books and hundreds of articles on AI and cognitive science, he’s got a significant long-term view of many academic disciplines.
We caught up with him after the event via email, where he countered the current fears around the Singularity, as espoused by Elon Musk and others. Here are edited and condensed excerpts from our conversation.
Why do you feel Elon Musk, et al, are wrong to fear AI?
First of all, I would like to know what is the “AI” that they are talking about. If they are talking about technology in general, then welcome to the club. There’s a long history that goes back at least to the 1960s of philosophers, sociologists, and so on who warned humankind about the danger of technology. There are dangers in the way we deploy and use technology, whether it’s nuclear power or computers. Why should we worry more about AI than we should about nuclear weapons? Or about the very conventional and very dumb networks of computers that control the global financial markets? The AI that I know is a fascinating field of research, a branch of computational mathematics [but] unfortunately it is still in its Stone Age.
Why do you say that?
The AI practitioners that I know spend their days tweaking formulas and software code. That’s as far from Hollywood movies as you can get. I think Elon Musk and others have Hollywood movies in mind, and that’s a completely different AI—an AI that I don’t think exists or will [do so] any time soon.
Progress in AI has been very slow; the recent acceleration is mostly due to three factors:
- We finally have very large datasets (what you need to train neural networks)
- We finally have very fast and affordable processors (what you need to run multi-layer neural networks)
- One major corporation (the No. 1 or No. 2 most valued company in the world) has done a lot of PR for AI.
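The first two factors — large datasets and fast processors for multi-layer neural networks — can be illustrated with a toy sketch (a hypothetical example, not something from the interview): a tiny two-layer network trained by gradient descent on the XOR problem. The architecture and all numbers here are illustrative assumptions.

```python
import numpy as np

# Toy "multi-layer neural network": 2 inputs -> 8 hidden sigmoid units -> 1 output,
# trained on the XOR dataset with plain gradient descent (squared-error loss).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # the (tiny) dataset
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Loss before training, for comparison.
loss0 = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2)

lr = 1.0
for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared-error loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

loss = np.mean((out - y) ** 2)
print(f"loss before: {loss0:.4f}, after: {loss:.4f}")
```

The point of the sketch is the scaling story in the list above: the same arithmetic, run over millions of examples and many more layers, is what large datasets and GPUs made practical.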
All of this is great for AI, which, I repeat, is a fascinating branch of mathematics. I am absolutely delighted that so many students are jumping into this field, a big difference from 10 years ago. This is a discipline that can potentially provide solutions in many fields, and save millions of lives.
Your experience of AI predates IBM Watson’s party trick on Jeopardy by several decades.
I entered the field in 1984. AI was already 30 years old, and it was experiencing a boom of interest. Back then neural networks were not really popular because it was too difficult to implement them on the computers of those days. We were working on “expert systems” that were meant to simulate human experts in specific domains. That was a very practical application, but we over-promised and lost credibility. Hence the long “AI winter” that ended only recently with the advent of GPUs and datasets. I joke that video games saved AI, because GPUs were originally invented for the interactive real-time graphics of video games.
What were you working on, in terms of AI, at Olivetti in ’85?
I founded an Artificial Intelligence Center in 1985 at their research labs in Cupertino, literally next door to Apple’s headquarters. My friends from back then joke that my (expensive) AI Center caused the collapse of Olivetti, which at the time was the tenth largest computer manufacturer in the world, but 10 years later…
What did you do after Olivetti?
I left in 1995 and went to Stanford as a visiting scholar for a few years. It was difficult to find a job in AI during the 1990s. I was saved by the web and e-commerce: I returned to AI when it became important to customize products online. Product configuration had been a classic application of [the] expert systems [I’d worked on].
You’ve said that we have organized our world so that machines can navigate it, which, as you point out, isn’t “intelligence” at all.
Sometimes the behavior of a machine looks intelligent because we have structured the environment in such a way that even an idiot can perform an amazing job there, and, in fact, you don’t even need a human being there. If Leonardo [da Vinci] came back today and saw the highly automated subways of Japan, he would probably ascribe a high degree of intelligence to those trains. But the real intelligence went into structuring the subway in such a way that trains can run largely autonomously. When you see a self-driving car, you shouldn’t take a picture of the car, but of the white lines on the asphalt. Someone has marked the road, and posted signs, and created the GPS system, so that a self-driving car can go from A to B.
If, in your opinion, AI isn’t intelligent at all, because nothing artificial can actually be intelligent, what would you call the field itself? Merely advanced computation with super fast processors?
Yes, the AI that I know is a branch of computational mathematics. That’s really all there is to it: math. And it’s not even that difficult. Compared with the equations of theoretical physics, which is what my university thesis was on, computational math is not that complicated. It was basically invented in 1936 by Turing, so it is only 80 years old. But look how it has changed the world.
Back to the event we all attended at Stanford University this evening. When and why did you start the LASER series?
Culture. Technology and science don’t exist in a vacuum; they are part of a broader context. The Leonardo Art Science Evening Rendezvous (LASERs) are meant to emphasize that it is difficult to have one without the other. Creativity is not mono-dimensional. It spreads over many dimensions, and each one fuels the others. When I wrote A History of Silicon Valley with my friend Arun Rao, I was honestly puzzled: we were trying to explain what is unique about this place. I had started studying the Bay Area’s cultural landscape in the 50s and I finally found something that was truly unique in the world: the San Francisco counterculture, the political protests at UC Berkeley, the beatnik poets, the hippies, and you name it. The Bay Area was a formidable producer of really crazy ideas. No matter what you gave them, whether poetry or sculpture, they would do something totally unorthodox with it, including technology. The LASERs are simply a way to remind people that creativity is multidimensional, and clearly the idea has taken off: there are now LASER series at more than 30 universities worldwide.
Finally, in your opinion, how far are we from truly intelligent machines, or is that just the wrong question?
Yes, it’s the wrong question. That’s why I titled my book Intelligence is not Artificial. If a discipline can build an intelligent being, it will be biotech, and it sounds like they are really close. I see machines as useful, not intelligent. They can simulate many aspects of the human brain and, in very narrow domains, they—from the clock to AlphaZero—can perform a lot better. One can put together many many many apps and get the equivalent of a general-purpose intelligence, but it is still not intelligence to me. I think that Elon Musk should worry a lot more about biotech than about AI.
If you’re in the Bay Area, the next LASER event is scheduled for Aug. 6 at Stanford.