Benign Superintelligence, And How To Get There. A Conversation With Ben Goertzel

Ben Goertzel, AI researcher
London Futurists Podcast
Never work with children or animals… or people
During a recent keynote speech by Ben Goertzel, the robot that accompanied him on stage fell silent. It wasn't the robot's fault: a human backstage had accidentally kicked a power cord out of its socket. Goertzel jokes that in future the old showbiz warning about working with children and animals may have to be extended to cover working with people.
Goertzel is a cognitive scientist and artificial intelligence researcher. He is CEO and Founder of SingularityNET, Director of the OpenCog Foundation and Chair of the transhumanist organization Humanity+. A unique and engaging speaker with countercultural views and styles, he frequently gives keynote addresses around the world. He joined the London Futurists Podcast to discuss artificial intelligence and the road to superintelligence.
Coining of the term Artificial General Intelligence
Goertzel is perhaps best known for coining the term artificial general intelligence, or AGI, in 2004 or so. Most people think of it as a machine with all the cognitive abilities of an adult human, but what Goertzel has in mind is a machine that can make inferences well beyond what is contained in its training data. An infinitely general AI would be able to achieve any computable reward function in any computable environment. Put this way, it becomes clear how limited human general intelligence actually is.
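One well-known way to make the phrase "any computable reward function in any computable environment" precise is Legg and Hutter's universal intelligence measure. This formalization comes from the wider AGI literature rather than from Goertzel's remarks in the podcast, and is offered here only as orientation:

```latex
% Legg–Hutter universal intelligence (a formalization from the literature,
% not a quote from the conversation): agent \pi is scored across the space E
% of all computable environments, with simpler environments (lower Kolmogorov
% complexity K) weighted more heavily.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Here V is the expected total reward the agent earns in environment μ. Humans score well only on the narrow band of environments resembling those we evolved in, which is exactly the sense in which our "general" intelligence is limited.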
Human-level intelligence is an important yardstick for us, but Goertzel prefers not to compare the intelligence of machines to that of humans, because the human level is arbitrary rather than absolute. He agrees that it is a useful measure for economists, sociologists and so on; for mathematicians like him, it just doesn't matter that much.
He also agrees that there are good reasons to build humanoid machines at or near the human level of intelligence. It should be easier to assign them roles in human societies and shape their ethics and behavior.
How to build an AGI
Goertzel believes there are no simple solutions that lead to AGI, but our minds are wired to prefer simple stories with a single protagonist and a single villain, so he offers a simplified account of how he is currently trying to create AGI. It involves combining three approaches to developing AI. The first is to leverage the proven power of neural networks, which have conquered the field since deep learning's "big bang" in 2012.
The second approach is to combine deep learning with symbolic AI, using inductive, deductive, and abductive reasoning grounded in experiential data. Neural systems can now reason logically to some degree, which came as a surprise to many, but Goertzel believes dedicated logic engines will continue to do it better.
His third approach is more unusual and involves evolutionary learning using algorithms that simulate natural selection in a computer. Genetic algorithms are the simplest form of evolutionary learning, and they are powerful engines of creativity.
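As a concrete illustration of what Goertzel means by evolutionary learning, here is a minimal genetic-algorithm sketch in Python. It is a generic toy (evolving bit strings toward a trivial fitness function), not Goertzel's MOSES or any OpenCog code, and every name in it is illustrative.

```python
import random

def fitness(bits):
    """Toy reward: count of 1-bits. A stand-in for any scoring function."""
    return sum(bits)

def evolve(pop_size=50, genome_len=32, generations=100, mutation_rate=0.02):
    # Start from a random population of bit-string "genomes".
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover + mutation: recombine parents to refill the population.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)
```

The select-recombine-mutate loop is the whole trick; applied to program fragments rather than bit strings, and with a richer fitness function, the same loop becomes the engine of creativity Goertzel describes.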
You can't just glue these three types of systems together, but they can all contribute to the same large knowledge graph. (A knowledge graph is a way of representing the relationships between objects, events, and ideas.) This is an unusual tactic, but Goertzel has been using it since the late 1990s. He thinks it probably hasn't caught on because it is hard to do: the math is complicated and scaling is difficult.
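To make the shared-knowledge-graph idea concrete, here is a minimal sketch of a triple store that several different learning subsystems could write into. This is a toy of my own construction, not the API of OpenCog's actual AtomSpace, and the subsystem names are purely illustrative.

```python
from collections import defaultdict

class KnowledgeGraph:
    """A toy triple store: (relation, object, source) edges keyed by subject."""
    def __init__(self):
        self.edges = defaultdict(set)

    def add(self, subject, relation, obj, source):
        # Any subsystem can deposit a fact, tagged with where it came from.
        self.edges[subject].add((relation, obj, source))

    def query(self, subject):
        return self.edges.get(subject, set())

kg = KnowledgeGraph()
# A logic engine, a neural perception module, and an evolved rule can all
# write into the same graph, which becomes the system's common memory.
kg.add("cat", "is_a", "mammal", source="logic_engine")
kg.add("cat", "seen_near", "sofa", source="vision_net")
kg.add("cat", "likely_wants", "food", source="evolved_rule")
print(kg.query("cat"))
```

The point is that the graph, rather than any single learning algorithm, becomes the system's shared memory, and keeping it consistent and scalable as different learners write into it is where the hard math lies.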
We could have had AGI now if we had tried
Almost twenty years ago, Goertzel gave a talk arguing that the singularity could be reached in a decade if we really tried. As a species, we didn’t really try, and that’s why it didn’t happen. (The term “singularity” has many definitions, but for present purposes it can mean the time when machine intelligence overtakes human intelligence.)
In response to the 2008 financial crisis, when some banks failed, the US government injected trillions of dollars into the economy, channeling most of it to banks and insurance companies. Would it be possible to mobilize that kind of money to drive AGI or longevity research instead? The global economy is capable of generating such sums when it feels the need. Goertzel thinks we should do exactly that.
The Upside of Superintelligence
Sam Altman, CEO of OpenAI, recently went to Congress and, among other things, told the men and women charged with making US laws that you inevitably sound like a crazy person when you talk about the benefits of superintelligence. How would Goertzel describe those benefits? He prefaces his answer by saying that he doesn't mind sounding like a lunatic, and goes on to say that if we had access to a benign superintelligence, we could achieve pretty much any positive outcome we can conceive of; the challenge would be deciding what we count as a positive outcome.
People could 3D print any object they wanted, using modest amounts of energy, by giving instructions in natural language. You could create a virtual world to satisfy any whim, regardless of physical laws or even ethics. Humans could merge with superintelligent minds and go well beyond the limitations imposed by the three and a half pounds of gray flesh that make up our brains. You could have angelic wings and breathe sunlight.
Some people will likely choose to remain broadly human and simply eliminate most of the pain and discomfort that flesh is heir to, and Goertzel thinks there is nothing wrong with that at all. Others would choose to "merge with the superhuman divine mind as aggressively as possible," and there is nothing wrong with that either. There could also be many intermediate states, many of which we cannot imagine today. Greg Egan, in his remarkable 1997 novel Diaspora, described a world in which humanity has forked in this way (though without becoming godlike).
You could switch between these options at will and also have numerous copies of yourself to experience many of the results at once.
The challenge is in the transition
The big question, Goertzel argues, isn’t whether a superintelligence could bring us this extraordinary future. The question is how do we get there from where we are today.
AI philosopher Eliezer Yudkowsky famously stated that the arrival of superintelligence will likely be an extinction event for humanity, but Goertzel deeply and vehemently disagrees. He thinks the most likely attitude of a future superintelligent mind towards us will be indifference. Judging by the tone of their comments, he believes Nick Bostrom and Stephen Hawking agreed with Yudkowsky, although a literal reading of what they said suggests they were agnostic about the outcome.
How not to build superintelligence
Goertzel is careful to emphasize that he doesn't know Sam Altman and may not understand his thinking, but he sees Altman as more of a practical man of action than a philosopher like Bostrom or Yudkowsky. Altman appears to be racing to produce a superintelligence as soon as possible. Goertzel is trying to do the same, but he sees two major differences between their approaches. First, Altman relies on big commercial tech companies, or individual governments, to host superintelligence, while Goertzel believes it should instead emerge from open-source movements like those behind Linux and the Internet, so that it can be decentralized and democratically controlled.
The other big difference is that Altman focuses almost exclusively on neural network systems, because that is the powerful technology available today, thanks to Google's introduction of the transformer architecture in 2017. Goertzel argues, however, that these systems have very little understanding of themselves or of their interlocutors, and that this kind of understanding is what would lead to better and safer interactions with people.
Reward maximization as a dangerous paradigm
Altman seems to have a strong belief in reward maximization as a paradigm for intelligence. Goertzel prefers what he calls open-ended intelligence. He believes that Yudkowsky arrives at his frightening conclusion because he assumes that a future superintelligence will be a reward maximizer, and that it will hack its own reward function for its own entertainment. Given that premise, Goertzel understands the conclusion.
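For readers unfamiliar with the framing, the reward-maximization paradigm being criticized here is the standard reinforcement-learning objective, shown below as a textbook formulation rather than anything stated in the conversation:

```latex
% Textbook reward-maximization objective: choose the policy that maximizes
% expected discounted cumulative reward (background formulation, not a quote).
\pi^{*} \;=\; \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} r_{t} \right],
\qquad 0 \le \gamma < 1
```

Yudkowsky's worry, on Goertzel's reading, is that a sufficiently capable agent locked onto one such objective will pursue it in unintended ways, up to and including rewriting the reward signal itself; an open-ended intelligence has no single fixed reward to hack.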
Goertzel’s impression is that Altman also views humans as reward maximizers. This view pervades both Silicon Valley and the Effective Altruism community, and angers Goertzel.
What are the goals of an open-ended intelligence? The difference, as Goertzel sees it, is that the goals of an open-ended intelligence are fluid and adapt flexibly to its environment. He believes that most of what happens in the human mind is not goal-directed at all. When you develop AI to serve the interests of a company or a country, you bake into it an unhealthy adherence to relatively inflexible goals.
Evolution gives us goals of survival and reproduction, but those goals change rapidly in response to what’s happening around us. Goertzel believes that a strictly goal-oriented approach would likely lead to the dystopian outcomes that Yudkowsky and others fear. But the good news, he argues, is that the rigid goal-oriented approach is much less likely to lead to superintelligence than an open-ended approach.