A provocative new paper proposes that complex intelligent behavior may emerge from a fundamentally simple physical process. The theory offers novel prescriptions for how to build an AI, but it also suggests how a world-dominating superintelligence might come about. We spoke to the lead author to learn more.
In the paper, which now appears in Physical Review Letters, Harvard physicist and computer scientist Dr. Alex Wissner-Gross posits a Maximum Causal Entropy Production Principle: a conjecture that intelligent behavior in general spontaneously emerges from an agent's effort to ensure its freedom of action in the future. According to this theory, intelligent systems move toward those configurations that maximize their ability to respond and adapt to future changes.
Causal Entropic Forces
It's an idea that was partially inspired by Raphael Bousso's Causal Entropic Principle, which suggests that universes that produce a lot of entropy over the course of their lifetimes (i.e., a gradual slide into disorder) tend to have properties, such as the cosmological constant, that are more compatible with the existence of intelligent life as we know it.
"I found Bousso's results, among others, very suggestive since they hinted that perhaps there was some deeper, more fundamental, relationship between entropy production and intelligence," Wissner-Gross told io9.
The reason that entropy production over the lifetime of the universe seems to correlate with intelligence, he says, may be that intelligence actually emerges directly from a form of entropy production over shorter time spans.
"So the big picture and the connection with the Anthropic Principle is that the universe may actually be hinting to us as to how to build intelligences by telling us through the tunings of various cosmological parameters what the physical phenomenology of intelligence is," he says.
To test this theory, Wissner-Gross, along with his MIT colleague Cameron Freer, created a software engine called Entropica. The software allowed them to simulate a variety of model universes and then apply an artificial pressure to those universes to maximize causal entropy production.
"We call this pressure a Causal Entropic Force a drive for the system to make as many futures accessible as possible," he told us. "And what we found was, based on this simple physical process, that we were actually able to successfully reproduce standard intelligence tests and other cognitive behaviors, all without assigning any explicit goals."
For example, Entropica was able to pass multiple animal intelligence tests, play human games, and even earn money trading stocks. Entropica also spontaneously figured out how to display other complex behaviors like upright balancing, tool use, and social cooperation.
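To get an intuitive feel for how goal-free behavior like this can arise, here is a minimal toy sketch. This is not Entropica itself (which has not been released); the grid world, action set, and horizon below are all invented for illustration. The agent simply picks whichever action leaves the largest number of distinct states reachable within a short horizon, a crude, discrete stand-in for maximizing causal entropy:

```python
# Toy grid world: the agent can move in four directions; walls block movement.
WALLS = {(2, y) for y in range(1, 5)}   # a wall segment the agent must route around
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
SIZE = 6                                 # the grid is SIZE x SIZE
HORIZON = 4                              # how far into the future the agent looks

def step(state, action):
    """Apply an action; moves into walls or off the grid leave the state unchanged."""
    x, y = state
    dx, dy = ACTIONS[action]
    nxt = (x + dx, y + dy)
    if nxt in WALLS or not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE):
        return state
    return nxt

def reachable_states(state, horizon):
    """Number of distinct states reachable within `horizon` steps: a crude,
    discrete proxy for the number of accessible futures (causal entropy)."""
    frontier, seen = {state}, {state}
    for _ in range(horizon):
        frontier = {step(s, a) for s in frontier for a in ACTIONS}
        seen |= frontier
    return len(seen)

def choose_action(state):
    """Pick the action whose successor keeps the most futures open."""
    return max(ACTIONS, key=lambda a: reachable_states(step(state, a), HORIZON))

# With no explicit goal, the agent tends to drift out of corners toward open space.
state = (0, 0)
for _ in range(10):
    action = choose_action(state)
    state = step(state, action)
    print(action, state)
```

Even with no explicit goal, such an agent drifts away from corners, walls, and dead ends toward open space, the same flavor of behavior the paper reports in richer, continuous settings.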
In an earlier version of the upright balancing experiment, which involved an agent on a pogo-stick, Entropica was powerful enough to figure out that, by repeatedly pushing up and down in a specific manner, it could "break" the simulation. Wissner-Gross likened it to an advanced AI trying to break out of its confinement.
"In some mathematical sense, that could be seen as an early example of an AI trying to break out of a box in order to try to maximize its future freedom of action," he told us.
The Cognitive Niche
Needless to say, Wissner-Gross's idea is also connected to biological evolution and the emergence of intelligence. He points to the cognitive niche theory, which suggests that there is an ecological niche in any given dynamic biosphere for an organism that's able to think quickly and adapt. But this adaptation would have to happen on much faster time scales than normal evolution can manage.
"There's a certain gap in adaptation space that evolution doesn't fill, where complex but computable environmental changes occur on a time scale too fast for natural evolution to adapt to," he says, "This so-called cognitive niche is a hole that only intelligent organisms can fill."
Given enough time, he argues, Darwinian evolution in such dynamic environments should eventually produce organisms that, through internal strategic modeling of their environment, can adapt on time scales much faster than their own generation times.
Consequently, Wissner-Gross's results can be seen as providing an explicit demonstration that the kind of intelligent behavior posited by cognitive niche theory can arise from pure thermodynamics.
A New Approach to Generating Artificial Superintelligence
As noted, Wissner-Gross's work has serious implications for AI. And in fact, he says it turns conventional notions of a world-dominating artificial intelligence on its head.
"It has long been implicitly speculated that at some point in the future we will develop an ultrapowerful computer and that it will pass some critical threshold of intelligence, and then after passing that threshold it will suddenly turn megalomaniacal and try to take over the world," he said.
No doubt, this general assumption has been the premise for a lot of science fiction, ranging from Colossus: The Forbin Project and 2001: A Space Odyssey, through to the Terminator films and The Matrix.
"The conventional storyline," he says, "has been that we would first build a really intelligent machine, and then it would spontaneously decide to take over the world."
But one of the key implications of Wissner-Gross's paper is that this long-held assumption may be completely backwards: the process of trying to take over the world may actually be a more fundamental precursor to intelligence, and not vice versa.
"We may have gotten the order of dependence all wrong," he argues. "Intelligence and superintelligence may actually emerge from the effort of trying to take control of the world and specifically, all possible futures rather than taking control of the world being a behavior that spontaneously emerges from having superhuman machine intelligence."
Instead, says Wissner-Gross, intelligent behavior may fall out immediately from the rather simple thermodynamic process of trying to seize control of as many potential future histories as possible.
Seizing Future Histories
Indeed, the idea that intelligent behavior emerges from an effort to keep future options open is an intriguing one. I asked Wissner-Gross to elaborate on this point.
"Think of games like chess or Go," he said, "in which good players try to preserve as much freedom of action as possible."
The game of Go in particular, he says, is an excellent case study.
"When the best computer programs play Go, they rely on a principle in which the best move is the one which preserves the greatest fraction of possible wins," he says. "When computers are equipped with this simple strategy along with some pruning for efficiency they begin to approach the level of Go grandmasters." And they do this by sampling possible future paths.
A fan of Frank Herbert's Dune series, Wissner-Gross drew another analogy for me, but this time to the character of Paul Atreides who, after ingesting the spice melange and becoming the Kwisatz Haderach, could see all possible futures and hence choose from them, enabling him to become a galactic god.
Moreover, the series' theme, that humanity must keep its futures as open as possible and never allow itself to become beholden to a single controlling interest, resonates deeply with Wissner-Gross's new theory.
Recursive Self-Improvement
Returning to the issue of superintelligent AI, I asked Wissner-Gross about the frightening prospect of recursive self-improvement: the notion that a self-scripting AI could iteratively and unilaterally decide to continually improve upon itself. He believes the prospect is possible, and that it would be consistent with his theory.
"The recursive self-improving of an AI can be seen as implicitly inducing a flow over the entire space of possible AI programs," he says. "In that context, if you look at that flow over AI program space, it is conceivable that causal entropy maximization might represent a fixed point and that a recursively self-improving AI will tend to self-modify so as to do a better and better job of maximizing its future possibilities."
Is Causal Entropy Maximization Friendly?
So how friendly would an artificial superintelligence that maximizes causal entropy be?
"Good question," he responded, "we don't yet have a universal answer to that." But he suggests that the financial industry may provide some clues.
"Quantitative finance is an interesting model for the friendliness question because, in a volume sense, it has already been turned over to (specialized) superhuman intelligences," he told io9. Wissner-Gross previously discussed issues surrounding financial AI in a talk he gave at the 2011 Singularity Summit.
Now that these advanced systems exist, they've been observed to compete with each other for scarce resources, and, especially at high frequencies, they appear to have become somewhat apathetic to human economies. They've decoupled themselves from the human economy because events that happen on slower human time scales (what might be called market "fundamentals") have little to no relevance to their own success.
But Wissner-Gross cautioned that zero-sum competition between artificial agents is not inevitable, and that it depends on the details of the system.
"In the problem solving example, I show that cooperation can emerge as a means for the systems to maximize their causal entropy, so it doesn't always have to be competition," he says. "If more future possibilities are gained through cooperation rather than competition, then cooperation by itself should spontaneously emerge, speaking to the potential for friendliness."
Attempting to Contain AIs
We also discussed the so-called boxing problem: the fear that we won't be able to contain an AI once it gets smart enough. Wissner-Gross argues that the problem of boxing may actually turn out to be much more fundamental to AI than has previously been assumed.
"Our causal entropy maximization theory predicts that AIs may be fundamentally antithetical to being boxed," he says. "If intelligence is a phenomenon that spontaneously emerges through causal entropy maximization, then it might mean that you could effectively reframe the entire definition of Artificial General Intelligence to be a physical effect resulting from a process that tries to avoid being boxed."
Which is quite frightening when you think about it.
Read the entire paper: A. D. Wissner-Gross and C. E. Freer, "Causal Entropic Forces," Physical Review Letters 110, 168702 (2013).