Sunday, November 6, 2011

Stephen Wolfram: Computation & The Future of Mankind 3 @Singularity Summit 2011

More of Stephen Wolfram's lecture delivered at the Singularity Summit 2011, titled Computation and the Future of Mankind.
We will seek to elucidate Stephen Wolfram's comments at the Singularity Summit 2011 with others he has made in a wonderful book titled Randomness and Complexity: From Leibniz to Chaitin, where he wrote an essay entitled Some Modern Perspectives on the Quest for Ultimate Knowledge.  We cannot overstate how strongly we agree with his insights and comments here.  His remarks will be in black italics, our annotations in orange, bracketed non-italics.

Computational Irreducibility
The principle of computational equivalence has other implications too that are pretty important.  One of them is what I call computational irreducibility.  See, a big idea of traditional exact science is that the systems we see in nature are computationally reducible.  We look at those systems, say an idealized Earth orbiting an idealized sun, and with all our mathematical prowess we can immediately predict what those systems are going to do.  We don't have to trace every orbit, say; we just have to plug a number into a formula and immediately get a result, using the fact that the system itself is computationally reducible.  Well, it turns out that in the computational universe one finds that lots of systems aren't computationally reducible; instead they're irreducible.  There is no way to work out what they will do in any reduced way.  There's no choice except to trace each step and see what happens.
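[To make "tracing each step" concrete, here is a minimal sketch in Python - our own illustration, not Wolfram's code - of the rule 30 cellular automaton he mentions below.  Because the system is computationally irreducible, the only way the program can know what row n looks like is to compute all n rows, one after another.]

    # Rule 30 cellular automaton: each cell's next state depends on itself and
    # its two neighbors, according to the binary expansion of the number 30.
    RULE = 30

    def step(cells):
        """Advance a row of 0/1 cells by one time step (boundaries held at 0)."""
        padded = [0] + cells + [0]
        return [
            (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)
        ]

    # Start from a single black cell and trace the evolution step by step --
    # there is no shortcut formula that jumps straight to row n.
    row = [0] * 15 + [1] + [0] * 15
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = step(row)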

It's actually pretty easy to see why this has to happen, given the principle of computational equivalence.  We've always had an idealization in science that the systems we study are computationally much simpler than us as observers who study them.  But now the principle of computational equivalence says that that isn't true.  Because it says that a little system like rule 30 can be just as computationally sophisticated as us, without brains and computers and so on.  And that's why it's computationally irreducible.  And, by the way, also why its behavior seems to us so complex.  [Here we have some very interesting thoughts from Wolfram on abstract mathematics:
It's often imagined that mathematics somehow covers all arbitrary abstract systems.  But that's simply not true.  And this becomes very obvious when one starts investigating the whole computational universe.  Just like one can enumerate possible programs, one can also enumerate "possible mathematicses": possible axiom systems that might be used to define mathematics.  And if one does that, one finds lots of axiom systems that seem just as rich as anything in our standard mathematics.  But they're different.  They're alternative mathematicses.  Now in that space of "possible mathematicses" we can find our ordinary mathematics.  Logic - Boolean algebra - turns out for example to be about the 50,000th "possible mathematics" that we reach.  But this kind of "sighting" makes it very clear that what we call mathematics today is not some absolute thing.  It's just a particular formal system that arose historically from the arithmetic and geometry of ancient Babylon.  And that happens to have grown into one of the great cultural artifacts of our civilization.
And if this is not enough, Wolfram makes a critical point that has long been ignored in the sciences about their use of mathematics:
And even with our standard mathematics, there is something else that is going on: the questions that get asked in a sense always tend to keep to the region of computational reducibility.  Partly this has to do with the way generalization is done in mathematics.  The traditional methodology of mathematics puts theorems at the center of things.  So when it comes to working out how to broaden mathematics, what tends to be done is to ask what broader class of things will still satisfy some particular favorite theorem.  So that's how one goes from integers to real numbers, complex numbers, matrices, quaternions, and so on.  But inevitably it's the kind of generalization that still lets theorems be proved.  And it's not reaching anything like the kinds of questions that could be asked - or that one would find just by systematically enumerating possible equations.
Wolfram holds little hope for the idea that solving "problems" in mathematics will happen "...as the centuries go by..." and "...more and more of the unsolved problems will triumphantly be solved."  Wolfram states,
I actually suspect that we're fairly close to the edge of what's possible in mathematics.  And that quite close at hand - and already in the current inventory of unsolved problems - are plenty of undecidable questions.  Mathematics has tended to be rather like engineering: one constructs things only when one can foresee how they will work.  But that doesn't mean that that's everything that's there.  And from what I've seen studying the computational universe, my intuition is that the limits to mathematical knowledge are close at hand - and can successfully be avoided only by carefully limiting the scope of mathematics.
Wolfram's ideas on this topic ring true to us.  We will quote him one more time in a very poignant statement:
Sometimes it is also said that, yes, there are many other questions that mathematics could study, but those questions would "not be interesting."  But really, what this is saying is just that those questions would not fit into the existing cultural framework of mathematics.]
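[A concrete aside on the enumeration Wolfram describes above: his essay enumerates axiom systems, but the same idea can be sketched for simple programs.  The Python snippet below is our illustration, not Wolfram's code; it walks through the 256 elementary cellular automaton rules - the simplest corner of the computational universe, of which rule 30 is one - running each briefly from a single black cell.]

    def step(cells, rule):
        """Advance a row of 0/1 cells by one step under the given elementary rule."""
        padded = [0] + cells + [0]
        return [
            (rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)
        ]

    # Walk the enumeration: rule 0, rule 1, ... up to rule 255, running each
    # for ten steps and printing the row it produces.
    for rule in range(256):
        row = [0] * 10 + [1] + [0] * 10
        for _ in range(10):
            row = step(row, rule)
        print(rule, "".join("#" if c else "." for c in row))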
Well, computational irreducibility has all sorts of implications for the limits of science and knowledge.  At a practical level, it makes it clear how important simulation is and how important it is to have the simplest possible models.  At a philosophical level, I think it finally gives us an explanation for how there can be both free will and determinism.  Things are determined, but to figure them out one has to do an irreducibly large computation.

The principle of computational equivalence has another implication that relates to intelligence.  There are various concepts, like life for example, that always seem a bit elusive.  It's pretty easy to tell what's alive and what's not here on Earth.  But all life shares a common history here.  It has all sorts of details in common, like cell membranes, RNA and so on.  But what's the abstract definition of life?  Well, there are various candidates, but realistically they all fail.  Either we have to fall back on the shared-history definition, or we have to just say in effect that what we need is a certain degree of computational sophistication.

Intelligence
Same thing with intelligence.  We sort of know historically what human-like intelligence is about.  But we don't have a clear abstract definition, sort of independent of that history.  I think ultimately there just isn't one.  In fact, it's sort of a consequence of computational equivalence that we can't make one.  There are all sorts of systems that are equivalent in this way.  There are expressions like, "the weather has a mind of its own."  One might have thought that that was just a primitive animistic view of the world.  But what the principle of computational equivalence suggests is that, in fact, there is a computational equivalence between what's going on in, for example, the fluid turbulence of the atmosphere, and things like the patterns of neuron firings in our brains.  This type of issue gets cast into a more definite form when one starts thinking about, for example, extraterrestrial intelligence.  If we see some sophisticated signal coming from the cosmos, does it necessarily follow that it needed some whole development of an intelligent civilization to make it, or could it have instead come from a physical process that operates according to simple underlying rules?  Our usual intuition is that if we see something sophisticated it must have a sophisticated cause.  But from what we've discovered from the computational universe, and encapsulated in the principle of computational equivalence, that's not the case.  When we see those glitches in signals from a pulsar or something, we can't really say they're not associated with something like intelligence.

Historically, things like this were plenty confusing, like Tesla's radio signals from Mars that turned out to be modes of the ionosphere and so on.  Let's for a moment imagine the distant future, where technology discovered from the computational universe is what's vitally used.  All our processes of human thinking and so on are implemented at a molecular scale by motions of electrons in some block of some material or other.  Now imagine we find that block flying around somewhere, and we ask whether what it's doing is intelligent.  Well, I don't think that's really a meaningful question as such.  In fact, I don't think there will ever be a fundamental distinction between that block and a pretty generic block of material with electrons moving around.

As we learn from the principle of computational equivalence, at the fundamental level, they're both doing the same thing.  Of course, at the level of details there can be a huge distinction.  One of them can have an elaborate history that is all about our history and our evolution, the other doesn't.  This is all bound up with issues of purpose and so on.  But at the level of just looking at these blocks of material, without the details of history, there's no fundamental distinction I think.

Artificial Intelligence
Well, needless to say, this sort of realization has implications for artificial intelligence.  When I was younger I used to think that there would be some great idea, some cool breakthrough, that would suddenly give us artificial intelligence.  But what I've gradually realized, actually through the principle of computational equivalence, is that that's just not how it will work.  Because, in a sense, all this intelligence that we're trying to make is ultimately, at an abstract level, just computation.  That might sound very abstract and philosophical, but at least for me it's had a big practical consequence.

When I was a kid I was interested in the problem of taking the world's knowledge and setting it up so it would become possible to ask questions on the basis of it.  It seemed like a hard problem, because to solve it, it seemed, one would have to solve the problem of artificial intelligence.  But after I'd come up with the principle of computational equivalence, I gradually realized that that really wasn't true.  All one had to do was computation.  At a personal level, I have this great system for doing computations mathematically, with a language that could efficiently represent all this abstract stuff, and a whole set of giant algorithms that could pretty much cover all the fundamental areas.  And so I thought, OK, maybe it isn't really so crazy to actually try to build a system that will make all the world's knowledge computable.  That is how I came to start building Wolfram Alpha.  I have to say that I'm still often surprised that Wolfram Alpha is possible as a practical matter at this point in history.  It's an incredibly complicated technological object.  But I'm happy to say that the big discovery of the last couple of years is that it actually is possible to make this work.

We will continue with Stephen Wolfram's lecture in our next installment of this series.
