Monday, November 7, 2011

Stephen Wolfram: Computation & The Future of Mankind 4 @Singularity Summit 2011

This is a further continuation of Stephen Wolfram's lecture titled, Computation and the Future of Mankind, delivered at the Singularity Summit 2011.
The lecture will be in italic white.  Any annotations we choose to make will appear in non-italic orange.

I'm happy to say that the big discovery of the last couple of years is that it actually is possible to make this work.  It's steadily expanding domain by domain, in effect automating the process of delivering expert-level knowledge on all kinds of things, and making it so that if something could be figured out by an expert, from the knowledge our civilization has accumulated, Wolfram Alpha can automatically figure it out.

From the point of view of democratizing knowledge, it's pretty exciting, and it's already clear that it's leading to some very interesting things.  Inside, though, it's definitely a strange kind of object.  It starts from all sorts of sources of data that are at first just raw data, but the real work is in making that data computable.  Making it so that one's not just looking things up, but instead is able to figure things out from the data.

In that process, a big piece is that one has to implement all the methods and models and algorithms that have been developed across science and all those other areas.  One has to capture the expertise of the actual human experts that are always needed.  The result is that one can compute all kinds of things.  Then the challenge is to be able to say what to compute.  The only realistic way to do that is to be able to understand actual human language, or really the strange utterances that people enter in various ways into Wolfram Alpha.

Somewhat to my surprise, and with a bunch of thinking from A New Kind of Science, it's turned out that it's actually possible to do a pretty good job of this.  In effect, turning human utterances, which are in the textual domain and seem rather close to raw human thoughts, into a systematic, symbolic, internal representation from which one can compute answers and generate all those elaborate reports you see in Wolfram Alpha.
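To make the idea concrete, here is a toy sketch of that pipeline: a free-form utterance is matched against patterns and rewritten into a symbolic expression, which is then evaluated against a knowledge base.  This is an invented illustration, not Wolfram Alpha's actual parser; all patterns, names, and the tiny knowledge base are made up.

```python
import re

# A toy grammar: one pattern per question shape we understand.
# Each pattern maps an utterance to a Lisp-like symbolic tuple.
PATTERNS = [
    (re.compile(r"(?:what(?:'s| is) )?(?:the )?(\w+) of (\w+)\??$"),
     lambda m: ("Property", m.group(2), m.group(1))),
]

def parse_query(utterance):
    """Turn a free-form utterance into a symbolic expression,
    or None if no pattern matches."""
    text = utterance.strip().lower()
    for pattern, build in PATTERNS:
        m = pattern.match(text)
        if m:
            return build(m)
    return None

# A tiny invented knowledge base the symbolic form is evaluated against.
KNOWLEDGE = {("france", "capital"): "Paris",
             ("jupiter", "mass"): "1.898e27 kg"}

def evaluate(expr):
    """Compute an answer from the symbolic representation."""
    if expr and expr[0] == "Property":
        return KNOWLEDGE.get((expr[1], expr[2]))
    return None

print(parse_query("What is the capital of France?"))  # ('Property', 'france', 'capital')
print(evaluate(parse_query("mass of Jupiter")))       # 1.898e27 kg
```

The key design point the sketch shares with the real thing: the symbolic form, not the raw text, is what gets computed on, so many differently worded utterances collapse to the same expression.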

It's kind of interesting to see what happens in the actual use of Wolfram Alpha.  People are always asking it new things, asking it to compute answers for their specific situation.  The web has lots of stuff in it, but searching it is quite a different proposition, because all you're ever doing is looking at things people happen to have written down.  What Wolfram Alpha is doing is actually figuring out new, specific things.  In some ways, it's achieving all sorts of things people have said in the past are characteristic of an artificial intelligence.

Differences Between Human & Artificial Intelligence
But it is interesting to see the extent to which it's not like a human intelligence.  Think, for example, of how it solves some specific physics problem.  It could do it like a human, reasoning through to an answer, sort of in the medieval-philosopher style.  But instead, what it does is make use of the last three hundred or so years of science.  From an AI point of view it just cheats.  It just sets up the equations and blasts through to the answer.  It's really not trying to emulate a human-like intelligence.  Rather, it's trying to be the structure that any human-like intelligence can build on to do what it does as efficiently as possible.  It's not trying to be a bird.  It's trying to be an airplane.
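As a small illustration of that "just set up the equations" style, consider a classic projectile question.  Rather than reasoning step by step about the motion, one goes straight to the closed-form result of Newtonian mechanics (a generic physics example, not code from Wolfram Alpha):

```python
import math

def projectile_range(speed, angle_rad, g=9.81):
    """Horizontal range of a projectile launched over flat ground,
    ignoring air resistance: R = v^2 * sin(2*theta) / g.

    No search and no step-by-step reasoning: three centuries of
    mechanics are baked into one formula that we simply evaluate.
    """
    return speed ** 2 * math.sin(2 * angle_rad) / g

# A launch at 20 m/s and 45 degrees (the range-maximizing angle):
print(projectile_range(20.0, math.radians(45)))
```

That is the "airplane, not bird" point in miniature: the formula is nothing like how a human would think the problem through, but it gets to the answer far more efficiently.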

Predictions for the Future of AI
Well, now I've explained a little bit about my worldview and how it's led me to some very practical things like Wolfram Alpha.  Let me talk a little about what I think it tells us about the future.  I'll talk about the fairly near-term future and about the much more distant future.  I don't talk very much about the future; frankly, I find it a bit weird, because I like to deliver stuff, not talk about what might be delivered.  But I certainly think a lot about the future myself.  And I've ended up with a whole inventory of major projects that I think can be done in the future.  The challenge for me is to wait for the right year or the right decade to actually do them.  I always try to remember what I predicted about the future in the past and check later whether I got it right.  Sometimes it's kind of depressing.  I mean, I was recently reminded that in 1987 I worked with a team of students to write a prediction about the personal computer of the year 2000.  At the time, it seemed pretty obvious how some things were going to play out.  Looking recently at what we said, it's depressing how accurate it was.  I mean, touch-screen tablets with various characteristics and uses and so on.

I guess there are some things that progress in straight lines in that way.  Nothing is terribly surprising.  I must say, personally, I always prefer to build what I refer to as "alien artifacts": stuff that people didn't even imagine was possible until it arrives.  What can we see by extrapolating in straight lines from where we are?

Data & Computation Will Be Ubiquitous
First, data and computation are going to become more and more ubiquitous.  We'll have sensors and things that give us data on everything.  We'll be able to compute more and more from it.  It used to be the case that one had to live one's life from one's wits, from what one happened to know, and from what would work out.  But then, long ago, there were books that started to spread knowledge in a systematic way, and more recently there started to be algorithms and the web and computational knowledge and all those kinds of things.

Actually, we made a poster recently about the advance of systematic knowledge in the world, and the ability to compute from it, from the Babylonians to now.  It's been a pretty important driving force in the development of civilization, gradually getting more systematic and more automated.  In my little corner of the world we have Wolfram Alpha, which takes lots of systematic knowledge and lets one compute answers from it.  As that gets done more and more, more and more of what happens in the world will become understandable and predictable.  We'll routinely know what's going to happen, at least up to the limits of what computational irreducibility allows.  Right now, one still has to ask for what one wants to know about.  Increasingly, though, the knowledge we need will be preemptively delivered when and where we need it, with all kinds of interesting technologies that link more and more with our senses.

It's sort of a big question, though, to know what kind of knowledge to deliver.  For that, our systems have to know more and more about us, which isn't going to be hard.  Informational pack rats like me have been collecting data on themselves for ages.  I've got dozens of streams of data, including every keystroke I've typed for the past twenty years, and stuff like that.  All of this will become completely ubiquitous.  We will all routinely be doing all sorts of different analytics and, judging from my own experience, will quickly learn some interesting things about ourselves from it.  But more than that, it will allow our systems to successfully deliver knowledge to us preemptively.
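As a sketch of the kind of personal analytics he describes (the log format and field names here are invented for illustration, not taken from any real keystroke logger), even a raw timestamped log yields patterns once aggregated:

```python
from collections import Counter
from datetime import datetime

def keystrokes_per_hour(log_lines):
    """Aggregate a timestamped keystroke log into counts per hour of day.

    Each line is assumed to look like '2011-11-07T09:15:02 k':
    an ISO timestamp followed by the key pressed.
    """
    counts = Counter()
    for line in log_lines:
        timestamp, _key = line.split(" ", 1)
        hour = datetime.fromisoformat(timestamp).hour
        counts[hour] += 1
    return counts

# A tiny made-up log: two keystrokes in the morning, one late at night.
log = [
    "2011-11-07T09:15:02 k",
    "2011-11-07T09:15:03 e",
    "2011-11-07T23:41:10 y",
]
print(keystrokes_per_hour(log))  # Counter({9: 2, 23: 1})
```

Over twenty years of data, the same aggregation would expose work rhythms, sleep shifts, and anomalies, which is exactly the raw material a system would need to deliver knowledge preemptively.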

How Will We Be One With Our Technology?
There are all kinds of detailed issues.  Will it be that our systems can sense things directly, because the environment is explicitly tagged, or will they have to deduce things indirectly with vision and so on?  The end result is that quite soon we'll have an increasing sort of symbiosis with our computational systems.  In fact, in just a few short weeks, Wolfram Alpha will be able to start taking images and data as well as language-based queries as input.  So it will be starting up that ramp.  Increasingly, our computational systems will be able to predict things, optimize things, and communicate things much more effectively than we humans ever can.

But here's a critical point.  We can have all this amazing computation, effectively all this amazing intelligence, but the question is: what is it supposed to do?  What is its purpose?  You look at all these systems in the computational universe, and you see all the amazing stuff they're doing.  But what is their purpose?

We will continue with the rest of Wolfram's talk in our next installment of this series.
