Here we have transcribed (with annotations) the lecture on AI delivered by David Friedman at the 2010 Foresight Conference.
Our annotations will be in orange brackets. The actual lecture will be in white italics.
Lecture
Let me now start, at least, speculating about the AI economy. I want to start by pointing out that there are really two radically different visions of AI. We tend to sort of forget that a lot of times.
One of them is a version where an AI is a program but not a person, where the AI has human-level intelligence, but it is not self-aware, it has no purposes of its own. It is just a tool of a human being. We're not going to have any moral feelings about giving rights to those AIs. There is no reason to give them rights any more than there is to give them to Microsoft Word on my computer. There are rights that other people have; Microsoft might object to my making too many copies of it, but the program itself has no rights. So, in that world, what you get, and Robin [a reference to Robin Hanson, who also spoke at this conference, professor of economics at George Mason University and research associate at the Future of Humanity Institute, Oxford University] has already explored some of this in his talk a few minutes ago, is the same effect as in other technologies, in which capital substitutes for labor, and maybe complements it as well, but in which things that used to be done by people are now being done by machines. We know what happens. The particular kinds of jobs at which machines are relatively good, people stop being hired to do; those at which machines are relatively worse than people, people keep doing.
One thing which tends to confuse non-economists, and this usually comes up in the context of international trade (which sounds like an entirely different issue but isn't), is that they think there is some sort of absolute measure of how good you are. So they say, what if the computers are better at everything?
Let me digress for a moment into the economics of foreign trade (I like digressing into things I understand). I think it makes a very important point, because people similarly will say, what is going to happen to us if the Chinese can sell everything cheaper? We'll have nothing to do. That is not a meaningful question. Cheaper measured in what? Their prices are measured in their currency, our prices are measured in our currency. What determines the rate at which those currencies exchange? It is the cost of producing things in the different countries.
Thomas Malthus
This is a mistake which continued until the early 19th century, when one of the really great economists, a fellow called David Ricardo (whom we will come back to in a minute), worked out the correct analysis of the intuition that A can somehow be better at everything than B. In my price theory textbook I have it as two roommates who are going to divide cooking and washing dishes. What if one of them can cook and wash dishes in less time? It does not matter, you can still have an exchange, in which in effect we're exchanging half an hour of your time for an hour of my time, and in an hour of my time I can cook more but wash fewer dishes than you can, so therefore we can make a trade of dishwashing.
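[A small numerical sketch may make the comparative-advantage point concrete. The specific hours below are our own illustrative choices, not Friedman's:]

```python
# Illustrative numbers (ours, not Friedman's): roommate A is absolutely
# better at BOTH tasks, yet specializing and trading still saves time.

# Hours each roommate needs to do one unit of each chore.
hours = {
    "A": {"cook_dinner": 1.0, "wash_dishes": 0.5},  # faster at everything
    "B": {"cook_dinner": 3.0, "wash_dishes": 1.0},
}

# The household needs 2 dinners and 2 rounds of dishes per day.
# Case 1: no trade -- each roommate cooks and washes for himself.
no_trade = {name: t["cook_dinner"] + t["wash_dishes"] for name, t in hours.items()}

# Case 2: specialization by comparative advantage.
# A dinner costs A two dishwashings of his time; it costs B three of his,
# so A cooks both dinners and B washes all the dishes.
specialized = {"A": 2 * hours["A"]["cook_dinner"],  # 2.0 h
               "B": 2 * hours["B"]["wash_dishes"]}  # 2.0 h

total_before = sum(no_trade.values())    # 5.5 h of labor per day
total_after = sum(specialized.values())  # 4.0 h of labor per day
print(f"household labor: {total_before} h without trade, {total_after} h with trade")

# The 1.5 hours saved can be split by a side deal -- B taking over somewhere
# between half an hour and two hours' worth of A's other chores -- so that
# BOTH end up better off, which is the "half an hour of your time for an
# hour of my time" exchange Friedman describes.
```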
So that's a very old story. The further point, however, which Robin raised, is that it is possible that capital may be a substitute rather than a complement to labor, and that therefore, if you find new ways of using capital, one result might be that the return to capital goes up and the return to labor goes down. Robin mentioned an article by Ricardo that I never read, but that's alright because I've read Ricardo's one great book [On the Principles of Political Economy and Taxation (1817)]. In a later edition he has a note at the end saying, I argued initially that progress in accumulating capital could never make workers worse off; actually, there is a theoretically possible case where it could, and he (being the brilliant and intuitive mathematician that he was) works out the logic of that problem.
David Ricardo
The point I am making is that any of these technological changes might change the demand for any of these three factors of production. Those will shift in importance in different ways and affect production in ways you like or in ways you don't like. You would not expect that a change like automation will make everybody poorer, or even make us in some sense on net poorer, because after all, once you can make things through automation, you still know how to make things the way you used to. One way of seeing what's wrong with all these horror stories about automation resulting in all of us starving to death is to say, wait a minute, all those starving people can always go off and create a society without that automation, build stuff the way they used to build, grow stuff the way they used to grow. So it can't make them worse off; it just gives them more options, which can make them better off. That's again a slight oversimplification, because it might be that the old way required input from the people who are really good at the new way, and therefore it will be expensive to get them to cooperate. So it's the right first approximation to say that the net result will be to make us better off, but it might make some people worse off as it changes the distribution.
This sort of sounds plausible in the abstract, but some might say, wait a minute, we're now not talking about just replacing ditch-diggers, but replacing people. We're talking about things which are enough like us that they can fill many of the niches we now fill. Well, that's true, but has it occurred to you that in the last two centuries we essentially eliminated the major job of humanity for about seven thousand years? Until quite recently, most people were farmers. To a first approximation nobody's a farmer anymore; three percent of the population, something like that. Somehow we survived wiping out the biggest job without any serious problem. We also didn't wipe out, but drastically reduced, the second biggest job through all human societies until recently, which was producing children. It used to be that, I don't know what the exact numbers are, but maybe the majority of the female labor force of the world was primarily devoting itself to bearing, nursing, and caring for children, with its secondary activity being household production: spinning, weaving, curing bacon, most of which has also been eliminated, because it has been moved out of the home and into the factory. So we have in fact already seen a process of technological change eliminating jobs on the scale that people imagine for this type of AI.
Advanced AI
Let me now go to the much more intriguing version of AI, which is the one where AIs are people, where they are self-conscious, where they have their own purposes. I should start by saying that my thoughts on this are partly coming out of some points that Robin made a few years ago, which themselves are coming out of our common teacher David Ricardo. I could go on at greater length about Ricardo; I'm very much an admirer of his. I have a comment in one of my books, about Ricardo's accomplishment of getting general equilibrium theory without mathematics, that to the modern economist, reading Ricardo's Principles feels as if you were a member of the first Mount Everest expedition and, as you approached the summit, a hiker came down in a T-shirt and tennis shoes. That's David Ricardo. In any case, one of the things Ricardo is famous for is the iron law of wages. This is presumably coming out of the ideas of his friend Malthus, also a bright and interesting guy. The iron law of wages is often put as: population increases until wages get down to subsistence. That's wrong. That's wrong certainly for Ricardo, because Ricardo realizes that starvation is not the only thing that constrains population. What Ricardo is saying is that the higher wages are, the more the population will grow, not just because of starvation but because when you're poor, having another kid means giving up the little leisure you have in order to work more, or it means eating more potatoes and giving up the small amount of meat you're eating, giving up important things. As you get richer you can afford the kids. In Malthus's world, and I assume Ricardo's, although I don't think he ever discusses it, a large part of the benefit of having kids is that people like sex and the kids are a side effect of sex. Presumably there are other reasons to have kids, but that's the one that Malthus emphasizes. So what Ricardo is saying is that, depending on the tastes of the population, there will be some wage level at which the population maintains itself. He's quite explicit; there is this wonderful passage where he says that the friends of mankind might wish that the laboring classes have expensive tastes, because if the laboring classes have expensive tastes, then they will only be willing to make the sacrifice of producing more children, and so more workers, when wages are high enough to buy more things. And when you grind Ricardo's general equilibrium theory all the way through, which I'm not going to try to do, you reach an equilibrium at which real wages are at the level that just maintains the population. So if the workers have expensive tastes, that equilibrium worker is a richer and happier worker than if they are willing to have kids any time they don't starve.
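[The iron law argument can be sketched as a toy dynamic model: wages above the taste-determined maintenance level make the population grow, the larger workforce bids wages back down, and the system settles at the wage that just holds the population constant. All numbers below are illustrative assumptions of ours, not anything from the lecture:]

```python
# Toy sketch of the "iron law" dynamic Friedman attributes to Ricardo.

def wage(population):
    """Real wage as a decreasing function of the workforce (diminishing returns)."""
    return 100.0 / population ** 0.5

maintenance_wage = 10.0   # wage at which workers just reproduce themselves;
                          # "expensive tastes" correspond to a HIGHER value here
growth_sensitivity = 0.02
population = 20.0

for year in range(1000):
    w = wage(population)
    # Population grows when wages exceed the maintenance level, shrinks below it.
    population *= 1 + growth_sensitivity * (w - maintenance_wage) / maintenance_wage

print(f"long-run wage ~ {wage(population):.2f}, population ~ {population:.1f}")
# Converges toward wage == maintenance_wage (here 10.0, population ~100), so
# more "expensive tastes" (a higher maintenance_wage) mean a smaller
# equilibrium population living at a higher real wage.
```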
Robin then said, but wait a minute, how do we apply the iron law to a world of AIs? He said a little about that world today; let me take up that question. AIs are a little different from ordinary people in a number of ways. One of the ways they are not different is that they both have costs: they require hardware and power just like babies require food and housing. They come into the world, however, not accompanied by parents who are hardwired to care about them. That's one difference. They can be reproduced completely; they can really be cloned in a sense that humans can't, because you can't copy the contents of a human mind as well.
Does the iron law then imply that the incomes of AIs will be driven down to power plus the interest cost of the computer they run on? That depends. One of the things it depends on is the legal rights of AIs. I want to consider three alternative systems.
AIs as Slaves
System one is the system where the AI belongs to the human that created it. He somehow has a blank check, maybe he can torture it or something to make it behave, but however he does it, any income he gets beyond the cost of maintaining it he pockets. That's a world of slavery, a world that makes a certain amount of sense given how we think of computers and programs today. In that world, you would expect that the number of AIs will indeed increase, because when I produce another one, it drives down the wages a little bit, but not very much, and most of that loss is to your AIs. When you produce another one, most of the loss is to my AIs. So we compete it down until AIs just barely cover their costs.
Now it's a little more complicated if we assume that, like people, the AIs vary a little bit. We go back to my monopolistic competition model for nanotech. If I've borne the large fixed cost of creating an AI, and that AI is really good at solving problems A & B and no good for anything else, and you've got one which solves problems C & D, I say, well, since I've got all the ones that solve problems A & B, if I produce too many, that will drive down what I can rent them out for, so I'll restrict the numbers. But if different AIs are close substitutes for each other, you get an iron law effect.
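[The contrast between close-substitute AIs and a distinctive, hard-to-copy AI can be illustrated with a stylized rental market. The demand curve and cost figures below are our own hypothetical numbers:]

```python
# Stylized contrast between the two cases Friedman describes: close-substitute
# AIs whose rental rate is competed down to cost, versus an owner with a
# monopoly on one niche who limits how many copies he runs.

marginal_cost = 5.0          # power + interest on the hardware, per AI per day

def rental_rate(quantity):
    """Linear demand in this niche: the more copies rented out, the lower the rate."""
    return max(0.0, 50.0 - 2.0 * quantity)

# Competitive case: anyone can clone a close substitute, so copies are added
# until the rate just covers cost -- the "iron law" outcome.
competitive_q = (50.0 - marginal_cost) / 2.0   # solves rental_rate(q) == marginal_cost
print(f"competitive: {competitive_q:.1f} copies, "
      f"rate {rental_rate(competitive_q):.1f}, profit per copy ~0")

# Monopoly case: the sole owner of the AI good at this niche instead picks the
# quantity that maximizes total profit (searched on a coarse grid here).
best_q = max((q * 0.5 for q in range(0, 100)),
             key=lambda q: q * (rental_rate(q) - marginal_cost))
profit = best_q * (rental_rate(best_q) - marginal_cost)
print(f"monopoly:    {best_q:.1f} copies, rate {rental_rate(best_q):.1f}, profit {profit:.1f}")
# The monopolist runs fewer copies at a higher rate, which is how the large
# fixed cost of creating a distinctive AI could be recovered at all.
```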
AIs as Persons
The next possibility is that each AI belongs to itself. Somehow it's got to pay for its hardware. I'm not going to worry too much about what the legal rules are under which perhaps the person who originally created it has some claim against it, but it belongs to itself. Now the producer isn't getting a benefit from producing extra ones if all they owe him is their costs. The question is, does the AI itself have an incentive to clone itself? That I don't think I know an answer to. Except, if the rule is that the creator of the AI doesn't own it, but the AI owns its own clones and is not altruistic towards them, then we're back in the same case I just described and we drive the wages down. If the AI owns its own clones and is altruistic towards them, or, more plausibly, if each new clone has the same rights as the first person, then it seems to me we're in a world where whether population increases or not is ultimately going to depend on what the objectives of the AIs are. I had some interesting conversations yesterday with people here who think they have at least the beginning of a handle on what the objectives of the AIs would be; maybe they do. It's not clear that we can predict it. Going back to Malthus and Ricardo, the desire for sex and the desire to have children were important to that system; I don't know what the analogous desires of an AI would be. You could imagine an AI who strongly wants not to have copies of himself. He likes his own uniqueness. In that case, it won't clone itself.
We include the video clip which has additional questions and answers. If you cannot see the embedded video, here is the link: http://vimeo.com/9217432.
2 comments:
If I understand your representation of Ricardo correctly, he got it wrong. People in poor countries have more children than people in developed countries. People have lots of children to make sure enough children survive to take care of them in old age. Every country except Ireland (far as I know) that has developed its economy has seen a fall in birth rate.
Other things to consider: Moore's Law would indicate that the cost of AI will fall rapidly. And one effect of AI would probably be that the doubling/halving period of Moore's Law may get dramatically shorter as a result of AI. Even without AI, I expect the cost of goods and services to fall dramatically in real dollars.
Thanks for your comments, Thomas. First let me say that these are not my words. I transcribed a lecture by David Friedman. It is he who explained the significance of David Ricardo. I think you are right about the fall in birth rates. If I understand Ricardo and Friedman correctly, they are saying that the birthrate will fall when wages go below the subsistence level. This "subsistence" or "survival" is judged in different ways by different cultures. I would think that what counts as subsistence in some parts of Africa would not count as subsistence in the US.
As for the cost of AI falling rapidly, I completely agree with your analysis. I do not think that Friedman would disagree with anything you have said here. Maybe I missed the point you were making about the falling costs of AI?
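[A quick back-of-the-envelope illustration of the halving-period point, with purely hypothetical numbers:]

```python
# Illustrative only: how fast a fixed computational cost falls under a
# Moore's-Law-style halving period, and what shortening that period does.

initial_cost = 1_000_000.0   # hypothetical cost of running one AI today ($/yr)

def cost_after(years, halving_period):
    """Cost after the given number of years if it halves every halving_period years."""
    return initial_cost * 0.5 ** (years / halving_period)

for halving_period in (2.0, 1.0):   # the comment's point: AI might shorten this
    print(f"halving every {halving_period} yr:",
          ", ".join(f"yr {y}: ${cost_after(y, halving_period):,.0f}" for y in (0, 10, 20)))
# Halving every 2 years cuts the cost roughly 1,000x in 20 years; halving
# every year cuts it roughly 1,000,000x -- which is why a feedback from AI
# onto the halving period itself would matter so much.
```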