Kay + Hillis

Wired brings together two legendary minds: Alan Kay and Danny Hillis. The result is a fast-forward, neuron-boggling, early-warning scan of times ahead. There's no better way to sense which way the technological winds are blowing than to put Danny Hillis and Alan Kay in a room and let them talk. So that's what we did. Alan Kay, of course, is a guy whose prefix is "visionary." For once, this overused appellation applies. Back when computers were still eating punch cards, Kay was thinking about personal computing, hatching the idea of a "dynabook" that today's notebook computers still only hint at. Along the way, he also helped pioneer the concept of a "graphical user interface" - everyone who's double-clicked an icon or opened a new window owes at least a partial debt to Kay. Once the ur-scientist at Xerox PARC, Kay, 53, now holds the position of Fellow at Apple Computer, a post that allows him to pursue his interest in using technology to move education forward - and vice versa. Danny Hillis, 37, is the co-founder and chief scientist of Thinking Machines, a supercomputer company designed, in part, to create the sort of machine that Hillis was thinking about when he said, "I want to design a computer that will be proud of me." Schooled in the digital paradise of MIT's Artificial Intelligence Lab, Hillis is a hacker through and through. He has been working on the company's line of Connection Machines - massively parallel computers whose thousands of microprocessors make them look like, one observer said, "Darth Vader's refrigerator." (Hillis and Kay, at left, were photographed in front of the red blinking lights of a Connection Machine.) But he's been so busy with these machines that his plans to use them to emulate the evolutionary progress of living organisms have been back-burnered. On a rainy autumn Monday, the two convened at Hillis's corner office, a room filled with fascinating props - everything from antique toys to a blackboard crammed with dense equations. 
Wind-watchers will note with interest one fascination shared by these two legendary computer scientists: biology. - Steven Levy.
Danny Hillis: I have a story. We were demonstrating a database program on the Connection Machine to some CEO. When we showed it to him, he said, "Oh! That little computer out on the desk in front of my office can do that." Now, we're thinking, How can it possibly do that? Because what we were showing really did require a Connection Machine to do. He said, "No, no, no - I'm quite sure that little PC in front of my office can do that." Fortunately his vice president of information technologies was there, and we called him over and said, "What's going on here? The CEO says his PC can do that." Well, it turned out that his PC was hooked up through Dow Jones to a Connection Machine. So in fact his computer could do that! The point is that pretty soon you'll have no more idea of what computer you're using than you have of where your electricity is generated when you turn on the light. I think everybody has gotten so enamored of the decentralization of computers, and the idea that they can put a computer on their desks, that they're missing the countertrend, which is that all these computers are starting to talk to each other, and that the computing resource available to them is, in a sense, a utility. So there's a countertrend to the decentralization of computers, which is this amazing centralization of the computing resource. As communication gets good enough, where something gets done becomes less and less relevant.
Alan Kay: That was the old ARPA dream. We used to say in the '60s, we don't care if there's an atomic-powered computer blasting down computations from the Moon. So as far as the user is concerned, the computer is what they see on the screen, and that's it. It doesn't matter where the damn thing is. And it shouldn't matter.
DH: But that's an idea that hasn't sunk in yet. So every time somebody sees a Connection Machine, they say, when am I going to get one on my desktop? The truth of the matter is it doesn't matter a damn when they're going to get one on their desktop. As soon as the thing on your desktop is good enough to give you as pretty a picture as you want, and it's good enough to interact with you at human bandwidth -
AK: That's all you care about.
DH: Up until now there's been this economic force: you always did better to get as small a computer as possible, because with a desktop computer you've got a hundred times as much computing per dollar. But what's happening with parallel computing is that big computing is made out of exactly the same stuff as little computing - it's all microprocessors and DRAMs and little Winchester drives and so on. So the economics of both are exactly the same. Communications is becoming cheap enough that you can just draw on the resources of the network the way you draw on the power grid when you plug something into the wall.
AK: In the '60s, John McCarthy used to call that "the information utility." When PARC did the Alto computer, we invented the Ethernet right along with it, because there was a sense among all of us former ARPA people that the communications stuff was just as important as the computing stuff. But if you look at the commercial world, networking came much later for PCs. I remember somebody in 1983 asking Steve Jobs at a meeting, "Where is the network?" And he threw a floppy disk at the guy. Jobs was a sneaker-net guy up until the last instant.
DH: In fact, you really do not want the Library of Congress on your desk. What you want is to be able to get at the Library of Congress from your desk. The reason you don't want it on your desk is because as it gets out of date you have to worry about maintaining the Library of Congress!
AK: But there actually is a sinister part to your vision. It's hard to change information in books, but if we have everything online, then a somewhat untrustworthy group of people controlling the thing - which I think is what we have - gives us 1984.
DH: But you overestimate how much they're in control. You know, there was always this argument that information processing technology would be the tool of totalitarianism. I think that if you look at what happened, information processing technology was the downfall of totalitarianism.
AK: Well, sure, I think people will be able to have, and will want to maintain, their own archives, although they won't be as large as centralized mass storage.
DH: Look, batteries are still useful, even though you can get electricity from a plug. Sure you want some money in your pocket, but mostly you keep your money in the bank. There'll be people who keep their money in their mattresses, and there'll be people who keep their data in their mattresses. Your home is not a terribly convenient place for storing data. By and large you want your data to be where you use it: You want it in your office, you want it when you're on the airplane. Having your own home computer is kind of like having your own home electric generator.
AK: See, you were so young in the '60s, you don't remember that there was that whole impulse of wanting to go off to some Oregon farm with a couple of wind generators.
DH: Yeah, I caught the tail end of that. But it turns out that there are just a lot of advantages to centralization. Once the network is really in place, and you have big parallel computers that hold the data, that's what's going to make the home robot practical. Because if you figure how big a computer you need for a home robot, it is quite substantial. You want it to be able to hook up your VCR to your piano, and to do things like that, which require a lot of specialized knowledge. It's much more practical if you imagine that robot with just a little cellular phone to call up some big database. Because most of the time the home robot is just moving from A to B, and that can be done with a 4-bit microprocessor. But occasionally it needs to process a picture and make some big decision, like whether to throw away the dollar bill it runs across on the floor while it's vacuuming. And that's exactly the point where it wants to be able to ask for help from some big computational facility. Of course, you'll get charged an extra penny at the end of the month for the computation.
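Hillis's division of labor - a trivial local controller for getting from A to B, with occasional metered calls out to a big shared facility - is easy to make concrete. In the sketch below, everything (the facility class, the per-query billing rate, the "dollar bill" heuristic) is hypothetical, invented purely to illustrate the architecture he describes:

```python
# Toy sketch of Hillis's home robot: cheap local control loop,
# with expensive decisions deferred to a remote "big computer".
# The remote facility is simulated in-process here; in reality it
# would be a network call, metered and billed per request.

BILL_PER_QUERY = 0.01  # the "extra penny at the end of the month"

class RemoteFacility:
    """Stands in for the big shared computer on the network."""
    def __init__(self):
        self.charges = 0.0

    def classify(self, image):
        self.charges += BILL_PER_QUERY
        # Pretend heavy image analysis: is this object worth keeping?
        return "keep" if "dollar" in image else "discard"

def vacuum(route, facility):
    kept = []
    for cell in route:
        if cell == "floor":
            continue                        # trivial case: local logic suffices
        decision = facility.classify(cell)  # hard case: ask the network
        if decision == "keep":
            kept.append(cell)
    return kept

facility = RemoteFacility()
found = vacuum(["floor", "dollar bill", "floor", "dust ball"], facility)
print(found, round(facility.charges, 2))  # → ['dollar bill'] 0.02
```

The point of the pattern is that the robot's onboard hardware only has to be good enough for the common case; rare, hard decisions ride over the network.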
AK: I was thinking about ecological computing. When I was working with computers in the late '60s, all of the computer power on Earth could fit into a bacterium. The bacterium is only 1/500th of a mammalian cell, and we have 10 trillion of those cells in our bodies. Nothing that we have fashioned directly is even close to that in power. Pretty soon we're going to have to grow software, and we should start learning how to do that. We should have software that won't break when something is wrong with it. As a friend of mine once said, if you try to make a Boeing 747 six inches longer, you have a problem; but a baby gets six inches longer ten or more times during its life, and you never have to take it down for maintenance.
DH: There are a couple of things that are going to get us to that ecological computing. If you look at the way we design software right now, we basically use the same methods we used for, say, designing a motorcycle. Engineering has one technique: you break a problem into parts, you define the interactions between those parts, and then you apply the same method to each part. So all you can build with engineering are these nice hierarchical things that have good, well-defined interactions. But if you look at a biological organism, it's a very different structure. You end up with systems that are infinitely more resilient. As you say, they can grow by 10 percent and it doesn't matter much. People's minds, which are surely very complicated compared to any software program, don't crash. When I first came into the MIT Artificial Intelligence Lab, it was during the golden days, when language programs were sort of working and it looked like if you just kept heading in that same direction you could engineer something that thought. But what happened was, we reached a wall where things became more fragile and more difficult to change as they got more complex, and in fact we never really got much beyond that point. I mean, the state of natural language understanding today is not a whole lot advanced, in terms of performance, above what it was back then. Now, you could conclude from that that artificial intelligence is just an impossible task. Marvin [Minsky], who still imagines engineering AI, has certainly come to the conclusion that the brain is a very complex kludge. So you might conclude that we can never build one. But you can also conclude that the techniques we're using to approach AI simply aren't powerful enough.
AK: Well, the problem is that nobody knows how to do it the other way. But that doesn't mean you shouldn't try it.
DH: I think another way is going to be the only way it's possible. If we're ever going to make a thinking machine, we're going to have to face the problem of being able to build things that are more complex than we can understand. That means we have to build things by some method other than engineering them. And the only candidate that I'm aware of for that is biological evolution. But the problem is, as soon as you start doing that, you start realizing that the story that you were told in school about biological evolution is way too simple.
AK: Right. It was fortunate that they didn't have better instruments in the '50s, or they never would have gotten DNA. It was too simple. They didn't know about introns, and they didn't know about all this other stuff. It looked like a very simple pathway.
DH: The thing about biology is you start discovering that any story is too simple. The one I like best is the one about the grayling moth. The grayling moth used to be one of the classic examples in cybernetics. There was a well-understood neural circuit from its eyes to its wings, so that when a moth got startled, it balanced the amount of light on its eyes by flapping the right wing more than the left. This caused the moth to fly in a straight line toward the moon, or in circles around lights when it got frightened. It was the classic example of a biological servo-mechanism for a long time, until somebody discovered that in fact only female grayling moths work this way. A male grayling moth works completely differently: when it gets startled, it looks around for the nearest female and follows her!
AK: It's biological parsimony. Why bother evolving it in both sexes?
DH: But I guess the lesson biologists learned is that every time you come up with a simple story - this does this, this works this way - it's actually much more complicated than that. Biologists are dealing with something so much more complicated than what we understand. If you look at biology as a matter of adjusting protein sequences, then I think you miss the interesting part of what evolution is. But if you believe that morphogenesis is the critical thing, then the regulatory sequences are much more important than the actual proteins.
AK: When I was deeply into biology, I was fascinated by embryology. It's just unbelievable how it works.
DH: When people look at genes, they're sort of looking at the instructions in a structure. But the evolution of that structure is much more interesting than the evolution of the genes themselves. Biologists call that the evolution of evolvability. You know, there's a funny political thing that's going on in biology right now, too, which is that biologists, or the good ones, really know how big a gap there is between the theory of evolution and the phenomenon of evolution -
AK: But they don't dare say it because it would be grabbed onto by the fundamentalists.
DH: That's right! So there's a little bit of an unspoken agreement that we don't talk about that in public. But now something is new on the scene, which is that we actually have the ability to do experiments on evolution. Within the computer, we can run populations for hundreds of thousands of generations, or even millions of generations, and watch the process of evolution. We can go in and look at the history, and we don't have to worry about the incompleteness of the fossil record. In fact, we can look at the genetics of it. We can look at what the encoding function is, going from genotype to phenotype, so we can study things like morphogenesis. As soon as you do this, you discover that the effects that are important are very different from the effects that have normally been studied in evolution. It's the classic example of what happens when you actually try something, as opposed to philosophizing about it. There's more: there is this guy who is evolving RNA molecules that do catalysis. He generates a whole bunch of random RNA and then arranges for the molecules to bind if they are capable of doing this catalysis. Then he filters them, so the ones that do the right thing are overrepresented in the mixture. Then he amplifies those few with DNA techniques, harvests a generation, and repeats. At the end of it he gets very specific molecules that do specific things. So he gets evolution in a test tube, literally, now. There's no reason why this process couldn't be automated somehow. That's cool - it's like being around when they were making the first transistors. You could sort of see integrated circuits coming, even though they weren't quite ready. You wanted to rush out and try to build a computer out of them. But that evolution technology is about to get to the point where it starts positively feeding back on itself.
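The generate-filter-amplify loop Hillis describes maps directly onto a simple genetic algorithm. The sketch below is only illustrative: a string-matching fitness function stands in for the catalytic assay, and none of the parameters come from any real experiment:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"  # stand-in "function" we select for
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(genome):
    # Score = number of positions that already do "the right thing".
    return sum(1 for a, b in zip(genome, TARGET) if a == b)

def mutate(genome, rate=0.02):
    # Copying with occasional errors, like the amplification step.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

def evolve(pop_size=200, generations=1000, seed=0):
    random.seed(seed)
    # Generation 0: a pool of random "molecules".
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            return gen, pop[0]
        # Filter: keep the top 10%; amplify: refill with mutated copies.
        survivors = pop[: pop_size // 10]
        pop = [mutate(random.choice(survivors)) for _ in range(pop_size)]
    return generations, max(pop, key=fitness)

gen, best = evolve()
print(gen, best)
```

Running populations for thousands of such generations, and keeping the full history, is exactly the kind of evolution experiment a computer makes possible and a test tube does not.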
AK: I think this is one of the greatest intellectual happenings in computer science. You know, computer science inverts the normal order: in normal science you're given a world and your job is to find out the rules. In computer science, you give the computer the rules and it creates the world. And so we have the reductionist dream: we can build the whole universe from just one principle.
DH: And in fact it's no coincidence that as physicists get better and better at taking things apart, the parts start looking more and more like computer science. Because computer science sort of started at the bottom and is putting everything together by building up. It used to be that the complexity of what we built was limited by our ability to handle materials. Mechanical machines are so complicated that after a while the parts break and the tolerances slip and so on. But when you build something with software, particularly now that you have giant computers, you're really limited only by your imagination. The parts have absolutely perfect tolerances, and you can have as many of them as you want. So it's like a tinker-toy set with an infinite number of pieces that never fall apart.
AK: Yes! At some point, unless you are just a workaday carpenter-type programmer, you start getting interested in complexity. When you get interested in anything you start looking around for analogies; most scientists do, that's one of the best routes. And of course the analogies don't always map across. Whenever you spend long hours getting a bug out of a simple program that came out of some nonlinear interaction, you realize that any decent system in biology would have damped it out right away. You realize, oh!, everything is connected to everything else, and it's connected instantly and strongly.
DH: So everything interacts with everything else in biology, but somehow it seems to make the system robust instead of fragile.
AK: Right. It's hard to imagine that anybody who is interested in complexity wouldn't start looking at biology, because there isn't anything else anywhere close to it. Classical mathematics sort of checks out when you get into nonlinear phenomena, or even just simple recursive functions. Some civilizations had bricks throughout their entire existence and never figured out the arch, because the arch is a nonlinear organization of bricks. To me that's the most interesting thing: that in matters of complexity, architecture dominates material.
DH: Well, let's say that we do solve this problem of making intelligence by a process of evolution. Then I think the philosophers and the religious people will be perfectly happy, because they'll be able to say, "Well, ya know, just like God did it with chemistry, God did it again. God created us all in this and it wasn't something that humans engineered." And I think that they'll get very comfortable with the idea very quickly.
AK: But as you said earlier, it's possible that nobody will be able to understand the result.
DH: We'll end up with intelligent beings and not be able to tell any more about how they think than we can tell about how we think. And I think that once the bishop has had a long conversation with them, it will be a very natural step to extend moral law to them. I don't think this will cause any problems with the basic tenets of religion or philosophy. Consciousness is just a stupid hack. We have a lot of specialized hardware to code and decode grunts - conversation. Presumably you've had this experience of somebody explaining something to you and you misunderstand them, but your misunderstanding is actually much better than what they were trying to explain to you! That's taking advantage of your understanding hardware. Well, it turns out, since you've got all this hardware sitting around, you use the following stupid hack: Whenever you're thinking, you play the idea out on yourself and you explain it to yourself in hopes that you misunderstand it. You compress it into sort of this encoded representation, and that compressed representation is consciousness. In fact if you disconnected it, you would only get slightly stupider. But not so as anybody would notice.
AK: Did you ever read that book called The Origin of Consciousness in the Breakdown of the Bicameral Mind, by Julian Jaynes? He claims that we didn't even become aware of consciousness until recently. It's the best book I've ever read that couldn't possibly be true.
DH: Years ago when you first started talking about dynabooks and networks and things like that, everybody sort of looked at you like you were a little bit crazy. They said, "Well, maybe something a little bit like that might happen, but surely he's exaggerating!" Enough of your predictions have already come true, that I'm sort of interested in hearing your next set.
AK: Maybe I'm running out! Actually, the commercial world is so stodgy about carrying these things out that I'm still relevant. My predictions were pretty cold-blooded, because they came from two completely distinct areas. In the '60s, people who were thinking about personal computers tended to think of them as being like automobiles, in contrast to IBM, which was like the railroad. So there was this vehicular metaphor, with everyone trying to become Henry Ford. When I visited Seymour Papert at MIT I saw children doing something that couldn't fit into the vehicular metaphor. I was searching for something to relate it to, and I was thinking, Well, the one thing that we don't withhold from children that adults do is books. So I said, What if the computer were like a book? And that got me thinking that way. The other kind of extrapolation is more interesting. I had read Gordon Moore's papers on where he thought silicon, particularly MOS silicon, was going to go. It was going to be a whole different ballgame. If you knew enough physics to read those papers, and you were a little bit romantic, then you could easily extrapolate and see that this wasn't going to be science fiction at all. IBM and DEC couldn't see it, because they couldn't imagine it would mean anything other than being able to build better mainframes of the kind they were already doing. But to me it meant that we were going to have very small machines, and millions of users, and that all this was going to force everyone to care about the user interface. I'm sure I would have given up on the idea if Moore's Law hadn't been there. As soon as I could see it was going to happen, and in about ten to fifteen years or so, it became sort of the Holy Grail.
DH: Okay. Now, 25 years later, your Holy Grail has become the enemy. We're now stuck with the dynabook metaphor and everybody is thinking in terms of notebooks, laptops, and so on. That metaphor has become so powerful it's stopping people from seeing the new stuff.
AK: Yeah, it's totally obsolete. But you know what didn't happen? Neil Postman had a good analogy. He said when television first appeared, nobody knew what to do with it. For a few years they put on live plays, and it was some of the best modern drama that's been done in this country. And then it became a commercial thing and went the way of Laverne and Shirley. I think the period of the '70s, when Papert was doing his stuff and we were doing our stuff at Xerox PARC, was kind of a "Playhouse 90" of computing, before it got commercial. Now, almost nothing that Papert and I thought was important about these machines - or even what you thought was important - is manifested anywhere out there. It's all blind paper imitation, and, you know, it's pathetic. In the commercial world you have this problem that the amount of research you can do in a company is based on how well your current business is going, whereas there actually should be an inverse relationship: when things are going worse, you should do more research. There's a tendency to get drawn into short-term concerns.
DH: Does it seem to you like our society has been getting more and more focused on the short term? It seems to me like when I was growing up in the early '60s people used to talk about what would happen in the year 2000, and now it's 1993 and people are still talking about what will happen in the year 2000. So the future has been kind of shrinking about one year per year for my whole life! People now realize that 2020 is just going to be so different, that they can't even think about it. Whereas in 1960, 2000 seemed like you'd be able to get to it just by extrapolating 1960.
AK: Somebody said the 20th century is the century when change changed.
DH: What's the longest-term project you ever had? Presumably Apple has a five-year plan, but does it have a 50-year plan?
AK: I doubt it. The Japanese have 50-year plans.
DH: Well, I have a design for a clock. The clock is a very large object, about the size of the Great Pyramid or something like that. It's physically very large, and it works mechanically. Maybe it's powered by seasonal temperature variations. The clock ticks once a year. It bongs once a century. And the cuckoo comes out on the millennium. If you start thinking about this clock and what it's going to be like the next time the cuckoo comes out, it will cause you to start thinking of the year 3000 as a real part of the future. Just the existence of this clock will cause people to stretch their minds past that mental barrier of the millennium.
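Hillis's tick-bong-cuckoo schedule is precise enough to enumerate. A toy sketch (purely illustrative, not any actual clock mechanism) of the events over one millennium:

```python
def clock_events(years):
    """Event schedule for Hillis's millennium clock over `years` years."""
    events = []
    for year in range(1, years + 1):
        e = ["tick"]              # the clock ticks once a year
        if year % 100 == 0:
            e.append("bong")      # ...bongs once a century
        if year % 1000 == 0:
            e.append("cuckoo")    # ...and the cuckoo comes out on the millennium
        events.append((year, e))
    return events

millennium = clock_events(1000)
print(sum("bong" in e for _, e in millennium))    # → 10
print(sum("cuckoo" in e for _, e in millennium))  # → 1
```

One millennium: a thousand ticks, ten bongs, and a single appearance of the cuckoo.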
AK: I don't think Xerox PARC would have actually happened if it hadn't been for the long view of ARPA. This is folk wisdom by now, but I think ARPA was the best thing that ever happened to the US as far as funding stuff.
DH: One of the things that made ARPA so powerful was nobody took it seriously.
AK: Including the funders.
DH: That allowed them to do risky things, because nobody was afraid of them succeeding. Now people have realized that ARPA does actually succeed in its goals, and that's a very scary prospect.
AK: Now they want it to succeed. It's like the goose that laid the golden eggs. When you start forcing the process, it kills it. ARPA succeeded because they basically funded people instead of projects. They didn't really care what the people were doing. They figured neat people would do neat things. Ivan Sutherland was only 26 when he went to ARPA in '62 or '63 or so. Ivan's idea was, "Well, neat people can do things; let's find neat people and see what happens." And, boy, it's really hard to find any funding like that nowadays. The stuff I've always done has had a very low chance of success. They don't look like engineering projects, that's the biggest problem. So, I personally miss the whole ARPA set-up. People like you, Danny, did not have to go out and start a company in order to get funding. You weren't originally planning on becoming a mogul, right? You just had this great thing you wanted to do?
DH: The problem with what we're doing now is that it is respectable. People like IBM are now doing it, too. So it's time to start doing something with a low chance of success.
AK: Well, I think your genetic stuff has got a good chance of that!
DH: Well, I think the time has come for us to go into the real estate business in cyberspace. I want to build a place that's accessible from the network and let the hackers homestead there. Let's see what they create. I want to do this as a real estate deal and get somebody to fund it on the grounds that they'll just own a lot of the real estate there. And these hackers will make it valuable in exchange for getting some plots of land. I'm a believer that this ought to be done commercially. For a while you support the economy by hiring them to do useful things in the universe - touring people around, building the library, or some of the basic community facilities for accessing data and seeing what's going on. But then you allow them to set up their own businesses of creating tools or creating personas, and so on. At first you'd probably have to have a few draws, like some entertainment.
AK: So you let people homestead it! They would be grubstaking a certain percentage of useful things. How much would it cost?
DH: I think you would do it like you did the Altos and so on at PARC. First you do something that's extrapolated a little bit using technology that's not quite economical. And then you wait for the world to catch up with you technologically. Soon all of this shopping- and video-on-demand infrastructure will be in and they'll use it just to deliver the old medium. At that point people will desperately look around for other things to do with it. Somebody who recognizes that sequence of events could put up a few million dollars now to start getting ready for the moment when they're going to need the content to put on that new thing. How expensive was the dynabook project at PARC?
AK: Well, my yearly budget back then, around '73, was like 500K. That was pre-oil dollars. We had some geniuses, so we didn't need a lot of people. Chuck Thacker did the first Alto in just three and a half months all by himself with a couple of technicians, so he just sort of threw shit at the wall and it worked. I'd say that only about twenty people did the six basic inventions: the Alto, the Ethernet, the user interface, object-oriented programming, the laser printer, and file servers.
DH: I agree. I don't think you want to do this with a lot of people. The whole idea of this would be leverage. If you give enough people some stake in it, then I bet for every dollar of effort you pay for, you'd get $100 worth of effort from people who are just doing it for a stake in the result. Basically what you're doing is building a frontier. If we're right that this is the next great thing, then you can attract the right twenty people. It would be fun to try to build this frontier. One important point - and I've been worrying about how to convince the sponsors of it - is that you can't make a frontier without outlaws, unfortunately. It's a necessary part of the ecological structure.
AK: Yeah, it's like the most important thing about language is that it lets you lie. You'd never make any progress otherwise.
DH: So, I think you have to build in some good cryptographic features, so that you can have privacy and the ability to exchange information confidentially.
AK: A lot of scientists today are still secret Platonists. They think that somewhere in the universe, the forms are. You know, I just love steam engines. Their builders didn't try to hide the parts of the things; they actually decorated them. But it wasn't like the Pompidou museum in Paris, where they take something that was hidden and show you what was underneath. That's ugly. The steam engineers made things so the parts were part of the beauty of the machine. My belief is that in every piece of software you should be able to pop up the hood and see a rendering of what's underneath. Today there's nothing interesting to see in the hardware, and the software is closed off from you.
DH: You don't learn much by taking apart a dynabook, except not to do it. It had never hit me before, but the current generation of kids don't even get to hack the operating system.
AK: So they have to make up superstitions and myths about it, 'cause that's the only thing they can do.
DH: So what are you going to do next?
AK: There's this interesting interplay between what you might call talent and how much of a meta-system we can put down on top of meager talents to learn how to do things. Two recent tennis champions, Ivan Lendl and Chris Evert, were not natural athletes. They were people who just learned how to play tennis. Some of the most natural tennis players, like Nastase and Agassi, only do well when things are going well - they don't have learned skills to fall back on. So in any given population maybe 5 to 20 percent have a natural hacker sort of talent; they are often not helped by pedagogy. Pedagogy is about getting the other 80 percent of people within hailing distance. So I've been very interested in taking some very important ideas and wondering how you get them into a state where the 80 percent can actually learn them in an operational way. And that's why I keep coming back to computers.
DH: Ideas like what?
AK: Like feedback. Like the whole idea of how you can take things apart without ruining what they are. And the idea of universality from simple principles, that you don't need much to get everything.
DH: The question that I keep asking myself is, where is the next frontier? Where is that place that a new world is being constructed? Do you know any candidates?
AK: I think the frontier has to do with human learning. Knowledge is not completely relative. There are a hundred or so powerful ideas that basically mean the difference between life and death, and I think one of our major jobs should always be to try to get as many people enfranchised into them as possible.
DH: But in fact, if you look at what's happening, it seems just the opposite. We're very much heading toward a two-class society, where either you're somebody who sort of knows about, or feels empowered to deal with all of the complexity in society, or you're one of the people that is a victim of it and is just on the receiving end of it all.
AK: And I think the gap actually gets bigger as the leading edge of knowledge gets less intuitive.

