We conclude the transcription of Dr. Koene's lecture at the Global Future 2045 conference in Moscow.
Morphology & Function
If we're looking at scope and resolution, then there are sometimes shortcuts that we like to take, because it's a very difficult problem. There are two different kinds of shortcuts that can be contemplated. One of them is to look only at the function. Take neurons, record from the neurons, and if you record from many neurons at the same time, then instead of looking at how they're connected by actually looking at those connections, you can derive connectivity, because you see how they interact. So if you use something like Granger causality, for example, you can try to create a functional connectivity map.
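As a toy illustration of this idea (an editorial sketch, not anything from the lecture): pairwise Granger causality between simultaneously recorded activity traces can be turned into a functional connectivity matrix. The neuron count, simulated data, and significance threshold below are all assumptions.

```python
# Toy sketch: derive a functional connectivity map from simultaneous
# recordings via pairwise Granger causality. All sizes and thresholds
# are illustrative assumptions.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n_neurons, n_samples = 4, 500
rates = rng.normal(size=(n_samples, n_neurons))
rates[1:, 1] += 0.8 * rates[:-1, 0]            # neuron 0 drives neuron 1

connectivity = np.zeros((n_neurons, n_neurons), dtype=bool)
for src in range(n_neurons):
    for dst in range(n_neurons):
        if src == dst:
            continue
        # Column order is [effect, cause]: test if src Granger-causes dst.
        res = grangercausalitytests(
            np.column_stack([rates[:, dst], rates[:, src]]),
            maxlag=2, verbose=False)
        p = res[2][0]['ssr_ftest'][1]          # p-value at lag 2
        connectivity[src, dst] = p < 0.01      # assumed significance cutoff

print(connectivity.astype(int))                # the 0 -> 1 edge should appear
```

A real recording would of course be spike trains or calcium traces rather than Gaussian noise, but the structure of the inference is the same: interactions observed over time stand in for directly observed wiring.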
The problem with this is that you can easily mislabel function. If you want to study a complex system that has a lot of elements inside of it, and you want to see everything that it may have remembered and how it may respond, you have to observe it over a very long period of time, perhaps since birth.
What you can do is look just at the morphology of a neuron, at how it looks. You study many neurons that look the same way. You know they generally have a certain kind of receptor channel. They use certain neurotransmitters. You know that they respond in a certain way, so you have a library where you can map from this morphology to that kind of function. You can map to parameter distributions. Then, if you have a very detailed morphological model, you can make what you see on the right there, which is a compartmental model, where you build a very large model morphologically of what the neuron is like, and each of the little compartments is basically an electrical circuit model. You set the parameters according to what you find in your library, and then you hope that the entire system works.
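A minimal sketch of what "each compartment is an electrical circuit" means in practice: a passive multi-compartment cable, each compartment an RC circuit coupled to its neighbors by an axial conductance, integrated with forward Euler. The membrane parameters here are generic textbook-style values, not entries from any real morphology library.

```python
# Minimal passive compartmental model: each compartment is an RC circuit
# coupled to its neighbors through an axial conductance. All parameter
# values are generic illustrative assumptions.
import numpy as np

n_comp  = 50        # compartments along a dendrite
C_m     = 1e-10     # membrane capacitance per compartment (F), assumed
g_leak  = 1e-9      # leak conductance per compartment (S), assumed
g_ax    = 5e-8      # axial coupling conductance (S), assumed
E_leak  = -70e-3    # leak reversal potential (V)
dt      = 1e-5      # integration time step (s)

V = np.full(n_comp, E_leak)
for step in range(50000):                      # ~0.5 s of simulated time
    I_inj = np.zeros(n_comp)
    I_inj[0] = 5e-11                           # steady current at one end
    # Axial currents from neighboring compartments (sealed ends).
    I_ax = np.zeros(n_comp)
    I_ax[1:]  += g_ax * (V[:-1] - V[1:])
    I_ax[:-1] += g_ax * (V[1:] - V[:-1])
    V += dt * (I_inj + I_ax - g_leak * (V - E_leak)) / C_m

print(f"attenuation along cable: {V[0]*1e3:.1f} mV -> {V[-1]*1e3:.1f} mV")
```

A real compartmental model adds active voltage-gated conductances per compartment, which is exactly where the morphology-to-parameter library comes in.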
Technical Obstacles in Measurements
Now, the problem with this is that measurement is never quite precise. Also, we need to understand that that library may not be a one-to-one mapping: is there always just one morphology that maps to one type of function? It may break down there, in terms of error. When we talk about error in neural networks, one of the things they are famous for is that they deal well with error. If you have random errors, this system is fairly robust. But that's not true for systematic errors or for cumulative errors.
With electron microscopy, for example, areas that are out of focus, or where you don't know exactly what your resolution is, can cause you to measure things wrong. The same when you're cutting slices of brain: you have a knife that has features in it, which give you characteristic errors, and those accumulate. Now, it can be very difficult to tune a large system like that. If you just try to tune a system with 100 billion neurons in it, this becomes an optimization problem that is much too big even for quantum computers, when they eventually come along.
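A quick way to see why cumulative errors are so much worse than random ones (a toy simulation with made-up numbers, not from the talk): random jitter in slice thickness grows only like the square root of the slice count, while even a tiny systematic knife bias grows linearly and never averages out.

```python
# Toy comparison of random vs. systematic (cumulative) slicing error.
# All numbers are illustrative assumptions.
import numpy as np

rng      = np.random.default_rng(1)
n_slices = 10000
jitter   = rng.normal(0.0, 1.0, n_slices)   # random thickness error (nm), zero mean
bias     = 0.05                             # tiny systematic knife bias (nm)

random_depth_error     = np.cumsum(jitter)                 # grows ~ sqrt(n)
systematic_depth_error = np.cumsum(np.full(n_slices, bias))  # grows ~ n

print(f"after {n_slices} slices:")
print(f"  accumulated random error:     {random_depth_error[-1]:+8.1f} nm")
print(f"  accumulated systematic error: {systematic_depth_error[-1]:+8.1f} nm")
```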
We can look at this, or at least explore it, using something like the modeling environment shown here. This is a modeling environment that I use to explore what emergent functions pop right out of the structure as the neurons grow and connect into networks. But it can also be used to generate models where we know exactly what we're putting in, in terms of structure and in terms of function. Then we can look at what types of error tend to occur and find out how bad it is when you get these cumulative errors.
But in the end, what you really need to do is simplify the problem. This is a problem of system identification. What you can do in system identification to make it tractable, to make characterizations possible, and to make the whole problem computationally feasible, is to make it a smaller problem. So simplify it. You want to take subsystems out of there where it's very easy to describe the input/output functions, where it's very easy to get a characterization. That means you have to have those functional responses at that level. So you need high-resolution functional characterization, functional measurements, not just structure. That's what I'm trying to get to: you need both function and structure.
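As a sketch of what "characterize a small subsystem from its input/output behavior" can look like, here is one standard textbook technique (not the lecture's specific method): fitting a finite impulse response kernel by least squares from recorded input and output.

```python
# Sketch of small-scale system identification: recover a subsystem's
# input -> output kernel (FIR model) by least squares. The "subsystem"
# and noise level here are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
true_kernel = np.array([0.5, 0.3, -0.2])       # hidden subsystem, assumed
u = rng.normal(size=2000)                      # measured input drive
y = np.convolve(u, true_kernel)[:len(u)] + 0.01 * rng.normal(size=2000)

k = len(true_kernel)
# Regression matrix of lagged inputs: y[t] ~ sum_j h[j] * u[t-j]
X = np.column_stack([np.roll(u, j) for j in range(k)])
X[:k, :] = 0                                   # discard wrap-around samples
h_est, *_ = np.linalg.lstsq(X, y, rcond=None)

print("estimated kernel:", np.round(h_est, 3))  # ~ [0.5, 0.3, -0.2]
```

The point of the simplification is exactly this: for a small enough subsystem with observable inputs and outputs, the characterization problem becomes a well-posed, solvable fit rather than an intractable global optimization.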
Present Projects
There are real projects going on to do this for whole brain emulation. You see here the four yellow requirements, and on the bottom, I didn't put in the projects for the leftmost, but that's OK. You see on the bottom a number of projects that are going on. I'm only going to talk about two of them because of time constraints today, but they're all very interesting. I'm quickly going to run through what they're all about.
If you want to get the structural connectome out, the obvious thing to do is to say: this is structure, so it's spatial, it's something you want to look at. So what you can do is slice it really thin, look at the brain through an electron microscope, and reconstruct. I'll talk a bit about that.
Connectomics
But the alternative is that what you really want to get at is the connections and not all of this other messy stuff. So Anthony Zador at Cold Spring Harbor and Ed Callaway are working on methods that use a virus to transfect neurons en masse and deliver unique DNA barcodes to the presynaptic and postsynaptic sites. Then, when you pull out the tag that connects the presynaptic and postsynaptic sites, you pull out both barcodes. It's like pulling out pointers that point to each other, saying this neuron is connected to that neuron and that neuron is connected to this one. It's an interesting approach.
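The "pointers pointing at each other" analogy maps naturally onto a data structure. A hypothetical sketch (the barcodes and reads below are invented) of turning sequenced barcode pairs back into a wiring list:

```python
# Hypothetical sketch of the barcode idea: each sequenced read yields a
# (presynaptic barcode, postsynaptic barcode) pair, and collecting the
# pairs reconstructs the wiring diagram. All reads here are invented.
from collections import defaultdict

reads = [
    ("ACGTAC", "TTGACA"),   # neuron ACGTAC synapses onto neuron TTGACA
    ("ACGTAC", "GGCATT"),
    ("TTGACA", "GGCATT"),
    ("ACGTAC", "TTGACA"),   # repeat reads add evidence for the same edge
]

connectome = defaultdict(int)
for pre, post in reads:
    connectome[(pre, post)] += 1   # read count ~ confidence in the edge

for (pre, post), count in sorted(connectome.items()):
    print(f"{pre} -> {post}  (supported by {count} read(s))")
```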
Demux-Tree
Functionally, you also have a few different ways of going about things. There's a general idea of hierarchical measurement that we call a Demux-tree, for demultiplexing, and a specific implementation of that where Rodolfo Llinás came up with the idea of making nanowires that you can push through the capillary system in the brain so that they reach every neuron and get measurements from them. The nanowires actually exist; they've been developed at the New York University School of Medicine. But there is still no way of actually getting them to branch like that. Also, they consume a lot of volume in the brain, which is a problem.
Molecular Ticker-Tape
On the other hand, there is a biological approach. The good thing about biology is that you can easily apply it en masse, in large amounts, and it's already at the right scale. So there's this thing called the molecular ticker-tape, a project that Ed Boyden at MIT, George Church at Harvard University, Konrad Kording at Northwestern, and Halcyon Molecular are also working on. The idea is that inside the neurons you can record voltage events with voltage-gated channels, and you can affect the writing on a strand of DNA, the DNA being used as a medium for putting information on, telling you when an event occurred. Then all you have to do is pluck out that DNA and sequence it, and you know when activity has been going on, and you can do this in many neurons at the same time. The problem with this approach is that biology is still difficult for us to work with as an engineering project. There's a lot of random searching around for the right tools to use. What we'd really like to do is work at the cellular scale and be able to engineer to our hearts' delight. That's kind of a third approach, which I'll talk about specifically. You see it already shown as a picture over there.
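To make the ticker-tape idea concrete, here is a purely hypothetical decoder: suppose voltage events raise the polymerase's misincorporation rate while it copies a known template, so runs of mismatches along the synthesized DNA time-stamp the activity. Every rate, sequence, and threshold below is invented for illustration.

```python
# Purely hypothetical ticker-tape decoder. Assumes a polymerase copies a
# known template at a steady speed and voltage events raise its error
# rate, so clusters of mismatches mark when activity occurred.
import numpy as np

rng = np.random.default_rng(3)
bases = np.array(list("ACGT"))
template = rng.choice(bases, size=3000)
rate_nt_per_s = 10.0                     # assumed polymerization speed
base_err, event_err = 0.005, 0.30        # assumed error rates
event_windows = [(50.0, 55.0), (180.0, 184.0)]  # true event times (s)

copy = template.copy()
for i in range(len(copy)):
    t = i / rate_nt_per_s
    p = event_err if any(a <= t <= b for a, b in event_windows) else base_err
    if rng.random() < p:
        copy[i] = rng.choice(bases[bases != copy[i]])   # misincorporation

# Decode: slide a window over the mismatch track and flag dense regions.
mismatch = (copy != template).astype(float)
w = 40
density = np.convolve(mismatch, np.ones(w) / w, mode="same")
hits = np.where(density > 0.15)[0] / rate_nt_per_s      # flagged times (s)

if hits.size:
    splits = np.where(np.diff(hits) > 1.0)[0] + 1       # split into clusters
    for cluster in np.split(hits, splits):
        print(f"event detected around {cluster.min():.0f}-{cluster.max():.0f} s")
```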
[Image: IBM SyNAPSE project]
Tape-to-SEM
So: volume microscopy, or Tape-to-SEM, which is the approach of actually looking at the tissue to get the structure out. This is something that Winfried Denk has been doing in Germany for a while with something called SBF-SEM (Serial Block-Face Scanning Electron Microscopy). He takes a block of brain tissue, takes an image, then ablates off a piece of the surface, takes another image, image, image, and goes all the way down. The problem with this approach is that it doesn't work very well for large volumes.
[Image: different microtome knives]
But Ken Hayworth has been working at Harvard for many years on building a system that can deal with the whole volume of the brain. That's called Tape-to-SEM, or it used to be called the ATLUM, the Automatic Tape-Collecting Lathe Ultramicrotome. A diamond knife cuts off pieces of the block of brain tissue and puts them on a tape. The tape can be stored, as you see on the right over there, although it looks a bit messy. You have random access to all of those pieces, so that you can do microscopy on them at any time.
Now, when you look at what these slices actually look like: inside the red square here, on one of the images, you can see a synapse approaching a dendrite, that dark sort of area where the two meet, and we can reconstruct the morphology by stacking many of these slices on top of one another. You can also see arrows pointing to circles; those circles are vesicles containing neurotransmitters. You can even get an idea of the chemical strength of the synapse just by looking at this.
Here you see an example of a reconstruction like this, where you eventually get the whole cell. With the help of Winfried Denk, Briggman et al. and Bock et al. recently published two papers in Nature in 2011 in which they explored this method: Briggman et al. worked in the retina, Bock et al. in visual cortex and the visual system. What they did, for example in Briggman's case, was first look at the retinal cells functionally, examining how they operate, which receptive fields they have, what they were sensitive to. Then he did the serial reconstruction, used that to predict what they would be sensitive to, and found that they could predict that function from the structure, verifying that this is indeed possible when you know a lot about the system.
So, we'd like to be able to do something similar functionally. Something that we are rather good at is working with integrated circuit technology. That's something we have experience with: we know how to make hierarchies of systems that work, how to do communications, how to aggregate data, how to do measurement, and so on. When you look at what's possible right now with the resolution that's available in integrated circuit technology, if you want to get down to the cellular level and you make a circuit that's the size of a red blood cell, so 8 microns, you can build something that can be powered, for example, by distributed infrared radiation at wavelengths between 800 and 1,000 nanometers. That is in what we call a transparency window for tissue, so it doesn't simply get absorbed by the tissue. You could use that for communication or for power. You can also do passive communication: it's like RFID tags, which work at radio frequencies, but if you want to do this in tissue, again you can go to infrared. And MIT is doing this specifically for these kinds of ICs that they want to use inside the body.
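One reason that 800-1,000 nm band is convenient for powering silicon devices (a back-of-the-envelope check added here, not from the talk): photon energies there still exceed silicon's ~1.12 eV band gap, so an on-chip photodiode can harvest the light.

```python
# Back-of-the-envelope check: photon energies across the 800-1,000 nm
# tissue transparency window vs. silicon's band gap.
h  = 6.626e-34        # Planck constant (J*s)
c  = 2.998e8          # speed of light (m/s)
eV = 1.602e-19        # joules per electronvolt
si_bandgap_eV = 1.12  # silicon band gap at room temperature

for wavelength_nm in (800, 900, 1000):
    E_eV = h * c / (wavelength_nm * 1e-9) / eV
    ok = "above" if E_eV > si_bandgap_eV else "below"
    print(f"{wavelength_nm} nm photon: {E_eV:.2f} eV ({ok} the Si band gap)")
```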
So we've got both communication and power, and you can fit about 2,300 transistors on something this size; that's the same number of transistors as in the original Intel 4004 CPU. If you look at today's technology, you can actually make that four times as much and put on as many transistors as were in the guidance system of the original cruise missiles.
Now, you also need to make this work in the body, so it needs to be biocompatible. The easiest thing is to just encase it in a little blob of silicone. But you could also encase it in an artificial red blood cell, which is basically a protein shell that's been constructed, and which can be functionalized so that we can use it in many different ways.
But again, this isn't exactly everything we want. These hubs are complex; they can do data aggregation and all sorts of tasks, but they're still fairly large. They work inside the vasculature, which leads to every neuron, but we'd really like to work outside the vasculature as well, in the interstitial spaces between the cells. For that you need something smaller. This is where it's nice that with integrated circuits we can easily make these kinds of hierarchical systems, or teams.
So you can make, for instance, a chip about 2 microns across, with just enough intelligence to do something simple like sensing or stimulating when required, as long as it's in contact with such a hub. What you end up with is basically a cloud of computation that you can use in the brain concurrently with its activity, in vivo. The nice thing about this, because it doesn't have all those long wires but is instead just nodes, is that even if you had one of those little chips for each neuron in the brain, they would only take up about one cubic centimeter of space, which is about 1/1700 the size of the brain.
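The volume claim is easy to sanity-check (a rough calculation added here; the neuron count and node size are assumed figures, not measurements from the talk):

```python
# Rough sanity check of the "one node per neuron fits in ~1 cm^3" claim.
# Neuron count and node size are illustrative assumptions.
n_neurons    = 1.0e11                 # ~100 billion neurons (common estimate)
node_side_um = 2.0                    # each node ~2 microns across
node_volume_um3 = node_side_um ** 3   # 8 um^3 per node

total_cm3 = n_neurons * node_volume_um3 * 1e-12   # 1 cm^3 = 1e12 um^3
print(f"total node volume: {total_cm3:.1f} cm^3")  # ~0.8 cm^3
```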
Computational Demands
We're not going to go into much detail on this, but if you calculate how much energy the brain uses, how much it takes for one action potential, and how many action potentials typically occur, you can work out what it would take to transfer this to a model like the compartmental model I was describing. For a whole brain emulation at something like 10,000 compartments' worth of detail, you would need about 1.2 exaflops to be able to do the computations. That sounds like a lot right now, but, for instance, the Indian government has already committed 2 billion dollars to building a computer that will run at 102 exaflops in 2017. That would be fast enough for 100 of those whole brain emulations at the same time. The price for this is dropping quickly.
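One illustrative way a figure on that order can be decomposed (every input below is an assumption chosen to land on the same order of magnitude, not the lecture's actual derivation):

```python
# Illustrative decomposition of a ~1 exaflop whole-brain-emulation
# budget. All inputs are assumptions for illustration only.
n_neurons         = 1.0e11   # assumed neuron count
compartments_each = 1.0e2    # assumed average compartments per neuron
update_rate_hz    = 1.0e3    # assumed 1 ms integration steps
flops_per_update  = 120.0    # assumed flops per compartment update

total_flops = n_neurons * compartments_each * update_rate_hz * flops_per_update
print(f"required compute: {total_flops/1e18:.1f} exaflops")   # ~1.2 exaflops
```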
What I'm trying to say, the whole crux of the message, is that these are very concrete requirements and projects. Volume microscopy exists and just needs to be scaled up. Molecular ticker-tape is coming out in 3, 6, maybe 18 months, depending on whether we're talking about a prototype or a real system. The chips will also be made eventually. This is just an example showing you real work that is going on. For example, on the right-hand side there, those are chips that have been integrated inside cells and kept functioning while the cells were still alive.
How does this compare with the biological approach? On the left we see a diagram of where we break down in aging, all the places in biological systems where breakdowns happen. This was made by John Furber. The problem is that all these different connections you see here each require different projects to solve them. It's not like you have one engineered solution for all of them. So it's a really big problem. On the right, when you look at the data acquisition approach, well, it's quite a bit more direct, simpler.
What we do at carboncopies is look at the big picture, because it's very important to look outside the box at the different approaches that are possible. That's how we come across these kinds of projects. We figure out how to put them together, how to get people working together and talking to each other. But also, at this time, because this really is something that's concrete and feasible, it's very important that we get down to the details, like what I am showing here in that corner on the right. We need to design and engineer the systems. It's time to do that. Often you hear people saying it would be pretty cool to do this, cool to do that. That's nice, but we need actual projects going forward.