Morphology & Function
If we're looking at scope and resolution, then there are sometimes shortcuts that we like to take, because it's a very difficult problem. There are two different kinds of shortcuts that can be contemplated. One of them is to look only at the function. Take neurons, record from them, and if you record from many neurons at the same time, then instead of looking at how they're connected by actually tracing those connections, you can derive connectivity, because you see how they interact. So if you use something like Granger causality, for example, you can try to create a functional connectivity map.
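As a rough illustration of that functional shortcut, here is a minimal Granger-style test, a sketch only: the simulated signals, the lag count, and the variance-ratio score are all illustrative assumptions, not anything from the talk. The idea is simply that the past of one recorded signal should help predict another only if there is an (apparent) directed interaction, and thresholding such pairwise scores gives a functional connectivity map.

```python
import numpy as np

def granger_score(x, y, lag=2):
    """Does the past of x help predict y beyond y's own past?
    Returns the ratio of restricted to full residual variance (>1 means x helps)."""
    n = len(y)
    Y = y[lag:]
    # Columns of lagged values of y (the "restricted" model) ...
    own = np.column_stack([y[lag - k - 1:n - k - 1] for k in range(lag)])
    # ... plus lagged values of x (the "full" model).
    both = np.column_stack([own] + [x[lag - k - 1:n - k - 1] for k in range(lag)])
    coef_own, *_ = np.linalg.lstsq(own, Y, rcond=None)
    coef_both, *_ = np.linalg.lstsq(both, Y, rcond=None)
    r_own = Y - own @ coef_own
    r_both = Y - both @ coef_both
    return np.var(r_own) / np.var(r_both)

# Two toy "recordings": y is driven by x with a one-step delay.
rng = np.random.default_rng(42)
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + rng.normal()

score_xy = granger_score(x, y)  # clearly > 1: x's past predicts y
score_yx = granger_score(y, x)  # near 1: no influence in this direction
```

In a real setting you would run this over every ordered pair of recorded neurons and keep the edges whose score exceeds a significance threshold.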
The problem with this is that you can easily mislabel function. If you want to study a complex system that has a lot of elements inside of it, and you want to see everything that it may have remembered and how it may respond, you have to observe it over a very long period of time, perhaps since birth.
What you can do instead is look just at the morphology of a neuron, how it looks. You study many neurons that look the same way. You know they generally have a certain kind of receptor channel. They use certain neurotransmitters. You know that they respond in a certain way, so you have a library where you can map from this morphology to that kind of function. You can map to parameter distributions. Then if you have a very detailed morphological model, you can make what you see on the right there, which is a compartmental model: you build a very large model, morphologically, of what the neuron is like, and each of the little compartments is basically an electrical circuit model. You set the parameters according to what you find in your library, and then you hope that the entire system works.
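A toy sketch of what those compartments amount to, assuming a purely passive RC membrane with made-up parameter values (a real compartmental model would draw its conductances and channel kinetics from the morphology-to-function library just described): two compartments, each a leaky capacitor, coupled by an axial resistance.

```python
# Passive two-compartment neuron. Each compartment is an RC circuit:
# membrane capacitance C, leak resistance R back to rest potential E,
# coupled through an axial resistance Ra. All values are illustrative.
C, R, Ra, E = 1.0, 10.0, 5.0, -70.0   # nF, MOhm, MOhm, mV
dt = 0.01                             # ms, forward-Euler time step
V1 = V2 = E
trace = []
for step in range(int(100.0 / dt)):
    t = step * dt
    I = 0.5 if 20.0 <= t < 60.0 else 0.0          # nA pulse into compartment 1
    dV1 = (-(V1 - E) / R + (V2 - V1) / Ra + I) / C  # leak + axial + injection
    dV2 = (-(V2 - E) / R + (V1 - V2) / Ra) / C      # leak + axial only
    V1 += dt * dV1
    V2 += dt * dV2
    trace.append((V1, V2))

v1max = max(v1 for v1, _ in trace)
v2max = max(v2 for _, v2 in trace)
```

During the pulse the injected compartment charges toward about -67 mV and the coupled one toward about -68 mV; after the pulse both relax back to rest. A whole-neuron model is just thousands of these compartments wired along the reconstructed morphology.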
Technical Obstacles in Measurements
Now, the problem with this is that measurement is never quite precise. Also, we need to understand that that library may not be a one-to-one mapping: is there always just one morphology that maps to one type of function? It may break down there, in terms of error, because when we talk about error in neural networks, one of the things they are famous for is that they deal well with error. If you have random errors, the system is fairly robust. But that's not true for systematic errors or for cumulative errors.
With electron microscopy, for example, you have areas that are out of focus, or where you don't know exactly what your resolution is, so you may be measuring things wrong. The same when you're cutting slices of brain: you have a knife that has features in it, where you have characteristic errors, and those accumulate. Now, it can be very difficult to tune a large system like that. If you just try to tune a system with 100 billion neurons in it, this becomes an optimization problem that is much too big even for quantum computers when they eventually come along.
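The difference between the two error types can be seen in a back-of-the-envelope simulation (all numbers illustrative): zero-mean random errors over n measurements grow only on the order of sqrt(n), while a small constant systematic bias, such as a characteristic knife artifact repeated on every slice, grows linearly with n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # e.g. slice-thickness measurements along a stack

random_err = rng.normal(0.0, 1.0, n)   # zero-mean noise, one draw per slice
systematic_err = np.full(n, 0.05)      # small constant bias on every slice

random_total = random_err.sum()        # ~ sqrt(n) = 100 in magnitude
systematic_total = systematic_err.sum()  # n * 0.05 = 500, dwarfing the noise
```

This is why a network model can shrug off random measurement noise while a 5% systematic bias, tiny per slice, wrecks the reconstruction over a long stack.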
We can look at it, or at least explore it, using something like this. This is a modeling environment that I use to explore what emergent functions pop right out of the structure as the neurons grow and connect into networks. But it can also be used to generate models where we know exactly what we're putting in, in terms of structure and in terms of function, so we can look at what types of error tend to occur and find out how bad it is when you get these cumulative errors.
There are real projects going on to do this for whole brain emulation. You see here the four yellow requirements, and on the bottom a number of projects that are going on; I didn't put in the projects for the leftmost, but that's okay. I'm only going to talk about two of them because of time constraints today, but they're all very interesting. I'm quickly going to run through what they're all about.
If you want to get the structural connectome out, the obvious thing to do is to say: this is structure, so it's spatial, it's something you want to look at. So what you can do is slice it really thin, look at the brain through an electron microscope, and reconstruct. I'll talk a bit about that.
But the alternative is to say that what you really want to get at is the connections, and not all of this other messy stuff. So Anthony Zador at Cold Spring Harbor and Ed Callaway are working on methods that use a virus to transfect neurons en masse and to deliver unique DNA barcodes to the presynaptic and postsynaptic sites. Then, when you pull out the tag that connects the presynaptic and postsynaptic sites, you pull out both barcodes. It's like pulling out pointers that point to each other, saying this neuron is connected to that neuron and that neuron is connected to this one. It's an interesting approach.
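In data-structure terms, the recovered barcode pairs really are just pointers; a hypothetical sketch (the barcode strings are invented for illustration, and real barcodes would be far longer):

```python
# Each recovered tag yields one (presynaptic barcode, postsynaptic barcode) pair.
pairs = [("ACGT", "TTAG"), ("ACGT", "GGCA"), ("TTAG", "GGCA")]

# Collecting the pairs gives an adjacency list: barcode -> set of targets.
connectome = {}
for pre, post in pairs:
    connectome.setdefault(pre, set()).add(post)
```

Sequencing millions of such pairs in bulk is what would let this approach read out connectivity without imaging any tissue.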
Functionally, you also have a few different ways of going about things. There's a general idea of hierarchical measurement that we call a demux-tree, for demultiplexing, and a specific implementation of that one where Rodolfo Llinás came up with the idea of making nanowires that you can push through the capillary system in the brain so that they reach every neuron and get measurements from them. The nanowires actually exist; they've been developed at the New York University School of Medicine. But there is still no way of actually getting them to branch like that. Also, they consume a lot of volume in the brain, which is a problem.
On the other hand, there is a biological approach. The good thing about biology is that you can easily apply it en masse, in large amounts, and it's already at the right scale. So there's this thing called the molecular ticker-tape, a project that Ed Boyden at MIT, George Church at Harvard University, Konrad Kording at Northwestern, and Halcyon Molecular are working on. The idea is that inside the neurons you can record voltage events with voltage-gated channels, and you can affect the writing on a strand of DNA, the DNA being used as a medium for putting information on, telling you when an event occurred. Then all you have to do is pluck out that DNA and sequence it, and you know when activity has been going on, so you can do this in many neurons at the same time. The problem with this approach is that biology is still difficult for us to work with as an engineering project. There's a lot of random searching around for the right tools to use. What we'd really like to do is work at the cellular scale and be able to engineer to our heart's delight. That's kind of a third approach, which I'll talk about specifically. You see it already shown as a picture over there.
|IBM Synapse Project|
So, volume microscopy or tape-to-SEM, which is the approach of actually looking at the tissue to get the structure out. This is something that Winfried Denk has been doing in Germany for a while with something called SBF-SEM (Serial Block-Face Scanning Electron Microscopy). He takes a block of brain tissue, takes an image, then ablates off a piece of the surface, takes another image, image, image, and goes all the way down. The problem with this approach is that it doesn't work very well for large volumes.
|different microtome knives|
But Ken Hayworth has been working at Harvard for many years on building a system that can deal with the whole volume of the brain. That's called tape-to-SEM, or it used to be called the ATLUM, the Automatic Tape-Collecting Lathe Ultramicrotome. A diamond knife cuts off pieces of this block of brain tissue and puts them on a tape. The tape can be stored, as you see on the right over there, although it looks a bit messy. You have random access to all of those pieces so that you can do microscopy on them at any time.
Briggman et al. and Bock et al. recently published two papers in Nature in 2011 in which they explored this method: Briggman et al. worked in the retina, Bock et al. in visual cortex and the visual system. What they did, for example in Briggman's case, is first look at the retinal cells functionally, examining how they operate, which receptive fields they have, what they were sensitive to. Then he did the serial reconstruction, used that to predict what the cells would be sensitive to, and found that the function could indeed be predicted from the structure, verifying that this is possible when you know a lot about the system.
Now, you also need to make this work in the body, so it needs to be biocompatible. The easiest thing is to just stuff it in a little blob of silicon. But you could also encase it in an artificial red blood cell, which is basically a protein shell that's been constructed, and which can be functionalized so that we can use it in many different ways.
We're not going to go into much detail on this, but if you calculate how much energy the brain uses, how much it takes for one action potential, and how many action potentials typically occur, you can estimate what it would take to transfer this to the kind of compartmental model I was describing. For a whole brain emulation with 10,000 compartments' worth per neuron, you would need about 1.2 exaflops to be able to do the computations. That sounds like a lot right now, but, for instance, the Indian government has already put 2 billion dollars toward building a computer that would run at 102 exaflops in 2017. That would be fast enough for on the order of a hundred of those whole brain emulations at the same time. The price for this is dropping quickly.
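Taking the talk's two headline figures at face value, the arithmetic works out like this:

```python
flops_per_emulation = 1.2e18   # ~1.2 exaflops per whole brain emulation (talk's estimate)
machine_flops = 102e18         # the planned 102-exaflop machine cited in the talk

emulations = machine_flops / flops_per_emulation  # 85 simultaneous emulations
```

So a machine of that class would support roughly 85 simultaneous emulations at this cost estimate, i.e. on the order of a hundred.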
John Ferber. The problem with that is that all these different connections you see here require different projects to solve them. It's not like you have one engineered solution for all of them, so it's a really big problem. On the right, when you look at the data acquisition approach, well, it's quite a bit more direct, simpler.
What we do at carboncopies is look at the big picture, because it's very important to look outside the box at the different approaches that are possible. That's how we come across these kinds of projects. We figure out how to put them together, how to get people working together and talking to each other. But also, at this time, because this really is something concrete and feasible, it's very important that we get down to the details, like what I'm showing here in that corner on the right. We need to design and engineer the systems; it's time to do that. Often you hear people saying it would be pretty cool to do this or cool to do that. That's nice, but we need actual projects going forward.