http://www.wbur.org/npr/142024614/peering-into-the-brain-but-at-what
Modern brain-imaging techniques have given researchers an unprecedented level of detail about the structure of the brain, but are they any closer to puzzling out how the brain really works? Harvard neuroscientist Jeff Lichtman talks about the limitations of brain imaging, and the challenges of trying to use imaging techniques to decode the brain's behavior.
Transcript
IRA FLATOW, host: This is SCIENCE FRIDAY. I'm Ira Flatow. Your thoughts, your memories, as you know, all come from your brain cells, billions of them packed together in your head. My next guest would like to make a map of how all those cells connect to one another, talk to each other, learn new things, make new memories. But it's going to take a lot to untangle those neurons. After all, your brain cells are just nanometers thick in some places. They flicker with electrical activity that's just a few milliseconds long, and there's evidence that their connections may vary dramatically from one person to another.
And our brains are changing, adapting, responding to our environments all the time. So how do you capture that? Well, my next guest thinks that we can capture it, and is an expert in imaging. Dr. Jeff Lichtman is a professor of molecular and cellular biology and a member of the Center for Brain Science at Harvard University. He joins us from Harvard. Welcome back to SCIENCE FRIDAY.
JEFF LICHTMAN: Hi, Ira, how are you?
FLATOW: You're - fine, thank you very much. You're an expert in imaging techniques, correct?
LICHTMAN: Yeah, we look at the brain in my laboratory. That's what we do, almost exclusively.
FLATOW: And what would be the ideal imaging technique? What are you looking for to be able to examine all of these connections?
LICHTMAN: The perfect technique, actually, is a combination of two, at the moment, extreme opposites. One is a technique that gives you enough resolution to see the finest connections between nerve cells, which requires resolution at the level of nanometers, as you've already mentioned. And the other is the scale to trace out wires that can extend for centimeters, or if you're a giraffe, from your spinal cord to your toe, even meters.
And those two kinds of technologies are often very different. And to fuse them into one technique requires going a little bit beyond the comfort zone of modern technology.
FLATOW: So we're not quite there yet?
LICHTMAN: Well, we're working on it, but it is awesome, truly disturbing how much data one has to obtain if you wish to map the entire brain at the level of every synapse.
FLATOW: How much data are we talking about here?
LICHTMAN: Well, let's take a cubic millimeter of brain, which is about the size of the smallest point you would see in an image taken with this technique called functional magnetic resonance imaging. So those images show you where blood flow in the brain goes up when you think, and they're very highly resolved. A cubic millimeter is the voxel size, the three-dimensional pixel size.
And one voxel of an fMRI image, if we imaged that with an electron microscope to see all the synapses with sufficient resolution, that would be about 1,000 terabytes of data or one petabyte. A terabyte is 1,000 gigabytes. So we're talking about a million gigabytes of data per cubic millimeter, and that's just one cubic millimeter of brain. And if you wanted a whole brain, you'd need thousands of petabytes, essentially more data than the entire digital content of the world, if you will.
So it's more than fits on my laptop, to be sure.
(SOUNDBITE OF LAUGHTER)
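(A rough back-of-the-envelope sketch of where the petabyte figure comes from: the voxel dimensions and bytes per voxel below are illustrative assumptions roughly typical of serial-section electron microscopy, not numbers given in the interview.)

```python
# Back-of-the-envelope estimate of electron-microscopy data per cubic
# millimeter of brain. Voxel size and bytes per voxel are assumptions
# (roughly typical of serial-section EM), not figures from the interview.

VOXEL_X_NM = 4        # assumed lateral resolution, nanometers
VOXEL_Y_NM = 4
VOXEL_Z_NM = 30       # assumed section thickness, nanometers
BYTES_PER_VOXEL = 1   # assumed 8-bit grayscale
NM_PER_MM = 1_000_000 # 1 mm = 10^6 nm

voxels_per_mm3 = ((NM_PER_MM // VOXEL_X_NM)
                  * (NM_PER_MM // VOXEL_Y_NM)
                  * (NM_PER_MM // VOXEL_Z_NM))
bytes_per_mm3 = voxels_per_mm3 * BYTES_PER_VOXEL

print(f"voxels per mm^3: {voxels_per_mm3:.2e}")
print(f"bytes per mm^3:  {bytes_per_mm3:.2e} (~{bytes_per_mm3 / 1e15:.1f} petabytes)")
```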
FLATOW: Could you learn anything, though, at a single-cell level that, let's say, that Eric Kandel didn't learn with the single cells in sea slugs if you could go down that - drill down that deep?
LICHTMAN: I mean, there's extraordinary advances that have been made, certainly, in understanding the way synapses talk to each other and how they change with experience. But one of the mysteries of the brain is that the network that connects cells is a lot like an Internet network, and that is it's one to many and many to one.
Each nerve cell makes connections onto thousands of target cells, and thousands of different cells impinge on and talk to each nerve cell. And if one ever wants to understand how a network like that works, you actually have to look at the network, and that requires seeing more than a single cell.
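(One way to picture the one-to-many, many-to-one wiring Lichtman describes is as a directed graph. The sketch below is purely illustrative; the neuron labels and connections are made up.)

```python
from collections import defaultdict

# A wiring diagram as a directed graph: each neuron projects onto many
# targets (one-to-many), and each neuron receives input from many sources
# (many-to-one). The neuron names are invented for illustration.
outgoing = defaultdict(set)   # neuron -> neurons it synapses onto
incoming = defaultdict(set)   # neuron -> neurons that synapse onto it

def add_synapse(pre, post):
    """Record a synaptic connection from neuron `pre` to neuron `post`."""
    outgoing[pre].add(post)
    incoming[post].add(pre)

add_synapse("n1", "n2")
add_synapse("n1", "n3")       # n1 talks to many targets...
add_synapse("n4", "n3")       # ...and n3 hears from many sources
print(sorted(outgoing["n1"]), sorted(incoming["n3"]))
```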
FLATOW: And can we learn anything from, let's say, a network of networks like the Internet, that may apply to how the brain works?
LICHTMAN: Yeah, I mean, one interesting thing to keep in mind is that the network of the Internet is connecting the brains of individuals together. So it is a form of communication between the neurons in one brain with the neurons in other brains. That's really all it is: It's just wires that extend our connectivity.
So perhaps the same strategies that are wiring us up are used again when brains talk to other brains. Almost - we don't even realize it, but that may be what's going on.
FLATOW: Is it possible in your imaging world, and in the world you would like to create, if you have the right tools, to actually watch a thought originating or something being remembered?
LICHTMAN: Absolutely, that's the long-term goal of work like this, which is to see first how information about the world gets implanted in the brain. And once it's there, what form does it take that allows it to persist over decades, or if you're very lucky, even over a century?
There must be some structural substrate, a trace, if you will, of that memory, but we have very little idea now because these tools are just now being developed to actually map out what that would look like.
FLATOW: And so do you think these tools will be available in our lifetime, so to speak?
LICHTMAN: Yeah, I guess it depends how old you are.
(SOUNDBITE OF LAUGHTER)
LICHTMAN: I think these - my laboratory and a number of other labs are working very hard right now to generate tools that have the speed to generate these images quickly enough. I'll just give you an example that when we started about five years ago, we were obtaining information at about 1 million pixels of brain image per second, which sounds like a lot, but that's actually quite slow. To do a cubic millimeter of imaging at that rate takes about 140 years, and to do let's say a rodent brain would take about 7,000 years at that rate.
And over the past five years, we've sped up about 100-fold, and we think in the next two years we'll be at about a billion pixels per second. And then doing a mouse brain within a year might even be contemplatable. To do a human brain, however, is still an extraordinary challenge because humans have much bigger brains than mice. But the techniques would be the same.
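(Continuing the earlier sketch, here is how acquisition time scales with imaging rate. The voxel count per cubic millimeter is the same assumption as before, so the absolute times do not match the figures quoted in the interview, but the hundred- and thousand-fold speed-ups show the same inverse scaling.)

```python
SECONDS_PER_YEAR = 3.15e7

# Assumed voxel count per mm^3, carried over from the earlier sketch;
# treat the resulting times as order-of-magnitude only.
VOXELS_PER_MM3 = 2.1e15

def acquisition_years(volume_mm3, pixels_per_second):
    """Years needed to image a tissue volume at a given pixel rate."""
    return volume_mm3 * VOXELS_PER_MM3 / pixels_per_second / SECONDS_PER_YEAR

for rate in (1e6, 1e8, 1e9):   # starting rate, 100x faster, 1000x faster
    print(f"{rate:.0e} px/s: 1 mm^3 in {acquisition_years(1, rate):.2f} years")
```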
FLATOW: And if you - let's stay then at the mouse brain level. Would one mouse brain look the same as another mouse brain?
LICHTMAN: Almost certainly not. In the little part of the mouse nervous system that we've completely mapped - gotten a wiring diagram of - from one animal to the other, even from the left side to the right side of the same animal, where the function should be quite similar, we found every single instantiation of this wiring diagram was unique.
And I think a lot of people take that to mean what's the point of doing this at all, with all this variation. It's worth saying that if you watch two football games, or you watch two chess games, you'll find that every game of a particular sort is different from every other one, but after watching one game of chess, for example, you could infer the rules, so no other game would really be surprising to you.
And I guess that's the same thinking we have here, that there will be certain motifs, certain strategies of connectivity, if you will, of the way nerve cells are connected that from learning from one brain, would allow us to extrapolate in other brains.
FLATOW: Let's get some phone calls in, 1-800-989-8255. Jim(ph) in Muskegon, Michigan, hi Jim.
JIM: Hello, thanks for taking the call. I was curious whether or not you can map any changes in the brain as a result of PTSD, or does your work lead to any treatment possibilities? I particularly had some clients who relived some events based on triggers, otherwise benign things.
LICHTMAN: Yes, I think this is an extremely important point about our primitive knowledge of the brain. Compared to other organ systems, where most abnormalities have a physical, histological trace that you can see in a microscope, for most brain disorders, we don't have a physical trace. And I think this is largely a sign of how low-level our imaging is, relative to the questions.
At the moment, we're far away, to be perfectly honest, from getting a physical manifestation of something like post-traumatic stress disorder, but I think at some point, one would hope that mental illness, learning disorders and other kinds of behavior problems will be amenable to these kinds of studies.
FLATOW: And then I would imagine you need to be able to see a large part of the brain to see how that might originate.
LICHTMAN: Yeah, of course, one doesn't even know where the problem is. And this is this problem of size, the big and the small. You have to be able to accommodate a big area but at very high resolution, and that gives rise to datasets that are just at the moment so large that no one would know exactly how to work with them.
FLATOW: Do you need a supercomputer, you know, like the old Cray or any, put a bunch of them together to get a giant computer to do this? What kind of computer power do we need? Give us an idea.
LICHTMAN: One thing is to generate the data, and then you need to store it somewhere large. So you need storage capacity that exceeds what most people are used to, you know, many petabytes of storage, tens or hundreds of petabytes. And that is already one far end of computation.
But that's not sufficient. You then have to analyze this data to turn these pictures into an actual map, and that requires a kind of computational image analysis that is being developed right now but is very computer intensive. And the way this is done is typically with clusters of computers that parse this large problem into many small, little pieces, and so thousands of CPUs or even GPUs working simultaneously are necessary to do this.
So it is a large amount of computational space, but it's not the classic Cray single supercomputer but many small computers, each working on a teeny-weeny part of a very big problem.
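(A miniature sketch of the divide-and-conquer strategy Lichtman describes, using Python's standard multiprocessing pool on one machine. The chunking scheme and the per-chunk "analysis" are stand-ins for illustration, not the actual reconstruction pipeline.)

```python
from multiprocessing import Pool
import numpy as np

def analyze_chunk(chunk):
    """Stand-in for per-chunk image analysis (e.g., membrane detection)."""
    return float(chunk.mean())            # trivial placeholder statistic

def split_into_chunks(volume, size):
    """Cut a 3-D image volume into sub-blocks no larger than `size` per axis."""
    zs, ys, xs = volume.shape
    return [volume[z:z + size, y:y + size, x:x + size]
            for z in range(0, zs, size)
            for y in range(0, ys, size)
            for x in range(0, xs, size)]

if __name__ == "__main__":
    # A tiny synthetic "image volume" standing in for electron-microscopy data.
    volume = np.random.randint(0, 256, size=(64, 64, 64), dtype=np.uint8)
    chunks = split_into_chunks(volume, 16)
    with Pool() as pool:                  # one worker per available CPU core
        results = pool.map(analyze_chunk, chunks)
    print(f"processed {len(chunks)} chunks in parallel")
```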
FLATOW: Well, good. Could our home computers become part of a network like that, work together?
LICHTMAN: Yes indeed. Just as Galaxy Zoo has been very potent as a way of analyzing images of deep space, my laboratory, a colleague of mine at MIT, Sebastian Seung, another colleague of mine here in the engineering department, Hanspeter Pfister, and several other groups as well are thinking about ways of recruiting interested parties to help us do this tracing and mapping.
So not only your computer but your visual system we would take advantage of, as well.
FLATOW: Yeah, because we know that people are much better than computers at visually taking things apart and putting them back together.
LICHTMAN: Yeah, I mean, one of the great ironies of this work is that we are trying to get computers to do something that humans do quite trivially. Any five-year-old can trace these wires. Computers have a hard time doing this. And what we're trying to trace is the wiring diagram that explains basically how humans do this.
It's a very circular and philosophically interesting problem.
(SOUNDBITE OF LAUGHTER)
FLATOW: Jeff, can I ask you to stay with us?
LICHTMAN: Sure.
FLATOW: We're going to go to a break. We're talking with Jeff Lichtman, professor of molecular and cellular biology and member of the Center for Brain Science at Harvard. We're going to pick his brain a little longer, and stay with us. We'll come back. Our number, 1-800-989-8255. You can tweet us @scifri, and we'll continue right after the break. Stay with us.
(SOUNDBITE OF MUSIC)
FLATOW: I'm Ira Flatow. This is SCIENCE FRIDAY, from NPR.
(SOUNDBITE OF MUSIC)
FLATOW: You're listening to SCIENCE FRIDAY. I'm Ira Flatow. We're talking about how your brain is wired and attempts to take a look and make - snap pictures of it, with Dr. Jeff Lichtman, professor of molecular and cellular biology, member of the Center for Brain Science at Harvard University. And if his last name is familiar, that's because he is the father of Flora Lichtman, our multimedia editor. And we thank you for that project too.
(SOUNDBITE OF LAUGHTER)
LICHTMAN: That's the best thing I ever did, or one of the two. I also have another daughter. They're both, the pair, the best things I ever...
FLATOW: Well, we're very happy for you doing that. 1-800-989-8255. Let's go to the phone. Let's go to Steve in Chico, California. Hi, Steve.
STEVE: Good morning, gentlemen.
FLATOW: Hi there.
STEVE: My question is this: If I think a thought in an image, like if I think of an old dog I had when I was a kid, and that image is in my mind, and we know that it comes from brain cell activity, right, but if a surgeon cut into my brain, he would not find a little picture of my dog. He would simply see the neural activity, right?
LICHTMAN: The surgeon wouldn't - yeah, go ahead.
STEVE: Right, so do you have any sense where the actual image is, the picture of that dog that's in my mind? Where might that be in the universe?
LICHTMAN: Well, it's in your mind, that's for sure, and because it's a visual picture, it almost certainly is - at least one rendering of it, and there are probably many different parts of your brain involved, but it'll certainly be in the parts of the brain that are responsible for image processing, that take visual information and process it progressively farther along.
It has been clear that it's very hard to find a local stroke, for example, that damages a small part of the brain where a person ends up with a perfectly normal brain, except the image of their dog is missing. And that implies, of course, that your dog is distributed over a rather large area, or there are multiple copies of your dog.
There are places in your brain, and maybe this will come up later in the hour or in the next hour, where recognition of faces occurs, and if that part of the brain is stroked out, for example, a person can't recognize anyone's face. It's very hard to get very specific memories lost, suggesting this distribution - that memory is not localized spatially the way you might organize it if you were an engineer, putting your little dog in one place, a spoon right next to it and your cat on the other side. It's not so clear how it's organized.
FLATOW: And one of the things you write about and we've talked about is how plastic your brain is, right? It can be remolded, reshaped.
LICHTMAN: Yeah, so I think the - you know, the emphasis, especially as we get older, is on how plastic our brains are. But of course there's the other side of the coin, and I think this is often left unsaid, but I'd like to emphasize it: the purpose of memory is to give you, often based on one-trial learning at some point in development, a lasting, indelible impression of the way the world is.
And that is a bit at odds with a constantly changing brain. And I think if my own daughters' comments to me are any reflection, as I've gotten older I get the impression my children think that my brain has hardened, calcified. I'm a little less open to new ideas than I was when I was younger.
And I see this as wisdom, not really a bad thing, you know, that I'm left with a brain that's consistent with the world, but of course the world is changing very rapidly now. So this is a somewhat painful thing for people my age, as new tools get invented.
But I think memory's main purpose is not to constantly change but to allow a person to hold on to, for example, how to ride a bicycle. If you learn as a child how to ride a bicycle, you can stop riding a bicycle for 20, 30 years. You get on a bicycle as an adult, and after a moment or two of unsteadiness, you're riding pretty well.
But look at an adult who's never ridden a bicycle as a child, and it's clear there's something about their brain, there's some indelible trace about bicycle riding that's missing. And they have a hard time learning.
FLATOW: But we also have the case where it's been shown that recall is very unreliable, that - isn't it true that we see that you can make up something in your mind, and your mind, at least the scans will show that it's as if you had actually seen it?
LICHTMAN: Yes, I mean one of the amazing things about memory is every time you recall something, it's up for grabs again. And often that is because when you recall it, you want to dress it up with more recent data. So if the recent data tends to overturn something that you remembered earlier, gradually that memory can morph into something that's quite opposite of what the original memory is.
And thinking about something long enough, you can begin to believe things are true that aren't. When my brother and I were growing up, I kept telling him over and over a particular thing - that he was adopted, in fact - and there was a point in his life when he was uncertain whether this was a fact or not, just because he kept running it through his mind, even though I was only teasing him.
FLATOW: Let's see if we can get one more call in here before we have to go. Clay(ph) in Oklahoma City. Hi, Clay.
CLAY: Hey, thanks for taking my call.
FLATOW: Hi, go ahead.
CLAY: Yeah, I would like to hear your - I forgot the scientist's name, I'm sorry. So our brains accumulate information from the time we're born until the time we die. And I want to know what you think about where is that information being stored. Is it being stored molecularly? Much of the research around brains is focused on the neural network and the depolarization, the signals sent to each nerve.
But I believe the information must be stored actually inside of the neurons, giving it identity, reason to respond a certain way. So could you please speak about that? Where might the - where might the information we gain as we grow old be stored physically, molecularly?
FLATOW: All right, good question.
LICHTMAN: I think this is a very good question and one that's somewhat contentious. I think we now understand that synaptic connections between nerve cells molecularly can change in ways that persist for long periods of time. And that has sometimes been mistaken, I think, as thinking that the memory per se is built into those molecules.
Ultimately the brain is just a behavior machine. Input comes in, it churns around inside, and then there comes an output. And that is through the connections between nerve cells. So for example, if I tip your - tap your patellar tendon, and your knee jerks, that's because of a reflex of nerves that activate cells in your spinal cord that then send information back out to the muscle and cause that kick.
You could think of all learning, all information, being the same way. If I say what is two plus two, that goes in your ears, it rattles around by activating nerve cells, and out comes first in your head the idea of four, and if you're a child, then a signal would be sent down to your deltoideus muscle in your shoulder, pulling your arm up. So you wave it back and forth so the teacher can see you, so you can then say it's four.
That is not coded molecularly. It's coded in a wiring diagram that connects the idea that comes in, the idea that's in there, to an output.
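(Lichtman's behavior-machine picture - input rattling through connections until an output emerges - caricatured as signal propagation through a tiny directed graph. The cells and wiring below are invented for illustration and do not model any real circuit.)

```python
# A toy "wiring diagram": which cells each cell activates. The labels and
# connections are invented; nothing here models a real reflex circuit.
wiring = {
    "ear":         ["auditory"],
    "auditory":    ["association"],
    "association": ["motor"],
    "motor":       ["deltoid"],           # the output: raise the arm
}

def propagate(start, wiring):
    """Follow the connections from an input cell until no targets remain."""
    path, current = [start], start
    while wiring.get(current):
        current = wiring[current][0]      # toy circuit: one target per cell
        path.append(current)
    return path

print(" -> ".join(propagate("ear", wiring)))   # ear -> ... -> deltoid
```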
FLATOW: Thank you very much, Dr. Lichtman, for taking time to be with us today, very fascinating.
LICHTMAN: My pleasure.