
To Draw a Better Map of the Brain, Professor Harnesses Mathematical Models


Technological advances have made it easier than ever to peer into the human mind. Badr Albanna, Ph.D., assistant professor of neurophysics, is devising new methods to predict what we’ll see when we do.

“In physics, we have a long tradition of theory and using mathematical models to describe complex systems. As neuroscience has been growing, it’s been bringing in a lot of people with training outside of biology to use some of that theory. I’m one of those people,” he said.

Albanna’s work combines physics, information theory, and statistical mechanics. The idea is that the same models used to predict the movement and interactions of atoms can also be applied to neurons as they interact with one another in the brain.

Finding Order in the Chaos

Neurons signal one another with electrical impulses, and viewed up close, that constant firing can look like pure noise.

“At a certain scale, that firing looks totally chaotic and random. But at the right scale, we can figure out what particular brain regions are doing, what your mind as a whole is doing, and how all that chaos comes together to give you a nice, predictable behavior,” he said.

In “Minimum and Maximum Entropy Distributions for Binary Systems with Known Means and Pairwise Correlations,” a paper he recently published in the journal Entropy, Albanna showed the range of entropies possible for neural models with specific properties.

In physics, entropy is a thermodynamic quantity representing the portion of a system’s thermal energy that is unavailable for conversion into mechanical work. Information theory provides another interpretation: entropy measures the degree of uncertainty about the state of a system – or, conversely, how much information we would need to know exactly which state the system is in.

Albanna said that one way to comprehend that uncertainty is to imagine a standard cell phone contract. The answer to a single yes-or-no question is one “bit” of information, and since a byte is eight bits, a 10-gigabyte cell phone plan amounts to about 80 billion bits – the answers to 80 billion yes-or-no questions. The entropy of a system, measured in the same units, is the number of such answers you would need to pin down exactly what state the system is in.
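To make the yes-or-no-question picture concrete, here is a short Python sketch (ours, not from the article) that computes entropy in bits for a few simple distributions:

    import numpy as np

    def entropy_bits(p):
        """Shannon entropy in bits: the average number of yes/no answers
        needed to pin down the outcome of a random variable."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]                     # convention: 0 * log(0) = 0
        return float(-(p * np.log2(p)).sum())

    print(entropy_bits([0.5, 0.5]))      # fair coin: 1.0 bit (one yes/no question)
    print(entropy_bits([0.9, 0.1]))      # biased coin: ~0.47 bits (more predictable)
    print(entropy_bits([0.25] * 4))      # four equally likely states: 2.0 bits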

The models borrowed from physics are referred to as “maximum entropy models,” Albanna said, because they are the models with the largest degree of uncertainty that are still consistent with the data at hand.
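The article doesn’t give the model equations, but the standard version of this approach in neuroscience is the pairwise maximum entropy (Ising) model borrowed from statistical mechanics. Below is a minimal sketch for three binary neurons; the target firing rates and pairwise moments are invented for illustration, and the parameters are fitted by simple moment matching rather than any method attributed to Albanna:

    import numpy as np
    from itertools import product

    states = np.array(list(product([0, 1], repeat=3)), dtype=float)  # all 8 firing patterns

    # Hypothetical target statistics (invented for this example)
    target_mean = np.array([0.3, 0.4, 0.5])          # E[x_i]: firing probabilities
    target_pair = np.array([[0.0, 0.15, 0.18],       # E[x_i * x_j] for i < j
                            [0.0, 0.00, 0.25],
                            [0.0, 0.00, 0.00]])

    h = np.zeros(3)         # per-neuron biases
    J = np.zeros((3, 3))    # pairwise couplings (strictly upper triangular)

    def model(h, J):
        """Maximum entropy distribution p(x) proportional to exp(h.x + sum J_ij x_i x_j)."""
        logits = states @ h + ((states @ J) * states).sum(axis=1)
        p = np.exp(logits - logits.max())
        return p / p.sum()

    # Moment matching: raise any parameter whose model moment is below target
    for _ in range(10000):
        p = model(h, J)
        mean = p @ states
        pair = states.T @ (p[:, None] * states)
        h += 0.1 * (target_mean - mean)
        J += 0.1 * np.triu(target_pair - pair, k=1)

    p = model(h, J)
    print("fitted means:", p @ states)
    print("fitted pairwise:", np.triu(states.T @ (p[:, None] * states), k=1))
    print("model entropy (bits):", -(p * np.log2(p)).sum())

Of all distributions matching those means and correlations, this is the one with the largest entropy – the least committed to any structure beyond what was measured.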

“People use the entropy of these models as a way to characterize how good a job these models are doing,” he said. “They’ll ask, ‘Does the entropy of the neural activity, or what you’re actually seeing in a real recorded data set, match up with what your model is predicting?’”

When it comes to modeling in physics, there are many reasons to feel confident that when the entropy of a system and the entropy of a model appear to match, the model is an accurate fit. It’s a lot harder in neuroscience, though, because one can never really be sure which variables are the right ones to use in the model. Often, Albanna said, researchers pick whatever they can measure.

“You pick how often a neuron fires, but usually, you’re sort of groping around in the dark. You put your variables in and try to figure out whether you’re doing a good job. And you say, ‘Look, it matches up pretty well; the entropy is close.’ But we don’t know how poorly you could do.”

In his study, Albanna found that if a population of neurons behaves interchangeably in a statistical sense, the range of possible entropies is narrow, so any model consistent with the data gathered from the cells will be a good fit in terms of entropy. If this condition does not hold, the range of possible entropies is broad – and a maximum entropy model that still matches the data’s entropy is really capturing something important about the data.
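The idea of a “range of entropies” can be seen numerically (this sketch and its numbers are ours, not the paper’s method). For three binary neurons, fixing the three means and three pairwise moments leaves exactly one free parameter – the probability t that all three fire together – and scanning its feasible values traces out every entropy consistent with the data:

    import numpy as np

    m1, m2, m3 = 0.3, 0.4, 0.5          # hypothetical firing probabilities E[x_i]
    c12, c13, c23 = 0.15, 0.18, 0.25    # hypothetical pairwise moments E[x_i * x_j]

    def dist(t):
        """Joint distribution over the 8 patterns, given P(all three fire) = t."""
        return np.array([
            1 - m1 - m2 - m3 + c12 + c13 + c23 - t,  # 000
            m3 - c13 - c23 + t,                      # 001
            m2 - c12 - c23 + t,                      # 010
            c23 - t,                                 # 011
            m1 - c12 - c13 + t,                      # 100
            c13 - t,                                 # 101
            c12 - t,                                 # 110
            t,                                       # 111
        ])

    def entropy_bits(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    # Keep only values of t for which every probability is non-negative
    ents = []
    for t in np.linspace(0.0, 1.0, 100001):
        p = dist(t)
        if p.min() >= -1e-12:
            ents.append(entropy_bits(p))

    print("entropy range: %.4f to %.4f bits" % (min(ents), max(ents)))

The gap between that minimum and maximum is what determines whether matching the data’s entropy is trivial or genuinely informative.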

“It put some of these things that we are starting to take for granted in neuroscience on a little bit firmer footing, and showed we’re not cheating ourselves when we say these things do well,” he said.

The Gap between the Spikes

Another recent project that Albanna completed, a collaboration with a researcher from New York University’s School of Medicine, addresses the complexities of hearing.

Prior experiments with rats showed that it’s common for only half the cortical cells in the part of the brain responsible for sound perception to activate consistently when a sound plays. Though it may be tempting to focus exclusively on the cells that “light up” when prompted, Albanna said that’s a mistake – there is much to be learned from the other cells.

“In fact, there are ways to show that these cells actually do respond. It looks like they may not be doing anything when you look through one lens, but if you look through the lens of our analysis, you can see that, in fact, they do carry information, at levels that are comparable to what those responsive cells are carrying.”

To do this, Albanna chose not to focus on the spikes that one sees when cells are activated. Instead, he focused on the gap between the spikes, known as the “interspike interval.” That’s where he found subtle differences that he said show how a particular cell is encoding information.
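The article doesn’t spell out the analysis itself, but the intuition can be sketched with synthetic data: the two spike trains below (invented for this example) have the same average firing rate, so they look identical through the “rate” lens, yet their interspike-interval distributions are easily told apart.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # Two synthetic spike trains with the same mean interspike interval (20 ms)
    isi_a = rng.exponential(scale=0.020, size=2000)        # irregular, Poisson-like firing
    isi_b = rng.gamma(shape=4.0, scale=0.005, size=2000)   # more regular firing, same mean

    print("mean ISI a: %.4f s" % isi_a.mean())   # ~0.020: identical firing rates...
    print("mean ISI b: %.4f s" % isi_b.mean())   # ~0.020
    print(ks_2samp(isi_a, isi_b))                # ...but clearly different ISI distributions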

It’s still not clear what role the cells play in influencing rat behavior, but Albanna said the findings, which he and his co-author are submitting to the journal Nature Neuroscience, are an important first step.

Challenging Assumptions

Looking ahead, Albanna is in the process of developing graduate-level classes to accompany the undergraduate-level physics classes he teaches; the first will likely focus on psychophysics, the neuroscience of how perception works through both hearing and vision. He said he loves the field because, unlike physics, it’s extremely young and in flux.

“There’s so much we don’t know, and so much of how you approach neuroscience depends on your perspective. Like, does this cell matter or not? We don’t always have concrete experimental answers to these questions yet, so you have to sort of build your view the best way you can, and then make sure that you’re always checking yourself and challenging your assumptions,” he said.
