
Is it a Chair or a Dog? What the Eye Sees, the Brain Confirms


Your neurons do not lie. If you see something they respond to, they will light up brightly on the screen of an fMRI machine.

But researchers don’t know exactly why some neurons prefer certain images over others.

Daniel Leeds, PhD, assistant professor of computer and information science, is exploring just how that happens.

This past spring, Leeds conducted fMRI scans of a test subject’s brain at Carnegie Mellon University in Pittsburgh.

The subject was shown a series of real-world images over an 80-minute period while Leeds recorded which neurons in a specific part of the subject’s brain “lit up” with activity.

The study was a continuation of a larger one that Leeds conducted with scientists at Carnegie Mellon in 2012 and published last year in the journal Frontiers in Computational Neuroscience. In this work, Leeds uses a computer program that monitors the brain’s responses while subjects observe the images and selects new images to show them in real time.
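The loop Leeds describes is a closed one: record a response, then use it to choose the next stimulus. The Python sketch below illustrates that structure only; the function names, the stubbed-out scanner readout, and the selection rule are placeholders invented for illustration, not details of Leeds’ actual program.

```python
import random

def show_image(image):
    # Stand-in for presenting a picture to the subject in the scanner.
    print(f"showing {image}")

def read_fmri_response():
    # Stand-in for the measured activity in the brain region of interest.
    return random.random()

def pick_next_image(pool, responses):
    # Placeholder rule: show something not yet seen. A real system would
    # use the responses gathered so far to pick the image expected to be
    # most informative about the region's visual preferences.
    unseen = [img for img in pool if img not in responses]
    return random.choice(unseen) if unseen else random.choice(pool)

def adaptive_session(pool, n_trials=5):
    responses = {}            # image -> recorded activity level
    current = pool[0]         # start with an arbitrary image
    for _ in range(n_trials):
        show_image(current)
        responses[current] = read_fmri_response()
        current = pick_next_image(pool, responses)
    return responses

print(adaptive_session(["dog.jpg", "chair.jpg", "statue.jpg",
                        "car.jpg", "tree.jpg", "cup.jpg"]))
```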

He and his collaborators were interested in the specific visual properties that activate a community of neurons within a few cubic millimeters of the brain. They chose that part of the brain because it’s an area where our visual pathway becomes “more sophisticated,” he said.

“You have pixel detectors, effectively, in your eyes, and then you have edge detectors at the beginning of the pathway in your brain,” he said. “And then there are more complex representations as you go along the visual stream.”

However, once past the edge detectors, said Leeds, scientists are far less certain just which visual properties the brain is using. That is where Leeds’ research lies: in the intermediate areas after the relatively simple edge detectors, “but before you get to the holistic level of, ‘I see a dog, or I see a chair.’”
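An edge detector of the kind Leeds mentions is simple enough to sketch. The following Python snippet is an illustrative example, not anything from the study: it slides a standard Sobel filter over a toy image and reports where brightness changes sharply.

```python
import numpy as np

# Sobel filters: small templates that respond to horizontal and
# vertical changes in brightness, much like early edge detectors
# in the visual pathway.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def filter2d(image, kernel):
    """Slide the kernel over the image and sum the products (no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy image: dark on the left, bright on the right -> one vertical edge.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

gx = filter2d(image, sobel_x)   # responses to left-right brightness changes
gy = filter2d(image, sobel_y)   # responses to up-down brightness changes
edges = np.hypot(gx, gy)        # edge strength at each location
print(edges.round(1))           # large values mark the vertical edge
```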

Much research has already been done to identify where in the brain our vision is processed. Leeds’ research determines the visual and mathematical principles certain brain regions are using to understand pictures.

Helping Computers “See” Better

He said this new research should help scientists write better algorithms for computers to “see” like human brains do.

One way computers can “see,” Leeds said, is via multilayer artificial neural networks.

“The first layer takes input from pixels, and then it produces its response to simple patterns in those pixels. Then another layer takes output from the bottom layer, comes up with another representation, and communicates it to a higher layer. And then you continue doing this until you get a rhinoceros or a dog,” he said.

“It gets more complex, but effectively the human brain follows a similar process.”
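That layered process can be sketched in a few lines. The Python below is a minimal illustration of the architecture Leeds describes, not a working vision system: the layer sizes are arbitrary and the weights are random, whereas a real network would be trained on labeled images.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    """One layer: a weighted combination of inputs, then a nonlinearity."""
    return np.maximum(0.0, weights @ x)   # keep only positive responses

# Illustrative (made-up) layer sizes: 64 "pixels" feed 32 simple-pattern
# units, which feed 16 mid-level units, which feed 4 object-level units
# ("rhinoceros", "dog", ...).
w1 = rng.normal(size=(32, 64))
w2 = rng.normal(size=(16, 32))
w3 = rng.normal(size=(4, 16))

pixels = rng.random(64)        # stand-in for an 8x8 grayscale image
simple = layer(pixels, w1)     # responses to simple pixel patterns
midlevel = layer(simple, w2)   # intermediate representations
objects = w3 @ midlevel        # scores for whole-object categories
print(objects)
```

Each layer sees only the output of the one below it, which is why the intermediate representations, the focus of Leeds’ research, are the hardest to characterize.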

There are already computer programs that perform this process well; Google’s image search engine, for example, can automatically recognize what’s in a picture. But computer algorithms’ “seeing” abilities still have limitations, said Leeds. That is why Ticketmaster and other commercial sites can successfully use captchas, or image tests, to weed out bots and robotic viewers from human users on the Internet: the tests remain easier for people than for machines.

The data from the spring experiment is still preliminary, but Leeds said the ongoing work has already taught his team some things about the brain’s visual preferences. For some of the brain regions they studied, statues on big rectangular pedestals sparked more excitement than statues without pedestals. For other regions, shiny or jagged surfaces were exciting.

“Understanding what types of visual information are important to brain regions helps us understand how the brain approaches the task of ‘seeing’ objects,” said Leeds.

Leeds said the new program he has written to analyze brain data has improved his team’s understanding of vision in the brain. He and his team are working on publishing new results now.

