How Do We Make Sense of What We See?


M.C. Escher's ambiguous drawings transfix us: Are those black birds flying against a white sky or white birds soaring out of a black sky?

Lines in Escher's drawings can seem to be part of either of two different shapes. How does our brain decide which of those shapes to “see”? In a situation where the visual information is ambiguous, whether we are looking at Escher's art or at, say, a forest, how do our brains settle on just one interpretation?

In a study published this month in Nature Neuroscience, researchers at The Johns Hopkins University demonstrate that our brains do so by way of a mechanism in a region of the visual cortex called V2.

That mechanism, the researchers say, identifies “figure” and “background” regions of an image, provides a structure for paying attention to only one of those two regions at a time and assigns shapes to the collections of foreground “figure” lines that we see.


“What we found is that V2 generates a foreground-background map for each image registered by the eyes,” said Rudiger von der Heydt, a neuroscientist, professor in the university's Zanvyl Krieger Mind/Brain Institute, and lead author on the paper. “Contours are assigned to the foreground regions, and V2 does this automatically within a tenth of a second.”

The study was based on recordings of the activity of nerve cells in the V2 region of the brains of macaques, whose visual systems are much like those of humans. V2 is roughly the size of a microcassette and is located in the very back of the brain. Von der Heydt said the foreground-background “map” generated by V2 also provides the structure for conscious perception in humans.

“Because of their complexity, images of natural scenes generally have many possible interpretations, not just two, as in Escher's drawings,” he said. “In most cases, they contain a variety of cues that could be used to identify fore- and background, but oftentimes these cues contradict each other. The V2 mechanism combines these cues efficiently and provides us immediately with a rough sketch of the scene.”
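To make that idea concrete, here is a minimal, purely illustrative sketch in Python of how several figure-ground cues might be combined by weighted voting into a single foreground assignment for one contour. The cue names, weights, and voting rule are assumptions chosen for illustration; they are not the model or the data reported in the paper.

```python
# Toy illustration only (not the neural model in the study): combine several
# figure-ground cues for one contour into a single "which side is figure"
# decision by weighted voting. Cue names and weights are made-up assumptions.

CUE_WEIGHTS = {
    "convexity": 0.4,  # the more convex side tends to be seen as figure
    "size": 0.3,       # the smaller region tends to be seen as figure
    "contrast": 0.3,   # the higher-contrast region tends to be seen as figure
}

def assign_figure_side(cues):
    """cues maps a cue name to +1 (votes 'left side is figure') or -1 (votes 'right').
    The cues may contradict each other; the weighted sum settles on one interpretation."""
    score = sum(CUE_WEIGHTS[name] * vote for name, vote in cues.items())
    return "left" if score >= 0 else "right"

# Contradictory evidence: convexity favors the left side, size and contrast favor the right.
print(assign_figure_side({"convexity": +1, "size": -1, "contrast": -1}))  # prints "right"
```

Even with contradictory votes, the weighted sum always lands on one side, which is the flavor of "settling on one interpretation" the researchers describe.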

Von der Heydt called the mechanism “primitive” but generally reliable. It can also, he said, be overridden by a decision of the conscious mind.

“Our experiments show that the brain can also command the V2 mechanism to interpret the image in another way,” he said. “This explains why, in Escher's drawings, we can switch deliberately” to see either the white birds or the black birds.
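In the same hedged, illustrative spirit as the sketch above, a top-down override can be pictured as an extra bias term added to the bottom-up cue votes. The bias term, its value, and the cue weights below are assumptions for illustration only, loosely analogous to deliberately switching which set of birds we see.

```python
# Extending the earlier toy sketch with a hypothetical top-down "attention bias"
# term. The bias and the cue weights are illustrative assumptions, not measured values.

CUE_WEIGHTS = {"convexity": 0.4, "size": 0.3, "contrast": 0.3}  # same made-up weights as above

def assign_with_attention(cues, attention_bias=0.0):
    """A positive attention_bias pushes the decision toward 'left', a negative one toward 'right'."""
    score = sum(CUE_WEIGHTS[name] * vote for name, vote in cues.items()) + attention_bias
    return "left" if score >= 0 else "right"

cues = {"convexity": +1, "size": -1, "contrast": -1}
print(assign_with_attention(cues))                       # bottom-up cues alone: prints "right"
print(assign_with_attention(cues, attention_bias=0.5))   # deliberate top-down bias flips it: prints "left"
```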

The mechanism revealed by this study is part of a system that enables us to search for objects in cluttered scenes, so we can attend to the object of our choice and even reach out and grasp it.

“We can do all of this without effort, thanks to a neural machine that generates visual object representations in the brain,” von der Heydt said. “Better yet, we can access these representations in the way we need for each specific task. Unfortunately, how this machine works is still a mystery to us. But discovering this mechanism that so efficiently links our attention to figure-ground organization is a step toward understanding this amazing machine.”

Understanding how this brain function works is more than just interesting: It also could assist researchers in unraveling the causes of — and perhaps identifying treatment for — visual disorders such as dyslexia.

Other authors include Fangtu T. Qiu and Tadashi Sugihara, both of the Zanvyl Krieger Mind/Brain Institute. Funding for the research was provided by the National Institutes of Health.
