Mechanisms for Coding Complex Images in the Early Visual System
2008 Seed Grant
Naoum Issa, M.D., Ph.D.
University of Chicago
The overall goal of our lab is to understand how complex visual scenes are represented in the
central nervous system. Our current understanding of the brain suggests that basic features of
images are represented in primary visual cortex (Area 17) and that more abstract aspects of the
scene like illusory contours or differences in texture are represented in higher cortical areas
based on the output of Area 17. For example, in a scene of a toy hidden under a floral rug, neurons in Area 17 would detect the edges of the rug and the pattern of its floral weave, whereas higher cortical areas would extract the shape of the toy hidden underneath. While the classical model of brain organization suggests that the
higher cortical areas build their representation from the simple representation in the primary
visual cortex, it is possible that many of the abstractions are encoded much earlier in the visual
system (in the retina or the lateral geniculate nucleus [LGN]). This possibility has been ignored
because it is assumed that abstraction is too complex for anything but cerebral cortex. The
proposed experiments challenge this assumption and ask whether the response properties of neurons earlier in the visual pathway enable them to extract behaviorally important information from a
scene. Specifically, the proposed research will use targeted microelectrode recordings to
determine if and how neurons in the LGN encode a particular class of abstract image features
known as second-order image features.
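One commonly used example of a second-order image feature is a contrast-modulated grating: a fine carrier pattern whose local contrast, rather than its mean luminance, varies slowly across space, so the modulation is invisible to a purely linear (first-order) filter. The sketch below, which is illustrative only and not part of the proposed experiments, builds such a stimulus in Python; the function name and the carrier and envelope frequencies are arbitrary example choices.

```python
import numpy as np

def contrast_modulated_grating(size=256, carrier_freq=32, envelope_freq=4):
    """Illustrative second-order stimulus: a contrast-modulated grating.

    Mean luminance is uniform across the image; only the contrast of the
    fine carrier varies with the low-frequency envelope, so the envelope
    cannot be recovered by linear filtering alone.
    """
    x = np.linspace(0.0, 1.0, size)
    xx, yy = np.meshgrid(x, x)
    carrier = np.sin(2 * np.pi * carrier_freq * xx)                 # fine first-order pattern
    envelope = 0.5 * (1 + np.sin(2 * np.pi * envelope_freq * yy))   # contrast modulation in [0, 1]
    return 0.5 + 0.5 * envelope * carrier                           # luminance image in [0, 1]

image = contrast_modulated_grating()
```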