The human eye sends information to the brain at an estimated rate of approximately 10 megabits per second, roughly the speed of an Ethernet connection. Processing such a high-bandwidth stream of visual information on behaviorally relevant time scales requires that neurons extract and represent information from visual signals efficiently, i.e., convey the most information for the least cost in time, space, and energy. In essence, the brain needs to compress the visual stream much the same way software compresses the digital representation of a movie. Little is known about how the brain accomplishes this critical task. Dr. Osborne will use her 2012 Seed Grant to investigate the neural mechanisms that visual cortex uses to represent information about moving scenes. She will use neural responses in the middle temporal cortical area (MT) as a model. Neurons in area MT respond selectively to visual motion and provide the visual inputs for smooth pursuit eye movements. By recording neural and behavioral responses together, her lab can determine not only how cortical neurons compress incoming visual signals to represent them efficiently, but also whether those coding strategies are important for behavioral performance. This project is high-risk because little preliminary data exists, but it could also have a transformative impact on our understanding of how the brain processes stimuli under natural conditions and on how we conceptualize sensory processing. The work may advance our understanding of the cognitive and behavioral deficits associated with abnormal cortical function and could greatly improve the design and function of visual prosthetics.
2012
Leslie Osborne, Ph.D.