As we look at the world around us, we have immediate access to the composition of the visual scene into objects, as well as our relationship in space to those objects. Likewise, in listening to speech, we are aware of meaning, often without even paying attention to the words themselves. This natural facility makes it possible to move through the world, catch or avoid moving objects, and base immediate decisions on a detailed understanding of our surroundings. Only secondarily might we note the color or composition of particular points in the visual scene, the durations of certain vowel sounds, or other low-level visual or auditory features.

Our brains process sensory information very differently than computers do: a computer can easily store the hue and luminance of every pixel of an image, but even with the best available software it cannot parse an arbitrary natural image into its underlying elements. In the absence of larger conceptual theories of how the brain processes information, however, established techniques have revolved around studying sensory systems' abilities to represent information rather than understanding the computations they perform. To study computation in the brain, it is necessary both to establish larger theories about what is being computed and to design experiments that link these theories to observable physiology.

Research in the NeuroTheory Lab is concerned both with developing larger theories of system-level function in vision and audition, and with working closely with neurophysiologists to design and perform experiments that can guide and/or validate these theories. As a necessary third goal, we also develop new analytical tools to facilitate these new experiments and to increase what can be learned from existing experiments.

Daniel Butts
Biology, Biosciences Research Building 1118
Email: dab [at] umd.edu