“We humans carry around in our heads rich internal mental models that constitute our construction of the world, and the relation of that world to us. These models can be expressed at multiple levels of abstraction, including beliefs about sensory stimuli and the output of our motor programs, or higher-level beliefs about self … There are two information streams that coincide in V1: the high-precision retinal input that is processed with high spatial acuity and the broad, abstract, less-precise strokes painted by cortical feedback. V1 is retinotopically organized and has small classical feed-forward receptive fields. Conversely, feedback conveys information about larger portions of the visual field, giving rise to extraclassical or contextual receptive fields.”
Beautiful observations. They’re from a commentary in Proceedings of the National Academy of Sciences of the United States of America along with this lovely depiction.
AM is apparent motion. The supragranular layers of the cortex are the primary origin and termination of intracortical connections, which are either associational, linking areas within the ipsilateral hemisphere, or callosal, crossing the corpus callosum to the contralateral hemisphere. The infragranular layers connect the cerebral cortex with subcortical regions.
Take a look at the original article in PNAS, “Reconstructing representations of dynamic visual objects in early visual cortex.” I learned of it yesterday through a Facebook site on vision rehabilitation, which cited another commentary on the article. Here’s part of that commentary in lay terms:
“Visual images and other raw sensory data must reach the cerebral cortex to be perceived, but the data are often missing details when they are sent from the eyes to the visual cortex, the part of the brain responsible for seeing. Thus, our visual system regularly fills in extensive details to create enriched images that help us to understand and interpret what we see. A growing body of evidence suggests these ‘filled-in’ visual signals are represented at early stages of cortical processing.

“The researchers used fMRI on study participants to explore the neural mechanisms underlying the reconstruction of these ‘filled-in’ images. They found that ‘intermediate’ object features, which aren’t in the retinal signals but are inferred during kinetic transformation, are reconstructed in neural responses at early stages of cortical processing, presumably via feedback from high-level brain areas.”
This is the neural essence of what Al Sutton alluded to when explaining how we build a visual space world.