It is always interesting, in a classic textbook like Principles of Neural Science, to compare changes across editions. If you check the Table of Contents of the 5th edition, Part V on Perception has the same five chapters directly related to vision as the 6th edition we’ve been reviewing. The first three chapters on vision retain their titles from the previous edition: The Constructive Nature of Visual Processing; Low-Level Visual Processing: The Retina; and Intermediate-Level Visual Processing and Visual Primitives. The fourth chapter, previously titled High-Level Visual Processing: Cognitive Influences, is now titled High-Level Visual Processing: From Vision to Cognition. As we reviewed in the previous blog entry, that change reflects a recognition that vision plays a broader role in cognition than neural science had previously acknowledged. A similar change has occurred in the chapter to which we now turn our attention. In the previous edition it was titled Visual Processing and Action. In the new edition it is titled Visual Processing for Attention and Action. The importance of factoring attention into action is a no-brainer.
Chapter 25 opens as follows: “The human brain has an amazing ability to direct action to objects in the visual world – a baby reaching for an object, a tennis player hitting a ball, an artist looking at a model. This ability requires that the visual system solve three problems: 1) making a spatially accurate analysis of the visual world, 2) choosing the object of interest from the welter of stimuli in the visual world, and 3) transferring information on the location and details of the object to the motor system.”
This can be illustrated as multiple pieces of fruit visually impinging on the retina in the early stages of visual processing, with the brain then directing the individual’s attention through the eye toward a specific apple at intermediate and higher stages of visual processing, and the eye then guiding the hand to act through the motor system.
The co-authors of this chapter are Michael E. Goldberg and Robert H. Wurtz, the latter having written an article for Daedalus in 2015 titled Brain Mechanisms for Active Vision. Daedalus began as a quarterly journal in 1955 and is published by MIT Press, continuing the mission of its predecessor, the Proceedings of the American Academy of Arts and Sciences which dates back to the 1840s. It is surprising that the chapter doesn’t make reference to Wurtz’s Daedalus article, the abstract of which reads as follows:
“Active vision refers to the exploration of the visual world with rapid eye movements, or saccades, guided by shifts of visual attention. Saccades perform the critical function of directing the high-resolution fovea of our eyes to any point in the visual field two to three times per second. However, the disadvantage of saccades is that each one disrupts vision, causing significant visual disturbance for which the brain must compensate. Exploring the interaction of vision and eye movements provides the opportunity to study the organization of one of the most complex, yet best-understood, brain systems. Outlining this exploration also illustrates some of the ways in which neuroscientists study neuronal systems in the brain and how they relate this brain activity to behavior. It shows the advantages and limitations of current approaches in systems neuroscience, as well as a glimpse of its potential future.”
The opening paragraph of Wurtz’s article reveals that although perception convinces us that we see the visual world as a coherent whole, we actually see a series of snapshots from which we construct a unified view of the world in our brains. The figure below, a record of a viewer’s eye movements superimposed on the Georges Seurat painting A Sunday Afternoon on the Island of La Grande Jatte, illustrates the snapshot process. The black lines show the path of the eyes as they move from one part of the painting to another. These saccadic eye movements are not only fast but frequent, occurring two to three times per second. The dots at the end of each saccade are visual fixations, the points at which the eyes come to rest. Nearly all of our useful vision occurs during fixations, because the scene is then stationary in front of the eyes. With successive fixations, the brain receives a series of snapshots of different fragments of the scene. From these fragments, we become convinced that we see the whole scene at once.
This serves as the backdrop for the beginning of chapter 25, introducing us to Helmholtz’s postulation that a copy of the motor command for each saccade is fed to the visual system so that the representation of the visual world can be adjusted to compensate for eye movement, thereby stabilizing the visual world. Although he originally called this a “sense of effort”, later scientists renamed it the efference copy or corollary discharge. As an aside, Tim Petito, Iz Greenwald, Charlie Fox, and Nick Despotidis worked this notion into a model of spatial localization and applied it to strabismus in the 1980s at SUNY Optometry.
As Goldberg and Wurtz explain, neurons in the parietal cortex, frontal eye field, prestriate visual cortex, and superior colliculus combine to share information and affect visual perception as the eyes move. They write that: “Each saccade can be considered a vector with two dimensions – direction and amplitude. Although the retinal image is different at each saccade, the brain can use the vector of each saccade to reconstruct the whole visual scene from the sequence of retinal images.” Here one begins to get a sense of the quote attributed to Larry Macdonald, that the brain writes equations for eye muscles to solve.
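The arithmetic behind the saccade-vector idea can be made concrete with a small, hypothetical sketch (the numbers and variable names are illustrative, not from the chapter): each saccade is a 2-D displacement of gaze, the running sum of those displacements is an estimate of eye position of the kind an efference copy could supply, and adding that estimate back to each retinal position recovers a stable, eye-independent location for an object even as its retinal image jumps with every saccade.

```python
import numpy as np

# Three hypothetical saccades, each a (dx, dy) vector in degrees.
saccades = np.array([[5.0, 0.0], [-2.0, 3.0], [0.0, -1.5]])

# Eye position after each saccade is the running sum of saccade vectors --
# the information an efference copy could supply to the visual system.
eye_positions = np.cumsum(saccades, axis=0)

# A target fixed at (7, 2) deg in world coordinates: its retinal position
# is different after every saccade...
target_world = np.array([7.0, 2.0])
retinal = target_world - eye_positions

# ...but adding the efference-copy estimate of eye position back to each
# retinal position recovers the same world location every time.
recovered = retinal + eye_positions
print(recovered)  # each row equals (7.0, 2.0)
```

This is of course a cartoon of the computation, but it captures why a copy of the motor command is enough, in principle, to stitch the sequence of retinal snapshots into a single stable scene.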
Goldberg and Wurtz note that it was Sherrington who postulated that the brain uses eye position to calculate the spatial location of objects from the position of their images on the retina. They emphasize that there is a representation of eye position in somatosensory cortex. The representation of space in parietal cortex is not organized into a single map like the retinotopic map in primary visual cortex. Instead, it is divided into at least four areas (LIP, MIP, VIP, AIP) that analyze the visual world in ways appropriate for individual motor systems. In other words, these four areas in the intraparietal sulcus (lateral, medial, ventral, and anterior) constitute four different visual maps, each of which corresponds to a particular motor workspace. Activity of these neurons is modulated by the position of the eyes in the orbit.
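This eye-position modulation is often modeled as a multiplicative "gain field". A minimal sketch, with all parameter values and function names my own illustrative assumptions rather than the chapter's: a neuron keeps a fixed retinotopic receptive field, but its overall firing rate is scaled up or down by where the eyes sit in the orbit, so the same retinal stimulus evokes different rates at different gaze angles.

```python
import numpy as np

def gaussian_tuning(retinal_pos, preferred=0.0, width=5.0):
    """Retinotopic receptive field: peak response at the preferred location (deg)."""
    return np.exp(-((retinal_pos - preferred) ** 2) / (2 * width ** 2))

def gain(eye_pos, slope=0.05, baseline=1.0):
    """Linear gain on horizontal eye position (deg), clipped to stay non-negative."""
    return max(baseline + slope * eye_pos, 0.0)

def response(retinal_pos, eye_pos):
    # Multiplicative gain field: retinal tuning scaled by orbital eye position.
    return gaussian_tuning(retinal_pos) * gain(eye_pos)

# Identical retinal stimulus, two different gaze angles: the two rates differ,
# and together they carry information about head-centered location.
r_left = response(0.0, -10.0)   # gaze 10 deg left
r_right = response(0.0, +10.0)  # gaze 10 deg right
```

A downstream population reading out many such gain-modulated neurons can, in principle, combine retinal position and eye position into the spatially accurate representation the chapter describes.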
This helps to explain the conundrum of visual neglect, whereby damage to parietal cortex results in spatial inattention and inaction in the contralateral field, owing to disintegration or disconnection of these visual maps.