This paper integrates knowledge from physiology and psychophysics (i.e., visual perception) to propose a biological neural network model of cortical visual cell responses. We attempt to model how retinal and cortical cell interactions detect static luminance discontinuities in images (such as edges) as well as moving luminance discontinuities (i.e., motion stimuli). We address how an important class of cortical cells, known as simple cells, combines retinal and thalamic signals to produce an effective contrast detection mechanism. An extension of the static model is then discussed in light of both psychophysical and physiological data on motion processing; this extension suggests a role for another important class of cortical cells, known as complex cells. The static model is evaluated through a series of computer simulations that probe its capabilities with natural images, synthetic images (to assess noise tolerance), and images that allow us to compare the model's behavior with physiological results. The motion processing capabilities of the extended scheme are also evaluated through computer simulations. We suggest that this type of investigation can be used both to advance our understanding of brain function and to devise powerful computational schemes that can be incorporated into artificial vision systems.
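To illustrate the kind of computation the abstract refers to, the following is a minimal sketch of a center-surround (difference-of-Gaussians) "retinal" stage feeding a crude oriented "simple-cell" stage that responds to luminance discontinuities. The filter shapes, the ON/OFF combination rule, the helper names (gaussian_kernel, center_surround_response, simple_cell_response), and all parameter values are assumptions made for illustration only; they are not taken from the paper's model.

```python
# Illustrative sketch (not the paper's model): a difference-of-Gaussians
# "retinal" stage whose rectified ON/OFF channels are combined by a crude
# oriented "simple-cell" stage that signals luminance edges.
import numpy as np
from scipy.signal import convolve2d


def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel of side `size` (assumed filter shape)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()


def center_surround_response(image, sigma_center=1.0, sigma_surround=3.0, size=15):
    """ON-center response: narrow 'center' Gaussian minus broad 'surround' Gaussian."""
    center = convolve2d(image, gaussian_kernel(size, sigma_center),
                        mode="same", boundary="symm")
    surround = convolve2d(image, gaussian_kernel(size, sigma_surround),
                          mode="same", boundary="symm")
    return center - surround


def simple_cell_response(image, theta=0.0, offset=2):
    """Crude oriented 'simple cell': an ON subregion on one side of the cell
    and an OFF subregion on the other, displaced along the axis perpendicular
    to the preferred edge orientation theta (radians)."""
    dog = center_surround_response(image)
    on = np.maximum(dog, 0.0)           # half-wave rectified ON channel
    off = np.maximum(-dog, 0.0)         # half-wave rectified OFF channel
    dy = int(round(offset * np.sin(theta + np.pi / 2)))
    dx = int(round(offset * np.cos(theta + np.pi / 2)))
    # Sample the ON channel on one side of the cell and the OFF channel on
    # the opposite side (np.roll approximates sampling at a displaced point).
    on_side = np.roll(on, shift=(-dy, -dx), axis=(0, 1))   # on at p + d
    off_side = np.roll(off, shift=(dy, dx), axis=(0, 1))   # off at p - d
    return on_side + off_side           # linear sum of the two subregion drives


if __name__ == "__main__":
    # Toy stimulus: a horizontal luminance step between rows 31 and 32.
    img = np.zeros((64, 64))
    img[32:, :] = 1.0
    resp = simple_cell_response(img, theta=0.0)  # cell preferring horizontal edges
    print("peak response row:", int(resp.sum(axis=1).argmax()))  # lands near the edge
```

The demo prints the row at which the summed response peaks, which falls at the step edge; rotating theta or the stimulus probes the orientation selectivity of this toy detector.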