LOS ALAMOS, N.M., Dec. 21, 2011 /PRNewswire-USNewswire/ -- An essential question confronting neuroscientists and computer vision researchers alike is how objects can be identified by simply "looking" at an image. Introspectively, we know that the human brain solves this problem very well. We only have to look at something to know what it is.
But teaching a computer to "know" what it's looking at is far harder. In research published this fall in the Public Library of Science journal PLoS Computational Biology, a team from Los Alamos National Laboratory, Chatham University, and Emory University first measured human performance on a visual task: identifying a certain kind of shape when an image is flashed in front of a viewer for a very short time (20-200 milliseconds). As expected, human performance gets worse when the image is shown for shorter periods, and worse again when the shapes are more complicated.
But could a computer be taught to recognize shapes as well, and then do it faster than humans? The team developed a computer model based on human neural structure and function to do what we do, and possibly do it better.
Their paper, "Model Cortical Association Fields Account for the Time Course and Dependence on Target Complexity of Human Contour Perception," describes how, after measuring human performance, they created a computer model to also attempt to pick out the shapes.
"This model is biologically inspired and relies on leveraging lateral connections between neurons in the same layer of a model of the human visual system," said Vadas Gintautas of Chatham University in Pittsburgh and formerly a researcher at Los Alamos.
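The lateral-connection idea can be illustrated with a minimal toy sketch (an illustration of the general "association field" concept, not the authors' actual model; all names and parameters here are invented for the example). Each model unit has a position and a preferred edge orientation, and lateral support between two units is stronger when they are nearby and their orientations are consistent with a smooth contour passing through both:

```python
import math

def lateral_support(p1, theta1, p2, theta2, sigma_d=2.0, sigma_t=0.5):
    """Toy association-field weight between two oriented edge detectors.

    p1, p2: (x, y) positions; theta1, theta2: preferred orientations in radians.
    Support is high when the units are close together and roughly co-linear,
    i.e. consistent with a smooth contour running through both. (Hypothetical
    parameters; not taken from the published model.)
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    phi = math.atan2(dy, dx)  # direction of the line joining the two units
    # Angular deviation of each preferred orientation from that joining line,
    # treating orientations as axial (an edge at 0 equals an edge at pi).
    a1 = abs(math.atan2(math.sin(theta1 - phi), math.cos(theta1 - phi)))
    a2 = abs(math.atan2(math.sin(theta2 - phi), math.cos(theta2 - phi)))
    a1 = min(a1, math.pi - a1)
    a2 = min(a2, math.pi - a2)
    # Gaussian falloff in distance and in total orientation mismatch.
    return math.exp(-(dist / sigma_d) ** 2) * math.exp(-((a1 + a2) / sigma_t) ** 2)

# Two co-linear horizontal edges support each other strongly...
strong = lateral_support((0, 0), 0.0, (1, 0), 0.0)
# ...while a perpendicular neighbor receives almost no support.
weak = lateral_support((0, 0), 0.0, (1, 0), math.pi / 2)
```

In a full model of this kind, weights like these let units along a smooth contour reinforce one another over successive processing steps, so coherent shapes "pop out" from background clutter.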
Neuroscientists have characterized neurons in the primate visual cortex that appear to underlie object recognition, noted senior author Garrett Kenyon of Los Alamos. "These neurons, located in the inferote