Man vs. Machine: Robots Still No Match for Human Vision, Study Shows
Robots still have a long way to go before they can rival humans in visual perception.
A recent study published in the Proceedings of the National Academy of Sciences found that machines still struggle to interpret visual patterns as well as humans do.
Francois Fleuret, the author of the study and a robotics engineer at the Idiap Research Institute in Martigny, Switzerland, said, "Humans understand and characterize images better, but statistics and computers are more powerful."
Visual machine learning has applications in a variety of fields, including space exploration, industrial robotics and vehicle safety. Machines already outperform humans at certain large-scale tasks, such as airport security surveillance and facial recognition, the study said. But they still fall short when it comes to interpreting visual patterns, detecting individual shapes and describing categories.
Fleuret's finding does not come as a surprise to the scientific community, where there is broad consensus that humans are better than machines at understanding visual patterns. Fleuret gave the following example: a person introduced to a new colleague at work could probably recognize that colleague the next day on a crowded bus. A computer would never be able to do that, he said.
He cited another example, the CAPTCHA, a test widely used by websites to tell humans and automated programs apart as part of cybersecurity. A CAPTCHA is usually a randomly generated word that is distorted or warped. While most humans can read the word easily, machines struggle to decipher it.
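To make the idea concrete, here is a minimal, hypothetical sketch of how such a test might be generated, assuming the Pillow imaging library; the word length, noise lines and rotation values are illustrative choices, not details taken from the study.

```python
# Toy CAPTCHA generator: render a random word, then add noise and a slight
# rotation so the letters stay readable for people but become harder for
# simple text-recognition software. Illustrative only.
import random
import string

from PIL import Image, ImageDraw, ImageFilter, ImageFont


def make_captcha(length=5, size=(220, 80)):
    # Pick a random challenge word (hypothetical choice of uppercase letters).
    word = "".join(random.choices(string.ascii_uppercase, k=length))

    # Render the word in black on a white grayscale canvas.
    img = Image.new("L", size, color=255)
    draw = ImageDraw.Draw(img)
    draw.text((20, 25), " ".join(word), fill=0, font=ImageFont.load_default())

    # Scribble random gray lines across the text, then rotate and blur the
    # whole image slightly -- common tricks for confusing automated readers.
    for _ in range(6):
        start = (random.randint(0, size[0]), random.randint(0, size[1]))
        end = (random.randint(0, size[0]), random.randint(0, size[1]))
        draw.line([start, end], fill=100, width=2)
    img = img.rotate(random.uniform(-8, 8), fillcolor=255)
    img = img.filter(ImageFilter.GaussianBlur(radius=0.8))

    return word, img


if __name__ == "__main__":
    word, img = make_captcha()
    img.save(f"captcha_{word}.png")  # easy for a person to read, hard for a program
```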
Artificial intelligence researchers believe that for machines to achieve a higher level of visual understanding, they must learn to recognize the individual parts of an object and to combine the parts' relative positions into a recognizable whole, as in the sketch below.
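As a rough illustration of that part-based idea, the following sketch assumes hypothetical part detectors have already reported where each part of a bicycle sits in an image, and checks whether the parts' positions relative to a reference point fit an expected layout. The part names, offsets and tolerance are invented for illustration and are not taken from the study.

```python
# Part-based recognition sketch: an object counts as recognized only if its
# parts are found AND they sit in a plausible spatial arrangement.
from math import dist

# Expected offset of each part from a reference part (the wheel "hub"),
# as might be learned from training images of bicycles (hypothetical values).
BICYCLE_MODEL = {
    "front_wheel": (60, 0),
    "rear_wheel": (-60, 0),
    "handlebar": (45, -40),
    "seat": (-30, -45),
}


def matches_model(detections, model, tolerance=15.0):
    """Return True if detected part positions fit the model's layout.

    `detections` maps part names to (x, y) image coordinates; the "hub"
    entry is the reference point the model's offsets are measured from.
    """
    if "hub" not in detections:
        return False
    hx, hy = detections["hub"]
    for part, (dx, dy) in model.items():
        if part not in detections:
            return False  # a required part was not found at all
        px, py = detections[part]
        # Compare the observed offset from the hub with the expected offset.
        if dist((px - hx, py - hy), (dx, dy)) > tolerance:
            return False
    return True


# Example: coordinates that hypothetical part detectors might report.
detected = {
    "hub": (100, 100),
    "front_wheel": (158, 102),
    "rear_wheel": (42, 99),
    "handlebar": (146, 61),
    "seat": (68, 57),
}
print(matches_model(detected, BICYCLE_MODEL))  # True: the parts form a bicycle
```

The design choice here mirrors the approach described above: detecting parts alone is not enough; it is the consistency of their relative positions that lets the system conclude it is looking at a whole object.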