Humans still trump AI in assessing moving scenes

Self-driving car. | Newsreel
AI lacks the ability to assess social interactions. | Photo: Gremlin (iStock)

Artificial intelligence (AI) is inferior to humans at interpreting social interactions in moving scenes, a skill vital for self-driving cars and assistive robots.

Research led by scientists at Johns Hopkins University found that AI fails to understand the social dynamics and context necessary for interacting with people, abilities essential for navigating the real world.

Study lead author Leyla Isik said the problem may be rooted in the infrastructure of AI systems, which are currently built on neural networks inspired by the area of the brain that processes static images, a different region from the one that processes dynamic social scenes.
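
To make that architectural contrast concrete, here is a minimal, illustrative PyTorch sketch; it is not the models evaluated in the study, and all names in it are hypothetical. A 2D-convolutional network processes one frame at a time, while a 3D-convolutional network convolves over time as well, so only the latter can represent motion and unfolding interactions.

```python
import torch
import torch.nn as nn

# Static-image feature extractor: 2D convolutions see a single frame,
# so temporal cues (motion, unfolding social interactions) are invisible.
static_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

# Spatiotemporal feature extractor: 3D convolutions slide over
# (time, height, width), so features can encode change across frames.
dynamic_net = nn.Sequential(
    nn.Conv3d(3, 16, kernel_size=(3, 3, 3), padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
)

frame = torch.randn(1, 3, 224, 224)      # one RGB image
clip = torch.randn(1, 3, 16, 224, 224)   # a 16-frame RGB clip

print(static_net(frame).shape)   # torch.Size([1, 16]) -- no notion of time
print(dynamic_net(clip).shape)   # torch.Size([1, 16]) -- pooled over time too
```

The point of the contrast is that the first network, whatever its depth, has no axis along which a pedestrian's movement could even be represented; the second does.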

“AI for a self-driving car, for example, would need to recognize the intentions, goals, and actions of human drivers and pedestrians. You would want it to know which way a pedestrian is about to start walking, or whether two people are in conversation versus about to cross the street,” Assistant Professor Isik said.

“Any time you want an AI to interact with humans, you want it to be able to recognize what people are doing. I think this sheds light on the fact that these systems can’t right now.”

Access the full study: Modeling dynamic social vision highlights gaps between deep learning and humans.