Social Robotics
Artificial Agents
In daily life, we observe and interact with others moving in myriad ways, some more predictable than others. Dominant theories of action understanding propose that we draw on our own motor experience to infer what others are doing, and what they are likely to do next. But what happens when we see a person moving like a machine, a dancing robot, or a triangle negotiating a barrier to grab a cookie?
Our research suggests that the same sensorimotor brain regions engaged when watching our fellow humans move naturally also respond when we watch non-human agents in action (whether Lego robots, 2D shapes, or a wind-up bulldozer), or humans moving in decidedly unpredictable and unfamiliar ways. Ongoing investigations examine top-down modulation of these perceptual processes, and how different expectations shape the way we perceive and interact with non-human agents.