Sandy Huang, UC Berkeley. “Enabling Robot Transparency with Informative Actions”

Position:  PhD Candidate

Current Institution:  University of California, Berkeley

Abstract:  Enabling Robot Transparency with Informative Actions

As robots become more capable and commonplace, it is increasingly important that the policies they execute are transparent: human end-users need to understand how a robot will act, when it will fail, and why it failed. Unfortunately, passive familiarization is slow: it could take people hours, possibly days, of riding in an autonomous car before they understand its driving style and which situations it can and cannot handle. This lack of understanding is dangerous, because passengers may over-trust the car, expecting it to handle situations that it cannot. Thus, we need to speed up the process of making policies transparent. My research leverages insights from cognitive science, reinforcement learning, and optimization to select informative examples that more quickly enable end-users to understand a robot. We found that when human end-users see informative examples of a robot's behavior, they more quickly understand how the robot acts and which situations it can and cannot handle, even when the robot's policy is a complex black box. This leads to safer and more comfortable human-robot interaction, compared to the typical approach of relying on passive familiarization.

Sandy Huang is a PhD candidate in the Computer Science Department at UC Berkeley, co-advised by Anca Dragan and Pieter Abbeel. She received a bachelor's degree with Honors in computer science from Stanford University in 2013. Her research focuses on making robot policies more transparent and robust, with the goal of giving human end-users a better understanding of how a robot will act, when it will fail, and why it failed. Her work was nominated for a best paper award at Human-Robot Interaction (HRI) 2018. She received the National Science Foundation Graduate Research Fellowship (GRFP) and NDSEG fellowships, the Berkeley Chancellor's Fellowship, and the Google Anita Borg Memorial Scholarship. She was a research intern at DeepMind in summer 2018, working with Raia Hadsell.