Introspective Autonomy


The Team

PIs: Joydeep Biswas‡, Shlomo Zilberstein†
Students: Connor Basich†, Sadegh Rabiee‡, Sandhya Saisubramanian†, Kavan Sikand‡, Amanda Adkins‡, Shuwa Miura†, Allyson Beach†
‡: University of Texas at Austin
†: University of Massachusetts Amherst
Industrial Partners: Kyle Hollins Wray, Stefan Witwicki (Renault-Nissan-Mitsubishi Alliance Innovation Center Silicon Valley)

Overview

Building and deploying autonomous service robots in real human environments has been a long-standing challenge in artificial intelligence and robotics. Such robots can assist humans in everyday activities and have a transformative impact on society. To realize these benefits, however, robots must be cognizant of their limitations, and when uncertain about an action, they must be able to ask for human assistance. When robots are deployed in novel environments, developers cannot fully foresee what errors the robots may make, what the root causes of such errors may be, how human confidence in the robots’ abilities may change as a result of such errors, and how well the robots may learn to autonomously overcome errors and reduce their reliance on human assistance.

This project develops a comprehensive solution to these challenges by introducing competence-aware autonomy, enabling robots to learn what aspects of the environment, the situation, and the task lead to varying levels of success. When asking for human assistance, a competence-aware robot can offer evidence to explain its level of confidence. When left to act autonomously, it can hypothesize contingency actions to reduce its own uncertainty and remain autonomous with high confidence. Consequently, the project transforms the ability of researchers and practitioners to deploy robots in unstructured environments where limited knowledge is available prior to deployment, and it enables workers with limited robotics expertise to deploy robots more safely and teach them over time to become progressively more independent.

The project addresses the need to build competence-aware systems by introducing approaches that satisfy six core properties of introspective perception and planning:

1) an approach to autonomously supervise the training of introspective perception by relying on different types of consistency metrics;
2) an approach to learn to identify causal factors of perception errors by considering both local and global cues in sensed data;
3) an approach to analyze sequences of actions and observations from logs to learn the impact of actions on introspective perception;
4) an introspective planning approach that is cognizant of different levels of autonomy, each associated with certain restrictions on autonomous operation;
5) an introspective planning approach that is cognizant of the cost of different forms of human assistance and can learn to minimize reliance on humans over time; and
6) an introspective planning approach that can learn from human feedback about negative side effects and can attempt to explain and mitigate them.

The project identifies key patterns of interaction among these components that enable a robot to autonomously learn to plan around its limitations and minimize its reliance on humans. Our algorithms are evaluated on several platforms, including the UT Campus Jackal and Husky, the UMass Campus Jackal, and the Nissan Autonomous Vehicle.
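To make the competence-aware planning idea in properties 4 and 5 above concrete, the following is a minimal, hypothetical sketch rather than the project's actual implementation: a robot maintains a learned estimate of its success rate at each level of autonomy for a given context, picks the level that minimizes the combined cost of human effort and expected failure, and updates its estimate from execution outcomes and human feedback. All class names, autonomy levels, and cost values below are illustrative assumptions.

```python
# Illustrative sketch only (not the project's code): greedy selection of a
# level of autonomy from a learned, per-context competence estimate.
import random
from collections import defaultdict

# Hypothetical levels of autonomy, from full human control to full autonomy,
# and the assumed cost of the human effort each level requires.
LEVELS = ["no_autonomy", "supervised", "verified", "unsupervised"]
HUMAN_COST = {"no_autonomy": 1.0, "supervised": 0.5, "verified": 0.2, "unsupervised": 0.0}
FAILURE_COST = 5.0  # assumed penalty for failing without human help


class CompetenceModel:
    """Per-(context, level) success-rate estimate built from outcomes and feedback."""

    def __init__(self):
        # Optimistic prior (1 success in 1 attempt) so higher autonomy is tried first.
        self.counts = defaultdict(lambda: [1, 1])

    def predict_success(self, context, level):
        successes, attempts = self.counts[(context, level)]
        return successes / attempts

    def update(self, context, level, succeeded):
        successes, attempts = self.counts[(context, level)]
        self.counts[(context, level)] = [successes + int(succeeded), attempts + 1]


def choose_level(model, context):
    """Pick the level of autonomy with the lowest expected cost:
    human effort plus the expected cost of an unassisted failure."""
    def expected_cost(level):
        p = model.predict_success(context, level)
        return HUMAN_COST[level] + FAILURE_COST * (1.0 - p)
    return min(LEVELS, key=expected_cost)


if __name__ == "__main__":
    # Hypothetical simulation: in this context, more human oversight means
    # a higher chance of success; the numbers are made up for illustration.
    TRUE_SUCCESS = {"no_autonomy": 1.0, "supervised": 0.98, "verified": 0.9, "unsupervised": 0.7}
    model = CompetenceModel()
    context = "narrow_corridor"
    for _ in range(200):
        level = choose_level(model, context)
        succeeded = random.random() < TRUE_SUCCESS[level]
        model.update(context, level, succeeded)
    print("preferred level:", choose_level(model, context))
    print("competence estimates:",
          {lvl: round(model.predict_success(context, lvl), 2) for lvl in LEVELS})
```

A full system would condition the competence estimate on features from introspective perception and would need exploration and generalization across contexts; this greedy, single-context sketch only illustrates how reliance on human assistance can shrink, or grow, as evidence about the robot's competence accumulates.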