Synapse: Learning Preferential Concepts from Visual Demonstrations

Department of Computer Science, UT Austin

Illustration of Synapse for identifying good contingency locations. (a) Synapse learns a neuro-symbolic program that represents the preferential concept from user demonstrations, each of which includes both an NL explanation and a physical demonstration; the learning algorithm consists of three steps: updating the concept library, synthesizing a program sketch, and performing parameter synthesis. (b) The new sketch is synthesized from the previous sketch and the current natural language description. (c) Parameter synthesis determines suitable values for the numeric parameters in the program sketch. (d) Synapse evaluates the learned program on a new query image to return a preference mask (in this case, boolean) over the input image.
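To make panel (d) concrete, the following is a minimal, hypothetical Python sketch of how a learned preference program could be evaluated over a visually parsed query image to produce a boolean preference mask. The Region fields, concept names, and numeric thresholds below are illustrative assumptions, not the actual Synapse DSL.

from dataclasses import dataclass
import numpy as np

@dataclass
class Region:
    # Output of visual parsing for one image region (all fields are assumed).
    pixels: np.ndarray          # H x W boolean mask for this region
    labels: set                 # semantic labels, e.g. {"shoulder"}
    distances: dict             # metric distances, e.g. {"sidewalk": 0.8}

def good_contingency_location(r: Region) -> bool:
    # Example learned program; the 0.5 m and 3.0 m values stand in for
    # parameters produced by parameter synthesis.
    return ("shoulder" in r.labels
            and r.distances.get("sidewalk", float("inf")) > 0.5
            and r.distances.get("traffic_cone", float("inf")) > 3.0)

def preference_mask(regions, image_shape, program) -> np.ndarray:
    # Rasterize per-region program decisions into an image-sized boolean mask.
    mask = np.zeros(image_shape, dtype=bool)
    for r in regions:
        if program(r):
            mask |= r.pixels
    return mask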

Abstract

We address the problem of preference learning, which aims to learn user-specific preferences (e.g., 'good parking spot', 'convenient drop-off location') from visual input. Despite its similarity to learning factual concepts (e.g., 'red cube'), preference learning is a fundamentally harder problem due to its subjective nature and the paucity of person-specific training data.

To tackle this problem, we present a new framework called Synapse, a neuro-symbolic approach designed to efficiently learn preferential concepts from limited demonstrations. Synapse represents preferences as neuro-symbolic programs in a domain-specific language (DSL) that operates over images, and it leverages a novel combination of visual parsing, large language models, and program synthesis to learn programs representing individual preferences. We evaluate Synapse through extensive experimentation, including a user case study focusing on mobility-related concepts in mobile robotics and autonomous driving. Our evaluation demonstrates that Synapse significantly outperforms existing baselines as well as its own ablations.

Demos

Approach

At a high level, the learning procedure consists of three steps:

  • Concept Library Update: Synapse-Learn first checks whether the existing concept library C is sufficient for learning the desired preference evaluation function. For example, if the natural language explanation uses the term "far away" but the concept library does not contain a suitable definition, Synapse-Learn interactively queries the user for clarification and updates its concept library as needed.
  • Program Sketch Synthesis: If the concept library is sufficient for representing the preference, Synapse-Learn proceeds to synthesize a so-called program sketch, i.e., a program whose numeric constants are left as holes. This is because the user's natural language explanation is often sufficient to convey the general structure of the preference evaluation function but not its numeric parameters, which can only be learned accurately from the physical demonstrations.
  • Parameter Synthesis: Synapse-Learn uses all physical demonstrations provided thus far to synthesize the unknown numeric parameters of the sketch using a constraint-solving approach. For example, if the user's NL explanation mentions "not too close to the sidewalk", the physical demonstrations are needed to determine what the user considers "too close"; a separate parameter synthesis procedure therefore infers suitable numeric values from those demonstrations (a minimal illustration follows this list).
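As a concrete, hypothetical illustration of the parameter-synthesis step, the snippet below uses an SMT solver to fill a single numeric hole tau, interpreted as the "too close to the sidewalk" threshold, so that the completed program is consistent with every demonstration seen so far. The demonstration values are made up, and Synapse's actual sketch language and constraint encoding may differ; this only sketches the constraint-solving idea and requires the z3-solver package.

from z3 import Real, Solver, sat

# Distance to the sidewalk (in meters) at demonstrated locations (made-up values).
positive_demos = [1.2, 0.9, 1.5]   # locations the user marked as good
negative_demos = [0.2, 0.4]        # locations the user marked as "too close"

tau = Real("tau")                  # unknown parameter (hole) in the sketch
solver = Solver()
for d in positive_demos:           # good locations must satisfy distance > tau
    solver.add(d > tau)
for d in negative_demos:           # bad locations must violate it
    solver.add(d <= tau)

if solver.check() == sat:
    print("consistent threshold:", solver.model()[tau])   # any value in [0.4, 0.9)
else:
    print("no single threshold explains all demonstrations")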

Results

We evaluate Synapse against eight baselines and conduct multiple ablation studies to confirm our design choices.

Lifelong Learning Curve

Synapse learns new concepts and synthesizes better parameters as it sees more demonstrations.

User-study

We conduct a case study with Synapse to test whether it can align well with the preferences of multiple users.

Latest News

Workshop Paper

Apr 04, 2024

Paper accepted at the Vision-Language Models for Navigation and Manipulation (VLMNM) workshop at ICRA 2024.

BibTeX

@article{modak2024synapse,
  title={Synapse: Learning Preferential Concepts from Visual Demonstrations},
  author={Modak, Sadanand and Patton, Noah and Dillig, Isil and Biswas, Joydeep},
  journal={arXiv preprint arXiv:2403.16689},
  year={2024}
}