arxiv:2604.20468

MOMO: A framework for seamless physical, verbal, and graphical robot skill learning and adaptation

Published on Apr 23

AI-generated summary

An interactive robot skill adaptation framework combines kinesthetic, natural-language, and graphical interfaces with specialized components (human-intention detection, a tool-based LLM architecture, movement primitives, virtual fixtures, and ergodic control) to enable flexible industrial robot operation.

Abstract

Industrial robot applications require increasingly flexible systems that non-expert users can easily adapt to varying tasks and environments. However, different adaptations benefit from different interaction modalities. We present an interactive framework that enables robot skill adaptation through three complementary modalities: kinesthetic touch for precise spatial corrections, natural language for high-level semantic modifications, and a graphical web interface for visualizing geometric relations and trajectories, inspecting and adjusting parameters, and editing via-points by drag-and-drop. The framework integrates five components: energy-based human-intention detection, a tool-based LLM architecture (where the LLM selects and parameterizes predefined functions rather than generating code) for safe natural-language adaptation, Kernelized Movement Primitives (KMPs) for motion encoding, probabilistic Virtual Fixtures for guided demonstration recording, and ergodic control for surface finishing. We show that the tool-based LLM architecture generalizes skill adaptation from KMPs to ergodic control, enabling voice-commanded surface finishing. Validation on a 7-DoF torque-controlled robot at the Automatica 2025 trade fair demonstrates the practical applicability of our approach in industrial settings.
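
The tool-based adaptation layer described in the abstract can be pictured with a minimal Python sketch. The tool names, parameters, and checks below are hypothetical illustrations, not the paper's actual function set: the point is only that the LLM emits a structured call into a registry of vetted functions, which is validated before execution, instead of generating free-form code.

import json
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class Tool:
    """One predefined, safety-reviewed robot function the LLM may invoke."""
    description: str
    params: Dict[str, type]   # expected argument names and types
    fn: Callable[..., None]

# Hypothetical tool implementations; a real system would update the KMP
# via-point set or re-parameterize the ergodic-control target distribution.
def shift_via_point(index: int, dx: float, dy: float, dz: float) -> None:
    print(f"via-point {index} shifted by ({dx}, {dy}, {dz}) m")

def set_finishing_duration(duration_s: float) -> None:
    print(f"ergodic surface finishing re-timed to {duration_s} s")

REGISTRY: Dict[str, Tool] = {
    "shift_via_point": Tool(
        "Translate one trajectory via-point in Cartesian space",
        {"index": int, "dx": float, "dy": float, "dz": float},
        shift_via_point,
    ),
    "set_finishing_duration": Tool(
        "Adjust how long the ergodic finishing motion covers the surface",
        {"duration_s": float},
        set_finishing_duration,
    ),
}

def dispatch(call_json: str) -> None:
    """Validate an LLM-produced tool call, then run the vetted function.
    Because the LLM only fills in arguments, malformed or unsafe requests
    are rejected here rather than executed as generated code."""
    call: Dict[str, Any] = json.loads(call_json)
    tool = REGISTRY[call["name"]]   # unknown tool names raise KeyError
    args = call["arguments"]
    for name, expected in tool.params.items():
        if not isinstance(args.get(name), expected):
            raise ValueError(f"bad or missing argument: {name}")
    tool.fn(**args)

# What the LLM might emit for "move the third via-point two centimeters up":
dispatch('{"name": "shift_via_point",'
         ' "arguments": {"index": 2, "dx": 0.0, "dy": 0.0, "dz": 0.02}}')

Under this reading, the generalization claimed in the abstract amounts to registering additional tools (e.g., for ergodic control) so that new skills become voice-commandable without retraining or re-prompting the language model for code generation.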

Get this paper in your agent:

hf papers read 2604.20468
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
