Accelerating Robotic Reinforcement Learning with Agent Guidance

Haojun Chen1,2† Zili Zou2† Chengdong Ma1 Yaoxiang Pu2 Haotong Zhang1,2 Yuanpei Chen1,2✉ Yaodong Yang1,2✉
1Institute for Artificial Intelligence, Peking University
2PKU-PsiBot Joint Lab
†Equal contribution ✉Corresponding authors

Abstract

Reinforcement Learning (RL) offers a powerful paradigm for autonomous robots to master generalist manipulation skills through trial and error. However, its real-world application is stifled by severe sample inefficiency. Recent Human-in-the-Loop (HIL) methods accelerate training by using human corrections, yet this approach faces a scalability barrier. Reliance on human supervisors imposes a 1:1 supervision ratio that limits fleet expansion, is prone to operator fatigue over extended sessions, and introduces high variance due to inconsistent human proficiency.

We present Agent-guided Policy Search (AGPS), a framework that automates the training pipeline by replacing human supervisors with a multimodal agent. Our key insight is that the agent can be viewed as a semantic world model that injects intrinsic value priors to structure physical exploration. Through executable tools, the agent provides precise guidance in the form of corrective waypoints and spatial constraints that prune exploration. We validate our approach on two tasks: precision insertion and deformable object manipulation. Results demonstrate that AGPS outperforms HIL methods in sample efficiency. By automating the supervision pipeline, AGPS opens a path toward labor-free, scalable robot learning.
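To make the guidance mechanism concrete, the sketch below shows one way agent-issued waypoints and a spatial constraint could plug into an RL exploration loop. All names here (propose_guidance, prune_action, waypoint_bonus, and the example coordinates) are hypothetical illustrations under our reading of the abstract, not the paper's actual API: waypoints enter as a dense shaping prior on the reward, and the constraint box clips exploratory actions that would leave the permitted workspace.

```python
import numpy as np

def propose_guidance(observation):
    """Stand-in for a multimodal agent's tool call: returns corrective
    waypoints and an axis-aligned spatial constraint (workspace box)
    for the end effector. Values are illustrative only."""
    waypoints = np.array([[0.45, 0.10, 0.20],   # e.g. pre-insertion pose
                          [0.45, 0.10, 0.12]])  # e.g. insertion pose
    box_low = np.array([0.35, -0.05, 0.05])
    box_high = np.array([0.55, 0.25, 0.30])
    return waypoints, (box_low, box_high)

def prune_action(ee_pos, action, box, dt=0.05):
    """Exploration pruning: clip a velocity action whose next
    end-effector position would leave the agent-specified box."""
    box_low, box_high = box
    next_pos = ee_pos + action * dt
    clipped = np.clip(next_pos, box_low, box_high)
    return (clipped - ee_pos) / dt

def waypoint_bonus(ee_pos, waypoints, scale=1.0):
    """Dense shaping term: negative distance to the nearest waypoint,
    injecting the agent's value prior into the RL reward."""
    dists = np.linalg.norm(waypoints - ee_pos, axis=1)
    return -scale * dists.min()
```

In an actual training loop, prune_action would filter each sampled action before it is sent to the robot, and waypoint_bonus would be added to the environment reward, so the agent's semantic priors shape exploration without replacing the learned policy.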

Pipeline

Figure: AGPS pipeline overview.

Experiments


USB Insert

Hang Chinese Knot