Journal Article

Learning to Collaborate from Simulation for Robot-Assisted Dressing

We investigated the application of haptic feedback control and deep reinforcement learning (DRL) to robot-assisted dressing. Our method uses DRL to simultaneously train human and robot control policies as separate neural networks using physics simulations. We also modeled variations in human impairments relevant to dressing, including unilateral muscle weakness, involuntary arm motion, and limited range of motion. Our approach produced control policies that successfully collaborate in a variety of simulated dressing tasks involving a hospital gown and a T-shirt, and policies trained in simulation enabled a real PR2 robot to dress the arm of a humanoid robot with a hospital gown. We found that training policies for specific impairments dramatically improved performance; that controller execution speed could be scaled after training to slow the robot without steep reductions in performance; that curriculum learning could be used to lower applied forces; and that multi-modal sensing, including a simulated capacitive sensor, improved performance.
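The core idea of simultaneously training two collaborating policies on a shared objective can be illustrated with a minimal sketch. The toy task, reward, policy form (1-D Gaussians), and all hyperparameters below are illustrative assumptions, not the paper's actual networks, simulator, or learning algorithm; it only shows the co-training structure, with each agent updated by REINFORCE against a reward both agents influence.

```python
import numpy as np

def cotrain(target=1.0, iters=5000, lr=0.01, sigma=0.2, seed=0):
    """Co-train two 1-D Gaussian policies ("robot" and "human") with
    REINFORCE on a shared cooperative reward. Toy setup for illustration
    only; the paper uses separate neural-network policies trained in a
    cloth physics simulation."""
    rng = np.random.default_rng(seed)
    mu_robot, mu_human, baseline = 0.0, 0.0, 0.0
    for _ in range(iters):
        # Each agent samples its action from its own policy.
        a_r = mu_robot + sigma * rng.standard_normal()
        a_h = mu_human + sigma * rng.standard_normal()
        # Shared reward: the agents succeed only jointly, when their
        # actions sum to the target.
        r = -(a_r + a_h - target) ** 2
        adv = r - baseline  # running baseline for variance reduction
        # REINFORCE: grad of log N(a; mu, sigma^2) w.r.t. mu is (a - mu)/sigma^2.
        mu_robot += lr * adv * (a_r - mu_robot) / sigma**2
        mu_human += lr * adv * (a_h - mu_human) / sigma**2
        baseline = 0.9 * baseline + 0.1 * r
    return mu_robot, mu_human
```

After training, the two policy means jointly approach the target, even though neither agent can solve the task alone; this mirrors, in miniature, how the robot and simulated human policies must each learn behavior that complements the other.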

Paper   Video

Author(s)
Alexander Clegg
Zackory Erickson
Patrick Grady
Greg Turk
Charles C. Kemp
C. Karen Liu
Journal Name
IEEE Robotics and Automation Letters (RA-L and ICRA), 2020
Publication Date
May, 2020