Journal Article

DASH: Modularized Human Manipulation Simulation with Vision and Language for Embodied AI

[Figure: DASH manipulating objects]

Creating virtual humans with embodied, human-like perceptual and actuation constraints promises to provide an integrated simulation platform for many scientific and engineering applications. We present Dynamic and Autonomous Simulated Human (DASH), an embodied virtual human that, given natural language commands, performs grasp-and-stack tasks in a physically simulated cluttered environment using only its own visual perception, proprioception, and touch, without requiring human motion data. By factoring the DASH system into a vision module, a language module, and manipulation modules spanning two skill categories, we can mix and match analytical and machine-learning techniques across modules, so that DASH not only performs randomly arranged tasks with a high success rate, but also does so under anthropomorphic constraints and with fluid, diverse motions. The modular design also facilitates analysis and extension to more complex manipulation skills.
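The modular factoring described above can be sketched as a simple pipeline: a vision module estimates object state, a language module maps a command to a skill and target, and the selected manipulation module acts. This is a minimal illustrative sketch only; all class and method names here are hypothetical and do not come from the paper's code.

```python
# Hypothetical sketch of a modular perceive-parse-act pipeline in the
# spirit of DASH's factoring. Names and interfaces are illustrative
# assumptions, not the authors' implementation.

class VisionModule:
    def perceive(self, scene):
        # Stand-in for visual perception: return estimated object poses.
        return dict(scene)

class LanguageModule:
    def parse(self, command):
        # Stand-in for language understanding: map a command string
        # to a (skill, target-object) pair.
        verb, target = command.split(maxsplit=1)
        skill = "grasp" if verb == "grasp" else "stack"
        return skill, target

class ManipulationModule:
    # One module per skill category; each could be analytical or learned.
    def __init__(self, skill):
        self.skill = skill

    def execute(self, target, pose):
        # Stand-in for physically simulated actuation.
        return f"{self.skill}({target} @ {pose})"

class DASH:
    def __init__(self):
        self.vision = VisionModule()
        self.language = LanguageModule()
        # Mix-and-match: any skill module can be swapped independently.
        self.skills = {s: ManipulationModule(s) for s in ("grasp", "stack")}

    def run(self, command, scene):
        poses = self.vision.perceive(scene)
        skill, target = self.language.parse(command)
        return self.skills[skill].execute(target, poses[target])
```

Because each module sits behind a narrow interface, an analytical grasp controller could be replaced by a learned one without touching the vision or language components, which is the extensibility property the abstract highlights.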

Paper    Code    Video

Author(s)
Yifeng Jiang
Michelle Guo
Jiangshan Li
Ioannis Exarchos
Jiajun Wu
C. Karen Liu
Journal Name
ACM SIGGRAPH / Eurographics Symposium on Computer Animation (SCA), 2021
Publication Date
August 2021