Karen Liu
Karen has a passion for human movement, whether it involves an animated character or a humanoid robot. She directs the Movement Lab at Stanford, pursuing research in Computer Animation and Robotics in parallel. In her view, animation is about a virtual body controlled by a physical brain (i.e., a human animator), while robotics is about a physical body controlled by a virtual brain. These seemingly opposite research areas share remarkably similar fundamental methodologies, which Karen and her team have developed: physics simulation, generative models, imitation learning, reinforcement learning, and various optimal control techniques.
Karen’s interest in human movement has remained unchanged since her PhD advisor first showed her a physically simulated character hopping under moon gravity, but her vision has expanded from creating the coolest video game to building predictive human motion models for preventing musculoskeletal injury, studying human athletic and artistic performance, and developing new motion sensors for sports medicine. Her team has pioneered human-centric physics simulation, differentiable physics models, new mocap systems that capture both human actions and egocentric observations, and the largest dataset containing both kinematic and kinetic human motion data.
Karen’s passion for human movement has also led her to pursue research in humanoid robots. Starting with bipedal locomotion, Karen and her team have developed various techniques in reinforcement learning, physics simulation, system identification, and sim-to-real transfer, demonstrating that policies trained in simulation can be deployed on an inherently unstable bipedal robot. Similarly, Karen’s fascination with human dexterity has evolved from hand animation to dexterous robot hands. Leveraging the morphological similarity between human and robotic hands, her team has developed an in-the-wild hand motion capture system that enables learning dexterous robot manipulation from human demonstrations.
The fascinating parallel between Computer Animation and Robotics gives the Movement Lab a unique perspective from which to explore various directions in Embodied AI, from building foundation models for human motion to creating intelligent digital twins and assistive machines that share autonomy with humans.
Karen is also committed to democratizing robotics education. She aims to create a learning experience for beginners that integrates hardware experiments, mathematical foundations, and state-of-the-art AI methodologies. In collaboration with Hands-on Robotics, a non-profit organization, she teaches a well-received undergraduate course at Stanford, "A Hands-On Introduction to AI-Enabled Robots," which provides lecture materials, open-source code, and open-hardware video instructions to anyone in the world interested in building an AI-enabled quadruped from scratch.