SAVOR learns skill affordances for bite acquisition: how suitable a manipulation skill (e.g., skewering, scooping) is for a given utensil-food interaction. In our formulation, skill affordances arise from the combination of tool affordances (what a utensil can do) and food affordances (what the food allows).
Our OpenSearch system searches for a specified object class, given open-set instructions, across diverse embodiments and environments. This is enabled by our Open Scene Graph, which serves as the scene memory for a system built entirely from Foundation Models (FMs).
This work investigates embodiment scaling laws—the idea that training on a more diverse set of robot embodiments improves generalization to unseen ones. Using a procedurally generated dataset of ~1,000 varied robots, the authors train generalist locomotion policies and show strong zero-shot transfer to real-world robots like the Unitree Go2 and H1.
We examine, experimentally and theoretically, one representation that enables visual navigation policies trained solely in the Habitat simulator to generalize to real-world scenes, both indoors and outdoors.
Our framework enables an agent to put misplaced objects back in place with partial map information by exploiting commonsense knowledge in large language models (LLMs).
Developed a self-driving agent in the Duckietown simulator using classical planning, computer vision, and imitation learning techniques. Placed among the top-scoring projects in the module.
Teaching Assistant
CS4750/CS5750/ECE4770/MAE4760 Foundations of Robotics, Fall 2024, Cornell University
CS5446/CS4246 AI Planning and Decision Making, Fall 2023, NUS
DBA5106 Foundations of Business Analytics, Fall 2023, NUS
CS5242 Neural Networks and Deep Learning, Spring 2023, NUS
DBA5106 Foundations of Business Analytics, Fall 2022, NUS
CSC4020 Fundamentals of Machine Learning, Spring 2022, CUHKSZ
ERG3010 Data and Knowledge Management, Fall 2021, CUHKSZ