SInViG: A Self-Evolving Interactive Visual Agent for Human-Robot Interaction.
Jie Xu, Hanbo Zhang, Xinghang Li, Huaping Liu, Xuguang Lan, Tao Kong.
ICRA Workshop 2024.
[Paper]
[Video]
Towards Unified Interactive Visual Grounding in The Wild.
Jie Xu, Hanbo Zhang, Qingyi Si, Yifeng Li, Xuguang Lan, Tao Kong.
ICRA 2024.
[Paper]
[Video]
[Demo]
[Website]
[Code]
Vision-Language Foundation Models as Effective Robot Imitators.
Xinghang Li, Minghuan Liu, Hanbo Zhang, Cunjun Yu, Jie Xu, Hongtao Wu, Chilam Cheang, Ya Jing, Weinan Zhang, Huaping Liu, Hang Li, Tao Kong.
ICLR 2024.
[Paper]
[Website]
[Code]
[Model]
Experience Consistency Distillation Continual Reinforcement Learning for Robotic Manipulation Tasks.
Chao Zhao, Jie Xu, Ru Peng, Xingyu Chen, Kuizhi Mei, Xuguang Lan.
ICRA 2024.
[Paper]
[Video]
InViG: Benchmarking Open-Ended Interactive Visual Grounding with 500K Dialogues.
Hanbo Zhang*, Jie Xu*, Yuchen Mo, Tao Kong.
CVPR Workshop 2024.
[Paper]
[Code]
[Dataset]
A Continuous Learning Approach for Probabilistic Human Motion Prediction.
Jie Xu*, Shihong Wang*, Xingyu Chen, Jiahao Zhang, Xuguang Lan, Nanning Zheng.
ICRA 2022.
[Paper]
[Video]
Probabilistic Human Motion Prediction via A Bayesian Neural Network.
Jie Xu*, Xingyu Chen*, Xuguang Lan, Nanning Zheng.
ICRA 2021.
[Paper]
[Video1]
[Video2]
EAN: Error Attenuation Network for Long-term Human Motion Prediction.
Jie Xu, Xuguang Lan, Jin Li, Xingyu Chen, Nanning Zheng.
CCHI 2019.
[Paper]
[Video]