Peng Xu
Google DeepMind
Verified email at google.com
Title · Cited by · Year
Do as I can, not as I say: Grounding language in robotic affordances
M Ahn, A Brohan, N Brown, Y Chebotar, O Cortes, B David, C Finn, C Fu, ...
arXiv preprint arXiv:2204.01691, 2022
Cited by 1308 · 2022
RT-1: Robotics transformer for real-world control at scale
A Brohan, N Brown, J Carbajal, Y Chebotar, J Dabis, C Finn, ...
arXiv preprint arXiv:2212.06817, 2022
Cited by 750 · 2022
Code as policies: Language model programs for embodied control
J Liang, W Huang, F Xia, P Xu, K Hausman, B Ichter, P Florence, A Zeng
2023 IEEE International Conference on Robotics and Automation (ICRA), 9493-9500, 2023
Cited by 706 · 2023
RT-2: Vision-language-action models transfer web knowledge to robotic control
A Brohan, N Brown, J Carbajal, Y Chebotar, X Chen, K Choromanski, ...
arXiv preprint arXiv:2307.15818, 2023
Cited by 604 · 2023
Do as I can, not as I say: Grounding language in robotic affordances
A Brohan, Y Chebotar, C Finn, K Hausman, A Herzog, D Ho, J Ibarz, ...
Conference on robot learning, 287-318, 2023
Cited by 411 · 2023
Open X-Embodiment: Robotic learning datasets and RT-X models
A O'Neill, A Rehman, A Gupta, A Maddukuri, A Gupta, A Padalkar, A Lee, ...
arXiv preprint arXiv:2310.08864, 2023
Cited by 267 · 2023
Language to rewards for robotic skill synthesis
W Yu, N Gileadi, C Fu, S Kirmani, KH Lee, MG Arenas, HTL Chiang, ...
arXiv preprint arXiv:2306.08647, 2023
Cited by 217 · 2023
Learning to walk in the real world with minimal human effort
S Ha, P Xu, Z Tan, S Levine, J Tan
arXiv preprint arXiv:2002.08550, 2020
Cited by 186 · 2020
Robots that ask for help: Uncertainty alignment for large language model planners
AZ Ren, A Dixit, A Bodrova, S Singh, S Tu, N Brown, P Xu, L Takayama, ...
arXiv preprint arXiv:2307.01928, 2023
Cited by 159 · 2023
LVLM-eHub: A comprehensive evaluation benchmark for large vision-language models
P Xu, W Shao, K Zhang, P Gao, S Liu, M Lei, F Meng, S Huang, Y Qiao, ...
arXiv preprint arXiv:2306.09265, 2023
Cited by 149 · 2023
RT-2: Vision-language-action models transfer web knowledge to robotic control
B Zitkovich, T Yu, S Xu, P Xu, T Xiao, F Xia, J Wu, P Wohlhart, S Welker, ...
Conference on Robot Learning, 2165-2183, 2023
Cited by 146 · 2023
OmniQuant: Omnidirectionally calibrated quantization for large language models
W Shao, M Chen, Z Zhang, P Xu, L Zhao, Z Li, K Zhang, P Gao, Y Qiao, ...
arXiv preprint arXiv:2308.13137, 2023
Cited by 119 · 2023
ImageBind-LLM: Multi-modality instruction tuning
J Han, R Zhang, W Shao, P Gao, P Xu, H Xiao, K Zhang, C Liu, S Wen, ...
arXiv preprint arXiv:2309.03905, 2023
Cited by 93 · 2023
Visual-locomotion: Learning to walk on complex terrains with vision
W Yu, D Jain, A Escontrela, A Iscen, P Xu, E Coumans, S Ha, J Tan, ...
5th Annual Conference on Robot Learning, 2021
Cited by 79 · 2021
Principles and guidelines for evaluating social robot navigation algorithms
A Francis, C Pérez-d'Arpino, C Li, F Xia, A Alahi, R Alami, A Bera, ...
arXiv preprint arXiv:2306.16740, 2023
Cited by 56 · 2023
PIVOT: Iterative visual prompting elicits actionable knowledge for VLMs
S Nasiriany, F Xia, W Yu, T Xiao, J Liang, I Dasgupta, A Xie, D Driess, ...
arXiv preprint arXiv:2402.07872, 2024
Cited by 53 · 2024
Learning model predictive controllers with real-time attention for real-world navigation
X Xiao, T Zhang, K Choromanski, E Lee, A Francis, J Varley, S Tu, ...
arXiv preprint arXiv:2209.10780, 2022
Cited by 45 · 2022
MMT-Bench: A comprehensive multimodal benchmark for evaluating large vision-language models towards multitask AGI
K Ying, F Meng, J Wang, Z Li, H Lin, Y Yang, H Zhang, W Zhang, Y Lin, ...
arXiv preprint arXiv:2404.16006, 2024
Cited by 39 · 2024
Value function spaces: Skill-centric state abstractions for long-horizon reasoning
D Shah, P Xu, Y Lu, T Xiao, A Toshev, S Levine, B Ichter
arXiv preprint arXiv:2111.03189, 2021
Cited by 36 · 2021
Tiny LVLM-eHub: Early multimodal experiments with Bard
W Shao, Y Hu, P Gao, M Lei, K Zhang, F Meng, P Xu, S Huang, H Li, ...
arXiv preprint arXiv:2308.03729, 2023
Cited by 33 · 2023
Articles 1–20