Do as I can, not as I say: Grounding language in robotic affordances. M Ahn, A Brohan, N Brown, Y Chebotar, O Cortes, B David, C Finn, C Fu, et al. arXiv preprint arXiv:2204.01691, 2022. Cited by 1308.
RT-1: Robotics transformer for real-world control at scale. A Brohan, N Brown, J Carbajal, Y Chebotar, J Dabis, C Finn, et al. arXiv preprint arXiv:2212.06817, 2022. Cited by 750.
Code as policies: Language model programs for embodied control. J Liang, W Huang, F Xia, P Xu, K Hausman, B Ichter, P Florence, A Zeng. 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 9493-9500, 2023. Cited by 706.
RT-2: Vision-language-action models transfer web knowledge to robotic control. A Brohan, N Brown, J Carbajal, Y Chebotar, X Chen, K Choromanski, et al. arXiv preprint arXiv:2307.15818, 2023. Cited by 604.
Do as I can, not as I say: Grounding language in robotic affordances. A Brohan, Y Chebotar, C Finn, K Hausman, A Herzog, D Ho, J Ibarz, et al. Conference on Robot Learning, pp. 287-318, 2023. Cited by 411.
Open X-Embodiment: Robotic learning datasets and RT-X models. A O'Neill, A Rehman, A Gupta, A Maddukuri, A Gupta, A Padalkar, A Lee, et al. arXiv preprint arXiv:2310.08864, 2023. Cited by 267.
Language to rewards for robotic skill synthesis. W Yu, N Gileadi, C Fu, S Kirmani, KH Lee, MG Arenas, HTL Chiang, et al. arXiv preprint arXiv:2306.08647, 2023. Cited by 217.
Learning to walk in the real world with minimal human effort. S Ha, P Xu, Z Tan, S Levine, J Tan. arXiv preprint arXiv:2002.08550, 2020. Cited by 186.
Robots that ask for help: Uncertainty alignment for large language model planners. AZ Ren, A Dixit, A Bodrova, S Singh, S Tu, N Brown, P Xu, L Takayama, et al. arXiv preprint arXiv:2307.01928, 2023. Cited by 159.
LVLM-eHub: A comprehensive evaluation benchmark for large vision-language models. P Xu, W Shao, K Zhang, P Gao, S Liu, M Lei, F Meng, S Huang, Y Qiao, et al. arXiv preprint arXiv:2306.09265, 2023. Cited by 149.
RT-2: Vision-language-action models transfer web knowledge to robotic control. B Zitkovich, T Yu, S Xu, P Xu, T Xiao, F Xia, J Wu, P Wohlhart, S Welker, et al. Conference on Robot Learning, pp. 2165-2183, 2023. Cited by 146.
OmniQuant: Omnidirectionally calibrated quantization for large language models. W Shao, M Chen, Z Zhang, P Xu, L Zhao, Z Li, K Zhang, P Gao, Y Qiao, et al. arXiv preprint arXiv:2308.13137, 2023. Cited by 119.
ImageBind-LLM: Multi-modality instruction tuning. J Han, R Zhang, W Shao, P Gao, P Xu, H Xiao, K Zhang, C Liu, S Wen, et al. arXiv preprint arXiv:2309.03905, 2023. Cited by 93.
Visual-locomotion: Learning to walk on complex terrains with vision. W Yu, D Jain, A Escontrela, A Iscen, P Xu, E Coumans, S Ha, J Tan, et al. 5th Annual Conference on Robot Learning, 2021. Cited by 79.
Principles and guidelines for evaluating social robot navigation algorithms. A Francis, C Pérez-d'Arpino, C Li, F Xia, A Alahi, R Alami, A Bera, et al. arXiv preprint arXiv:2306.16740, 2023. Cited by 56.
PIVOT: Iterative visual prompting elicits actionable knowledge for VLMs. S Nasiriany, F Xia, W Yu, T Xiao, J Liang, I Dasgupta, A Xie, D Driess, et al. arXiv preprint arXiv:2402.07872, 2024. Cited by 53.
Learning model predictive controllers with real-time attention for real-world navigation. X Xiao, T Zhang, K Choromanski, E Lee, A Francis, J Varley, S Tu, et al. arXiv preprint arXiv:2209.10780, 2022. Cited by 45.
MMT-Bench: A comprehensive multimodal benchmark for evaluating large vision-language models towards multitask AGI. K Ying, F Meng, J Wang, Z Li, H Lin, Y Yang, H Zhang, W Zhang, Y Lin, et al. arXiv preprint arXiv:2404.16006, 2024. Cited by 39.
Value function spaces: Skill-centric state abstractions for long-horizon reasoning. D Shah, P Xu, Y Lu, T Xiao, A Toshev, S Levine, B Ichter. arXiv preprint arXiv:2111.03189, 2021. Cited by 36.
Tiny LVLM-eHub: Early multimodal experiments with Bard. W Shao, Y Hu, P Gao, M Lei, K Zhang, F Meng, P Xu, S Huang, H Li, et al. arXiv preprint arXiv:2308.03729, 2023. Cited by 33.