Direct preference optimization: Your language model is secretly a reward model. R Rafailov, A Sharma, E Mitchell, CD Manning, S Ermon, C Finn. Advances in Neural Information Processing Systems 36, 2024. Cited by 1181.
COMBO: Conservative offline model-based policy optimization. T Yu, A Kumar, R Rafailov, A Rajeswaran, S Levine, C Finn. Advances in Neural Information Processing Systems 34, 28954-28967, 2021. Cited by 378.
Open X-Embodiment: Robotic learning datasets and RT-X models. A Padalkar, A Pooley, A Jain, A Bewley, A Herzog, A Irpan, A Khazatsky, ... arXiv preprint arXiv:2310.08864, 2023. Cited by 166.
Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. K Tian, E Mitchell, A Zhou, A Sharma, R Rafailov, H Yao, C Finn, ... arXiv preprint arXiv:2305.14975, 2023. Cited by 129.
Offline reinforcement learning from images with latent space models. R Rafailov, T Yu, A Rajeswaran, C Finn. Learning for Dynamics and Control, 1154-1168, 2021. Cited by 124.
Offline meta-reinforcement learning with advantage weighting. E Mitchell, R Rafailov, XB Peng, S Levine, C Finn. International Conference on Machine Learning, 7780-7791, 2021. Cited by 103.
Diffusion model alignment using direct preference optimization. B Wallace, M Dang, R Rafailov, L Zhou, A Lou, S Purushwalkam, S Ermon, ... Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. Cited by 51.
Visual adversarial imitation learning using variational models. R Rafailov, T Yu, A Rajeswaran, C Finn. Advances in Neural Information Processing Systems 34, 3016-3028, 2021. Cited by 41.
Contrastive preference learning: Learning from human feedback without RL. J Hejna, R Rafailov, H Sikchi, C Finn, S Niekum, WB Knox, D Sadigh. arXiv preprint arXiv:2310.13639, 2023. Cited by 36.
Vision-based manipulators need to also see from their hands. K Hsu, MJ Kim, R Rafailov, J Wu, C Finn. arXiv preprint arXiv:2203.12677, 2022. Cited by 33.
From r to Q*: Your language model is secretly a Q-function. R Rafailov, J Hejna, R Park, C Finn. arXiv preprint arXiv:2404.12358, 2024. Cited by 31.
Disentangling length from quality in direct preference optimization. R Park, R Rafailov, S Ermon, C Finn. arXiv preprint arXiv:2403.19159, 2024. Cited by 27.
Preference fine-tuning of LLMs should leverage suboptimal, on-policy data. F Tajwar, A Singh, A Sharma, R Rafailov, J Schneider, T Xie, S Ermon, ... arXiv preprint arXiv:2404.14367, 2024. Cited by 25.
On the sum of powered distances to certain sets of points on the circle. N Nikolov, R Rafailov. Pacific Journal of Mathematics 253 (1), 157-168, 2011. Cited by 23.
An emulator for fine-tuning large language models using small language models. E Mitchell, R Rafailov, A Sharma, C Finn, CD Manning. arXiv preprint arXiv:2310.12962, 2023. Cited by 19.
Aligning modalities in vision large language models via preference fine-tuning. Y Zhou, C Cui, R Rafailov, C Finn, H Yao. arXiv preprint arXiv:2402.11411, 2024. Cited by 18.
On extremums of sums of powered distances to a finite set of points. N Nikolov, R Rafailov. Geometriae Dedicata 167 (1), 69-89, 2013. Cited by 18.
Open X-Embodiment: Robotic learning datasets and RT-X models. Q Vuong, S Levine, HR Walke, K Pertsch, A Singh, R Doshi, C Xu, J Luo, ... Towards Generalist Robots: Learning Paradigms for Scalable Skill Acquisition …, 2023. Cited by 14.
Is model collapse inevitable? Breaking the curse of recursion by accumulating real and synthetic data. M Gerstgrasser, R Schaeffer, A Dey, R Rafailov, H Sleight, J Hughes, ... arXiv preprint arXiv:2404.01413, 2024. Cited by 13.
MOTO: Offline pre-training to online fine-tuning for model-based robot learning. R Rafailov, KB Hatch, V Kolev, JD Martin, M Phielipp, C Finn. Conference on Robot Learning, 3654-3671, 2023. Cited by 10.