Peter Henderson
Verified email at princeton.edu - Homepage
Title
Cited by
Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
3620 | 2021
Deep Reinforcement Learning that Matters
P Henderson*, R Islam*, P Bachman, J Pineau, D Precup, D Meger
AAAI Conference on Artificial Intelligence (AAAI), 2018
2385 | 2018
An Introduction to Deep Reinforcement Learning
V François-Lavet, P Henderson, R Islam, MG Bellemare, J Pineau
Foundations and Trends® in Machine Learning 11 (3-4), 219-354, 2018
1755 | 2018
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
TL Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
arXiv preprint arXiv:2211.05100, 2022
1398 | 2022
Holistic Evaluation of Language Models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
arXiv preprint arXiv:2211.09110, 2022
834 | 2022
Towards the systematic reporting of the energy and carbon footprints of machine learning
P Henderson, J Hu, J Romoff, E Brunskill, D Jurafsky, J Pineau
Journal of Machine Learning Research 21 (248), 1-43, 2020
482 | 2020
A Survey of Available Corpora For Building Data-Driven Dialogue Systems: The Journal Version
IV Serban, R Lowe, P Henderson, L Charlin, J Pineau
Dialogue & Discourse 9 (1), 1-49, 2018
454* | 2018
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
M Brundage, S Avin, J Wang, H Belfield, G Krueger, G Hadfield, H Khlaaf, ...
arXiv preprint arXiv:2004.07213, 2020
393 | 2020
Reproducibility of Benchmarked Deep Reinforcement Learning Tasks for Continuous Control
R Islam*, P Henderson*, M Gomrokchi, D Precup
Reproducibility in Machine Learning Workshop (ICML), 2017
326 | 2017
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
X Qi, Y Zeng, T Xie, PY Chen, R Jia, P Mittal, P Henderson
arXiv preprint arXiv:2310.03693, 2023
187 | 2023
Ethical Challenges in Data-Driven Dialogue Systems
P Henderson, K Sinha, N Angelard-Gontier, NR Ke, G Fried, R Lowe, ...
AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2018
184 | 2018
When does pretraining help? assessing self-supervised learning for law and the CaseHOLD dataset of 53,000+ legal holdings
L Zheng*, N Guha*, BR Anderson, P Henderson, DE Ho
Proceedings of the Eighteenth International Conference on Artificial …, 2021
167 | 2021
With Little Power Comes Great Responsibility
D Card, P Henderson, U Khandelwal, R Jia, K Mahowald, D Jurafsky
arXiv preprint arXiv:2010.06595, 2020
106 | 2020
Visual adversarial examples jailbreak aligned large language models
X Qi, K Huang, A Panda, P Henderson, M Wang, P Mittal
Proceedings of the AAAI Conference on Artificial Intelligence 38 (19), 21527 …, 2024
104* | 2024
Foundation models and fair use
P Henderson, X Li, D Jurafsky, T Hashimoto, MA Lemley, P Liang
Journal of Machine Learning Research 24 (400), 1-79, 2023
94 | 2023
Underwater Multi-Robot Convoying using Visual Tracking by Detection
F Shkurti, WD Chang, P Henderson, MJ Islam, JCG Higuera, J Li, ...
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017
69 | 2017
Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset
P Henderson*, MS Krass*, L Zheng, N Guha, CD Manning, D Jurafsky, ...
arXiv preprint arXiv:2207.00220, 2022
61 | 2022
Data Governance in the Age of Large-Scale Data-Driven Language Technology
Y Jernite, H Nguyen, S Biderman, A Rogers, V Danchev, S Tan, ...
60 | 2022
Benchmark Environments for Multitask Learning in Continuous Domains
P Henderson, WD Chang, F Shkurti, J Hansen, D Meger, G Dudek
Lifelong Learning: A Reinforcement Learning Approach Workshop (ICML), 2017
57 | 2017
LegalBench: A collaboratively built benchmark for measuring legal reasoning in large language models
N Guha, J Nyarko, D Ho, C Ré, A Chilton, A Chohlas-Wood, A Peters, ...
Advances in Neural Information Processing Systems 36, 2024
56 | 2024
Articles 1–20