Differentially private meta-learning. J Li, M Khodak, S Caldas, A Talwalkar. arXiv preprint arXiv:1909.05830, 2019. Cited by 146.
Interpretable machine learning: Moving from mythos to diagnostics. V Chen, J Li, JS Kim, G Plumb, A Talwalkar. Communications of the ACM 65 (8), 43-50, 2022. Cited by 71*.
DataComp-LM: In search of the next generation of training sets for language models. J Li, A Fang, G Smyrnis, M Ivgi, M Jordan, S Gadre, H Bansal, E Guha, ... arXiv preprint arXiv:2406.11794, 2024. Cited by 47*.
A learning theoretic perspective on local explainability. J Li, V Nagarajan, G Plumb, A Talwalkar. arXiv preprint arXiv:2011.01205, 2020. Cited by 18.
Language models scale reliably with over-training and on downstream tasks. SY Gadre, G Smyrnis, V Shankar, S Gururangan, M Wortsman, R Shao, ... arXiv preprint arXiv:2403.08540, 2024. Cited by 15.
Characterizing the impacts of semi-supervised learning for weak supervision. J Li, J Zhang, L Schmidt, AJ Ratner. Advances in Neural Information Processing Systems 36, 2024. Cited by 8.
Better alignment with instruction back-and-forth translation. T Nguyen, J Li, S Oh, L Schmidt, J Weston, L Zettlemoyer, X Li. arXiv preprint arXiv:2408.04614, 2024.