Hyperband: A novel bandit-based approach to hyperparameter optimization. L Li, K Jamieson, G DeSalvo, A Rostamizadeh, A Talwalkar. Journal of Machine Learning Research 18(1), 6765-6816, 2017. [Cited by 2072]
Random search and reproducibility for neural architecture search. L Li, A Talwalkar. Uncertainty in Artificial Intelligence, 367-377, 2020. [Cited by 598]
A system for massively parallel hyperparameter tuning. L Li, K Jamieson, A Rostamizadeh, E Gonina, J Ben-Tzur, M Hardt, ... Proceedings of Machine Learning and Systems 2, 230-246, 2020. [Cited by 234]
Massively parallel hyperparameter tuning. L Li, K Jamieson, A Rostamizadeh, E Gonina, M Hardt, B Recht, ... arXiv preprint arXiv:1810.05934, 2018. [Cited by 131]
Geometry-aware gradient algorithms for neural architecture search. L Li, M Khodak, MF Balcan, A Talwalkar. arXiv preprint arXiv:2004.07802, 2020. [Cited by 59]
Federated hyperparameter tuning: Challenges, baselines, and connections to weight-sharing. M Khodak, R Tu, T Li, L Li, MF Balcan, V Smith, A Talwalkar. Advances in Neural Information Processing Systems 34, 19184-19197, 2021. [Cited by 41]
On data efficiency of meta-learning. M Al-Shedivat, L Li, E Xing, A Talwalkar. International Conference on Artificial Intelligence and Statistics, 1369-1377, 2021. [Cited by 16]
A system for massively parallel hyperparameter tuning. L Li, K Jamieson, A Rostamizadeh, E Gonina, M Hardt, B Recht, ... arXiv preprint arXiv:1810.05934, 2018. [Cited by 16]
Hyperband: Bandit-Based Configuration Evaluation for Hyperparameter Optimization. L Li, K Jamieson, G DeSalvo, A Rostamizadeh, A Talwalkar. ICLR, 2017. [Cited by 13*]
Rethinking neural operations for diverse tasks. N Roberts, M Khodak, T Dao, L Li, C Ré, A Talwalkar. Advances in Neural Information Processing Systems 34, 15855-15869, 2021. [Cited by 11]
Weight-Sharing for Hyperparameter Optimization in Federated Learning. M Khodak, T Li, L Li, MF Balcan, V Smith, A Talwalkar. Int. Workshop on Federated Learning for User Privacy and Data …, 2020. [Cited by 9]
Massively parallel hyperparameter tuning. L Li, K Jamieson, A Rostamizadeh, E Gonina, M Hardt, B Recht, ... URL https://openreview.net/forum, 2018. [Cited by 8]
Exploiting reuse in pipeline-aware hyperparameter tuning. L Li, E Sparks, K Jamieson, A Talwalkar. arXiv preprint arXiv:1903.05176, 2019. [Cited by 7]
Random search and reproducibility for neural architecture search. L Li, A Talwalkar. arXiv preprint arXiv:1902.07638, 2019. [Cited by 5]
Learning operations for neural PDE solvers. N Roberts, M Khodak, T Dao, L Li, C Ré, A Talwalkar. Proc. ICLR SimDL Workshop, 2021. [Cited by 2]
A simple setting for understanding neural architecture search with weight-sharing. M Khodak, L Li, N Roberts, MF Balcan, A Talwalkar. ICML AutoML Workshop, 2020. [Cited by 2]
Towards Efficient Automated Machine Learning. L Li. Google, 2020. [Cited by 1]
Weight-sharing beyond neural architecture search: Efficient feature map selection and federated hyperparameter tuning. M Khodak, L Li, N Roberts, MF Balcan, A Talwalkar. Proc. 2nd SysML Conf., 2019. [Cited by 1]
On Weight-Sharing and Bilevel Optimization in Architecture Search. M Khodak, L Li, MF Balcan, A Talwalkar. [Cited by 1]
Cross-Modal Fine-Tuning: Align then Refine. J Shen, L Li, LM Dery, C Staten, M Khodak, G Neubig, A Talwalkar. arXiv preprint arXiv:2302.05738, 2023.