Kumar Kshitij Patel
PhD Student, TTIC
Verified email at ttic.edu - Homepage
Title · Cited by · Year
Don't use large mini-batches, use local SGD
T Lin, SU Stich, KK Patel, M Jaggi
arXiv preprint arXiv:1808.07217, 2018
Cited by 441 · 2018
Is local SGD better than minibatch SGD?
B Woodworth, KK Patel, S Stich, Z Dai, B Bullins, B McMahan, O Shamir, ...
International Conference on Machine Learning, 10334-10343, 2020
Cited by 248 · 2020
Minibatch vs local SGD for heterogeneous distributed learning
BE Woodworth, KK Patel, N Srebro
Advances in Neural Information Processing Systems 33, 6281-6292, 2020
Cited by 185 · 2020
Communication trade-offs for Local-SGD with large step size
KK Patel, A Dieuleveut
Advances In Neural Information Processing Systems 32 (32), 2825-2830, 2019
Cited by 81* · 2019
Corruption-tolerant bandit learning
S Kapoor, KK Patel, P Kar
Machine Learning 108 (4), 687-715, 2019
Cited by 55 · 2019
A stochastic newton algorithm for distributed convex optimization
B Bullins, K Patel, O Shamir, N Srebro, BE Woodworth
Advances in Neural Information Processing Systems 34, 26818-26830, 2021
Cited by 12 · 2021
On Convexity and Linear Mode Connectivity in Neural Networks
D Yunis, KK Patel, PHP Savarese, G Vardi, J Frankle, M Walter, K Livescu, ...
OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022
Cited by 11 · 2022
Federated online and bandit convex optimization
KK Patel, L Wang, A Saha, N Srebro
International Conference on Machine Learning, 27439-27460, 2023
Cited by 7* · 2023
Towards Optimal Communication Complexity in Distributed Non-Convex Optimization
KK Patel, L Wang, B Woodworth, B Bullins, N Srebro
Advances in Neural Information Processing Systems 36, 2022
Cited by 7 · 2022
On the still unreasonable effectiveness of federated averaging for heterogeneous distributed learning
KK Patel, M Glasgow, L Wang, N Joshi, N Srebro
Federated Learning and Analytics in Practice: Algorithms, Systems …, 2023
Cited by 3 · 2023
On the Effect of Defections in Federated Learning and How to Prevent Them
M Han, KK Patel, H Shao, L Wang
arXiv preprint arXiv:2311.16459, 2023
Cited by 1 · 2023
Private Overparameterized Linear Regression without Suffering in High Dimensions
L Wang, D Zou, KK Patel, J Wu, N Srebro
2023