| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Low-rank matrix completion using alternating minimization | P Jain, P Netrapalli, S Sanghavi | Proceedings of the forty-fifth annual ACM symposium on Theory of computing …, 2013 | 912 | 2013 |
| Phase retrieval using alternating minimization | P Netrapalli, P Jain, S Sanghavi | IEEE Transactions on Signal Processing 63 (18), 4814-4826, 2015 | 522 | 2015 |
| How to escape saddle points efficiently | C Jin, R Ge, P Netrapalli, SM Kakade, MI Jordan | International Conference on Machine Learning, 1724-1732, 2017 | 483 | 2017 |
| Non-convex robust PCA | P Netrapalli, UN Niranjan, S Sanghavi, A Anandkumar, P Jain | arXiv preprint arXiv:1410.7660, 2014 | 267 | 2014 |
| Learning the graph of epidemic cascades | P Netrapalli, S Sanghavi | ACM SIGMETRICS Performance Evaluation Review 40 (1), 211-222, 2012 | 187 | 2012 |
| Learning sparsely used overcomplete dictionaries via alternating minimization | A Agarwal, A Anandkumar, P Jain, P Netrapalli | SIAM Journal on Optimization 26 (4), 2775-2799, 2016 | 156 | 2016 |
| Accelerated gradient descent escapes saddle points faster than gradient descent | C Jin, P Netrapalli, MI Jordan | Conference on Learning Theory, 1042-1085, 2018 | 152 | 2018 |
| Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm | P Jain, C Jin, SM Kakade, P Netrapalli, A Sidford | Conference on Learning Theory, 1147-1164, 2016 | 110* | 2016 |
| Learning sparsely used overcomplete dictionaries | A Agarwal, A Anandkumar, P Jain, P Netrapalli, R Tandon | Conference on Learning Theory, 123-137, 2014 | 108 | 2014 |
| What is local optimality in nonconvex-nonconcave minimax optimization? | C Jin, P Netrapalli, M Jordan | International Conference on Machine Learning, 4880-4889, 2020 | 106* | 2020 |
| Information-theoretic thresholds for community detection in sparse networks | J Banks, C Moore, J Neeman, P Netrapalli | Conference on Learning Theory, 383-416, 2016 | 106* | 2016 |
| Faster eigenvector computation via shift-and-invert preconditioning | D Garber, E Hazan, C Jin, C Musco, P Netrapalli, A Sidford | International Conference on Machine Learning, 2626-2634, 2016 | 102* | 2016 |
| Accelerating stochastic gradient descent for least squares regression | P Jain, SM Kakade, R Kidambi, P Netrapalli, A Sidford | Conference on Learning Theory, 545-604, 2018 | 89 | 2018 |
| Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification | P Jain, S Kakade, R Kidambi, P Netrapalli, A Sidford | Journal of Machine Learning Research 18, 2018 | 89* | 2018 |
| A clustering approach to learning sparsely used overcomplete dictionaries | A Agarwal, A Anandkumar, P Netrapalli | IEEE Transactions on Information Theory 63 (1), 575-592, 2016 | 85* | 2016 |
| Fast exact matrix completion with finite samples | P Jain, P Netrapalli | Conference on Learning Theory, 1007-1034, 2015 | 83 | 2015 |
| Provable efficient online matrix completion via non-convex stochastic gradient descent | C Jin, SM Kakade, P Netrapalli | arXiv preprint arXiv:1605.08370, 2016 | 78 | 2016 |
| One-bit compressed sensing: Provable support and vector recovery | S Gopi, P Netrapalli, P Jain, A Nori | International Conference on Machine Learning, 154-162, 2013 | 71 | 2013 |
| Efficient algorithms for large-scale generalized eigenvector computation and canonical correlation analysis | R Ge, C Jin, P Netrapalli, A Sidford | International Conference on Machine Learning, 2741-2750, 2016 | 56 | 2016 |
| On the insufficiency of existing momentum schemes for stochastic optimization | R Kidambi, P Netrapalli, P Jain, S Kakade | 2018 Information Theory and Applications Workshop (ITA), 1-9, 2018 | 55 | 2018 |