Simon Shaolei Du
Assistant Professor, School of Computer Science and Engineering, University of Washington
Verified email at cs.washington.edu
Title · Cited by · Year
Gradient descent finds global minima of deep neural networks
SS Du, JD Lee, H Li, L Wang, X Zhai
International Conference on Machine Learning 2019, 2018
717 · 2018
Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks
S Arora, SS Du, W Hu, Z Li, R Wang
International Conference on Machine Learning 2019, 2019
544 · 2019
On exact computation with an infinitely wide neural net
S Arora, SS Du, W Hu, Z Li, RR Salakhutdinov, R Wang
Advances in Neural Information Processing Systems 32, 2019
482 · 2019
Gradient descent provably optimizes over-parameterized neural networks
SS Du, X Zhai, B Poczos, A Singh
International Conference on Learning Representations 2019, 2018
434 · 2018
Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima
SS Du, JD Lee, Y Tian, B Poczos, A Singh
International Conference on Machine Learning 2018, 2017
193 · 2017
On the power of over-parametrization in neural networks with quadratic activation
SS Du, JD Lee
International Conference on Machine Learning 2018, 2018
188 · 2018
Gradient descent can take exponential time to escape saddle points
SS Du, C Jin, JD Lee, MI Jordan, A Singh, B Poczos
Advances in Neural Information Processing Systems 30, 2017
187 · 2017
What Can Neural Networks Reason About?
K Xu, J Li, M Zhang, SS Du, K Kawarabayashi, S Jegelka
International Conference on Learning Representations 2020, 2019
129 · 2019
Stochastic variance reduction methods for policy evaluation
SS Du, J Chen, L Li, L Xiao, D Zhou
International Conference on Machine Learning 2017, 2017
129 · 2017
Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels
SS Du, K Hou, B Póczos, R Salakhutdinov, R Wang, K Xu
Advances in Neural Information Processing Systems 2019, 2019
124 · 2019
When is a convolutional filter easy to learn?
SS Du, JD Lee, Y Tian
International Conference on Learning Representations 2018, 2017
116 · 2017
Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced
SS Du, W Hu, JD Lee
Advances in Neural Information Processing Systems 31, 2018
110 · 2018
Computationally efficient robust estimation of sparse functionals
SS Du, S Balakrishnan, A Singh
Conference on Learning Theory 2017, 2017
110* · 2017
Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?
SS Du, SM Kakade, R Wang, LF Yang
International Conference on Learning Representations 2020, 2019
109 · 2019
Understanding the acceleration phenomenon via high-resolution differential equations
B Shi, SS Du, MI Jordan, WJ Su
Mathematical Programming, 1-70, 2021
108 · 2021
Provably efficient RL with rich observations via latent state decoding
SS Du, A Krishnamurthy, N Jiang, A Agarwal, M Dudík, J Langford
International Conference on Machine Learning 2019, 2019
104 · 2019
Harnessing the power of infinitely wide deep nets on small-data tasks
S Arora, SS Du, Z Li, R Salakhutdinov, R Wang, D Yu
International Conference on Learning Representations 2020, 2019
97 · 2019
How neural networks extrapolate: From feedforward to graph neural networks
K Xu, M Zhang, J Li, SS Du, K Kawarabayashi, S Jegelka
arXiv preprint arXiv:2009.11848, 2020
94 · 2020
Few-shot learning via learning the representation, provably
SS Du, W Hu, SM Kakade, JD Lee, Q Lei
arXiv preprint arXiv:2002.09434, 2020
90 · 2020
Linear convergence of the primal-dual gradient method for convex-concave saddle point problems without strong convexity
SS Du, W Hu
International Conference on Artificial Intelligence and Statistics 2019, 2018
89 · 2018
Articles 1–20