Ohad Shamir
Verified email at weizmann.ac.il - Homepage
Title · Cited by · Year
The power of depth for feedforward neural networks
R Eldan, O Shamir
Conference on learning theory, 907-940, 2016
933 · 2016
Learnability, stability and uniform convergence
S Shalev-Shwartz, O Shamir, N Srebro, K Sridharan
The Journal of Machine Learning Research 11, 2635-2670, 2010
839* · 2010
Optimal Distributed Online Prediction Using Mini-Batches.
O Dekel, R Gilad-Bachrach, O Shamir, L Xiao
Journal of Machine Learning Research 13 (1), 2012
764 · 2012
Making gradient descent optimal for strongly convex stochastic optimization
A Rakhlin, O Shamir, K Sridharan
arXiv preprint arXiv:1109.5647, 2011
763 · 2011
Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes
O Shamir, T Zhang
International conference on machine learning, 71-79, 2013
627 · 2013
Communication-efficient distributed optimization using an approximate newton-type method
O Shamir, N Srebro, T Zhang
International conference on machine learning, 1000-1008, 2014
611 · 2014
On the computational efficiency of training neural networks
R Livni, S Shalev-Shwartz, O Shamir
Advances in neural information processing systems 27, 2014
571 · 2014
Size-independent sample complexity of neural networks
N Golowich, A Rakhlin, O Shamir
Conference On Learning Theory, 297-299, 2018
540 · 2018
Better mini-batch algorithms via accelerated gradient methods
A Cotter, O Shamir, N Srebro, K Sridharan
Advances in neural information processing systems 24, 2011
378 · 2011
Adaptively learning the crowd kernel
O Tamuz, C Liu, S Belongie, O Shamir, AT Kalai
arXiv preprint arXiv:1105.1033, 2011
313 · 2011
Nonstochastic multi-armed bandits with graph-structured feedback
N Alon, N Cesa-Bianchi, C Gentile, S Mannor, Y Mansour, O Shamir
SIAM Journal on Computing 46 (6), 1785-1826, 2017
293* · 2017
Spurious local minima are common in two-layer ReLU neural networks
I Safran, O Shamir
International conference on machine learning, 4433-4441, 2018
281 · 2018
Proving the lottery ticket hypothesis: Pruning is all you need
E Malach, G Yehudai, S Shalev-Shwartz, O Shamir
International Conference on Machine Learning, 6682-6691, 2020
249 · 2020
An optimal algorithm for bandit and zero-order convex optimization with two-point feedback
O Shamir
The Journal of Machine Learning Research 18 (1), 1703-1713, 2017
243 · 2017
Is local SGD better than minibatch SGD?
B Woodworth, KK Patel, S Stich, Z Dai, B Bullins, B McMahan, O Shamir, ...
International Conference on Machine Learning, 10334-10343, 2020
240 · 2020
Learning and generalization with the information bottleneck
O Shamir, S Sabato, N Tishby
Theoretical Computer Science 411 (29-30), 2696-2711, 2010
234 · 2010
Depth-width tradeoffs in approximating natural functions with neural networks
I Safran, O Shamir
International conference on machine learning, 2979-2987, 2017
219* · 2017
Communication complexity of distributed convex learning and optimization
Y Arjevani, O Shamir
Advances in neural information processing systems 28, 2015
212 · 2015
Failures of gradient-based deep learning
S Shalev-Shwartz, O Shamir, S Shammah
International Conference on Machine Learning, 3067-3075, 2017
209 · 2017
On the complexity of bandit and derivative-free stochastic convex optimization
O Shamir
Conference on Learning Theory, 3-24, 2013
209 · 2013
Articles 1–20