Mert Gürbüzbalaban
Rutgers University
Verified email at mit.edu
Title · Cited by · Year
On the convergence rate of incremental aggregated gradient algorithms
M Gurbuzbalaban, A Ozdaglar, PA Parrilo
SIAM Journal on Optimization 27 (2), 1035-1048, 2017
Cited by 84 · 2017
Why random reshuffling beats stochastic gradient descent
M Gürbüzbalaban, A Ozdaglar, PA Parrilo
Mathematical Programming, 1-36, 2015
Cited by 69 · 2015
Fast Approximation of the H∞ Norm via Optimization over Spectral Value Sets
N Guglielmi, M Gurbuzbalaban, ML Overton
SIAM Journal on Matrix Analysis and Applications 34 (2), 709-737, 2013
Cited by 45 · 2013
A globally convergent incremental Newton method
M Gürbüzbalaban, A Ozdaglar, P Parrilo
Mathematical Programming 151 (1), 283-313, 2015
Cited by 35 · 2015
On Nesterov’s nonsmooth Chebyshev–Rosenbrock functions
M Gürbüzbalaban, ML Overton
Nonlinear Analysis: Theory, Methods & Applications 75 (3), 1282-1289, 2012
Cited by 27 · 2012
Global convergence rate of proximal incremental aggregated gradient methods
ND Vanli, M Gurbuzbalaban, A Ozdaglar
SIAM Journal on Optimization 28 (2), 1282-1300, 2018
Cited by 21 · 2018
Explicit solutions for root optimization of a polynomial family with one affine constraint
VD Blondel, M Gurbuzbalaban, A Megretski, ML Overton
IEEE transactions on automatic control 57 (12), 3078-3089, 2012
Cited by 17 · 2012
A universally optimal multistage accelerated stochastic gradient method
NS Aybat, A Fallah, M Gurbuzbalaban, A Ozdaglar
Advances in Neural Information Processing Systems, 8523-8534, 2019
Cited by 15 · 2019
Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Stochastic Optimization: Non-Asymptotic Performance Bounds and Momentum-Based Acceleration
X Gao, M Gurbuzbalaban, L Zhu
arXiv preprint arXiv:1809.04618, 2018
Cited by 15 · 2018
Surpassing gradient descent provably: A cyclic incremental method with linear convergence rate
A Mokhtari, M Gurbuzbalaban, A Ribeiro
SIAM Journal on Optimization 28 (2), 1420-1447, 2018
Cited by 15 · 2018
Convergence rate of incremental gradient and newton methods
M Gürbüzbalaban, A Ozdaglar, P Parrilo
arXiv preprint arXiv:1510.08562, 2015
Cited by 12 · 2015
A tail-index analysis of stochastic gradient noise in deep neural networks
U Simsekli, L Sagun, M Gurbuzbalaban
arXiv preprint arXiv:1901.06053, 2019
Cited by 11 · 2019
When cyclic coordinate descent outperforms randomized coordinate descent
M Gurbuzbalaban, A Ozdaglar, PA Parrilo, ND Vanli
Advances in Neural Information Processing Systems, 6999-7007, 2017
Cited by 10 · 2017
Robust accelerated gradient methods for smooth strongly convex functions
NS Aybat, A Fallah, M Gurbuzbalaban, A Ozdaglar
arXiv preprint arXiv:1805.10579, 2018
Cited by 9 · 2018
A stronger convergence result on the proximal incremental aggregated gradient method
ND Vanli, M Gurbuzbalaban, A Ozdaglar
arXiv preprint arXiv:1611.08022, 2016
Cited by 8 · 2016
Some regularity results for the pseudospectral abscissa and pseudospectral radius of a matrix
M Gürbüzbalaban, ML Overton
SIAM Journal on Optimization 22 (2), 281-285, 2012
Cited by 8 · 2012
Accelerated linear convergence of stochastic momentum methods in Wasserstein distances
B Can, M Gurbuzbalaban, L Zhu
arXiv preprint arXiv:1901.07445, 2019
Cited by 5 · 2019
Breaking reversibility accelerates Langevin dynamics for global non-convex optimization
X Gao, M Gurbuzbalaban, L Zhu
arXiv preprint arXiv:1812.07725, 2018
Cited by 5 · 2018
Avoiding Communication in Proximal Methods for Convex Optimization Problems
S Soori, A Devarakonda, J Demmel, M Gurbuzbalaban, MM Dehnavi
arXiv preprint arXiv:1710.08883, 2017
Cited by 5 · 2017
Decentralized Computation of Effective Resistances and Acceleration of Consensus Algorithms
NS Aybat, M Gurbuzbalaban
arXiv preprint arXiv:1708.07190, 2017
Cited by 5 · 2017