Xiangru Lian
Kuaishou Technology Co., Ltd, University of Rochester
Verified email at mail.xrlian.com
Title / Cited by / Year
Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent
X Lian, C Zhang, H Zhang, CJ Hsieh, W Zhang, J Liu
arXiv preprint arXiv:1705.09056, 2017
347 · 2017
Asynchronous parallel stochastic gradient for nonconvex optimization
X Lian, Y Huang, Y Li, J Liu
Advances in Neural Information Processing Systems, 2737-2745, 2015
315 · 2015
Staleness-aware Async-SGD for Distributed Deep Learning
W Zhang, S Gupta, X Lian, J Liu
International Joint Conference on Artificial Intelligence, 2016
174 · 2016
Asynchronous decentralized parallel stochastic gradient descent
X Lian, W Zhang, C Zhang, J Liu
International Conference on Machine Learning, 3043-3052, 2018
152 · 2018
D²: Decentralized Training over Decentralized Data
H Tang, X Lian, M Yan, C Zhang, J Liu
International Conference on Machine Learning, 4848-4856, 2018
109 · 2018
A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order
X Lian, H Zhang, CJ Hsieh, Y Huang, J Liu
Advances in Neural Information Processing Systems, 2016
62 · 2016
Doublesqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression
H Tang, C Yu, X Lian, T Zhang, J Liu
International Conference on Machine Learning, 6155-6165, 2019
53 · 2019
Finite-sum Composition Optimization via Variance Reduced Gradient Descent
X Lian, M Wang, J Liu
Artificial Intelligence and Statistics, 2017
51 · 2017
Asynchronous Parallel Greedy Coordinate Descent
Y You*, X Lian* (equal contribution), J Liu, HF Yu, I Dhillon, J Demmel, ...
Advances in Neural Information Processing Systems, 2016
40 · 2016
Revisit batch normalization: New understanding and refinement via composition optimization
X Lian, J Liu
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
12 · 2019
NMR evidence for field-induced ferromagnetism in (Li 0.8 Fe 0.2) OHFeSe superconductor
YP Wu, D Zhao, XR Lian, XF Lu, NZ Wang, XG Luo, XH Chen, T Wu
Physical Review B 91 (12), 125107, 2015
11 · 2015
Efficient smooth non-convex stochastic compositional optimization via stochastic recursive gradient descent
H Yuan, X Lian, CJ Li, J Liu
5 · 2019
Staleness-aware Async-SGD for Distributed Deep Learning. CoRR abs/1511.05950 (2015)
W Zhang, S Gupta, X Lian, J Liu
arXiv preprint arXiv:1511.05950, 2015
5 · 2015
Stochastic recursive momentum for policy gradient methods
H Yuan, X Lian, J Liu, Y Zhou
arXiv preprint arXiv:2003.04302, 2020
4 · 2020
Revisit Batch Normalization: New Understanding from an Optimization View and a Refinement via Composition Optimization
X Lian, J Liu
arXiv preprint arXiv:1810.06177, 2018
2 · 2018
Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent
X Lian, C Zhang, H Zhang, CJ Hsieh, W Zhang, J Liu
Advances in Neural Information Processing Systems 30 8, 5331-5341, 2018
1 · 2018
1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed
H Tang, S Gan, AA Awan, S Rajbhandari, C Li, X Lian, J Liu, C Zhang, ...
arXiv preprint arXiv:2102.02888, 2021
— · 2021
APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm
H Tang, S Gan, S Rajbhandari, X Lian, C Zhang, J Liu, Y He
arXiv preprint arXiv:2008.11343, 2020
— · 2020
Stochastic Recursive Variance Reduction for Efficient Smooth Non-Convex Compositional Optimization
H Yuan, X Lian, J Liu
arXiv preprint arXiv:1912.13515, 2019
— · 2019
DeepSqueeze: Decentralization Meets Error-Compensated Compression
H Tang, X Lian, S Qiu, L Yuan, C Zhang, T Zhang, J Liu
arXiv preprint arXiv:1907.07346, 2019
— · 2019