Deep neural networks for estimation and inference. MH Farrell, T Liang, S Misra. Econometrica 89(1), 181–213, 2021. [Cited by 451]
Just interpolate: Kernel “ridgeless” regression can generalize. T Liang, A Rakhlin. Annals of Statistics 48(3), 1329–1347, 2020. [Cited by 390]
Fisher–Rao metric, geometry, and complexity of neural networks. T Liang, T Poggio, A Rakhlin, J Stokes. The 22nd International Conference on Artificial Intelligence and Statistics …, 2019. [Cited by 256]
Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks. T Liang, J Stokes. The 22nd International Conference on Artificial Intelligence and Statistics …, 2019. [Cited by 228]
Deep neural networks for estimation and inference: Application to causal effects and other semiparametric estimands. MH Farrell, T Liang, S Misra. arXiv preprint arXiv:1809.09953, 2018. [Cited by 221]
On the multiple descent of minimum-norm interpolants and restricted lower isometry of kernels. T Liang, A Rakhlin, X Zhai. Conference on Learning Theory 125, 2683–2711, 2020. [Cited by 157*]
How well generative adversarial networks learn distributions. T Liang. Journal of Machine Learning Research 22(228), 1–41, 2021. [Cited by 129*]
Escaping the local minima via simulated annealing: Optimization of approximately convex functions. A Belloni, T Liang, H Narayanan, A Rakhlin. Conference on Learning Theory 40, 240–265, 2015. [Cited by 93]
Law of log determinant of sample covariance matrix and optimal estimation of differential entropy for high-dimensional Gaussian distributions. TT Cai, T Liang, HH Zhou. Journal of Multivariate Analysis 137, 161–172, 2015. [Cited by 93]
A precise high-dimensional asymptotic theory for boosting and minimum-ℓ1-norm interpolated classifiers. T Liang, P Sur. Annals of Statistics 50(3), 1669–1695, 2022. [Cited by 92]
Learning with square loss: Localization through offset Rademacher complexity. T Liang, A Rakhlin, K Sridharan. Conference on Learning Theory 40, 1260–1285, 2015. [Cited by 86]
Computational and statistical boundaries for submatrix localization in a large noisy matrix. TT Cai, T Liang, A Rakhlin. Annals of Statistics 45(4), 1403–1430, 2017. [Cited by 80]
Textual factors: A scalable, interpretable, and data-driven approach to analyzing unstructured information. LW Cong, T Liang, X Zhang. SSRN working paper, https://ssrn.com/abstract=3307057, 2019. [Cited by 58]
Training neural networks as learning data-adaptive kernels: Provable representation and approximation benefits. X Dou, T Liang. Journal of the American Statistical Association 116(535), 1507–1520, 2021. [Cited by 53]
Deep learning for individual heterogeneity: An automatic inference framework. MH Farrell, T Liang, S Misra. arXiv preprint arXiv:2010.14694, 2020. [Cited by 50]
On how well generative adversarial networks learn densities: Nonparametric and parametric results. T Liang. arXiv preprint, 2018. [Cited by 45]
Weighted message passing and minimum energy flow for heterogeneous stochastic block models with side information. TT Cai, T Liang, A Rakhlin. Journal of Machine Learning Research 21(11), 1–34, 2020. [Cited by 37*]
Interpolating classifiers make few mistakes. T Liang, B Recht. Journal of Machine Learning Research 24(20), 1–27, 2023. [Cited by 35]
Local optimality and generalization guarantees for the Langevin algorithm via empirical metastability. B Tzen, T Liang, M Raginsky. Conference on Learning Theory 75, 857–875, 2018. [Cited by 34]
Geometric inference for general high-dimensional linear inverse problems. TT Cai, T Liang, A Rakhlin. Annals of Statistics 44(4), 1536–1563, 2016. [Cited by 31]