Tri Dao
Stanford University, Princeton University
Verified email at stanford.edu - Homepage
Title
Cited by
Year
FlashAttention: Fast and memory-efficient exact attention with IO-awareness
T Dao, D Fu, S Ermon, A Rudra, C Ré
Advances in Neural Information Processing Systems 35, 16344-16359, 2022
Cited by 189 · 2022
A kernel theory of modern data augmentation
T Dao, A Gu, A Ratner, V Smith, C De Sa, C Ré
Proceedings of the 36th International Conference on Machine Learning, ICML, 9-15, 2019
Cited by 168 · 2019
HiPPO: Recurrent memory with optimal polynomial projections
A Gu, T Dao, S Ermon, A Rudra, C Ré
Advances in neural information processing systems 33, 1474-1487, 2020
Cited by 99 · 2020
Learning fast algorithms for linear transforms using butterfly factorizations
T Dao, A Gu, M Eichhorn, A Rudra, C Ré
International conference on machine learning, 1517-1527, 2019
Cited by 80 · 2019
Combining recurrent, convolutional, and continuous-time models with linear state space layers
A Gu, I Johnson, K Goel, K Saab, T Dao, A Rudra, C Ré
Advances in neural information processing systems 34, 572-585, 2021
Cited by 55 · 2021
StarCoder: may the source be with you!
R Li, LB Allal, Y Zi, N Muennighoff, D Kocetkov, C Mou, M Marone, C Akiki, ...
arXiv preprint arXiv:2305.06161, 2023
Cited by 52 · 2023
MONGOOSE: A learnable LSH framework for efficient neural network training
B Chen, Z Liu, B Peng, Z Xu, JL Li, T Dao, Z Song, A Shrivastava, C Ré
International Conference on Learning Representations, 2020
Cited by 50 · 2020
Gaussian quadrature for kernel features
T Dao, CM De Sa, C Ré
Advances in neural information processing systems 30, 2017
Cited by 50 · 2017
Learning compressed transforms with low displacement rank
A Thomas, A Gu, T Dao, A Rudra, C Ré
Advances in neural information processing systems 31, 2018
Cited by 45 · 2018
Scatterbrain: Unifying sparse and low-rank attention
B Chen, T Dao, E Winsor, Z Song, A Rudra, C Ré
Advances in Neural Information Processing Systems 34, 17413-17426, 2021
Cited by 43* · 2021
Kaleidoscope: An efficient, learnable representation for all structured linear maps
T Dao, NS Sohoni, A Gu, M Eichhorn, A Blonder, M Leszczynski, A Rudra, ...
International Conference on Learning Representations, 2020
Cited by 38 · 2020
Low-precision random Fourier features for memory-constrained kernel approximation
J Zhang, A May, T Dao, C Ré
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
Cited by 38 · 2019
Hungry Hungry Hippos: Towards Language Modeling with State Space Models
DY Fu, T Dao, KK Saab, AW Thomas, A Rudra, C Ré
The Eleventh International Conference on Learning Representations, 2023
Cited by 35* · 2023
Monarch: Expressive structured matrices for efficient and accurate training
T Dao, B Chen, NS Sohoni, A Desai, M Poli, J Grogan, A Liu, A Rao, ...
International Conference on Machine Learning, 4690-4721, 2022
Cited by 30 · 2022
Pixelated butterfly: Simple and efficient sparse training for neural network models
T Dao, B Chen, K Liang, J Yang, Z Song, A Rudra, C Ré
International Conference on Learning Representations, 2021
Cited by 30 · 2021
Hyena Hierarchy: Towards Larger Convolutional Language Models
M Poli, S Massaroli, E Nguyen, DY Fu, T Dao, S Baccus, Y Bengio, ...
International Conference on Machine Learning, 2023
Cited by 27 · 2023
On the downstream performance of compressed word embeddings
A May, J Zhang, T Dao, C Ré
Advances in neural information processing systems 32, 2019
Cited by 24 · 2019
Knowledge distillation as semiparametric inference
T Dao, GM Kamath, V Syrgkanis, L Mackey
International Conference on Learning Representations, 2021
Cited by 20 · 2021
Decentralized training of foundation models in heterogeneous environments
B Yuan, Y He, J Davis, T Zhang, T Dao, B Chen, PS Liang, C Ré, C Zhang
Advances in Neural Information Processing Systems 35, 25464-25477, 2022
Cited by 18 · 2022
Rethinking neural operations for diverse tasks
N Roberts, M Khodak, T Dao, L Li, C Ré, A Talwalkar
Advances in Neural Information Processing Systems 34, 15855-15869, 2021
Cited by 18* · 2021
Articles 1–20