Heejun Lee
Title | Cited by | Year
Sparse Token Transformer with Attention Back Tracking (H Lee, M Kang, Y Lee, SJ Hwang; The Eleventh International Conference on Learning Representations) | 6 | 2022
SEA: Sparse Linear Attention with Estimated Attention Mask (H Lee, J Kim, J Willette, SJ Hwang; arXiv preprint arXiv:2310.01777) | | 2023