Big Bird: Transformers for longer sequences M Zaheer, G Guruganesh, KA Dubey, J Ainslie, C Alberti, S Ontanon, ... Advances in neural information processing systems 33, 17283-17297, 2020 | 1498 | 2020 |
FNet: Mixing tokens with Fourier transforms J Lee-Thorp, J Ainslie, I Eckstein, S Ontanon arXiv preprint arXiv:2105.03824, 2021 | 303 | 2021 |
ETC: Encoding long and structured inputs in transformers J Ainslie, S Ontanon, C Alberti, V Cvicek, Z Fisher, P Pham, A Ravula, ... arXiv preprint arXiv:2004.08483, 2020 | 274 | 2020 |
LongT5: Efficient text-to-text transformer for long sequences M Guo, J Ainslie, D Uthus, S Ontanon, J Ni, YH Sung, Y Yang arXiv preprint arXiv:2112.07916, 2021 | 130 | 2021 |
RealFormer: Transformer likes residual attention R He, A Ravula, B Kanagal, J Ainslie arXiv preprint arXiv:2012.11747, 2020 | 72 | 2020 |
FormNet: Structural encoding beyond sequential modeling in form document information extraction CY Lee, CL Li, T Dozat, V Perot, G Su, N Hua, J Ainslie, R Wang, Y Fujii, ... arXiv preprint arXiv:2203.08411, 2022 | 49 | 2022 |
Making transformers solve compositional tasks S Ontanón, J Ainslie, V Cvicek, Z Fisher arXiv preprint arXiv:2108.04378, 2021 | 49 | 2021 |
Encoding long and structured data in transformers J Ainslie, S Ontanon, C Alberti, P Pham, A Ravula, S Sanghai arXiv preprint arXiv:2004.08483, 2020 | 32 | 2020 |
GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints J Ainslie, J Lee-Thorp, M de Jong, Y Zemlyanskiy, F Lebrón, S Sanghai arXiv preprint arXiv:2305.13245, 2023 | 18 | 2023 |
CoLT5: Faster long-range transformers with conditional computation J Ainslie, T Lei, M de Jong, S Ontañón, S Brahma, Y Zemlyanskiy, ... arXiv preprint arXiv:2303.09752, 2023 | 18 | 2023 |
FNet: Mixing tokens with Fourier transforms J Lee-Thorp, J Ainslie, I Eckstein, S Ontanon arXiv preprint arXiv:2105.03824, 2021 | 16 | 2021 |
Improving compositional generalization in classification tasks via structure annotations J Kim, P Ravikumar, J Ainslie, S Ontañón arXiv preprint arXiv:2106.10434, 2021 | 14 | 2021 |
FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference M de Jong, Y Zemlyanskiy, J Ainslie, N FitzGerald, S Sanghai, F Sha, ... arXiv preprint arXiv:2212.08153, 2022 | 12 | 2022 |
ReadTwice: Reading very large documents with memories Y Zemlyanskiy, J Ainslie, M de Jong, P Pham, I Eckstein, F Sha arXiv preprint arXiv:2105.04241, 2021 | 11 | 2021 |
Sparse upcycling: Training mixture-of-experts from dense checkpoints A Komatsuzaki, J Puigcerver, J Lee-Thorp, CR Ruiz, B Mustafa, J Ainslie, ... arXiv preprint arXiv:2212.05055, 2022 | 9 | 2022 |
ETC: Encoding long and structured inputs in transformers A Ravula, C Alberti, J Ainslie, L Yang, PM Pham, Q Wang, S Ontanon, ... | 9 | 2020 |
Big Bird: Transformers for longer sequences M Zaheer, G Guruganesh, A Dubey, J Ainslie, C Alberti, S Ontanón, ... arXiv preprint arXiv:2007.14062, 2020 | 9 | 2020 |
Generate-and-retrieve: Use your predictions to improve retrieval for semantic parsing Y Zemlyanskiy, M de Jong, J Ainslie, P Pasupat, P Shaw, L Qiu, ... arXiv preprint arXiv:2209.14899, 2022 | 6 | 2022 |
Sparse Mixers: Combining MoE and Mixing to build a more efficient BERT J Lee-Thorp, J Ainslie arXiv preprint arXiv:2205.12399, 2022 | 5 | 2022 |
Iterative decoding for compositional generalization in transformers L Ruiz, J Ainslie, S Ontañón arXiv preprint arXiv:2110.04169, 2021 | 5 | 2021 |