Levine, Y., Dalmedigos, I., Ram, O., Zeldes, Y., Jannai, D., Muhlgay, D., Osin, Y., et al. Standing on the shoulders of giant frozen language models. arXiv preprint arXiv:2204.10019, 2022. Cited by 37.

Levine, Y., Wies, N., Jannai, D., Navon, D., Hoshen, Y., Shashua, A. The inductive bias of in-context learning: Rethinking pretraining example design. arXiv preprint arXiv:2110.04541, 2021. Cited by 27.

Wies, N., Levine, Y., Jannai, D., Shashua, A. Which transformer architecture fits my data? A vocabulary bottleneck in self-attention. International Conference on Machine Learning, pp. 11170–11181, 2021. Cited by 17.

Jannai, D., Meron, A., Lenz, B., Levine, Y., Shoham, Y. Human or not? A gamified approach to the Turing test. arXiv preprint arXiv:2305.20010, 2023. Cited by 15.

Levine, Y., Ram, O., Jannai, D., Lenz, B., Shalev-Shwartz, S., Shashua, A., et al. Huge frozen language models as readers for open-domain question answering. ICML 2022 Workshop on Knowledge Retrieval and Language Models, 2022. Cited by 5.