Xinyi Wu
Google DeepMind
Verified email at google.com
Title
Cited by
Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 1036 · 2023
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 878 · 2022
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
Cited by 195 · 2024
Nl-augmenter: A framework for task-sensitive natural language augmentation
KD Dhole, V Gangal, S Gehrmann, A Gupta, Z Li, S Mahamood, ...
arXiv preprint arXiv:2112.02721, 2021
Cited by 67 · 2021
What makes a good counselor? learning to distinguish between high-quality and low-quality counseling conversations
V Pérez-Rosas, X Wu, K Resnicow, R Mihalcea
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Cited by 61 · 2019
Explaining relationships between scientific documents
K Luu, X Wu, R Koncel-Kedziorski, K Lo, I Cachola, NA Smith
arXiv preprint arXiv:2002.00317, 2020
Cited by 60* · 2020
Linguistically-informed transformations (LIT): A method for automatically generating contrast sets
C Li, L Shengshuo, LZ Liu, X Wu, X Zhou, S Steinert-Threlkeld
arXiv preprint arXiv:2010.08580, 2020
Cited by 32 · 2020
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
SU Toshniwal, S Debnath, S Shakeri, S Thormeyer, S Melzi, S Reddy, ...
ArXiv, abs/2206.04615, 2022
Cited by 14 · 2022