Yi Luan
Google DeepMind
Verified email at google.com
Title · Cited by · Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 2209 · 2023
Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction
Y Luan, L He, M Ostendorf, H Hajishirzi
Proc. Conf. Empirical Methods in Natural Language Processing (EMNLP), 2018
Cited by 810 · 2018
Entity, relation, and event extraction with contextualized span representations
D Wadden, U Wennberg, Y Luan, H Hajishirzi
Proc. Conf. Empirical Methods in Natural Language Processing (EMNLP), 2019
Cited by 715 · 2019
Sparse, Dense, and Attentional Representations for Text Retrieval
Y Luan, J Eisenstein, K Toutanova, M Collins
Transactions of the Association for Computational Linguistics 9, 329-345, 2021
Cited by 405 · 2021
A general framework for information extraction using dynamic span graphs
Y Luan, D Wadden, L He, A Shah, M Ostendorf, H Hajishirzi
Proc. Conf. North American Assoc. for Computational Linguistics (NAACL), 2019
Cited by 396 · 2019
Text generation from knowledge graphs with graph transformers
R Koncel-Kedziorski, D Bekal, Y Luan, M Lapata, H Hajishirzi
Proc. Conf. North American Assoc. for Computational Linguistics (NAACL), 2019
Cited by 383 · 2019
Large Dual Encoders Are Generalizable Retrievers
J Ni, C Qu, J Lu, Z Dai, GH Ábrego, J Ma, VY Zhao, Y Luan, KB Hall, ...
arXiv preprint arXiv:2112.07899, 2021
Cited by 351 · 2021
Promptagator: Few-shot Dense Retrieval From 8 Examples
Z Dai, VY Zhao, J Ma, Y Luan, J Ni, J Lu, A Bakalov, K Guu, KB Hall, ...
arXiv preprint arXiv:2209.11755, 2022
Cited by 187 · 2022
Instruction-following evaluation for large language models
J Zhou, T Lu, S Mishra, S Brahma, S Basu, Y Luan, D Zhou, L Hou
arXiv preprint arXiv:2311.07911, 2023
Cited by 136 · 2023
ASQA: Factoid Questions Meet Long-Form Answers
I Stelmakh, Y Luan, B Dhingra, MW Chang
arXiv preprint arXiv:2204.06092, 2022
Cited by 120 · 2022
Scientific information extraction with semi-supervised neural tagging
Y Luan, M Ostendorf, H Hajishirzi
Proc. Conf. Empirical Methods in Natural Language Processing (EMNLP), 2017
Cited by 110 · 2017
Multi-task learning for speaker-role adaptation in neural conversation models
Y Luan, C Brockett, B Dolan, J Gao, M Galley
Proc. Joint Conference on Natural Language Processing (IJCNLP), 2017
Cited by 95 · 2017
LSTM based conversation models
Y Luan, Y Ji, M Ostendorf
Proc. Int. Workshop on Conversational Natural Language Processing (ConvNLP …, 2016
Cited by 68 · 2016
CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning
Z Wu, Y Luan, H Rashkin, D Reitter, GS Tomar
arXiv preprint arXiv:2112.08558, 2021
Cited by 62 · 2021
PaperRobot: Incremental draft generation of scientific ideas
Q Wang, L Huang, Z Jiang, K Knight, H Ji, M Bansal, Y Luan
Proc. Annu. Meeting Assoc. for Computational Linguistics (ACL), 2019
Cited by 62 · 2019
Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions?
Y Chen, H Hu, Y Luan, H Sun, S Changpinyo, A Ritter, MW Chang
arXiv preprint arXiv:2302.11713, 2023
Cited by 59 · 2023
Method for using a multi-scale recurrent neural network with pretraining for spoken language understanding tasks
S Watanabe, Y Luan, B Harsham
US Patent 9,607,616, 2017
Cited by 52 · 2017
Gecko: Versatile Text Embeddings Distilled from Large Language Models
J Lee, Z Dai, X Ren, B Chen, D Cer, JR Cole, K Hui, M Boratko, ...
arXiv preprint arXiv:2403.20327, 2024
Cited by 49 · 2024
Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities
H Hu, Y Luan, Y Chen, U Khandelwal, M Joshi, K Lee, K Toutanova, ...
arXiv preprint arXiv:2302.11154, 2023
Cited by 44 · 2023
Contextualized Representations Using Textual Encyclopedic Knowledge
M Joshi, K Lee, Y Luan, K Toutanova
arXiv preprint arXiv:2004.12006, 2020
Cited by 30 · 2020
Articles 1–20