Lifu Tu
Salesforce AI Research
Verified email at ttic.edu
Title · Cited by · Year
CodeGen: An open large language model for code with multi-turn program synthesis
E Nijkamp, B Pang, H Hayashi, L Tu, H Wang, Y Zhou, S Savarese, ...
arXiv preprint arXiv:2203.13474, 2022
918 · 2022
Commonsense knowledge base completion
X Li, A Taheri, L Tu, K Gimpel
Proceedings of the 54th Annual Meeting of the Association for Computational …, 2016
204 · 2016
An empirical study on robustness to spurious correlations using pre-trained language models
L Tu, G Lalwani, S Gella, H He
Transactions of the Association for Computational Linguistics 8, 621-633, 2020
185 · 2020
Pay attention to the ending: Strong neural baselines for the ROC story cloze task
Z Cai, L Tu, K Gimpel
Proceedings of the 55th Annual Meeting of the Association for Computational …, 2017
73 · 2017
Learning approximate inference networks for structured prediction
L Tu, K Gimpel
arXiv preprint arXiv:1803.03376, 2018
67 · 2018
ENGINE: Energy-based inference networks for non-autoregressive machine translation
L Tu, RY Pang, S Wiseman, K Gimpel
arXiv preprint arXiv:2005.00850, 2020
57 · 2020
XGen-7B technical report
E Nijkamp, T Xie, H Hayashi, B Pang, C Xia, C Xing, J Vig, S Yavuz, ...
arXiv preprint arXiv:2309.03450, 2023
25 · 2023
Prompt-Tuning Can Be Much Better Than Fine-Tuning on Cross-lingual Understanding With Multilingual Language Models
L Tu, C Xiong, Y Zhou
arXiv preprint arXiv:2210.12360, 2022
22 · 2022
Generating diverse story continuations with controllable semantics
L Tu, X Ding, D Yu, K Gimpel
arXiv preprint arXiv:1909.13434, 2019
20 · 2019
Learning to embed words in context for syntactic tasks
L Tu, K Gimpel, K Livescu
arXiv preprint arXiv:1706.02807, 2017
20 · 2017
Benchmarking approximate inference methods for neural structured prediction
L Tu, K Gimpel
arXiv preprint arXiv:1904.01138, 2019
19 · 2019
Long sequence modeling with XGen: A 7B LLM trained on 8K input sequence length
E Nijkamp, T Xie, H Hayashi, B Pang, C Xia, C Xing, J Vig, S Yavuz, ...
Salesforce AI Research Blog, 2023
17 · 2023
Quality signals in generated stories
M Sagarkar, J Wieting, L Tu, K Gimpel
Proceedings of the Seventh Joint Conference on Lexical and Computational …, 2018
16 · 2018
Improving joint training of inference networks and structured prediction energy networks
L Tu, RY Pang, K Gimpel
arXiv preprint arXiv:1911.02891, 2019
15 · 2019
Efficiently Aligned Cross-Lingual Transfer Learning for Conversational Tasks using Prompt-Tuning
L Tu, J Qu, S Yavuz, S Joty, W Liu, C Xiong, Y Zhou
arXiv preprint arXiv:2304.01295, 2023
5 · 2023
An Exploration of Arbitrary-Order Sequence Labeling via Energy-Based Inference Networks
L Tu, T Liu, K Gimpel
arXiv preprint arXiv:2010.02789, 2020
4 · 2020
AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation
R Meng, Y Liu, S Yavuz, D Agarwal, L Tu, N Yu, J Zhang, M Bhat, Y Zhou
4*
xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations
C Qin, C Xia, K Ramakrishnan, M Ryoo, L Tu, Y Feng, M Shu, H Zhou, ...
arXiv preprint arXiv:2408.12590, 2024
1 · 2024
Unlocking Anticipatory Text Generation: A Constrained Approach for Faithful Decoding with Large Language Models
L Tu, S Yavuz, J Qu, J Xu, R Meng, C Xiong, Y Zhou
arXiv preprint arXiv:2312.06149, 2023
1 · 2023
Investigating Factuality in Long-Form Text Generation: The Roles of Self-Known and Self-Unknown
L Tu, R Meng, S Joty, Y Zhou, S Yavuz
arXiv preprint arXiv:2411.15993, 2024
· 2024
Articles 1–20