"The rise and potential of large language model based agents: A survey." Z. Xi, W. Chen, X. Guo, W. He, Y. Ding, B. Hong, M. Zhang, J. Wang, S. Jin, et al. arXiv preprint arXiv:2309.07864, 2023. (Cited by 280)
"TextFlint: Unified multilingual robustness evaluation toolkit for natural language processing." X. Wang, Q. Liu, T. Gui, Q. Zhang, Y. Zou, X. Zhou, J. Ye, Y. Zhang, R. Zheng, et al. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021. (Cited by 112*)
"Secrets of RLHF in large language models Part I: PPO." R. Zheng, S. Dou, S. Gao, Y. Hua, W. Shen, B. Wang, Y. Liu, S. Jin, Q. Liu, et al. arXiv preprint arXiv:2307.04964, 2023. (Cited by 53*)
"Robust sparse Bayesian learning for DOA estimation in impulsive noise environments." R. Zheng, X. Xu, Z. Ye, J. Dai. Signal Processing 171, 107500, 2020. (Cited by 29)
"Flooding-X: Improving BERT's resistance to adversarial attacks via loss-restricted fine-tuning." Q. Liu, R. Zheng, B. Rong, J. Liu, Z. Liu, Z. Cheng, L. Qiao, T. Gui, Q. Zhang, et al. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022. (Cited by 28)
"How robust is GPT-3.5 to predecessors? A comprehensive study on language understanding tasks." X. Chen, J. Ye, C. Zu, N. Xu, R. Zheng, M. Peng, J. Zhou, T. Gui, Q. Zhang, et al. arXiv preprint arXiv:2303.00293, 2023. (Cited by 25)
"Robust lottery tickets for pre-trained language models." R. Zheng, R. Bao, Y. Zhou, D. Liang, S. Wang, W. Wu, T. Gui, Q. Zhang, et al. arXiv preprint arXiv:2211.03013, 2022. (Cited by 19)
"Sparse Bayesian learning for off-grid DOA estimation with Gaussian mixture priors when both circular and non-circular sources coexist." R. Zheng, X. Xu, Z. Ye, T. H. Al Mahmud, J. Dai, K. Shabir. Signal Processing 161, 124-135, 2019. (Cited by 16)
"Self-Polish: Enhance reasoning in large language models via problem refinement." Z. Xi, S. Jin, Y. Zhou, R. Zheng, S. Gao, T. Gui, Q. Zhang, X. Huang. arXiv preprint arXiv:2305.14497, 2023. (Cited by 15)
"Secrets of RLHF in large language models Part II: Reward modeling." B. Wang, R. Zheng, L. Chen, Y. Liu, S. Dou, C. Huang, W. Shen, S. Jin, E. Zhou, et al. arXiv preprint arXiv:2401.06080, 2024. (Cited by 14*)
"Interpolating coprime arrays with translocated and axis rotated compressed subarrays by iterative power factorization for DOA estimation." T. H. Al Mahmud, K. Shabir, R. Zheng, Z. Ye. IEEE Access 6, 16445-16453, 2018. (Cited by 14)
"Orthogonal subspace learning for language model continual learning." X. Wang, T. Chen, Q. Ge, H. Xia, R. Bao, R. Zheng, Q. Zhang, T. Gui, X. Huang. arXiv preprint arXiv:2310.14152, 2023. (Cited by 13)
"InstructUIE: Multi-task instruction tuning for unified information extraction." X. Wang, W. Zhou, C. Zu, H. Xia, T. Chen, Y. Zhang, R. Zheng, J. Ye, Q. Zhang, et al. arXiv preprint arXiv:2304.08085, 2023. (Cited by 12)
"Off-grid DOA estimation aiding virtual extension of coprime arrays exploiting fourth order difference co-array with interpolation." T. H. Al Mahmud, Z. Ye, K. Shabir, R. Zheng, M. S. Islam. IEEE Access 6, 46097-46109, 2018. (Cited by 12)
"Loose lips sink ships: Mitigating length bias in reinforcement learning from human feedback." W. Shen, R. Zheng, W. Zhan, J. Zhao, S. Dou, T. Gui, Q. Zhang, X. Huang. arXiv preprint arXiv:2310.05199, 2023. (Cited by 9)
"Efficient adversarial training with robust early-bird tickets." Z. Xi, R. Zheng, T. Gui, Q. Zhang, X. Huang. arXiv preprint arXiv:2211.07263, 2022. (Cited by 9)
"LoRAMoE: Revolutionizing mixture of experts for maintaining world knowledge in language model alignment." S. Dou, E. Zhou, Y. Liu, S. Gao, J. Zhao, W. Shen, Y. Zhou, Z. Xi, X. Wang, et al. arXiv preprint arXiv:2312.09979, 2023. (Cited by 8)
"Decorrelate irrelevant, purify relevant: Overcome textual spurious correlations from a feature perspective." S. Dou, R. Zheng, T. Wu, S. Gao, J. Shan, Q. Zhang, Y. Wu, X. Huang. arXiv preprint arXiv:2202.08048, 2022. (Cited by 6)
"Generalized super-resolution DOA estimation array configurations' design exploiting sparsity in coprime arrays." K. Shabir, T. H. Al Mahmud, R. Zheng, Z. Ye. Circuits, Systems, and Signal Processing 38, 4723-4738, 2019. (Cited by 4)
"Characterizing the impacts of instances on robustness." R. Zheng, Z. Xi, Q. Liu, W. Lai, T. Gui, Q. Zhang, X.-J. Huang, J. Ma, Y. Shan, et al. Findings of the Association for Computational Linguistics: ACL 2023, 2314-2332, 2023. (Cited by 3)