Joel Jang
Verified email at cs.washington.edu - Homepage
Title | Cited by | Year
Towards Continual Knowledge Learning of Language Models
J Jang, S Ye, S Yang, J Shin, J Han, G Kim, SJ Choi, M Seo
ICLR 2022, 2022
83 | 2022
TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
J Jang*, S Ye*, C Lee, S Yang, J Shin, J Han, G Kim, M Seo
EMNLP 2022, 2022
48 | 2022
Knowledge Unlearning for Mitigating Privacy Risks in Language Models
J Jang, D Yoon, S Yang, S Cha, M Lee, L Logeswaran, M Seo
ACL 2023, 2023
43 | 2023
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
J Jang*, S Ye*, M Seo
NeurIPS 2022 Workshop on Transfer Learning for NLP (TL4NLP), 2022
43 | 2022
Exploring the benefits of training expert language models over instruction tuning
J Jang, S Kim, S Ye, D Kim, L Logeswaran, M Lee, K Lee, M Seo
ICML 2023, 2023
34 | 2023
Camels in a changing climate: Enhancing lm adaptation with tulu 2
H Ivison, Y Wang, V Pyatkin, N Lambert, M Peters, P Dasigi, J Jang, ...
arXiv preprint arXiv:2311.10702, 2023
33 | 2023
Prometheus: Inducing fine-grained evaluation capability in language models
S Kim, J Shin, Y Cho, J Jang, S Longpre, H Lee, S Yun, S Shin, S Kim, ...
ICLR 2024, 2024
28* | 2024
Guess the Instruction! Making Language Models Stronger Zero-Shot Learners
S Ye, D Kim, J Jang, J Shin, M Seo
ICLR 2023, 2023
27* | 2023
Sequential targeting: a continual learning approach for data imbalance in text classification
J Jang, Y Kim, K Choi, S Suh
Expert Systems with Applications 179, 115067, 2021
23* | 2021
Supervised health stage prediction using convolutional neural networks for bearing wear
S Suh, J Jang, S Won, MS Jha, YO Lee
Sensors 20 (20), 5846, 2020
22 | 2020
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo
EMNLP 2023, 2023
17 | 2023
Personalized soups: Personalized large language model alignment via post-hoc parameter merging
J Jang, S Kim, BY Lin, Y Wang, J Hessel, L Zettlemoyer, H Hajishirzi, ...
arXiv preprint arXiv:2310.11564, 2023
15 | 2023
Fixed Input Parameterization for Efficient Prompting
E Choi, Y Jo, J Jang, J Jang, M Seo
ACL 2023 Findings, 2023
14* | 2023
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
S Ye, J Jang, D Kim, Y Jo, M Seo
Findings of the Association for Computational Linguistics: EMNLP 2023, 12288 …, 2023
10* | 2023
Continually updating generative retrieval on dynamic corpora
S Yoon, C Kim, H Lee, J Jang, M Seo
arXiv preprint arXiv:2305.18952, 2023
3 | 2023
Music2Video: Automatic Generation of Music Video with fusion of audio and text
Y Kim*, J Jang*, S Shin*
arXiv preprint arXiv:2201.03809, 2022
3 | 2022
Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis
S Yang, J Kim, J Jang, S Ye, H Lee, M Seo
TACL 2024, 2024
2 | 2024
LangBridge: Multilingual Reasoning Without Multilingual Supervision
D Yoon, J Jang, S Kim, S Kim, S Shafayat, M Seo
arXiv preprint arXiv:2401.10695, 2024
1 | 2024
How Well Do Large Language Models Truly Ground?
H Lee, S Joo, C Kim, J Jang, D Kim, KW On, M Seo
NAACL 2024, 2024
1 | 2024
Semiparametric Token-Sequence Co-Supervision
H Lee, D Kim, J Jun, S Joo, J Jang, KW On, M Seo
arXiv preprint arXiv:2403.09024, 2024
| 2024
Articles 1–20