When choosing plausible alternatives, Clever Hans can be clever. P Kavumba, N Inoue, B Heinzerling, K Singh, P Reisert, K Inui. arXiv preprint arXiv:1911.00225, 2019.
Are prompt-based models clueless? P Kavumba, R Takahashi, Y Oda. Journal of Natural Language Processing 29 (3), 991-996, 2022.
Prompting for explanations improves Adversarial NLI. Is this true? {Yes} it is {true} because {it weakens superficial cues}. P Kavumba, A Brassard, B Heinzerling, K Inui. Findings of the Association for Computational Linguistics: EACL 2023, 2165-2180, 2023.
COPA-SSE: Semi-structured explanations for commonsense reasoning. A Brassard, B Heinzerling, P Kavumba, K Inui. arXiv preprint arXiv:2201.06777, 2022.
Learning to Learn to be Right for the Right Reasons. P Kavumba, B Heinzerling, A Brassard, K Inui. arXiv preprint arXiv:2104.11514, 2021.
Improving evidence detection by leveraging warrants. K Singh, P Reisert, N Inoue, P Kavumba, K Inui. Proceedings of the Second Workshop on Fact Extraction and VERification …, 2019.
Balanced COPA: Countering superficial cues in causal reasoning. P Kavumba, N Inoue, B Heinzerling, K Singh, P Reisert, K Inui. Association for Natural Language Processing, 1105-1108, 2020.
Exploring Supervised Learning of Hierarchical Event Embedding with Poincaré Embeddings. P Kavumba, N Inoue, K Inui. Proceedings of the 25th Annual Meeting of the Association for Natural …, 2019.
Analysing and Mitigating Superficial Cues in Commonsense Reasoning Benchmarks. P Kavumba. 東北大学電通談話会記録 (Tohoku University Electrical Communication Colloquium Records) 89 (2), 18-19, 2021.
Beyond Superficial Cues: Improving Pretrained Language Model Robustness in Natural Language Understanding Tasks. P Kavumba. Tohoku University.
None the wiser? Adding “None” Mitigates Superficial Cues in Multiple-Choice Benchmarks. P Kavumba, A Brassard, B Heinzerling, N Inoue, K Inui.