Miruna Clinciu
Ph.D. Student, Edinburgh Centre for Robotics
Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions
DM Howcroft, A Belz, MA Clinciu, D Gkatzia, SA Hasan, S Mahamood, ...
Proceedings of the 13th International Conference on Natural Language …, 2020
The gem benchmark: Natural language generation, its evaluation and metrics
S Gehrmann, T Adewumi, K Aggarwal, PS Ammanamanchi, ...
arXiv preprint arXiv:2102.01672, 2021
A survey of explainable AI terminology
MA Clinciu, H Hastie
Proceedings of the 1st Workshop on Interactive Natural Language Technology …, 2019
A study of automatic metrics for the evaluation of natural language explanations
M Clinciu, A Eshghi, H Hastie
arXiv preprint arXiv:2103.08545, 2021
Underreporting of errors in NLG output, and what to do about it
E Van Miltenburg, MA Clinciu, O Dušek, D Gkatzia, S Inglis, L Leppänen, ...
arXiv preprint arXiv:2108.01182, 2021
It’s commonsense, isn’t it? Demystifying human evaluations in commonsense-enhanced NLG systems
MA Clinciu, D Gkatzia, S Mahamood
Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), 1-12, 2021
Let's Evaluate Explanations!
MA Clinciu, H Hastie
HRI 2020 Workshop on Test Methods and Metrics, 2020
You reap what you sow: On the Challenges of Bias Evaluation Under Multilingual Settings
Z Talat, A Névéol, S Biderman, M Clinciu, M Dey, S Longpre, S Luccioni, ...
Challenges & …, 2022
Emergent Structures and Training Dynamics in Large Language Models
R Teehan, M Clinciu, O Serikov, E Szczechla, N Seelam, S Mirkin, ...
Challenges & …, 2022
I don't understand! Evaluation Methods for Natural Language Explanations
M Clinciu, A Eshghi, H Hastie
Twenty Years of Confusion in Human Evaluation
DM Howcroft, A Belz, M Clinciu, D Gkatzia, SA Hasan, S Mahamood, ...