Jaap Jumelet
Cited by
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
TMLR, BigBench, 2022
Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items
J Jumelet, D Hupkes
BlackboxNLP, 2018
Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment
J Jumelet, W Zuidema, D Hupkes
CoNLL, 2019
Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
A Sinclair*, J Jumelet*, W Zuidema, R Fernández
TACL, 2022
Language Models Use Monotonicity to Assess NPI Licensing
J Jumelet, M Denić, J Szymanik, D Hupkes, S Steinert-Threlkeld
ACL Findings, 2021
Language Modelling as a Multi-Task Problem
L Weber, J Jumelet, E Bruni, D Hupkes
EACL, 2021
The Birth of Bias: A case study on the evolution of gender bias in an English language model
O van der Wal, J Jumelet, K Schulz, W Zuidema
GeBNLP, 2022
Feature Interactions Reveal Linguistic Structure in Language Models
J Jumelet, W Zuidema
ACL Findings, 2023
diagNNose: A Library for Neural Activation Analysis
J Jumelet
BlackboxNLP, 2020
DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers
A Langedijk, H Mohebbi, G Sarti, W Zuidema, J Jumelet
NAACL Findings, 2024
Attention vs non-attention for a Shapley-based explanation method
T Kersten, HM Wong, J Jumelet, D Hupkes
DeeLIO, 2021
Transformer-specific Interpretability
H Mohebbi, J Jumelet, M Hanna, A Alishahi, W Zuidema
EACL Tutorial, 2024
Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution
J Jumelet, W Zuidema
EMNLP Findings, 2023
Curriculum Learning with Adam: The Devil Is in the Wrong Details
L Weber, J Jumelet, P Michel, E Bruni, D Hupkes
arXiv preprint arXiv:2308.12202, 2023
Interpretability of Language Models via Task Spaces
L Weber, J Jumelet, E Bruni, D Hupkes
ACL, 2024
Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence
A Patil, J Jumelet, YY Chiu, A Lapastora, P Shen, L Wang, C Willrich, ...
arXiv preprint arXiv:2405.15750, 2024
Do Language Models Exhibit Human-like Structural Priming Effects?
J Jumelet, W Zuidema, A Sinclair
ACL Findings, 2024
Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue
A Molnar, J Jumelet, M Giulianelli, A Sinclair
CoNLL, 2023
ChapGTP, ILLC's Attempt at Raising a BabyLM: Improving Data Efficiency by Automatic Task Formation
J Jumelet, M Hanna, MH Kloots, A Langedijk, C Pouw, O van der Wal
BabyLM Challenge / CoNLL, 2023
Bottom-up Parsing for the Extended Typelogical Grammars
J Jumelet
Universiteit Utrecht, 2017