Shauli Ravfogel
PhD student, Bar-Ilan University
Verified email at macs.biu.ac.il - Homepage
Title · Cited by · Year
BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models
EB Zaken, S Ravfogel, Y Goldberg
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2021
Cited by 321 · 2021
Null it out: Guarding protected attributes by iterative nullspace projection
S Ravfogel, Y Elazar, H Gonen, M Twiton, Y Goldberg
Proceedings of the 58th Annual Meeting of the Association for Computational …, 2020
Cited by 213 · 2020
Measuring and improving consistency in pretrained language models
Y Elazar, N Kassner, S Ravfogel, A Ravichander, E Hovy, H Schütze, ...
Transactions of the Association for Computational Linguistics 9, 1012-1031, 2021
Cited by 135 · 2021
Amnesic probing: Behavioral explanation with amnesic counterfactuals
Y Elazar, S Ravfogel, A Jacovi, Y Goldberg
Transactions of the Association for Computational Linguistics 9, 160-175, 2021
Cited by 124 · 2021
Studying the inductive biases of RNNs with synthetic variations of natural languages
S Ravfogel, Y Goldberg, T Linzen
The 2019 Conference of the North American Chapter of the Association for …, 2019
Cited by 66 · 2019
Contrastive explanations for model interpretability
A Jacovi, S Swayamdipta, S Ravfogel, Y Elazar, Y Choi, Y Goldberg
arXiv preprint arXiv:2103.01378, 2021
Cited by 56 · 2021
Linear adversarial concept erasure
S Ravfogel, M Twiton, Y Goldberg, RD Cotterell
International Conference on Machine Learning, 18400-18421, 2022
Cited by 35 · 2022
Can LSTM learn to capture agreement? The case of Basque
S Ravfogel, FM Tyers, Y Goldberg
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and …, 2018
Cited by 35 · 2018
It's not Greek to mBERT: inducing word-level translations from multilingual BERT
H Gonen, S Ravfogel, Y Elazar, Y Goldberg
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting …, 2020
Cited by 34 · 2020
Counterfactual interventions reveal the causal effect of relative clause representations on agreement prediction
S Ravfogel, G Prasad, T Linzen, Y Goldberg
Proceedings of the 25th Conference on Computational Natural Language Learning, 2021
Cited by 32 · 2021
Ab antiquo: Neural proto-language reconstruction
C Meloni, S Ravfogel, Y Goldberg
Proceedings of the 2021 Conference of the North American Chapter of the …, 2019
Cited by 23* · 2019
Measuring Causal Effects of Data Statistics on Language Model's 'Factual' Predictions
Y Elazar, N Kassner, S Ravfogel, A Feder, A Ravichander, M Mosbach, ...
arXiv preprint arXiv:2207.14251, 2022
Cited by 20 · 2022
When BERT forgets how to POS: Amnesic probing of linguistic properties and MLM predictions
Y Elazar, S Ravfogel, A Jacovi, Y Goldberg
arXiv preprint arXiv:2006.00995, 2020
Cited by 19 · 2020
BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models
E Ben Zaken, S Ravfogel, Y Goldberg
arXiv e-prints, arXiv: 2106.10199, 2021
Cited by 18 · 2021
DALLE-2 is seeing double: flaws in word-to-concept mapping in Text2Image models
R Rassin, S Ravfogel, Y Goldberg
Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting …, 2022
Cited by 16 · 2022
Unsupervised distillation of syntactic information from contextualized word representations
S Ravfogel, Y Elazar, J Goldberger, Y Goldberg
arXiv preprint arXiv:2010.05265, 2020
Cited by 12 · 2020
Visual comparison of language model adaptation
R Sevastjanova, E Cakmak, S Ravfogel, R Cotterell, M El-Assady
IEEE Transactions on Visualization and Computer Graphics 29 (1), 1178-1188, 2022
Cited by 9 · 2022
BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models
S Ravfogel, E Ben-Zaken, Y Goldberg
arXiv preprint arXiv:2106.10199 8, 2021
Cited by 8 · 2021
LEACE: Perfect linear concept erasure in closed form
N Belrose, D Schneider-Joseph, S Ravfogel, R Cotterell, E Raff, ...
arXiv preprint arXiv:2306.03819, 2023
Cited by 7 · 2023
BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv e-prints, pages arXiv–2106
EB Zaken, S Ravfogel, Y Goldberg
Cited by 7 · 2021
Articles 1–20