When training language models, choices such as the random seed for data ordering or the token vocabulary size significantly influence model behaviour. Answering counterfactual questions like “How would the model perform if this instance were excluded from training?” is computationally expensive, as it requires re-training the model. Once set, these training configurations are effectively fixed, creating a “natural experiment” in which modifying the experimental conditions incurs a high computational cost. Econometric techniques for estimating causal effects from observational data allow us to analyse the impact of these choices without full experimental control or repeated model training. In this talk, I will present our paper, Causal Estimation of Memorisation Profiles (Best Paper Award at ACL 2024), which introduces a novel method based on the difference-in-differences technique from econometrics to estimate memorisation without re-training the model. I will also discuss preliminary results from ongoing work that applies regression discontinuity design to estimate the causal effect of selecting a specific vocabulary size.
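To make the difference-in-differences idea concrete, below is a minimal sketch of the canonical two-group, two-period estimator applied to simulated per-instance losses. The group sizes, loss scales, and effect size are illustrative assumptions, not values from the paper, and the paper’s actual estimator for memorisation profiles across training is more involved than this toy version.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate per-instance losses for a "treated" group (instances included in a
# given training step) and a "control" group (instances not yet seen), each
# measured before and after that step. All numbers are made up for illustration.
n = 1_000
control_pre  = rng.normal(loc=4.0, scale=0.5, size=n)
control_post = rng.normal(loc=3.8, scale=0.5, size=n)  # shared drift from ongoing training
treated_pre  = rng.normal(loc=4.0, scale=0.5, size=n)
treated_post = rng.normal(loc=3.3, scale=0.5, size=n)  # shared drift + treatment effect

# Difference-in-differences: subtract the control group's change (the shared
# trend) from the treated group's change to isolate the effect of inclusion.
did_estimate = (treated_post.mean() - treated_pre.mean()) - (
    control_post.mean() - control_pre.mean()
)
print(f"Estimated effect of inclusion on loss: {did_estimate:.3f}")  # ≈ -0.5
```

The key assumption, as in any difference-in-differences analysis, is that the treated and control groups would have followed parallel trends absent treatment; here the control group’s change serves as the counterfactual trend for the treated group.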
Invited Speaker: Pietro Lesci (University of Cambridge)
Bio: Pietro Lesci is a PhD student in Natural Language Processing at the University of Cambridge, working with Professor Andreas Vlachos. His research explores the causal effects of training choices on language models, focusing on memorisation, shortcut learning, and tokenisation. His work has been presented at major NLP conferences such as ACL and NAACL. He received a Best Paper Award at ACL 2024 and funding from the Translated Imminent Research Grant for his research contributions. Pietro’s experience spans academia and industry, including roles at Amazon AWS AI Labs, Bain & Company, and Bocconi University. He holds an MSc in Economic and Social Sciences from Bocconi University.