Invited Speaker: Michael Schlichtkrull (University of Cambridge)

Abstract:
Automated fact-checking is typically presented as an epistemic tool that fact-checkers, social media consumers, and other stakeholders can use to fight misinformation. Nevertheless, most research papers are surprisingly vague about how this technology is to be used. In the first part of this talk, I will argue that this vagueness about intended use hinders research in the area. I will use a content analysis of highly cited papers to document and clarify the problem, and to establish recommendations. In the second part, I will try to follow my own recommendations as I introduce a new dataset for automated fact-checking. I propose that human fact-checking is an effective method for fighting misinformation, and accordingly attempt to reverse-engineer the process. The resulting dataset allows reasoning about the capacity of models, including LLMs, to help human fact-checkers with some or all of their real-world fact-checking tasks.
Bio:
Michael Schlichtkrull is an affiliated lecturer and postdoctoral research associate at the University of Cambridge, and an incoming lecturer at Queen Mary University of London. He works on automated fact-checking and other epistemically complex NLP problems. He has also worked extensively on graph neural networks for NLP, tackling problems including relational link prediction, question answering, and interpretability. Michael received his PhD from the University of Amsterdam, during which he also spent several years as a visitor at the University of Edinburgh.