There has been rising interest in automating toxic language detection and fact-checking to help experts with online moderation and fact verification. However, in the absence of clear definitions of crucial terms and of experts’ needs, one may ask how we can achieve the impact we aim for and build robust tools for toxic language detection and automated fact-checking. In this presentation, I will share insights and lessons learned from past and ongoing work on toxic language detection and automated fact-checking. I will talk about (1) the construction of multilingual resources for toxic language detection and related problems (e.g., selection bias) and (2) work on question generation for fact-checking.
Invited Speaker: Nedjma Ousidhoum