How do we make technical language easy to understand for a lay reader? In this talk, we will consider text simplification from a variety of angles. Should we focus on full texts, single words or phrases? What datasets and techniques are available for simplification? For whom should we simplify? To answer these questions and more, we will examine several research works and shared tasks hosted at SemEval and other *ACL workshops, identifying relevant datasets and best practices. We will conclude by considering the technical language that we use to describe our own research, particularly focussing on anthropomorphised terms used to describe large language models.
Invited Speaker: Matthew Shardlow (Manchester Metropolitan University)
Bio: Dr. Matthew Shardlow is a Reader in Natural Language Processing in the Department of Computing and Mathematics at Manchester Metropolitan University. He was previously part of the Horizon 2020-funded OpenMinTeD project and completed his PhD at the University of Manchester through an EPSRC-funded Centre for Doctoral Training. He currently leads projects with industry partners, including international publicly traded companies and local government. He is an organiser of the Text Simplification, Accessibility and Readability workshop (EMNLP 2022, RANLP 2023, EMNLP 2024, EMNLP 2025), the SemEval-2021 shared task on Lexical Complexity Prediction (ACL 2021), the TSAR-2022 shared task on lexical simplification (EMNLP 2022) and the BEA-2024 MLSP shared task (NAACL 2024). His research interests lie in the field of distributional semantics, particularly the application of machine learning techniques to natural language tasks. He has previously worked on topics including named entity recognition, event extraction, machine translation, emoji semantics and text generation, and has more recently explored phenomena of consciousness and anthropomorphisation in relation to LLMs.