The assessment of language competencies is a laborious task that traditionally requires the convergence of many players, such as test designers, item writers, linguists, psychometricians and assessment experts. Each of them helps define the methods, standards and instruments for assessing language proficiency and ensures that tests are well-designed, valid and accurate. AI and NLP have made it possible to automate some of the tasks in this field, from the automatic generation of practice items to the automated assessment of students’ language production. This talk will look at some of these tasks, how AI can be used to automate them, and the implications for different stakeholders.
Invited Speaker: Mariano Felice (British Council)
Bio: Mariano Felice is a Senior Researcher and Data Scientist for Language Assessment and Learning at the British Council. His work explores how artificial intelligence (AI) and natural language processing (NLP) can improve computer-assisted learning and testing, from mining datasets and building models to supporting colleagues in the adoption of new technology. Mariano is also a Visiting Researcher at the University of Cambridge, where he was previously a Research Associate at the ALTA Institute. He holds a PhD in computer science from the University of Cambridge and has worked extensively on NLP for language learning and assessment, including grammatical error correction, automatic error typing, system evaluation, automatic cloze test generation and item calibration.