Seminar: "Are Emergent Abilities in Large Language Models just In-Context Learning?"

Abstract

Large language models have exhibited emergent abilities, demonstrating exceptional performance across diverse tasks for which they were not explicitly trained, including those that require complex reasoning. The emergence of such abilities carries profound implications for the future direction of research in NLP, especially as the deployment of these models becomes more prevalent. However, a key challenge is that the evaluation of these abilities is often confounded by competencies that arise through alternative prompting techniques, such as in-context learning and instruction following, which also emerge as models are scaled up. In this study, we provide the first comprehensive examination of these emergent abilities while accounting for various potentially biasing factors that can influence model evaluation. We conduct rigorous tests on a set of 18 models, spanning 60 million to 175 billion parameters, across a set of 22 tasks. Through an extensive series of over 1,000 experiments, we provide compelling evidence that emergent abilities can primarily be ascribed to in-context learning. We find no evidence for the emergence of reasoning abilities, providing valuable insight into the underlying mechanisms driving the observed abilities and alleviating safety concerns regarding their use.
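The abstract's central distinction is between abilities acquired through training and those elicited via in-context (few-shot) learning, where solved examples are placed in the prompt itself with no parameter updates, as opposed to zero-shot evaluation, where the task is presented alone. The following is a minimal sketch of the two prompt formats, assuming a hypothetical sentiment task; none of the examples are taken from the paper:

# Contrast a zero-shot prompt with a few-shot (in-context) prompt.
# The task, demonstrations, and query below are hypothetical illustrations.
TASK_INSTRUCTION = "Classify the sentiment of the review as positive or negative."

# In-context demonstrations: the model is never trained on these; they
# appear only in the prompt at inference time.
DEMONSTRATIONS = [
    ("The film was a delight from start to finish.", "positive"),
    ("I walked out halfway through; a total bore.", "negative"),
]

def zero_shot_prompt(query: str) -> str:
    # Instruction plus the query alone.
    return f"{TASK_INSTRUCTION}\nReview: {query}\nSentiment:"

def few_shot_prompt(query: str) -> str:
    # Instruction, the solved in-context examples, then the query.
    shots = "\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in DEMONSTRATIONS
    )
    return f"{TASK_INSTRUCTION}\n{shots}\nReview: {query}\nSentiment:"

if __name__ == "__main__":
    print(zero_shot_prompt("An astonishing, moving piece of cinema."))
    print("---")
    print(few_shot_prompt("An astonishing, moving piece of cinema."))

Under the paper's thesis, much of the performance gain attributed to emergent abilities tracks the second prompt format rather than newly emerged reasoning competence.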

Date
Nov 9, 2023, 13:00–14:00
Location
Abacws

Invited Speaker: Harish Tayyar Madabushi (University of Bath)

Bio:

Dr Tayyar Madabushi’s long-term research goals centre on incorporating high-level cognitive capabilities into models. In the short and medium term, his work focuses on infusing world knowledge, common sense, and reasoning into pre-trained language models to improve performance on complex tasks such as multi-hop question answering, conversational agents, and social media analysis.

Dr Tayyar Madabushi completed his PhD in AI at the University of Birmingham in 2019, focusing on automated question answering, and began his current post as Lecturer in Artificial Intelligence at the University of Bath in 2022. His research on combining construction grammar with pre-trained language models has been influential: he conducted the first exploration in this field, paving the way for further developments.
