Seminar: "Context Mixing in Transformers"

Abstract

In both text and speech processing, variants of the Transformer architecture have become ubiquitous. The key advantage of this neural network topology lies in modeling pairwise relations between elements of the input (tokens): the representation of a token at a given Transformer layer is a function of the weighted sum of the transformed representations of all tokens in the previous layer. This feature of Transformers is known as ‘context mixing’, and understanding how it operates in specific model layers is crucial for tracing the overall information flow. In this talk, I will first introduce Value Zeroing, a measure of context mixing, and show that the token importance scores obtained through Value Zeroing offer better interpretations than previous analysis methods in terms of plausibility, faithfulness, and agreement with probing. Next, by applying Value Zeroing to models of spoken language, we will see how patterns of context mixing reveal striking differences between the behavior of encoder-only and encoder-decoder speech Transformers.
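To make the core idea concrete, here is a minimal, illustrative sketch of Value Zeroing on a toy single-head self-attention layer. All names and sizes below are made up for illustration, and the toy layer omits residual connections, layer normalization, and the feed-forward block that the actual method accounts for; the point is only the mechanism: zero out token j's value vector, re-run the layer, and measure how much every other token's output representation changes.

```python
# Illustrative sketch of Value Zeroing on a toy single-head self-attention layer.
# score[i, j] approximates how much token i's representation relied on token j.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d = 5, 8                      # toy sequence length and hidden size
X = rng.normal(size=(n_tokens, d))      # token representations entering the layer
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

def attention_output(values):
    """Single-head self-attention with fixed queries/keys and the given values."""
    Q, K = X @ W_q, X @ W_k
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ values

def cosine_distance(a, b):
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

V = X @ W_v
original = attention_output(V)

# Zero each token's value vector in turn and record the change it causes
# in every other token's output representation.
score = np.zeros((n_tokens, n_tokens))
for j in range(n_tokens):
    V_zeroed = V.copy()
    V_zeroed[j] = 0.0                   # "value zeroing" for token j
    perturbed = attention_output(V_zeroed)
    for i in range(n_tokens):
        score[i, j] = cosine_distance(original[i], perturbed[i])

print(np.round(score, 3))               # context-mixing map for this toy layer
```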

Date
Feb 29, 2024 13:00 — 14:00
Location
Abacws

Invited Speaker: Hosein Mohebbi (Tilburg University, Netherlands)

Bio: Hosein Mohebbi is a PhD candidate at the Department of Cognitive Science and Artificial Intelligence at Tilburg University, Netherlands. He is part of the InDeep consortium project, doing research on the interpretability of deep neural models for text and speech. His research has been published in leading NLP venues such as ACL, EACL, and EMNLP, where he also regularly serves as a reviewer. His contribution to the Computational Linguistics community extends to co-organizing BlackboxNLP (2023, 2024), a popular workshop focusing on analyzing and interpreting neural networks for NLP, and offering a tutorial at the EACL 2024 conference.

Jose Camacho-Collados
Professor & UKRI Future Leaders Fellow