Seminar: "Preference Alignment, with Reference Mismatch, and without Reference Models"

Abstract

In this talk, I'll cover two recent papers on preference alignment: Odds Ratio Preference Optimization (ORPO, EMNLP 2024) [1], which examines the role of the reference model in preference alignment methods such as DPO and RLHF, and Margin-aware Preference Optimization (under review @ CVPR) [2], which considers the risks of reference mismatch, where the preference-alignment data has features that diverge from the reference model.
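
As a rough sketch of the contrast the abstract draws (notation follows the DPO and ORPO papers: $y_w$ and $y_l$ are the chosen and rejected responses, $\pi_\theta$ the policy being trained, $\pi_{\text{ref}}$ a frozen reference model, and $\sigma$ the sigmoid):

$$
\mathcal{L}_{\text{DPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$

$$
\mathcal{L}_{\text{ORPO}} = \mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[\mathcal{L}_{\text{SFT}} + \lambda\, \mathcal{L}_{\text{OR}}\right],
\qquad
\mathcal{L}_{\text{OR}} = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right),
\qquad
\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}
$$

DPO's objective is anchored to $\pi_{\text{ref}}$, which is why a mismatch between the reference model and the preference data matters, whereas ORPO's odds-ratio term is computed from the policy alone and needs no reference model.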

Date
Feb 13, 2025 13:00 — 14:00
Location
Abacws

Invited Speaker: James Thorne (KAIST Graduate School of AI)

Bio: James is an Assistant Professor at the KAIST Graduate School of AI in South Korea, working on large-scale and knowledge-intensive natural language understanding. He recently completed his PhD at the University of Cambridge, where he developed models and methods for automated fact verification and correction.

[1] ORPO (EMNLP 2024): https://aclanthology.org/2024.emnlp-main.626/

[2] Margin-aware Preference Optimization: https://arxiv.org/pdf/2406.06424
