In this talk, I first examine the effect of cognitive biases on language learning. I introduce learners’ biases against sound changes, homophony, and unfamiliar patterns, and their connection to the frequency of the variants presented in the input. In a series of experiments, adult native speakers of English and Korean were exposed to an artificial language in which plural forms were probabilistically marked by one of two prefixes. One of the prefixes triggered sound changes and created homophony, whereas the other prefix did not trigger a sound change. Results showed that learners were poorer at learning the sound change that created homophony than the one that did not. However, when learners were frequently exposed to homophonous patterns in their native language, they learned both sound changes successfully. To model these biases, we introduce a Discount model in which weights are assigned to training data that exhibit biased patterns. This model straightforwardly implements the biases and accurately predicts learning outcomes.

In addition, I examine online surveys and find that the credibility of online news articles is affected by various elements, such as topic, stance, toxicity, and expert quotes. These features were extracted from news articles using a set of classifiers and were then used to train a classifier that assesses the trustworthiness of news articles. This approach incorporates the key elements into the model and enhances its interpretability.
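To make the idea behind the Discount model more concrete, here is a minimal sketch of weighting training data by the pattern each token exhibits. The discount values, prefix names, and pattern labels below are illustrative assumptions, not the fitted parameters or design from the talk.

```python
from collections import defaultdict

# Hypothetical discount weights: tokens exhibiting dispreferred patterns
# (e.g. a sound change that creates homophony) contribute less to learning.
# The specific values are assumptions for illustration only.
DISCOUNT = {
    "homophony": 0.4,     # sound change that neutralises a contrast
    "sound_change": 0.8,  # sound change without homophony
    "faithful": 1.0,      # no sound change
}

def estimate_prefix_probabilities(training_tokens):
    """Weighted relative-frequency estimate of each plural prefix.

    Each token is a (prefix, pattern) pair; its contribution to the
    count is discounted according to the pattern it exhibits.
    """
    weighted_counts = defaultdict(float)
    for prefix, pattern in training_tokens:
        weighted_counts[prefix] += DISCOUNT[pattern]
    total = sum(weighted_counts.values())
    return {prefix: count / total for prefix, count in weighted_counts.items()}

# Toy input: the hypothetical prefix "ke-" triggers a homophony-creating
# sound change, "pa-" leaves the stem unchanged; both are equally frequent.
tokens = [("ke-", "homophony")] * 50 + [("pa-", "faithful")] * 50
print(estimate_prefix_probabilities(tokens))
# The homophony-triggering prefix is underlearned relative to its input frequency.
```

Under this scheme, the biased pattern is learned below its true input frequency, which is the qualitative behaviour a discount-based account needs to capture.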
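The credibility pipeline can be sketched in a similarly hedged way: sub-classifiers supply interpretable features (topic, stance, toxicity, expert quotes), and a simple classifier is trained on those features. The keyword-based extractors, toy articles, and labels below are placeholders standing in for the trained models and data described in the talk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder extractors standing in for the sub-classifiers (topic, stance,
# toxicity, expert quotes); in practice each would be a trained model,
# not a keyword heuristic.
def extract_features(article_text: str) -> np.ndarray:
    text = article_text.lower()
    topic_political = float("election" in text)
    stance_onesided = float("must" in text)
    toxicity = float(any(w in text for w in ("idiot", "disgrace")))
    has_expert_quote = float("professor" in text or '"' in article_text)
    return np.array([topic_political, stance_onesided, toxicity, has_expert_quote])

# Toy training data with hand-assigned labels (1 = credible, 0 = not credible),
# purely for illustration.
articles = [
    'The professor said, "The data support a modest effect."',
    "Election stolen! Only an idiot would disagree, we must act now.",
    "Researchers published the trial results; a professor commented on them.",
    "This disgrace must end, no sources needed.",
]
labels = [1, 0, 1, 0]

X = np.stack([extract_features(a) for a in articles])
clf = LogisticRegression().fit(X, labels)

# Because each input dimension is a named, human-readable feature, the fitted
# coefficients can be read as the contribution of topic, stance, toxicity,
# and expert quotes to the credibility prediction.
for name, coef in zip(["topic", "stance", "toxicity", "expert_quote"], clf.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

The design choice this illustrates is interpretability: rather than feeding raw text to an end-to-end model, the final classifier sees only the named elements, so its weights can be inspected directly.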
Invited Speaker: Hanbyul Song (Cardiff University)
Bio: Dr. Hanbyul Song is a Research Assistant on Cardiff University’s DISINFTRUST project. She holds a PhD in Linguistics from University College London, where her research investigated biases influencing the acquisition of probabilistic morpho-phonological patterns. Her work extends into cognitive psychology, computational modelling, NLP, and language processing, aiming to reveal the complexities of cognitive biases and their implications.