This study explores whether monolingual English language models contain biases towards specific regions, and how these biases affect their performance on downstream Natural Language Processing (NLP) tasks. By examining subtle differences in word meanings across regions, particularly in places with fewer language resources such as New Zealand, the authors aim to understand how these biases impact the models' accuracy and fairness in real-world applications.
Title: Regional Bias in Monolingual English Language Models
Abstract: In Natural Language Processing (NLP), pre-trained language models (LLMs) are widely employed and refined for various tasks. These models have shown considerable social and geographic biases, creating skewed or even unfair representations of certain groups. Existing research focuses on biases toward L2 (English as a second language) regions but neglects bias within L1 (first language) regions. In this work, we ask whether regional bias within L1 regions is already inherent in pre-trained LLMs and, if so, what the consequences are in terms of downstream model performance. We contribute an investigation framework specifically tailored to low-resource regions, offering a method to identify bias without imposing strict requirements on labeled datasets. Our research reveals subtle geographic variations in the word embeddings of BERT, even across cultures traditionally perceived as similar. Once captured, these nuanced features can significantly impact downstream tasks. In general, models perform comparably on datasets that share similarities, and performance may diverge when datasets differ in the nuanced features embedded within their language. Crucially, performance estimates based solely on standard benchmark datasets may not carry over to datasets whose features differ from those benchmarks. Our proposed framework plays a pivotal role in identifying and addressing biases detected in word embeddings, particularly evident in low-resource regions such as New Zealand.
Authors: Jiachen Lyu, Katharina Dost, Yun Sing Koh, Jörg Wicker — School of Computer Science, The University of Auckland.
Read more: ResearchGate – Regional Bias in Monolingual English Language Models.
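To make the core idea concrete, here is a minimal sketch of how one might probe regional variation in BERT's contextual embeddings: embed regionally distinctive words in context and compare the resulting vectors with cosine similarity. This is not the authors' actual framework; the model choice, the example sentences, and the use of "bach" (a New Zealand English term for a holiday home) are illustrative assumptions.

```python
# Sketch only: compare contextual embeddings of regionally distinctive words.
# Not the paper's framework; sentences and word choices are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Mean-pool the contextual embedding(s) of `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    tokens = inputs["input_ids"][0].tolist()
    # Locate the word's subword span inside the sentence's token sequence.
    for i in range(len(tokens) - len(word_ids) + 1):
        if tokens[i:i + len(word_ids)] == word_ids:
            return hidden[i:i + len(word_ids)].mean(dim=0)
    raise ValueError(f"{word!r} not found in {sentence!r}")

# Hypothetical regional usages of roughly equivalent words.
nz = word_embedding("We spent the summer at the bach by the beach.", "bach")
us = word_embedding("We rented a cabin by the lake for the summer.", "cabin")
uk = word_embedding("We stayed in a cottage by the sea all summer.", "cottage")

cos = torch.nn.functional.cosine_similarity
print(f"bach vs cabin:   {cos(nz, us, dim=0):.3f}")
print(f"bach vs cottage: {cos(nz, uk, dim=0):.3f}")
```

In a full analysis along the paper's lines, such comparisons would be aggregated over large regional corpora rather than a handful of hand-written sentences, which is where a framework that avoids strict labeled-dataset requirements becomes valuable for low-resource regions.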