We are excited to share a recently published paper by our project members, Vithya and Gill (project leader), in the Journal of the Royal Society of New Zealand. The paper reviews the bias problem in large language models and debiasing techniques, with a focus on the New Zealand context. It surveys the relevant literature, outlines a suggested scope of research opportunities for New Zealand, and presents an experimental evaluation of existing bias metrics and debiasing techniques in the NZ context.
This paper was an invited article in recognition of Gillian Dobbie's election as a Fellow of the Academy of the Royal Society Te Apārangi in 2023. The full paper is available here: https://doi.org/10.1080/03036758.2024.2398567.
Abstract:
Large language models (LLMs) are powerful decision-making tools widely adopted in healthcare, finance, and transportation. Embracing the opportunities and innovations of LLMs is inevitable. However, LLMs inherit stereotypes, misrepresentations, discrimination, and societal biases from various sources, including training data, algorithm design, and user interactions, resulting in concerns about equality, diversity, and fairness. The bias problem has triggered increased research towards defining, detecting, and quantifying bias and developing debiasing techniques. Research on the bias problem is skewed towards resource-rich regions such as the US and Europe, resulting in a scarcity of research in other societies. As a small country with a unique history, culture, and social composition, there is an opportunity for Aotearoa New Zealand's (NZ) research community to address this inadequacy. This paper presents an experimental evaluation of existing bias metrics and debiasing techniques in the NZ context. Research gaps derived from the study and a literature review are outlined, current and ongoing research in this space is discussed, and the suggested scope of research opportunities for NZ is presented.
About the Authors: