Religion and Racial Bias in Artificial Intelligence Large Language Models


Artificial intelligence large language models (LLMs) have been shown to replicate social and cultural biases present in their training data, particularly with regard to race and gender. The authors examine whether LLMs also hold implicit assumptions about religious identities. They prompted multiple LLMs to generate a total of 175 religious sermons, specifying different combinations of the clergyperson's race and religious tradition. The synthetically generated sermons were then fed into a readability analyzer and assigned several commonly used readability scores, which the authors examined with bivariate and multivariate analyses. The LLM-generated sermon texts varied in readability: prompts specifying Evangelical Protestant pastors produced easier-to-read AI-generated sermons, whereas prompts specifying Jewish rabbis and Muslim imams produced more-difficult-to-read synthetic texts. There were no significant differences in readability across ethnoracial groups; however, all prompts specifying a race/ethnicity generated more-difficult-to-read synthetic text than prompts with no ethnoracial group specified. As LLMs continue to expand in accessibility and capability, it is important to keep monitoring the ways they may sustain social biases across a variety of identities and group memberships.
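For readers unfamiliar with readability scoring, the sketch below shows one way such scores can be computed in Python with the open-source textstat package. This is an illustrative example only, not the analyzer the authors used, and the sample text is hypothetical.

# Illustrative only: a minimal readability-scoring sketch using the
# open-source "textstat" package (pip install textstat). This is NOT the
# analyzer used in the study; the sample text below is hypothetical.
import textstat

sermon_text = (
    "Beloved friends, today we gather to reflect on gratitude, "
    "on humility, and on the quiet work of caring for one another."
)

# Commonly used readability metrics: higher Flesch Reading Ease means
# easier text; the grade-level indices estimate years of schooling needed.
scores = {
    "flesch_reading_ease": textstat.flesch_reading_ease(sermon_text),
    "flesch_kincaid_grade": textstat.flesch_kincaid_grade(sermon_text),
    "gunning_fog": textstat.gunning_fog(sermon_text),
    "smog_index": textstat.smog_index(sermon_text),
}

for name, value in scores.items():
    print(f"{name}: {value:.2f}")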

