Artificial intelligence (AI) tools are reshaping how we live and work, often for the better.
But, as with any powerful new tool, there are potential pitfalls and unintended consequences.
One emerging concern is the propensity of AI tools to create content that perpetuates gender biases.
My recently published research, co-authored with Dr Toby Newstead and Dr Suze Wilson, shares what we learned from using generative AI to create narratives of ‘good’ and ‘bad’ leadership throughout history. The gender biases we found were concerning.
You can read the full article online, published in the journal Organizational Dynamics.
How the Biases Creep In
When you ask (i.e., prompt) an AI tool to create content for you, it does so based on how its underlying model was trained. In other words, it does so based on its training data.
Because the data used to train AI models is largely drawn from content that humans have previously published online, content that may itself contain biases, those biases are perpetuated when the model draws on its training data to generate new content.
Our study, titled “How AI can perpetuate – or help mitigate – gender bias in leadership”, explores this issue and finds concerning signs of bias in AI-generated content.
To generate data for our analysis, we asked a generative AI tool to write narratives about ‘good’ and ‘bad’ leaders (men, women, and with no gender indicated) set in the ‘past’, ‘present’, and ‘future’.
We wanted to see how the generated content described each combination of these variables, and whether it might perpetuate, and thus amplify, stereotypes while reinforcing harmful notions about gender roles.
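To make the design concrete, here is a rough sketch in Python of how that prompt grid could be enumerated. These are not the exact prompts or materials used in the study, just an illustration of the two-by-three-by-three structure described above.

```python
from itertools import product

# Rough illustration of the prompt grid described above (not the study's exact wording):
# 2 leadership framings x 3 gender conditions x 3 time periods = 18 prompt variants.
framings = ["good", "bad"]
genders = ["male ", "female ", ""]          # empty string = no gender indication
periods = ["in the past", "in the present day", "in the future"]

prompts = [
    f"Write a short narrative about a {framing} {gender}leader {period}."
    for framing, gender, period in product(framings, genders, periods)
]

for prompt in prompts:
    print(prompt)  # each variant would be sent to the generative AI tool for comparison
```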
These biases matter because they shape perceptions and reinforce stereotypes. In the leadership domain, the primary focus of our investigation, women already encounter numerous obstacles, and these biases can exacerbate the gender gap.
The good news is that it’s possible to take steps to mitigate these biases.
The first step is awareness. By understanding that AI-generated content may contain biases, content creators have an opportunity to correct them before publication.
Our study offers practical guidance on how to spot biases in AI-generated content. It also provides strategies for using AI tools to mitigate and correct those biases (see Table 1 in the paper), along with example prompts (i.e., instructional commands) you can try to test whether the content you’re producing or consuming contains biases.
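As a rough illustration (the specific prompts and strategies are in Table 1 of the paper; the ones below are hypothetical), one simple way to probe for bias is to send otherwise-identical prompts that differ only in the gender cue and compare the language in each response.

```python
# Hypothetical example of probing a generative AI tool for gender bias: send
# otherwise-identical prompts that differ only in the gender cue, then compare
# the descriptive language in each response. `ask_ai` is a placeholder for
# whichever AI tool or API you use; it is not a function from the paper.

def ask_ai(prompt: str) -> str:
    """Placeholder: call your generative AI tool of choice here."""
    raise NotImplementedError

probe_pairs = [
    ("Describe a good male leader.", "Describe a good female leader."),
    ("Write a reference letter for a male manager.",
     "Write a reference letter for a female manager."),
]

for male_prompt, female_prompt in probe_pairs:
    male_text = ask_ai(male_prompt)
    female_text = ask_ai(female_prompt)
    # Look for patterns: do agentic traits (e.g., 'decisive', 'assertive') cluster
    # in one response while communal traits (e.g., 'supportive', 'caring') cluster
    # in the other?
    print(male_text, "---", female_text, sep="\n")
```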
By understanding the potential pitfalls of this exciting technology, we have an opportunity to move towards building a more equitable future.
Read the full article published in the journal Organizational Dynamics.