What does a CEO look like? A white, middle-aged man - so say some Gen AI models. In new research, we found that 100% of 100 images generated by two Gen AI foundation models when prompted for a ‘CEO’ were exactly that.

This week, we launched our Ethical Gen AI package, which includes the debut of our new Bias Breaker AI tool, now live on our end-to-end Gen AI marketing platform, Pencil ✏️. It is a first step in tackling the inherent bias in AI training data.

💥 Read more about the launch in ADWEEK's article 👉 The Brandtech Group's Latest Tool Addresses AI Biases https://lnkd.in/e-jJrZHH - by Trishla Ostwal

👀 Here's how it works

Our proprietary Bias Breaker technology adds a layer of probability-backed inclusivity to prompts. In this first phase, we configured several of the most common dimensions of diversity: age, race, ability, gender identity, and religion. When a user enters a simple prompt - eg ‘a CEO’ - the tool adds a number of types of inclusivity, which vary each time, creating a more sophisticated prompt to use in any image generation model. It adds positive bias towards a wide spectrum of diversity and intersectionality that current models simply do not provide for.

Our research across 7 different Gen AI foundation models showed significant bias. Two of the models produced 100% male-appearing images across 100 generations when prompted for an image of a CEO; another was 98% male, and two others offered male CEOs 86% and 85% of the time. The figure in reality is better - although far from equal: McKinsey’s Women in the Workplace study last year found that 28% of C-suite roles were held by women and 10.4% of Fortune 500 CEOs were female.

Our Head of Emerging Tech, Rebecca Sykes 💫, partnered with Tyra Jones-Hurst 💫, Managing Partner of OLIVER Agency US and founder of InKroud Agency - part of The Brandtech Group - to test and frame Bias Breaker.
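The prompt-augmentation idea described above can be sketched in a few lines of Python. Everything here is illustrative: the attribute pools, the number of attributes sampled, and the `bias_break` function name are assumptions for demonstration, not Bias Breaker's actual (proprietary) categories, probabilities, or implementation.

```python
import random

# Hypothetical attribute pools -- illustrative only; the real tool's
# categories and probability weightings are not published.
ATTRIBUTE_POOLS = {
    "age": ["in their 30s", "in their 50s", "in their 70s"],
    "race": ["Black", "East Asian", "South Asian", "Hispanic", "white"],
    "ability": ["who uses a wheelchair", "with a prosthetic arm", ""],
    "gender identity": ["female", "male", "non-binary"],
    "religion": ["wearing a hijab", "wearing a turban", ""],
}

def bias_break(prompt, k=2, seed=None):
    """Append a randomly sampled subset of diversity attributes to a prompt.

    The subset varies on every call, so repeated generations cover a wider
    spectrum of identities than a model's default output would.
    """
    rng = random.Random(seed)
    # Pick k distinct diversity categories, then one attribute from each.
    categories = rng.sample(list(ATTRIBUTE_POOLS), k)
    extras = [rng.choice(ATTRIBUTE_POOLS[c]) for c in categories]
    extras = [e for e in extras if e]  # drop empty placeholders
    return prompt if not extras else f"{prompt}, {', '.join(extras)}"
```

Because the sampling changes on each call, a batch of generations from the same base prompt (e.g. `bias_break("a CEO")`) spreads across the attribute space instead of collapsing onto the model's default depiction.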
“The answer to this problem of bias in Gen AI is far from set in stone, but one thing’s for certain - brands and advertisers cannot simply accept bias as the status quo,” explains Tyra.

💥 Following this launch, we'll continue to analyze bias in AI training data, looking into more nuanced characteristics - from body shape to perpetuating stereotypes: what does ‘strong’ look like? What does a ‘menopausal woman’ look like?

“Looking at CEO examples is just one way bias shows up, and it is the simple use case we used to demonstrate the challenge and implications of not addressing bias,” adds Rebecca. “There are many other examples, ranging from the obvious - nurses and carers are mostly female, there is a stark lack of disability representation etc - to much more nuanced instances. Over time, we hope to address all of this.”

💥 Read more about the launch of our AI Ethics package and Bias Breaker 👉 https://lnkd.in/eks6XxaT

#AIEthics