Holy Revelation: Afro-Pope?! Google's Gemini Gone Woke
Google Gemini, when tasked with generating an image of the Pope, produced a surprising result: an Asian woman and a Black man dressed in papal attire. This outcome, while humorous, underscores a glaring incongruity. Throughout history, all 266 Popes have been white men, and the Catholic Church prohibits the ordination of women as priests or bishops, let alone their ascension to the papacy.
![](https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/0c0fa72e-1250-444c-b933-ef0a2a74b4de/Sk%C3%A6rmbillede_2024-02-27_kl._11.33.43.png?t=1709030058)
Earlier this month, Google unveiled its latest venture into image generation capabilities with the introduction of Gemini, formerly known as Bard. However, the excitement surrounding this AI tool quickly turned to controversy as concerns emerged regarding its handling of racial diversity.
Gemini's image generation feature has come under fire for disproportionately depicting Black, Native American, and Asian individuals over White individuals, leading to accusations of overcorrecting for racial bias. In response, Google has swiftly addressed the issue, acknowledging the inaccuracies and expressing a commitment to rectifying them promptly.
Jack Krawczyk, Senior Director of Product Management at Gemini Experiences, emphasized the urgency of improving the accuracy of image depictions. He underscored the importance of diversity in image generation but acknowledged that Gemini had missed the mark in its attempts.
![](https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/e1d17023-79b5-4c5a-b019-7818daad32d0/Sk%C3%A6rmbillede_2024-02-27_kl._11.04.15.png?t=1709030153)
Fox News Digital Tests on Gemini
Fox News Digital conducted multiple tests on Gemini, revealing further complexities in its approach to diversity. While the AI declined to provide images based solely on race, it suggested celebrating individual qualities rather than reinforcing harmful stereotypes. This stance reflects a broader effort to promote inclusivity and move away from historical biases in media representation.
The controversy surrounding Gemini is not unique in the realm of AI. Past incidents, such as Google's mislabelling of a photo of a Black couple and OpenAI's portrayal of predominantly white men in image queries, highlight ongoing challenges in achieving equitable outcomes in AI technologies.
As Google and other tech giants navigate these challenges, the quest for unbiased AI remains a pressing priority. It's a reminder of the critical importance of ethical considerations in AI development and the need for continual vigilance in addressing biases.
Why AI has diversity issues and bias
Efforts to mitigate bias have made limited progress, in large part because AI image tools are typically trained on data scraped from the internet. Those scrapes draw primarily from the United States and Europe, offering a limited perspective on the world. And just as large language models act as probability machines, predicting the most likely next word in a sentence, AI image generators are prone to stereotyping: they reproduce the images that American and European internet users most commonly associate with a word.
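The "probability machine" idea above can be made concrete with a toy sketch. The snippet below is purely illustrative (the tiny corpus and the `predict_next` helper are invented for this example, not anything Google or OpenAI uses): a bigram model trained on skewed data will always emit the majority pattern, which is the same mechanism by which a generator trained on a skewed web scrape reproduces stereotypes.

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed "web-scraped" corpus: most
# sentences pair the subject with the same description.
corpus = [
    "the pope is a white man",
    "the pope is a white man",
    "the pope is a white man",
    "the pope is a black man",
]

# Build a bigram model: for each word, count which words follow it.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def predict_next(word):
    """Greedy decoding: return the single most frequent continuation."""
    return follows[word].most_common(1)[0][0]

# The model simply echoes the majority association in its training
# data; the minority pattern never surfaces under greedy decoding.
print(predict_next("a"))  # -> "white"
```

Overcorrecting for this, as Gemini appears to have done, means overriding the model's learned probabilities after the fact, which is how historically implausible outputs like a female Pope can emerge.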