Google halts Gemini AI image creation amid AI racism accusations

Published February 23rd, 2024 - 04:19 GMT
Google AI (Shutterstock; Sarajevo, Bosnia and Herzegovina, April 18, 2023)

ALBAWABA - After criticism over its inaccurate portrayals of historical figures, Google has decided to pause its Gemini AI chatbot's feature for generating images of people.

An executive's old social media posts have resurfaced amid the anger over Google's AI rollout. Google's Senior Director of Product Management for Gemini Experiences, Jack Krawczyk, is under fire for past comments on race and privilege. As Google faces criticism over its AI's handling of race in generated images, these posts have gone viral.

The uproar emerged after users flagged Google's AI for highlighting Black, Native American, and Asian achievements while ignoring White ones. In tests by Fox News Digital, when prompted to show White people, the AI repeatedly declined, saying it could not fulfill the request and citing concerns about perpetuating harmful stereotypes based on race.

In response to the recent criticism, the tech giant declared: "We are actively addressing the recent issues with Gemini's image creation feature. We will temporarily disable this feature and release an improved version shortly."

Although the feature is widely used, Google acknowledged that the existing functionality has not met its goals and reaffirmed its commitment to improving the depictions its image generation produces.

This action follows criticism of Google for depicting White historical figures as African Americans in generated images, which fueled a backlash over the spread of false historical representations.

Yesterday, Google apologized and admitted that images of certain historical figures had been rendered inaccurately.

On X (formerly Twitter), AI critic Gary Marcus (@GaryMarcus) commented on Google's AI scandal, asking whether Google AI wanted to erase White men from history.

He offered an alternative possibility: that attempts to address bias may have backfired. Implementing effective guardrails in such systems is unpredictable, he said, and he advised observers to take the incident as a lesson in managing large language models (LLMs) and move on.

After the AI's disastrous rollout, Google's reputation has come into question. Krawczyk has previously said that Google was actively improving its AI capabilities and urged rapid action. However, his earlier remarks raise questions about Google's diversity and inclusion policies.
