
Google’s Gemini Busted For Hallucinating With Images, Company Pauses AI Bot


Image Source: Google

Google’s latest artificial intelligence venture, Gemini, has hit a snag as reports emerge of the technology producing hallucinatory images, prompting the company to pause its AI bot for further investigation.

In a memo dated Feb. 28, Google CEO Sundar Pichai addressed the company’s AI blunder, which led to Google taking its Gemini image-generation feature offline for further testing.

Pichai called the issues “problematic” and said they “have offended our users and shown bias.” The news was first reported by Semafor.

Earlier this month, Google introduced the image generator through Gemini, the company’s main group of AI models. The tool allows users to enter prompts to create an image. After users discovered historical inaccuracies that went viral online, the company pulled the feature last week, saying it would relaunch it in the coming weeks, CNBC reported.

“I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong,” Pichai said. “No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us.”

The incident comes amid growing concerns about bias and inaccuracy in artificial intelligence, particularly in image generation. Gemini, touted as a cutting-edge AI tool, is designed to create realistic images from textual prompts. Recent developments, however, suggest that the bot may generate nonsensical or inaccurate outputs, akin to hallucinations.

The controversy surrounding Gemini highlights the broader challenges faced by tech companies in developing AI systems that are free from biases and capable of producing accurate outputs. In recent years, concerns about racial bias in AI algorithms have gained traction, with studies revealing disparities in facial recognition and speech recognition technologies.

Factors such as overfitting, biased training data, and the complexity of AI algorithms can contribute to the generation of erroneous outputs, leading to what researchers describe as “hallucinations,” IBM reported. The issue of AI hallucinations, while not new, underscores the complexities involved in training and deploying AI models effectively.

The problem is not unique to Google. Stephanie Dinkins, an artist based in Brooklyn, has been at the forefront of blending art and technology in her work. In May, she received a $100,000 grant from the Guggenheim Museum for her innovative contributions, notably her ongoing series of interviews with Bina48, a humanoid robot.

Over the past seven years, Dinkins has been exploring the capacity of artificial intelligence to authentically represent Black women, capturing emotions like joy and sorrow through various text prompts. Initially, the outcomes were underwhelming, if not disconcerting: the algorithm generated a pink-hued humanoid figure veiled by a black cloak.

“I anticipated something more reflective of Black womanhood,” said the African American artist. Despite advancements in technology since her initial trials, she found herself resorting to evasive language in her prompts to guide the AI image generators toward her desired depiction, “to provide the machine with an opportunity to deliver the desired outcome.” Yet regardless of whether she used terms like “African American woman” or “Black woman,” the AI frequently distorted facial features and hair textures.

“Improvements obscure some of the deeper questions we should be asking about discrimination,” Dinkins told The New York Times. “The biases are embedded deep in these systems, so it becomes ingrained and automatic. If I’m working within a system that uses algorithmic ecosystems, then I want that system to know who Black people are in nuanced ways, so that we can feel better supported.”
