Google’s response on how Gemini image generation went wrong and lessons learned

Google recently launched a new image generation capability in its Gemini AI chatbot (formerly known as Bard), allowing users to create AI images from text prompts. However, the feature came under fire after it generated historically inaccurate images of US Founding Fathers and Nazi-era German soldiers that excluded white people. Google quickly acknowledged the mistake, saying that the chatbot “missed the mark”, and temporarily paused the generation of images of people in Gemini. The company has now detailed what went wrong with Gemini’s AI image creation.
According to Prabhakar Raghavan, senior vice president at Google, two things went wrong with Gemini’s human image generation feature.
“First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive,” he explained.
He said the two problems led the model to overcompensate in some cases and be over-conservative in others, producing images that were embarrassing and wrong.
The Gemini conversational app’s image generation feature is built on top of an AI model called Imagen 2. Google said that if users prompt Gemini for images of people in particular cultural or historical contexts, they should “absolutely” get an accurate response.
Google working on an improved version
Google said it is working on an improved version of the feature, to be released at a later date. It reiterated that Gemini may not always be reliable, “especially when it comes to generating images or text about current events, evolving news or hot-button topics.”
“It will make mistakes. As we’ve said from the beginning, hallucinations are a known challenge with all LLMs — there are instances where the AI just gets things wrong. This is something that we’re constantly working on improving,” Raghavan noted.
He pointed out that Gemini will occasionally generate embarrassing, inaccurate, or offensive results, and that Google will continue to take action whenever it identifies an issue.