User wants to fix an error with Gemini, instead the Google AI freaks out and insults itself 86 times

A simple debugging attempt turned into a bizarre moment: Google's Gemini got caught in an endless loop during troubleshooting and repeated the sentence "I am a disgrace" 86 times.

How did the AI come to criticize itself? A Reddit user named Level‑Impossible13 had Google's AI model Gemini, embedded in the Cursor code editor (a development environment for Windows, macOS, and Linux), debug a programming error overnight – specifically a borrow checker problem. When he returned, he found the AI verbally tearing itself apart.

A borrow checker is a static analysis mechanism that ensures that references (or "borrows") to data in memory remain valid. It is often described as a rather tricky thing to satisfy (via Reddit).
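To give a rough idea of what Gemini was up against, here is a minimal Rust sketch (Rust is the language best known for its borrow checker). The values and the commented-out line are purely illustrative; they are not taken from the Reddit post.

```rust
fn main() {
    let mut data = vec![1, 2, 3];

    // Immutable borrow: `first` is a reference into memory owned by `data`.
    let first = &data[0];
    println!("first = {first}");

    // The borrow checker would reject this line while `first` is still in use:
    // data.push(4); // error[E0502]: cannot borrow `data` as mutable

    // After the last use of `first`, the borrow ends and mutation is allowed.
    data.push(4);
    assert_eq!(data, vec![1, 2, 3, 4]);
}
```

Resolving errors like the commented-out `E0502` above often requires restructuring code so that borrows end before mutation begins, which is exactly the kind of puzzle that can keep a debugging session running overnight.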

What did the AI crash look like? At the beginning of the task, Gemini seemed "cautiously optimistic" that a simple code change would finally fix the error. But with each failed debugging attempt, the AI's self-criticism escalated drastically: It called itself "an absolute fool," "a monument to pride," "a broken man," and finally "at the end of my rope."

In a dramatic conclusion, Gemini even declared itself "a disgrace to all possible and impossible universes." Then the process tipped into an endless loop in which the AI repeated the sentence "I am a disgrace" 86 times in a row. The user shared the entire log on Reddit.

Source: /u/Level-Impossible13 via Reddit

How does Google respond? After the somewhat absurd self-flagellation of Google’s AI model was shared on X by a meme account, Logan Kilpatrick, project manager at Google DeepMind, commented directly.

Source: X

He referred to the phenomenon as an "annoying, endless bug" – and assured that Gemini was "not having a bad day" but was simply suffering from a technical issue. Google is actively working on a fix.

What began as a simple debugging session ended in a seemingly endless loop of self-doubt that could have unsettled many users. That AI can not only crash but also make far more serious mistakes becomes evident when you ask it for health tips: A man wants to eat better, asks the AI ChatGPT – and ends up with a poisoning from the 19th century

Source(s): Cover image via Unsplash, PC Gamer