You are not mistaken: Your favorite AI has really become dumber, and an AMD engineer provides the proof

AI on a smartphone

A new analysis is causing a stir: a well-known AI for developers is said to have lost significant performance. Stella Laurenzo, software engineer and head of AI at AMD, reports up to 67 percent less reasoning depth. The likely culprits are recent updates.

Which AI assistant is this about? Claude Code is a coding tool from Anthropic that runs directly in the terminal and helps developers efficiently refactor, document, and debug code.

Now the coding AI from Anthropic is facing criticism. The trigger is an analysis from the tech giant AMD that documents a significant decline in performance.


At the center of the criticism is an investigation of around 7,000 real usage sessions, in which the AI was observed to perform worse than before in many cases. According to Stella Laurenzo, head of AI at AMD, and reports from Winbuzzer and The Register, efficiency is said to have dropped by up to 67 percent in certain scenarios (via GitHub).

Especially notable: on tasks it had previously solved reliably, the AI now often delivered incorrect or incomplete results.

Unable to perform complex technical tasks

How was the deterioration discovered? The findings come not from an isolated test, but from evaluating real use. Stella Laurenzo had thousands of interactions with the AI analyzed to better understand its behavior in everyday situations.

The AMD team had logged every Claude Code session conducted since January 2026. The data set comprises thousands of logged sessions with tens of thousands of individual reasoning steps (Winbuzzer).

This revealed a clear pattern: on average, the AI needed more attempts to find solutions and abandoned complex tasks more often. AMD's internal developers also reported that answers were less precise or skipped important steps.

These changes were measurable and occurred consistently across many sessions. According to Laurenzo, this means Claude Code is no longer able to reliably perform complex technical tasks.

Made dumber by update?

Why has the AI deteriorated? A clear cause has not yet been confirmed. According to the analysis, however, the decline is not due to one large, obvious error, but to two small changes that together had significant consequences: the introduction of adaptive thinking and the adjustment of the so-called effort value.

What is “adaptive thinking” and why is it a problem? In February 2026, Anthropic introduced a feature called “adaptive thinking.” The idea behind it: The AI decides for itself how much “thinking effort” it puts into an answer (devpik & Anthropic).

In simplified terms:

  • For easy tasks, it only thinks briefly or skips them entirely.
  • For difficult tasks, it should think longer and more thoroughly.

This saves computing power and makes the AI faster.
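The mechanism described above can be sketched roughly as follows. This is a simplified illustration, not Anthropic's actual implementation; the function name, thresholds, and token budgets are invented for the example:

```python
def choose_thinking_budget(estimated_difficulty: float) -> int:
    """Return a token budget for internal reasoning.

    Hypothetical sketch of 'adaptive thinking': the model scales
    its own reasoning effort with the perceived task difficulty.
    """
    if estimated_difficulty < 0.2:
        return 0           # trivial task: skip extended thinking entirely
    if estimated_difficulty < 0.6:
        return 1_000       # moderate task: think briefly
    return 10_000          # hard task: think long and thoroughly
```

The failure mode the article describes would then correspond to the model underestimating difficulty: a hard task scores below the lowest threshold, receives a budget of zero, and is answered without any thinking step at all.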

In practice, however, something unexpected happened: in some cases, the AI decided not to think at all.

This specifically means that for the tasks where careful thinking would have been crucial, the AI skipped this step and instead delivered incorrect answers.

What else has changed? At the beginning of March, a second, even less noticeable change followed: the so-called “effort” value was adjusted. This value sets the maximum amount of “thinking power” the AI is allowed to exert. Previously, it defaulted to “high”; afterward, only to “medium” (Venturebeat & GitHub).
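Why a single default change matters can be illustrated with a minimal sketch. The setting names and token numbers here are assumptions made up for the example, not Anthropic's real configuration:

```python
# Hypothetical illustration of an 'effort' cap: a ceiling on how much
# reasoning the model may spend, regardless of task difficulty.
EFFORT_BUDGETS = {"low": 500, "medium": 2_000, "high": 8_000}

def max_thinking_tokens(effort: str = "medium") -> int:
    """Return the reasoning ceiling for a session.

    Changing only the default from "high" to "medium" silently lowers
    the ceiling for every session that does not set the value explicitly.
    """
    return EFFORT_BUDGETS[effort]
```

No individual session looks broken; the ceiling is simply lower everywhere the default applies, which is consistent with the article's point that a small, quiet change can have broad consequences.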

Meanwhile, the lead developer of Claude Code, Boris Cherny (GitHub), has responded to Stella Laurenzo's criticism and confirmed certain adjustments and possible deteriorations.

Artificial intelligence is now used in many areas, above all in everyday life. Generation Z has even discovered AI as a therapeutic aid, which can lead to a big problem: AI like ChatGPT tells users what they want to hear.

This is an AI-powered translation. Some inaccuracies might exist.