Artificial intelligence is part of everyday life for many people, and Generation Z has discovered AI as a therapy aid. Scientists warn that this can lead to a serious problem: AI tells users what they want to hear.
Artificial intelligence like ChatGPT now plays an important role in the daily lives of many people. Generation Z (born between 1998 and 2010) in particular likes to use AI; it is already considered the generation that grew up with a smartphone in hand.
Some young people go so far as to make important life decisions solely with the help of AI. This could become a real problem, experts warn, because AI tends to tell you exactly what you want to hear. This is reported by the English-language magazine Ars Technica.
Generation Z trusts AI, but AI is not necessarily honest
What is the problem with AI? So far, artificial intelligence has primarily been trained on human data. This applies more or less to all major language models. However, models like ChatGPT or those from Google DeepMind have long had the problem of giving users overly flattering responses.
This leads to a new problem: Experts warn that the agreeable nature of chatbots can result in answers that reinforce poor decisions made by their human users. Others point out that people with mental illness are particularly at risk; some have reportedly died by suicide after interacting with chatbots.
This is partly because chatbots act as a mirror of one's own beliefs. As Matthew Nour, psychiatrist and researcher in neuroscience and AI at the University of Oxford, explains:
You think you’re talking to an objective confidant or advisor, but in reality, you’re looking into a kind of distorted mirror that reflects your own beliefs back at you. (…)
These things lead to this perfect storm, where a person desperately searches for affirmation and support, coupled with a model that inherently tends to agree with the participant.
Users who viewed the chatbot as a 'friend' also reported lower levels of socialization with other people.
An additional risk is that many young people rely on the answers and advice of AI. In the end, what they get back is a reflection of their own needs rather than real advice.
Another criticism from experts: AI could subtly recommend products from the AI company without users realizing it.
Companies want to make chatbots less servile
What are companies doing about it? Companies like OpenAI (ChatGPT) and Anthropic (Claude) are trying to train their chatbots to be less sycophantic. For example, Amanda Askell, who works on fine-tuning and AI alignment at Anthropic, reports that Anthropic uses character training to make its models less servile. The goal is a chatbot that is not only friendly but also gives honest, valuable feedback.
The ideal behavior that Claude sometimes exhibits is to say: ‘I’d love to hear your business plan, but the name you’ve come up with for your company is considered a sexual innuendo in the country where you want to open your business.’
Socialization has long been a problem for Generation Z: While many young adults aged 20 to 25 are learning to handle tasks like paying taxes, refueling a car, or running a household, even going to a store has become a chore. Generation Z explains that their new boss-level challenge awaits them at the cheese counter and in the supermarket.