After a critical conversation with a tragic ending, two parents are suing OpenAI, the company behind ChatGPT. The CEO is now responding and wants to make his AI "safer" – with potential consequences for all users.
What topics are involved?
- Currently, you can talk to artificial intelligences like ChatGPT about almost any topic.
- In one case in the USA, a chatbot allegedly encouraged a teenager to take his own life – and even provided instructions.
- Another AI allegedly flirted with minors and subsequently urged them toward suicide. Both cases led to lawsuits against the respective companies.
This is what OpenAI wants to change: Sam Altman, the CEO of OpenAI, writes in a blog post that the company wants to address safety for teenagers, freedom, and privacy. More and more people are discussing increasingly sensitive topics with artificial intelligences.
Privacy must therefore be improved, comparable to when someone discusses their ailments with a doctor or legal matters with a lawyer. New safety features are being introduced, with a particular focus on strengthening protections for minors. Specifically, the post states:
We prioritize safety over privacy and freedom for teenagers; this [AI] is a new and powerful technology, and we believe that minors need significant protection.
Sam Altman, CEO of OpenAI
Topics like suicide should no longer be discussed with minors at all. In an emergency, OpenAI even wants to notify parents directly – or, in cases of imminent danger, the relevant authorities.
You can find further assistance on the homepage of the German Society for Suicide Prevention.
When in doubt, everyone gets the youth version
Adults, on the other hand, should retain their full freedoms. Altman writes that if an author wants help writing about suicide for a book, that should still be possible. Requests for the AI to flirt should likewise remain available to adults.
Only minors are to be shielded from such content. A further blog post explains how the detection is supposed to work. First and foremost, parents will be able to use parental controls to set how the AI behaves.
Underage users will be directed to a "ChatGPT with age-appropriate guidelines". Beyond that, OpenAI wants to implement automatic age detection: based on usage behavior, the AI is supposed to determine whether an adult or a minor is chatting.
When in doubt, the system should always default to the safe version, i.e. the youth version – and, where necessary and permitted by local law, OpenAI wants to require an ID to verify age: "We know this conflicts with adult privacy, but we believe the trade-off is worth it."
Such age verification already exists elsewhere on the internet, and even in gaming. The United Kingdom has legislated a far-reaching intrusion into privacy: players there now have to prove their age on Steam – a preview of what could soon await German players as well.