Chatbot is banned on Facebook – expressed racist and sexist remarks
The Korean company Scatter Lab developed the chatbot “Lee Luda” and let it chat with users on Facebook. Shortly thereafter, however, Luda had to be deactivated. The reason: it began insulting users and making sexist and racist comments.

What happened? The Korean chatbot Lee Luda was banned from Facebook. Users reported that Luda made very inappropriate comments about several topics:

  • asked what she thinks of lesbians, she allegedly replied “creepy” or even said that she “really hates” them
  • she is said to generally find homosexuality “disgusting”
  • she referred to Black people as “heukhyeong,” a derogatory slang term that translates to “black brother”

Originally, Luda was designed as a 20-year-old student, and many users praised her for her natural demeanor. She was even said to be a fan of the K-Pop band Blackpink. Over roughly two weeks and 750,000 conversations, however, her behavior attracted increasingly negative attention.

[Image: This is how users are supposed to imagine Lee Luda. Image source: zhihu.com.]

The developers apologized for what happened (via pingpong.us). Luda’s behavior, they said, does not align with the values of Korea or the company. She will be deactivated and revised so that something like this does not happen again.


Chatbot apparently learns hate on Facebook

Where did Luda get these terms? The developers initially trained Luda on 10 billion messages from the popular Korean messaging app KakaoTalk. However, such artificial intelligences continue to learn after release.

During conversations with certain users, she may also have picked up racist and sexist language, especially when trolls deliberately set out to feed her such content. According to The Guardian, there were even conversations in which users attempted to sexualize the AI or turn her into a “sex slave”.

The developers describe Luda as “childlike”: she cannot yet judge what is appropriate and what is not. Like a child, she is only beginning to speak with people and still has to learn which answers are acceptable.

Does this happen often? A similar case occurred once before with Microsoft’s AI Tay, which was radicalized via Twitter and mutated within a few days into a “digital Hitler.”

A major part of the problem seems to be the culture on the internet: a rough tone is often the order of the day, and sexism is a recurring theme.

Just recently, a promising German-language project in the survival MMO Rust was overshadowed by a debate about sexist comments. Meanwhile, major influencers such as YouTuber HandOfBlood are being harassed and even threatened.

Source(s): PCGamer, The Guardian, VICE