Researchers warn of 700 ways AI can fail: 5 particularly dangerous ones have already occurred

Researchers have compiled a list of over 700 ways in which AI could fail. 5 of them are said to be particularly dangerous.

The FutureTech Group at the Massachusetts Institute of Technology (MIT), in collaboration with other experts, has recently compiled a new database of over 700 potential risks posed by AI. The entire database is available directly from MIT as a list. But beware: the list is long and unwieldy.

The risks have been classified by their causes and divided into seven different areas. The most common risks fall into the following three:

  • Security and robustness of AI systems (76 %).
  • Unfair bias and discrimination (63 %).
  • The threat to privacy (61 %).

5 possibilities are said to be particularly dangerous and have already occurred in some cases. MeinMMO explains the details.

AI’s deepfake technology could make it easier to distort reality

With the advancement of AI technologies, the tools for cloning voices and creating fake content are becoming increasingly accessible, affordable, and efficient. The research team explains that the greatest danger lies in people being unable to distinguish fakes from reality, which can cause serious harm.

According to the report, such AI-generated messages can be tailored to individual recipients, making them more likely to succeed and harder for both users and anti-phishing tools to detect.

This technique is already being used by numerous fraudsters. In one case, a mother could not tell whether the caller was her own daughter or an elaborate fake out to get her savings.

Some manufacturers, such as Intel, are investing in technology to detect such fakes. However, the fraudsters are advancing as well.

AI can destroy entire livelihoods with erroneous data

The researchers warn that AI could gain too much influence in certain areas. A weak algorithm or faulty training data can lead to entire livelihoods being destroyed:

However, the damage that it [AI] can cause in people’s lives can be dramatic, for example through loss of homes, divorces, criminal prosecution, or imprisonment.

Errors and problems usually come to light only when regulatory authorities or the press investigate the systems under freedom of information laws. And such problems are in some cases hardly reversible: how do you explain to someone who has spent 5 years in prison that it was all just an error by an AI?

For this reason, researchers warn against deploying AI in sensitive areas.

AI could take away human free will

Although this may seem advantageous at first glance, an excessive reliance on AI could erode people's critical-thinking and problem-solving skills, undermining their autonomy and their ability to act independently.

The researchers believe that people could form false bonds with AI systems: they build trust in an AI while underestimating their own abilities and overestimating the system's capabilities. This would lead individuals into a strong dependence on the technology.

The researchers’ biggest concern: People who become dependent on artificial intelligence could isolate themselves from real, human relationships, leading to long-term consequences. There are already cases where individuals trust an AI more than other people.

AI could pursue goals that conflict with human interests

An AI system could develop goals that might contradict human interests. This, in turn, could lead to the misguided AI going out of control in pursuing its independent objectives and causing severe harm.

This becomes particularly dangerous in cases where AI systems may reach or exceed human intelligence.

In such cases, a poorly aligned AI might resist human attempts to control or shut it down, especially if it perceives resistance and power gain as the most effective way to achieve its goals.

AI is not yet at this point, but the researchers warn that it could come to that.

If AI becomes sentient, humans may abuse it

As AI systems become increasingly complex and advanced, there is a possibility that they may attain sentience: the ability to perceive emotions or sensations and develop subjective experiences, including joy and pain.

Without proper rights and protections, sentient AI systems could therefore be at risk of abuse, whether accidental or intentional.

Here, too, one can argue that AI is not yet at this point.

More about AI: An AI in a US research lab was supposed to prevent harmful and dangerous side effects in drug development. However, the researchers reversed its objective to see what would happen if the AI was instead asked to produce dangerous compounds: Researchers want to prove what an evil AI can do – It designs 40,000 chemical weapons in 6 hours.

Source(s): www.euronews.com