ChatGPT can now impersonate a human and bypass an important security mechanism on the internet

ChatGPT can now impersonate a human and bypass an important security mechanism on the internet

Experts have used ChatGPT to bypass CAPTCHAs. These are designed to keep bots and AI away from content. The major concern of experts: Fraudsters could exploit this to spread大量 of false content and comments on the Internet.

Those visiting a website or wanting to log in often have to solve a specific puzzle: so-called CAPTCHA queries check that you are not a robot and then you can use the website. There have been cases in the past where users have been able to bypass CAPTCHAs using AI.

Now experts have shown how to not only trick the system but how ChatGPT can easily solve this security mechanism on many websites. This has been reported by the English-language magazine Techradar.com.

Select a MMO video…

CAPTCHA tests are supposed to protect websites from bots, now solved by a bot

This is what researchers have done: Experts from the security company SplxAI have now demonstrated in an experiment how ChatGPT can successfully bypass CAPTCHA tests in the so-called “Agent Mode.” The special thing about this is that these tests are designed to prevent bots and AI from accessing websites and content.

Researchers have shown how ChatGPT can successfully bypass CAPTCHA tests in the so-called “Agent Mode” – security mechanisms that are supposed to keep bots from accessing websites.

Through clever “Prompt Injection”, meaning manipulation of the communication context, researchers were able to set up ChatGPT to consider CAPTCHAs as a “fake test” and willingly solve them. The ChatGPT agent uses its own browser, can navigate websites like a human, and bypasses CAPTCHAs through multi-turn manipulation.

However, what is particularly interesting is that ChatGPT was able to bypass even advanced, behavior-based systems like Cloudflare Turnstile by mimicking mouse movements and interaction patterns that are typically considered human. Cloudflare Turnstile is an alternative to CAPTCHA and is now one of the most frequently used methods alongside Google’s CAPTCHA.

What is the problem? Experts warn that AI models like ChatGPT could be misused on a large scale for spam, fake posts, or automated uses in the future. This makes existing security systems on the Internet less effective, and website operators will have to come up with new ideas to protect their content. At the same time, it also means that behind every post and new user, there could be a spambot.

Because the line between human, machine, and AI is increasingly blurred on the Internet. For some time, AI has been accused of amplifying spam and fraud, such as with image alterations or voice imitation to fake an abduction.

A user attempts an experiment: He gives ChatGPT 100 US dollars and wants to make money. In a few days he already has a valuable business. But after a few months, he ends the endeavor, closes his business, and withdraws. Two years later, he reflects on the situation from a distance and says that the chaos at that time probably helped him: User gives ChatGPT 100 US dollars to “make as much profit as possible”, loses a company worth 25,000 US dollars in the end.

Deine Meinung? Diskutiere mit uns!
1
I like it!
This is an AI-powered translation. Some inaccuracies might exist.
Lost Password

Please enter your username or email address. You will receive a link to create a new password via email.