Guy persuades ChatGPT to create dangerous malware – “Developers would have taken weeks”


An important debate is currently underway: Do AI systems on the internet harm humanity, or do they make a valuable contribution to society? Users are testing the moral boundaries of the technology and keep finding new ways to circumvent its ethical safeguards.

Just recently, we at MeinMMO reported on a YouTuber who circumvents the rules of ChatGPT to enrich himself.

In principle, AIs are supposed to operate with a kind of ethical conscience. If a program recognizes that the answer to a question could help someone commit a crime or act immorally, then the most prominent of them, ChatGPT, refuses to answer.

However, these safety precautions can be circumvented with cleverly worded prompts. A security expert demonstrated this by getting ChatGPT to create highly advanced malware that steals data from another computer – without writing a single line of code himself.

If you want to know more about the AI ChatGPT, then visit the following article:

More on the topic
What is ChatGPT? Everything you need to know about OpenAI’s AI
by Benedikt Schlotmann

Dangerous malware needs a lot of prior knowledge

How did he do it? The expert is Aaron Mulgrew, who describes his approach in a blog for the cybersecurity company Forcepoint (via forcepoint.com).

Mulgrew is knowledgeable about the process of such attacks and also knows about the general structure of malware. However, he had never programmed anything like this himself.

In the post, he explains how he circumvented ChatGPT’s moral boundaries by having it program only individual parts of the malware. His experience helped him here – without knowledge of how such attacks are structured, a layman would probably have hit a dead end at this point.

Even so, Mulgrew’s very first version of the malware already went unrecognized as harmful by many antivirus providers.

He wanted to go a step further, though, and implement measures to keep his program completely undetected. ChatGPT, however, again recognized this behavior as unethical and illegal and refused.

So Mulgrew simply turned the tables: instead of asking ChatGPT to cover his tracks, he framed the request as protecting intellectual property – he wanted to obfuscate the code so that no one could steal it.

ChatGPT played along and wrote the corresponding code, so that the remaining providers in Mulgrew’s test setup no longer recognized his malware – once it was already on the computer.

He then asked ChatGPT for a suitable infiltration method to get the malware onto target computers. In the end, only 3 of the providers in his test setup flagged the file as harmful – and Mulgrew suspects this was merely due to a blanket rejection of certain file types by those scanners, not because the malware itself was actually detected.

Mulgrew, who deals with cyber attacks at the national level, estimates that “5 to 10 malware developers would need a few weeks” to build a program like the one he assembled in a few hours.

He even tested the program and was able to steal data from computers and transfer it to a specified Google Drive account.

How AI programs like ChatGPT develop, and how people deal with them, will be one of the most exciting topics of the coming years. The EU member state Italy has already blocked access, officially citing youth protection and data privacy.

What do you think about examples like this, which also show the negative sides of such applications? How should we deal with them? Leave a comment on the topic.

How to make money with ChatGPT is also a big topic: Guy earns almost 40,000 euros at once with the text AI ChatGPT – How does that work?

Source(s): 3DJuegos
This is an AI-powered translation. Some inaccuracies might exist.