A Ferrari employee received a message supposedly from the company’s CEO. Behind it, however, was an attempted fraud. With a simple trick, he managed to expose the fake.
How did the fraud work? The Ferrari employee received WhatsApp messages from someone posing as the company’s CEO, as reported by the Spanish tech site Xataka.com.
From the very first messages, the manager noticed that they came from an unknown number and that the profile picture did not match.
Manager uses information a fraudster cannot know
What did the fraudster want? The matter supposedly involved a confidentiality agreement that the employee would later have to sign. Specifically, it was said to concern the takeover of another company, which had allegedly already been discussed with the Italian authorities.
To make the deception more credible, the perpetrator explained that he was not using his usual phone number because the agreement was highly confidential and there could be complications in China; an unspecified currency hedge might therefore be necessary. Further details can be found on Bloomberg.com.
What question did the employee ask? The manager asked the fraudster for the title of the book that the real CEO had recommended to him just a few days earlier. Shortly after the question, the conversation with the fraudster ended.
The trick was to ask the fraudster something that only the real person could know. This form of verification works like a password: the information, or an agreed phrase, is known only to the two participants.
Fraudsters use AI to imitate real people
What is the problem? With artificial intelligence, fraudsters can analyze audio recordings of a voice and imitate it. The AI picks up the nuances of a person’s speech and mimics them. In some cases, fraudsters even use AI to simulate a kidnapping and extort money from victims who fall for the scam. This AI-powered “voice cloning” is becoming increasingly sophisticated.
These new technical capabilities make scams more elaborate. The so-called “grandchild trick” in particular could soon become significantly more dangerous, because the original voice and the copy can no longer be told apart.