Network fraud remains a persistent problem, and with advances in artificial intelligence, a deception technique known as the “deepfake” has become increasingly difficult to guard against. Scammers use voice cloning tools to carry out highly convincing frauds, making it challenging even for experts to distinguish truth from deception. The FBI and security experts have warned the public, recommending that family and friends establish secret codes for self-protection.
According to a report by Forbes, Adrianus Warmenhoven, a cybersecurity expert from NordVPN, stated that “phone scammers are increasingly using voice cloning tools for fraudulent activities because this software has become more affordable and effective over time.”
Warmenhoven explained that a common tactic involves using deepfake audio to “impersonate a target’s family members,” and then using simulated emergencies to extort money or personal information.
A report from October 2024 revealed alarming statistics: in the United States alone, phone scams claimed more than 50 million victims over the preceding 12 months, with losses averaging an estimated $452 per victim.
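Taken together, those two figures imply a striking aggregate. A back-of-envelope calculation (assuming, as the report's wording suggests, that $452 is the average loss per victim) puts the total in the tens of billions:

```python
# Rough scale implied by the October 2024 figures cited above
# (assumption: 50 million victims, average loss of $452 each).
victims = 50_000_000
avg_loss_usd = 452

total_loss_usd = victims * avg_loss_usd
print(f"Implied aggregate loss: ${total_loss_usd / 1e9:.1f} billion")
# Implied aggregate loss: $22.6 billion
```

This is an illustrative estimate only; the report itself does not state an aggregate figure.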
Siggi Stefnisson, Chief Technology Officer for Cybersecurity at Norton and Avast, warned that “Deepfakes will become indistinguishable, as artificial intelligence becomes sophisticated enough that even experts may struggle to discern what is real.”
Catherine De Bolle, Executive Director of Europol, confirmed that “artificial intelligence is fundamentally reshaping organized crime,” which is now “more flexible and dangerous than ever before.” By rapidly adopting these new technologies, scammers are arming themselves with a powerful method of attack, and their fraud schemes are becoming larger and more difficult to detect.
Attackers can craft complex, believable messages at twice the speed, then continuously refine them through automation, with each iteration more credible than the last. In short, artificial intelligence is lowering criminals' costs.
De Bolle suggested strategies for combating AI fraud, stating, “target their finances, disrupt their supply chains, and stay ahead in technological usage.”
The FBI has repeatedly warned the public about such attacks, even releasing a public service advisory, numbered I-120324-PSA, specifically addressing the issue. Both the FBI and Warmenhoven recommend the same precautions, blunt as they may sound: hang up the phone, and establish a secret code known only to close family and friends.
Warmenhoven also advised caution about what goes into social media posts. “Social media is the largest open-source voice resource for cybercriminals,” he warned, urging everyone to be vigilant about what they share, as it can be turned against them through deepfakes, voice cloning, and other AI-generated deception techniques.
To reduce the risk from these sophisticated and increasingly dangerous AI attacks on mobile users, the FBI advises hanging up immediately if you receive a call demanding money from someone claiming to be a family member or close friend, then verifying the caller's identity through a direct channel.
The FBI further cautioned that everyone should create a secret word or phrase known only to their closest contacts and use it to verify the identity of any caller claiming to be a loved one in trouble, no matter how persuasive the caller sounds. The credibility of these calls often comes from deepfake technology, which takes publicly available audio clips, such as those in social media videos, and uses AI to clone the voice and make it say whatever the scammer feeds it.