The emerging cyber security scams you must be aware of
CGTN
Attendees walk past a company logo at a consumer electronics expo in Shanghai, April 27, 2023. /CFP

Modern technologies, from artificial intelligence to ultra-fast wireless communications, have transformed how people work and live in recent years. But while enjoying their convenience, it is important to remember that some of these tools can cause serious harm if they fall into the wrong hands.

Apple co-founder Steve Wozniak once warned that AI technology may make scams and misinformation harder to identify. "AI is so intelligent it's open to the bad players, the ones that want to trick you about who they are."

Authorities are cracking down on online cons and the criminal groups behind them, yet the scams continue. The following are some of the most common online traps. With the help of technologies like AI, even tech-savvy users are at risk of falling victim.

A woman stands in front of an AI application that swaps her face with a chosen celebrity's via the camera at an expo in Shenzhen, November 14, 2018. /CFP

1. Deepfake face swaps

Deepfakes are an emerging AI technology that is as amazing as it is scary, defying the old saying that "seeing is believing."

The technology can seamlessly transplant one person's appearance onto another in a video, with a convincing voice and movements to match. Creators can produce footage that appears to show someone speaking, complete with natural body language, when in reality the person has said nothing of the sort.

Back in 2019, a mobile app called "ZAO" introduced a feature that let users swap their faces with celebrities' in a wide range of video clips simply by uploading their own photos, putting deepfake technology on the map worldwide. The app went viral as people had fun with the face-swapping function, but it soon prompted concerns over privacy invasion.

Use of the technology is growing rapidly, often for nefarious purposes. According to Sensity, an Amsterdam-based firm that monitors AI-generated synthetic media, the technology is mostly used to create sexually explicit videos, which account for an astonishing 96 percent of all such content. Financial scammers are also taking advantage of deepfake tech.

A plethora of reports have described how ordinary people's photos posted on social platforms have been stolen and used for deepfake pornography. Wang Jie, deputy director of the Institute of Rule of Law under the Beijing Academy of Social Sciences, pointed out that people's faces and voices need to be protected in the AI era.

The Character.AI app on a smartphone in the Brooklyn borough of New York, U.S., July 12, 2023. /CFP

2. AI voice cloning scam calls

Voice cloning apps replicate a voice from a sample of recorded speech, which can often be easily acquired online. This gives fraudsters an opening to swindle money from victims by impersonating their family members or friends. Some con artists have even fabricated kidnapping stories and demanded ransoms, using a cloned voice as proof.

Thanks to the generative AI boom, voice cloning applications have improved substantially, becoming more accessible and easier to use: three seconds of audio is now all they need, whereas they previously required a large number of samples to reproduce a target's voice.

According to a CNN report in April, a family in Arizona, U.S., received a ransom call in which they heard their daughter screaming, followed by a man's voice demanding a $1 million ransom. Though emergency responders helped identify the call as a hoax created with AI, the fear was real.

ChatGPT has a wide range of applications, but remains controversial. /CFP

3. ChatGPT phishing emails

Email phishing has existed for decades. Old email scams were riddled with plot holes and outrageously basic typos, but new technologies such as ChatGPT and machine learning are seemingly smoothing out the telltale flaws in this type of scam.

Cybercriminals no longer need to craft emails and replies on their own. They can hand the work to AI, and algorithms can help them identify their ideal victims among all the replies. A sophisticated phishing email can make victims feel obligated to click on a link that plants malware on their device.

A growing number of phishing emails are being written by chatbots, which produce convincing-sounding text, according to Darktrace, a prominent UK cybersecurity firm. Such emails can easily pass as coming from an employer, tricking employees into opening them. Phishing emails can also be "tailored" by feeding AI users' personal information taken from social media.

An Android expert demonstrates the features of Android's smart home using Nest devices at Mobile World Congress 2023 in Barcelona, Spain, February 27, 2023. /CFP

4. IoT device hacking

We were once amazed that technological advances let us control all manner of personal devices, from cars and fridges to lights and curtains, from our small but seemingly omnipotent cellphones. But those same devices can be key entry points for cyberattacks, Microsoft's Digital Defense Report 2022 warned.

According to CNBC, an estimated 17 billion gadgets around the world, from printers to garage door openers, can be easily hacked. While the security of IT hardware and software has strengthened in recent years, "the security of Internet of Things (IoT)... has not kept pace," according to the report. Most users are unaware of the need to apply software patches promptly, creating a weakness that hackers can weaponize.
