Deepfake tech could be significant threat – CyberCube

Insurers should consider the potential of the technology to create big losses

By Ryan Smith

The use of deepfake video and audio technologies could evolve into a major cyber threat to businesses within the next two years, according to cyber analytics specialist CyberCube.

The ability to create realistic audio and video fakes using artificial intelligence and machine learning is growing steadily, according to CyberCube’s new report, Blurring reality and fake. Recent technological advances and businesses’ increased dependence on video-based communication have accelerated these developments, CyberCube said.

With the increasing number of video and audio samples of business people accessible online – largely due to the COVID-19 pandemic – cyber criminals have a growing trove of data from which to build photo-realistic simulations, which can then be used to manipulate people, CyberCube said. “Mouth-mapping” technology, which mimics the movements of the human mouth during speech, complements existing deepfake technologies.

“As the availability of personal information increases online, criminals are investing in technology to exploit this trend,” said report author Darren Thomson, CyberCube’s head of cybersecurity strategy. “New and emerging social engineering techniques like deepfake video and audio will fundamentally change the cyber threat landscape and are becoming both technically feasible and economically viable for criminal organisations of all sizes.

“Imagine a scenario in which a video of Elon Musk giving insider trading tips goes viral – only it’s not the real Elon Musk,” Thomson continued. “Or a politician announces a new policy in a video clip, but once again, it’s not real. We’ve already seen these deepfake videos used in political campaigns; it’s only a matter of time before criminals apply the same technique to businesses and wealthy private individuals. It could be as simple as a faked voicemail from a senior manager instructing staff to make a fraudulent payment or move funds to an account set up by a hacker.”
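The standard mitigation for the faked-voicemail scenario Thomson describes is out-of-band verification: a payment instruction received by voice alone is held until it is confirmed through a second, independent channel. The sketch below illustrates the idea; the channel names, threshold, and policy are hypothetical placeholders, not a procedure from the CyberCube report.

```python
from dataclasses import dataclass

# Hypothetical sketch: hold payment instructions that arrive over voice
# channels (the ones most exposed to audio deepfakes) until they are
# confirmed out of band, e.g. by a callback to a known number.
# All names and the threshold below are illustrative assumptions.

VOICE_CHANNELS = {"voicemail", "phone_call", "video_call"}

@dataclass
class PaymentInstruction:
    amount: float
    channel: str                 # channel the instruction arrived on
    confirmed_out_of_band: bool  # independently verified on a second channel

def requires_second_channel(instr: PaymentInstruction,
                            threshold: float = 10_000.0) -> bool:
    """Voice-borne or large instructions must never rely on one channel."""
    return instr.channel in VOICE_CHANNELS or instr.amount >= threshold

def approve(instr: PaymentInstruction) -> bool:
    """Return True only if the instruction is safe to execute now."""
    if requires_second_channel(instr) and not instr.confirmed_out_of_band:
        return False  # hold the payment until independently confirmed
    return True

# A faked voicemail demanding a large transfer is held, not executed:
print(approve(PaymentInstruction(243_000.0, "voicemail", False)))  # False
```

Under a policy like this, even a convincing deepfake of a senior manager fails at the confirmation step, because the attacker does not control the second channel.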

The report also examined the growing use of more traditional social engineering techniques, which exploit human vulnerabilities to gain access to personal information and protected systems. One such technique is social profiling, which assembles the information necessary to create a fake identity for a targeted individual based on information available online or from physical sources like trash or stolen medical records.

According to the report, the overlap between domestic and business IT systems created by the COVID-19 pandemic, combined with increasing use of online platforms, is making social engineering easier for cyber criminals. AI technology is also making it possible to create social profiles on a larger scale.

CyberCube said there was little insurers could do to combat the development of deepfake technologies, but stressed that risk selection would become increasingly important for cyber underwriters.

“There is no silver bullet that will translate into zero losses,” Thomson said. “However, underwriters should still try to understand how a given risk stacks up to information security frameworks. Training employees to be prepared for deepfake attacks will also be important.”
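One way an underwriter might operationalise Thomson’s point about how a risk “stacks up” to information security frameworks is a weighted checklist score over an applicant’s declared controls. The sketch below is a minimal illustration of that idea; the control names and weights are invented for the example and do not come from CyberCube or any specific framework.

```python
# Hypothetical sketch: score an applicant's declared controls against a
# simple framework-style checklist. Control names and weights are
# illustrative assumptions, not a real underwriting model.

CONTROL_WEIGHTS = {
    "mfa_enabled": 0.25,                  # multi-factor authentication
    "payment_verification_policy": 0.30,  # out-of-band confirmation of transfers
    "deepfake_awareness_training": 0.25,  # staff trained on voice/video fraud
    "incident_response_plan": 0.20,
}

def control_score(declared: dict[str, bool]) -> float:
    """Return a 0..1 score: the weighted share of controls in place."""
    return sum(w for c, w in CONTROL_WEIGHTS.items() if declared.get(c, False))

applicant = {
    "mfa_enabled": True,
    "payment_verification_policy": False,  # the gap deepfake fraud exploits
    "deepfake_awareness_training": True,
    "incident_response_plan": True,
}
print(f"control score: {control_score(applicant):.2f}")  # control score: 0.70
```

A score like this is not a loss prediction; it simply makes risk selection comparable across applicants, with the weights reflecting which controls matter most for social engineering exposure.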

The report said that insurers should consider the potential of deepfake technology to create large losses, as it could be used to try to destabilise a political system or a financial market.

In March 2019, cyber criminals used AI-based software to impersonate an executive’s voice and demand a fraudulent transfer of US$243,000, according to CyberCube.
