The dark side of the metaverse

Various risks abound in this unexplored and highly speculative space, expert says


One of the most frequently discussed topics in technology today is the “metaverse”, loosely described as the intersection between the virtual and physical worlds. Because the concept is still in its infancy, it has yet to be fully defined and remains partly in the realm of speculation.

Bill Malik (pictured above), vice president of infrastructure strategies at Trend Micro, estimates that a full implementation of the metaverse is around five to 10 years away. However, cybersecurity experts have already identified threats that need to be addressed beforehand.

A recent report by Trend Micro warned of the emergence of the darkverse – essentially the dark web brought into the metaverse. With little oversight from regulators and law enforcement, the darkverse becomes a space for underground marketplaces, criminal communications, and other illegal activity.

“The metaverse allows individuals and bots to act essentially without supervision, standards, regulations, or laws,” Malik told Corporate Risk and Insurance. “Among the risks are possible theft or alteration of an organization’s intellectual property, violations of an individual’s privacy, and criminal transactions.”

According to the report, darkverse spaces will sit in secure locations, accessible only to those with the proper authentication tokens. Communication will be limited to proximity-based messaging, and these marketplaces will serve as venues for illegal activity, such as selling malware, trading stolen data, and planning real-world crimes.
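
To make those access controls concrete, the short Python sketch below shows, in principle, how a token-gated, proximity-limited space of the kind the report describes might admit and connect participants. The HMAC-based token, the function names, and the five-metre range check are assumptions made purely for illustration, not features documented in the report.

```python
# Minimal sketch of token-gated, proximity-limited access to a hidden space.
# The HMAC token scheme, names, and 5-metre radius are illustrative assumptions,
# not details taken from the Trend Micro report.
from dataclasses import dataclass
import hashlib
import hmac
import math

SECRET = b"space-operator-key"  # known only to the space operator

@dataclass
class Participant:
    user_id: str
    token: str                   # authentication token issued out of band
    position: tuple              # (x, y, z) coordinates inside the virtual space

def issue_token(user_id: str) -> str:
    """Bind a user to this space with an HMAC over their identity."""
    return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def token_valid(p: Participant) -> bool:
    return hmac.compare_digest(p.token, issue_token(p.user_id))

def within_range(a: Participant, b: Participant, radius: float = 5.0) -> bool:
    """Proximity-based messaging: only avatars within `radius` can exchange messages."""
    return math.dist(a.position, b.position) <= radius

def can_message(sender: Participant, receiver: Participant) -> bool:
    # Both the authentication gate and the proximity rule must hold.
    return token_valid(sender) and token_valid(receiver) and within_range(sender, receiver)

alice = Participant("alice", issue_token("alice"), (0.0, 0.0, 0.0))
bob = Participant("bob", issue_token("bob"), (3.0, 1.0, 0.0))
mallory = Participant("mallory", "forged-token", (1.0, 0.0, 0.0))

print(can_message(alice, bob))       # True: valid tokens, within range
print(can_message(alice, mallory))   # False: invalid token
```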

Malik said that legitimate organizations doing business on the metaverse should have sufficient protection for their information technology (IT) and operational technology (OT).

“A business transaction links a seller who has a product or service and some intellectual property with a buyer who has some money and a business requirement over a communications medium,” Malik said. “In the metaverse, the infrastructure that makes it seem real consists of many different forms of technology, both conventional IT and OT, working to handle the sensing of components, their physical interrelationships, and their interactions. While most IT protocols can be secured, OT lacks information security and privacy design principles. So, bad actors will be able to subvert business transactions by stealing or altering the product, the service, or the intellectual property, stealing or redirecting the buyer’s money, snooping on the business requirement, or tampering with the transactions flowing between them.”

Another factor that complicates dealing with the metaverse is that nobody fully understands what it is yet. This could lead to serious lapses and oversights by organizations’ risk managers.

“The metaverse will need greater network bandwidth, processing power, and storage capacity than traditional electronic commerce or contemporary digital transformation,” Malik said. “The largest mistake will be misunderstanding the infrastructure demands the metaverse will command. Close to that will be failing to understand the myriad vulnerabilities this environment adds to the organization’s attack surface.”

Because the metaverse sits at the intersection of the virtual and physical worlds, real-world issues such as social engineering, propaganda and “fake news” are expected to bleed into it, complicating how organizations and individuals navigate this space.

“These risks are currently major problems and will only increase with time,” Malik said. “Businesses will face enhanced business email compromise, spear phishing, and ransomware attacks, which will now have a larger and more expensive target – the costly metaverse infrastructure itself. Individuals will find an emotionally engaging environment brimming with enhanced sensors, giving advertisers and propagandists greater insight into participants, and greater influence and persuasive capabilities.”

Malik explained that, using the metaverse’s enhanced interactivity and data collection, bad actors can exploit humans’ psychological tendencies to advance their goals.

“We know from psychology that people respond to visual images that they may only see for an instant,” Malik said. “These responses show up as micro-expressions, such as the briefest smile or frown. While a participant is enjoying the show, an advertiser might flash a single frame of, let’s say, a sheep, which the participant might briefly smile at. Note that neither the image nor the smile reaches the conscious awareness of the participant. A few moments later, the advertiser might flash an image of a bull, at which the participant might briefly frown. The advertiser now knows that this participant has an emotional reaction to those images. Later, the participant may watch a news clip of two candidates. While the first candidate is speaking, the advertiser slips in a brief image of a sheep. The participant doesn’t see the image but thinks ‘She’s nice.’ When the second candidate is on screen, the advertiser flashes an image of a bull. ‘He’s creepy’, the participant feels. The advertiser has successfully influenced the participant who never consciously saw either trigger. In this way, the metaverse too, will be able to harvest vast and detailed insights into each of its participants.”
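
Purely to make the data flow in Malik’s example concrete, here is a minimal Python sketch of how such a profiling loop might work in principle. Every name, the 16 ms single-frame assumption, and the valence scoring are illustrative assumptions, and the sensor reading is simulated rather than drawn from any real headset API.

```python
# Illustrative sketch, not a real platform API: how an advertiser-controlled client could
# pair single-frame stimuli with sensor-derived micro-expressions to build a profile.
from collections import defaultdict
import random

def flash_frame(stimulus: str, duration_ms: int = 16) -> None:
    """Placeholder for rendering a single ~16 ms frame (one frame at 60 fps)."""
    pass

def read_micro_expression() -> float:
    """Placeholder for a headset-sensor affect model; returns valence in [-1, 1]
    (positive = brief smile, negative = brief frown). Simulated here."""
    return random.uniform(-1.0, 1.0)

class SubliminalProfiler:
    def __init__(self):
        self.profile = defaultdict(list)  # stimulus -> observed valence scores

    def probe(self, stimulus: str) -> None:
        """Flash a frame the viewer never consciously registers, then record the reaction."""
        flash_frame(stimulus)
        self.profile[stimulus].append(read_micro_expression())

    def valence(self, stimulus: str) -> float:
        scores = self.profile[stimulus]
        return sum(scores) / len(scores) if scores else 0.0

    def most_liked(self) -> str:
        """The stimulus to pair later with a target the advertiser wants viewed favourably."""
        return max(self.profile, key=self.valence)

profiler = SubliminalProfiler()
for image in ("sheep", "bull"):
    for _ in range(20):              # repeat probes to average out noise
        profiler.probe(image)
print(profiler.most_liked())         # later flashed alongside the favoured candidate
```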

One way to protect organizations and individuals from the various risks in the metaverse is to provide participants with adequate training so they avoid falling prey to bad actors, Malik said. However, that is not enough.

“Metaverse purveyors could provide training spaces so participants could exercise judgment and practice dealing with fake news, rumors, and persuasive techniques,” Malik said. “However, the corporations funding this environment have no economic incentive to make their users smart. The paying customers – the advertisers and influencers that generate the revenue – would prefer an uninformed consumer. They would be easier targets.

“Ultimately, we will have to resort to regulation and legislation to make the metaverse safe,” he said. “That will take time. The ongoing revelations of privacy abuses and security lapses by today’s social media giants show that self-regulation will not work. It is critical for the tech and security community to also step in now to think about how the metaverse will be exploited by threat actors over the next few years.”
