The metaverse, undergirded by the digital economy, is a hybrid world of the real and the virtual. In this unreal, AI-based world, authenticity is paramount if we are to surmount new cyberthreats.

In the unreal world, AI-created images, chatbots, and augmented and virtual realities are becoming ever easier to access, while new technologies are making synthetic data and AI-generated content more realistic than ever.

According to Accenture’s latest Technology Vision report, 79% of APAC consumers expect to interact more with AI or AI-generated content over the next three years. This has ignited growing concerns that bad actors will use the technology to damage brand reputations, creating bigger hurdles for organisations.

The rising dangers of the unreal world

James Nunn-Price, Security Lead for Growth Markets
Accenture

The advantages of the unreal world have been hotly discussed. When deployed responsibly, synthetic realness pushes AI to new heights and allows individuals to engage in seamless experiences that bridge the physical and virtual worlds. Synthetic data can also train AI models in ways that real-world data practically cannot: it increases the diversity of training data and enables AI to counter bias, thereby overcoming the pitfalls of data gathered in the real world.
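To make the synthetic-data point concrete, here is a minimal, illustrative Python sketch (not drawn from the report) of one common approach: generating synthetic rows for an under-represented group so a model trains on more balanced data. The group definitions, feature count, and jitter scale are assumptions chosen purely for the example.

```python
# Illustrative sketch only: balancing a skewed training set with synthetic rows.
# The groups, features, and jitter scale below are assumptions for the example.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 900 rows from a well-represented group A, only 100 from group B.
group_a = rng.normal(loc=0.0, scale=1.0, size=(900, 3))
group_b = rng.normal(loc=2.0, scale=1.0, size=(100, 3))

def synthesize(real_rows: np.ndarray, n_new: int, jitter: float = 0.05) -> np.ndarray:
    """Create synthetic rows by resampling real rows and adding small noise."""
    idx = rng.integers(0, len(real_rows), size=n_new)
    noise = rng.normal(scale=jitter, size=(n_new, real_rows.shape[1]))
    return real_rows[idx] + noise

# Add 800 synthetic group-B rows so both groups contribute equally to training.
synthetic_b = synthesize(group_b, n_new=800)
balanced = np.vstack([group_a, group_b, synthetic_b])
print(balanced.shape)  # (1800, 3): equal representation of both groups
```

Production pipelines typically rely on richer generative models and explicit fairness checks, but the underlying idea of widening the data a model sees is the same.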

That said, using these technologies also pushes businesses into controversial terrain. It raises tough questions, such as how companies should leverage generative AI in a way that is authentic for their customers, their partners, and their brand. AI in general has also heightened the sense of mistrust, with deepfakes and phishing on the rise.

The fact of the matter is, malicious use of the “unreal” has become decidedly lucrative for bad actors; people have motive to do harm, whether for profit, political power, or something else altogether. Unless these technologies are managed correctly, bad actors in the unreal world will do the most damage to businesses and their reputations. This is particularly concerning given that 65% of global consumers lack confidence in their ability to identify deepfake videos or synthetic content.

Businesses are becoming architects of the unreal world. As business decision-makers push AI into more collaborative and creative roles, they are blurring the lines between what is real and what is not. All this comes at a time when trust in the technology sector is on a steep decline, reaching an all-time low in 17 out of 27 countries.

APAC also saw a 168% year-on-year increase in cyber-attacks, and 81% of organisations worldwide have reportedly experienced a phishing attack since the onset of COVID-19. The consequences of these attacks can damage both the top and bottom line. According to the FBI, email-based scams alone have cost businesses more than $2 billion over a five-year period, and the growing believability of these scams may very well cost them more.


Authenticity remains key

While synthetic realness can sow distrust and discord, it can also improve human relationships. AI deployed authentically can help businesses build trust and bridge gaps. Before adopting unreal world technologies, businesses need to ask themselves the following questions:

  1. Is your business prepared to take full advantage of unreal world technologies?

Start by understanding and exploring the uses of synthetic data. Unreal technologies such as chatbots and AI-generated images, video and other content can help extend a brand’s reach and create new avenues for connecting with customers.

A good starting point is to pilot unreal technologies that augment the business, or to enable employees to use them as a partner that enhances design, simulation, and decision-making capabilities.

  2. How are you protecting your organisation and your customers from malicious use of the unreal?

It is important to identify emerging malicious applications of unreal world technologies before they become a systemic risk. Focusing on the veracity and provenance of information coming in and out of the organisation ensures that unintended falsehoods are not perpetuated.
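As a loose, illustrative sketch (not a prescription from the report) of what tracking provenance can look like in practice, the Python snippet below signs content an organisation publishes and verifies that signature before the content is reused. The key handling and asset text are assumptions; real deployments would lean on managed keys and established content-provenance standards.

```python
# Illustrative sketch only: signing published content and checking the signature
# before reuse, so tampered or spoofed assets are flagged. The key handling is
# an assumption; real systems would use managed keys and provenance standards.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"

def sign_asset(content: bytes) -> str:
    """Return an HMAC tag attesting the asset left the organisation unmodified."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_asset(content: bytes, tag: str) -> bool:
    """Check the tag before republishing content; False means do not trust it."""
    return hmac.compare_digest(sign_asset(content), tag)

original = b"Official product announcement transcript"
tag = sign_asset(original)

print(verify_asset(original, tag))                 # True: provenance intact
print(verify_asset(b"altered announcement", tag))  # False: treat as suspect
```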

Business leaders must be able to differentiate their use of unreal world technologies from that of threat actors, and start building trust with their customers through a clear, well-communicated purpose.

Developing a response plan or playbook that explores the most damaging threat scenarios, and training everyone in the organisation on these protocols, is key to protecting the business from malicious use of the unreal.

  3. How will your business shape the unreal world?

In thinking about the future of the unreal, businesses will need to prioritise authenticity. Distrust or harm created by a malicious, careless, or negligent actor in the unreal world could affect how people embrace and trust the unreal at large.

When businesses engage with the unreal world, they will need to look for ways to be authentic and to hold authenticity to a higher standard. C-suite leaders should also prioritise understanding the impact of AI on their business and learn how to hold its use to that same standard.

Building an unreal world

As AI progresses and models improve, the unreal world will continue to be a work in progress. Organisations exploring the unreal will see implications across several business functions such as security, marketing, customer relations, R&D, and beyond.

But whether we use synthetic data in ways that improve the world, or in ways that leave us falling victim to malicious actors, is yet to be determined.

Most likely, we will land somewhere in the expansive in-between, which is why elevating authenticity within the business is so important. Authenticity is the compass and the framework that guides businesses to use AI in a genuine way, by considering provenance, policy, people, and purpose.

Ultimately, authenticity will unlock new attitudes towards and experiences with AI, unleashing the full benefits of the unreal world.