If the evolution of deepfake tech continues to outpace the development of countermeasures, 2020 could become the year we begin to see threat actors deploy this technology en masse.

So, one morning, you get a text message from your accountant, followed by a voicemail. The accountant is asking you to verify some of your financial details.

Everything seems to check out: the text message comes from their phone number, and the voice is unmistakably theirs. They even ask you about that business trip to Singapore you took last month.

Should you trust them and provide those financial details? Just a few years ago, the answer would have been an unequivocal “yes”. These days, however, the text message you receive can easily be spoofed. And the voice you thought belonged to your accountant may have been synthesized from voice samples lifted from videos they shared on social media – a method of audiovisual content manipulation known as a deepfake.

As deepfake technology grows more sophisticated, hypothetical scenarios like the one outlined above could become an everyday reality for thousands of businesses.

Will 2020 be the year deepfakes become a legitimate threat? Is trust now a luxury we can no longer afford, even when it comes to things we can see with our own eyes? So far, it seems that most of us aren’t even remotely ready for the potential damage that deepfakes are about to cause.

Deepfakes are no longer difficult to produce

Creating a deepfake video is easier than most people think, and the more public the target, the easier it gets. All that’s required is a deepfake tool, such as China’s Zao app, and a large enough collection of images showing the target’s face from a sufficient variety of angles. A thousand or so pictures or video frames should do the trick – something that, in the age of social media, is easily harvested from most people’s profiles.
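To make that collection step concrete, here’s a minimal sketch of how face crops might be harvested from a video using OpenCV’s bundled Haar-cascade detector. The file names, frame-sampling rate, and crop size are illustrative assumptions, not details from any particular deepfake tool:

```python
# Minimal sketch: harvesting ~1,000 face crops from a source video.
# "target_interview.mp4" and the sampling rate are hypothetical.
import os
import cv2

os.makedirs("dataset", exist_ok=True)

# Haar-cascade face detector that ships with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

video = cv2.VideoCapture("target_interview.mp4")
saved, frame_idx = 0, 0

while saved < 1000:  # roughly the sample count mentioned above
    ok, frame = video.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % 5:  # sample every 5th frame to vary pose
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        crop = cv2.resize(frame[y:y + h, x:x + w], (256, 256))
        cv2.imwrite(f"dataset/face_{saved:04d}.png", crop)
        saved += 1

video.release()
print(f"collected {saved} face crops")
```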

If you want your deepfake video to look as convincing as possible, you’ll have to find a source video containing a person whose features resemble the target’s. Once “trained” on the target in the previous step, the deepfake algorithm superimposes the target’s face onto the person in your source video.
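Under the hood, classic face-swap tools are commonly built around an autoencoder with a shared encoder and one decoder per identity; the “swap” is simply routing one person’s encoding through the other person’s decoder. The PyTorch sketch below illustrates that idea in miniature – the layer sizes and training step are simplified assumptions, not any specific app’s implementation:

```python
# Simplified sketch of the shared-encoder / two-decoder face-swap idea.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
        nn.LeakyReLU(0.1),
    )

def deconv_block(c_in, c_out):
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
        nn.ReLU(),
    )

# Shared encoder: compresses any face into a common latent representation.
encoder = nn.Sequential(conv_block(3, 64), conv_block(64, 128), conv_block(128, 256))

# One decoder per identity: each learns to paint its person's face back.
decoder_a = nn.Sequential(deconv_block(256, 128), deconv_block(128, 64),
                          nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Sigmoid())
decoder_b = nn.Sequential(deconv_block(256, 128), deconv_block(128, 64),
                          nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Sigmoid())

# Training: each decoder learns to reconstruct its own identity.
faces_a = torch.rand(8, 3, 64, 64)   # stand-in batch for person A
recon_a = decoder_a(encoder(faces_a))
loss_a = nn.functional.mse_loss(recon_a, faces_a)
loss_a.backward()

# The "swap": feed person B's face through A's decoder at inference time,
# producing B's pose and expression rendered with A's face.
faces_b = torch.rand(8, 3, 64, 64)   # stand-in batch for person B
fake_a = decoder_a(encoder(faces_b))
```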

While there’s no guarantee you’ll produce a completely convincing deepfake on your first try, the software is improving by the day, and there are reports that apps capable of generating deepfakes in real time will be available in the near future.

The danger is growing

As deepfake technology continues to advance, it’s not difficult to imagine taking video calls with deepfakes masquerading as your friends and relatives, all generated in real time.

Even scarier is the fact that no effective countermeasures against deepfakes exist yet. That said, there’s some light at the end of the tunnel. Examples include DARPA’s Semantic Forensics (SemaFor) program and the Deepfake Detection Challenge, created to “spur researchers around the world to build innovative new technologies that can help detect deepfakes and manipulated media.”
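For a sense of what detection work looks like in practice, many challenge-style approaches reduce to a frame-level classifier over face crops. Below is a minimal, assumption-laden PyTorch sketch along those lines; it is not SemaFor’s or the challenge’s actual method, and the data shown is a random stand-in:

```python
# Minimal sketch: fine-tuning an off-the-shelf CNN to label face crops
# as real or fake. Batches, labels, and training details are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=2)  # 0 = real, 1 = deepfake
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-in batch: in practice these would be face crops extracted from
# labeled videos, such as the Deepfake Detection Challenge dataset.
frames = torch.rand(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,))

logits = model(frames)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

# At inference, average per-frame scores across a clip for a verdict.
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(frames), dim=1)[:, 1]
print("mean fake probability:", probs.mean().item())
```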

Until these and similar projects bear fruit, however, the only methods of defense available to most of us are education and vigilance.

The price of being slower than the enemy

Even though the security industry is well aware of the dangers posed by deepfakes, the bigger challenge is raising awareness among the rest of the digital population. Industry insiders are already mindful that anything they see online can be faked, but it’s the broader public who will bear the brunt of the potential damage.

If the evolution of deepfake tech continues to outpace the development of countermeasures, 2020 could become the year we begin to see threat actors deploy this technology en masse – not only to influence elections, but also to commit identity theft, gain unauthorized access by defeating biometric authentication, fabricate court evidence, and much more.

With such a threat looming on the horizon, it’s time for the media, Big Tech, and legislators to take immediate action. Because without effective defenses at both the industry and regulatory levels, the only limit to the destructive potential of deepfakes is the cybercriminal’s imagination.