
In an era of deepfakes, a deep quest for truth matters

In addition to producing deceptive content about real people, the technology can also create non-existent characters

Published: Sun 28 Nov 2021, 11:31 PM

By Gopika Nair


“We are for reals,” a woman with a friendly face and periwinkle button-up shirt says in an advert. Four other people pop up beside her, lip-syncing the same words.

These are people you might see in your day-to-day life. They could be a friend of a friend, a stranger sitting across from you at a restaurant or just another person riding the Metro. But of course, the five talking heads in the video are not real; they’re deepfakes and they’re about to take over our world.

Deepfakes are a form of synthetic media technology that manipulates images, soundbites and video using artificial intelligence software.

In a fake news PSA (public-service announcement) posted three years ago, former US President Barack Obama warns Internet users about the dangers of this technology. “We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things,” he says. “This is a dangerous time.”

The person in the footage sounds and looks like Obama, but in a twist of irony, the PSA reveals that American comedian and filmmaker Jordan Peele created the fake video to highlight how easy it is to distort words and misuse people’s likeness.

In addition to producing deceptive content about real people, the technology can also create non-existent characters from composite images. One such creation is a popular 19-year-old influencer with more than three million followers on Instagram.

Lil Miquela’s posts indicate she is just like the rest of us. She likes avocado toast and samosas, she spends her weekends with real-life friends and she laments about lost love.

Though her bio says she is a “robot living in LA”, the same question pops up on almost every post: “Are you real or what?” She isn’t, but there are no dead giveaways that she is an avatar controlled by LA-based start-up Brud. The only thing that betrays her lack of humanity is her soulless eyes.

Today, no one has to navigate the complex world of deep learning technology to create their own deepfakes; many apps already exist. The most popular, Reface, an AI-powered app with more than 100 million downloads, allows users to swap faces with another person, essentially generating deepfake photos and videos. And despite all the hubbub around deepfakes and their ethics, synthetic technology is advancing rapidly.

Israeli start-up Hour One, the creator of the “we are for reals” advert, is using deepfakes for business purposes. To make sense of the technology’s untapped potential, Oren Aharon, the CEO and co-founder of Hour One, asks us to imagine a company-wide meeting for which someone has to record a video presentation. What if they could skip that chore without repercussions? With deepfake technology, an avatar of that person can deliver the speech instead.

“You get something amazing in two minutes, you can send it to everyone, and nobody needs to waste their time,” Aharon said in an interview with Fast Company.

Hour One, like several other companies, has adopted a well-defined mission to use deepfakes for creating human-centric experiences. Even major tech players, such as Apple and Amazon, are wielding the powers of synthetic media to make Siri and Alexa more realistic.

But can we, as human beings, ever accept the illusion that deepfake characters are ‘real’, let alone connect with them on any personal level? Moreover, do we want to?

The concept is undeniably an alluring one, and we can only assume that’s the reason why millions of people around the world are captivated by robot influencers. But we must draw a line somewhere to protect ourselves.

Deepfakes are innocuous when you’re using an app to swap faces with Kim Kardashian just for the heck of it. But the rise of synthesised content is not an optimistic development for a world that’s already struggling to combat misinformation. Deepfakes, be they video clips or soundbites, are only going to hasten a catastrophe, especially where sensitive political, social and environmental issues are concerned.

As deepfakes burgeon, we may not pause to think about what is real or, worse, we may stop trusting anything we see or hear, a phenomenon scholar Aviv Ovadya has termed “reality apathy”.

Bleak as it might seem, indifference is the natural response in this scenario. When seeing is no longer believing, the only thing we can rely on is our gut. But intuition alone is not enough to challenge imperceptible technological advances, no matter how strong the instinct might be.

Ultimately, we can only hope that major tech companies will continue to take action and curtail the spread of harmful deepfakes. Twitter changed its policy about deepfakes in 2020, announcing that the company would remove content that has been “significantly altered or fabricated” to mislead people. Others, including Facebook, YouTube and Reddit, have adopted similar measures.

But if we can’t fend off deepfakes with laws and legislation, the only thing we can do is learn to live with them. The most important step is squashing complacency. Even if bouts of reality apathy strike, we must continue to question the facts and seek out reputable sources to confirm the truth.

The deepfake version of Obama aptly summarised the importance of staying vigilant when he said: “How we move forward in this age of information is going to be the difference between whether we survive or whether we become some kind of dystopia.” — gopika@khaleejtimes.com


