Dubai: Billionaire nearly loses huge sum after staff member gets call from 'AI clone'

While 75 per cent of UAE employees believed they could identify a deepfake, only 37 per cent were able to distinguish between real and AI-generated images in a test

by Mazhar Farooqui

Published: Thu 24 Oct 2024, 9:01 AM

Last updated: Thu 24 Oct 2024, 10:00 PM

An Indian billionaire has revealed how an AI-powered scam nearly defrauded his company in Dubai. The scheme relied on an artificial intelligence (AI) clone of his voice that nearly tricked one of his senior executives into authorising a large financial transfer.

The billionaire, Sunil Bharti Mittal, founder and chairman of Bharti Enterprises, a multinational conglomerate, recounted the incident during the NDTV World Summit on Monday.


Mittal explained how the fraudster mimicked his voice so convincingly that even he was left “stunned” after hearing the recording. “One of my senior finance executives in Dubai, who handles our Africa headquarters, got a call in my voice, my tone, directing him to make a large money transfer,” said Mittal. “He was sensible enough to realise that I would never make such a request over the phone.”

The executive, who remained unnamed, quickly reported the suspicious call, preventing a major financial loss. “When I heard the recording, I was quite stunned by how perfectly it was articulated. It sounded exactly like how I would speak,” Mittal added.

This incident comes amid growing concerns globally and in the UAE about the misuse of AI, particularly deepfake technology.

The UAE Cyber Security Council recently warned about the dangers of deepfake content, stressing the risks of fraud, privacy violations, and misinformation. Deepfakes, AI-generated media designed to imitate real people, can take the form of highly convincing but entirely fabricated videos, images, or audio, posing serious threats to individuals and organisations.

The council has also launched an awareness campaign, warning that sharing deepfake content could lead to fraud or legal consequences and urging the public to verify the authenticity of digital content before distributing it.

A recent Kaspersky Business Digitisation survey revealed that while 75 per cent of UAE employees believed they could identify a deepfake, only 37 per cent were successful in distinguishing between real and AI-generated images during testing. Cyber experts said organisations remain highly vulnerable to deepfake scams, such as those involving fake videos or audio of CEOs authorising wire transfers.

Dmitry Anikin, senior data scientist at Kaspersky, emphasised the need for constant vigilance. “Many employees overestimate their ability to recognise deepfakes, which poses a significant security risk. Cybercriminals are increasingly using this technology to impersonate executives, enabling scams and extortion,” he said.
