Distorted fingers, ears: How to identify AI-generated images on social media

As the technologies become more sophisticated, distinguishing deepfakes from genuine media can pose significant challenges, raising concerns about privacy, security, and the potential for abuse in the digital age

By Somya Mehta


Published: Thu 8 Aug 2024, 7:00 AM

Last updated: Sat 10 Aug 2024, 1:07 PM

In 2024, it is abundantly clear that we now live in the post-truth era. The digital revolution that brought about social media has made information dissemination quicker and more accessible than ever before. While it has many upsides, the consequences of inaccurate, misleading, and outright fake information circulating on the Internet are becoming increasingly dangerous.

With artificial intelligence (AI) thrown into the mix, the threat looms even larger. Now that AI enables anyone to create lifelike images of fictitious scenarios simply by typing text prompts, producing fake images no longer requires an expert skill set.


In recent years, this advancement has led to a rapid surge in deepfakes like never before. “There have been numerous incidents in the USA and Europe where images of students in schools have been used to generate inappropriate videos for degradation, revenge, or ransom,” says Farhad Oroumchian, Professor of Information Sciences, University of Wollongong in Dubai.

“Understanding whether we are dealing with real or AI-generated content has major security and safety implications. It is crucial to protect against fraud, safeguard personal reputations, and ensure trust in digital interactions,” he adds.

What are deepfakes and why should we spot them?

Deepfakes are a form of synthetic media where artificial intelligence techniques, particularly deep learning algorithms, are used to create realistic but entirely fabricated content. These technologies can manipulate videos, audio recordings, or images to make it appear as though individuals are saying or doing things they never actually did.

As the technologies become more sophisticated, distinguishing deepfakes from genuine media can pose significant challenges, raising concerns about privacy, security, and the potential for abuse in the digital age.

These images can be used to spread misleading or entirely false content, which can distort public opinion and manipulate political or social narratives. Additionally, AI technology can enable the creation of highly realistic images or videos of individuals without their consent, raising serious concerns about privacy invasion and identity theft.

Just last week, billionaire X owner Elon Musk faced backlash for sharing a deepfake video featuring US Vice President Kamala Harris, which tech campaigners claimed violated the platform’s own policies.

In another recent example earlier this year, high-profile Bollywood actors Aamir Khan and Ranveer Singh were featured in fake videos that went viral online, allegedly criticising Indian Prime Minister Narendra Modi for failing to fulfil campaign promises.

Hollywood actress Scarlett Johansson, too, became a target of an apparently unauthorised deepfake advertisement. And these are just some of many examples of how deepfakes and misinformation have plagued the Internet.

In fact, the advancement of deepfake technology has reached a point where celebrity deepfakes now have their own dedicated TikTok accounts. One such account features deepfakes of Tom Cruise, replicating his voice and mannerisms to create entertaining content.

Embedded TikTok video from the @deeptomcruise account, set to ‘Footloose’ by Kenny Loggins.

The ethical implications of this are significant; the ability to generate convincing fake content challenges our perceptions of reality and can lead to misuse in various contexts, from defamation to fraudulent activities.

As AI technology advances, being vigilant about these issues will help protect the integrity of information and individual rights in the digital age.

How to tell if an image is AI-generated

Given the rapid evolution of AI, it’s becoming nearly impossible to definitively tell if an image is AI-generated. “Distinguishing between real and AI-generated images is increasingly challenging,” says Yohan Wadia, a UAE-based entrepreneur and digital artist known for his ‘Satwa Superheroes’ AI-generated art series. “While some images may have subtle signs, many are almost indistinguishable from real photos.”

To this, Professor Oroumchian adds, “It depends on the sophistication of the tools used to create them. Very expensive tools can produce images and videos that are almost impossible for even experts to detect, let alone individuals. However, the devil is in the details.”

Determining whether an image is AI-generated can be quite challenging, but there are several strategies you can use to identify such images.

Visual inspection: A careful look at the details of an image is often helpful. The telltale signs, according to Professor Oroumchian, often include inconsistencies in the fingers, ears, and the background around the head.

Image: Freepik

“A few months ago, the Royal Family published an AI-generated picture of Kate Middleton and her children, which quickly became an embarrassment when it was identified as fake. Details around the fingers and ears exposed the image as AI-generated,” he adds.

When examining an image of a human or animal, common places to check include the fingers—their size, shape, and colour compared to the rest of the body.

“Additionally, the orientation and alignment of the eyes, as well as the ears and their alignment, and the background behind them or the head, can offer clues. Other areas to scrutinise are the background details on both sides of the body and between the arms and legs,” explains Professor Oroumchian.

Cross-validation: Look for unusual details or anomalies in the image, such as distorted text, inconsistent lighting, or unnatural features.

“The best approach is to use a conceptual checklist: question the plausibility of the scenario depicted, consider if the person would realistically be in that situation, and cross-validate using search engines and reverse image searches,” says Wadia.

Image: Freepik

“Despite their hyperrealism, AI-generated images can occasionally display unnatural details, background artefacts, inconsistencies in facial features, and contextual implausibilities. This critical analysis will help in assessing the authenticity of an image,” he adds.

Reverse image searches: Using reverse image search engines like Google Images or TinEye can also be useful, says Wadia. By searching for the image, you may find its origin or other instances where it has been used, which can help determine whether it was created by AI.

There are also specialised tools and software designed to detect AI-generated content, such as Deepware Scanner and Sensity AI. These tools analyse various aspects of the image to identify potential signs of AI manipulation.
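For readers comfortable with a little scripting, the reverse-image-search step can be partly automated. The minimal Python sketch below simply builds search links for an image that is already hosted online and opens them in a browser; the URL formats and the example image address are assumptions for illustration and may change over time.

```python
# Minimal sketch: build reverse-image-search links for an image that is
# already hosted online, then open them in the default browser.
# The URL formats below are assumptions and may change; local files
# must be uploaded through each site's own interface.
import webbrowser
from urllib.parse import quote

image_url = "https://example.com/suspect-image.jpg"  # hypothetical image URL

searches = {
    "Google Lens": f"https://lens.google.com/uploadbyurl?url={quote(image_url, safe='')}",
    "TinEye": f"https://tineye.com/search?url={quote(image_url, safe='')}",
}

for name, url in searches.items():
    print(f"Opening {name} ...")
    webbrowser.open(url)
```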

Repetitive patterns: AI-generated images may also display repetitive patterns or elements that appear unnatural, as AI sometimes struggles to generate complex, realistic textures or scenes.

Additionally, images that appear overly perfect or symmetrical, with blurred edges, might be AI-generated, as AI tools sometimes create images with an unnatural level of precision.
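As a rough illustration of the ‘overly perfect’ heuristic, the short Python sketch below compares an image with its mirror image to produce a crude symmetry score. This is a weak signal at best, not a detector: ordinary photographs can be highly symmetrical, AI-generated images can be messy, and the file name used here is only a placeholder.

```python
# Illustrative heuristic only: a crude symmetry score obtained by comparing
# an image with its horizontal mirror. Lower scores mean more symmetry;
# this is one weak signal among many, not a reliable detector.
import numpy as np
from PIL import Image

def symmetry_score(path: str) -> float:
    grey = np.asarray(Image.open(path).convert("L"), dtype=float)
    mirrored = grey[:, ::-1]
    # Mean absolute pixel difference, normalised to the 0..1 range
    return float(np.mean(np.abs(grey - mirrored)) / 255.0)

print(symmetry_score("downloaded_image.jpg"))  # hypothetical file name
```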

Image metadata: Another method is to analyse the image’s metadata, which can provide clues about its origin. Metadata, or EXIF data, might include information about the tools used to create or edit the image.

If the metadata indicates that an AI tool was involved, this could be a sign that the image is AI-generated. “Identifying AI-generated images and videos is becoming a field unto itself, much like the field of generating those images,” says Professor Oroumchian.
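For the technically inclined, the metadata check can be scripted. The minimal Python sketch below uses the Pillow library to read EXIF tags and format-specific info fields, flagging values that mention well-known AI tools. The keyword list is an illustrative assumption, and clean or missing metadata proves nothing, since such data is easily stripped or rewritten.

```python
# Illustrative sketch: scan an image's metadata with Pillow and flag values
# that mention common AI tools. Metadata is easily stripped or edited, so
# this can only ever be a hint, never proof.
from PIL import Image
from PIL.ExifTags import TAGS

# Assumed keyword list for illustration, not an authoritative catalogue.
AI_HINTS = ("midjourney", "dall-e", "stable diffusion", "firefly", "generative")

def inspect_metadata(path: str) -> list[str]:
    findings = []
    img = Image.open(path)

    # EXIF tags (commonly present in JPEGs saved by cameras and editors)
    for tag_id, value in img.getexif().items():
        tag = TAGS.get(tag_id, tag_id)
        if any(hint in str(value).lower() for hint in AI_HINTS):
            findings.append(f"EXIF {tag}: {value}")

    # Format-specific info fields (e.g. PNG text chunks)
    for key, value in img.info.items():
        if any(hint in str(value).lower() for hint in AI_HINTS):
            findings.append(f"info {key}: {value}")

    return findings

print(inspect_metadata("downloaded_image.jpg"))  # hypothetical file name
```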

As an artist, Wadia firmly believes that there needs to be a clear indication when an image is AI-generated: “Especially if it could cause a stir or unrest among viewers.”

“While ultra-realistic AI images are highly beneficial in fields like advertising, they could lead to chaos if not accurately disclosed in media. That’s why it’s crucial to implement laws ensuring transparency about the origins of such images to maintain public trust and prevent misinformation,” he adds.

However, until there’s an adequate framework to regulate the rapid advancements in AI being used to create deepfakes, combining the above approaches and maintaining a discerning eye while surfing the web can increase your chances of identifying AI-generated images.

Pro tip: You can use this site to check whether an image is AI-generated.

somya@khaleejtimes.com
