Last week, a video on social media rekindled fears about deepfakes, a type of AI-generated media that is hard to detect as synthetic. The clip showed a woman walking into an elevator; the original featured British-Indian social media influencer Zara Patel, but the version that went viral had her face swapped with that of Telugu actor Rashmika Mandanna. It spread quickly precisely because it appeared to show a female celebrity.

In 2020, researchers uncovered a rudimentary but dangerous underground service called DeepNude, which allowed people to create fake nude images by supplying ordinary photographs of an individual. The vast majority of its anonymous users employed it to create non-consensual intimate images of women. In the time since, there have been arrests, police investigations and legal changes to outlaw such images, particularly in several Western nations.
But non-consensual intimate imagery is not the only risk from synthetic media. Last year, Western countries suspected Russia could use deepfakes to justify its invasion of Ukraine. In May this year, a deepfake image of an explosion near the Pentagon briefly rattled stock markets. And while the term deepfake is not usually applied to it, Hollywood studios' use of synthetic media to digitally revive dead actors is among the issues behind the writers' strike. In each case, the technology behind deepfakes has evolved faster than our ability to understand and regulate it.
The Mandanna deepfake shows that India must begin discussions on understanding the threat landscape and on ways to encode protections into law and process. Deepfakes have the potential not just to cause personal injury but to damage national security and erode trust in institutions.