
Spectacle of AI Tests Boundaries of Truth

Hyderabad: As Guy Debord once wrote, "The spectacle is not a collection of images, but a social relationship between people mediated by images." Now that mediation has learned to speak, sing and even paint using our voices, the line between appearance and reality has blurred. This is the world of artificial intelligence (AI) and the rise of AI-generated content.

This week, the Indian government proposed new rules that would require every post, image, song or video created with AI to be clearly labeled. The idea seems simple enough, but can authenticity be legislated?

The push for disclosure came after a series of digital hoaxes that appeared almost theatrical. A deepfake of actor Rashmika Mandanna went viral on social media, and this month artist Abhay Sehgal was accused by his colleagues of passing off AI-generated collages, some of them sold to celebrities including Ranbir Kapoor, as oil paintings. "He steals other artists' work and calls it original," one user wrote, echoing a wider complaint from the artist community that Sehgal not only uses artificial intelligence to create his works but also lifts pixels from other artists while claiming the results as "original". Cloned voices of Arijit Singh and Sonu Nigam, meanwhile, were used to produce uncanny renditions that many singers on social media called out.

Each of these episodes raises the same worry: what counts as real? Courts in Delhi and Mumbai have described deepfakes as a threat that is "nearly impossible to detect". And while a few platforms like Instagram have begun tagging AI content, a Stanford study this year found that even when users see a tag that says "Created by AI," most still believe what they see. If seeing once meant believing, that bond has frayed.

Under the draft rules, social media companies and content creators must disclose and label synthetic materials. Images must carry a tag for at least ten percent of their area, and audio and video must carry a tag for ten percent of their length. Platforms with more than five million users will be required to collect statements from uploaders and verify them. A false claim may result in takedown and loss of legal protection.

Lawyer Aditya Kashyap says this step is long overdue but cumbersome. He notes that India still prosecutes deepfakes under scattered laws on obscenity and fraud, while copyright itself predates the algorithmic age. “You can’t match the speed of production tools with the provisions written for film reels,” he says. “We need penalties that recognize scale and intent, and a structure that understands both.”

Technology experts share the doubt. "Detection is also artificial intelligence," says Rajat C, an engineer at a global technology firm. "If detection is easy today, another model will come out tomorrow. It's always a game of catching up." He describes experiments in which devices cryptographically sign data at the moment of creation so viewers can verify a source, but warns that even this can quickly become murky. "Every photo already has some level of AI. The filters on Instagram, or the perfect phone camera, work not because of the quality of the lens but because of the AI. So what's real?"

Artists find the debate both humorous and brutal. Designer Aatmashri Sanyal recalls her reaction to the Sehgal scandal. "I joked that I felt left out because they didn't steal my work," she says. "Not being plagiarized almost felt like my work didn't count." She supports labeling but wants nuance. "During my internship days, we used AI in place of stock photos, not in place of real art. This feels unethical. It worries me that people just generate posters and call it design." She proposes a consent-and-copyright model in which artists would choose whether their work can train algorithms, and get paid each time it does.

“Every AI output must explain its source and change history,” says Kashyap. “Intermediaries are required to maintain audit trails, respond quickly to takedown requests, and submit transparency reports.” He argues that India should create a national mission to monitor synthetic media. “AI must remain a tool for creativity, research and management. The problem begins when deception becomes the product.”

While the rules are a step forward, they leave many questions unanswered, and the show goes on. As a world built on imitation learns to imitate itself, the law trails behind, asking what truth looks like.

gfx:

1. According to the Delhi and Bombay High Courts, deepfakes are now "almost impossible to distinguish".

2. The spread of synthetic media has raised serious concerns about what counts as real online.

3. A Stanford study showed that even when content is labeled “AI-generated,” most users still believe it.

4. India’s new draft rules require all AI content to be clearly labeled; 10 percent tag space for images and 10 percent time for audio or video.

5. Social media platforms with more than five million users must collect and verify uploaders' statements; penalties for false claims include takedown and loss of legal protection.

6. Lawyer Aditya Kashyap says existing laws on obscenity and fraud are outdated for the AI era and calls for new penalties that reflect scale and intent.
