The government and other stakeholders will draw up actionable items within 10 days on ways to detect deepfakes, prevent their uploading and viral sharing, and strengthen the reporting mechanism for such content, giving citizens recourse against harmful AI-generated content on the internet, Union information technology and telecom minister Ashwini Vaishnaw said.

“Deepfakes have emerged as a new threat to democracy. Deepfakes weaken trust in the society and its institutions,” the minister said.

Vaishnaw said the regulation could also include financial penalties. “When we do the regulation, we have to be looking at the penalty, both on the person who has uploaded or created as well as the platform,” he said.

The minister met with representatives from the technology industry, including from Meta, Google and Amazon, on Thursday for their inputs on handling deepfake content.

“The use of social media is ensuring that deepfakes can spread significantly more rapidly without any checks, and they are getting viral within a few minutes of their uploading. That’s why we need to take very urgent steps to strengthen trust in the society to protect our democracy,” he said.

Mint had first reported on the government’s intent to regulate deepfake content and ask social media platforms to scan and block deepfakes, in its Thursday edition.

Vaishnaw insisted that social media platforms need to be more proactive considering that the damage caused by deepfake content can be immediate, and even a slightly delayed response may not be effective.

“All have agreed to come up with clear, actionable items in the next 10 days based on four key pillars that were discussed: detection of deepfakes, prevention of publishing and viral sharing of deepfake and deep misinformation content, strengthening the reporting mechanism for such content, and spreading of awareness through joint efforts by the government and industry entities,” Vaishnaw added.

Deepfakes are synthetic or doctored media, digitally manipulated using a form of artificial intelligence (AI) to convincingly misrepresent or impersonate someone.

The new regulation could be introduced either as an amendment to India’s IT rules or as an altogether new law.

“We may regulate this space through a new standalone law, or amendments to existing rules, or a new set of rules under existing laws. The next meeting is set for the first week of December, which is when we will discuss a draft regulation of deepfakes, following which the latter will be opened for public consultation,” Vaishnaw said.

The minister added that ‘safe harbour immunity’ that platforms enjoy under the Information Technology (IT) Act will not be applicable unless they move swiftly to take firm action.

Other aspects discussed at Thursday’s meeting included AI bias and discrimination, and how existing reporting mechanisms could be improved.

The government had last week issued notices to social media platforms following reports of deepfake content. Concerns around deepfake videos have escalated after multiple high-profile public figures, including Prime Minister Narendra Modi and actor Katrina Kaif, were targeted.

The Prime Minister also raised the issue of deepfakes in his address to G20 leaders at the virtual summit on Wednesday.

Industry stakeholders were largely positive about the discussions at Thursday’s meeting.

A Google spokesperson who was a part of the consultation said the company was “building tools and guardrails to help prevent the misuse of technology, while enabling people to better evaluate online information.”

“We have long-standing, robust policies, technology, and systems to identify and remove harmful content across our products and platforms. We are applying this same ethos and approach as we launch new products powered by generative AI,” the company said in a statement.

Meta did not immediately respond to queries.

Ashish Aggarwal, vice-president of public policy at software industry body Nasscom, said that while India already has laws to penalize perpetrators of impersonation, the key will be to strengthen the regulations on identifying those who create deepfakes.

“The more important discussion is how to catch the 1% of malicious users who make deepfakes—this is more of an identification and enforcement problem that we have at hand,” he said.

“The technology today can help identify synthetic content. However, the challenge is to separate harmful synthetic content from harmless content and to remove the former quickly. One tool being widely considered is watermarks or labels embedded in all content that is digitally altered or created, to warn users about synthetic content and its associated risks, and, alongside this, to strengthen the tools that empower users to quickly report such content.”

A senior industry official familiar with the developments said most companies have taken a “pro-regulation stance.”

“However, while pretty much every tech platform today does have some reactive policy against misinformation and manipulated content, they are all pivoted around the safe harbour protection that social platforms have, leaving the onus of penalization at the hands of the user. Most firms will look for such a balance in the upcoming regulations,” the official said.

Compliance on this matter, the official added, could be easier for “larger firms,” leaving industry stakeholders looking at a potentially graded approach to penalties, sanctions and timelines of compliance—akin to how rules of the Digital Personal Data Protection Act are implemented.

“Global firms with larger budgets and English-heavy content could find compliance easier. What will be challenging is to see platforms with a greater amount of non-English language content live up to the challenges of filtering deepfakes and misinformation. This will also be crucial in terms of how such platforms handle electoral information.”

Rohit Kumar, founding partner at policy think tank The Quantum Hub, added that regulation of deepfake content “should be cognizant of the costs of compliance.”

“If the volume of complaints is high, reviewing take down requests in a short period of time can be very expensive. Therefore, even while prescribing obligations, an attempt should be made to undertake a graded approach to minimise compliance burden on platforms… ‘virality’ thresholds could be defined, and platforms could be asked to prioritise review and takedown of content that starts going viral,” Kumar said.

He added that the safe harbour protection should not be diluted entirely, as “the liability for harm resulting from a deepfake should lie with the person who creates the video and posts it, and not the platform.”



Updated: 23 Nov 2023, 11:06 PM IST