Panel Discussion on Securing the Feed: Social Media’s Strategies to Combat Deepfakes
In an era where Artificial Intelligence has made remarkable strides, one facet that has emerged as both intriguing and concerning is the realm of deepfakes. Defined as the digital manipulation of video, audio, and images using AI, deepfakes pose a significant threat because they blend seamlessly into reality, potentially leading to reputational damage, the fabrication of evidence, and an erosion of trust in democratic institutions.

While our ongoing dialogue about women’s inclusion in the field of AI is crucial, it is equally imperative to address the alarming trend of women increasingly becoming victims of the perils associated with deepfake technology. Recent incidents, such as the circulation of deepfakes featuring celebrities like Rashmika Mandanna and Taylor Swift, have garnered millions of views and raised serious concerns about the potential misuse of this technology. The swift action taken by social media platforms, like X (formerly Twitter), to remove Taylor Swift’s deepfakes underscores their ability to react promptly when a high-profile figure is affected.
However, this prompts a vital question: if such responsiveness is possible for celebrities, can we not expect the same urgency when the privacy and well-being of an average individual are at stake? Reports suggest a troubling surge in the creation and circulation of deepfakes since the advent of AI and AI-based applications. Our discussion aims to delve into the dynamics of this comprehensive ecosystem, involving multiple stakeholders such as social media platforms, users, lawmakers, and technology developers and deployers.
IGPP organised a discussion on the above themes, titled Securing the Feed: Social Media’s Strategies to Combat Deepfakes, with the following panelists:-
Ms. Maya Sherman, AI Policy Researcher & Ethicist
Mr. Aman Taneja, Principal Associate and Lead- Emerging Technologies, Ikigai Law
Moderator: Dr. Manish Tiwari, Director, Institute for Governance, Policies and Politics
Following were the pointers for the discussion:-
1. Balancing Public Safety and Social Media Platforms’ Autonomy:
The actions of social media platforms, like X, in promptly addressing deepfake incidents, and whether the measures taken by platforms strike an appropriate balance between protecting public safety and preserving user experience.
2. Safe Harbor Provision and Government Intervention:
The impact of the safe harbor provision on social media platforms’ accountability for deepfake content, and how important government intervention is in balancing the protection of free expression with the need for responsible content moderation.
3. Accessibility and Impact on Common Users:
Whether the quick actions taken by platforms to remove deepfake content are accessible and understandable for common users; the grievance redressal mechanisms in operation; and the preventive measures that different social media platforms can take.
4. Evolving AI and Regulatory Landscape:
The sufficiency of existing laws and the necessity of new regulations in combating the menace of AI, especially deepfakes.