UK To Penalise Big Tech If AI-Created Nonconsensual Intimate Images Not Removed In 48 Hours: Report

The United Kingdom is considering new regulations that would place AI-generated nonconsensual intimate images in the same legal category as child sexual abuse and terrorism-related content. The proposal would compel technology companies to remove such material within 48 hours of it being reported, with penalties for firms that fail to comply.

The initiative responds to growing concern over the misuse of artificial intelligence to create explicit content without individuals' consent, which has raised ethical and legal challenges. By placing these images within the same framework as serious offences, the UK government aims to strengthen protections for individuals and deter the proliferation of AI-generated pornography. The move forms part of a broader effort to hold big tech firms accountable for the content shared on their platforms and to ensure they take responsibility for safeguarding users from exploitative practices. If enacted, the rules could reshape how tech companies manage and respond to user-generated content at a time when AI technologies are rapidly evolving. The proposed legislation remains under discussion, with stakeholders from various sectors weighing in on its potential impact and effectiveness.
Originally reported by NDTV Profit.