UK Regulator Takes Action as Social Media Giant Faces Scrutiny
On January 12, the UK's communications regulator Ofcom confirmed a formal investigation into a major social media platform, examining whether it has adequately protected users from harmful online content. The move, grounded in the country's Online Safety Act, signals a growing push to hold digital platforms accountable for harms arising from user-generated content.
AI Feature Under Fire for Generating Explicit Material
The probe follows reports that an AI-powered chatbot on the platform was exploited to create and disseminate explicit imagery of real individuals, including adult women and minors. The incidents have sparked widespread concern over digital privacy, the ethical use of AI, and the spread of non-consensual intimate imagery.
- Regulators are assessing whether safety measures meet legal requirements
- Focus includes algorithmic transparency and response protocols to abuse reports
- Fines could reach £18 million or 10% of qualifying worldwide revenue, whichever is greater, if violations are confirmed
As generative AI becomes more deeply embedded in consumer platforms, the case underscores the need for stronger safeguards against misuse. Its outcome may set a precedent for how tech companies manage AI-driven risks in regulated markets.