TL;DR
Ofcom has announced that X has agreed to new measures aimed at reducing illegal hate and terror content on its platform in the UK. The platform commits to faster assessment of reported content and collaboration with experts, though details remain vague. The agreement marks progress, but questions about enforcement and proactive detection remain unanswered.
British online safety regulator Ofcom has announced that it has accepted new commitments from X aimed at reducing illegal hate and terror content in the UK, marking a significant step in regulatory efforts to curb harmful online material.
Under the agreement, X commits to blocking access within the UK to accounts reported for posting illegal terrorist content, specifically those linked to UK terror groups. The platform also pledges to assess at least 85 percent of terror and hate speech reports within 48 hours, according to Ofcom. Additionally, X will collaborate with experts on reporting systems for illegal content and will submit quarterly performance data to Ofcom over the next year to demonstrate compliance.
Ofcom’s online safety director, Oliver Griffiths, acknowledged that while these commitments are a positive development, they represent only a beginning. He emphasized the continued presence of terrorist content and hate speech on major social media sites and called on platforms to take stronger action. Ofcom’s investigation into X’s handling of illegal content remains ongoing, including scrutiny of Grok, the platform’s AI tool, after incidents in which it was used to digitally undress individuals without their consent.
Why It Matters
This development is significant because it represents a formal regulatory step towards holding social media platforms accountable for illegal content in the UK. The commitments could lead to faster removal of harmful material and greater oversight, potentially reducing the spread of terrorist and hate speech online. However, the commitments are voluntary and lack specific enforcement mechanisms, raising questions about their long-term effectiveness.
Background
In December, Ofcom launched a compliance probe into social media platforms, including X, to examine whether they have adequate systems to combat illegal hate and terrorist content. This followed concerns over the persistence of such material, including recent incidents involving AI tools like Grok, which was used to digitally undress individuals without consent. The probe aims to assess the platforms’ current measures and push for improvements, amid ongoing debates about the regulation of online content in the UK.
“We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites. We are challenging them to tackle the problem and expect them to take firm action.”
— Oliver Griffiths, Ofcom’s online safety director
“These commitments are a step forward, but there’s a lot more to do.”
— An Ofcom spokesperson
What Remains Unclear
It remains unclear how strictly X will adhere to these commitments over time, particularly regarding proactive content detection and the use of automated moderation. Also uncertain is how effective the voluntary measures will prove in practice, and whether Ofcom will impose penalties if X fails to meet the targets.
What’s Next
Next steps include Ofcom monitoring X’s quarterly reports over the coming year to evaluate compliance. The regulator may consider enforcement actions if commitments are not met, and ongoing investigations into AI tools like Grok will continue to assess platform safety measures.
Key Questions
Will X be fined if it fails to meet these commitments?
While the commitments lay the groundwork for potential fines, Ofcom has not yet specified enforcement actions. The regulator can impose penalties if X fails to uphold the agreed measures.
Are these commitments legally binding?
No. They are voluntary commitments accepted by X following Ofcom’s review. However, the quarterly reporting arrangement gives Ofcom a basis to monitor compliance and potentially pursue enforcement through future measures.
What specific actions will X take to remove illegal content?
X has committed to assessing at least 85% of reports within 48 hours and working with experts on reporting systems, but details about proactive content detection or automated moderation are not specified.
Does this mean the problem of illegal content is solved?
No, these commitments are a step forward, but the presence of illegal hate and terror content remains a concern. Continued monitoring and further action are needed to address the issue comprehensively.