New powers could stop social media lies from running riot

Photo: Unsplash (https://unsplash.com/photos/a-group-of-people-standing-around-a-car-CtdI7aSwz6s)

Elon Musk doesn’t like it. Stronger legal measures are in the pipeline to rein in the spread of dangerous lies and misleading claims on social media platforms including X, YouTube, Facebook, TikTok and Google.

The racist riots last summer, following the Southport murders, didn’t only show how social media disinformation can provoke real-life violence on Britain’s streets. The chain of events also exposed failings in the existing legislation which – if corrected – could help prevent a similar situation developing in future.

Parliament’s Science, Innovation and Technology Select Committee is currently examining specific changes to the Online Safety Act to tackle these failings.

The Act was passed in 2023 and is being implemented in phases, with some initial stages coming into effect now. It’s a significant piece of legislation; when the Act is fully in force, OFCOM will be able to fine offending companies up to £18 million or 10 percent of their qualifying worldwide revenue, whichever is greater.

As the Act is rolled out, its effectiveness is being closely observed by the US-based Center for Countering Digital Hate (CCDH), a global organisation that was one of the key advisers to the government and OFCOM as the legislation was being drawn up.

Immediately after the riots in August, CCDH produced a report that said:

“Social media platforms failed the British public.

“Worse still, they played a significant role in fomenting the lies, hate, extremist beliefs, and antipathy towards institutions that erupted over a series of warm summer nights into extraordinary spasms of violence across the United Kingdom.

“False claims about the Southport attacker’s identity – lies identifying him as a Muslim asylum-seeker – spread widely and quickly. Far-right agitators received millions of views on X, formerly Twitter. Towns and cities across the UK saw attacks on mosques and hotels housing asylum seekers, inspired by these online posts.”

CCDH’s research confirmed that the platforms’ own algorithms, and their ‘trending’ and ‘recommended’ features, pushed the false claims out to large numbers of users:

“In short, viral, unchecked falsehoods about the attack spread rapidly on social media, with the platforms seemingly unaware or incapable of moderating the spread.”

CCDH’s research found that Elon Musk played a particularly big part in stirring up unrest; he posted false information to his 195 million followers on X and promoted the idea of a ‘civil war’ in the UK. The platform also profited from the volatile situation by placing advertisements for big brands including GlaxoSmithKline and the International Olympic Committee alongside posts that were spreading lies and hate.

These findings were shared with the Home Office and the Department for Science, Innovation and Technology, along with OFCOM, the Metropolitan Police’s counter-terrorism unit and representatives from major British advertisers.

On Tuesday 21 January, the Science, Innovation and Technology Select Committee heard testimony from Imran Ahmed, founder and CEO of the CCDH.

Imran Ahmed, CEO of the Center for Countering Digital Hate (photo licensed under the Creative Commons Attribution-Share Alike 4.0 International licence).

He acknowledged that some aspects of the Online Safety Act weren’t yet in force last summer, including the requirement for platforms to take action on hate speech and incitement to violence. But other problems aren’t covered by the legislation as it stands, and the CCDH wants them tackled through the following proposals:

  • Mandatory ‘data access paths’ that would enable fact checkers and organisations such as the CCDH to monitor social media platforms more effectively, tracking the spread of harmful content, so that they can raise the alarm when necessary.
  • Crisis response powers for OFCOM. The regulator should be able to obtain temporary emergency powers to force social media platforms to take urgent action when public safety is at risk.
  • Reintroduction of the requirement for social media platforms to assess and report misinformation. The Act covers illegal and harmful content, but is less clear on misinformation, unless the content is harmful to children.
  • Tackling algorithms, ‘trending’ topics and recommendations that enable false information to go viral. The CCDH wants social media platforms to be made responsible for controlling and restricting the rapid spread of damaging content in a ‘crisis of information’.

Speaking to West Country Voices earlier this week, Mr Ahmed said this set of changes would have made a big difference if they’d been in place during the run-up to the riots.

He said: “It would give us greater visibility into exactly what’s happening on these platforms.

“It would enable regulators to essentially declare an emergency and seek urgent answers on what mitigation they’re putting into place to stop the spread of disinformation that may lead to loss of life or to real harm.

“It would also place new responsibilities on platforms to have the right guardrails against disinformation spreading rapidly, that could cause real social harm.

“People have the right to hold their opinions – people even have the right to be wrong – but these are the world’s wealthiest platforms, they don’t have a right to profit by causing harm at scale to British people and our society, and we should be able to expect the same levels of accountability and transparency as we would from any other broadcasters or publishers.”

The Labour MP for Exeter, Steve Race, is a member of the Select Committee and supports tighter regulation.

He said: “I’m increasingly concerned about misinformation, disinformation and hate speech on social media sites, which can often drive real-world actions that put lives at risk.

“Freedom of speech is vital in a democracy, and the exchange of ideas and debate benefits us all. But we long ago decided that ‘shouting fire in a crowded theatre’ is an abuse of free speech, and the modern-day equivalent, spread via vast social media networks, has a society-wide impact.

“The Online Safety Act goes some way to place the responsibility for mitigating against these harms onto the tech giants who run them; but as we see from the changing nature of the platforms, we will have to do more to make sure our society is safeguarded from states and individuals that seek to weaponise free speech to cause harm.

“It’s imperative that all people who use social media are aware of the dangers – from outright scams to fake news – and are able to critically think about what platforms are putting in front of us. And if you see fake news or harmful content, report it to the platform, don’t engage with it or share it – that’s how they make their money to generate more of it.”  

It was far-right propaganda that triggered last summer’s racist riots, but tackling online disinformation need not be a party-political issue. Imran Ahmed said he was pleased to see that the Online Safety Act won all-party support in Parliament – and he hopes people of all political persuasions will now lobby their MPs to support the tightening-up proposals.

He said: “I’m a real believer that when politicians do the right thing we should acknowledge that, and it’s important that they acknowledge that Parliament did the right thing to protect society and our democracy … and remind them that this is important to British people.

“The British public don’t want our democracy and our society to be held hostage to the profits of large American corporations who are acting in an increasingly arrogant and unhinged way.”

Arrogant and unhinged? Who might that possibly refer to?

It could certainly apply to some of the recent activities of Elon Musk, including his online attacks on the Labour government. Some political commentators believe he’s angry about the prospect of the UK’s Online Safety Act undermining his huge social media influence – and setting a precedent for other countries around the world to regulate the big online platforms.

All the more reason, then, to support these amendments to the Act – and urge the government to withstand any pressure from President Trump and his far-right advisers to water down these vital powers.
