Inquiry into misinformation and harmful algorithms asks gruelling questions of senior figures in social media – and MPs are not happy with their answers.
There was a febrile atmosphere in Parliament yesterday morning, as senior figures from four leading technology companies sat before MPs to update them on their efforts to tackle online misinformation and the harmful effects of some algorithms. The MPs were clearly not impressed by the answers provided.
The Science, Innovation and Technology Committee had heard evidence from Google, Meta, TikTok and X last year as part of its inquiry into social media, misinformation and harmful algorithms. At the heart of the inquiry is a fundamental question: how effectively can and do tech companies police the content on their own platforms, which are used by so many millions of us every day? In effect, does government need to step in?
Yesterday, MPs wanted to hear what progress had been made on these pressing issues. Answering questions were Wifredo Fernández, the Director of Global Government Affairs at X, Alistair Law, the Director of Public Policy for Northern Europe at TikTok, Rebecca Stimson, the UK Public Policy Director at Meta, and Zoe Darmé, the Director for Trust, Knowledge and Information Products at Google.
The tone of the session was rather set by the first question, asked by Dame Chi Onwurah. At the inquiry a year before, tech companies had assured MPs that they were addressing misinformation and other harmful forms of content on their platforms. Dame Chi cited examples from recent months such as misinformation posted about the mass shooting at Bondi Beach or in relation to the current situation in Iran, then asked, “Has anything changed for the better?”
In response, Ali Law at TikTok said that, in the first quarter of 2025, 99.3% of content that violated TikTok’s community guidelines had been removed proactively, without it being reported by users. Of this, some 95% was removed within 24 hours of being posted, and 90% was removed before it had received any views.
The other witnesses provided details of their policies and the increasingly sophisticated methods used to enforce them, such as through AI analysis of content as it is posted, conducted at scale.
The MPs then dug into those policies. For example, Adam Thompson MP asked about political bias in the way X ranks content, and the influence of its executive chairman and chief technology officer, Elon Musk.
“Mr Musk posts and participates in the public conversation [on X] individually,” replied Wifredo Fernández: “It does not necessarily link to our operation as a platform … We do not have a political perspective as a platform.”
But Kit Malthouse MP later picked up on this, arguing that someone – a person or people – had to decide on the rules and policies in place at X. So, who made those decisions? “I think we all understand who the owner of the company is,” replied Fernández.
Malthouse sought clarification: “He is directly involved in the production of these policies?”
“Yes, as any leader of a company would be,” said Fernández.
George Freeman MP raised his own experience of disinformation: last autumn, a deepfaked video posted online appeared to show him saying he had defected to another political party. It was posted on YouTube and Facebook, and then shared on X. The platforms have rules about misinformation in the lead-up to elections, but those did not apply in this case and the videos were not judged to be in breach of the rules.
Challenged on this, Zoe Darmé said that, on YouTube, the videos could still “be reported and reviewed against community guidelines and removed.”
Freeman responded that the videos were still on YouTube, and said that if platforms would not police issues such as this, politicians would have to intervene. “My instinct is to pass a very simple law,” he said, “that somebody’s identity belongs to them and cannot be stolen, used or misappropriated, whatever the purpose.”
Later, Samantha Niblett MP raised the issue of a restaurant in her constituency that had a Facebook page with 32,000 followers. That page was taken down, and all the owner was told was that it had “violated community standards”. There was no means to contact anyone at Facebook to dispute this claim or address the issue, and the restaurant saw a corresponding loss of custom.
Rebecca Stimson from Meta undertook to investigate the case, if it were still an issue.
“I think that that is absolutely unacceptable,” responded the chair of the committee. “You are saying that politicians should be flagging to you when small businesses, which are the lifeblood of our country economically and the driver of growth, have their entire revenue stream taken offline. You are accepting that your systems do not work.”
MPs also asked how platforms protect children and prevent them from accessing age-inappropriate content, and what the tech companies plan to do differently in future. They asked to see the tech companies again in a year, if not before, for a further update on progress.
Watch the whole session of the Follow-up on Social media, misinformation and harmful algorithms inquiry at Parliamentlive.tv. See also the Science, Innovation and Technology Committee pages on the UK Parliament website.