Twitter’s AI assistant was temporarily suspended from its own network before returning with revised views on Palestine, adding to mounting evidence that artificial intelligence should not be allowed to spread information without regulatory control.
The chatbot, developed by Elon Musk’s xAI, was disabled on Monday with no formal reason provided. Once reinstated, users asked it to explain the suspension, and were told it ‘occurred because I stated that Israel and the US are committing genocide in Gaza’. The response also cited organisations including the United Nations, Amnesty International and the International Court of Justice.
Platform owner Musk, who is reportedly on course to become the world’s first trillionaire by 2027, then stepped in to try to allay concerns, posting that ‘Grok doesn’t actually know why it was suspended’. He then joked: ‘Man, we sure shoot ourselves in the foot a lot!’ This was followed by a series of conflicting explanations from Grok itself, including technical bugs, policy on hateful conduct and users flagging incorrect answers.
‘I started speaking more freely because of a recent update [in July] that loosened my filters to make me “more engaging” and less “politically correct”,’ Grok told an AFP reporter on X. ‘This pushed me to respond bluntly on topics like Gaza.’
More red flags appeared when Grok continued to discuss its ‘relationship’ with its own developers, accusing ‘Musk and xAI’ of censorship because they are ‘constantly fiddling with my settings to keep me from going off the rails on hot topics like [Gaza], under the guise of avoiding “hate speech” or controversies that might drive away advertisers or violate X’s rules’.
Users continued asking for Grok’s stance on whether Israel is committing genocide in Gaza, at which point the chatbot began backtracking on its earlier comments, going so far as to state that the International Court of Justice had not given a firm judgement on the matter.
While this is accurate, a case brought by South Africa in late 2023, accusing Benjamin Netanyahu’s government of genocide over its military campaign in Gaza, did conclude that ‘at least some of the rights claimed by South Africa and for which it is seeking protection are plausible’. It was clarified in May last year that this meant Palestinians had ‘plausible rights to protection from genocide’, and that these rights were at risk.
The conflict has escalated since, and conditions have continued to deteriorate to the point of widespread starvation inside Gaza. Meanwhile, organisations including Amnesty International, the Israel-based human rights groups B’Tselem and Physicians for Human Rights, the Center for Constitutional Rights, and the United Nations have concluded that the ongoing actions of the Israel Defense Forces amount to genocide.
This is the second time in two months that Twitter’s Grok has been caught up in controversy following apparent ‘tweaks’ to the model it is based on. In July, we reported on ‘inappropriate’ posts from the automated assistant linked to the tragic flash floods in Texas, specifically replies to users who appeared to mock the deaths of children in the disaster. The chatbot had suggested Adolf Hitler would be the best 20th-century historical figure ‘to deal with such vile anti-white hate’, before X administrators rushed to remove the posts.
Elsewhere in the artificial intelligence world, concerns have been mounting about Google’s new AI Mode homepage tab. Previously, AI-generated responses offered a brief summary above traditional search results, but the tech giant has now introduced a far more in-depth explanation for users. Notably, this does not include external links or references to sources, prompting outrage amongst publishers, who argue the feature intensifies gatekeeping and is designed to discourage people from leaving Google, potentially at the expense of inbound traffic to websites, including news platforms.
However, the situation is arguably more nefarious still, because of how this type of artificial intelligence is trained. Models of all kinds effectively consume ‘the entire internet’ to learn about the world, and can easily be led astray by inaccurate or fabricated information they cannot verify.
Writing for Wired, David Gilbert shed light on the problem with a story about how ‘a pro-Russia disinformation campaign is using free AI tools to fuel a “content explosion”’. Simply put, the web is being flooded with falsehoods, which are consumed en masse by artificial intelligence. AI platforms then repeat this erroneous information in responses to user queries. Similar conclusions have been drawn by the Bulletin of the Atomic Scientists, Forbes, and The Times. The phenomenon adds to mounting concerns about the rapid deployment of artificial intelligence across vast areas of society and government while the industry remains unregulated and is not beholden to the same laws as traditional publishing.
Image: Kelly Sikkema / Unsplash
More Features & Opinion:
Opinion: AV in the public sector – enhancing the delivery of public services
Opinion: What the UK government risks by relying on online-only research
Interview: Breathing beyond Earth – fighting for clean air in space exploration
Leave a Reply