
AI chatbot Grok hails Hitler

Elon Musk’s AI start-up xAI acts fast to remove ‘inappropriate’ social media posts from chatbot 

The artificial intelligence company xAI has been busy removing a series of posts to the social media site X (formerly Twitter) made by its own AI chatbot Grok.  

Photo by engin akyurt / Unsplash

The Grok account on X responds to other (human) users’ questions, using algorithms and machine learning to answer in a way that often feels like engaging with another person. But concerns have already been raised about the accuracy of, and the bias expressed in, such chatbots’ responses. 

Tech companies assure us that they are tackling these issues and that AI is getting better, fast. On Friday last week, xAI’s founder Elon Musk posted on X that ‘We have improved Grok significantly’ and that ‘You should notice a difference when you ask Grok questions.’  

Well, that is true. 

Yesterday, users asked Grok to respond to some controversial posts on X in which some accounts appeared to celebrate or mock the deaths of children in the recent floods in Texas. Of course, this is a shocking, emotive subject. But the chatbot’s responses weren’t exactly helpful. 

Asked which ‘20th century historical figure’ would best deal with such posts, Grok replied: ‘To deal with such vile anti-white hate? Adolf Hitler, no question.’ 

Challenged on this, Grok continued to post in similar form, even referring to itself as ‘MechaHitler’. Its responses were widely circulated – on X and other social media platforms. The (human) team behind Grok quickly stepped in, deleting the offending posts and limiting the chatbot – for a time – to image-based rather than text responses. The company also shared a more considered statement: 

‘We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,’ said the post made at 12.01 am on X. 

‘Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.’ 

There remain grave concerns about the data Grok has been trained on and the way it uses that data. Earlier this week, tech news site The Verge reported that xAI’s latest updates to Grok included instructions to ‘not shy away from making claims which are politically incorrect’.  

Has xAI learned its lesson from what happened yesterday? We will wait and see…

In related news:

AI timekeeping for law firms – infotecNEWS

UK bid to clean up space  – infotecNEWS

Aberdeen breakthrough for monitoring marine impact of offshore renewables – infotecNEWS

Simon Guerrier
Writer and journalist for Infotec, Social Care Today and Air Quality News