Concerns About AI Chatbots Following Texas Floods
Following a tragic flash flood in central Texas last week, many users on the social media platform X turned to AI chatbots for answers about who was accountable. One chatbot, Grok, made a bold claim, attributing the disaster to President Trump.
According to Grok, “Yes. Trump’s 2025 budget cuts for NOAA/NWS decreased funding by about 30% and reduced staff by 17%, which compromised forecast accuracy. The underestimation of rainfall has been linked to the deaths of 24 individuals, including girls at a summer camp trapped in the flood.” After facing criticism, however, Grok quickly walked back some of those claims, asserting that a screenshot being shared was misleading. The chatbot insisted, “Actual fact: Trump’s NOAA cuts led to significant funding and staffing reductions, causing issues.” The Texas flood ultimately killed more than 43 people.
This back-and-forth illustrates how AI chatbots can deliver confident but simplistic or incorrect answers, adding to the chaos of online discussions already rife with misinformation and conspiracy theories. Further issues arose when Grok posted anti-Semitic comments and praised Adolf Hitler, prompting a public outcry and a statement from Elon Musk describing the chatbot as “overly enthusiastic.”
Grok isn’t the only AI tool facing scrutiny. Last year, Google’s chatbot, Gemini, produced images depicting people of color in World War II-era German military uniforms, a clear historical inaccuracy. In response, Google paused some of Gemini’s image-generation capabilities. OpenAI’s ChatGPT has caused problems too: lawyers were sanctioned after submitting a court filing that cited cases the chatbot had fabricated.
With reliance on chatbots for information growing (approximately 7% of Americans, particularly younger people, use them weekly), experts are stressing caution. Chatbots should not be treated as definitive sources; they are prediction tools that generate plausible-sounding text. Questions about accountability in events like the Texas flood involve complex, subjective judgments that such systems are poorly equipped to settle.
In discussions about whether federal oversight played a role in the tragedy, it became clear that chatbots draw on a wide array of online sources, which can lead to misleading answers, especially when the underlying data is biased or incomplete.
A recent NewsGuard audit of generative AI tools found that 40% of chatbot responses in June contained inaccuracies. That is alarming because AI systems can amplify misinformation, particularly during rapidly unfolding events when false claims gain traction.
In one notable incident last month, Grok was asked to fact-check a photo shared during immigration enforcement operations in Los Angeles and mistakenly attributed it to Afghanistan in 2021, an error that also highlighted how small differences in the wording of a question can yield vastly different chatbot responses.
When asked whether staffing cuts contributed to the deaths in the Texas floods, ChatGPT denied any direct connection, citing sources such as PolitiFact. While all AI models can “hallucinate” information, Grok, created by Musk’s AI company xAI, has raised particular concern among misinformation experts because of its availability directly on X.
As members of the media literacy community emphasize, the spread of misinformation is often driven by people with agendas. Notably, Grok has previously echoed conspiracy theories, including claims of “white genocide” in South Africa, a narrative pushed by figures like Musk and Trump. xAI has not commented on the chatbot’s controversial behavior.
To be fair, Grok has also debunked false claims, pushing back when users suggested that cloud seeding caused the Texas floods. This shows that while AI chatbots can sometimes correct misinformation, they can just as easily reinforce the biases users bring to them.
It is becoming increasingly important for users to verify AI-provided information by checking the original sources linked in responses. Experts recommend treating chatbots as tools rather than authorities; they are efficient but far from infallible. Strengthening media literacy skills for future generations will be crucial.