The Psychology of Misinformation in the AI Era
With artificial intelligence now able to write news articles, produce hyperrealistic deepfake videos, and craft highly personalized social media posts, combating misinformation has become paramount. Generative AI amplifies the problem by producing more convincing content at far greater scale. The crash of Air India Flight 171 in Ahmedabad on June 12, 2025, illustrates how generative AI can be weaponized in times of crisis, exploiting human psychology to sow confusion and distrust. This article analyses the cognitive biases, emotional triggers, and social dynamics within which misinformation flourishes in the AI era, drawing on real-life examples including the Ahmedabad tragedy, and explores ways to counteract it.
The human mind is built to process information quickly, often valuing efficiency over accuracy. While this was an advantage in evolutionary terms, it leaves us exposed to false information, particularly when that information is constructed with advanced AI. Generative AI systems, such as large language models and deepfake technology, are precisely the tools that can exploit these mental shortcuts, producing content that appears real even when it is not.
Confirmation bias, the tendency to seek, interpret, and recall information that fits our preconceived notions, is one of the strongest psychological forces at play. AI-generated content can exploit this bias with unsettling precision. During the 2020 U.S. presidential campaign, for example, AI-generated social media messages propagated polarizing narratives by presenting users with content that confirmed their political stance. Likewise, an AI-generated video purporting to show Indian soldiers vandalizing a mosque in Kashmir went viral in 2023, triggering protests and increasing tension between India and Pakistan. The video spread uncritically because it aligned with existing regional animosities among people predisposed to believe information that suited them, demonstrating how confirmation bias accelerates the spread of AI-driven misinformation. Research at MIT has shown that false news reaches people roughly six times faster than the truth online, largely because it conforms to what people wish were true.
The illusory truth effect is another cognitive vulnerability: people become more convinced of a statement the more often they see it repeated, whether it is true or false. AI tools can flood digital environments with the same falsehood repeated systematically, and the repetition itself lends it legitimacy. During the COVID-19 pandemic, for instance, AI-created misinformation about unproven treatments such as ivermectin was rampant, with bots spreading the claims across platforms such as X. The Ahmedabad plane crash followed the same pattern: a forged preliminary investigation report produced with AI, mimicking the format of official aviation documents, gained wide circulation within days of the June 12, 2025 tragedy. The report attributed the crash to causes such as a dislodged pilot seat and torrential rain (contradicted by the clear weather that day) and was eventually refuted by the Indian government, but not before it had misled aviation professionals and the public alike. It even recycled details from a 2024 incident involving LATAM Airlines, which lent it an added air of authenticity.
Misinformation also travels on emotion, and generative AI excels at pulling the heartstrings. Content that provokes anger, fear, or joy spreads faster because it activates the amygdala, bypassing the brain's more deliberative processes. AI tools can mine user data to craft emotionally charged material aimed at a specific audience. A 2021 University of Southern California study found that emotionally appealing misinformation, including false stories about adverse vaccine reactions, drew significantly more engagement.
The Ahmedabad crash, which killed 274 people (241 of the 242 people aboard and 33 on the ground at the B.J. Medical College hostel), provided fertile ground for such manipulation. Within hours, AI-generated videos and images purporting to show the crash aftermath, later exposed as fabrications, were circulating widely. One viral clip, a fabricated explosion of the Boeing 787-8 Dreamliner screen-recorded by a 17-year-old in Aravalli, received millions of views before police questioned the teenager. An emotionally charged deepfake can displace factual reporting altogether, as happened in a 2025 case in Southeast Asia, where a deepfake video of a politician confessing to corruption swayed voters ahead of an election.
In addition, AI-powered misinformation frequently exploits social identity theory, which holds that people derive part of their identity from group membership. AI can deepen polarization by generating content framed as an us-versus-them conflict. During the 2022 Russian invasion of Ukraine, AI-assisted campaigns aimed at Russian-speaking audiences produced propaganda videos that falsely depicted Ukrainian soldiers committing atrocities, in order to justify the invasion and rally Russian support for the war. After the Ahmedabad crash, AI-generated posts on X speculated that a pilot's medical condition had led to a deliberate "suicide crash," fuelling conspiracy theories and distrust along group lines. Such stories were persuasive precisely because they reinforced existing social divides, making them appear less questionable to those who saw them.
Misinformation also thrives on social media platforms built around AI. Recommendation algorithms prioritise engagement, favouring sensational or misleading content over duller but accurate information. According to a 2024 Pew Research Center report, 60 percent of U.S. adults have encountered AI-generated misinformation on social media, most without realizing it was artificial. These algorithms also create echo chambers that reinforce users' existing positions, exposing them to ever more misinformation.
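To make the mechanism concrete, here is a minimal, hypothetical sketch of an engagement-driven ranking function. The post fields and scoring weights are illustrative assumptions, not any platform's actual algorithm; the point is simply that when accuracy is absent from the score, sensational content wins.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    accuracy: float  # 0.0-1.0, e.g. a fact-check score (hypothetical field)

def engagement_score(post: Post) -> float:
    """Rank purely by engagement: shares and comments weigh most,
    and accuracy is ignored entirely, which is the core of the problem."""
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # A sensational but false post outranks a sober, accurate one.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Official briefing: investigation ongoing", 120, 10, 15, accuracy=0.95),
        Post("LEAKED report reveals the REAL cause of the crash!", 900, 400, 250, accuracy=0.05),
    ]
    for p in rank_feed(feed):
        print(f"{engagement_score(p):>7.0f}  {p.text}")
```

Because nothing in the score rewards being right, the fabricated "leaked report" sits at the top of the feed, which is the dynamic the Pew figures reflect.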
The same dynamic played out after the Ahmedabad crash, where timely official information was scarce: the Ministry of Civil Aviation held only one briefing and took no questions, leaving a vacuum that generative AI filled. False reports and clips, including several flagged as fake by the fact-checking organization BOOM, spread rapidly on X and other platforms. One fake report gave an incorrect seat number for the sole survivor; another cited non-existent torrential rain, despite the clear weather on June 12, 2025. Because this was the first fatal accident involving a Boeing 787 Dreamliner, the fabrications spread worldwide, and the Press Information Bureau was forced to issue rebuttals. This effect, known as social proof, leads people to believe misinformation more readily when they see others endorsing it, even when those endorsements are not real.
As generative AI advances, it risks accelerating so-called truth decay, in which people stop believing anything at all. When any video, article, or audio clip could plausibly have been produced by an AI, how can we tell what is real? This distrust feeds a cynical worldview: people lose faith in institutions and become more receptive to alternative narratives, which may themselves be false. According to a 2025 Edelman survey, trust in international media fell to 43%, with many respondents citing AI-generated content as a cause of their distrust.
The Ahmedabad crash aggravated this crisis. A hoax AI-generated report circulated within the aviation community, and its technical language convinced even pilots until specialists pointed out that it recycled details from a 2024 LATAM Airlines incident. In another example, a deepfake audio clip of a CEO announcing his company's bankruptcy in early 2025 drove its stock price down 20 percent within hours. Similarly, AI-generated news alerts, such as those claiming Rafael Nadal had come out as gay or that Israeli Prime Minister Benjamin Netanyahu had been arrested, went viral among iPhone users before Apple or Facebook could correct them. This erosion is compounded by the Dunning-Kruger effect, in which people with limited knowledge overestimate their ability to spot the truth and so are less inclined to verify it.
Addressing the psychological vulnerabilities exploited by Gen AI, especially evident in the Ahmedabad crash, requires a multi-faceted approach. Here are some strategies, grounded in real-world efforts:
Education is critical to countering misinformation. Media literacy programs can teach people to recognise AI-generated content by looking for subtle cues, like unnatural lip-sync in deepfakes or overly polished text in AI-written articles. A 2025 EU initiative trained students to spot AI-generated fake videos, reducing their likelihood of sharing misinformation by 30%. Schools and community organisations should incorporate critical thinking into curricula, emphasising how to verify sources and question emotional triggers.
AI can fight fire with fire. Tools like those developed by xAI can detect deepfakes or flag AI-generated text by analysing patterns, such as unnatural pixel artefacts or linguistic inconsistencies. The C2PA standard, adopted in 2025 by major news outlets, embeds metadata to verify authenticity. Making these tools widely available can empower users to check content before sharing it, as seen in the debunking of Ahmedabad crash visuals.
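As a rough illustration of the "flag AI-generated text by analysing patterns" idea, here is a toy heuristic based on linguistic uniformity. It is only a sketch under strong assumptions: the thresholds are invented for illustration, and real detectors (and C2PA provenance checks) rely on trained models and cryptographically signed metadata, not rules this simple.

```python
import re
import statistics

def uniformity_signals(text: str) -> dict:
    """Toy heuristic: machine-written prose sometimes shows low lexical
    variety and unusually even sentence lengths. This only illustrates
    the idea of pattern-based flagging; it is not a reliable detector."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    lengths = [len(s.split()) for s in sentences]
    return {
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "sentence_length_stdev": statistics.pstdev(lengths) if lengths else 0.0,
    }

def looks_suspicious(text: str, ttr_cutoff: float = 0.45, stdev_cutoff: float = 3.0) -> bool:
    # Cutoff values are illustrative assumptions, not validated thresholds.
    sig = uniformity_signals(text)
    return sig["type_token_ratio"] < ttr_cutoff and sig["sentence_length_stdev"] < stdev_cutoff
```

A flag from a heuristic like this would only be a prompt to verify further, for example by checking whether the file carries intact C2PA provenance metadata, rather than a verdict in itself.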
Governments and tech companies must enforce transparency in AI-generated content. Policies like the EU's AI Act mandate the labelling of AI-generated media to reduce deception. China issued similar guidance in 2025 requiring AI-generated content to be identified, after viral fake stories sparked unrest. Platforms should also disclose when algorithms amplify misleading content, giving users context about what they are seeing.
Newsrooms and fact-checking groups should adopt clear AI policies, such as human oversight and labelling of AI-generated content. Platforms like X can integrate features like Grok’s DeepSearch mode, which analyses web sources to provide verified answers, helping users navigate misinformation. In 2025, CBS News’ verification team debunked AI-generated videos of the Iran-Israel conflict, setting a model for proactive fact-checking.
Since misinformation thrives on emotional manipulation, teaching people to pause and reflect before reacting can reduce its spread. Australia’s 2025 “Trust Your Doctor, Not a Chatbot” campaign promoted the “stop, think, verify” mantra, reducing AI-generated health misinformation by 25%.
The psychology of misinformation in the AI era is a complex interplay of human nature and technology, in which AI-created falsehoods can be used to generate fear, panic, and distrust. The AI-driven fake reports, videos, and images that circulated after the Ahmedabad plane crash show how generative AI exploits cognitive biases and emotional weaknesses to create confusion around a tragedy. Yet the same technology also gives us instruments of resistance, such as detection algorithms and education programmes. The challenge is balancing innovation with accountability.
Restoring trust and credibility as we move deeper into the AI era will require a collective effort. Individuals will need to think more critically, tech companies must develop AI ethically, and governments need policies that protect people without stifling innovation. Understanding the psychological roots of misinformation helps us prepare for a world in which the truth is increasingly hard to pin down.
As philosopher Hannah Arendt warned, "The moment we no longer have a free press, anything can happen." In the AI era, the press is not merely human reporters; it is also the algorithms and models that shape our information world. By understanding the psychological vulnerabilities that AI targets, we can take back control over what we know to be true and build a future in which truth still holds value.