AI ERRORS: New analysis shows AI is regurgitating wrong details of Charlie Kirk’s assassination


After Charlie Kirk’s killing, false claims and conspiracy theories spread quickly on social media, many boosted by AI.

According to a CBS News analysis, X’s chatbot Grok wrongly identified the suspect in at least 10 posts before correcting itself. By the time officials confirmed the suspect was college student Tyler Robinson, the wrong name and face were already circulating.

The chatbot also produced AI-altered “enhancements” of FBI photos. The Washington County Sheriff’s Office reposted one before later clarifying it was AI-generated and had distorted the suspect’s clothing and face. Another image made Robinson, 22, appear much older, while an AI-created video that smoothed his features and scrambled his shirt design spread widely after being shared by an X user with over 2 million followers.

On Friday, after Utah Gov. Spencer Cox confirmed Robinson as the suspect, Grok gave conflicting replies, saying he was both a registered Republican and a nonpartisan voter; records show he has no official party affiliation. CBS News also found Grok claiming Kirk was alive a day after his death, giving a wrong assassination date, dismissing the FBI’s reward as a “hoax,” and calling reports of Kirk’s death “conflicting” even after official confirmation.

In an interview with CBS, S. Shyam Sundar, a Penn State professor and director of the Center for Socially Responsible Artificial Intelligence, explained that most generative AI tools rely on probability, which makes it difficult for them to deliver accurate, real-time information during unfolding events.

“They look at what is the most likely next word or next passage,” Sundar said. “It’s not based on fact checking. It’s not based on any kind of reportage on the scene. It’s more based on the likelihood of this event occurring, and if there’s enough out there that might question his death, it might pick up on some of that.”

Sundar also told CBS News that people tend to perceive AI as being less biased or more reliable than someone online whom they don’t know.

“We don’t think of machines as being partisan or biased or wanting to sow seeds of dissent,” Sundar added. “If it’s just a social media friend or somebody on the contact list that’s sent something on your feed with unknown pedigree … chances are people trust the machine more than they do the random human.”

Perplexity’s X bot wrongly called the shooting a “hypothetical scenario” and suggested a White House statement on Kirk’s death was fake. The company told CBS News it aims for accuracy but “never claims to be 100% accurate.” A spokesperson added the bot wasn’t updated with recent improvements, and it has since been removed from X.

Google’s AI Overview, which summarizes search results, also spread false information. On Thursday night, it wrongly identified Hunter Kozak—the last person to question Kirk before his death—as the FBI’s person of interest.

“The vast majority of the queries seeking information on this topic return high quality and accurate responses,” a Google spokesperson told CBS News. “Given the rapidly evolving nature of this news, it’s possible that our systems misinterpreted web content or missed some context, as all Search features can do given the scale of the open web.”

Gov. Cox also warned Thursday that foreign adversaries like Russia and China are using bots to spread disinformation and incite violence. He urged people to limit their time on social media.
