Google’s AI is completely fabricating fake quotes to smear truth tellers

11-3-2024 | Natural News | 710 words
We’ve heard a lot recently about all the jobs AI is taking over. Coders, graphic designers and writers who work for companies that don’t value original output that was crafted with care by actual humans have been increasingly finding themselves pushed out of their roles in favor of computer programs. But there are a few jobs that AI is remarkably skilled at that don’t seem to get as much attention: public relations, smear campaigns and censorship.

All of these functions are on full display with Google’s Gemini, an AI that keeps finding itself in the news for its shameless fabrications and hallucinations, all of which conveniently support liberal stances while casting those who dare to tell the truth and go against popular narratives in a negative light.

CJ Hopkins recently exposed how Google’s “multimodal large language model” did its best to make him seem like a lying and uneducated fool. If you’re not familiar with Hopkins, here’s what Gemini said when he asked the chatbot who he is.

“CJ Hopkins is an American playwright, novelist and political satirist. He has been a controversial figure due to his views on various societal and political issues, particularly those surrounding the COVID-19 pandemic.”

The summary went on to briefly touch on his place of birth and said he “has written several plays and novels,” which are never specified, before devoting the bulk of its answer to “controversies” about him. Specifically, it accused him of downplaying the severity of COVID-19 and questioning the efficacy of public health measures taken to stem the virus.

When it accused him of spreading misinformation, he asked it to provide specific examples, and it said, “I’m still learning how to answer this question.”

This is all pretty bad on its own, but then it decided to just start making things up to support its claims. When he asked it for an example of Hopkins “presenting himself as a lone voice of dissent against a monolithic ‘official narrative,’” as Gemini had previously claimed, it made up a full excerpt and attributed it to him, saying the title of the piece it came from was “withheld to avoid promoting his work.” It went on to analyze those concocted words and tear them apart.

Journalists keep catching Gemini in lies

Hopkins’ queries were inspired by a recent piece by journalist Matt Taibbi called “I Wrote What? Google’s AI-Powered Libel Machine.” Taibbi recounted how he asked Gemini to list some controversies involving Hillary Clinton, to which it replied that it was still learning how to answer the question. But when he asked it to list some controversies involving Matt Taibbi, it quickly generated a list. It accused him of using inflammatory language and showing bias, and said his reporting has been challenged for accuracy.

When he asked for more details on a specific accusation of inaccuracy that the bot said occurred in 2010, which Taibbi couldn’t recall, it said he was criticized for an article in Rolling Stone called “The Great California Water Heist.” However, no such article ever existed. The chatbot continued to make up passages and attribute them to him, even accusing him of antisemitism on account of one of the fabricated quotes.

Something similar recently happened to author Peter Hasson, who wrote a book critical of Big Tech. Gemini actually invented highly negative fake reviews in a desperate bid to discredit the book, attributing them to actual book reviewers with major publications despite these individuals never actually reviewing the book in the first place.

This technology is a huge step forward for Big Tech and the government's efforts to control what the public believes. Google, YouTube, Facebook and other Big Tech firms won’t have to waste so many resources censoring, demoting, fact checking and shadow banning conservatives when they can simply program their AI tools to do their bidding. All of a sudden, anyone who dares to challenge their preferred narratives can be convincingly portrayed as untrustworthy, with a slew of invented but very convincing evidence to support these claims.

Check out the alternative to Google's biased tech in this article announcing Brighteon.AI.

Sources for this article include:

ConsentFactory.org

Racket.news

FoxNews.com
