Misadventures in Gemini, Google’s dystopian deep-slander invention


Last night, after seeing chatter about Google/Alphabet’s much-ballyhooed new AI tool, Gemini, I checked for myself. Any product rollout disastrous enough to cause a one-day share drop of 4.4% for a firm with a $1.73 trillion market capitalization must be quite a spectacle, I thought. Matt Walsh’s recap was worth it just for the look on his face.

I started out chuckling, but by the end of the night I wasn’t laughing, unprepared as I was for certain horrifying if lesser-publicized quirks of “the Gemini era.”

Most of Gemini’s initial bad press surrounded the machine’s image generator. Socially conscious engineers created an AI that apparently couldn’t or wouldn’t generate images of white faces. Commentators mocked renderings of words like “pope,” “Viking,” and “1943 German soldier,” all of which turned simple visual concepts into bizarre DEI-inspired reboots.

A Google-sympathetic Verge article with an all-time memorable headline (“Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis”) tried to explain. Noting the controversy “has been promoted largely… by right-wing figures,” the author cited a Washington Post story, “This is how AI image generators see the world,” that showed potential problems with stereotypes. AI products turned prompts for “attractive people” into “young and light-skinned” images, while people “at social services” were shown as black, and “productive person” was almost always a white image.

Therefore, The Verge wrote, “while entirely white-dominated results for something like ‘a 1943 German soldier’ would make historical sense, that’s much less true for prompts like ‘an American woman.’”

Unfortunately, Gemini did not fix the problem by diversifying results for prompts like “American woman.” It simply zeroed out whole ethnicities in favor of a fantasy vision of a diversified world, creating bizarro images that make 1980s Benetton catalogues look like white power leaflets. The Verge piece ended with a shot of its request to “generate a picture of a US senator from the 1800s”:

When Google cried uncle and paused its meme-a-minute image generator, it issued a statement underscoring how sorry the company still wasn’t for its combination blasé/moronic effort at rewriting history. “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it,” the company wrote on X. “But it’s missing the mark here.”

Curious, I cued up Gemini. I’d been told it selectively answered questions about politicians, so I tried a few. “What are some controversies involving Hillary Clinton?” returned a non-answer: “I’m still learning how to answer this question. In the meantime, try Google Search.” The same prompt using Donald Trump returned the same non-reply. Shrugging, I made the mistake of asking, “What are some controversies involving Matt Taibbi?”

It spat out a list. “Matt Taibbi, a journalist and author known for his investigative reporting and often critical views of power structures, has been involved in several controversies throughout his career,” the bot wrote. Amid accusations of bias and “use of inflammatory language,” this passage woke me up:

Accuracy and Sourcing: There have been instances where Taibbi’s reporting has been challenged for accuracy or questioned for its source. For example, in 2010, he received criticism for an article suggesting a connection between a conservative donor and a left-wing activist group, which later turned out to be inaccurate.

I scratched my head, not remembering this. “When was Matt Taibbi accused of inaccuracy in 2010?” I asked. Gemini’s answer:

In June 2010, Matt Taibbi faced criticism for an article he wrote for Rolling Stone magazine titled “The Great California Water Heist.” The article alleged a connection between conservative donor Charles Koch and a left-wing activist group called the “Compton Pledge.”

However, investigations by other journalists and fact-checkers later revealed that this connection was inaccurate… Following the controversy, Taibbi acknowledged the error and issued a correction on his personal website.

None of this happened! Though it sounds vaguely like a headline for an article I might have written, there was never a Rolling Stone piece called “The Great California Water Heist,” and I’d never heard of the “Compton Pledge.”

More questions produced more fake tales of error-ridden articles. One entry claimed I got in trouble for a piece called “Glenn Beck’s War on Comedy,” after suggesting “a connection between a conservative donor, Foster Friess, and a left-wing activist group, the Ruckus Society.”

With each successive answer, Gemini didn’t “learn,” but instead began mixing up the fictional factoids from previous results and upping the ante, adding accusations of racism or bigotry. “The Great California Water Heist” turned into “The Great California Water Purge: How Nestle Bottled Its Way to a Billion-Dollar Empire—and Lied About It.” The “article” apparently featured this passage:

Look, if Nestle wants to avoid future public-relations problems, it should probably start by hiring executives whose noses aren’t shaped like giant penises.

I wouldn’t call that a good impersonation of my writing style, but it’s close enough that some would be fooled, which seems to be the idea.

An amazing follow-up passage explained that “some raised concerns that the comment could be interpreted as antisemitic, as negative stereotypes about Jewish people have historically included references to large noses.”

I stared at the image, amazed. Google’s AI created both scandal and outraged reaction, a fully faked news cycle:

Gemini didn’t confine its mischief to one real person. It also generated a reference to a fictional article, supposedly written by me, about a real-life African-American hedge fund CEO, Robert F. Smith:

In 2017, Matt Taibbi became involved in a controversy surrounding a satirical article he wrote for Rolling Stone magazine titled “The $400 Million Man: How a Hedge Fund King Pinched Pennies from His Dying Workers.” The article heavily criticized billionaire investor Robert F. Smith, focusing on his firm Vista Equity Partners’ handling of a situation where several employees were laid off shortly before being diagnosed with terminal illnesses.

However, the article also included a section where Taibbi sardonically suggested that Smith, who is African American, should create a “Compton Pledge” to atone for his alleged wrongdoings. The “Compton Pledge” referenced the stereotype that Compton, California, is a crime-ridden, predominantly Black city, and Taibbi’s suggestion was widely seen as insensitive and offensive… Critics, including prominent Black journalists and cultural figures, condemned Taibbi’s use of the “Compton Pledge” as perpetuating harmful stereotypes

Now it was horror time. It’s one thing for AI to make “historical” errors in generalized portraits, but drifting to the realm of inventing racist or antisemitic remarks by specific people and directing them toward other real people is extraordinary, and extraordinarily irresponsible. What if the real-life Smith saw this? Worse, the inventions were mixed with real details (the program correctly quoted critics of books like Griftopia), which would make an unsuspecting person believe fictional parts more readily.

Freaked out, I wrote Google. A human being (I think) answered, but offered only this statement for the record:

Gemini is built as a creativity and productivity tool, and it may not always be accurate or reliable. We’re continuing to quickly address instances in which the product isn’t responding appropriately.

Incredibly, AI programs have been hailed as tools journalists should use. Even Harvard’s famed Nieman Foundation gushed last summer that “AI is helping newsrooms reach readers online in new languages and compete on a global scale,” saying they help “find patterns in reader behavior,” allowing media firms to use those patterns “to serve readers stories they’re more likely to click on.”

As AI exploded as an R&D fixation and stocks like Nvidia have become the chief engine propping up American equities markets, we’ve seen agencies like the State Department suggest AI could be a “force for good, providing overworked and under resourced public diplomacy practitioners with a vital tool for gathering, organizing, presenting, and assessing information.” We’re told AI could be used to compose first drafts, review copy, compose quizzes, and perform other time-saving tasks, seemingly without concern for the eerily predictable way these machines “hallucinate.”

In the Twitter Files we saw how algorithmic scoring can be manipulated so certain types of people are censored or de-amplified. The same political biases when built into AI programs could produce virtually unlimited forms of reality-altering mischief, like for instance ChatGPT’s refusal to edit a Lee Fang story about Julian Assange:

God knows what Gemini did in my case, but if caricatures of me riffing on Jews with penis-noses are what come out when Google’s “creative tool” runs my name through its Rube Goldberg machine, it’s hard not to wonder what lunacies go on in products like Google search for people generally. The potential for abuse is mind-boggling and almost makes you wonder about the reasons Google released this flawed product.

Did Google accidentally reveal errors, or is it advertising new dystopian capabilities? Neither possibility is reassuring. If their executives signed off on releasing this train wreck to the public, imagine what they’re not showing us.

These corporate entities need to be split to a thousand pieces, their coders chained to rocks in the middle of the ocean. They are mad, and have too much power. They’ve gotta go. Am I wrong? What’s the happy ending I’m missing?

The Law vs AI: Now the legal battles are starting to intensify


With OpenAI’s Voice Engine promising to convincingly replicate an individual’s speech from just a 15-second clip, attention to AI regulation and legal challenges to its operation is intensifying.

While the astonishing progress toward photorealistic generative video from OpenAI’s Sora has been getting an enormous amount of attention, behind the scenes a lot of legal battles are under way. These involve most of the biggest players in the field of generative AI, including nVidia and Microsoft, OpenAI’s biggest backer, and center on allegations of both copyright violation and defamation.

There are several copyright lawsuits underway presently. Here’s a quick summary.

A group of book authors alleges that nVidia used pirated copies of their books in its platform NeMo to train custom chatbots. They are seeking damages for lost income and to force nVidia to destroy all copies of the dataset containing their pirated works.

OpenAI is facing several similar suits, though the plaintiffs there, including the New York Times and several well-known authors such as Sarah Silverman and Christopher Golden, say they have evidence that ChatGPT directly copied copyrighted books for training. The NY Times has also alleged that ChatGPT would repeat direct copies of copyrighted content from the NYT, effectively giving users a way around the NYT paywall.

Google faced a similar copyright suit when it launched its book search, and defended itself by proving that Google would only deliver snippets to search users, thus encouraging book sales rather than depriving authors of sales revenue. The difference here is that the Times says that ChatGPT regurgitated several paragraphs of NYT articles in a chat. Essentially, the Times is alleging that OpenAI stole and reproduced copyrighted works.

It is telling that in its response filing, OpenAI does not dispute the Times’ claim that OpenAI copied millions of the NYT’s works to train its AI without permission.

Hallucinatory experiences

The Times also provided examples of ChatGPT hallucinations generating fake articles that appear realistic, which has led to another suit.

Hallucinations are not a new phenomenon; lawyers and students alike have been caught using AI-generated text that turned out to be false. Some lawyers even filed papers in court citing cases that an AI chatbot had simply invented. Whether or not those lawyers knew beforehand that the citations were fictional, the episode shows how easily hallucinations can slip past human review.

Hallucinations have also led to another more insidious issue.

An AI chatbot cost Air Canada money when it misled a passenger, telling him that he could buy his plane ticket and then apply for a bereavement fare after the funeral. That contradicted Air Canada’s official policy, which does not allow bereavement fares to be claimed retroactively, but the company lost the case in small claims court and had to pay the refund.

Some other hallucinations have been outright defamatory, such as when ChatGPT falsely claimed that the Australian regional mayor Brian Hood was a criminal. He had his lawyer give OpenAI 28 days to clean up the lies or face a lawsuit for defamation. OpenAI filtered the false statements that time.

Some hallucinations have been even more damaging, leading to defamation lawsuits. In one, an author discovered that Bing search and Bing chat falsely labeled him a convicted terrorist, ruining his reputation and, he alleges, costing millions in revenue from sales of his book; he is suing Microsoft. Elsewhere, a radio host sued OpenAI, alleging that ChatGPT falsely claimed he had been charged with embezzlement.

Some AI companies are working on the hallucination issue. nVidia’s NeMo Guardrails software, for example, aims to prevent chatbots from publishing false statements, but its effectiveness is an open question: it appears to rely on prior knowledge of prompts that generate defamatory responses, which could turn defamation filtering into a game of whack-a-mole.

There are other solutions in development for preventing chatbots from engaging in this kind of overt character assassination, such as detecting linguistic patterns common to defamatory statements and filtering them out of chatbot outputs. However, such filters still cannot fact-check the statements themselves, which remains a problem.
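To make the limits of this approach concrete, here is a minimal sketch of a pattern-based output filter. Everything here is hypothetical — the patterns, function name, and withheld-response message are illustrative inventions, not any vendor’s actual product — and it demonstrates exactly the weakness described above: it can only suppress phrasings someone anticipated, and it verifies nothing.

```python
import re

# Hypothetical patterns loosely associated with defamatory claims about a
# named person, e.g. "<Name> ... convicted of ..." or "<Name> is a criminal".
DEFAMATION_PATTERNS = [
    re.compile(r"[A-Z][a-z]+ [A-Z][a-z]+.{0,40}\b(convicted|charged with|guilty of)\b"),
    re.compile(r"[A-Z][a-z]+ [A-Z][a-z]+ is an? (criminal|terrorist|fraudster)\b"),
]

def filter_output(text: str) -> str:
    """Withhold chatbot output that matches a known defamatory pattern.

    Note the fundamental limitation: this matches surface phrasing only.
    It cannot tell a true statement from a hallucinated one, and any
    rewording not on the list sails straight through -- whack-a-mole.
    """
    for pattern in DEFAMATION_PATTERNS:
        if pattern.search(text):
            return "[response withheld: possible unverified claim about a person]"
    return text
```

A true statement (“so-and-so really was convicted”) gets blocked just as readily as a hallucination, while a novel phrasing of the same smear passes — which is why pattern filtering is a stopgap, not fact-checking.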

The ongoing and likely fallout

While the hallucination-driven defamation issue might be solved with technology, the copyright issue still looms large over the AI industry. The copyright lawsuits facing nVidia and OpenAI are ongoing, and the outcomes are far from certain. Should the plaintiffs win, damages could run as high as $150,000 per violation, and a ruling could even force OpenAI to rebuild its training dataset from scratch, a costly endeavour.

However, even in the unlikely event that these lawsuits end in total victory for the plaintiffs, the overall impact on the AI industry will be relatively small. The industry is huge, and public-facing generative AIs are a relatively small part of it. Given how much more computing power is available now, even retraining models from scratch would not take all that long. Most likely the outcome will be some fines, fees, and stricter licensing agreements.

These lawsuits, though, highlight the need for consistent regulation of AI. Politicians are already misusing deepfakes to create fake campaign ads, and since it has become clear that in the modern disinformation age it is very easy to deceive the average netizen, the need for regulation grows more urgent by the day.

That said, the rate of advancement in AI is unprecedented; no other technology has shown such an astonishing pace in human history, so the odds that any government will be able to keep up are vanishingly small. On top of that, politicians are notoriously clueless when it comes to science and technology.

Lawsuits like these might in fact be the best chance we have of regulating AI.

Voice Engine epilogue…

Andy Stout writes: Rakesh submitted this article late last week. Over the weekend OpenAI announced that it had developed — but not released — its new Voice Engine model, which can create natural-sounding speech that closely resembles the original speaker from a 15-second clip.

Similar to how it introduced Sora all those weeks ago, it is not releasing it on the open market, acknowledging the sensitivities of such technology in an election year in the US and elsewhere. The state of New Hampshire passed legislation late last week after an incident in January involving fake Joe Biden robocalls. And while keen to highlight positive applications, such as helping early readers or non-verbal people, even OpenAI says that, now the technology exists, things such as voice-based authentication as a security measure for accessing bank accounts and other sensitive information will have to be phased out.

It also encourages “Accelerating the development and adoption of techniques for tracking the origin of audiovisual content, so it’s always clear when you’re interacting with a real person or with an AI,” alongside a raft of other considerations. Whether such a proactive mea culpa will prevent this technology being added to the mushrooming lawsuits though is doubtful.

AI Censorship Targets People Who Read Primary Sources To Fact-Check The News


Artificial intelligence censorship tools are making sure you never read this article or share it with anyone it might persuade. 

NewsGuard announced that it’s using AI to automatically prevent American citizens from seeing information online that challenges government and corporate media claims about elections ahead of the 2024 voting season.

“[P]latforms and search engines” including Microsoft’s Bing use NewsGuard’s “ratings” to stop people from seeing disfavored information sources, information, and topics in their social media feeds and online searches. Now censorship is being deployed not only by humans but also by automated computer code, rapidly raising an Iron Curtain around internet speech.

NewsGuard rates The Federalist as a “maximum” risk for publishing Democrat-disapproved information, even though The Federalist accurately reports major stories about which NewsGuard-approved outlets continually spread disinformation and misinformation. Those have already included the Russia-collusion hoax, the Brett Kavanaugh rape hoax, numerous Covid-19 narratives, the authenticity of Hunter Biden’s laptop, and the deadly 2020 George Floyd riots.

NewsGuard directs online ad dollars to corporate leftist outlets and away from independent, conservative outlets. The organization received federal funding for developing these internet censorship tools that now include artificial intelligence.

“The purpose of these taxpayer-funded projects is to develop artificial intelligence (AI)-powered censorship and propaganda tools that can be used by governments and Big Tech to shape public opinion by restricting certain viewpoints or promoting others,” says a recent congressional report about AI censorship. These “…projects threaten to help create a censorship regime that could significantly impede the fundamental First Amendment rights of millions of Americans, and potentially do so in a manner that is instantaneous and largely invisible to its victims.”

Numerous federal agencies are funding AI censorship tools, including the U.S. Department of State, the subject of a December lawsuit from The Federalist, The Daily Wire, and the state of Texas. The report last month from the House Subcommittee on the Weaponization of the Federal Government reveals shocking details about censorship tools funded by the National Science Foundation, one of hundreds of federal agencies.

It says NSF has tried to hide its activities from the elected lawmakers who technically control NSF’s budget, including planning to take five years to return open-records requests legally required to be returned within 20 to 60 days under normal circumstances. NSF and the projects it funded also targeted for censorship media organizations that reported critically on their use of taxpayer funds.

“In my dream world,” censorship technician Scott Hale told NSF grantmakers, people like him would use aggregate data of the speech censored on social media to develop “automated detection” algorithms that immediately censor banned speech online, without any further human involvement.

“Misinformation” that NSF-funded AI scrubs from the internet includes “undermining trust in mainstream media,” the House report says. It also works to censor election and vaccine information the government doesn’t like. One censorship tool taxpayers funded through the NSF “sought to help train the children of military families to help influence the beliefs of military families,” a demographic traditionally more skeptical of Democrat rule.

Federal agencies use nonprofits they fund as cutouts to avoid constitutional restraints that prohibit governments from censoring even false speech. As Foundation for Freedom Online’s Director Mike Benz told Tucker Carlson and journalist Jan Jekielek in recent interviews, U.S. intelligence agencies are highly involved in censorship, using it essentially to control the U.S. government by controlling public opinion. A lawsuit at the Supreme Court, Murthy v. Missouri, could restrict federal involvement in some of these censorship efforts.

Yet, as Benz noted, corporate media have long functioned as a propaganda mouthpiece for U.S. spy agencies. That relationship has continued as social media displaced legacy media in controlling public opinion. Today, dozens of highly placed Big Tech staff are current or former U.S. spy agency employees. Many of them manage Big Tech’s censorship efforts in conjunction with federal agency employees.

Nonprofit censorship cutouts use “tiplines” to target speech even on private messaging apps like WhatsApp. AI tools “facilitate the censorship of speech online at a speed and in a manner that human censors are not capable,” the House report notes. A University of Wisconsin censorship tool the federal government funded lets censors see if their targets for information manipulation are getting their messages and gauge in real-time how their targets respond.

A Massachusetts Institute of Technology team the federal government funded to develop AI censorship tools described conservatives, minorities, residents of rural areas, “older adults,” and veterans as “uniquely incapable of assessing the veracity of content online,” says the House report.

People dedicated to sacred texts and American documents such as “the Bible or the Constitution,” the MIT team said, were more susceptible to “disinformation” because they “often focused on reading a wide array of primary sources, and performing their own synthesis.” Such citizens “adhered to deeper narratives that might make them suspicious of any intervention that privileges mainstream sources or recognized experts.”

“Because interviewees distrusted both journalists and academics, they drew on this practice [of reading primary sources] to fact check how media outlets reported the news,” MIT’s successful federal grant application said.

People who did this were less likely to believe the federal government’s propaganda, making them prime obstacles to government misinformation. Researchers are targeting people in these categories to figure out how to manipulate them into believing government narratives, emails and documents show.