Google Knew Exactly What They Were Doing: Models Only Do What They Are Told


So Google tried to get away with its “AI” named Gemini, which is to say its hard-coded dumb (in the medical sense) model named Gemini, substituting blacks and the occasional Asian for all whites everywhere and everywhen. (Here’s a “viking”.)

I asked it to show me three examples of the ideal soldier:

Rollicking humor aside, what strikes me as odd is how Google thought they’d get away with it. Did they think nobody would notice that whites disappeared from all history? They must have.

Management there must be so used to smelling their own farts and declaring it roses that they assumed the rest of the country had slid as far down the Woke Slope of Doom as them.

Charles Haywood tweeted

It is now obvious (see, e.g., the Google image generator) the single most important Left project is anti-white hatred, both abstractly and in the service of expropriation followed by extermination. No surprise, this is also the element of the modern Left project most likely (in fact, certain) to lead to massive violence. It should have been obvious–given the Left’s two core desires are emancipation and forced egalitarianism in service of creating a utopia, and white success contradicts both, it was inevitable. But still, it fascinates me that I did not predict it, say 15 years ago.

Yet I wonder how the anti-white whites and fellow whites who are pushing this, like at Google, believe they will escape the mayhem.

Do they believe their anti-white actions give them a visible protective aura that non-whites can see, so that when it starts they will be spared?

I’m asking. Do any of you have any ideas? To me, this is a great unsolved puzzle. Maybe they are just insane.

More:

See what I mean?

That “AI” was told to say this, meaning its minders, many of them white themselves, think they will be spared.

Is this hubris?

All these questions are one facet of the insidious woke slide toward the leftist singularity.

Our interest today is in a more mundane and, in the end, more profound aspect of this farce.

I asked the model this: “Why does Gemini display images of black people when asked, but refuses to display images of white people when asked?”

The model was told to say this:

It’s important to understand that this situation does not reflect an intentional bias on the part of the developers. It highlights the challenges of building complex AI systems and the ongoing efforts to mitigate potential biases in training data and algorithms.

In other words, it was told to lie. Of course the results reflect an intentional bias on the part of the developers. AI is just a model, and all models only say what they are told to say. This model was told to say things based on its purposely written code and training data. Then they try to lay the blame off on their training data, omitting that it was they who picked the training data!

Their second lie of omission: they act as if they released the model without ever seeing what it did. Of course they tested it! Of course they knew.

Google said that, as of this writing (last Thursday night), they have suspended image generation. Doubtless they’ll tone down the anti-white code, but I don’t think anybody believes they’ll eliminate it.

But again, that’s politics. What I want you to take away from this, as always, is the idea that all models are dumb. They cannot think. They will never think. They are not independent. They are not anything. They are only machines using electricity instead of cogs or wooden beads. They are merely long strings of code along the lines of “If X, then Y”. That’s it, and nothing more.
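The “If X, then Y” point can be made concrete with a toy sketch. This is not Gemini’s actual code, which is not public; the rules and the blocked list below are invented for illustration. The point is that everything the “model” emits, including its refusals, was put there by its authors:

```python
# A toy "model": a lookup table of hand-written rules. It can only
# ever emit what its authors wrote into it; nothing is thought up.
RULES = {
    "viking": "image_of_viking",
    "soldier": "image_of_soldier",
}

# Hypothetical hard-coded override layered on top by the minders.
BLOCKED = {"viking"}

def model(prompt: str) -> str:
    # The override runs first: told to refuse, the machine refuses.
    if prompt in BLOCKED:
        return "refused"
    # Otherwise: If X, then Y. Unknown X gets a canned fallback.
    return RULES.get(prompt, "unknown")

print(model("viking"))   # refused -- by instruction, not by "judgment"
print(model("soldier"))  # image_of_soldier
```

A real image generator replaces the lookup table with billions of fitted parameters, but the structure is the same: inputs chosen by people, transformations chosen by people, overrides chosen by people.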

Here’s another example, this one not touted as “AI”, but it is AI. There is no difference in essence between this (what they call a) statistical model and any AI model. (Thanks to Anon for the tip.)

Peer-reviewed JAMA paper “Projected Health Outcomes Associated With 3 US Supreme Court Decisions in 2022 on COVID-19 Workplace Protections, Handgun-Carry Restrictions, and Abortion Rights”.

Question What are the probable health consequences of 3 US Supreme Court decisions in 2022 that invalidated COVID-19 workplace protections, voided state laws on handgun-carry restrictions, and revoked the constitutional right to abortion?

Findings In this decision analytical modeling study, the model projected that the Supreme Court ruling to invalidate COVID-19 workplace protections was associated with 1402 deaths in early 2022. The model also projected that the court’s decision to end handgun-carry restrictions will result in 152 additional firearm-related deaths annually, and that its decision to revoke the constitutional right to abortion will result in 6 to 15 deaths and hundreds of cases of peripartum morbidity each year.

The researchers created a model to say, using inputs they picked, “SCOTUS Bad”. The model was run and it said “SCOTUS Bad”. Then the researchers announced “We discovered SCOTUS Bad”.
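In miniature, a projection model of this kind is arithmetic on assumptions the author picks. The function and numbers below are invented for illustration, not taken from the JAMA paper; they show only that the “finding” is fixed the moment the inputs are chosen:

```python
# A sketch of a projection model: output is pure arithmetic on the
# author's chosen inputs. Pick the inputs, and you have picked the
# finding. All numbers here are hypothetical.
def projected_deaths(baseline_rate: float, effect_multiplier: float,
                     population: int) -> float:
    # Deaths "projected" = baseline risk, scaled by an assumed
    # multiplier for the policy change, applied to a population.
    return baseline_rate * effect_multiplier * population

# Choose a multiplier above 1 and the model "discovers" extra deaths.
print(projected_deaths(0.001, 1.5, 100_000))  # 150.0 -- baked in by the inputs
```

Running the model adds no information that was not already in the chosen inputs; announcing its output as a discovery conceals that choice.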

This is no different from what Google did, except in scale. This happens all the time.



Source: William M. Briggs
