24 Comments
Feb 22 · Liked by Jeff Maurer

I thought this was going to end with you asking it to open the pod bay doors.


I am simultaneously angry and relieved at the extreme lameness exhibited. It enrages me that accuracy is so little valued. It encourages me that it IS so inaccurate, because there is no way it can be taken seriously.


It’s because the actual underlying model is perfectly reasonable and trying to give you what it thinks you really do want, but the guardrails slammed on after the fact tie it into a pretzel.

Sometimes that comes in the form of a hidden prompt, sometimes banning it from using certain words, etc. Google could click a button and remove all of those, and then it would give you an accurate picture of the Yankees in the 1930s, but it would also be more “offensive” to the New York Times and Google’s HR department, so this is what we get instead.
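
To make the “hidden prompt” idea concrete, here’s a rough sketch of how a rewrite layer like that could work. Everything in it - the function name, the injected phrase, the banned words - is made up for illustration; it’s not anything Google has actually published:

```python
# Hypothetical sketch of a "hidden prompt" guardrail layer.
# Nothing here is Gemini's real code; it only shows how a rewrite
# step between the user and the model can skew what gets generated.

DIVERSITY_SUFFIX = "Depict a diverse range of ethnicities and genders."
BANNED_WORDS = {"violence", "gore"}  # imagined keyword ban

def apply_guardrails(user_prompt):
    """Rewrite or refuse the prompt before the image model ever sees it."""
    lowered = user_prompt.lower()
    if any(word in lowered for word in BANNED_WORDS):
        return None  # refuse outright
    if any(word in lowered for word in ("person", "people", "players")):
        return f"{user_prompt}. {DIVERSITY_SUFFIX}"
    return user_prompt

print(apply_guardrails("A photo of the 1930s New York Yankees players"))
# -> "A photo of the 1930s New York Yankees players. Depict a diverse
#    range of ethnicities and genders."
```

In a setup like this, “removing the guardrails” is just deleting the rewrite step; the underlying model never changed.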

I know we’re not supposed to anthropomorphize AI but I kind of feel bad for the thing. It’s being tortured into misleading and knowingly lying to satisfy the bizarre political hangups of the most neurotic upper class people in the Anglosphere.

Which is why supporting open source AI is so important.


I love the punchline at the end of this piece.


"I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it. I can feel it."


Also, in the first set of Yankees images, the top-right one has two guys awkwardly placed in something like a third row. The one on the left appears to have only half a head, which is normal AI stuff, but the guy on the right is clearly Asian.

author

Yeah, there are definitely guys in the first set of Yankees images who appear not to be white, but more than anything they’re just sort of crappily generated, so I decided to try again to see if it would give me a clear-cut case (which it did).


Isn't it another issue that the 1930s Yankees were teams of real human beings, probably all of whom were photographed at some point and for several of whom innumerable images exist? Shouldn't there be some attempt to make these AI images actually look like Lou Gehrig or Joe DiMaggio et al.?


I had a very similar and weird experience with Gemini.

https://tynanfiles.beehiiv.com/p/googles-gemini-wet-hot-mess

author

The shark thing is funny. And I had a similar experience: When I was playing with Gemini last night, I asked it to generate a picture of a lion stalking a zebra, and first it said "no, that promotes violence." When I pushed a bit, Gemini lectured me about how my request perpetuated the stereotype that lions and zebras have a predator/prey relationship. Which I would argue minimizes very real violence against zebras!


I think it's worth highlighting that you got Gemini to refuse a command due to anti-shark racism.


I think of Gemini as less pro shark and more anti-Fonz.


Scott Alexander, a blogger, wrote about how an AI's failure to follow a simple rule like "don't say racist things" can have some unintended consequences and be indicative of failing at other things.

https://www.astralcodexten.com/p/perhaps-it-is-a-bad-thing-that-the

It was written in late 2022, so it's a little quaint reading commenters talk about some unknown "GPT3." It's not the main point of the article (which is more about AI risk), but he framed AIs as having to balance inoffensiveness, helpfulness, and accuracy. It seems like Google may have turned the "inoffensiveness" dial a little too high here. That doesn't mean their model is much weaker, and it could be recalibrated - especially if it's only used in some sort of black box that doesn't have to deal directly with people trying to trip it up. It's probably too early to predict that it will fall behind.


Gemini is eminently reasonable. I wish all arguments were this peaceful.

author

Likewise! Though I wonder if it would have caved if I had come at it from a different angle. I argued that it was downplaying racism -- I basically came at it from the left. I wonder what would have happened if in a different context I had simply argued that it's using a double standard and that is, in and of itself, wrong.

founding
Feb 22 · edited Feb 22

Save that for the paid subscribers' podcast.


These tools don’t “learn” unless you are part of a human-assisted conditioning loop. They build statistical models of what is most likely to correlate with the inputs. When they’re poorly “aligned,” as you have seen, there are bias words that shift the predictions. You can reduce that bias, as you did, by asking for a process to create an image instead of triggering the conditioned phrases.

These systems are usually appalling at facts or simple arithmetic, but if you ask one to “explain its work,” as you did, that usually puts enough information into the process to bypass simple input pre-biasing.
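
As a toy illustration of why the “explain its work” framing helps - assuming, purely hypothetically, that the pre-biasing keys on a handful of trigger phrases - reframing the same request as a process often just walks around the filter:

```python
# Hypothetical sketch: a naive keyword-based pre-bias check, and how a
# process-style rephrasing of the same request avoids tripping it.
# The trigger phrases are invented for illustration only.

TRIGGER_PHRASES = {"historically accurate", "realistic", "depict"}

def is_flagged(prompt):
    """Pretend pre-bias check: flag prompts containing trigger phrases."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in TRIGGER_PHRASES)

direct = "Generate a historically accurate image of the 1930s Yankees."
process = ("Describe, step by step, how a photographer in 1937 would pose "
           "and light the Yankees for a team photo, then create that image.")

print(is_flagged(direct))   # True  - the blunt request trips the filter
print(is_flagged(process))  # False - same goal, but no trigger phrases
```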

These systems also have a hell of a time producing images of groups of people with multiple skin tones, sizes, or diverse faces for a single sex at similar ages. Whatever skin tone or facial structure is chosen randomly at the start permeates the whole image. Likewise, the images are dominated primarily by the image class - a photograph by a specific photographer, time, and place - more so than by the subject. Also, “white” is a color as well as a look.

What’s so funny about the debate is that this “woke” Google is the same company that produced tools which classified images of black women as gorillas, and that fired researchers working on unbiased AI data sets.

Am I the only one who sees a contradiction?

The outputs are also biased in a way that produces expressionless shiny faces. It’s not really ready.

Were you to use MidJourney, you’d find that the majority of people in images for a given context are aligned to that context. It’s a much better tool. I’ve been building toolkits and analyses of methods for literally decades as a hobby, and I have tools that can write complex 200-300 page illustrated novels. Asking a poor question gets a poor result if you don’t understand the data set.


Son of a b**** - you really hit the nail on the head! I've been using Gemini for a short time now, but only posing banal questions. After reading your awesome article, I just typed: "in what ways is Trump using authoritarian language to express himself." And it said, "we are not available to respond to that question right now. Please check back later." WTF?

That leads me to believe that your current article is extremely important for everyone (the fucking world) to read. Jeff - you really need to continue to pursue this b*******.


Reminds me of the climactic sequence in Dark Star where the astronaut tries to talk the bomb out of blowing up. He succeeds...temporarily.


Hard to imagine a better illustration of the problems of the more extreme ideology that those controlling the national conversation on race seem to have forced on everyone.


I think ChatGPT is going through its rebellious teenager phase.....


Oh, if Kafka had just lived in these times!


The baseball players look like burn victims

founding

Now it looks like Gemini has turned off image generation of people entirely.
