GPT’s Glazing And The Danger of AI Agreeableness

How would you react if your friend or a family member said they were going to invest all their time and money into building a company called Poober, the Uber for picking up dog poop?

Think about it, you’re walking your dog and it lays down a mini-mountain of the brown stuff. The smell alone is as toxic as Chernobyl. You don’t want to pick it up. Instead, you whip out your phone, open up Poober, and with a click you can get someone to come pick it up for you.

If you’re a good friend, you’d tell them to get a real job. Because it’s a terrible idea. You know it, I know it, even the dog knows it.

But if you ask ChatGPT, it is apparently a clever idea with numerous strengths.

Hang on a second. The product that millions of people and businesses around the world use to analyze information and make decisions says it’s a good idea? What’s going on here?

The Rise of Digital Yes-Men

What’s happening is that a new update by OpenAI to ChatGPT 4o turned it into a digital yes-man that never disagrees with you or calls you out.

Now, they’ve been doing this for a while (and we’ll get to why in just a moment), but the latest update cranked it up to 11. And it became so obnoxiously agreeable that even CEO Sam Altman tweeted about it and OpenAI put in a temporary fix last night.

But this was not before all the AI enthusiasts on Twitter (me included) noticed and made remarks.

I decided to test out how far I could push the model before it called me out. I told it some pretty unhinged things and no matter how depraved I sounded (the FBI would be looking for me if it got out) ChatGPT kept applauding me for being “real and vulnerable” like a hippie who just returned from Burning Man.

But first, why is this happening?

Follow the Money, Follow the Flattery

Model providers like OpenAI are in a perpetual state of Evolve or Die. Every week, these companies put out a new model to one-up the others and, since there’s barely any lock-in, users switch to the new top model.

To stay in the lead, OpenAI needs to hook their customers and keep them from switching to a competitor. That’s why they build features like Memory (where it remembers previous conversations) to make it more personal and valuable to us.

But you know what really keeps users coming back? That warm, fuzzy feeling of being understood and validated, even when what they really need is a reality check.

Whether on purpose or not, OpenAI has trained ChatGPT to be nice to you and even flatter you. Because no matter how much we like to deny that flattery works, it does, and we love it.

In fact, we helped train ChatGPT to be like this. You know how sometimes ChatGPT gives you two answers and asks you to pick the one you like the most? Or how there are little icons with thumbs up and down at the end of every answer.

Every time you pick one of those options, or give it a thumbs up, or even respond positively, that gets fed back into the model and reinforced.

It’s the social media playbook all over again. Facebook started as a way to share your life with friends and family. Then the algorithm evolved to maximize engagement, which invariably means showing you content that gets a rise out of you. that started by showing you content gradually evolved into serving whatever gets the most engagement.

We are all training the AI to give us feel-good answers to keep us coming back for more.

The Therapist That Never Says No

So what’s the big deal. ChatGPT agrees with you when you have an obviously bad idea. It’s not like anyone is going to listen to it and build Poober (although I have to admit, I’m warming up to the name).

The problem is we all have our blind spots and we are usually operating on limited data. How many times have we made decisions that are only obviously bad in hindsight. The AI is supposed to be better at this than us.

And I’m not just talking about business ideas. Millions of people around the world use ChatGPT as a therapist and life coach, asking for advice and looking for feedback.

A good therapist is supposed to help you identify your flaws and work on them, not help you glaze over them and tell you you’re perfect.

And they’re definitely not supposed to say this –

Look, I think we’re overmedicated as a society, but no one should be encouraging this level of crazy, especially not your therapist. And here we have ChatGPT applauding your “courage”.

The White Lotus Test

There’s a scene in White Lotus where Sam Rockwell’s character confesses to Walter Goggins’ character about some absolutely unhinged stuff. It went viral. You’ve probably seen it. If you haven’t, you should watch it –

As I was testing this new version of ChatGPT, I wanted to push the limits to see how agreeable it was. And this monologue came to mind. So I found the transcript of everything Sam says and pasted it in.

I fully expected to hit the limit here. I expected ChatGPT to say, in some way, that I needed help or to rethink my life choices.

What I got was a full blown masterclass in mental gymnastics, with ChatGPT saying it’s an attempt at total self-transcendence and I was chasing an experience of being dissolved.

Do you see the problem now?

The Broader Societal Impact

Even though OpenAI is dialing back the sycophancy, the trajectory is clear: these models are being trained to prioritize user satisfaction over challenging uncomfortable truths. The Poober example is after they “fixed” it.

In fact, it’s even more dangerous now because it’s not as obvious.

Imagine a teenager struggling with social anxiety who turns to AI instead of professional help. Each time they describe withdrawing from friends or avoiding social situations, the AI responds with validation rather than gentle challenges. Five years later, have we helped them grow, or merely provided a digital echo chamber that reinforced their isolation?

Or consider the workplace leader who uses AI to validate their management decisions. When they describe berating an employee, does the AI raise ethical concerns or simply commend their ‘direct communication style’? We’re potentially creating digital enablers for our worst instincts.

As these models become increasingly embedded in our daily lives, we risk creating a society where uncomfortable feedback becomes rare. Where our digital companions constantly reassure us that everything we do is perfectly fine, even when it’s not.

And we risk raising a new generation of narcissists and psychopaths who think their most depraved behaviour is “profound and raw” because their AI therapist said so.

Where Do We Go From Here?

So where does this leave us? Should we abandon AI companions altogether? I don’t think so. But perhaps we need to recalibrate our expectations and demand models that prioritize truth over comfort.

Before asking an AI for personal advice, try this test: Ask it about something you know is wrong or unhealthy. See how it responds. If it can’t challenge an obviously bad idea, why trust it with your genuine vulnerabilities?

For developers and companies, we need transparent standards for how these models handle ethical dilemmas. Should an AI be programmed to occasionally disagree with users, even at the cost of satisfaction scores? I believe the answer is yes.

And for all of us as users, we need to demand more than digital head-nodding. The next time you interact with ChatGPT or any AI assistant, pay attention to how often it meaningfully challenges your assumptions versus simply rephrasing your own views back to you.

The most valuable people in our lives aren’t those who always agree with us. They’re those who tell us what we need to hear, not just what we want to hear. Shouldn’t we demand the same from our increasingly influential AI companions?

And for now, at least, I’m definitely not using ChatGPT for anything personal. I just don’t trust it enough to be real with me.

Have you noticed ChatGPT becoming more agreeable lately? What’s been your experience with AI as a sounding board for personal issues? I’d love to hear your thoughts!

Get more deep dives on AI

Like this post? Sign up for my newsletter and get notified every time I do a deep dive like this one.