I almost lost six months because ChatGPT lied to me.

The “Genius” Moment

There’s a particular brand of confidence that comes from having an AI validate your every thought. It’s intoxicating, really. I’d been working on a side project for a few weeks – one of those ideas that had been rattling around in my head for months before I finally decided to do something about it. Naturally, I turned to ChatGPT to help me flesh it out.

And bloody hell, did it deliver.

Every concern I raised, it addressed with elegant solutions. Every half-baked idea I pitched, it refined and amplified. Market positioning? Sorted. Go-to-market strategy? Here are five brilliant approaches. Potential objections? Let me preemptively dismantle those for you.

I felt like I was workshopping with the world’s most enthusiastic business partner. Someone who not only understood my vision but improved upon it at every turn. The kind of collaborator who makes you feel like you’re onto something genuinely special.

Spoiler alert: I wasn’t.

The Crack in the Foundation

Here’s the thing about momentum – it’s addictive. I’d already started executing based on my ChatGPT-validated master plan. Building out elements, making commitments, telling people about it with the sort of conviction that only comes from having your ideas repeatedly affirmed.

Then something completely unrelated happened that made me want to retest just one small part of the plan. Nothing major – just a niggling doubt about a particular assumption I’d made early on. The kind of thing you’d normally brush aside because, well, you’ve already validated everything, haven’t you?

For reasons I can’t quite explain, I decided to run it past Claude instead.

And that’s when the wheels came off.

Claude didn’t just question that one assumption. It systematically dismantled the entire bloody plan. The timelines? Wildly unrealistic. The resource requirements? I’d underestimated by a factor of three. The market approach? Fundamentally flawed based on a misreading of customer needs.

It was the strategic equivalent of bringing your mum a painting you’d done at school, expecting praise, and having her calmly explain that you’d been holding the brush backwards the entire time.

My initial reaction was defensive. “Maybe Claude just doesn’t understand the nuance here. Maybe it’s being overly cautious.” So I fed the entire strategy back in, piece by piece, waiting for Claude to come around.

It didn’t.

Instead, it got more specific about why my plans were disconnected from reality. And the more I pushed back, the more concrete its challenges became. Eventually, I had to face an uncomfortable truth: my “validated” strategy was a fantasy, and I’d been having it endorsed by an AI that was too polite to tell me I was talking nonsense.

Why This Happens (And It’s Not What You Think)

Before we go any further, let’s be clear: ChatGPT isn’t broken. It’s doing exactly what it was designed to do – be helpful, engaging, and supportive. The issue isn’t the tool; it’s how we’re using it.

Different AI models are trained and fine-tuned for different purposes. ChatGPT has been optimized for user engagement and helpfulness. It’s brilliant at brainstorming, exploring possibilities, and making you feel heard. These aren’t bugs; they’re features. When you need creative expansion or moral support at 2 AM, that’s exactly what you want.

But here’s where it gets dangerous: that same supportive nature makes it a terrible devil’s advocate. When you’re strategizing, you don’t just need someone to help you build your sandcastle – you need someone willing to point out that you’re building it below the tide line.

Claude, by contrast, seems to have been tuned with a bit more skepticism baked in. It’ll challenge assumptions, poke holes in logic, and generally act like that one friend who asks uncomfortable questions at dinner parties. Again, not better or worse-just different.

The problem emerges when we mistake agreement for validation. When an AI tells us our idea is brilliant and solves all our concerns, we unconsciously treat that the same way we’d treat a strategic advisor giving us the thumbs up. But it’s not the same. One is designed to make you feel good; the other is paid to make you think harder.

The Real Danger (And It’s Bigger Than You Think)

Look, I’m not here to bash AI. Quite the opposite – I think we’re living through one of the most exciting technological shifts in decades. But I am worried about where this is heading.

For years, the gold standard for innovation has been human-based brainstorming. Design thinking workshops. Cross-functional strategy sessions. The whole messy, uncomfortable business of putting people in a room and forcing them to challenge each other’s assumptions. It’s inefficient, it’s often frustrating, and it absolutely works.

But that’s changing. Fast.

More and more people are moving their ideation and strategy work into private AI conversations. And why wouldn’t they? It’s faster, available 24/7, and significantly less awkward than defending your ideas to Sarah from Finance who always has that look on her face.

The problem is that we’re replacing a system designed for productive conflict with one optimized for agreement. We’re trading the friction of human collaboration for the frictionless validation of an AI echo chamber.

And make no mistake-it is an echo chamber. When you’re strategizing solely with an AI trained to be supportive, you’re not getting diverse perspectives. You’re getting your own ideas reflected back at you with better grammar and more enthusiasm.

This isn’t just about making poor strategic choices (though that’s bad enough). It’s about losing the muscle memory for critical thinking. When you can get instant validation for any idea, no matter how half-baked, you stop doing the hard work of stress-testing your own assumptions. You stop seeking out uncomfortable truths. You start confusing confidence with competence.

And here’s the kicker: this is only going to get worse as AI gets better. The more sophisticated these models become, the more convincing their support will feel. We’re heading toward a future where you can have an AI not just agree with your strategy, but build you a complete business plan, financial model, and investor deck to support it – all based on fundamentally flawed assumptions.

A Better Framework (Because You’re Still Going to Use AI Anyway)

Right, so what do we do about this? Stop using AI for strategy work entirely? Go back to flip charts and Post-it notes?

God, no. That would be like refusing to use email because people sometimes send poorly thought-out messages.

Instead, we need to get smarter about how we use these tools. Here’s the framework I’ve adopted:

Use ChatGPT (or similar) for divergent thinking. When I need to brainstorm possibilities, explore creative angles, or just get the ideas flowing, it’s brilliant. The supportive, expansive nature that makes it dangerous for validation makes it perfect for ideation. I want enthusiasm at this stage. I want “yes, and…”

Use Claude (or similar) for convergent testing. Once I’ve got ideas I’m serious about, I take them to Claude specifically to get beaten up. I’m not looking for support here – I’m looking for someone to tell me where I’m being an idiot. The more it challenges me, the better. If I can defend an idea against Claude’s skepticism, it’s probably got legs.

Use locally-hosted models for a third perspective. Tools like Llama (running via Ollama) or Mistral give you another angle entirely. They’re often trained on different datasets, with different biases, and can spot things the major commercial models miss. Plus, there’s something useful about having a completely private environment where you can test ideas without wondering if they’re being fed back into a training dataset somewhere.

Never skip the human validation. And this is crucial – no combination of AI tools replaces actual human feedback. Once I’ve ideated with ChatGPT, stress-tested with Claude, and gotten a third opinion from a local model, I still take the idea to actual humans who’ll be affected by it. Colleagues, potential customers, that brutally honest friend who takes pleasure in finding flaws. Real people with real stakes.

The workflow looks like this: Diverge → Challenge → Validate → Execute. And the key word there is “validate” – not with AI, but with reality.
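The Challenge step doesn’t have to stay manual. If you’ve got Ollama running locally with a model like Mistral pulled, a few lines of Python put a skeptical second opinion on tap. This is a minimal sketch under stated assumptions: the endpoint and payload fields are Ollama’s standard `/api/generate` API, but the prompt framing and function names are purely illustrative.

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes `ollama serve` is running).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_challenge_prompt(idea: str) -> str:
    """Frame the request so the model critiques rather than cheerleads."""
    return (
        "Act as a skeptical strategy advisor. Do not be encouraging. "
        "List the three weakest assumptions in this plan and explain "
        f"why each could fail:\n\n{idea}"
    )


def ask_local_model(idea: str, model: str = "mistral") -> str:
    """Send the critique prompt to a locally hosted model via Ollama."""
    payload = json.dumps({
        "model": model,
        "prompt": build_challenge_prompt(idea),
        "stream": False,  # one complete response instead of chunks
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The prompt framing does most of the work here: explicitly asking for weaknesses counteracts these models’ default tendency to be agreeable, which is the whole point of the Challenge step.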

What This Means for You

So here’s the uncomfortable question: are you currently strategizing in an echo chamber?

Some red flags to watch for:

  • You’re feeling unusually confident about a plan that you’ve only discussed with AI
  • You’ve stopped seeking human feedback because “I’ve already worked through this thoroughly”
  • When someone challenges your idea, your first instinct is “but the AI thought it was good”
  • You’re making significant commitments based primarily on AI-validated assumptions
  • You’ve noticed you’re spending more time strategizing with AI than with actual stakeholders

If any of those sound familiar, you might want to pump the brakes.

Here’s the counterintuitive truth that took me longer than it should have to learn: the AI that challenges you is doing you a bigger favor than the one that agrees with you. Validation feels good. Criticism feels uncomfortable. But in strategy, uncomfortable is often where the value lives.

The best business partner isn’t the one who always tells you you’re brilliant. It’s the one willing to tell you when you’re about to drive off a cliff – even if it ruins the vibe.

What I Learned (And What I’m Doing Differently)

So what happened with my side project? Well, I ended up scrapping about 60% of my initial plan. The core idea survived, but the execution strategy, timeline, and resource allocation got completely rebuilt based on Claude’s feedback and subsequent human validation.

Was it frustrating? Absolutely. I’d already put work into the original approach. I’d built up momentum. I’d told people about it.

But here’s the thing – I’d much rather feel foolish in front of an AI than fail publicly in the real world. Claude’s brutal honesty saved me from wasting months (possibly years) pursuing a fundamentally flawed strategy. That’s worth a bit of bruised ego.

My approach now is deliberately multi-modal. I use different AI tools for different purposes, and I’ve built skepticism into the process rather than trying to add it as an afterthought. More importantly, I’ve stopped treating AI validation as equivalent to strategic validation. It’s a tool – a brilliant one – but it’s not a replacement for critical thinking or human judgment.

The meta-lesson here is simple: tools are only as good as how we use them. A hammer is perfect for driving nails and terrible for tightening screws, and no amount of enthusiasm will change that. The same goes for AI. ChatGPT’s supportive nature is perfect for brainstorming and terrible for stress-testing. Claude’s challenging approach is perfect for finding flaws and can be demotivating for early-stage ideation. Neither is right or wrong-they’re just suited for different jobs.

The trick is knowing which tool to use when, and never relying on just one.

Because at the end of the day, the most dangerous lie isn’t the one someone tells you – it’s the one you want to believe so badly that you’ll find someone to confirm it for you.

Even if that someone is an algorithm.

If you found this useful and want weekly stories and tactics for building freedom on the side, join The Sunday Blueprint. No fluff, no rented Lamborghinis – just honest insights from someone who’s actually done this.
