
Propped Up by a Prompt

What happens when AI confidence masks real-world inexperience, and how businesses pay the price

It started with a comment from a client.

He told me, "I don't think they actually know how to do half of what they're being assigned. But they know how to ask ChatGPT. So they think that's the same thing."

That's what AI overconfidence in the workplace looks like.

And the more we unpacked it, the more I saw it:
The quiet emergence of a new kind of employee.
One that's well-meaning, technically curious, and dangerously overconfident, because somewhere along the way, a fast answer from AI started to feel like real expertise.

It's not.

And now, a lot of businesses are paying for the difference.

When AI Becomes a Crutch

This isn't an anti-AI article. I use it. I build systems that rely on it.

Illustration: a business professional struggles to balance on a platform propped up by an oversized tablet displaying an AI prompt — AI creating the appearance of competence while hiding real-world inexperience.

I'm not worried about the technology. I'm worried about how we've decided to use it: without rules, without training, and without checks.

In company after company, here's the pattern I'm seeing:

  • A junior employee is given a task they've never done before.
  • Instead of asking their manager or reviewing SOPs, they go straight to an AI tool.
  • The tool responds quickly, clearly, and confidently.
  • The employee now feels like they've "got it."
  • They go execute.

At first, it works. The task is completed. A deliverable is produced.
But then the cracks start to show.
The process doesn't scale. The logic is off. The decision backfires.

And no one understands why, because it looked so good on paper.

What we're seeing is a growing gap between confidence and competence.
And when AI props that up unchecked, the business ends up footing the bill.

Pretending to Know is the New Workplace Default

Let's call this what it is: performance.
Employees, often unintentionally, are now performing expertise rather than developing it.

The tools have made it easier than ever to look capable.
AI can write the pitch. Summarize the brief. Build the formula. Create the SOP.

But it can't:

  • Understand context
  • Manage risk
  • Make tradeoffs
  • Handle fallout
  • Lead

I've seen junior employees build entire project timelines from AI prompts without once checking whether their team had the capacity to deliver. I've seen marketing plans that looked beautiful but lacked any awareness of market nuance or brand tone.

Because AI can give you the answer.
It can't tell you whether that answer makes sense for the business you're in.

The Cost of Misplaced Confidence

The problem here isn't ambition. It's absence.
An absence of mentorship. Of review. Of operational guardrails.

When people build with tools they donโ€™t understand, the issues show up later.
In lost time. In rework. In decisions that cost money because no one asked the second layer of questions.

Iโ€™m watching leaders struggle to balance encouragement with caution.
They don't want to squash initiative. But they also can't afford to let unchecked overconfidence turn into quiet operational chaos.

And this is where the real damage happens: not from AI itself, but from the belief that speed and confidence equal skill.

What Leaders Need to Do

If you're running a company right now, you don't need to ban AI.

You need to manage it.

Hereโ€™s where I start when I work with a leadership team on this issue:

  • Define where AI is allowed, and where it's not.
    Make it clear what tasks can be supported by AI and which ones require human review, judgment, or context.
  • Create friction intentionally.
    Add review steps, peer checks, or internal prompts that force second-level thinking.
  • Refocus on operational fluency.
    If your team can't explain why they did something beyond "because the prompt said so," they're not ready to own the result.
  • Install systems that catch it early.
    This is where Lean AI® comes in. You don't eliminate tools. You build infrastructure that makes them accountable.

Why Lean AI® Was Built for This

Lean AI® was never designed to "replace people."
It was designed to support the people who are already trying to do too much with too little.

It does that by:

  • Removing manual friction
  • Building smart, structure-first workflows
  • Applying intelligence in narrow, high-leverage ways
  • Catching breakdowns before they escalate

More importantly, it creates a system where AI output isn't blindly trusted.
It's integrated, audited, and supported by a real operational backbone.

The result?
People grow. Teams get faster. And the business doesn't pay for overconfidence masked as capability.

Final Word

AI is powerful. No question.
But it's only as useful as the structure it lives inside, and the judgment of the people using it.

If your team is producing polished work that keeps breaking, ask why.
If your junior staff seem brilliant on Monday and underwater by Friday, dig deeper.

Because in too many companies, what looks like productivity is really just AI-generated performance.

And unless you've built systems to tell the difference, you won't see the cracks until it's too late.


Ryan Gartrell
Consultant. Operator. Creator of Lean AI®.
ryangartrell.com |
angryshrimpmedia.com

