The Assistant Who Outperformed the AI Strategy Team

Meet P. V., 52, executive assistant to the Global VP at NexxMedica, a pharmaceutical company where half the managers think “AI transformation” means asking ChatGPT for presentation titles.

She’s been with the company for 19 years.
She knows who signs late, who fakes sick, and which senior leaders are brilliant—on paper.

P. V. doesn’t say much.
She takes notes. She books flights. She listens.
She also reads every single report that gets “summarized” by ChatGPT for the execs who don’t have time to read them.

And one day, P. V. noticed something odd:
The AI summaries were wrong. Consistently.
They left out qualifiers.
They mashed up competitor data.
They misunderstood context and translated “pending litigation” as “resolved dispute.”

The exec team didn’t notice.
They were too busy reposting thought leadership posts written by their interns.

The company had declared:
“We are now an AI-first organization.”
Translation: Nobody thinks. Everybody prompts.

Then came the merger.
A big one. A Japanese biotech firm.
Everyone in the leadership team panicked and ran to their dashboards.
They asked AI to:
• Draft the integration roadmap
• Rewrite internal policies
• Even suggest which teams to cut

One suggestion:
“Merge the oncology and dermatology divisions to streamline cost.”

No one questioned it.
Except P. V.

She was the only person who remembered a clause in the dermatology contract: a region-specific exclusivity deal that would become invalid if departments merged.
A $40 million risk—gone, if followed blindly.

She quietly mentioned it to her boss.
Who ignored her.
So she took the risk and emailed the legal team.
They double-checked.
She was right.

That email saved the company $40 million.
An assistant had outperformed an AI-powered strategy team.
Why?
Because she paid attention.
She understood nuance.
And she didn’t worship dashboards.

Here’s the irony:
When the announcement went out, the leadership team took credit.
They called it “AI-human synergy.”
They used phrases like “our deep tech-human validation model.”
But nobody mentioned P. V.
Until someone from legal did.
And the whispers began.
And the board took notice.

Today, P. V. is Head of Operational Integrity.
She now teaches a quarterly workshop called “When AI Gets It Wrong.”
Attendance? Mandatory.

The Real Takeaway (For Aspiring Leaders, First-Time Managers, and First-Time AI CEOs)
It’s easy to fall in love with automation.
But leadership isn’t about speed—it’s about discernment.
It’s about asking, “Is this true? Does this make sense? What are we missing?”
P. V. wasn’t “anti-AI.”
She was just anti-stupid.
And that made her radical.

In the Labyrinth of Management, the most valuable person in the room is often the one no one notices—until they save the whole system from itself.

If you enjoyed this article, you can dive deeper into real-world leadership lessons and behind-the-scenes stories in my book Labyrinth of Management—available now on Amazon.

For more stories, reflections, practical leadership tips, and updates, you can follow me on Instagram, X (Twitter), and Facebook.
