When Joe Depa, global chief innovation officer at EY, spoke to Business Insider, he made an unusually candid observation about the technology his firm is aggressively rolling out. “I have a high sensitivity for detecting AI,” Depa said, describing his ability to spot when employees rely too heavily on generative tools in their writing and presentations.

Depa leads EY’s global AI, data, and innovation strategy, overseeing how artificial intelligence is embedded across one of the world’s largest professional services firms. His comments matter now because companies are pushing employees to adopt AI at scale — while quietly judging how that adoption shows up in day-to-day work. In environments where writing and presentations act as proxies for thinking quality, sounding generic can quietly undermine credibility and advancement.

The Real Impact

Depa is not warning employees away from AI. On the contrary, he is responsible for expanding its use across EY’s workforce and has not imposed strict limits on how often employees should use it. The issue, he said, is not usage itself but substitution — when AI begins to replace individual thinking rather than amplify it.

“There are situations where it’s too much AI,” Depa said, explaining that some work products show little evidence of original judgment. In those cases, he added, “there does become a point of AI becoming a little bit less efficient or effective.”

The impact of that assessment is more than stylistic. In consulting, finance, law, and other knowledge-driven industries, clarity and specificity are central to how work is evaluated. When documents read as overly polished but oddly empty, leaders may conclude that an employee has outsourced not just drafting, but reasoning.

Depa said he notices several recurring signals in AI-heavy writing: neutral or overly formal tone, repetitive sentence structures, and language that leans heavily on buzzwords without committing to a clear point of view. Humor, personality, and contextual awareness are often absent.

Presentations reveal similar patterns. Depa said over-reliance on AI tends to produce surface-level insights, broad framing, and a lack of concrete examples tailored to a specific audience.

“Anytime you see vagueness or general statements that don't really tell you anything, I would often say that's AI,” Depa said.

Where the Pressure Is Building for Employees

The pressure underlying Depa’s comments is structural rather than disciplinary. Companies want the productivity gains AI promises, but they still need employees who can exercise judgment, take ownership of recommendations, and stand behind decisions.

That creates a narrow path for workers. Using AI too little can signal resistance to change. Using it too much can suggest an absence of original thinking. Even employees who are enthusiastic adopters may feel uneasy about how visible that reliance should be.

That tension is already showing up in behaviour. In a Business Insider survey of 220 respondents, 40% said they either hide or downplay how much they use AI at work. The concern is not rule-breaking, but perception.

For executives, the risk is uniformity. If AI becomes the dominant voice across internal documents and client materials, firms risk sounding interchangeable. Depa stressed that individual perspective and style are what ensure “everyone doesn't sound the same.”

In client-facing roles, that sameness can have tangible consequences. Vague or hedged recommendations — something Depa noted AI does “by design” — can weaken trust, slow decision-making, and create doubt even when the underlying analysis is solid.

What Happens Next

Depa’s comments point to a new phase of AI adoption inside large organisations: moving from access to discernment. The question leaders are increasingly asking is not whether employees are using AI, but how they are integrating it into their thinking.

Depa said he encourages teams to begin with their own structure and ideas before turning to AI. “If you write it yourself first and then ask for the enhancement using AI, I feel like that's much more productive,” he said.

That approach reflects a broader shift in how executives are evaluating work. As AI removes friction from drafting and formatting, expectations rise. Fluency is no longer impressive on its own. What stands out is clarity, specificity, and the willingness to make a recommendation rather than present a menu of options.

For employees, this means that originality and audience awareness are becoming more important, not less. AI can refine language and challenge assumptions, but the responsibility for insight — and accountability for what is said — remains human.

The Bottom Line

Joe Depa’s claim that he can “detect AI” is not about policing tools or discouraging innovation. It is about preserving standards in an environment where speed and polish are increasingly automated.

As firms like EY embed AI into everyday work, employees are being assessed on how well they balance efficiency with individuality. The message is subtle but consequential: AI can elevate good thinking, but it cannot replace it — and when it tries, the absence is often obvious to those in positions of authority.

In the race to adopt AI at work, the advantage may belong not to those who use it most, but to those who use it with restraint and intent.
