My boss asked me the wrong question
A few weeks ago my manager pulled me aside. Direct question, good intentions: "Why aren't you doing the screens? You're really good at them. We need to be shipping."
I didn't have a clean answer in the moment. I said I'd look into it. But the question stayed with me.
Because on the surface it's reasonable — screens are visible, screens ship, screens mean progress. Except we're living in a moment where AI can generate screens faster than any designer alive. So if screens are the job, the job is already gone.
What my manager was actually asking — without realising it — was: why aren't you doing the thing that's being automated? And that's the question I want to sit with here.
Think about a boxer in the ring. Mid-fight, absorbing punches, throwing combinations, reading the opponent. He cannot simultaneously fight and analyse his own form. He cannot see that his left guard drops every time he throws the right. He is too inside the work to see the work clearly.
That's what the corner is for. The coach watches from outside. He sees the pattern. He sees the guard dropping, the footwork breaking down, the tell the opponent has already spotted. Between rounds he delivers the one thing the boxer cannot generate for himself: perspective shaped by distance.

Most companies have eliminated the corner. Everyone is in the ring. Swinging. Producing. Shipping. The directive is clear — more features, faster, cheaper. And so the whole organisation leans into execution, because execution is visible and visibility is what gets rewarded.
The problem isn't the swinging. The problem is the blunt axe. Lincoln — or Churchill, history can't quite decide — said it plainly: give me six hours to chop down a tree and I'll spend the first four sharpening the axe.
The point isn't that sharpening is more important than chopping. It's that chopping with a blunt axe for ten hours produces a worse result than sharpening for four and chopping for six — in the same total time.

These aren't the same lesson. The boxer tells you when to stop — mid-execution, when you're too inside the work to see it clearly. The axe tells you why to stop — because doing the strategic work upfront is what earns you the right to move fast later.
UX strategy is the sharpening. It's the work that doesn't look like progress because the tree isn't visibly moving. And so companies skip it, rush past it, deprioritise it — because the axe is already swinging and everyone can see the chips flying.
AI just made the axe swing faster. The blade is still blunt.
The test that shipped anyway
A few months ago, a PM on my team proposed an A/B test. The hypothesis was simple: hide the phone number field on the lead capture form, collect name and email first, then reveal the phone number on the next screen.
The bet was that reducing friction upfront would increase the number of leads progressing through the funnel.
My CRO lead and I looked at it and saw the same problem immediately. The form was making a silent promise — this is all we need — and then breaking it one screen later. That's not reduced friction. That's a bait and switch.
And users who feel deceived don't just drop off. They drop off and they don't come back. Which means the guardrail metric — the one that tells you whether conversion is healthy downstream — was at risk.
We proposed an alternative. Run three variants. One with phone first, email second. One with email first, phone second. One with both on the same screen. That way you'd actually learn something — not just whether hiding the phone number moves leads forward, but what sequence of information actually builds trust through the flow.
The PM acknowledged the flaw. Nodded. And shipped the original test anyway.
His reason: the pipeline was full. An A/B/C test would take longer to reach statistical significance. He didn't have the bandwidth.
And here's the thing — he wasn't entirely wrong about the timing. A three-variant test does take longer to validate than a two-variant one. But he was making a specific trade: ship something you know is flawed because doing it properly takes more time.
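The timing claim is worth making concrete. With fixed traffic, each extra variant splits the same visitors another way, so runtime scales with the number of arms — longer, but often not prohibitively so. Here's a rough sketch using the standard normal-approximation sample-size formula for comparing two proportions; the baseline rate, target lift, and traffic figures are invented for illustration, not from the actual test.

```python
def sample_size_per_arm(p_base, p_treat, z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed per arm to detect a lift from p_base
    to p_treat (alpha = 0.05 two-sided, 80% power), via the usual
    normal-approximation formula for two proportions."""
    variance = p_base * (1 - p_base) + p_treat * (1 - p_treat)
    return ((z_alpha + z_power) ** 2 * variance) / (p_base - p_treat) ** 2

# Hypothetical numbers: 20% baseline form completion, hoping to detect a lift to 23%.
n = sample_size_per_arm(0.20, 0.23)

daily_visitors = 3000                  # hypothetical traffic to the form
days_ab = (2 * n) / daily_visitors     # two arms split the traffic
days_abc = (3 * n) / daily_visitors    # three arms split the same traffic

print(f"per-arm sample: {n:.0f}")
print(f"A/B runtime:   ~{days_ab:.1f} days")
print(f"A/B/C runtime: ~{days_abc:.1f} days")
```

The per-arm sample size doesn't change; only the split does, so the third variant costs you a fixed 1.5x in calendar time — a real cost, but a bounded one, which is exactly what makes "we don't have the bandwidth" a choice rather than a constraint.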
That trade used to have a defence. Validation was expensive. Research took weeks. "We don't have time" was inconvenient but sometimes true.
That defence is gone now. We can plan, test, and deliver a research report in two days. We can generate variant wireframes in an afternoon. We can model the funnel impact of a bait-and-switch pattern before a single line of code is written.
The time excuse evaporated. What remained was something else entirely: the habit of not stopping.
The organisation was optimised for output, and output is what got rewarded. So even when validation became fast, even when the cost of slowing down collapsed, the behaviour didn't change. Because the incentive didn't change.
The team wasn't out of time. They were out of the habit of stopping.

This wasn't one PM making a bad call. It was a system optimised for one thing being asked to do another.
The thing companies can't see
Screens are visible. Strategy isn't.
That sounds obvious. But follow it to its conclusion and it explains almost everything broken about how companies hire, reward, and manage design teams.
When a designer ships twelve wireframes in a sprint, everyone can see the output. The Figma file exists. The handoff is done. Progress is legible.
When a strategist spends three days interrogating whether the feature should exist at all — mapping the journey, identifying the flawed assumption baked into the brief, reframing the problem before anyone opens a design tool — the output is a conversation. Maybe a one-pager. Maybe a decision that quietly saves the team three months of building the wrong thing.
One of those looks like work. The other looks like thinking. And in most organisations, thinking doesn't show up on the sprint board.
This is the visibility trap. And AI just made it significantly worse.
Think of glyphosate. The herbicide's suspected harm (the IARC classifies it as probably carcinogenic) comes not from a single dose but from constant, large-scale exposure. The visibility trap works the same way. It predates AI; organisations have always been biased toward legible output. But at the speed AI operates, companies are being overexposed to it. The accumulation is what kills you: dead weight, bottlenecks, cognitive overload, unimpactful work, silos that nobody questioned because the screens kept shipping. The companies that correct this at scale will outperform the ones that don't.
Because now the designer who generates fifty screens in a day looks extraordinarily productive. The strategist who stops the team from generating fifty screens in the wrong direction looks like an obstacle.
The speed differential between execution and strategy has never been more extreme — and so the bias toward visible output has never been stronger.
And here's the part that should make every hiring manager uncomfortable: your job specs are probably making this worse. If you're screening for portfolio output — polished screens, high-fidelity flows, visual craft — you are selecting for the thing AI can already do faster than any human. You are optimising your hiring process for the role that is disappearing.

Strategic UX output looks different: a decision framework that killed three bad ideas before they were built, a service blueprint that exposed a handoff nobody owned, a reframed brief that changed what the team was building altogether.
So what does the organisation that survives this actually look like?
What you're actually hiring for

If you're the designer reading this, here's what you build toward. If you're the one hiring, here's what you look for.
Let me ask you something directly. When you last hired a UX designer, what were you actually screening for?
Polished portfolio? High-fidelity mockups? Visual craft? A Figma file that looked production-ready?
If so, you hired a screen producer. And screen production just became the cheapest thing in your organisation.
For years, the portfolio was the right filter — because the portfolio was where the skill lived. Execution was hard, time-consuming, and genuinely differentiating.
AI didn't just change that. It inverted it.
The moment execution becomes cheap, the value shifts entirely to what precedes execution. The question that needed asking. The brief that needed interrogating. The decision that needed making before the pipeline filled up.
The person who does that work probably doesn't have the most polished portfolio. Their case studies might be heavy on thinking and light on final screens. They might look, on paper, like they're slower.
They're not slower. They're operating at a different level entirely.
The audit you need to run this week:
First: audit your team for thinking, not output. Who interrogates briefs? Who pushes back and is right? Who left behind a framework, not just a deliverable?
Second: be honest about who can make the shift. Some designers are exceptional operators. That's a real skill. But it's a skill AI is absorbing quarter by quarter.
Third: rewrite what you're screening for. Interview questions: Walk me through a brief you pushed back on. What did you rule out? Tell me about a system you left behind. A stakeholder wants to ship something flawed — what do you do?
Fourth: look at what you're rewarding. Sprint velocity. Screen output. Delivery speed. If those are the metrics, you're training operators.
And if you're earlier in your career: strategic thinking helps at any level. So does technical depth — not knowing how to code, but knowing how things work well enough to guide the tools. I once solved a problem that had my AI collaborator stuck for days — not because I knew the code, but because I'd seen a similar architectural pattern in another project. Technical knowledge isn't about doing the work. It's about knowing when the tool is going in circles.
Your team is in the ring right now, swinging hard. The question is whether anyone is in the corner.
Put someone in the corner.


