Not all findings are equal — a smarter way to prioritize UX in organizations that aren't ready for it yet
In low-maturity organizations, even good research gets shelved — not because the findings are wrong, but because designers have no tool to show which ones are worth the engineering investment. This is how I built one.

Role
UX Strategist
Context
Bilfinger Group
Type
Methodology Creation & Validation
Year
2021
01 — Context
UX maturity shapes everything — and most industrial companies are at the beginning
Before explaining what UXIM is, it's worth explaining when it matters — because the answer directly shapes the methodology. UX maturity describes how embedded user-centered thinking is inside an organization's development process. And in Germany's industrial and process industry sectors, most companies are at an early stage: UX exists, but it sits at the edges. Designers run research, produce recommendations, and then — often — watch those recommendations compete for priority against a backlog of engineering tasks that feel more tangible and more urgent.
The higher the maturity, the less this problem exists. Mature organizations have the processes, the trust, and the resources to act on UX findings systematically. UXIM is not designed for them. It's designed for organizations that are in the middle of that journey — where UX has earned a seat at the table but hasn't yet earned the budget and bandwidth to run a full HCD cycle on every decision.
Low maturity: UX as decoration
Design seen as beautification. Research deprioritized. Engineers make UX decisions without data.
UXIM: builds the case

Growing maturity: UX present, under-resourced
Research exists but implementation is ad hoc. Findings compete with the backlog without a clear priority framework.
UXIM: strongest fit

High maturity: UX embedded in process
Dedicated UX team, regular research cycles, established design-dev workflow.
UXIM: not needed
Bilfinger was squarely in the middle column when I was working there. Research happened. Recommendations were written. But when sprint planning came around, the designer's priority list and the engineering team's effort estimates were two separate documents — and they rarely met.
02 — The Pattern
The same conversation, repeated: good findings, no clear path to action
At Bilfinger, I kept running into a version of the same dynamic. A research session would surface real, documented problems. Engineers would nod along. And then, when implementation decisions were made, the UX findings would lose — not because anyone disagreed with them, but because nobody could answer the question that actually drives sprint decisions: compared to everything else on the backlog, how important is this, really?
The standard response is to do better research. More rigorous, better documented. But that wasn't the gap. The findings were already good. What was missing was a way to translate them into a language engineering could act on: one that connected user impact to implementation effort, and that engineering had co-owned rather than just received.
"Design keeps being seen as beautification because designers haven't built the tools to prove otherwise. That's a solvable problem."
I started thinking about it as a strategic and organizational challenge, not a research quality one. The question wasn't "how do we do better UX?" — it was "how do we make sure UX input actually reaches the sprint?"
03 — The Core Idea
Frequency × severity = true impact. Your research data already contains this.
The insight that drove everything came from video analysis I was doing on PIDGraph. I was reviewing unmoderated session recordings to understand how engineers actually used the tool — and I started counting. Not just whether an interaction caused errors, but how often it occurred.
A bad button is not equally important in all contexts. If a user encounters it once a session, fixing it is a minor improvement. But I found that certain core interactions in PIDGraph were being performed over 200 times per hour by working engineers. Every second of friction in those interactions multiplied across the entire working day. That's not a usability issue anymore — it's a productivity cost with a calculable value.
The Realization at the Core of UXIM
Severity of problem × Frequency of use = True user impact
Example from PIDGraph: "Save and download buttons look identical" — a medium-severity finding on its own. But 100% of engineers hit this interaction every single session, and 67% manually verify the file afterwards every time — an unnecessary step entirely caused by UI confusion. Suddenly this isn't a medium finding. It's a daily productivity drain. That distinction changes where it sits on the priority list.
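To make the multiplication concrete, here is a minimal scoring sketch. The numeric severity scale, the frequency unit, and the plain product are illustrative assumptions for this write-up, not the exact rubric used on PIDGraph.

```python
# Illustrative sketch only: the severity scale, the frequency unit, and the
# plain product below are assumptions for demonstration, not UXIM's exact rubric.

def impact_score(severity: int, frequency: float) -> float:
    """Weight a finding's severity (1 = low, 2 = medium, 3 = high) by how often
    users actually hit the interaction (any consistent unit: per session, per hour)."""
    return severity * frequency

# A rare but severe problem vs. a medium-severity problem hit constantly:
rare_severe = impact_score(severity=3, frequency=1)        # 3.0
frequent_medium = impact_score(severity=2, frequency=200)  # 400.0

print(frequent_medium > rare_severe)  # True: frequency is the multiplier
```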
The second half of the idea was about what happens on the engineering side. Instead of a designer arriving at sprint planning with a finished priority list, I wanted to bring developers into the prioritization conversation before any decision was made. Developers assess effort directly — using a working prototype — and that data feeds the same matrix. The output is something both sides co-own.
What UXIM keeps from standard practice
User observation as the primary data source. Behavioral evidence over opinions. Commitment to acting on what users actually do, not what they say.
What UXIM adds
Frequency data as an impact multiplier. Developer effort estimates as a co-owned input. A shared priority output that maps directly to sprint user stories — no translation step required.
04 — How it works
Three inputs. One joint output. No separate handoff conversation.
UXIM runs in three phases. Each phase feeds the next, and the critical difference from standard practice is that the priority output is built together — not handed over.
01 — User observation: Collect behavioral data
Contextual inquiry and video analysis with real users. Capture frequency, time-on-task, and error rate per interaction — not just whether errors occur, but how often.

02 — Parallel assessment: Impact + effort, simultaneously
Designer scores impact using behavioral frequency data. Developer scores effort against existing architecture. Both work from the same prototype reference. No separate meetings.

03 — Joint matrix: Co-owned prioritization
Findings plotted on impact/effort matrix. High impact + low effort = immediate priority. Each item already has its user story rationale and effort estimate baked in.
Key Visualization
UXIM output — impact vs effort, populated with real PIDGraph findings
Do these first (high impact, low effort). Example tasks: Save vs download fix, Property panel move, Edge menu always visible
Plan carefully (high impact, high effort). Example tasks: Symbol grouping, Role management
Fill-ins (low impact, low effort). Example task: Icon updates
Deprioritize (low impact, high effort). Example task: Role-based login
Impact scores derived from usage frequency and time-on-task data.
Effort scores from developer or stakeholder interviews or workshops, assessed against the existing architecture.
Both axes are empirical — no guesswork.
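For readers who prefer it spelled out, the sketch below shows how scored findings can be sorted into the four quadrants. The Finding structure, the 0-1 score range, the 0.5 cutoff, and the scores attached to the example tasks are illustrative assumptions, not the real PIDGraph values.

```python
# Minimal sketch of the joint-matrix step. The 0-1 score range, the 0.5 cutoff,
# and the example numbers are placeholders; the point is only that impact comes
# from observation data and effort from the developers' own assessment.

from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    impact: float  # designer-scored from frequency / time-on-task data (0-1)
    effort: float  # developer-scored against the existing architecture (0-1)

def quadrant(f: Finding, cutoff: float = 0.5) -> str:
    if f.impact >= cutoff and f.effort < cutoff:
        return "Do these first"   # high impact, low effort
    if f.impact >= cutoff:
        return "Plan carefully"   # high impact, high effort
    if f.effort < cutoff:
        return "Fill-ins"         # low impact, low effort
    return "Deprioritize"         # low impact, high effort

# Placeholder scores, not the real PIDGraph data:
findings = [
    Finding("Save vs download fix", impact=0.9, effort=0.2),
    Finding("Role-based login", impact=0.3, effort=0.8),
]
for f in findings:
    print(f"{f.name}: {quadrant(f)}")
```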
05 — What I found
More efficient for low-maturity contexts. Honest about what it trades away.
I ran UXIM alongside a standard Human-Centered Design process on the same PIDGraph data, comparing them on efficiency, agile alignment, and coverage of user needs. The results were clear — and so were the limitations.
UXIM
No additional user sessions needed — impact derived from existing observation data
Priority output maps directly to agile user stories — no translation step
Developers co-own the output — less friction at sprint planning
Easier to justify in low-maturity orgs — shows ROI without a long research cycle
Not iterative by design — works best as a sequential, phase-based process
Can miss new problems that only surface during user testing of the redesign
VS
Standard HCD (ISO 9241-210)
Genuinely iterative — each cycle can discover new needs
Deeper validation — actually tests the redesigned solution with users
Requires new user sessions — expensive and slow in B2B industrial contexts
Engineering still receives the priority list rather than helping build it
Harder to justify in low-maturity orgs — cycle length makes ROI less visible
The conclusion I reached: UXIM isn't a replacement for HCD — it's a different tool for a different organizational moment. As maturity grows and resources increase, the full HCD cycle becomes both feasible and necessary. But in the early stages, when a designer needs to build trust and prove that UX decisions are also sound business decisions, UXIM gives you a path to visible wins that earns the space for deeper work later.
"The first cycle is about trust, not perfection. Chase the low-hanging fruit first — let the results make the case for everything that follows."
06 — How I use it now
The framework evolved.
I don't apply UXIM as a named, structured process in my day-to-day work anymore. What changed is how I think about prioritization — and those changes show up in every project, whether or not anyone calls it UXIM.
Working across Bilfinger, Siegwerk, and most recently inside the CARIAD environment at Volkswagen Group, I kept encountering the same underlying conditions: limited trust in UX processes in parts of the organization, engineering teams with legitimate concerns about scope, and pressure to show results within sprint cycles. The questions UXIM taught me to ask turned out to be just as useful in a multi-brand automotive context as they were in an industrial SaaS one.
The three questions I now ask before prioritizing anything
01 — How often does a user actually encounter this?
Frequency is the multiplier. A medium-severity problem that occurs 200 times per session outranks a high-severity problem that occurs once. Without frequency data, prioritization is guesswork dressed up as judgment.
02 — What does it cost engineering to fix, relative to what it costs the user to work around it?
If the fix is cheap and the workaround is expensive — in time, errors, or frustration — that's a quick win with a clear business case. Surfacing this comparison early removes the friction from sprint planning rather than generating it.
03 — Am I presenting engineering with a decision, or involving them in making one?
Developers who co-own a priority output implement it differently than those who receive it. In low-maturity organizations especially, joint ownership of the decision is often what makes the difference between a recommendation that ships and one that doesn't.
This methodology was built on PIDGraph data
The PIDGraph case study shows what the product became. This one shows how the priorities were set — and why those 40+ recommendations didn't all ship at the same time.
View case study