When UX goes wrong
Derailment can happen to the best of us — and not every train wreck can or should be fixed, but you should know where the project went off track.
Every client is different. Obviously, I know. The nuance needed for successful communication isn’t lost on me, but figuring out each client’s unique needs for transparency can be a full-time job. Even if you’re hyper-aware and make the necessary adjustments, without the ancillary support you need, you’ll get railroaded and communication will still fail. (I’ll be peppering in train references so I can display some sweet train art, so all aboard!)
Shit happens. It does. No matter how much you try to teach clients what your research strategy and methodology are and what questions you’re trying to answer, and no matter how many workshops you lead them through, sometimes they just don’t get it. They’ll be excited and smile. They’ll ask sufficient questions. But once you truly believe everyone is on the same page, your confidence will turn to hubris, and it will bring miscommunication in its train.
The next thing you know they’re asking, “Why is UX conducting research on competitors and emerging trends? Isn’t that marketing’s job?” They might even state that they thought your job was to tell the designers “where to put the buttons.”
The worst part? You’ll never know that’s happening until it's too late and the train has already left the station.
You throttle down to figure out what went wrong and hindsight kicks in — could I have been more detailed in the last meeting? Less detailed? Was the deck too focused on progressively disclosing findings? Were the decks too long? Should we have had more frequent check-ins? Short of seeing them through a UX certification course, the best you can do is work on communicating simply and effectively. That said, there are some overarching patterns to look out for.
So, here are my thoughts, some general and some specific, on what to train your sights on to hopefully clear the tracks when UX goes wrong.
Not having any client resources or previous research to review or audit
This happens, especially on new products or concepts, and it doesn’t ALWAYS end poorly — just most of the time. When ideas about “what users want” appear out of thin air, or user and task flows have been drawn up by somebody in marketing without any evidence to justify their assumptions, look out.
You will spend 99% of your research efforts presenting findings that are the exact opposite of what they expect to hear. Eventually, you’ll be looking for any thread or pattern that gets you to a design solution that’s remotely close to their project assumptions while steering through risk mitigation.
Stakeholder involvement and control
It’s great when stakeholders want to be involved. It means they're personally invested in the project and that can lead to a lifelong relationship of being in the trenches together. It can also mean someone has a control issue and they’re just waiting to steer things in the direction they think the project should go rather than following the data.
This happens more often than I’d like to admit. Everything will be going brilliantly, and the next thing you know, your advice is being dismissed because an idea was shopped around until a bias was confirmed. Best case, it’s innocent and can have a quick course correction. Worst case, you’re scrambling to do damage control and salvage any data you can.
There’s an utter disrespect of your expertise that can bruise the ego, but it's up to you to push through in hopes that everyone can learn from it and move on.
When clients delay tasks or user sourcing
This can happen for many reasons, but the thing I look out for is whether the client is trying to control the data so their original assumptions stay valid. This is the first place I look if there wasn’t any data or resources from the client during kickoff. This train of thought ends up being a real hostage situation for them.
Staying on the project timeline is key for any inexperienced client to understand. They might not fully grasp the repercussions of missing certain benchmarks, or why pivoting back and forth between priorities is costly, but it’s on you and your team to keep them in check.
If it's just you as the sole designer, and it's a part-time client, god help you: you’re gonna have some difficult conversations to navigate.
Why we need to talk to users
The same goes for explaining why we need to verify demographics, or find subject matter experts who can help shape the focus of our primary research. Reiterating “to make informed design decisions” seems to fall on deaf ears, so I tend to frame it in terms of additional expenses, budget and resource drain, and untested, high-risk-of-failure ideas.
This MOSTLY gets people’s attention, though I’ve learned there’s a certain personality type out there that thinks of itself as a problem solver yet always makes things worse. Their massive egos won’t allow them to understand concepts like selection bias, confirmation bias, or the importance of survey anonymity and confidentiality.
Unrealistic beliefs on scope
“We want a multisided platform that maintains our core functionality that’s unsupported by research, we want MVP in 7 to 8 weeks, and we want you to work with a designer of our choice in a different time zone.”
Not understanding the difference between proof of concept and MVP
This is a continuation of the previous point, but MVP gets thrown around haphazardly, and that’s a shame. If the idea is untested, and there aren’t any actual users, then we’re not getting through development and into the app store in 7 weeks.
Proof of concept is the first stage of idea validation: a small project built to verify whether a concept is feasible on technical and business-model grounds.
An MVP is a functional app loaded with the prime features that best represent the application.
Feature vs core functionality
I try to explain this a lot, and apparently I still fail. I’ll say, “A car’s core functionality is to drive; daytime running lights are a feature that enhances the driving experience.”
I know everyone understands this, but I’m starting to think it’s too simplistic and doesn’t easily map to whatever the current project is.
Not understanding the fidelity levels of deliverables
and why there isn’t color in low- or mid-fi user testing. I’ll just leave this here.
When’s it worth fixing?
No matter how much you try to train and educate to realign everybody's expectations, the project might be doomed from the start. Sometimes it seems like there’s no way forward and the project is destined for the grim reaper to ride the rails. I’m usually someone who can make something out of nothing and help the wounded walk away with, if nothing else, a lesson they’ll never forget. Though I hope not to be on the side of a derailment often, it’s sometimes inevitable, and getting back on track is a finesse that comes with experience.