The design of any app or website is only as good as the user research backing it. So how does spaghetti research impact quality and what can you do about it?
If you’ve worked with a software team, chances are you’ve come across the term spaghetti code. This pejorative is used for a codebase that is unstructured, undocumented, error-prone, and just plain difficult to navigate or make sense of. In this context, spaghetti refers to the jumbled state of the code and has nothing to do with carb-loaded Italian cuisine.
The front and back ends of the earliest websites were built with spaghetti code. These sites were often cobbled together from snippets of random code lifted from other sites or forums, with no set framework or structure. Their architecture didn’t exist beyond making it work.
As an aside, I’ve spent parts of my career knee-deep in spaghetti code and often the best solution is to start over or awkwardly force the website into a new framework. But it’s never a simple fix.
User research can fall into the same trap as software built with spaghetti code, because the definition of what exactly user research is varies wildly from company to company. Some teams treat unreliable marketing metrics like NPS and CSAT as design research. Some teams just look at analytics and make their best inferences as to why the software is failing. Some teams have intentionally never spoken to their users. Some teams bring too many personal biases to a project and cherry-pick research to support their beliefs.
When teams don’t have a clear way to measure design properly, they jump to conclusions and start changing things arbitrarily. This is what spaghetti research is – unstructured, arbitrary, unreliable, sometimes biased, and error-prone. It’s research that is difficult to build on and becomes unusable very quickly.
Like any system with moving parts, user research needs to follow a set framework or methodology; otherwise it loses context, becomes difficult to understand, and can’t truthfully inform design decisions.
In 2016, Tomer Sharon, then Head of UX at WeWork, developed a framework for recording research accurately and methodically, called Atomic Research. Similar to how Atomic Design organises design systems into reusable, atomic components, Atomic Research breaks research down into an atomic unit called a ‘nugget’, or fact. This atomic unit is generally a single recorded insight that is easy to understand and can be backed by substantial evidence.
When a researcher or research team sees consistent patterns emerge in these nuggets through continual research, a conclusion can be drawn to better inform a design or service improvement. It works well for both short- and long-term research projects, and can be applied to a single feature or an entire application.
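To make the idea concrete, a nugget could be modelled as a small, tagged record. The shape below is purely my own illustration (Atomic Research doesn’t prescribe a schema, and these field names aren’t from Sharon’s framework), along with a helper that groups nuggets by tag so recurring patterns can surface across studies:

```typescript
// A hypothetical "nugget": one atomic, evidence-backed research fact.
// Field names are illustrative, not part of any official schema.
interface Nugget {
  observation: string; // the single recorded insight
  evidence: string[];  // session recordings, survey responses, notes
  tags: string[];      // e.g. "checkout", "navigation"
  date: string;        // when it was recorded (ISO 8601)
}

// Group nuggets by tag: when many nuggets share a tag across studies,
// that's a pattern worth turning into a conclusion.
function patternsByTag(nuggets: Nugget[]): Map<string, Nugget[]> {
  const groups = new Map<string, Nugget[]>();
  for (const nugget of nuggets) {
    for (const tag of nugget.tags) {
      const bucket = groups.get(tag) ?? [];
      bucket.push(nugget);
      groups.set(tag, bucket);
    }
  }
  return groups;
}
```

Because each nugget is small and self-contained, it stays legible and reusable long after the study that produced it, which is exactly what spaghetti research lacks.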
There’s no question about it – user research is a social science, and like any good scientific research, it needs to be conducted methodically within strict parameters. Reliable research can be repeated easily, and the Atomic Research framework clearly lends itself to this.
We need to clear the plate of spaghetti research.