Process

How I gather understanding and shape it into design

'In practice' examples on this page are locked for privacy.

1.0

Research

How I gather understanding before trying to change anything.

The goal

Whilst the exact methods might change from project to project, my underlying intent is always the same: to understand a product and its context as deeply as possible before attempting to change it.

That sounds simple, but the understanding comes in layers. How do you know the right questions to ask in an interview if you don't know how the application works?

I know I won't become a subject-matter expert, but I want to ensure that I'm familiar enough that when industry-specific terms or workflows are used or described, I can follow and engage.

The methods

Through years of research I've developed my own hierarchy and ordering of the menu of activities available. Below I've described the unique insights each one offers, and given each item a tag to expose my inner judgement process. Of course, all of these tags are caveated by user access, time, and budget restrictions; sometimes it's just not possible to do them.

Essential

These are my essential toolkit and I almost always use them — they provide deep insights and ground projects really well, reducing the need for other methods

Highly valuable

I will really push to do these, and will use them most of the time, but they may require time, budget, or access to users/tools that clients don't always have

Selective

These are deployed less, more on a project-by-project basis — I'll use them only where I predict they will produce good results, and can't use another method from the above categories

Niche

These are specialised and deployed only where appropriate — either they require a lot of effort for little gain compared to other methods, or require very specific circumstances

1.1

Stakeholder interviews

Before user research starts, I want to understand the application from the inside. Stakeholders know the history: why certain decisions were made, which features have been resented for years, what's been tried and failed. They ground the project in business goals, ensuring we understand how the business will measure 'success'.

They don't have all the answers, and often come with opinions, so there's work to extract objective findings, but they're the quickest way to get oriented.

Well-run stakeholder conversations also surface things beyond software changes. Sometimes the most impactful outputs I've identified have been business-model observations that would've stayed invisible if not for these conversations.

In practice
1.2

Heuristic evaluation

Two days with an application is cheap relative to almost everything else.

Even when I've worked on the product before, I'll walk through it with fresh eyes before interviews start, partly to confirm the current state (because the final product may have strayed from my designs), and partly to surface areas to probe in research. Being able to interrogate specific processes during an interview only works if you really know how the application works.

Heuristic evaluations also catch things users feel but can't articulate; two primary buttons in a flow might show up in interviews only as a vague 'I'm never quite sure what this one does'. The evaluation is what lets me diagnose it there and then.

In practice
1.3

Helpdesk ticket analysis

Some of the richest pre-research material lives in support queues.

Vague, repeated complaints that nobody's been able to pin down are almost always a fundamental design issue in disguise. The pattern of complaint tells you where to probe in the interview; the specific ticket language tells you how users actually describe the problem, which is almost never how the internal team describes it.
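
As a rough illustration of the pattern-spotting, here's a minimal Python sketch; the tickets and phrases are invented, and real exports vary by helpdesk tool:

```python
from collections import Counter
import re

# Hypothetical ticket export: in reality this would come from a helpdesk CSV/API.
tickets = [
    "Can't find the export button again, where did it go?",
    "Where has the export button moved to this time?",
    "Search never shows the document I need",
]

def bigrams(text):
    """Lower-case a ticket and yield adjacent word pairs."""
    words = re.findall(r"[a-z']+", text.lower())
    return zip(words, words[1:])

# Count recurring two-word phrases across all tickets.
phrase_counts = Counter(p for t in tickets for p in bigrams(t))

# Phrases appearing in more than one ticket hint at a shared, unarticulated issue.
for phrase, count in phrase_counts.most_common():
    if count > 1:
        print(" ".join(phrase), count)
```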

In practice
1.4

Competitive analysis

I dissect competitors into their component features, evaluating the different ways they solve the same problems.

This gives me a pattern library to draw on when designing, and means that if an interviewee says 'I prefer how Competitor X does it', I can actually question them on why they prefer it. It also shows me where design patterns converge (probably worth following) versus where they diverge (maybe a design opportunity).

In practice
1.5

Observation in the environment

When the environment shapes the work, there's no substitute for being there.

It's where you pick up on the 'normal' things people wouldn't mention; the post-it note spelling out the exact process to follow so the system doesn't crash, the dedicated computer kept for interacting with one system. Those are things people forget to tell you because they're everyday occurrences, but they're key to creating something better.

I capture as much as I can during environmental observation (recordings, phone notes, photos of printouts and workarounds), because it's almost impossible to reconstruct after the fact.

In practice
1.6

Semi-structured interviews

Hearing people talk about their work is the richest source of insight, but it also requires time, effort, and trust from your interviewee. It's worth doing, but it's not something to rush into without preparation.

I usually spend an hour with each participant, with me as lead interviewer and optionally someone else from my team (a lead developer, or a director). Participants share their screen where possible, because what people describe and what they do are different things.

Every interview is recorded, transcribed, and stored securely, and once the recording stops, every session ends with 'off the record, is there anything you want to say?', which reliably surfaces things people weren't comfortable saying on the record.

Interview design

Every script has key questions (non-negotiable) and general ones (conversation tools, pulled out when the participant is quiet or it fits naturally). Scripts are organised by theme rather than chronologically, so I can see coverage at a glance.
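
As an illustration of that structure, a theme-organised script with key/general flags might look like the sketch below (the themes and questions are invented, not from a real script):

```python
# Hypothetical script structure: themes and questions are illustrative only.
script = {
    "daily workflow": [
        ("key", "Walk me through a typical morning in the system."),
        ("general", "Which parts of that do you do on autopilot?"),
    ],
    "reporting": [
        ("key", "How do you get data out when someone asks for it?"),
        ("general", "Who ends up reading those reports?"),
    ],
}

asked = {"Walk me through a typical morning in the system."}

# Coverage at a glance: any theme whose key questions haven't all been asked yet.
for theme, questions in script.items():
    outstanding = [q for kind, q in questions if kind == "key" and q not in asked]
    if outstanding:
        print(f"{theme}: {len(outstanding)} key question(s) still to cover")
```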

Emotional framing when it earns its place

Asking what someone feels about something reliably gets richer answers than asking what they think. 'What causes you the most stress?' retrieves a pre-prioritised answer, whereas 'What's the biggest issue?' gets you whatever surfaces first. It's a technique I use selectively, not on every question.

Closing on friction

The last few questions always ask about frustration specifically; what they struggle with most, what workarounds they've built, what else annoys them. The things that frustrate people are the things that matter to them.

In practice
1.7

Analytics platforms

Data on individual screens can be surprisingly insightful, even with minimal data collection.

Is that choice something people fret about? See how long they spend on that modal. Are the search results useful? See whether they click on any links, how many, and how long on each.

Analytics in the research phase are about understanding what's already happening; identifying key flows, spotting where users are struggling, and finding pain points that a heuristic evaluation might miss but real usage makes obvious.
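
As a minimal sketch of the 'time on that modal' question, assuming a generic event-log export rather than any particular tool's schema (the event names are hypothetical):

```python
from collections import defaultdict

# Hypothetical event log export: (session_id, timestamp_seconds, event_name).
events = [
    ("s1", 10.0, "modal_opened"),
    ("s1", 55.0, "modal_confirmed"),
    ("s2", 3.0, "modal_opened"),
    ("s2", 7.5, "modal_confirmed"),
]

opened = {}
dwell = defaultdict(list)

# Pair each open with its matching confirm to get time spent deciding.
for session, ts, name in sorted(events, key=lambda e: (e[0], e[1])):
    if name == "modal_opened":
        opened[session] = ts
    elif name == "modal_confirmed" and session in opened:
        dwell[session].append(ts - opened.pop(session))

for session, times in dwell.items():
    print(session, sum(times) / len(times), "seconds on the modal")
```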

I'll be honest, I'm no data scientist. I was most familiar with Google Analytics back when it was Universal Analytics, and whilst DataDog RUM captures loads of information, if I have to use it I spend a lot of time constructing queries. It's still worth it, though, because it reveals quantitative insight that qualitative methods alone wouldn't pick up on, and it's often the best way to confirm (or contradict) findings from interviews.

In practice

My ranking of methods

Given the choice, I'll always reach for in-person observation first, then in-person interviews, and almost always do a heuristic evaluation. I'll do helpdesk ticket analysis if the tickets exist, but I'll rarely reach for questionnaires or focus groups.

Observation is the most expensive but also the most informative. It's the only way to see the environment, the workarounds, the things people do but don't say. It's also the only way to see how people actually use the product rather than how they say they use it.

Interviews are slightly less rich but much more scalable. They also give you the chance to ask 'why' and 'how do you feel about that?', which gets you deeper insight than just watching someone use something.

Heuristic evaluations are basically standard operating procedure for me; I do them on every project because they get me acquainted with the product and surface things to probe in interviews.

Competitive analysis is brought out if there are actual meaningful competitors to compare against, and if I think the insights will be worth the effort. It's not a default because it can be a rabbit hole, but when it's relevant it gives me a pattern library to draw on in design.

Helpdesk ticket analysis produces better results than questionnaires because tickets capture feedback about the product in actual use: users don't have to recall their frustrations, they're describing them as they happen. Users also submit tickets little and often, meaning they feel less 'involved'.

Questionnaires are useful when I've got a passionate user base and the client is happy with a wider outreach, but they produce 'I like it' or 'here's the one thing I hate' rather than the deeper frustration interviews surface. Self-selection skews the sample toward vocal users, meaning the results are rarely representative.

Focus groups didn't even make the list above, and they rank definitively last, because they're echo chambers: the confident voices dominate, and consensus emerges where real difference exists.

2.0

Synthesising

Turning what I learned in research into design-ready outputs.

2.1

How I filter and group insight

Real prioritisation happens later, at the design stage.

During research I'm spotting patterns, separating problems from possible solutions, and holding enough material that themes can emerge.

Spotting problems without prescribing fixes

When a solution comes to mind (and it usually does), I'll note it, but I try hard not to let possible fixes define the problem. The point is to stay with the problem until I understand its root.

The 'grain of salt' filter

I take every participant seriously, but I weigh their input against how representative, valid, and solvable it is. Only one person might mention something, but that might be a thing everyone experiences that only they noticed, or it might be personal frustration I can't solve. Frequency matters. Severity matters more; a rare issue with catastrophic consequences ranks higher than a common minor one.

Thinking about how deep the problem goes

A surface finding like 'I don't know where to go for X' might be a menu-labelling issue, or the entire menu structure might be wrong, or the navigation paradigm itself shouldn't be a menu. At research stage I'm noting the symptom and staying curious about what lies beneath.

Segmentation as a finding, not an input

I almost always end up creating sub-groupings as research progresses, less like a folder structure and more like a tagging system; users sit in multiple categories, and the point is coverage, not mutual exclusion.
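
A toy sketch of what that tagging looks like (the participants and tags are invented): users carry multiple tags at once, and the check is coverage, not sorting.

```python
# Hypothetical participants: each carries multiple overlapping tags.
participants = {
    "P1": {"warehouse", "night shift", "power user"},
    "P2": {"office", "power user"},
    "P3": {"warehouse", "new starter"},
}

# Coverage, not mutual exclusion: check every tag is represented in research.
all_tags = set().union(*participants.values())
for tag in sorted(all_tags):
    covered_by = [p for p, tags in participants.items() if tag in tags]
    print(f"{tag}: {covered_by}")
```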

2.2

The synthesis technique

Transcript, then sticky note, then theme.

Back when transcription was unreliable, I'd type every recording out manually. It was slow, but it had a hidden benefit: by the end, everything was loaded in my brain. Now AI-assisted transcription does the heavy lifting, but I still listen back and correct by ear. Partly to fix errors, but mostly to catch intonation.

Sarcasm, hesitation, emphasis, all change what something means. A participant who says 'I love that I spend ages typing up notes' with a certain tone is telling me they hate it. That goes on the sticky note as 'hates typing up notes', not as a direct quote.

Highlighted moments become virtual sticky notes, each tagged with the source transcript so I can go back for context. I group the notes visually, overlapping similar ones and clustering near-adjacent ones. The result looks like a heatmap. Themes emerge from density, unique insights stay visible rather than being flattened into a frequency count.
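
Roughly, the grouping works like this; a minimal sketch with invented notes and themes:

```python
from collections import defaultdict

# Hypothetical sticky notes: (note text, source transcript, theme cluster).
notes = [
    ("hates typing up notes", "T01", "admin burden"),
    ("copies data into Excel to avoid the form", "T04", "admin burden"),
    ("re-enters the same data twice", "T07", "admin burden"),
    ("didn't know the archive existed", "T02", "findability"),
]

clusters = defaultdict(list)
for text, source, theme in notes:
    clusters[theme].append((text, source))

# Density suggests a theme; singletons stay visible instead of being
# flattened into a frequency count.
for theme, grouped in sorted(clusters.items(), key=lambda kv: -len(kv[1])):
    marker = "theme" if len(grouped) > 1 else "unique insight"
    print(f"{theme} ({marker}, {len(grouped)} notes)")
```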

Now I'll also run the transcripts past AI, asking 'What are the themes in here?', 'Here are the themes I identified, are they correct?', and 'Here are the themes I identified, are any missing?'.

It's not a replacement for the manual pass, rather a check. Just someone looking over my shoulder so I don't overlook things.

2.3

How I document findings so they survive into design

The hardest part of research isn't gathering the insight; it's keeping it alive through the design phase, when scope pressure and momentum start to blur it.

I always produce an outcomes document as a minimum; a summary that captures findings, decisions, and the trail from insight to design intent.

I have a two-tier approach to documentation. When there's budget and time, I'll also produce a full research report going in depth on methodology and per-feature analysis. Reports are expensive and arduous to write, but they carry weight. Design is often seen as subjective; a well-structured report with real evidence makes it less so. They're also how I keep myself honest. If I'm proposing a change, I want to point to the evidence rather than asking people to take my word for it.

I follow a consistent report structure: Introduction, Objectives, Method, Key Findings, Detailed Breakdown, and Recommendations.

Key findings always come before the deep dive. Screenshots of the current UI sit alongside observations. Recommendations are numbered and short, liftable directly into a design brief.

2.4

What I don't produce, and why

I don't make personas.

They become artefacts that get lost outside the UX team, generalise away the insights that matter, and subtly shift the team's focus from 'here's what we learned' to 'here's who we're designing for', which sounds similar but isn't.

Instead, I'll capture user-group realities (accessibility needs, typical environments, time pressures, emotional context) as design requirements.

When I need to tag user groups for coverage, I do it as tags; overlapping, fluid, and treated as findings rather than inputs.

3.0

Design & Prototyping

Deciding what to do about what the research surfaced.

Once research is done, the job shifts. I'm no longer trying to understand what's happening; I'm trying to decide what to do about it.

Time and budget now mean something they didn't in research, because every decision has a cost attached. The work is partly about designing good solutions, and partly about deciding which problems to solve first, which to solve together, and which to leave alone.

3.1

From themes to design decisions

Grouping themes by shared root cause

Often several insights point to the same underlying problem. A small menu tweak might address three different research findings because they were all symptoms of one structural issue.

Root causes over band-aids

Surface findings can mask deeper structural problems. The 'I don't know where to go for X' example might indicate a labelling issue, a structural issue, or a question about whether menus are the right paradigm at all. The instinct for telling them apart comes partly from pattern recognition built through experience, but mostly from exposure: trying enough different software, in different paradigms, that I know what 'good' looks like in similar contexts.

Variations to see which is right

Multiple options laid out next to each other, each version copied out before any significant change so I can revert or compare. You can often tell when you've landed on the right one, but that tell is earned through exposure, not instinct.

Talking options through

Conversations (with a colleague, a developer, or AI) are how I stress-test whether a solution is landing right. Not because the other person will have the answer, but because articulating the problem often reveals whether I've actually understood it.

Checking conventions before breaking them

Before finalising a novel interaction, I'll check how established tools handle the same problem. Breaking convention is sometimes the right move, but shouldn't be the default. When I do deviate, I document the justification so whoever inherits the design later knows why.

3.2

Context loading

Before designing anything in detail, I want to understand the circumstances of its use.

Where will this run? Who'll be using it, at what moments? What breaks first if the environment isn't perfect? Getting this wrong shows up as 'obvious' design flaws later, which were actually the result of designing for a context that never existed.

Real deployment, not ideal deployment

Stakeholders might paint a rosy picture of idealised scenarios when reality is starkly different. Designing for the lowest common denominator is key; a shiny animated interface that crashes the key computer the business depends on isn't fit for purpose.

Time sensitivity and error cost

Safety-critical tools need protection against residual state and fast-but-wrong inputs. Exploratory tools need low friction and permission to wander. Most products are a mix, and the work is knowing which mode applies where.

User role segmentation

Which user groups need which access, which permissions, which journeys. Each group gets its own journey rather than being squeezed into a single unified one.

Feature prioritisation with future-proofing

You can't build everything at launch, but the architecture should always leave room for the features you've had to cut.

In practice
3.3

Structuring the interface

Structure often makes or breaks a product.

Features can be added later, but fundamental IA decisions are expensive to change. This can be a long process; hours of research, organisation, naming, re-organising, and re-naming. It's also the process that results in an output that can look the least impressive (I've definitely had comments along the lines of 'oh, you made a tree diagram, why did that take so long?').

Entry and exit matter

Homepages, empty states, and post-task states are often under-designed because they feel like glue, but they shape the rhythm of the whole experience.

Information hierarchy

What needs to be visible without scrolling? One click away? Behind progressive disclosure? The answer is rarely about the information itself; it's about what the user is trying to do in that moment, and what they'll need next.

Persistent mission-critical identifiers

When the cost of being in the wrong context is high, the context has to be visible at all times. Experts carry domain knowledge the interface can't replicate; the interface's job is to give them the information they need to apply that knowledge quickly.

In practice
3.4

Setting up the visual language

Once the structure is settled, the visual language gives the interface its character.

I tend to spend more time here than people expect, partly because it's scaffolding everything else hangs on, and partly because getting it right early saves rework later.

Style guide or design system?

It's not a question of project size; it's about who will maintain the product, how often it'll change, and whether other teams will build on top of it. If I'm unsure, I'll start with just styling and a few notes: it can evolve into a system if needed, and I haven't sunk time into defining rules that might never be needed.

Adapting client brand guidelines

Brand guidelines are usually built for marketing and rarely survive contact with interaction design. My job is to translate, not transplant. I've told enough clients that it doesn't matter how much they argue, their brand colour will not be accessible against white text. I'll create large palettes of brand shades, ensure they meet contrast requirements, and build systems that let anyone instantly pair two colours in a way that meets accessibility guidelines.
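
The pairing system is ultimately WCAG's contrast-ratio maths. A minimal sketch, using placeholder shades rather than any real brand palette:

```python
def relative_luminance(hex_colour):
    """WCAG 2.x relative luminance from a '#RRGGBB' hex string."""
    def channel(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_colour[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast(a, b):
    """Contrast ratio between two colours, from 1:1 up to 21:1."""
    la, lb = sorted((relative_luminance(a), relative_luminance(b)), reverse=True)
    return (la + 0.05) / (lb + 0.05)

# Placeholder brand palette: pair any two shades that meet AA for body text (4.5:1).
palette = ["#0B3D91", "#5B8DEF", "#DCE6FB", "#FFFFFF", "#111111"]
pairs = [(x, y) for i, x in enumerate(palette) for y in palette[i + 1:]
         if contrast(x, y) >= 4.5]
print(pairs)
```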

Mockups as persuasion tools

When a client is reluctant to go with a particular direction, debating it in the abstract rarely shifts anyone. A high-fidelity mockup with their own content in it often does.

When to stray from a design system

'Is the deviation solving a specific problem the system doesn't solve, or am I just doing something different for the sake of it?' If the former, it's justified. If the latter, use the system.

In practice
3.5

Collaborating with developers

Communicating and carrying through design decisions to production is a lot easier when the build side is genuinely engaged.

That's often a challenge, because developers are busy and have their own priorities, but it's a good investment of time for both sides to get on the same page about what we're trying to achieve and why.

I choose my battles

I'm mindful of budget and effort. I aim to find the solution that takes the least developer effort for the greatest user benefit, which means more upstream design work for me but less friction downstream.

Involvement in research changes the reception

If I bring a developer into a user interview, they come out with lived experience of the user's frustration rather than a written summary. Six weeks later, when I show a design that changes one menu item, the developer who was there gets it immediately. The developer who wasn't wonders why I'm bothering.

Trust is the other mode

Some of the best working relationships I've had are ones where the developer just trusts that if I say something matters, it matters. Trust is earned through track record, and it's worth protecting.

4.0

Iteration

Shipping isn't the end of the process; it's the start of a different part.

Once a product is live, users interact with it in ways I never could have predicted (like really, you're gonna do that?!), and often in numbers that make patterns visible that weren't visible before.

4.1

Post-launch measurement

Where research analytics are about understanding the status quo, iteration analytics are about validating whether changes actually improved things.

It's not a replacement for qualitative research; it's a complement that catches patterns I can't elicit and users I can't interview.

Defining success operationally

I'll start by writing a single sentence about what a successful interaction looks like ('a user finding a document, quickly and easily'), then break it into sub-criteria that can each be measured.

Mapping routes per method

Users reach the same goal through different paths, and collapsing them into a single metric hides the differences that matter. I'll document the optimal route for each method step-by-step, so each can be measured independently.

Operationalising qualitative success into observable behaviour

What does a successful search session look like in the data? Only type once, no error messages, find the document, enter it, find what they were looking for, don't go back to the results list. Each of those becomes measurable.
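
As a sketch, those criteria translate into a session classifier like the one below; the event names are assumptions about the tracking setup, not a real schema:

```python
# Hypothetical session: an ordered list of event names from analytics.
session = ["search_typed", "results_shown", "document_opened", "document_scrolled"]

def successful_search(events):
    """Each sub-criterion from the success sentence, made checkable."""
    typed_once = events.count("search_typed") == 1
    no_errors = "error_shown" not in events
    found_doc = "document_opened" in events
    # 'Found what they were looking for': didn't bounce back to the results.
    no_return = "document_opened" not in events or \
        "results_shown" not in events[events.index("document_opened"):]
    return typed_once and no_errors and found_doc and no_return

print(successful_search(session))  # True for this session
```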

Critical evaluation of tool-reported signals

Analytics tools give you labels like 'rage clicks' and 'dead clicks', but the labels aren't always what they seem. Rage clicks can include users scrolling with scrollbar arrows. Every metric I use is checked against session replays first.

Explicit caveats on every metric

What the numbers don't capture matters as much as what they do.

4.2

Dashboard goals

  1. Monitor the current performance of each method.
  2. Compare data once design changes have shipped to see whether they actually improved things.
  3. Document the specific queries and strings I'm using, so they can be revised if better methods emerge or the tooling changes.

The wrangling to get the data out is often harder than the analysis itself. I run it anyway because I want to know, but I've definitely spent way too long in DataDog writing long (awful) queries to investigate things. I've also come back the next day and realised I'd written them wrong.
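
Goal 2 in miniature, with invented numbers standing in for what the documented queries would return:

```python
# Hypothetical per-method counts before and after a change: (successes, sessions).
before = {"search": (412, 1030), "browse": (188, 240)}
after = {"search": (655, 1101), "browse": (195, 251)}

for method in before:
    rate_b = before[method][0] / before[method][1]
    rate_a = after[method][0] / after[method][1]
    # Per-method comparison: collapsing methods together would hide the differences.
    print(f"{method}: {rate_b:.0%} -> {rate_a:.0%} ({rate_a - rate_b:+.1%})")
```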

In practice