Example Projects

  • Why the most popular tagline was actually the worst

    The Challenge:
    A company was preparing to launch a new tagline to support a key product feature. They had three options, each using a different tone and wording to highlight the offering, and wanted to understand which one resonated most with their audience.

    What I Did:
    I ran a monadic survey test with over 1,000 participants, evaluating each tagline across dimensions like emotional response, clarity, appeal, brand fit, and relevance. We also measured preference both before and after introducing a short explanation of the actual product feature.
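A pre/post read-out like this can be sketched in a few lines of Python. Everything below is illustrative: the tagline labels, responses, and the simple preference-share summary are invented, not the study's actual data.

```python
from collections import Counter

# Hypothetical responses: each participant's preferred tagline,
# recorded before and after the product feature was explained.
before = ["A", "A", "B", "A", "C", "A", "B", "A", "C", "A"]
after  = ["B", "C", "B", "B", "C", "B", "B", "C", "C", "B"]

def preference_share(choices):
    """Share of participants preferring each tagline."""
    counts = Counter(choices)
    total = len(choices)
    return {tag: counts[tag] / total for tag in sorted(counts)}

pre = preference_share(before)
post = preference_share(after)

# The per-tagline shift exposes options that win on first impression
# but collapse once people understand what the product actually does.
shift = {tag: post.get(tag, 0) - pre.get(tag, 0)
         for tag in set(before) | set(after)}
```

In this toy data, tagline "A" dominates before the explanation and loses all support afterwards, which is exactly the pattern the real study surfaced.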

    The Outcome:
    One tagline stood out early on as the favourite, but not for the right reasons. It turned out that many people misunderstood what the line was promising, assuming it meant something the product couldn’t actually deliver.
    Once participants were shown what the product really offered, preference shifted dramatically, and that originally popular option became the least preferred.

    Why It Matters:
    What looked like a clear winner would have created false expectations, potentially leading to disappointed users or even legal complications. This approach allowed the team to spot misunderstandings early and select a message that truly aligned with the product’s value, protecting both the brand and the customer experience.

  • Bringing the customer voice into product planning (without slowing things down)

    The Challenge:
    At a travel tech company, deciding which features to build came down to two things: how hard a feature would be to build and how much business value it promised. But one key piece was missing: what customers actually cared about. I saw an opportunity to bring that voice into the process.

    What I Did:
    I kicked off a company-wide effort to make customer value a natural part of how we prioritise features. Instead of relying only on gut feeling or business impact, I introduced a simple but structured way to capture what matters most to travellers.

    Here’s how it worked:

    • I partnered with product managers and designers to gather each team’s top feature ideas.

    • We turned those into short, clear statements that real customers could easily react to.

    • We then ran quick surveys with active users to find out which features they valued most and why.

    To keep things practical, I built a framework that combined three perspectives:

    • How feasible it is to build

    • The potential business impact

    • How much customers actually want it

    This gave teams a clear visual of what’s worth building now, what’s a strategic bet, and what’s not worth the effort.
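The three-lens framework above can be sketched as a small scoring routine. The feature names, 1–5 scores, and the threshold of 3 are all hypothetical placeholders, not the company's real inputs or cut-offs.

```python
# Illustrative feature scores on a 1-5 scale for each of the three lenses.
features = {
    "saved-searches": {"feasibility": 4, "business": 3, "customer": 5},
    "ar-seat-map":    {"feasibility": 1, "business": 4, "customer": 4},
    "price-alerts":   {"feasibility": 5, "business": 4, "customer": 4},
    "fax-booking":    {"feasibility": 5, "business": 1, "customer": 1},
}

def classify(scores, threshold=3):
    """Rough quadrant logic: valuable and feasible -> build now;
    valuable but hard -> strategic bet; otherwise deprioritise."""
    valuable = (scores["customer"] >= threshold
                and scores["business"] >= threshold)
    if valuable and scores["feasibility"] >= threshold:
        return "build now"
    if valuable:
        return "strategic bet"
    return "not worth the effort"

plan = {name: classify(scores) for name, scores in features.items()}
```

The point of the sketch is the shape of the decision, not the numbers: once each lens is scored, the "build now / strategic bet / not worth it" call becomes a mechanical, repeatable step rather than a debate.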

    The Outcome:

    • Teams gained evidence to back up decisions and drop low-value features with confidence

    • Roadmap discussions became sharper and more customer-driven

    • The process was simple enough to repeat without adding extra delays or complexity

    Why It Matters:
    This project brought customer value into the spotlight. By making it visible and measurable, the company could better align its roadmap with what travellers really care about without slowing delivery or compromising business goals.

  • Testing the boundaries of personalisation in mental health apps

    The Challenge:
    A mental health app team wanted to make support feel more personal (e.g. recommending exercises and prompts tailored to each user). The tricky part? Personalisation meant collecting more data, and the team worried this could come at the cost of trust. The big question was: Can we personalise without crossing a line?

    What I Did:
    I designed and ran a concept testing study to explore different personalisation scenarios. We tested how users responded to varying levels of data sharing (e.g. mood tracking, behavioural data) and autonomy (e.g. choosing content vs. getting auto-suggestions). Participants were shown different app feature prototypes, and we measured their emotional response, perceived helpfulness, and likelihood to use the feature.
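The simplest read-out of a concept test like this is comparing mean ratings across prototype conditions. The sketch below uses invented 1–7 ratings and condition labels, not the study's data.

```python
from statistics import mean

# Hypothetical ratings (1-7) of interest in a feature, grouped by how
# much control the prototype gave users over data sharing.
ratings = {
    "opt-in, user-chosen":   [6, 7, 5, 6, 7],
    "auto-collect, suggest": [3, 2, 4, 3, 2],
}

# Condition means give a first look at which design users prefer;
# here the opt-in design clearly outperforms silent data collection.
means = {condition: mean(scores) for condition, scores in ratings.items()}
```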

    The Outcome:
    The message from users was clear:

    They felt most comfortable when they were in control. Being able to opt in, set boundaries, and decide what data to share built trust and interest. Features that quietly collected data and served up automatic suggestions raised the most privacy concerns, even if they promised “better” personalisation.

    Personalisation was welcome but only when it felt earned, not imposed.

    Why It Matters:
    The study gave the team confidence to move forward with a design that put user agency first: opt-in personalisation with clear choices about data sharing. It showed that ethical design and personalisation don’t have to be at odds. When done right, they can reinforce each other.

    More details in my CHI paper

  • When “more” didn’t mean “better”

    The Challenge:
    A global online travel agency wanted to optimise how it sold ancillary services: things like seat selection, baggage, and travel insurance. The assumption? Bundling more services together would feel like a better deal and boost sales. But customers weren’t biting, and the team didn’t know why.

    What I Did:
    I ran a conjoint analysis study, simulating real booking scenarios with over 800 long-haul and family travellers. We asked people to choose between different bundles of extras at varying prices, helping uncover not just what they valued but what they were actually willing to pay for.
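One lightweight way to summarise choice data like this is a "counting" analysis: for each attribute level, the share of bundles containing it that participants actually chose. It is a crude stand-in for full part-worth estimation, and every bundle, level, and choice below is invented.

```python
# Each record: a bundle's attribute levels, plus whether the participant
# chose that bundle over the alternatives (1 = chosen, 0 = not chosen).
choices = [
    ({"bag": "included", "insurance": "none", "price": "low"},  1),
    ({"bag": "included", "insurance": "full", "price": "high"}, 0),
    ({"bag": "none",     "insurance": "none", "price": "low"},  1),
    ({"bag": "none",     "insurance": "full", "price": "high"}, 0),
    ({"bag": "included", "insurance": "none", "price": "high"}, 0),
    ({"bag": "none",     "insurance": "none", "price": "low"},  1),
]

def choice_shares(observations):
    """For each (attribute, level), the fraction of bundles containing
    that level that were actually chosen."""
    seen, chosen = {}, {}
    for attrs, picked in observations:
        for attr, level in attrs.items():
            key = (attr, level)
            seen[key] = seen.get(key, 0) + 1
            chosen[key] = chosen.get(key, 0) + picked
    return {key: chosen[key] / seen[key] for key in seen}

shares = choice_shares(choices)
```

Even in this toy data the price effect swamps everything else: low-priced bundles are always chosen, high-priced ones never are, regardless of which extras they contain. That mismatch between liking an extra and paying for it is the pattern the real study quantified.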

    The Outcome:
    We uncovered a key problem:

    • People wanted the big-ticket extras (like checked luggage and cancellation protection), but once these were added to a bundle, the overall price shot up, and perceived value suddenly dropped.

    • In contrast, bundles of small, inexpensive services (things like seat selection and basic customer service) felt like good value for money, even if the individual items weren’t seen as must-haves.

    Put simply, there was a mismatch between desire and willingness to pay. People liked the idea of the bigger services, but not at the price they were being asked to pay.

    Armed with this insight, the company pivoted from offering large, “everything-in” bundles to lighter, flexible packages, focusing on perceived value over sheer quantity.

    Why It Matters:
    Without this research, the company would have continued bundling based on assumptions, missing the subtle but critical difference between what people want and what they’re actually willing to pay for. Sometimes, less really does feel like more.