Guiding the future of choice research

Rob Kaiser, PhD

Chief Methodologist

People make weird choices.

Sometimes, it’s going with the middle of the pack rather than the premium option. Sometimes, it’s passing over a no-brainer bundle or falling for a feature everyone else overlooked. Every time, unexpected decisions stop strategists and marketers in their tracks.

But are these choices actually unexpected, or are vital pieces of information missing?

As a research psychologist and someone who’s led advanced analytics and innovation teams for decades, I’ve always been driven by one central concern: data quality. Without quality data, it doesn’t matter how sophisticated your model is. You’re building on shaky ground.

One such model, the discrete choice model (DCM), is one of the most powerful tools we have. Relied upon for years to understand what drives customer decisions, DCMs don’t just ask people what’s important; they put real options in front of them and let us observe the choices people make. Paired with a good experimental design, they provide powerful insights into what truly drives decisions and why.
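At the heart of most DCMs is a simple idea: each option has a utility built from attribute part-worths, and the probability of choosing an option follows a multinomial logit. The sketch below illustrates that mechanic with hypothetical part-worths and product profiles; none of the numbers or names come from the study described here.

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(U_i) / sum_j exp(U_j)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical part-worths: how much each attribute level adds to utility.
part_worths = {"price_low": 1.2, "price_mid": 0.4, "price_high": -0.8, "feature": 0.6}

# Three hypothetical product profiles: (price level, includes feature?)
profiles = [
    ("price_low", False),
    ("price_mid", True),
    ("price_high", True),
]

utilities = [
    part_worths[price] + (part_worths["feature"] if has_feature else 0.0)
    for price, has_feature in profiles
]
probs = choice_probabilities(utilities)
```

In a real study the part-worths are estimated from observed choices rather than assumed, but the simulation step — turning utilities into choice shares — works the same way.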

But there’s a catch: DCMs only work if people are paying attention. When you’re dealing with complex research, expensive niche audiences like IT or healthcare decision-makers, or clients who want to pack a lot into one study, keeping respondents engaged gets harder. This pressure to get more while maintaining engagement has pushed our team to create what we call our Guided DCM.

What is a Guided DCM?

The idea behind a Guided DCM is simple but powerful: allow for moments of guided reflection among the choice tasks. Open-ended questions break up the traditional DCM exercise and invite explanations, then AI-driven follow-up questions dig deeper into respondents’ thinking like a skilled human moderator might.
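The interleaving described above can be sketched as a simple survey-flow generator: choice tasks proceed as usual, and every few tasks a guided-reflection open-end plus an AI follow-up is inserted. The cadence (`reflect_every=3`) and the step names are illustrative assumptions, not the actual study design.

```python
def guided_dcm_flow(choice_tasks, reflect_every=3):
    """Interleave guided-reflection open-ends among DCM choice tasks.

    After every `reflect_every` choice tasks, yield an open-ended
    question and an AI follow-up probe (hypothetical step names).
    """
    for i, task in enumerate(choice_tasks, start=1):
        yield ("choice_task", task)
        if i % reflect_every == 0 and i < len(choice_tasks):
            yield ("open_end", f"Why did you choose that option in task {i}?")
            yield ("ai_follow_up", f"probe_task_{i}")

# Six hypothetical choice tasks with a reflection break after the third.
steps = list(guided_dcm_flow([f"task_{n}" for n in range(1, 7)], reflect_every=3))
```

The point of the structure is that the reflection moments are scheduled, not bolted on: respondents get a predictable rhythm of choosing and explaining.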

I’ve wanted to utilize this interview-like structure for a long time, but incorporating standard open-ends was understandably risky for clients, who worried about survey length or complexity. With the advent of large language models (LLMs), however, came the opportunity to test this effectively, and I believe we’ve opened the door to something that will shape the future of research.

What we did

Opening the door did not come easily. We had to work carefully to avoid increasing survey fatigue and irritating people with mundane repetition. This meant balancing the number of open-ended questions and their placement vis-à-vis the choice tasks. It also meant training the AI prompts meticulously, much like you would brief a good moderator: here’s context on the brand/product, here’s what we want to learn, here’s the direction in which to probe, here’s what matters most – and remember to vary the questions.
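A moderator-style briefing like this can be assembled into a prompt template. The sketch below is purely illustrative — the field names, wording, and the idea of passing earlier questions to discourage repetition are my assumptions about how such a briefing might be structured, not the actual prompts used.

```python
def build_probe_prompt(brand_context, study_goals, probe_direction, prior_questions):
    """Assemble a moderator-style follow-up prompt (illustrative template only)."""
    return (
        "You are a skilled qualitative moderator.\n"
        f"Brand/product context: {brand_context}\n"
        f"Study goals: {study_goals}\n"
        f"Probe in this direction: {probe_direction}\n"
        "Ask ONE short follow-up question. Vary your wording; "
        f"do not repeat any of these earlier questions: {', '.join(prior_questions)}"
    )

# Hypothetical briefing for a single follow-up probe.
prompt = build_probe_prompt(
    brand_context="mid-range laptop line",
    study_goals="understand price vs. feature trade-offs",
    probe_direction="why price outweighed the extra feature",
    prior_questions=["Why did you pick that one?"],
)
```

Keeping the briefing in one place makes it easy to test and refine the language study by study, the same way you would re-brief a human moderator.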

That meticulous training took time: time to test the prompts, time to refine the language, time to make sure the AI understood not only the literal meaning of a respondent’s answer but also the goals of the study.

What we learned

All that time proved to us that making it feel conversational is key. Throwing in extra questions doesn’t improve the experience or the data; building a back-and-forth to encourage people to think and reflect on their decisions does.

Improvements from the conversations include:

  • 45% increase in open-end length. People said more in their responses than they would have without the AI-guided follow-ups.
  • Sharp drop to a 6% non-answer rate. Fewer respondents failed to provide meaningful answers to the questions.
  • Predictive model accuracy rose from 89% to 96%, which meant we could better simulate actual in-market behavior.

Along with these direct measures, Guided DCMs provide a more realistic model than standard DCMs for our clients. When respondents are disengaged, we end up with less reliable data and a weaker ability to predict sensitivity to changes. With this approach, we have been able to refine a trusted technique and better align the derived importance of attributes (like price and specific features) with what clients see in the market.

Beyond the numbers and modeling, something just as valuable came through: the customer’s voice. Hearing respondents defend their choices in their own words gave context that pure quantitative models often lack, bringing the “why” to the “what” and giving us quant with qual depth.

Best practices

Let me offer a few takeaways:

  • Craft your prompts intentionally. It’s not “set it and forget it.” The AI needs guidance toward your research goals.
  • Vary the questions to avoid boredom and fatigue. Respondents pick up on repetition fast, and you risk disengagement.
  • Balance cognitive load. We reduced the number of DCM tasks slightly to make room for the open-ends.
  • Remember: AI is a tool, not a substitute for thoughtful research design. Human judgment and creativity still matter.

What’s next for Guided DCM

Guided DCM shines in deconstructing complex purchase trade-offs. We have started with particularly intense cases like decision-makers in IT, healthcare, and finance, but it’s already showing results in other areas, including consumer-related products and services.

In messaging research, clients often want to understand the right mix of messages to resonate with their audiences. The Guided DCM not only discerns the winning combination; it uncovers the strengths behind that winner, giving brands better building blocks for strategies on how to say what to whom.

Our “Guided” approach isn’t just limited to DCM, either. Sometimes, a DCM is not the right research tool, but another set of structured tasks is, like MaxDiffs and their various flavors. We can add the Guided structure to improve results in these situations where appropriate, bringing more of the human elements into the data.

Why this excites me

What excites me most about this success is that it brings us closer to people. The research psychologist in me is drawn to the challenge of understanding why people do what they do – not just measuring their actions but uncovering the motivations behind them. Seeing this approach come to life, watching it transform both the data and the conversations we can have with clients, reminds me why I love this field. It’s a meaningful step forward, blending science and humanity in a way that benefits everyone involved.

As so often happens, our innovations arise in response to a client’s specific needs. A client comes with a real problem, and we co-create the next solution together. We’re excited to partner with more clients to explore what’s next. If you’re curious how Guided DCM could unlock richer insights or help tackle a tough research challenge, let’s talk. This is just the beginning, and I can’t wait to see where it leads.

Connect with Rob Kaiser, Chief Methodologist at robkphd@psbinsights.com to learn more.
