MaxDiff analysis and preference testing


Dipsticks Research offers MaxDiff analysis that can be easily integrated into your research, providing preference/importance scores for multiple items such as product features, concept designs and brand preferences.

A brief guide

MaxDiff (also known as best/worst scaling) is an alternative to measuring preference with methods such as rating scales or ranking questions. Rating scales often discriminate poorly between items: respondents become fatigued by long lists, and the results do not accurately represent the strength of preference between items. Ranking questions also work poorly with long lists; while the output provides an order of importance, it does not measure the strength of preference between items.

MaxDiff solves these issues by asking respondents to trade off their best and worst options from sub-sets of items across multiple screens, simulating a more realistic decision-making task. Because the response options presented to respondents are discrete, MaxDiff does not suffer from scale bias (i.e. differences in how respondents interpret a rating scale). Since only a sub-set of items appears on each screen (typically 4 or 5 items), the method can handle long lists, breaking the task into more intuitive, bite-size chunks for respondents to answer. The output from MaxDiff provides both the order and the strength of preference, ultimately giving more accurate and extensive insight.
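To make the best/worst mechanics concrete, the sketch below scores a handful of hypothetical responses using simple best-minus-worst counts. (This is only an illustration of the raw data a MaxDiff task produces; the item names and responses are invented, and counts analysis is the naive baseline rather than the modelling approach described later on this page.)

```python
from collections import Counter

# Hypothetical responses: each entry is (items shown on screen, best pick, worst pick).
responses = [
    (["Price", "Battery", "Camera", "Screen"], "Price", "Screen"),
    (["Battery", "Camera", "Screen", "Weight"], "Camera", "Weight"),
    (["Price", "Camera", "Weight", "Screen"], "Price", "Weight"),
]

best, worst, shown = Counter(), Counter(), Counter()
for items, b, w in responses:
    shown.update(items)   # how often each item appeared on screen
    best[b] += 1          # times chosen as "best"
    worst[w] += 1         # times chosen as "worst"

# Best-minus-worst count, normalised by how often each item was shown.
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
for item, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item:8s} {s:+.2f}")
```

The normalised score ranges from +1 (always picked as best) to -1 (always picked as worst), giving a quick ordering of the items before any model is fitted.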

On-screen example of MaxDiff

Our approach

Design stage

At Dipsticks Research, our in-house statisticians carefully design the MaxDiff questioning framework, ensuring that it is balanced, efficient for each respondent and unbiased towards any of the items tested. The design stage is the foundation of robust and accurate results, so we tailor the algorithmic set-up to your research needs. Experienced researchers are also on hand to guide you through the design stage, with the core aim of providing highly accurate and meaningful data outputs.
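One property of a balanced design is that every item is shown the same number of times across the survey. The sketch below builds such a layout by simply cycling through the item list; it is a deliberately naive illustration (the eight item labels and the 3-appearances parameter are invented), and production designs also balance which items appear together, which takes considerably more care than this round-robin approach.

```python
import itertools

def round_robin_design(items, per_screen=4, appearances=3):
    """Naive balanced layout: cycle through the item list so that every
    item is shown exactly `appearances` times across all screens."""
    total_slots = len(items) * appearances
    assert total_slots % per_screen == 0, "slots must fill whole screens"
    stream = itertools.cycle(items)
    n_screens = total_slots // per_screen
    return [[next(stream) for _ in range(per_screen)] for _ in range(n_screens)]

items = [f"Item {c}" for c in "ABCDEFGH"]   # 8 hypothetical items
design = round_robin_design(items)           # 8 * 3 / 4 = 6 screens of 4 items
```

Checking per-item balance (each item appearing an equal number of times) is straightforward; checking pairwise balance is where dedicated design algorithms earn their keep.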


We utilise machine learning techniques, rather than simple counts analysis, to provide stable, unbiased models that also add flexibility to your results: you can visualise the strength of preference for each item tested at an aggregate level, and you can also apply filters to the data. To understand item preference among specific sub-groups in your target market, we can add up to 10 breaks to the tool output, expanding the capabilities and insight that can be derived from the MaxDiff analysis.

MaxDiff results give each item tested a score on a scale of 0 to 100. These scores are ratio-scaled: if one item's preference score is double that of another, its preference can be inferred to be twice as strong.
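One common way to obtain ratio-scaled scores of this kind is to exponentiate the raw utilities from a fitted choice model and rescale them to sum to 100. The sketch below shows that transformation under the assumption of a simple softmax-style rescaling; the item names and utility values are invented, and the exact transformation used in any given MaxDiff tool may differ.

```python
import math

def probability_scaled(utilities):
    """Rescale raw logit utilities to scores summing to 100, so that
    'twice the score' can be read as 'twice the preference'."""
    exps = {item: math.exp(u) for item, u in utilities.items()}
    total = sum(exps.values())
    return {item: 100 * e / total for item, e in exps.items()}

# Hypothetical utilities from a fitted choice model.
utilities = {"Price": 1.2, "Camera": 0.4, "Battery": 0.0, "Screen": -0.8}
scores = probability_scaled(utilities)
```

Because the exponentiation removes the arbitrary zero point of logit utilities, ratios between the rescaled scores become meaningful in a way that ratios between raw utilities are not.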

Get in touch

We provide bespoke and cost-effective MaxDiff solutions to meet your insight requirements. To find out more, contact Steven Pesarra on 01434 611160 or email

t: 01434 611160
f: 01434 611161