Three-fourths of market researchers admit that surveys are too long, according to the 2014 Annual Survey of Market Research Professionals sponsored by Market Research Careers.
About 20% of professional survey takers surveyed in 2015 say that a survey of 15 to 19 questions is “getting too long” and the same percentage won’t even consider starting a survey with 20 questions. Virtually none of these survey takers think that a survey under 10 questions is too long.
A typical Adaptive Survey® is less than five questions.
An Adaptive Survey® is a market research technology that allows researchers to get more answers from fewer questions. The technology addresses respondents' frustration with survey length, market researchers' cost concerns, and the quality concerns of decision makers who consume market research reports.
The overall planning and implementation process is the same as any other online survey. For many market researchers, the process includes…
Respondents see an online survey that looks much like any other survey – except much shorter. The Adaptive Question® can be any broadly worded question and includes a randomized list of answers submitted by other respondents. The list includes a mix of high ranking answers and new answers. Respondents are asked to select answers they agree with and to drag them into priority order. There is an option to specify answers not on the list so that it looks familiar to respondents – similar to an ordinary multi-select question with an other-specify option.
While the process looks simple to respondents, multiple complex issues are resolved behind the scenes.
In the case of the Adaptive Survey® methodology, the objective is to put a set of answers into priority order. Adaptive Survey® uses a technique called ‘limited choice’ that increases the chance that we will get good differentiation between answers. Essentially, respondents review 10 comments and indicate which four they agree with most. They also indicate which comments are priorities for them and the technique limits those choices as well. This technique maximizes the differences between comments and makes it easier to see which issues need attention first.
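The limited-choice mechanic can be sketched in a few lines. This is an illustrative tally, not CloudMR's implementation: the "four of ten" agree limit comes from the text above, while the priority limit of two and the comment names are assumptions for the example.

```python
from collections import Counter

# Each respondent saw 10 comments, chose the 4 they agreed with most,
# and flagged at most 2 of those as priorities (the priority limit of 2
# is an assumption for illustration; the comment labels are invented).
responses = [
    {"agree": {"price", "battery", "camera", "screen"}, "priority": {"price", "battery"}},
    {"agree": {"price", "battery", "weight", "screen"}, "priority": {"price"}},
    {"agree": {"price", "camera", "weight", "storage"}, "priority": {"camera", "price"}},
]

# Tally how often each comment was agreed with and prioritized.
agree = Counter(c for r in responses for c in r["agree"])
priority = Counter(c for r in responses for c in r["priority"])

# Because every respondent must leave six comments unselected, the
# counts spread apart and the comments differentiate quickly.
for comment, n in agree.most_common():
    print(comment, "agree:", n, "priority:", priority[comment])
```

Forcing a fixed number of selections is what creates the differentiation: a respondent cannot simply agree with everything, so each choice carries information.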
CloudMR™ automatically sorts out the answers and presents the top answers on a 2x2 matrix. The priority answers that are most popular are located in the upper right quadrant. Niche answers that are important among a smaller group are in the upper left.
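The quadrant placement described above can be sketched as a simple classification on the two percentages. The 50% cut point is an assumption for illustration; the document does not state CloudMR's actual thresholds, and the answer labels are invented.

```python
# Place each answer on a 2x2 matrix: Agree % on the x-axis,
# Priority % on the y-axis. The 0.5 cut point is an assumption.
def quadrant(agree_pct, priority_pct, cut=0.5):
    if priority_pct >= cut:
        # High priority: popular (upper right) vs. niche (upper left).
        return "upper right (popular priorities)" if agree_pct >= cut else "upper left (niche priorities)"
    return "lower right" if agree_pct >= cut else "lower left"

# Hypothetical (agree, priority) percentages for three answers.
answers = {
    "lower the price": (0.72, 0.65),
    "add color options": (0.30, 0.60),
    "improve packaging": (0.55, 0.20),
}

for name, (a, p) in answers.items():
    print(name, "->", quadrant(a, p))
```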
Answers with a large base are more certain than answers with a small base. Agree and Priority percentages are adjusted for the number of people who saw the answer using the exact confidence interval described by C. J. Clopper and E. S. Pearson. CloudMR™ applies a two-sided 90% confidence interval to the raw Agree and Priority percentages and plots its lower bound, so we are 95% confident that the actual values are this high or higher.
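The Clopper-Pearson lower bound can be computed directly from the binomial distribution. The sketch below uses bisection on the binomial survival function rather than any particular statistics library; it illustrates the adjustment, not CloudMR's code.

```python
from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson_lower(k, n, alpha=0.10):
    """Lower bound of the two-sided (1 - alpha) Clopper-Pearson interval.

    With alpha = 0.10, the lower bound leaves only 5% probability below
    it, so the true proportion is at least this high with 95% confidence.
    Found by bisection: the largest p with P(X >= k | n, p) <= alpha / 2.
    """
    if k == 0:
        return 0.0
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisection: 60 halvings are ample precision
        mid = (lo + hi) / 2
        # P(X >= k) = 1 - P(X <= k - 1); grows as p grows
        if 1 - binom_cdf(k - 1, n, mid) < alpha / 2:
            lo = mid
        else:
            hi = mid
    return lo

# 8 of 10 respondents agreed: the plotted value drops well below 80%.
print(clopper_pearson_lower(8, 10))
# 80 of 100 agreed: same raw percentage, but the larger base keeps the
# plotted value much closer to 80%.
print(clopper_pearson_lower(80, 100))
```

This is why a large base matters: the same 80% raw agreement is penalized far less when 100 people saw the answer than when only 10 did.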
Sometimes respondents add answers that are duplicative. For example, one respondent might say they are concerned about ‘the color’ and another might say ‘offer it in blue.’ Researchers may want to combine these answers since they are both concerned with color. Various researchers call this process ‘coding,’ ‘netting,’ or ‘building themes’ – a fairly tedious prospect when dealing with open-ended questions. Our system groups all suggestions using a patent-pending crowd-sourcing technology that will identify the general areas that need more resources.
It is interesting to note that the 2nd most actionable answer was mentioned by only one person, but other respondents who saw it voted it into the top ten. This fact contrasts with traditional open-ended coding where an answer mentioned only one time is usually overlooked.
In this example, the top ten themes are the same after coding the top 25 answers as after coding all 223 answers. Coding the top 50 answers produces a result identical to coding all comments – the same top 10 answers in the same order.
CloudMR™ develops an Action Score™ for every segment based on information in your survey or from uploaded data. The Action Score™ is used to sort answers in priority order.
Finally, the score is normalized, much like classroom grades on a curve, into a number between 0 and 100. This number is called the Action Score™. The normalization simply makes the score easier for most people to relate to.
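As a minimal sketch of that last step, the snippet below rescales a set of raw scores to the 0–100 range. The document does not publish CloudMR's actual normalization formula, so this min-max rescaling is an assumed stand-in chosen only to show the idea.

```python
def normalize(raw_scores):
    # Rescale raw scores to 0-100 (min-max normalization). This is an
    # assumed illustration, not CloudMR's proprietary Action Score
    # formula. Assumes the scores are not all identical.
    lo, hi = min(raw_scores), max(raw_scores)
    return [round(100 * (x - lo) / (hi - lo)) for x in raw_scores]

print(normalize([2, 5, 8]))  # the middle score lands at 50
```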
Action Scores™ are presented in a heat-map style. Scores are sorted by the total column with the best (bright green) answers at the top. In this case, the bright green ‘best’ cells indicate both high agreement and high priority, which result in a high Action Score™. Lower scores indicate low agreement, low priority, or both. If all segments were the same, you would expect the entire chart to look similar to the total column. The display makes it easy to see areas of disagreement between segments by looking for cells that stand out or seem out of place. In this example, notice the two red cells in the top row among new customers and among those who are less likely to recommend this company. Also notice the green cells near the bottom among those who are more likely to recommend now.
An independent evaluation was conducted by the W. P. Carey School of Business at Arizona State University. Professor Raghu Santanam conducted two parallel surveys – one traditional survey using 20 rating scales and 6 other questions, and one Adaptive Survey® consisting of 1 Adaptive Question® and 5 other questions. Both surveys were designed to determine demand for improvements to the 20 top smart phone features. The respondent profiles are statistically identical for both surveys.
One key finding is that three of the top ten answers (#1, #3, & #7) identified by the Adaptive Question® were not even anticipated by the traditional method.
In the summary of results, Professor Santanam says…