- Published: March 18, 2010
Statistical Significance: Not Just For Geeks Anymore (Search Engine Land)
One of the true tests of what works in marketing is to use quantitative measurement methods to compare what happens when changes are made. As an example, a company may test two different types of incentive plans with its sales force. Calculating the results of these plans could help the company select the plan that leads to higher sales.
Comparative analysis is most widely utilized in advertising testing, especially online advertising. With the immense amount of data available to those engaged in Internet advertising, comparing what works and what does not can simply be a matter of looking at the numbers. Many companies do exactly that by employing so-called A/B testing, in which the effectiveness of one ad is compared to another (see MarketingExperiments for more on A/B testing). Measures of effectiveness include click-through rates, purchases, or other user actions.
Yet many marketers responsible for online advertising often do not take a deep look at whether the results truly represent a difference, at least not in a statistical way. This story outlines what it takes for comparison of two ads to be statistically significant and paints a nice picture of how results can be misinterpreted. It also suggests that certain tools available on the Internet to measure statistical significance may not be quite accurate.
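As a sketch of the kind of check the story calls for, the standard approach to comparing two click-through rates is a two-proportion z-test: pool the clicks under the null hypothesis that both ads perform equally, compute a z statistic, and convert it to a p-value. The function name and the click/view figures below are hypothetical, chosen only to illustrate how an apparent difference in CTR can fail to reach significance.

```python
import math

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test for a difference between two click-through rates."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled CTR under the null hypothesis that both ads perform equally
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical test: ad A gets 200 clicks in 10,000 views (2.0% CTR),
# ad B gets 230 clicks in 10,000 views (2.3% CTR)
z, p = two_proportion_z_test(200, 10_000, 230, 10_000)
print(f"z = {z:.3f}, p = {p:.3f}")
```

In this made-up example, ad B's CTR looks 15% better than ad A's, yet the p-value comes out well above the conventional 0.05 threshold, so the difference could easily be noise. That is exactly the kind of misinterpretation the story warns about.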
Though the concepts I’ve described above are (hopefully) now very clear, unfortunately, some of the web-based tools for differentiating CTRs seem to have disregarded them completely.
(The quoted passage above is from the Search Engine Land story itself.)
Note: Here is a neat online statistical tool for evaluating A/B testing that may, in fact, address the problems raised by the author of this story.
Even if a marketer follows the advice presented in this story, what other issues need to be considered before the marketer makes a final decision on which advertisements are best?
Image by Koen Vereeken