KnowThis Blog Postings
Assessing the Value of Company-Produced Research Reports
The standard market research report, generally regarded as a routine and often dull business document, is being seen in a new light by many marketers. While most people do not equate a research report with promotion, many companies are in fact turning these reports into major promotional tools. Distributing research reports to potential customers has been common for years in scientific and technology industries, but in recent years the practice has spread to many other fields, particularly among service firms in the consulting, healthcare and financial industries.
In the past these reports served as background supporting materials that helped establish credibility for a company’s products and services. Today companies are placing them at the front of their promotional activity. The release of a company-produced research report is often supported by a press release highlighting key findings and encouraging anyone interested to visit the company’s website for the full report. In most cases, however, accessing the report requires completing a form requesting contact information that the company can later use for follow-up purposes.
While the research behind a report can include techniques such as experimentation, personal observation or website visitor tracking, the majority of reports present the results of survey research (i.e., people completing questionnaires). Reports appear in formats such as white papers, slide presentations, webinars and flash graphics presentations, often featuring high-quality graphs and charts backed by carefully crafted narratives that proudly emphasize the company’s strengths. While many companies claim the research supports their products, many of these claims may be more fluff than substance because the research behind them was not done the right way.
DOING RESEARCH RIGHT
Doing research right takes a good deal of knowledge of research methods and a lot of time. By “right” we mean using scientific methods that have been tested and refined over hundreds of years and that hold up to statistical analysis. Unfortunately, most company-produced research is not scientific and, thus, may not be as meaningful as the company would have its readers think.
One of the biggest concerns when reading a research report is whether what is being presented is indeed relevant and worth considering. To be relevant the research must overcome a number of obstacles particularly in terms of research validity and reliability.
- Research Validity - This refers to a series of hurdles that determine whether the research is really measuring what it claims to be measuring. For instance, if a company says its report measures how people prefer its products over competitors’ products, is the research really designed in a way that tests this?
- Research Reliability - This refers to whether the research would produce consistent results if repeated and, in turn, whether the results can be applied to a wider group than those who took part in the study. For example, if a company’s research study reports on the results of a few focus groups with a total of 40 people participating, is the information obtained from these people sufficient to conclude how the entire target market for a product, which may number in the millions, feels about the company’s products?
As we will see, most of the issues we will discuss can impact research relevance.
ASSESSING THE VALUE OF RESEARCH REPORTS
So how can a non-researcher judge whether a research report is really based on sound research? Well, there is never a guarantee that research is good (think of all the mistakes companies make introducing new products) but there are certainly clues when research may not have been done right.
Below we offer seven questions worth asking when reading a research report that deals with survey research (i.e., where people are the respondents). How a report stands up to these questions gives the reader some clues as to the validity of the information contained in the report.
1. What seems to be the purpose of the research?
Company-produced research reports can serve as an effective form of promotion. Such reports often provide readers with information related to product features and benefits, comparisons with competitors’ offerings, and target market perceptions. But any research report a company openly distributes, and which appears to paint a nearly perfect picture of the company, should be questioned as to the main purpose of the research. In particular, was obtaining promotional value the main reason for undertaking the research? Research designed strictly as a promotional piece invariably leads to bias in research design. Such reports often are constructed in ways that, intentionally or unintentionally, contain elements, such as a poorly designed survey or a poorly chosen group of respondents, that help sway results in the company’s favor. While by itself the answer to this question is not automatically indicative of poor research, it should make the answers to the following questions that much more important.
2. Who conducted the research?
Companies have two options for conducting research – do it in-house or outsource some or all of it to someone else, such as a market research company. Research is often more credible when the company partners with an experienced and reputable market research firm. While many companies conduct very good research on their own, readers should be more suspicious of company-produced research when companies, particularly smaller firms, conduct the research themselves.
3. What was used to measure responses?
Survey research requires the use of questionnaires or survey “instruments” for collecting information from respondents. Yet questionnaires can be easily flawed and lead to serious validity problems. When examining the results of respondents’ answers to a survey question, it is important to recognize that poorly structured questions can lead to biased results. Here are a few ways questions can be poorly written:
- Leading Questions – These questions are written in such a way as to suggest an answer to the respondent (e.g., “Wouldn’t you agree that our product has better overall performance than our competitor’s product?”)
- Loaded Questions – These questions provide information in their wording that plays on a respondent’s emotions or is slanted in a particular way (e.g., “Do you believe that our keen knowledge of what is important to the market allows us to respond better to customers’ needs than our competitor?”)
- Double-Barreled Questions – These questions contain more than one issue within a question (e.g., “How would you rate the speed and durability of our product?”)
- Ambiguous Questions – These questions contain words that are either too general or that respondents may interpret in different ways (e.g., “Do you regularly use our product?”)
Additionally, questions about the validity of the research should increase if the report does not include the actual questions used to obtain the results but instead states the general nature of a question such as “When asked about their preference for different brands, customers responded... ” Clearly if the actual question was tainted with any of the four issues discussed above then the results should be suspect.
4. What medium was used to obtain the responses?
In an ideal research environment, researchers would have full control over study participants’ environment. But that is clearly not practical, especially when a large number of respondents is sought. In situations where respondents’ environment cannot be controlled, such as research done by mail, over the phone or online, the results must be held to a higher level of scrutiny than results obtained in more controlled settings, such as surveys conducted at the researcher’s own facilities.
5. Who were the respondents in the research?
One of the biggest issues with survey research relates to who participates. Most company-produced research reports examine responses from a small percentage (i.e., a sample) of a bigger group (i.e., the population). For instance, a major online retailer may conduct research on customers’ feelings toward the retailer’s product offerings by drawing a sample from customers who have purchased within the last year, of which there may be hundreds of thousands. When done right, sampling can produce very good information that can then be extended to the full population. But the restrictions for doing sampling correctly are very tight and include:
- Random Selection – At its strictest level sampling must ensure that all cases (e.g., customers) within the desired population (e.g., all customers who purchased in the last year) have the same chance of being selected to participate.
- Right Respondents Selected – This condition relates to sampling accuracy and making sure the people who do participate are, in fact, the ones being targeted (e.g., actual customer and not customer’s spouse).
- Sufficient Number of Respondents – Good research requires researchers to use statistical calculations to determine how large a sample of respondents is needed in order for the results to be considered useful. While the methods for determining sample size are beyond the scope of this article, any report that does not indicate the number of respondents to the survey should be suspected of not meeting minimum sample requirements.
- Non-Respondents Are Not Different – Not everyone who is asked to participate in a survey will actually agree to do so. To ensure that those who do not respond are not different from those who do respond (e.g., do not have different characteristics, such as age or education level, that could affect interpretation of results), researchers should make a follow-up effort to find out about those who did not originally respond. However, undertaking this is time-consuming and expensive; consequently, few researchers ever bother to carry out this important analysis. Those that do and report it are generally considered solid research designers.
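The “sufficient number of respondents” point above can be made concrete. Here is a minimal sketch of one standard sample-size formula for estimating a proportion; the function name and numbers are illustrative, and the z-score of 1.96 corresponds to a 95% confidence level:

```python
import math

def required_sample_size(margin_of_error, z=1.96, p=0.5):
    """Minimum respondents needed to estimate a proportion.

    z: z-score for the desired confidence level (1.96 ~ 95%).
    p: assumed population proportion; 0.5 is the most conservative choice.
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

# A survey claiming results accurate to +/- 5 points at 95% confidence
# needs roughly 385 respondents -- far more than a few focus groups.
print(required_sample_size(0.05))  # 385
```

By this common rule of thumb, the 40 focus-group participants in the earlier example fall well short of supporting claims about a market of millions.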
6. Are differences between groups meaningful?
Once the research data are obtained they must be analyzed. A very large percentage of company-produced research uses a simple approach to describe results. So-called descriptive statistics are mostly limited to indicating how many participants responded in a certain way (e.g., how many liked or disliked a product). But in many cases, to really be helpful, research should also indicate whether there are differences between respondent groups. For instance, are North American, European and Asian customers different in how they rate a certain issue? Comparing groups requires analysis techniques that are more advanced than simply showing that more North American customers selected answer A, more European customers selected answer B, and more Asian customers selected answer C. But using more advanced analysis requires a valid research design that includes good sampling techniques and well-structured questions. Reports that use more advanced analysis will indicate that their results are “statistically significant” when comparing differences between groups. If the research has also met the requirements of the first five questions, then very likely the information is valid and its implications are worthy of consideration.
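One common way analysts test whether group differences like these are statistically significant is a chi-square test of independence. A minimal hand-rolled sketch, using entirely hypothetical response counts (a real analysis would use a statistics package):

```python
def chi_square_statistic(table):
    """Chi-square statistic for a contingency table of observed counts.

    Rows are respondent groups, columns are answer choices.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = North America, Europe, Asia;
# columns = answer choices A, B, C.
counts = [[60, 25, 15],
          [20, 55, 25],
          [18, 22, 60]]
stat = chi_square_statistic(counts)

# With (3-1)*(3-1) = 4 degrees of freedom, the 5% critical value is 9.488;
# a statistic above it means the group differences are statistically
# significant rather than ordinary sampling noise.
print(stat > 9.488)
```

If the statistic stays below the critical value, the apparent regional differences could easily be sampling noise, which is exactly why reports that merely tally raw counts tell only part of the story.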
7. How are results presented?
A great way to hide poorly performed research is to mask the results within a fancy presentation. Certainly marketers are well-known for embellishing the benefits of their products, so seeing embellishment extend to research reports is not unexpected. The most common methods for hiding poor research include:
- Report Only What Looks Good – A sure sign that research is not what it appears to be is when the report presents only a few results, and only those that make the company look good. Unfortunately, the extent of the problem may never be known unless the company offers access to the full survey instrument so readers can see what was not reported.
- Use Scales on Graphs That Look Good – What better way to show a company’s strengths than to present them visually. Many people are more likely to be attracted to visual evidence than to reading the full report. But what is often hidden by visuals is the real story. Presenting information in a graph, such as a line or bar graph, is affected by the scale used to represent the information. This can be seen frequently on financial television shows or websites, where a quick look at a company’s stock price shows what appears to be a big drop. Yet the scale used is actually in very small increments and not really representative of the percentage change. For market research, how scales are used to represent change (e.g., sales, market share, customer service rating, etc.) is very important and should be consistent between different types of analysis. For example, the scale used to compare how the company is perceived by its customers on one characteristic should be the same scale used to show other characteristics. Reports that continually show different scales are probably doing so to make something appear important when in fact it is not.
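The axis-scale trick above can be quantified: the visual size of a change on a chart depends on the axis range, not on the actual percentage change. A small sketch with hypothetical stock prices (the function and numbers are illustrative):

```python
def apparent_change(values, axis_min, axis_max):
    """Fraction of the visible axis that a change occupies on a chart."""
    return (max(values) - min(values)) / (axis_max - axis_min)

prices = [100.0, 98.0]  # a real decline of only 2%
true_change = (prices[0] - prices[1]) / prices[0]

# Axis from 0 to 110: the drop occupies under 2% of the chart height.
honest = apparent_change(prices, 0, 110)

# Axis truncated to 97-101: the same drop fills half the chart.
truncated = apparent_change(prices, 97, 101)

print(round(true_change, 3), round(honest, 3), round(truncated, 3))
```

The same 2% decline looks negligible on the full axis but dramatic on the truncated one, which is why inconsistent scales across a report's charts are a warning sign.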
As more companies use company-produced research reports for promotional purposes, assessing the value of these reports becomes increasingly important. How a report stands up to these seven questions is a good start for determining whether the information it contains has a high probability of being valid or whether it is simply promotional fluff.
Image by RambergMediaImages