From Questions to Answers: How Research Is Designed
By Howell Sasser, PhD, Scientific Review Coordinator, Manila Consulting Group; Adjunct Member of the Faculty, New York Medical College. Dr. Sasser reports no financial relationships relevant to this field of study.
This is the second in a three-part series about the design and conduct of clinical research. The first installment discussed how research begins with the formulation of research questions (See Alternative Medicine Alert, September 2011). This article will look at the way the design of a study flows from the research question and from the available data. It also will consider briefly the most important potential problems in the design of research. It is intended not as a comprehensive review of study design, but rather as a reiteration of the key insights that research projects come in all shapes and sizes, and that each study must be designed in a way that is appropriate to its specific circumstances.
A research question can be answered reliably only with information collected in a systematic way. The details of a study design are really nothing more than a framework to ensure that the study's results allow for meaningful inference. A study's processes, the flow of what actually happens, should serve the question or questions it is trying to answer. In practical terms, this means that different kinds of questions will be best served by different kinds of studies. (See Table 1.)
In some situations, a rapid snapshot of conditions is all that is needed. If it is sufficient to know that when some characteristic is present in a population, another is also present some portion of the time, a cross-sectional study may be appropriate. In such a study, there is no a priori assumption that one thing causes another, only that they co-occur with a frequency that may suggest a relationship.
An example of this type of study is a survey of complementary and alternative medicine (CAM) use in an Irish hospital population by Chang and colleagues.1 The authors distributed a questionnaire to cancer patients, patients with other diagnoses, and health care professionals. In addition to collecting information on use of various CAM modalities, they recorded demographic and social characteristics. From this, the authors reported the rate of CAM use (highest among health care workers, lowest among cancer patients) and various other factors that seemed to be associated statistically with CAM use (female sex, non-Christian faith, private health insurance). However, they did not suggest that any of these factors caused CAM use.
Other research questions have a clear element of causality (one factor precedes and is in some sense responsible for an outcome), but for any of several reasons this causal sequence cannot be observed as it happens. This may be because the associated events are too far apart in time to make measurement practical, because the outcome event is rare, or because the outcome is a negative event and observing it happen without intervening (and thus contaminating the study) would be unethical. When these issues arise, a common strategy is to use a case-control, or retrospective, design. In this type of study, a group of participants who already have the outcome of interest is assembled and compared with a group of participants who do not. Any number of prior factors can be assessed for their possible association with the outcome, but a certain element of doubt as to which came first, the proposed risk factor or the outcome (a problem of causal sequence), almost always tempers the strength of the conclusions drawn from case-control studies.
An example of this design is Hedin and colleagues' study of prebiotic and probiotic use in a group of 234 patients with inflammatory bowel disease (IBD) as compared to such use in a group of 100 healthy controls.2 Those with and without IBD were asked to recall their past use of pre/probiotics, and the comparative odds of use in the two groups were calculated. As compared with the healthy controls, the odds of prior probiotic use were more than four times as great among those with ulcerative colitis and more than three times as great among those with Crohn's disease. Note that while this study design comes closer to addressing causality, its results are expressed in terms of the probability of earlier events among those with and without an outcome, not the probability of an outcome among those with and without an earlier event.
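The odds ratio underlying results like these is simple arithmetic on a 2x2 table of exposure versus outcome. A minimal sketch in Python, using hypothetical counts (not the actual data from the Hedin study):

```python
# Odds ratio from a 2x2 table: prior exposure (rows) vs. outcome status (columns).
# All counts below are hypothetical, for illustration only.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """OR = (odds of exposure among cases) / (odds of exposure among controls)
          = (a/b) / (c/d) = (a*d) / (b*c)."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Suppose 40 of 100 cases report prior probiotic use, vs. 20 of 100 controls.
or_value = odds_ratio(40, 60, 20, 80)
print(round(or_value, 2))  # (40*80)/(60*20) = 2.67
```

Note that, as the article observes, this quantity compares the odds of past exposure between outcome groups; it is not the probability of developing the outcome given the exposure.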
When possible, observing events in their "natural" sequence is a more reliable way to draw reasonable inferences about causality. In some situations, it is feasible to recruit a group of participants, determine that they have not yet had the outcome of interest, assess whether they have one or more other factors that may play a role in bringing about the outcome, and then follow them forward in time to see which do and which do not actually have the outcome. This is a cohort, or prospective, study design.
As an example, Sibbritt and colleagues studied a group of 14,701 randomly selected Australian women for the presence of asthma (seen here as the "risk factor") and the use of CAM modalities (seen here as the "outcome") over a 10-year period.3 Participants completed questionnaires at baseline and three other times during the study. Those in the study population who were asthmatic were significantly more likely to use CAM overall than were those who were not asthmatic, but when CAM modalities were considered individually, a statistically significant relationship could only be shown for consultations with a naturopath or herbalist.
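Because a cohort study follows participants forward from exposure to outcome, its results can be expressed as a relative risk, the risk of the outcome among the exposed divided by the risk among the unexposed. A minimal sketch, again with hypothetical counts rather than figures from the Sibbritt study:

```python
# Relative risk from a cohort: compare outcome incidence in exposed vs. unexposed.
# All counts below are hypothetical, for illustration only.

def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """RR = (events / total) among exposed, divided by the same among unexposed."""
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# Suppose 300 of 1000 asthmatic women use CAM vs. 200 of 1000 non-asthmatic women.
rr = relative_risk(300, 1000, 200, 1000)
print(round(rr, 2))  # 0.30 / 0.20 = 1.5
```

Unlike the odds ratio of a case-control study, this statistic directly expresses the probability of the outcome among those with and without the earlier factor.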
The study designs discussed so far are all observational. In other words, the investigator observes and records characteristics and events, but does not intervene to assign exposure to some factor to some participants and not to others. In many clinical settings, however, it is important to assess the value of exposures such as therapeutic approaches that plainly are assigned or applied. In such cases, one of several variations on the clinical trial study design may be indicated. Common features of clinical trials are random assignment to one of two or more study treatments, the concealment of treatment assignment when practical to prevent deliberate or inadvertent biasing of study findings, and careful control of as many extraneous factors as possible to increase the likelihood that observed results are due to the study treatment.
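The random assignment described above can be sketched in a few lines. This is simple (unrestricted) randomization with hypothetical arm names; real trials typically use blocked or stratified schemes to keep arm sizes balanced:

```python
import random

def randomize(participant_ids, arms=("treatment", "control"), seed=42):
    """Assign each participant to a study arm by simple randomization.
    The fixed seed makes the allocation list reproducible for audit;
    arm names and the seed are illustrative assumptions."""
    rng = random.Random(seed)
    return {pid: rng.choice(arms) for pid in participant_ids}

assignments = randomize(range(1, 7))  # e.g., {1: 'control', 2: 'treatment', ...}
```

In practice, the allocation sequence is generated in advance and concealed from those enrolling participants, so that knowledge of the next assignment cannot influence who is enrolled.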
For example, Cherkin and colleagues conducted a clinical trial in which participants were assigned randomly to one of two types of massage or to usual care for the treatment of low back pain.4 Each treatment lasted 10 weeks, and participants completed questionnaires measuring disability and symptoms at 10, 26, and 52 weeks. It was not possible to conceal the treatment assignment from participants, but those assessing the study's outcomes were not aware of which treatment each participant had received. Results at 10 weeks were similar in the two massage groups and statistically better than in the usual care group; however, most of the benefit appeared to have dissipated by 52 weeks.
It is commonly observed that there is a rising strength of evidence in the study designs as ordered here. The greater our certainty as to the temporal sequence of events, the more reliable our inferences will be. However, this is subject to two caveats. First, it bears repeating that not every study design will fit every question. In some cases, a design that is lower in the study "food chain" may be the best, or indeed the only, option. Second, the relative strength of any study design rests on assumptions about the way its population of participants was assembled and the way information was collected from them. If the results of a study are to be broadly applicable, the study population must be similar in demographic and other important characteristics to the larger population it represents. Also, the information collected from study participants must be as complete and accurate as possible. When either of these conditions is not met, statistical results may (indeed will) still be calculated, but they may not mean what they appear to. The difficulty is that there often is no way of comparing study results to "truth," since they often are the only available source of information about the state of nature. So it is all the more important that studies be carefully constructed and conducted.
In the third article in this series, we will consider how study results are interpreted and applied in clinical and other health practice.
References
1. Chang KH, et al. Complementary and alternative medicine use in oncology: A questionnaire survey of patients and health care professionals. BMC Cancer 2011;11:196.
2. Hedin CRH, et al. Probiotic and prebiotic use in patients with inflammatory bowel disease: A case-control study. Inflamm Bowel Dis 2010;16:2099-2108.
3. Sibbritt D, et al. A longitudinal analysis of complementary and alternative medicine use by a representative cohort of young Australian women with asthma, 1996-2006. J Asthma 2011;48:380-386.
4. Cherkin DC, et al. A comparison of the effects of 2 types of massage and usual care on chronic low back pain: A randomized, controlled trial. Ann Intern Med 2011;155:1-9.