Understanding the Difference Between Headlines and Hard Facts in Health Research Reporting

My daughter recently came home from school with a story about what may well have been one of the most significant epidemiological studies in the history of public health. During a deadly 1854 cholera outbreak in London, the physician John Snow mapped clusters of cholera cases and compared them with each household’s water source, testing his theory that cholera spread through drinking water rather than through the air, as was commonly believed. When an outbreak occurred in his neighborhood, Dr. Snow had the handle removed from the community water pump so that no one could draw water from it. The outbreak in that area halted, lending further support to his theory. Dr. Snow’s contribution to epidemiology is considered invaluable because it shows how careful observation can uncover possible causes of disease and help prevent its spread. In fact, Dr. Snow had pinpointed the cause of London’s cholera outbreak, a water-borne pathogen, some 30 years before the bacterium was even discovered.

In modern times, epidemiological studies have continued to yield important medical insights. Over the years they have produced observations behind some of the most important health findings… linking smoking with lung cancer… suggesting that women are just as susceptible to heart disease risk factors as men… that African-American men are particularly prone to prostate cancer… and that Asian women experience a lower incidence of breast cancer until they move to North America and adopt our higher-fat diet. However, epidemiological studies can also be misleading if inappropriate conclusions are drawn from the research. We consumers may hear only the 30-second sound bite on the evening news or skim the headline of an online bulletin or newspaper, missing important details. It helps to understand how research works, in particular these long-term epidemiological studies, so that you can judge whether the findings are sound enough to influence you.

DIFFERENT KINDS OF STUDIES

Epidemiological studies are based on observations. Typically, researchers accumulate data on how people live and how their health fares, enabling comparison of one population to another, explains consumer advocate Charles B. Inlander, author of Take This Book to the Hospital with You: A Consumer Guide to Surviving Your Hospital Stay. This form of research can also examine the impact of different lifestyle habits (such as diet, exercise, alcohol intake, smoking, prescription drug use, etc.) within a population, and suggest links between disease rates and environmental factors. Inlander points out that finding an association does not prove causation… it makes a case for closer examination with further research.

Types of epidemiological studies include…

  • Cohort studies (also called prospective studies), which follow a group of people over time, recording their lifestyle choices and tracking who gets sick. The Nurses’ Health Study (which tracked health effects of birth control pills, among other things) is a familiar example of a longitudinal cohort study. Another is the Framingham Heart Study, designed to identify factors associated with cardiovascular disease.
  • Retrospective or case-control studies, which look at populations with or without certain diseases and examine their prior lifestyles. Some use data from cohort studies, such as the Nurses’ Health Study. A downside is that these studies may be biased by several factors… including what investigators are looking for and the makeup of the study population. However, these studies take less time and cost less than cohort studies.
  • Cross-sectional studies look at data at one point in time — for instance, comparing people’s present health status with their current lifestyle habits. A survey is one example. Selection bias can be an issue with this form of research, but it has the advantage of being relatively quick and inexpensive. These studies are the weakest when it comes to implying cause.
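To make the contrast between study designs concrete, here is a minimal sketch, using entirely made-up counts, of the numbers each design can report: a cohort study can measure risk directly and report a relative risk, while a case-control study starts from cases and controls and can only report an odds ratio.

```python
# Hypothetical 2x2 counts -- illustration only, not real data.
# Cohort design: follow exposed and unexposed groups forward in time.
exposed_sick, exposed_well = 30, 970        # 1,000 exposed people
unexposed_sick, unexposed_well = 10, 990    # 1,000 unexposed people

# Relative risk: how much more often disease occurs among the exposed.
risk_exposed = exposed_sick / (exposed_sick + exposed_well)
risk_unexposed = unexposed_sick / (unexposed_sick + unexposed_well)
relative_risk = risk_exposed / risk_unexposed

# Case-control design: recruit by disease status and look back at exposure.
# Risk can't be measured directly, so an odds ratio is reported instead.
odds_ratio = (exposed_sick * unexposed_well) / (exposed_well * unexposed_sick)

print(f"relative risk: {relative_risk:.2f}")  # 3.00
print(f"odds ratio:    {odds_ratio:.2f}")     # 3.06 -- close to the relative
                                              # risk when the disease is rare
```

With these invented counts the two measures nearly agree, which is why case-control results are often read as if they were risks; the approximation holds only when the disease is rare.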

EPIDEMIOLOGICAL STUDIES FACE CRITICISM

Epidemiological studies can produce findings that are flawed. The most damning example was the link between hormone replacement therapy (HRT) and heart health suggested by a non-randomized observational study. Based largely on the observations of this study, doctors began more aggressively prescribing hormones to postmenopausal women to prevent cardiovascular disease — a practice that came under scrutiny when the Women’s Health Initiative found evidence that HRT could actually increase the risk of heart attack and stroke in women.

The problem was that causation (in this case, that HRT protects the heart) was assumed, when other factors may well have been at work. For instance, women who took HRT might also have eaten better, exercised more often, and/or had better insurance and access to health care. These possibilities point to a higher socioeconomic status, which is usually reflected in better health… and has little or nothing to do with HRT.
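A toy calculation, with invented numbers, shows how a confounder such as socioeconomic status can manufacture exactly this kind of association even when the treatment does nothing:

```python
# Invented counts: heart-disease cases by hormone use and socioeconomic
# status. Within each stratum the disease rate is identical for users and
# non-users -- hormone use has no effect here. The confounding comes from
# higher-status women (who have the lower baseline rate) being far more
# likely to use hormones.
strata = {
    # status: (users, user_cases, nonusers, nonuser_cases)
    "higher_status": (800, 8, 200, 2),    # 1% disease rate in both groups
    "lower_status":  (200, 10, 800, 40),  # 5% disease rate in both groups
}

users = sum(s[0] for s in strata.values())
user_cases = sum(s[1] for s in strata.values())
nonusers = sum(s[2] for s in strata.values())
nonuser_cases = sum(s[3] for s in strata.values())

print(f"crude rate, users:     {user_cases / users:.1%}")      # 1.8%
print(f"crude rate, non-users: {nonuser_cases / nonusers:.1%}")  # 4.2%
# Pooled together, hormone users look protected -- yet within each stratum
# the rates are equal. The apparent benefit is pure confounding.
```

The crude comparison makes hormone users look less than half as likely to develop heart disease, which is precisely the kind of signal an observational study can pick up and a randomized trial can later overturn.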

WHAT THEY ARE… AND WHAT THEY AREN’T

By definition, epidemiological studies do not prove cause and effect. However, the results are often misinterpreted, since people may mistake an association for cause and effect. Inlander stresses that epidemiological studies are not the same as randomized, controlled clinical trials, which directly compare one intervention with another to determine which is more effective. We can’t draw the same kinds of conclusions from these two very different types of studies — yet the news is often reported, or understood by people listening with half an ear, as though we could.

Clinical trials, however, are also imperfect. Because they are very expensive, they are often funded by pharmaceutical companies — a fact that can lead to bias in study design or in how the results are reported, or not reported, as the case may be. Clinical trials often focus on a single factor, when in truth most disease processes are multi-factorial. They also face ethical and moral limits. Scientists obviously can’t randomly assign one group of people to smoke cigarettes and another not to smoke, or one to eat a healthful diet while the other subsists on junk food. Only observational studies can draw conclusions about such lifestyle choices.

MAKING SENSE OF THE SCIENCE

Each type of study has limitations as well as strengths. Inlander emphasizes that epidemiological studies are vital and valuable, but stresses that it’s important to understand what they are and what they aren’t. Tips to make sense of new medical research and studies include…

  • Understand the capabilities of different study methods. Observational studies suggest links or associations, raise questions and call for further research. A randomized, controlled clinical trial, in contrast, is designed to provide strong evidence for a possible cause. A third type of study, the meta-analysis, pools the results of different studies and analyzes them together. Though meta-analyses often turn up even more enticing associations, they sometimes highlight connections that eventually prove irrelevant, perhaps because of study selection bias or because the pooled study populations’ lifestyles and circumstances differ greatly.
  • All studies potentially have flaws. Even randomized, controlled clinical trials can be tainted by possible conflicts of interest or methodological flaws. This is why medical journals today insist that study authors reveal sources of support and funding. Observational studies can be subject to misinterpretation, while meta-analyses frequently lump together varying methodologies and interventions, which may lead to meaningless conclusions.
  • Learn to read between the lines. Beware of articles that condense complex research into overly simplified health recommendations (particularly when it involves a new drug or treatment). News producers are on the lookout for research findings that will grab attention… what makes a good headline doesn’t necessarily come from good science.
  • Go to the source. News articles often refer to research without saying what type of study it was, so I asked Inlander whether there were clues or buzzwords to provide a frame of reference. He said no, and noted that guessing could lead to yet more incorrect conclusions. Far better to go online and look up the original source. Even when full-text versions of medical journal articles are expensive or unavailable, you can generally read abstracts or summaries for free at www.pubmed.gov. But be aware that abstracts and summaries don’t always reveal the funding source, which can be important in weighing the relevance of the findings.
  • Keep in mind that medicine is an art as well as a science. The research is only as good as what we know today, and may well change in important ways tomorrow or next year.
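To see what “pooling the results” of several studies looks like in the simplest case, here is a toy fixed-effect meta-analysis with invented effect estimates; each study is weighted by the inverse of its variance, so larger, more precise studies count for more. Real meta-analyses must also grapple with the heterogeneity and selection problems described in the tips above.

```python
# Toy fixed-effect meta-analysis -- the numbers are hypothetical.
studies = [
    # (effect estimate, standard error)
    (0.40, 0.10),
    (0.10, 0.30),
    (0.25, 0.15),
]

# Inverse-variance weights: precise studies (small standard error) dominate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

print(f"pooled estimate: {pooled:.3f}")  # 0.336, pulled toward study 1
```

Note how the pooled value sits close to the largest, most precise study; when the pooled studies measured genuinely different populations or interventions, that single summary number can be misleading.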