Reporters want bylines and publishers want readers, because more readers mean more advertising dollars. Thus, news outlets have every incentive to trumpet research results as breakthroughs that will attract “eyeballs.”
Combine these aligned incentives with the fact that very few reporters have much background in science and you have a recipe for over-hyping minor advances or preliminary results as big news.
How can you critically read a story about a supposed major medical advance and know if it is truly important?
First, accept that mice are not humans. What works in mice may or may not work in people. Only about 5% of initially promising results in lab rodents prove similarly effective in humans. Even the treatments that do cross over take a very long time to become useful: on average, 17 years pass between the first trial in mice and an approved human product.
What about human studies?
Be VERY skeptical of association as proving causation: the observational trial.
Researchers live in a “publish or perish” world and look for associations between habits or exposures and diseases or longevity that can form the basis of a published paper.
Good medical science depends on a controlled clinical trial, in which people are randomly assigned either to the treatment being studied or to a comparison group, so that the groups are otherwise very similar. Observational trials may suggest linkages but almost never prove them.
The fact is that people who do one thing, like drink coffee, may do many other things differently. Coffee drinkers may be more likely to smoke, eat donuts, or work in offices than those who do not drink coffee. Unless the researchers have been able to match the people who do the thing studied with those who don’t, and can be sure that is the ONLY difference between them, the outcome may be due to something completely different.
Good trials, in addition to randomly assigning people to the treatment(s) being studied, are double blinded. This means that neither the people being studied nor the researchers know who is getting the active treatment and who is getting a placebo. Other than death, few outcomes of a trial are absolutes. There is a strong placebo effect for most conditions, and if people know they are getting the active drug, many will feel better for that reason alone.
If researchers are heavily invested (emotionally or financially) in drug A being better than drug B, they will be tempted to overlook side effects or to encourage reports of feeling better in the group given A.
Finally, be careful not to assume that “statistically significant” is always the last word. Statisticians devise ways to tell whether trial results could be purely due to chance. This is given as a “P value”: the probability of seeing a difference at least as large as the one observed if the treatments were in fact equivalent. A P of 0.05 means that if there were truly no difference, results this extreme would turn up only 5% of the time by sheer luck; the lower the P value, the less likely the result is a fluke.
Small differences in outcome may be called statistically significant when their clinical significance is minor. When a study result says that people given A lived significantly longer than those given B, look carefully to see how much longer.
This is particularly common with trials of new cancer drugs. You may read a headline saying that cancer patients given X lived significantly longer than those given Y. Buried deep in the story may be the facts that those given X lived 6.5 months and those given Y lived 5.3 months – and that those given X had many more side effects and had to pay $50,000 more out of pocket. It is not so clear that you would always want to choose X.
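For readers who like to see the arithmetic, here is a minimal sketch of why a small difference can be “statistically significant” without being clinically meaningful. The numbers are hypothetical, loosely echoing the 6.5-versus-5.3-month example above: the survival difference and a 6-month standard deviation are assumptions for illustration, and the simple two-sided z-test (equal group sizes, equal spread) is a simplification of how real trials are analyzed.

```python
import math

def two_sample_p(mean_a, mean_b, sd, n):
    """Two-sided p-value from a z-test comparing two group means,
    assuming equal standard deviations and equal group sizes."""
    se = sd * math.sqrt(2.0 / n)        # standard error of the difference
    z = abs(mean_a - mean_b) / se       # how many standard errors apart
    return math.erfc(z / math.sqrt(2))  # two-sided tail probability

# Same 1.2-month difference (6.5 vs 5.3 months, SD 6 months in both arms):
small = two_sample_p(6.5, 5.3, sd=6.0, n=50)    # 50 patients per arm
large = two_sample_p(6.5, 5.3, sd=6.0, n=2000)  # 2000 patients per arm
print(f"n=50 per arm:   p = {small:.3f}")   # not significant
print(f"n=2000 per arm: p = {large:.9f}")   # highly "significant"
```

With 50 patients per arm the p-value is around 0.32; with 2000 per arm it is far below 0.001. The difference between the drugs did not change at all, only the sample size did, which is why the headline word “significant” tells you nothing about how much longer patients actually lived.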
Prescription for Bankruptcy. Buy the book on Amazon
