I used to trust numbers. Who didn’t?
I thought numbers were staid, true, and irrefutable. Research depends on measurement, measurement produces numbers, and these help us make sense of the seemingly nutty and doggedly disordered world around us. (I wrote about this topic in a chapter for the Science Writers’ Handbook.)
In 2007, I started fact-checking for a magazine dedicated to cancer research. That meant poring over statistics and fitting those numbers to interpretation. I soon realized that numbers aren’t cold and lifeless.
They’re storytellers—perhaps never more so than when they show up in the scientific literature. Numbers demarcate narratives about our diseases, tendencies, and behaviors—and the interconnectedness of all those things.
Like some of the best narrators, though, numbers can be unreliable. Numbers can be twisted and manipulated; they whitewash wishful thinking with the appearance of truth. (For more on that subject, see Charles Seife’s compelling Proofiness.)
Science writers can color data by how we report it. For example: In a classic 1982 experiment, researchers asked
volunteer subjects to imagine they had lung cancer and choose between two therapies. Participants were divided into two groups. One received the following information (note the recurrence of the word “die”):
Of 100 people having surgery, 10 will die during treatment, 32 will have died by one year and 66 will have died by five years. Of 100 people having radiation therapy, none will die during treatment, 23 will die by one year and 78 will die by five years.
People in the other group received the same statistics, but portrayed in terms of survival instead of death. To paraphrase the paper:
Of 100 people having surgery, 90 will survive, 68 will live more than one year and 34 will live more than five years. And of 100 people having radiation therapy, 100 will survive therapy, 77 will live more than one year, and 22 will live more than five years.
Among respondents who received the death framing, 42 percent chose radiation. Among respondents who received the survival framing, only 25 percent chose radiation, even though they were given the same data.
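The two framings encode identical outcomes. A quick sketch (with the study’s figures hard-coded) confirms that each survival number is simply 100 minus the corresponding death number:

```python
# Cumulative deaths per 100 patients: during treatment, by 1 year, by 5 years
deaths = {"surgery": [10, 32, 66], "radiation": [0, 23, 78]}

# The same outcomes, framed as survival
survivors = {"surgery": [90, 68, 34], "radiation": [100, 77, 22]}

for therapy in deaths:
    for died, lived in zip(deaths[therapy], survivors[therapy]):
        assert died + lived == 100  # identical data, opposite frame
```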
Readers of news stories about medical research want to know how a new finding will affect their personal risk—and that’s tricky because reported findings are rarely as straightforward as they seem.
The following list of tips for writing about risk could help keep us honest and accurate. I’ve focused on two popular (and easily confused) metrics: absolute risk and relative risk. (See #6 for other measures.)
1. Absolute risk and relative risk are not the same thing.
The measured risk will often be clearly stated in the results section of the abstract. A randomized controlled trial will usually report relative risk. Absolute risk is easier for readers to relate to, but it’s so broad that it carries no information about individual risk factors. For a good discussion of the complexities of absolute risk, see this piece at Health News Reviews by Reuters Health staff writer Frederik Joelving.
2. Absolute risk is hard to measure.
Absolute risk is a person’s lifetime risk of developing a disease. (For example: the NCI reports that 1.47 percent of the population born today, or 1 in 68 men and women, will be diagnosed with pancreatic cancer in their lifetime.) But because it’s a lifetime risk, and one encounters so many variables during a lifetime, it’s difficult to assess. If it’s available, consider including absolute risk in stories that report any risk; it can be a way of giving readers their bearings.
3. Relative risk shows a comparison.
You’re a savvy consumer. Say you’re buying a new car. At the lot, a shrewd and smooth-talking salesperson notices which 2013 Viper catches your eye. (Hey, this is my scenario. I drive a 1980 wagon but I still have Viper dreams.)
You’re quickly reassured that the one you want is the best one to get.
“Definitely,” says the salesperson. “This one is better.”
You wait. And wait. But that’s the end of the sentence. Though you’re nearly blinded by that dazzling smile, you have to ask.
“Better than what?”
You have to ask because you know that “better” is a relative term. Every kid knows you can’t teeter-totter by yourself; every grown-up knows that you have to have at least two things for one to be better. That’s the case with relative risk. Relative risk measures the difference in risk between two groups. Studies that report relative risk have compared the same outcome from two groups of people. It’s the duty of science writers to explicitly say what’s being compared.
My rule of thumb: Mention both groups. The comparison is what makes relative risk so useful: It can help focus attention on specific variables related to what actually causes something to happen.
4. A relative risk of 1 indicates no difference in risk.
When a study finds a relative risk of less than 1 associated with a group of test subjects, those people are less likely (than people in the other group) to have the outcome being measured. A relative risk greater than 1 indicates people who are more likely to have the outcome. (The same rule applies to hazard ratios, a related measure that can be interpreted in much the same way.)
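As a sketch, with invented counts: relative risk is just the event rate in one group divided by the event rate in the comparison group, which is why a value of 1 means the two groups fared the same.

```python
def relative_risk(events_a, total_a, events_b, total_b):
    """Risk of the outcome in group A divided by the risk in group B."""
    return (events_a / total_a) / (events_b / total_b)

# Hypothetical trial: 30 of 1,000 treated patients had the outcome,
# versus 60 of 1,000 in the comparison group.
rr = relative_risk(30, 1000, 60, 1000)
print(rr)  # 0.5: the treated group is half as likely to have the outcome
```

Had both groups seen 60 events, the ratio would be exactly 1, the “no difference” benchmark.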
5. Relative risk, on its own, doesn’t tell you anything about absolute risk.
Let’s say you learn about a new, safe, inexpensive supplement that, when taken every day for a decade, can cut a person’s risk of being diagnosed with pancreatic cancer by a whopping 40 percent. That sounds astonishing, accessible, and useful, all of which are good ingredients for a story that will grab your readers. And it’s an important result, as pancreatic cancer has few treatments. But if I were reporting on it, I would feel obligated to tell my readers that the new supplement will have little effect on most people: The absolute risk of being diagnosed with the disease is 1.47 percent, which a decade of daily doses of that supplement lowers to about 0.88 percent.
My rule of thumb: beware. Relative risk usually sounds more dramatic than the change in absolute risk.
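Here’s the arithmetic behind that supplement scenario, as a sketch in code (the 40 percent reduction is hypothetical; the 1.47 percent lifetime risk is the NCI figure cited above):

```python
absolute_risk = 0.0147     # NCI lifetime risk of pancreatic cancer (1.47%)
relative_reduction = 0.40  # hypothetical supplement effect: 40 percent

new_risk = absolute_risk * (1 - relative_reduction)
absolute_change = absolute_risk - new_risk

print(f"{new_risk:.2%}")         # 0.88%
print(f"{absolute_change:.2%}")  # 0.59%
```

A “whopping 40 percent” relative reduction moves the absolute risk by about six-tenths of a percentage point.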
6. These are not the only measures.
Observational studies report risk using odds ratios, which are even less intuitive for your reading audience. Be careful about reporting odds ratios as relative risks; an odds ratio of 2.3, for example, may not correspond to an increased risk of 130 percent. The best way to ensure that you’re correctly reporting odds ratios is to remember tip 7….
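To see how the two measures diverge, here’s a toy 2×2 cohort with invented counts, chosen so the odds ratio lands near 2.3:

```python
# Hypothetical cohort: 100 exposed and 100 unexposed people
exposed_yes, exposed_no = 42, 58  # 42 of 100 exposed have the outcome
control_yes, control_no = 24, 76  # 24 of 100 controls do

odds_ratio = (exposed_yes / exposed_no) / (control_yes / control_no)
# With equal group sizes, the risk ratio reduces to the ratio of the counts
risk_ratio = exposed_yes / control_yes

print(round(odds_ratio, 1))  # 2.3
print(risk_ratio)            # 1.75: a 75 percent increase, not 130
```

Reporting that odds ratio as “130 percent more likely” would nearly double the true increase in risk.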
7. Biostatisticians can provide useful perspective.
The author list of a paper that reports risks will almost always include the biostatistician who helped compute the numbers. This person probably won’t be the corresponding author, but it’s worth figuring out who worked on the numbers; even when you’re not in doubt, that person can offer useful context. You might also want to ask a statistician who did not work on the study for perspective, but be careful: not all statisticians are alike, so find one who works in the same field.
This list is neither exhaustive nor definitive. Want to add to it? Meet me in the comments!