Need to know

Risks and benefits to our health are dangerously exaggerated
December 16, 2009

Body scans can reveal a lot about our health, but they can also be misleading

Here’s a little test. Breast cancer screening reduces deaths by 25 per cent. If we assume that 1,000 women dutifully turn up to all their screening appointments, how many lives will be saved? The answer, remarkably, is one. Not 25, or 100, or even 250, the figures picked by a substantial proportion of gynaecologists who were asked this.

As a recent study showed, if 1,000 women started getting screened at the age of 50, and had a mammogram every year, only one would have her life saved through early detection and treatment of cancer by the time screening stops at the age of 70. Without screening, four women would have died from breast cancer; with it, three would have done so. But in the same group between two and ten women would also be treated needlessly—possibly with surgery, radiotherapy and chemotherapy. And between 100 and 500 women would have a false alarm, suffer worry, and need further tests before getting the all-clear. About half of these women would undergo an unnecessary biopsy.
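The gap between the headline figure and the individual benefit is simple arithmetic. A minimal sketch, using the per-1,000 figures quoted above, makes the two framings explicit:

```python
# Breast-screening figures from the text: per 1,000 women screened
# annually from age 50 to 70.
n = 1000
deaths_without = 4   # breast cancer deaths without screening
deaths_with = 3      # breast cancer deaths with screening

# Relative risk reduction: the headline "25 per cent" figure.
rrr = (deaths_without - deaths_with) / deaths_without

# Absolute risk reduction: the change in one woman's own risk.
arr = (deaths_without - deaths_with) / n

print(f"relative risk reduction: {rrr:.0%}")           # the impressive number
print(f"absolute risk reduction: {arr:.1%}")           # the individual's number
print(f"lives saved per {n} screened: {n * arr:.0f}")
```

The same underlying facts yield "25 per cent" in one framing and "one in a thousand" in the other.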

Here’s another example. The drug giant Pfizer claims, on the basis of a clinical trial, that its statin drug Lipitor reduces the risk of heart attacks in people with multiple risk factors—high blood pressure, high cholesterol or angina—by 36 per cent. Assume 100 people take Lipitor for three and a half years. How many heart attacks will be prevented? The answer, once again, is one. Without Lipitor, there would have been three heart attacks; with it, there were two. This is indeed close to the 36 per cent reduction claimed; Pfizer is not lying. But the bottom line is that 100 high-risk people had to take Lipitor for more than three years to prevent one of them having a heart attack. The other 99 got no benefit.
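The statin example works the same way. A sketch with the per-100 figures quoted above:

```python
# Statin-trial figures from the text: per 100 high-risk people
# taking the drug for three and a half years.
n = 100
attacks_without = 3   # heart attacks without the drug
attacks_with = 2      # heart attacks with the drug

# Relative reduction: in the same region as the "36 per cent" claim.
rrr = (attacks_without - attacks_with) / attacks_without

# Absolute reduction: one heart attack per 100 people treated.
arr = (attacks_without - attacks_with) / n

print(f"relative risk reduction: {rrr:.0%}")
print(f"absolute risk reduction: {arr:.0%}")
```

A one-percentage-point change, expressed relative to a small baseline, becomes a figure ten times larger.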

Benefits are easy to exaggerate when they are expressed in this way—and medicine is riddled with examples of inflated benefits and understated risks, misleading the unwell into unrealistic expectations and causing healthy people to worry needlessly.

The problem is called mismatched framing: benefits are framed in one way, side effects in another. Expressing benefits as a relative risk—25 per cent fewer breast cancer deaths, 36 per cent fewer heart attacks—produces a nice big number that looks impressive. But unless we know what the real underlying risk is, a percentage reduction in that risk is meaningless.

Let’s suppose that the risk to women of dangerous blood clots from a contraceptive pill is one in 7,000. A new pill is introduced and the risk doubles. Letters are sent to GPs across the country saying that the risks of the new pill are 100 per cent greater, and panic ensues. Women give up the new pill in droves, abortions rise, and, paradoxically, so does the risk of blood clots, since pregnancy carries a greater risk of thrombosis than the new pill. This, of course, actually happened in 1995. If women had been told not that the risk had doubled, but that it had increased by one in 7,000, would the same panic have occurred? Almost certainly not.
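Put into numbers, the 1995 scare turns on exactly this distinction. A sketch using the one-in-7,000 figure from the text:

```python
# Contraceptive-pill figures from the text: risk of a dangerous
# blood clot (illustrative, as in the article).
old_risk = 1 / 7000   # risk on the old pill
new_risk = 2 / 7000   # risk on the new pill

relative_increase = (new_risk - old_risk) / old_risk   # the alarming framing
absolute_increase = new_risk - old_risk                # the informative framing

print(f"relative increase: {relative_increase:.0%}")   # 100 per cent
print(f"absolute increase: one extra clot per {1 / absolute_increase:.0f} women")
```

"100 per cent greater" and "one extra case in 7,000" describe the same change.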

One in 7,000 is an absolute risk increase; 100 per cent is a relative risk increase. In medical journals, it is commonplace for the benefits to be expressed in relative terms, while the side effects are given in absolute terms. One recent study of the BMJ, the Journal of the American Medical Association and the Lancet found that a third of papers that included both benefits and harms failed to express them in the same metric.

The Academy of Medical Sciences has postponed a report on the issue until later in 2010. Meanwhile, misleading leaflets urging women to get screened for breast cancer were withdrawn by the NHS in February because they failed to mention risks—false positives, invasive tests, misleading results, unnecessary operations and anxiety—and did not express the benefits straightforwardly.

Public health specialists are in a bind over this. Their job is to get as many participants as possible screened, or vaccinated against flu, or whatever the latest wheeze is. So they focus on big numbers—“breast screening saves an estimated 1,400 lives each year”—in order to make not participating seem like a dereliction of duty.

The numbers are indeed large. But what matters to the individual making a decision about whether to participate is not how it affects national statistics or NHS targets, but what it means to him or her. The best way to express that is as a “number needed to treat” (NNT): how many people have to follow the advice or take the treatment for one of them to be spared a bad outcome. In the case of the statin trial, the NNT would be 100; with breast cancer screening, it is 1,000. So the chances that an individual will personally benefit are quite small: one in 100 for statins, one in 1,000 for mammography. Some doctors argue that any treatment with an NNT of over 50 is no better than buying a lottery ticket.
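The NNT is simply the reciprocal of the absolute risk reduction. A quick sketch, applied to the two examples above:

```python
def nnt(events_without, events_with, n):
    """Number needed to treat: how many people must take the
    treatment for one of them to benefit (1 / absolute risk
    reduction)."""
    arr = (events_without - events_with) / n
    return 1 / arr

# Figures from the text: 3 vs 2 heart attacks per 100 on statins;
# 4 vs 3 breast cancer deaths per 1,000 screened.
print(f"statins:     NNT = {nnt(3, 2, 100):.0f}")
print(f"mammography: NNT = {nnt(4, 3, 1000):.0f}")
```

The larger the NNT, the smaller the chance that any given individual is the one who benefits.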

If women were told this, some would decide to get screened anyway, particularly if they have a family history of cancer. Others may prefer to opt out. It is, or ought to be, an individual decision. Screening is not worthless, but nor is it quite as wonderful as women have been led to believe.

Can public health doctors afford to be honest about risks and benefits? I wouldn’t bet on it. If drug companies, journals, public health campaigners and journalists are allowed to exaggerate without actually lying, who’s to stop them? Meanwhile patients and the public struggle to understand what’s really best for them. Much greater transparency is needed, and fast.