Posts filed under Medical news

August 7, 2014

Vitamin D context

There’s a story in the Herald about Alzheimer’s Disease risk being much higher in people with low vitamin D levels in their blood. This is observational data, where vitamin D was measured and the researchers then waited to see who would get dementia. That’s all in the story, and the problems aren’t the Herald’s fault.

The lead author of the research paper is quoted as saying

“Clinical trials are now needed to establish whether eating foods such as oily fish or taking vitamin D supplements can delay or even prevent the onset of Alzheimer’s disease and dementia.”

That’s true, as far as it goes, but you might have expected the person writing the press release to mention the existing randomised trial evidence.

The Women’s Health Initiative, one of the largest, and probably the most expensive, randomised trials ever, included randomisation to calcium and vitamin D or placebo. The goal was to look at prevention of fractures, with prevention of colon cancer as a secondary question, but they have data on dementia and they have published it

During a mean follow-up of 7.8 years, 39 participants in the treatment group and 37 in the placebo group developed incident dementia (hazard ratio (HR) = 1.11, 95% confidence interval (CI) = 0.71-1.74, P = .64). Likewise, 98 treatment participants and 108 placebo participants developed incident [mild cognitive impairment] (HR = 0.95, 95% CI = 0.72-1.25, P = .72). There were no significant differences in incident dementia or [mild cognitive impairment] or in global or domain-specific cognitive function between groups.

That’s based on roughly 2000 women in each treatment group.
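As a rough check that this interval makes sense given how few events there were, you can reconstruct it from the event counts alone. This is a sketch using the usual normal approximation to the log hazard ratio (the published analysis used a proper survival model); the approximation is mine, the counts are from the quote above.

```python
from math import exp, log, sqrt

# 39 incident dementia cases in the treatment group, 37 in placebo,
# with similar group sizes. Under the usual normal approximation, the
# standard error of the log hazard ratio is about sqrt(1/e1 + 1/e2).
hr = 1.11                  # reported hazard ratio
se = sqrt(1/39 + 1/37)     # approximate SE of log(HR)

lower = exp(log(hr) - 1.96 * se)
upper = exp(log(hr) + 1.96 * se)
print(f"approximate 95% CI: {lower:.2f} to {upper:.2f}")
# -> approximate 95% CI: 0.71 to 1.74, matching the published interval
```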

The Women’s Health Initiative data doesn’t nail down all the possibilities. It could be that a higher dose is needed. It could be that the women were too healthy (although half of them had low vitamin D levels by usual criteria). The research paper mentions the Women’s Health Initiative and these possible explanations, so the authors were definitely aware of them.

If you’re going to tell people about a potential way to prevent dementia, it would be helpful to at least mention that one form of it has been tried and didn’t work.

New breast cancer gene

The Herald has a pretty good story about a gene, PALB2, in which some mutations cause a substantially raised risk of breast cancer. It’s not as novel as the story implies (the first sentence of the abstract is “Germline loss-of-function mutations in PALB2 are known to confer a predisposition to breast cancer.”), but the quantified increase in risk is new and potentially a useful thing to know.

Genetic testing for BRCA mutations is funded in NZ for people with a sufficiently strong family history, but the policy is to test one of the affected relatives first. This new gene demonstrates why.

If you had a high-risk family history of breast cancer, and tested negative for BRCA1 and BRCA2 mutations, you might assume you had missed out on the bad gene. It’s possible, though, that your family’s risk was due to some other mutation — in PALB2, or in another undiscovered gene — and in that case the negative test didn’t actually tell you anything. By testing an affected family member first, you can be sure you are looking in the right place for your risks, rather than just in the place that’s easiest to test.

July 11, 2014

Another prostate cancer study

Today’s prostate cancer risk factor, per the Herald, is vasectomy. The press release is here; the paper isn’t open-access.

This is a much more reliable study than the one earlier in the week about cycling, and there’s a reasonable case that this one is worth a press release.

In 1986, the researchers recruited about 50000 men (health professionals: mostly dentists and vets), then followed them up to see how their health changed over time. This research involves the 43000 who hadn’t had any sort of cancer at the start of the study. As the Herald says, about a quarter of the men had a vasectomy, and there have been 6000 prostate cancer diagnoses. So there’s a reasonable sample size, and there is a good chance you would have heard about this result if no difference had been found (though probably not via the Daily Mail).

The relative increase in risk is estimated as about 10% overall and about 20% for ‘high-grade’ tumours, which is much more plausible than the five-fold increase claimed for cycling.  The researchers had information about the number of prostate cancer tests the men had had, so they can say this isn’t explained by a difference in screening — the cycling study only had total number of doctor visits in the past year. Also, the 20% difference is seen in prostate cancer deaths, not just in diagnoses, though if you only consider deaths the evidence is borderline.  Despite all this, the researchers quite rightly don’t claim the result is conclusive.

There are two things the story doesn’t say. First, if you Google the name of the lead researcher and ‘prostate cancer’, one of the top hits is another paper on prostate cancer (and coffee, protective). That is, the Health Professionals Follow-up Study, like its sister cohort, the Nurses’ Health Study, is in the business of looking for correlations between a long list of interesting exposures and potential effects. Some of what it finds will be noise, even if it appears to pass sanity checks and statistical filters. They aren’t doing anything wrong; that’s just what life is like.

Second, there were 167 lethal prostate cancers in men with vasectomies. If the excess risk of 20% is really due to vasectomy, rather than something else, that would mean about 27 cancers caused by 12000 vasectomies. Combining lethal and advanced cases, the same approach gives an estimated 38 cases from 12000 vasectomies. So, if this is causation, the risk is 2 or 3 serious prostate cancers for every 1000 vasectomies. That’s not trivial, but I think it sounds smaller than “20% raised risk”.
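For anyone who wants to check that arithmetic, here’s the calculation. The only assumption is the one already made in the text: that the whole 20% excess is caused by vasectomy.

```python
# If lethal prostate cancer is 20% more common after vasectomy, the
# expected count without vasectomy is the observed count divided by 1.2.
lethal_cases = 167       # lethal cancers in men with vasectomies
vasectomies = 12000
rr = 1.2                 # relative risk for lethal/high-grade disease

excess = lethal_cases - lethal_cases / rr
print(f"excess lethal cases: {excess:.1f}")                        # ~27.8
print(f"per 1000 vasectomies: {1000 * excess / vasectomies:.1f}")  # ~2.3
```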

July 9, 2014

Would I have heard if the results were different?

The story about cycling and prostate cancer in the Herald (or the Daily Mail) is a good opportunity to look at some of the rules of thumb for deciding which stories to read or believe:

First, would you have heard if the results were the other way around? Almost certainly not: prostate cancer wasn’t the main point of this study, and there wasn’t a previously-suspected relationship.

Second, for cancer specifically, is this mortality or diagnosis data? That is, are we seeing an increase in detection or in cancer? This is diagnosis data, so it could be just an increase in detection. The researchers were confident it wasn’t, but we must remember the immortal words of Mandy Rice-Davies: “He would, wouldn’t he?”

Third, what sort of study is it? Obviously it can’t be experimental, but a good study design would be to ask people about cycling (or, even better, measure cycling) and then see whether it’s the bike fanatics who develop cancer. This study was a self-selected survey of cyclists, getting self-reported data about past cycling and past diagnosis of prostate cancer. It’s a fairly extreme sample, too: half of them cycle more than 5.75 hours per week.

Fourth, how strong is the evidence of association, and what sort of sample size are we looking at? The association is just barely statistically significant (p=0.046 in one model, p=0.025 in a second), and there are only 36 prostate cancer cases in the sample. It’s pretty borderline. The estimated relative risk is huge (it has to be, given the sample size), but the uncertainty range is also huge. The confidence interval on the relative risk of 5 reported by the Herald goes from 1.5 to 18.
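To see why the estimate has to be huge, you can back out the uncertainty from the reported interval. This is a sketch assuming the interval was computed in the usual way, as plus or minus 1.96 standard errors on the log scale.

```python
from math import exp, log

# Reported: relative risk 5, 95% CI from 1.5 to 18. On the log scale
# the interval spans 2 * 1.96 standard errors.
se = (log(18) - log(1.5)) / (2 * 1.96)
print(f"SE of log(RR): {se:.2f}")                          # ~0.63

# With that much uncertainty, only a large estimate can reach p < 0.05:
print(f"smallest 'significant' RR: {exp(1.96 * se):.1f}")  # ~3.5
```

In other words, with only 36 cases a study like this could never have reported a modest association: anything it detected was bound to look dramatic.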

Fifth, what does previous research say? This is in the story

‘To the best of our knowledge, this is the first study to demonstrate an association between prostate cancer and cycling, so there are no studies hypothesizing a pathophysiological mechanism for such a link.’

Sixth, what do other experts think? We don’t know. The closest thing to an independent comment is this in the press release

“Physicians should discuss the potential risks and health benefits of cycling with their patients, and how it may impact their overall health,” says Ajay Nehra, MD, Editor-in-Chief of Journal of Men’s Health and Chair, Department of Urology, Director, Men’s Health, Rush University Medical Center, Chicago, IL.

He could have said that without reading the paper.

In summary, there’s borderline evidence from a weak study design for a sensational finding that isn’t supported by any prior evidence. This is fine as research, but it shouldn’t be in the headlines.

You can read the research paper here for the next month, and the journal press release here.

Recycling

In March, I wrote

The Herald has a story about a potential blood test for dementia, which gives the opportunity to talk about an important statistical issue. The research seems to be good, and the results are plausible, though they need to be confirmed in a separate, larger sample before they can really be believed. …

 But it’s the description of the accuracy of the test that might be misleading.

There’s a Herald story today about a new test; the same comments apply, except that the research paper is open-access.

July 5, 2014

Once is accident, twice is coincidence

Back in 2010, a piece in Slate pointed out that a country’s success in the 2010 and 2006 World Cup knockout rounds was strongly correlated with the proportion of the population infected by Toxoplasma gondii. In 2010, Toxoplasma seroprevalence predicted all eight knockout-round wins; in 2006 it predicted seven of eight.

Toxoplasma, in case you weren’t introduced when you met it, is a single-celled organism that can live, and reproduce asexually, in pretty much any warm-blooded animal, but can only reproduce sexually in the guts of cats. That’s not the interesting part. The interesting part is that in rodents the parasite has effects on the brain, making the animal less cautious and more likely to end up in the gut of a cat. There’s some evidence Toxoplasma also has effects on human behaviour, though that’s still controversial.

Now, in 2014, I see a tweet from Australian biologist Michael Whitehead making the same comparison for this year’s knockout rounds.

So, three times is enemy action?

There are good reasons to be sceptical: the football rankings haven’t changed all that much since 2006, so this isn’t really three independent tests. Also, the seroprevalence data is for the countries as a whole, not for the team members.  Still, in contrast to the predictions using Octopus vulgaris in the last World Cup, it’s not completely out of the question that there could be a real effect.
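For what it’s worth, if the matches really were independent coin flips, the 2006 and 2010 record would be striking; the point of the caveat above is that they aren’t independent.

```python
from math import comb

# 8 of 8 knockout predictions right in 2010, 7 of 8 in 2006:
# at least 15 of 16 correct, if each match were a fair coin flip.
n, k = 16, 15
p = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(at least {k}/{n} correct by chance) = {p:.4f}")   # ~0.0003

# But team strength and seroprevalence barely change between
# tournaments, so 2006, 2010 and 2014 are closer to one test
# repeated three times than to three independent tests.
```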

July 4, 2014

Measuring accuracy

From the Herald

A new scientific test is able to detect which 14-year-olds will become binge drinkers by the time they hit 16.

A study published in the journal Nature describes how scientists have developed a system that weighs up a range of risk factors and predicts – with about 70 per cent accuracy – which teens will become heavy drinkers.

That’s true, but the definition of accuracy is doing quite a bit of work here.

We don’t have figures for 16 year olds, but according to the Ministry of Health about 20% of 15-17 year olds have ‘hazardous drinking patterns.’ That means I can predict with 80% accuracy without even needing to weigh up a range of risk factors — I just need to predict “No” each time. Parents, teachers, or people working with youth could probably do better than my 80% accuracy.

The researchers found that their test correctly classified 73% of the non-binge-drinkers and 67% of the binge drinkers, which means it would get about 72% of people classified correctly. That’s rather worse than my trivial “the kids are ok” predictor. In order to be any use, the new test, which combines brain imaging and detailed interviews, needs to be set to a higher threshold, so that it predicts fewer drinkers. The researchers could have done this, but they didn’t.
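The arithmetic, for anyone checking at home: overall accuracy is just the within-group accuracies weighted by how common each group is, here using the Ministry of Health 20% figure as the base rate.

```python
prevalence = 0.20       # ~20% of teens with hazardous drinking patterns
specificity = 0.73      # non-binge-drinkers correctly classified
sensitivity = 0.67      # binge drinkers correctly classified

test_accuracy = (1 - prevalence) * specificity + prevalence * sensitivity
trivial_accuracy = 1 - prevalence    # always predict "No"

print(f"test accuracy:    {test_accuracy:.0%}")     # 72%
print(f"trivial accuracy: {trivial_accuracy:.0%}")  # 80%
```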

Also, in order to be any use, the test needs to identify a group who will selectively benefit from some feasible intervention, and there needs to be funding to supply both this intervention, and the cost of doing long interviews and fMRI brain imaging on large groups of teenagers. And that needs to be the best way to spend the money.

June 29, 2014

Not yet news

When you read “The university did not reveal how the study was carried out” in a news story about a research article, you’d expect the story to be covering some sort of scandal. Not this time.

The Herald story is about broccoli and asthma

They say eating up to two cups of lightly steamed broccoli a day can help clear the airways, prevent deterioration in the condition and even reduce or reverse lung damage.

Other vegetables with the same effect include kale, cabbage, brussels sprouts, cauliflower and bok choy.

Using broccoli to treat asthma may also help for people who don’t respond to traditional treatment.

‘How the study was carried out’ isn’t just a matter of detail: if they just gave people broccoli, they wouldn’t know what other vegetables had the same effect, so maybe it wasn’t broccoli but some sort of extract? Was it even experimental or just observational? And did they actually test people who don’t respond to traditional treatment? And what exactly does that mean — failing to respond is pretty rare, though failing to get good control of asthma attacks isn’t.

The Daily Mail story was actually more informative (and that’s not a sentence I like to find myself writing). They reported a claim that wasn’t in the press release

The finding [is] due to sulforaphane naturally occurring in broccoli and other cruciferous vegetables, which may help protect against respiratory inflammation that can cause asthma.

Even then, it isn’t clear whether the research really found that sulforaphane was responsible, or whether that’s just their theory about why broccoli is effective. 

My guess is that the point of the press release is the last sentence

Ms Mazarakis will be presenting the research findings at the 2014 Undergraduate Research Conference about Food Safety in Shanghai, China.

That’s a reasonable basis for a press release, and potentially for a story if you’re in Melbourne. The rest isn’t. It’s not science until they tell you what they did.

June 24, 2014

Beyond clinical trials?

From The Atlantic

And with reliable simulations for what’s happening at the cellular level, this approach could be used to treat patients and also to test new drugs and devices. Dassault Systèmes is focusing on that level of granularity now, trying to simulate propagation of cholesterol in human cells and building oncological cell models. “It’s data science and modeling,” Charlès told me. “Coupling the two creates a new environment in medicine.”

Charlès and his colleagues believe that a shift to virtual clinical trials—that is, testing new medicines and devices using computer models before or instead of trials in human patients—could make new treatments available more quickly and cheaply. 

From pharmaceutical chemist Derek Lowe, in response

Speed the day. The cost of clinical trials, coupled with their low success rate, is eating us alive in this business (and it’s getting worse every year). This is just the sort of thing that could rescue us from the walls that are closing in more tightly all the time. But this talk of shifts and revolutions makes it sound as if this sort of thing is happening right now, which it isn’t. No such simulated clinical trial, one that could serve as the basis for a drug approval, is anywhere near even being proposed. How long before one is, then? If things go really swimmingly, I’d say 20 to 25 years from now, personally, but I’d be glad to hear other estimates.

We do, potentially, have the tools to use current treatments more effectively, and data science can help. Even there, the biggest opportunities are nothing to do with subtle individual differences — for example, both here and in the US, only about half of people with hypertension are being treated.

June 18, 2014

Counts and proportions

Phil Price writes (at Andrew Gelman’s blog) on the impact of bike-share programs:

So the number of head injuries declined by 14 percent, and the Washington Post reporter — Lenny Bernstein, for those of you keeping score at home — says they went up 7.8%.  That’s a pretty big mistake! How did it happen?  Well, the number of head injuries went down, but the number of injuries that were not head injuries went down even more, so the proportion of injuries that were head injuries went up.

To be precise, the research paper found 638 hospitalised head injuries in the 24 months before the bike-share program started, and 273 in the 12 months afterwards. In a set of control cities that didn’t start a bike-share program there were 712 head injuries in the 24 months before the matching date and 342 in the 12 months afterwards. Comparing annualised rates, that’s a 14.4% decrease in the cities that added bike-share programs and a 4% decrease in those that didn’t.
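Since the before and after periods have different lengths, the comparison has to be between rates, not raw counts:

```python
# 24 months of 'before' data but only 12 months of 'after' data,
# so compare monthly rates rather than raw counts.
def pct_change(before, months_before, after, months_after):
    rate_before = before / months_before
    rate_after = after / months_after
    return 100 * (rate_after - rate_before) / rate_before

print(f"bike-share cities: {pct_change(638, 24, 273, 12):+.1f}%")  # -14.4%
print(f"control cities:    {pct_change(712, 24, 342, 12):+.1f}%")  # -3.9%
```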