Posts filed under Evidence (90)

April 1, 2013

Briefly

Despite the date, this is not in any way an April Fools post.

  • “Data is not killing creativity, it’s just changing how we tell stories”, from Techcrunch
  • Turning free-form text into journalism: Jacob Harris writes about an investigation into food recalls (nested HTML tables are not an open data format either)
  • Green labels look healthier than red labels, from the Washington Post. When I see this sort of research I imagine the marketing experts thinking “how cute, they figured that one out after only four years”
  • Frances Woolley debunks the recent stories about how Facebook likes reveal your sexual orientation (with comments from me). It’s amazing how little you get from the quoted 88% accuracy, even if you pretend the input data are meaningful (a quick illustration follows this list). There are some measures of accuracy that you shouldn’t be allowed to use in press releases.
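As a quick illustration of how little a single accuracy figure tells you, here is a minimal sketch in Python. The numbers are hypothetical: it assumes the quoted 88% were the sensitivity and specificity of a simple classifier and picks an arbitrary 5% base rate, which is not how the study actually reported its results.

```python
# Hypothetical numbers for illustration only.
sensitivity = 0.88   # assumed P(flagged | truly in the group)
specificity = 0.88   # assumed P(not flagged | truly not in the group)
prevalence = 0.05    # assumed base rate of the group in the population

# Positive predictive value by Bayes' rule: of the people the classifier
# flags, what fraction are actually in the group?
ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
)
print(f"PPV at {prevalence:.0%} prevalence: {ppv:.1%}")  # roughly 28%
```

Under these assumptions, most of the people the classifier flags are misclassified, which is why a headline accuracy number on its own belongs nowhere near a press release.
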
March 15, 2013

Better evidence in education

There’s a new UK report by Ben Goldacre, “Building Evidence into Education”, which has been welcomed by the Teacher Development Trust.

Part of the introduction is worth quoting in detail:

Before we get that far, though, there is a caveat: I’m a doctor. I know that outsiders often try to tell teachers what they should do, and I’m aware this often ends badly. Because of that, there are two things we should be clear on.

Firstly, evidence based practice isn’t about telling teachers what to do: in fact, quite the opposite. This is about empowering teachers, and setting a profession free from governments, ministers and civil servants who are often overly keen on sending out edicts, insisting that their new idea is the best in town. Nobody in government would tell a doctor what to prescribe, but we all expect doctors to be able to make informed decisions about which treatment is best, using the best currently available evidence. I think teachers could one day be in the same position.

Secondly, doctors didn’t invent evidence based medicine. In fact, quite the opposite is true: just a few decades ago, best medical practice was driven by things like eminence, charisma, and personal experience. We needed the help of statisticians, epidemiologists, information librarians, and experts in trial design to move forwards. Many doctors – especially the most senior ones – fought hard against this, regarding “evidence based medicine” as a challenge to their authority.

In retrospect, we’ve seen that these doctors were wrong. The opportunity to make informed decisions about what works best, using good quality evidence, represents a truer form of professional independence than any senior figure barking out their opinions. A coherent set of systems for evidence based practice listens to people on the front line, to find out where the uncertainties are, and decide which ideas are worth testing. Lastly, crucially, individual judgement isn’t undermined by evidence: if anything, informed judgement is back in the foreground, and hugely improved.

This is the opportunity that I think teachers might want to take up.

February 23, 2013

When in doubt, randomise.

There has been (justified) wailing and gnashing of teeth over recent year-9 maths comparisons, and the Herald reports that a ‘back to basics’ system is being considered.

Auckland educator Des Rainey, who did the research with teachers to test his home-made Kiwi Maths memorisation system, said the results came as a shock to the teachers and made him doubt his programme could work.

But after a year of practising multiplication and division on the Kiwi Maths grids for up to 10 minutes a day, the students more than doubled their speed.

This program looks promising, but why is anyone even talking about implementing a major nationwide intervention based on a small, uncontrolled before/after comparison measuring a surrogate outcome?

That is, unless you believe teachers and schoolchildren are much less individually variable than, say, pneumococci, you would want a randomised controlled comparison. And since presumably Des Rainey would agree that speed of basic arithmetic matters primarily because it’s a foundation for actual numeracy, you’d want to measure the success of the program on numeracy tasks rather than on arithmetic speed. The results being reported are what the medical research community would call a non-randomised Phase IIa efficacy trial: an important stepping stone, but not a basis for policy.
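To see why the before/after design is so weak here, a minimal simulation sketch (with entirely made-up numbers, not Rainey’s data): if students improve over a year anyway through ordinary teaching and practice, an uncontrolled before/after gain looks impressive even when the intervention itself does nothing, while a randomised control group separates the intervention from that background improvement.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up setup: 40 classes tested at the start and end of the year.
n_classes = 40
baseline = rng.normal(50, 10, n_classes)    # assumed start-of-year scores
maturation = rng.normal(8, 3, n_classes)    # assumed ordinary improvement over the year
true_effect = 0.0                           # the intervention does nothing in this sketch

# Uncontrolled before/after: every class gets the intervention.
after_all = baseline + maturation + true_effect
print("before/after gain:", (after_all - baseline).mean())   # large; looks like success

# Randomised comparison: half the classes get the intervention, half don't.
treated = rng.permutation(n_classes) < n_classes // 2
gain = (baseline + maturation + true_effect * treated) - baseline
print("randomised estimate:", gain[treated].mean() - gain[~treated].mean())  # near zero
```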

Of course, that’s not how education works, is it?

February 22, 2013

Drug safety is hard

There are new reports, according to the Herald, that synthetic cannabinoids are ‘associated’ with suicidal tendencies in long-term users. One difficulty in evaluating this sort of data is the huge peak in suicide rates in young men. Almost anything you can think of that might be a bad idea is more commonly done by young men than by other people, so an apparent association isn’t all that surprising. There is also the problem of direction of causation (the sorts of problems that make suicide a risk might also increase drug use) and the difficulty of even getting a reasonable estimate of the denominator, the number of people using the drug. Serious, rare effects of a recreational drug are the hardest to be sure about, and the same is true of prescription medications. It took big randomized trials to find out that Vioxx more than doubled your rate of heart attack, and a study of 1500 lung-cancer cases to find even the 20-fold increase in risk from smoking.
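To put rough numbers on why rare harms need big trials, here is a sketch of the standard sample-size calculation for comparing two proportions. The rates are assumptions for illustration, not figures from the Vioxx trials.

```python
from scipy.stats import norm

# Assumed illustrative rates, not data from any particular trial.
p_control = 0.01    # suppose 1% of people per year have the event without the drug
p_drug = 0.02       # and the drug doubles that rate
alpha, power = 0.05, 0.80

# Normal-approximation sample size per arm for a two-proportion comparison.
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
variance = p_control * (1 - p_control) + p_drug * (1 - p_drug)
n_per_arm = z**2 * variance / (p_control - p_drug) ** 2
print(round(n_per_arm))   # on the order of 2300 people per arm, ~4600 in total
```

Halve the event rate or shrink the relative increase and the required numbers grow quickly.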

In this particular example there is additional supporting evidence. A few years back there was a lot of research into anti-cannabinoid drugs for weight loss (anti-munchies), and one of the things that sank these was an increase in suicidal thoughts in the patients in the early randomized trials.  It’s quite plausible that the same effect would happen as a dose of the cannabinoid wears off.

In general, though, this is the sort of effect that the proposed testing scheme for psychoactive drugs will have difficulty finding, or ruling out.

February 15, 2013

There oughtta be a law

David Farrar (among others) has written about a recent Coroner’s recommendation that high-visibility clothing should be compulsory for cyclists. As he notes, “if you are cycling at night you are a special sort of moron if you do not wear hi-vis gear”, but he rightly points out that this isn’t the whole issue.

It’s easy to analyse a proposed law as if the only changes that result are those the law intends: everyone will cycle the same way, but they will all be wearing lurid chartreuse studded with flashing lights and will live happily ever after.  But safety laws, like other public-health interventions, need to be assessed on what will actually happen.

Bicycle helmet laws are a standard example.  There is overwhelming evidence that wearing a bicycle helmet reduces the risk of brain injury, but there’s also pretty good evidence that requiring bicycle helmets reduces cycling. Reducing the number of cyclists is bad from an individual-health point of view and also makes cycling less safe for those who remain. It’s not obvious how to optimise this tradeoff, but my guess based on no evidence is that pro-helmet propaganda might be better than helmet laws.
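A toy calculation (every number below is invented for illustration, not taken from the helmet literature) shows why the comparison isn’t straightforward: the answer depends on how much a law raises helmet wearing, how much it reduces cycling, and how strongly per-cyclist risk rises when there are fewer cyclists on the road.

```python
# Invented numbers for illustration; not estimates from the helmet literature.
baseline_cyclists = 100_000
base_injury_rate = 1e-3       # assumed head-injury risk per unhelmeted cyclist-year
helmet_protection = 0.6       # assumed fraction of head injuries a helmet prevents
numbers_exponent = 0.4        # assumed strength of the safety-in-numbers effect

def head_injuries(cyclists, helmet_share):
    # Per-cyclist risk rises as the number of cyclists falls (safety in numbers).
    rate = base_injury_rate * (baseline_cyclists / cyclists) ** numbers_exponent
    helmeted, unhelmeted = cyclists * helmet_share, cyclists * (1 - helmet_share)
    return unhelmeted * rate + helmeted * rate * (1 - helmet_protection)

# Promotion campaign: helmet wearing rises to 70%, cycling unchanged.
print(head_injuries(baseline_cyclists, 0.70))
# Compulsion: helmet wearing rises to 95%, but 20% of cyclists give up.
print(head_injuries(baseline_cyclists * 0.8, 0.95))
```

Neither number includes the health cost of the cycling that is lost, which is exactly the part of the tradeoff that a simple injury count misses.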

Another example was a proposal by some US airlines to require small children to have their own seat rather than flying in a parent’s lap. It’s clear that having their own seat is safer, but also much more expensive.  If any noticeable fraction of these families ended up driving rather than flying because of the extra cost, the extra deaths on the road would far outweigh those saved in the air.
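A back-of-envelope version of that calculation, with assumed order-of-magnitude risk figures rather than anything from the actual proposal:

```python
# Assumed, order-of-magnitude figures for illustration; not data from the proposal.
road_risk = 5e-9     # assumed road fatality risk per person-kilometre
air_risk = 0.05e-9   # assumed airline fatality risk per passenger-kilometre
trip_km = 1_500      # a hypothetical one-way trip
families = 100_000   # hypothetical number of affected families per year
switch = 0.02        # suppose just 2% drive instead of paying for an extra seat

# Extra road deaths among the families who switch (assume four people per car).
extra_road = families * switch * 4 * trip_km * road_risk
# Generous upper bound on lives saved in the air: as if a separate seat removed
# all flight risk for every infant who would otherwise have flown on a lap.
saved_air = families * trip_km * air_risk
print(f"extra road deaths ~{extra_road:.3f}, air deaths averted < {saved_air:.4f}")
```

Even with only a small fraction of families switching, the road-death side of the ledger dominates, which is the general point: the safety effect of a rule depends on what people do in response, not just on what happens in the cabin.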

It’s hard to predict the exact side-effects of a law, but that doesn’t mean they can be ignored any more than the exact side-effects of new medications can be ignored. The problem is that no-one will admit they don’t know the effects of a proposed law.  It took us decades to persuade physicians that they don’t magically know the effects of new treatments; let’s hope it doesn’t take much longer in the policy world.

[PS: yes, I do wear a helmet when cycling, except in the Netherlands, where bikes rule]

January 21, 2013

For the record

This weekend, Christchurch had its biggest aftershock for six months.  The moon was substantially further from the earth than average.

Journalist on science journalism

From Columbia Journalism Review (via Tony Cooper), a good long piece on science journalism by David H. Freedman (whom Google seems to confuse with statistician David A. Freedman).

What is a science journalist’s responsibility to openly question findings from highly credentialed scientists and trusted journals? There can only be one answer: The responsibility is large, and it clearly has been neglected. It’s not nearly enough to include in news reports the few mild qualifications attached to any study (“the study wasn’t large,” “the effect was modest,” “some subjects withdrew from the study partway through it”). Readers ought to be alerted, as a matter of course, to the fact that wrongness is embedded in the entire research system, and that few medical research findings ought to be considered completely reliable, regardless of the type of study, who conducted it, where it was published, or who says it’s a good study.

Worse still, health journalists are taking advantage of the wrongness problem. Presented with a range of conflicting findings for almost any interesting question, reporters are free to pick those that back up their preferred thesis—typically the exciting, controversial idea that their editors are counting on. When a reporter, for whatever reasons, wants to demonstrate that a particular type of diet works better than others—or that diets never work—there is a wealth of studies that will back him or her up, never mind all those other studies that have found exactly the opposite (or the studies can be mentioned, then explained away as “flawed”). For “balance,” just throw in a quote or two from a scientist whose opinion strays a bit from the thesis, then drown those quotes out with supportive quotes and more study findings.

I think the author is unduly negative about medical science: part of the problem is that published claims of associations are expected to have a fairly high false positive rate, and there’s not necessarily anything wrong with that as long as everyone understands the situation. Lowering the false positive rate would require either much larger sample sizes or a much higher false negative rate, and the coordination needed to get a sample size that makes the error rate low is prohibitive in most settings (with phase III clinical trials and modern genome-wide association studies as two partial exceptions). It’s still true that most interesting or controversial findings about nutrition are wrong, that journalists should know they are mostly wrong, and that they should write as if they know this. Not reprinting Daily Mail stories would probably help, too.
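To make the expected-false-positive point concrete, here is a small sketch of the standard calculation, with assumed values; the shape of the result, not the particular numbers, is the point.

```python
# Assumed illustrative values for a field testing many candidate associations.
alpha = 0.05    # conventional significance threshold
power = 0.5     # assumed typical power of a modest observational study
prior = 0.05    # assumed prior probability that a tested association is real

true_positives = prior * power
false_positives = (1 - prior) * alpha
share_false = false_positives / (true_positives + false_positives)
print(f"expected share of 'significant' findings that are false: {share_false:.0%}")
```

With these inputs, about two-thirds of the statistically significant findings are expected to be false, and pushing that share down means either raising power (bigger studies) or tolerating more missed real effects, which is the tradeoff described above.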


January 12, 2013

Some kind of bizarre coincidence?

The current H3N2 flu strain is causing serious illness in the US.

From the journalism blog heads-up, two adjacent headlines at Fox News:

[the two screenshotted headlines]

December 21, 2012

Happy New B’ak’tun

Welcome to the 14th B’ak’tun in the Mayan Long Count calendar. That wasn’t so bad, was it?

In case anyone has any New B’ak’tun Resolutions about charities, I wanted to link to GiveWell, an organisation that tries to find evidence about the most cost-effective and transparent charities, and also provides a way to donate to these charities.

An encouraging feature is that they don’t just publish what they found about each charity; they also publish a detailed review of their mistakes.

Their current top 3 charities are one that distributes anti-mosquito bed nets, one that sends cash directly to very poor people in Kenya (via a very cheap cell-phone based system), and one that treats schistosomiasis.

December 10, 2012

Briefly

I’ve been away or busy for a couple of weeks, so here are some collected links on statistics, graphics, the media, and risk