Stat of the Week Competition Discussion: October 20 – 26 2012
If you’d like to comment on or debate any of this week’s Stat of the Week nominations, please do so below!
I’ve written before about the problem of getting actual opinions rather than social or political identifications in surveys. US late-night TV host Jimmy Kimmel had a great demonstration. He went out on the streets of LA on the afternoon before the 2nd presidential-candidate debate, and asked people who they thought won “last night’s debate”. There wasn’t any shortage of people with confident opinions (presumably these are a selected subset of the most entertaining victims, but the point still holds). (via)
The police are urging people to drive carefully this weekend, which is a good idea as always. They are also reducing their speeding threshold to 4km/h over the limit, saying that this “had made a big difference during previous holiday periods”, and that they want to “avoid a repeat of last year’s carnage, in which seven people were killed on the roads”. The lower speed tolerance was brought in for the Queen’s Birthday weekend in 2010, so it was already in force during last year’s Labour Weekend deaths; at least for Labour Day, then, it doesn’t seem to have made much difference.
It’s always hard to interpret figures for a single weekend (or even a single month) because of random variation. Here’s the data for the past six years.
The top panel shows monthly deaths, with October 2012 as an open circle because the number there is an extrapolation by doubling the deaths for Oct 1-15. There’s a lot of month-to-month variability, and the trend isn’t that obvious.
The second panel shows cumulative sums of deaths minus the average monthly number for 2007-2009, a ‘CUSUM’ chart of the kind used in industrial process monitoring. The curve is basically flat until mid-2010, and then starts a steady decline, suggesting that a new, lower, average started in mid-2010 and has been pretty stable since. The current value of the curve, at about -200, means that roughly 200 fewer people have died on the roads than would have if the rates had stayed at 2007-2009 levels.
The third panel shows the monthly deaths again, with horizontal lines at the average for 2007-2009 and 2011-12, confirming that there was a decrease to a new, relatively stable level. The decrease doesn’t just happen in months with holiday weekends, so it’s unlikely to just be the tightened speeding tolerance causing it. It would be good to know what is responsible, and there are plenty of theories, but not much evidence.
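The CUSUM in the second panel is nothing mysterious: it’s just a running total of how far each month’s toll is above or below the 2007-2009 average. Here’s a minimal sketch of the calculation in Python (the monthly counts are placeholders, not the real road-toll figures):

```python
# Sketch of the CUSUM calculation behind the second panel.
# The monthly counts below are placeholders, not the real road-toll figures.
monthly_deaths = {
    "2007-01": 38, "2007-02": 31, "2007-03": 35,   # ... one entry per month
    "2012-08": 22, "2012-09": 25, "2012-10": 24,   # Oct 2012 extrapolated by doubling Oct 1-15
}

# Baseline: average monthly deaths over 2007-2009
baseline_months = [m for m in monthly_deaths if m < "2010-01"]
baseline = sum(monthly_deaths[m] for m in baseline_months) / len(baseline_months)

# CUSUM: running total of (observed - baseline).  A flat curve means the process
# is running at the baseline rate; a steady downward slope means a lower rate.
running = 0.0
for month in sorted(monthly_deaths):
    running += monthly_deaths[month] - baseline
    print(f"{month}: {running:+6.1f}")
```

The final value of that running total is the cumulative difference from the baseline, which is where the figure of about 200 fewer deaths comes from.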
Q: Have you seen the headline: “Skipping breakfast makes you gain weight: study”?
A: If that’s the one with the chocolate cupcake photo, yes.
Q: Was this just another mouse study, or did they look at weight gain in people?
A: People, yes, but they didn’t measure weight gain.
Q: But doesn’t the headline say “makes you gain weight”?
A: Indeed.
Q: So what did they do?
A: They measured brain waves, and how much pasta lunch people ate. The people who skipped breakfast ate more.
Q: So it was a lab experiment.
A: You can’t really tell from the Herald story, which makes it sound as though the participants just chose whether or not to have breakfast, but yes. If you look at the BBC version, it says that the same people were measured twice, once when they had breakfast and once when they didn’t.
Q: And how much more lunch did they eat when they didn’t eat breakfast?
A: An average of 250 calories more.
Q: How does that compare to how much they would have eaten at breakfast?
A: There were brain waves, as well.
Q: How many calories would the participants have eaten at breakfast?
A: The part of the brain thought to be involved in “food appeal”, the orbitofrontal cortex, became more active on an empty stomach.
Q: Are you avoiding the question about breakfast?
A: Why would you think that? The breakfast was 730 calories. But the MRI scans showed that fasting made people hungrier.
Q: Isn’t 730 more than 250?
A: Comments like that are why people hate statisticians.
That’s how Ben Goldacre described the process of criticism and debate that’s fundamental to science, in a TED talk last year. At this time of year we expose a lot of innocent young students to this process: yesterday it was the turn of the statistical consulting course, next month it’s the BSc(Hons) and MSc research projects, and then the PhD students.
Here’s Ben Goldacre’s whole talk:
Cameron Slater (and the Police, and the Police Association) are Outraged about a Horizon poll on public perceptions of the police. They do have some fair points, although a few deep breaths wouldn’t hurt.
Horizon Research conducted a poll shortly after an article in the Dominion Post. This, in itself, is one of the claims against the company, but I don’t think this one really holds water — if you’re a public opinion firm wanting coverage, trying to capitalise on well-publicised issues seems fair enough, and it’s not as if we in the blogosphere are on the moral high ground here.
The results of the poll are clearly not relevant to the question of police conduct in the particular incident described in the Dominion Post article, since if the poll has even the slightest pretension to being representative, none of the respondents will know anything about that incident beyond what they might have read in the papers. Also, since this is a one-off poll, the Dominion Post’s headline “Trust in police hits new low survey shows” cannot possibly be justified. There is no comparison with a series of similar surveys in the past; respondents were just asked whether their trust in the police had changed over time.
Many of the reported findings of the poll seem reasonable, and in line with other evidence: for example, 73% trust the police as much as or more than they did five years ago, and the police are thought to do well in their primary areas of responsibility.
The police, in their press release, apparently as further argument against the poll results, point out that crime rates are falling. That’s not really relevant. Even if the crime rates were falling specifically because of police actions (which is unproved, since there are falls in other countries too), it would only prove that people should think the police are effective in stopping crime, not that they do think the police are fair in investigating complaints.
The controversial claims are about victims of police misconduct: firstly, that sizable majorities think the investigation procedure needs to be more independent, and secondly, that people who identify themselves as victims are not happy with how their cases were handled. Again, this doesn’t seem all that strange: I basically trust the police, but I’m still in favour of having investigations done independently just from the ‘lead us not into temptation’ principle. It’s not news that NZ, by international standards, gives people fairly low levels of compensation for all sorts of things. And it’s hardly surprising that people who think they were mistreated by police aren’t happy with how they were treated by police.
The problem is with how the poll was conducted, and there are at least two pieces of evidence that it wasn’t done well. The first is in the poll results themselves. The number of New Zealanders who file complaints against the police is very small. We looked at this back when the issue of complaints against teachers came up. In a one-year period there were 2052 complaints, half of them minor ‘Category 5’ complaints. That’s one Category 1-4 complaint per 4500 Kiwis per year (and if some people make multiple complaints, the effective number is even smaller). In a representative sample of 756 people there shouldn’t have been enough people with experience of police complaints to get useful estimates of anything, let alone to the quoted three decimal places. Even if we ignored bias and just worried about sampling variation, it would be a serious fault that no margin of error or sample size is quoted for these subgroups.
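As a rough illustration of just how few such people a sample of 756 should contain, here’s a back-of-the-envelope sketch (the five-year look-back window is an assumption purely for illustration; the complaint rate is the one quoted above):

```python
# How many people with first-hand experience of a police complaint should a
# representative sample of 756 contain?
sample_size = 756
complaint_rate = 1 / 4500   # Category 1-4 complaints per person per year (figure quoted above)
years = 5                   # assumed look-back window, purely for illustration

expected = sample_size * complaint_rate * years
print(f"Expected complainants in the sample: {expected:.1f}")
# About 0.17 per year of look-back, i.e. roughly one person over five years --
# nowhere near enough to estimate their views, let alone to three decimal places.

# Approximate 95% margin of error (worst case, p = 0.5) is about 1/sqrt(n):
for n in (756, 50, 10):
    print(f"subgroup of {n:>3}: about +/- {100 / n ** 0.5:.0f} percentage points")
```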
The second problem is that a Facebook page associated with the incident gave a bounty (a chance to win money and an iPad) to people who clicked through to the online poll. There are now comments on that page saying that not very many people did click through, and that they might not have been counted anyway because they wouldn’t have been registered far enough in advance. That reminds me of this XKCD cartoon. If you have a poll where it’s even conceivable that it could be biased this way, it’s not much consolation to know that the only known attempt to do it was a failure.
Back in January, Rachel Cunliffe looked at birth statistics for China as a whole and for Hong Kong, and saw a small ‘dragon baby’ boom for the Year of the Dragon in Hong Kong, but nothing for the whole PRC.
Today, the Herald is seeing a Year of the Dragon boom in Auckland:
A support organisation for migrant parents in Auckland is experiencing a baby boom because of a surge in the number of dragon babies born to Chinese parents.
They go on to say:
Membership at the Chinese Parents Support Service Trust for new Chinese migrant mothers has reached 200 – and one in four mothers had a dragon baby born this year.
It’s hardly surprising that new Chinese migrant mothers are likely to have a baby born this year — that’s what makes them new mothers. So what we really have is growth in the number of mothers using the service, and the fact that one in four of these mothers has a child under 9 months, a ‘dragon baby’.
Today’s graph is almost entirely frivolous. It’s pumpkin season in the US (Halloween and Thanksgiving), and Felix Salmon (a past winner of the American Statistical Association’s award for excellence in statistical reporting) is writing about how pumpkin has diversified.
‘Pumpkin,’ in this context, usually means the combination of sugar, cinnamon, cloves, and nutmeg that makes pumpkin pie palatable. The vegetable itself doesn’t really make an appearance.
On the continuing issue of how survey responses are sensitive to exact wording, it’s also worth pointing out that the Americans have a much narrower view of which vegetables qualify as pumpkins — they have to be round and orange on the outside. These, which I photographed last year in Melbourne, would not count as pumpkins in the US.
Congratulations to Andrew Mardon for being this week’s Stat of the Week winner for his nomination of:
“The higher a country’s chocolate consumption, the more Nobel laureates it spawns per capita, according to findings released in the New England Journal of Medicine.”
While the article mentions both correlation and causation, it doesn’t clearly explain the difference between the two and makes the distinction murkier by saying science is fallible and including a gem quote such as this:
“[The US] would have to up its cocoa intake by a whopping 125 million kg a year to produce one more laureate, said Franz Messerli, who did the analysis.”
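For scale, it’s worth working that quoted number backwards (assuming a US population of roughly 315 million, which is my figure, not the article’s):

```python
# What does 125 million extra kg of cocoa per year mean per person?
extra_cocoa_kg = 125e6       # figure quoted in the article
us_population = 315e6        # assumed, roughly the 2012 US population

print(f"Extra cocoa per person per year: {extra_cocoa_kg / us_population:.2f} kg")
# About 0.4 kg each per year to 'produce' one extra laureate -- a regression
# slope, not evidence that eating chocolate causes Nobel Prizes.
```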
An interesting post on how media reporting of risk could actually make us less safe. (via)