Posts filed under Surveys (188)

March 12, 2012

Run along and play

The Herald (and others) are reporting the Milo State of Play survey, which says kids don’t play enough. In fact, the Herald says

The report outlines how a lack of play can deprive children of an activity crucial to healthy brain development.

Actually, it doesn’t. The report states that play is important to social development, and uses the phrase ‘prefrontal cortex’, but that’s all. Perfectly reasonable, since that’s not what the survey was about. The survey (once you find the actual report that backs up the YouTube animation) interviewed three samples, of children, parents, and grandparents, about activities and attitudes. These were sampled in a nicely representative way, though from an online survey panel, so participants may be more technologically aware and active than Kiwis as a whole.

According to the YouTube video, 96% of the participants agreed that active play was essential for development, and about half  (52%) of the parents, but less than half the children (40%)  or grandparents (31%), said children should have more playtime outside than they currently do.  That’s not really the impression that the story gives.

Some of the detailed findings are interesting:

More than half (56%) of parents think children enjoy playing games from their parents’ childhood. Where in fact, 96% of children state they do enjoy playing games from their parents’ youth.

and most children said they would like to spend more time playing with their parents.

The basic message of spending more time playing and having more unstructured time is one that has been promoted before, and seems sensible, but the report doesn’t really provide any more evidence for it than we had previously.

As a final note, the Herald story leads with

New Zealand children risk weight and brain development issues as a new study shows nearly half of Kiwi kids are not playing every day.

Is it just me who finds it ironic to get warnings like this from a survey sponsored by a multinational founded on selling chocolate and baby formula?

February 26, 2012

Half of electorates above average

Last week, The Australian reported on the politics of same-sex marriage in the West Island.  The story is a bit old (I picked it up from John Quiggin), but it’s such an impressive example that it’s still worth mentioning.

Roy Morgan Single Source survey data from the middle of last year shows that over a quarter of Australians aged 14 and over — 26.8 per cent — agreed with the blunt proposition “I believe homosexuality is immoral”.

That’s a fairly small minority, and although there is variation, it’s a minority everywhere in the country. But there is this:

In 80 of the 150 federal electorates, an above-average number of people support the proposition. 

So. In about half the electorates the proportion supporting the proposition is  above the national average (but still a minority), and in about half it is below the national average. That sure tells us a lot.

Thirty-three of these 80 are Labor seats. They take in a who’s who of the ALP.

Of the half that are above the national average, the proportion held by Labor is 41%, a little less than the 48% of all seats they hold.
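Whatever the geography, if electorate-level proportions are spread at all symmetrically around the national figure, close to half of them must land above it. A throwaway simulation with invented numbers makes the point:

```python
import random

random.seed(1)

# 150 hypothetical electorates with agreement rates scattered around 26.8%
electorates = [random.gauss(0.268, 0.05) for _ in range(150)]
national_average = sum(electorates) / len(electorates)

above = sum(p > national_average for p in electorates)
print(f"{above} of 150 electorates are above the national average")
```

The exact count wobbles from run to run, but it is always near 75, which is all the newspaper’s “80 of 150” observation amounts to.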

[Update: in related news, 49% of British households get less than the national average broadband speed]
February 16, 2012

It depends on what you ask

This table is from a US Census Bureau report  Who’s Minding the Kids? Child Care Arrangements: Spring 2005/Summer 2006.  The report in general is boring and informative, but Table 1 stands out: why are nearly four times as many pre-schoolers cared for by fathers vs mothers!?

It turns out that there is an explanation, but it shows how interpretation of statistical output depends on exactly what you ask.  The Survey of Income and Program Participation (SIPP) samples a ‘designated parent’ for each child, and if both mother and father are available they always choose the mother.  The child-care arrangement data are for child care by someone other than the designated parent, so ‘care by the mother’ means ‘care by the mother when the child lives with the father’.  As you would expect, some fathers don’t like this, and neither do some mothers, and it does seem a bit twentieth-century.

In partial defense of the Census Bureau, they have good reasons for being very reluctant to change their questions, because they are often interested in trends over time.  And Table 2 of the report does break down care arrangements by who the child lives with, and by employment information, and other factors.

February 3, 2012

HIV trends

Given this blog’s recent focus on things claiming unconvincingly to be surveys, you must be expecting a post on the statistic that 20% of HIV-positive gay men in Auckland don’t know they’re infected.

It’s obviously going to be hard to get an accurate estimate, since we don’t have a citywide list of gay men in Auckland. We don’t know what proportion of the population is gay; in fact, we wouldn’t even be able to get consensus on what the definition would be.

The approach used by the Otago researchers was to visit places like bars, and events like Big Gay Out.   This gives a reasonably well-defined sampling frame — the sample isn’t from all gay men in Auckland, but we do know who was targeted.   About 50% of the people they approached agreed to fill in a questionnaire, and 80% of those gave a saliva sample that was subsequently tested.   It’s not perfect, but it’s the best you are likely to be able to do in practice; a sharp contrast with the bogus polls on the farm sales, where simple random-digit dialling for a sample of ten people would have been better.

The final numbers supporting the conclusion are small:  68 men were HIV-positive; 15 of them were not diagnosed. The ‘1 in 15’ infection figure (68 positives among the 1068 testable samples) could be as low as 1 in 20 or as high as 1 in 12, and that’s before you start worrying about bias from non-responders being different. A comparison to other surveys of this kind in NZ and in other parts of the world is still sensible, and the research paper says the infection rate is lower than in most places, but the proportion who don’t know they are infected is higher.
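Putting rough uncertainty bounds on that infection rate takes one line. This is a simple normal-approximation sketch (my calculation, not necessarily the paper’s method), and it lands in much the same range:

```python
import math

positive, tested = 68, 1068      # HIV-positive men among testable samples
p = positive / tested            # about 6.4%, i.e. roughly 1 in 15

# Normal-approximation 95% confidence interval for the infection proportion
se = math.sqrt(p * (1 - p) / tested)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"1 in {1 / hi:.0f} to 1 in {1 / lo:.0f}")  # 1 in 13 to 1 in 20
```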

There’s a lot of research currently on ways to sample from populations that can’t be reached effectively by random-digit dialling, but where there are social links between members: jazz musicians, injecting drug users, homeless people.  The general approach is to get people to recruit each other, and the difficult part is to try to correct for the bias this causes, but it’s not clear that the current methods actually work.


Incidentally, the NZ Herald report contains the strange paragraph

The researchers compared respondents’ self-reported HIV test history with their saliva result to find 1.3 per cent of HIV positive men did not know they were infected.

which initially doesn’t seem to make any sense, and contradicts the headline.  Most of the problem is the usual inattention to denominators:  take 15/1068 to get the proportion of testable samples that were HIV positive and undiagnosed, and you get close to 1.4%.  That is, 1.4% of sampled men were HIV positive and didn’t know it. Confusing P(A and B) with P(A|B) is a bit unusual — usually the Herald confuses P(A|B) with P(B|A).

The 1.3% figure actually appears in the research paper, and it seems to be a problem of premature rounding:  round the proportion HIV positive to 6.5% and the proportion of those undiagnosed to 20%, and you get 6.5%×20%=1.3%.
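The two calculations, side by side:

```python
positive, undiagnosed, tested = 68, 15, 1068

# Direct calculation: undiagnosed infections as a share of all testable samples
direct = undiagnosed / tested
print(f"{direct:.1%}")                # 1.4%

# Premature rounding: round each factor first, then multiply
# (6.5% infected, 20% of those undiagnosed, as in the research paper)
prematurely_rounded = 0.065 * 0.20
print(f"{prematurely_rounded:.1%}")   # 1.3%
```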


January 18, 2012

Oooh. Pretty.

David Sparks has some nice maps of public opinion, using transparency to indicate the level of uncertainty.

Compare to my cruder county-level versions, based on plotting a sample of several thousand from the population.

January 13, 2012

Drug driving: dodgy numbers in a good cause?

More than a year ago, ESR scientists produced a report on drugs and alcohol found in blood samples taken after fatal crashes.  Now, the Drug Foundation is launching a publicity campaign using the data.  Their website says “Nearly half of drivers killed on New Zealand roads are impaired by alcohol, other drugs, or both.” But that’s not what the ESR report found. [Edited to add: the Drug Foundation is launching a campaign, but the TV campaign isn’t from them, it’s from NZTA]

The ESR report defined someone as impaired by alcohol if they had blood alcohol greater than 0.03%, and said they tested positive for other drugs if the other drugs were detectable.   If you look at the report in more detail, although 351/1046 drivers had detectable alcohol in their blood, only 191/1046 had more than 0.08%.  At 0.03% blood alcohol concentration there may well be some impairment of driving, and near 0.08% there’s quite a lot, but we can’t attribute all those crashes to alcohol impairment rather than inexperience, fatigue, bad luck, or stupidity.

At least the blood alcohol concentrations are directly relevant to impairment.  An assay for other drugs can be positive long after the actual effect wears off. For example, a single use of cannabis will show up in a blood test for 2-3 days, and regular use for up to a week.  In fact, the summary of the ESR report specifically warns “Furthermore, it is important to acknowledge that the presence of drugs and alcohol in the study samples does not necessarily infer significant impairment.”   Regular pot smokers who are scrupulously careful not to drive while high would still show up as affected by drugs in the ESR report.

The Drug Foundation itself makes this distinction when it talks about random roadside drug testing, pointing out the advantages of a test of actual impairment over a test of any presence of a drug.
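The arithmetic for the alcohol side of the headline, using the counts above, shows how much the choice of cutoff matters:

```python
# Counts from the ESR report on drivers killed in crashes
drivers = 1046
detectable_alcohol = 351   # any detectable alcohol in blood
over_limit = 191           # blood alcohol over 0.08%

print(f"detectable alcohol: {detectable_alcohol / drivers:.0%}")  # 34%
print(f"over 0.08%:         {over_limit / drivers:.0%}")          # 18%
```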

The Drug Foundation also did a survey of community attitudes to driving while on drugs (also more than a year ago), and it is interesting how many people think that stimulants and cannabis don’t impair their driving.  However, if you look at the survey, it turns out that it was an online poll, and “Respondents were recruited to the online survey via an advertising and awareness campaign that aimed to stimulate interest and participation in the study.” Not surprisingly, younger people were over-represented (“The mean age of respondents was 38.1 years”), as were people from Auckland and Wellington; Maori, Pasifika, and Asians were all under-represented.  36% of respondents had used cannabis in the past year, more than twice the proportion in the Kiwi population as a whole.  No attempt was made to standardise to the whole NZ population, which is the fundamental step in serious attempts at accurate online polling.  [If we could use the data as a teaching example, I’d be happy to do this for them and report whether it makes any difference to the conclusions]
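Standardising to the population is not mysterious: given population shares from the census, it is just a weighted average. A toy sketch with invented numbers (nothing here is from the Drug Foundation survey):

```python
# Invented numbers: a sample that over-represents the young
sample = {"young": (200, 0.50), "older": (100, 0.20)}  # (respondents, proportion agreeing)
population_share = {"young": 1 / 3, "older": 2 / 3}    # known from census data

n_total = sum(n for n, _ in sample.values())
raw = sum(n * p for n, p in sample.values()) / n_total
standardised = sum(population_share[g] * p for g, (_, p) in sample.items())

print(f"raw: {raw:.0%}, standardised: {standardised:.0%}")  # raw: 40%, standardised: 30%
```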

And while it’s just irritating that news websites don’t link to primary sources, it is much less excusable that the Drug Foundation page referencing the two studies doesn’t provide links so you can easily read them. The study reports are much more carefully written and open about the limitations of the research than any of the press releases or front-line website material. [The NZTA referencing is substantially less helpful]

For all I know, the conclusions may be broadly correct. I wouldn’t be at all surprised if many drug users do believe silly things about their level of impairment. Before  the decades of advertising and enforcement, a lot of people believed silly things about the safety of drunk driving.  And the new TV ads are clever, even if they aren’t as good as the ‘ghost chips’ ad.  But the numbers used to advertise the campaign don’t mean what the people providing the money say they mean.  That’s not ok when it’s politicians or multinational companies, and it’s still not ok when the campaigners have good intentions. [Edited to add: I think this last sentence still stands, but should be directed at least equally at the NZTA].


[Update: Media links: TVNZ,  3 News, Stuff, NZ Herald, Radio NZ]

January 12, 2012

Who you gonna call?

Keith Humphreys, an addiction researcher at Stanford, writes that the newest Behavioral Risk Factor Surveillance System (BRFSS) survey by the US CDC shows a substantially higher rate of binge drinking than past surveys did.  BRFSS is the world’s largest telephone survey, and in 2009 they started calling cellphones for the first time.

Cellphone users, and especially those who don’t have any landline phone, are a lot younger on average than the rest of the population.  That in itself need not be disastrous for surveys, since we know what proportion of the population is in each age group, and can rescale the numbers to remove the bias.  The problem is that cellphone users also are different in other ways that are harder to measure, as the CDC’s experience shows.
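A made-up illustration of why reweighting isn’t enough: if cell-only people differ from landline people within each age group, a landline-only survey remains biased even after perfect age standardisation. All numbers below are invented.

```python
# Invented numbers throughout: binge-drinking rates by age group and phone status
pop_share = {"18-29": 0.25, "30-49": 0.40, "50+": 0.35}  # population age mix
cell_only = {"18-29": 0.50, "30-49": 0.30, "50+": 0.10}  # cell-only share within age group
binge = {  # (age group, cell-only?) -> binge-drinking rate
    ("18-29", True): 0.35, ("18-29", False): 0.25,
    ("30-49", True): 0.20, ("30-49", False): 0.12,
    ("50+", True): 0.08, ("50+", False): 0.05,
}

truth = sum(pop_share[a] * (cell_only[a] * binge[(a, True)]
                            + (1 - cell_only[a]) * binge[(a, False)])
            for a in pop_share)
# A landline-only survey sees only landline people; weighting it to the true
# age mix still leaves the within-age difference uncorrected.
landline_estimate = sum(pop_share[a] * binge[(a, False)] for a in pop_share)

print(f"true rate: {truth:.1%}, age-weighted landline estimate: {landline_estimate:.1%}")
```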

January 3, 2012

Overgeneralising again.

The NZ Herald online has had two stories in two days on a survey by Southern Cross Health Society.  The survey reported cancer as the number one health fear of Kiwis.  That would be the minority of Kiwis with private health insurance. Or, though the actual survey population isn’t stated anywhere, probably the smaller minority of Kiwis who are members of Southern Cross.

I don’t know whether the cancer fear finding generalises to the whole population, and neither do they, but I’m certain the screening results they report do not.  They say more than 80% of men aged 55-64, and 93% of men over 65, had a prostate cancer screening test within the past year.  Figures from a national survey published in 2010 by the University of Otago show that about 44% of men 60+ and 17% of men 40-60 had a PSA test in the previous year (which includes diagnostic and post-treatment as well as screening tests).  Only 64% of men over 60 had ever had a PSA test of any sort, and only 32% had ever had a screening PSA test as part of primary care.   It’s hardly surprising that people with private health insurance get more screening, though it’s still not completely clear to anyone except perhaps Paul Holmes whether the extra PSA screening is doing them any good.

The statistical message is simple: surveys only measure what they measure, not what you would like them to have measured. Get over it.

December 22, 2011

Bimodal distributions really exist

starting salaries for US lawyers


The NALP has released data on starting salaries for US lawyers in 2011, and the distribution is really weird.

Usually we expect salary distributions to be skewed, with a long upper tail, but in this case there are two modes: a large group earning around $45k and a smaller group earning about $160k. The mean income is about $80k, the median is about $60k, and neither is a good summary of what someone is likely to make.
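A stylised two-cluster version (made-up counts, not NALP’s data) shows how badly the usual summaries behave here:

```python
import statistics

# Stylised two-cluster salary distribution: 70 graduates near $45k, 30 near $160k
salaries = [45_000] * 70 + [160_000] * 30

print(f"mean:   ${statistics.mean(salaries):,.0f}")    # $79,500
print(f"median: ${statistics.median(salaries):,.0f}")  # $45,000
```

The mean lands near $80k even though almost no one earns anything like that; in the real data there is enough spread between the clusters to drag the median up to about $60k, which is no more representative.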

The distribution didn’t always look like this. Twenty years ago, starting salaries for lawyers had a more familiar skewed distribution, with a single mode around $30k.


Over the twenty-year period, the income at the lower mode has risen by about 50%, but US median household income has roughly doubled, and the CPI has increased by about 65%.  Some law graduates are raking it in; most are not, and they nearly all have to pay off huge sums in student loans.
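Deflating by the CPI makes the point concrete: a 50% nominal rise against 65% price growth is a cut in real terms.

```python
# The post's figures: lower mode up 50% nominally, CPI up about 65%
old_mode, new_mode = 30_000, 45_000
cpi_growth = 1.65

nominal_change = new_mode / old_mode - 1
real_change = new_mode / (old_mode * cpi_growth) - 1
print(f"nominal: {nominal_change:+.0%}, real: {real_change:+.0%}")  # nominal: +50%, real: -9%
```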

In reality the figures are probably worse than this for the majority: there’s a lot of missing data.  As Paul Campos puts it, “People without salaries are reluctant to report their salaries.”

November 30, 2011

Half of what?

The sesquipedalian accounting company PwC has a new business fraud report, claiming that half of all NZ businesses have been victims. This is from a survey with 93 Kiwi respondents, including some businesses with fewer than 200 employees.

The obvious problem is that large businesses have many more employees and are much more likely to have at least one case of fraud.  Small businesses, of which there are many, are vastly under-represented.  A more dramatic example from a few months back was the claim by the US National Retail Federation that 10% of companies it polled had been victims of a ‘flash mob’ attack.   That’s not 10% of stores, that’s 10% of a sample of 106 companies including BP, Sears, and North Face.

The claim that fraud is on the rise could still be supported by the data, as long as the same methodology was used now as in the past, but the reported change from 42% to 49.5% would be well within the margin of error if the 2009 survey was the same size as the new one.
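A back-of-envelope check, assuming (as we must, since the report doesn’t say) that the 2009 sample was also around 93 firms:

```python
import math

# Two independent polls of 93 firms each (assuming 2009's was the same size)
n1 = n2 = 93
p1, p2 = 0.42, 0.495

# Standard error of the difference between the two proportions
se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p2 - p1) / se_diff
print(f"difference = {p2 - p1:.1%}, about {z:.1f} standard errors")
```

A change of about one standard error is exactly the kind of movement you’d expect by chance alone.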

PwC’s Alex Tan explains the rise as “We’re a relatively low-wage economy but we like high-wage living.”  This certainly isn’t a result of the poll — they didn’t poll the perpetrators to ask why they did it — and it sounds rather like the classic fallacy of explaining a variable by a constant.   New Zealand is a relatively low wage country, but we were a relatively low wage country in 2009 as well, and even before that.  Baches are expensive now, but they were expensive in 2009, and even before that.   If low wages and expensive tastes are overcoming Kiwis’ moral fibre this year, how did they resist two years ago?