Posts filed under Surveys (188)

December 13, 2012

Sure beats paying for your ads

Back in August, we commented on a bogus survey run by a sex toy store, which Stuff thought was news.

Now, three months later, the store has put out another press release (probably NSFW) from the same survey, and this time we have two different stories in Stuff, and even the Herald thinks it’s news. At least the Otago Daily Times didn’t fall for it.

Again we have figures quoted to a tenth of a percentage point from small subgroups of 1500 respondents (gays, people from Gisborne, 55-59 year olds), which would be meaningless even if this was a real survey.
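For a sense of scale, here’s a rough sketch of the sampling margin of error you would face even if this were a genuinely random sample (the subgroup sizes are my guesses, since the press release doesn’t give them):

    import math

    def margin_of_error(p, n, z=1.96):
        """Approximate 95% margin of error for a proportion p estimated from n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical subgroup sizes within a 1500-person sample; p = 0.5 is the worst case
    for label, n in [("full sample", 1500),
                     ("Gisborne respondents (say 50)", 50),
                     ("55-59 year olds (say 100)", 100)]:
        print(f"{label}: +/- {100 * margin_of_error(0.5, n):.1f} percentage points")

    # full sample: +/- 2.5 percentage points
    # Gisborne respondents (say 50): +/- 13.9 percentage points
    # 55-59 year olds (say 100): +/- 9.8 percentage points

Quoting subgroup figures to a tenth of a percentage point is spurious precision even before you ask how the sample was recruited.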

Sigh.

December 10, 2012

Won’t somebody think of the children

The Herald warns us

More than four in ten UK parents say that their children have been exposed to internet porn, an official survey reveals.

In the fifth paragraph we find

The Daily Mail is campaigning for an automatic block on online porn to protect children

which should make any reader sceptical about the numbers.  This is a scare story from a foreign source notorious for its creative use of numbers and its obsession with sex, and a story where they actually admit they are lobbying. It’s bad enough getting science stories from the Daily Mail; this sort of thing really suggests a news shortage.

We don’t get told what actual questions were asked, how the sample was gathered, or any of the other basic survey details.  More importantly, we don’t even get told anything about the age range of the children, which makes a big difference in this case.  We do learn

Almost a third say their sons or daughters have received sexually explicit emails or texts and a quarter say they have been bullied online or on their phones.

Neither of these issues would be affected by the proposed internet filtering, and both are very different from internet porn in that they are almost exclusively between kids who know each other in real life.

Perhaps the journalists are too young to know that porn existed before the Internet, and teenagers were occasionally exposed to it. You can get a more useful perspective from danah boyd and from a report from Harvard’s cyberlaw clinic that contains actual research.

November 27, 2012

Stats Chat at national journalism teachers’ conference

Polls and surveys making various claims land on journalists’ desks every day – but not all of them are newsworthy (and the bogus, PR-driven ones that make it into print are subsequently shredded in this very forum).

The statschat.org.nz team is always keen to help people understand the difference between a reliable poll and something that should be filed in the bin. So we’re delighted that two of the team, Andrew Balemi (a polls and surveys expert from the Department of Statistics at The University of Auckland) and adviser Julie Middleton, have been given an hour at this Wednesday’s Journalists Training Organisation/Journalism Education Association conference to talk about polls and surveys.

They’re not going to teach anyone to crunch numbers. What’s far more important is knowing the right questions to ask about a poll or survey to determine whether it should be taken seriously.

This is the hand-out we are providing – we have only an hour, so the list of questions isn’t complete, but it gives you an idea of how we encourage journalists to think.

Questions a reporter should ask of a poll or survey

Why is the poll/survey being done?
What do the pollsters want to find out?

Who is doing the survey?
Who paid for the survey?
Who carried out the work?
Is the person/company that carried out the survey a member of the Market Research Society of New Zealand? (ie, is it subject to a code of ethics?)

What we’re looking for: Evidence of lobbying, misinformation, public-relations or marketing spin … or just a company hoping to get editorial when it should buy an ad.

How representative is the sample?
How were those to be polled/surveyed chosen?
From what area (nation, state, or region) or group (teachers, National voters etc) were these people chosen?
How were the interviews conducted? (Internet survey, face-to-face, by phone…)

What we’re looking for: Evidence that the poll/survey is based on a genuinely representative random sample and has been conducted according to sound statistical principles. Be wary if you can’t get the original research questionnaire, raw data and a full explanation of methods.

If possible, ask about the broader picture
Does this study reflect the findings of other polls/surveys on this topic?

What we’re looking for: Evidence of similar findings elsewhere. 

Is this poll/survey worth reporting?
If you get positive responses to the above, yes. If not, this question becomes a philosophical one: Do you ignore accuracy for a sexy subject? Or run a story based on a bogus survey with a long list of caveats?

Don’t be afraid to ask professional statisticians for advice and help. They will generally be flattered – and pleased that you are taking such care.

November 25, 2012

Clarity begins at home

Stuff’s story on the World Giving Index would have been better just using it as a hook for a discussion of charities in NZ, but they couldn’t stop themselves from referring to details. Which turn out to be wrong.

For example

The World Giving Index, which compares countries’ charitable behaviour in giving money, time and helping a stranger, found New Zealand was slightly less giving than in 2010 when New Zealand and Australia were found to be the most generous, with 57 per cent of people doing some charitable work.

It’s not 57% of people (unless you think those behaviours are independent).  The index value of 57 is the average of the proportions for the three things that the survey actually measured.  The value of 57 is exactly the same as in the previous survey.  NZ went down in the ranking because other countries increased their Giving Index value.

Fundraising Institute of New Zealand chief executive James Austin is quoted as saying

“When you start boiling it down, even though they are very sophisticated in the way they put their statistics together, everything from exchange rates has an impact on it,”

Either he wasn’t asked a question about the World Giving Index, or he doesn’t understand it either. This isn’t an inventory of actual monetary sums given. It’s just an average of three percentages: % who volunteered time, % who gave money, % who helped a stranger. Exchange rates really don’t come into it.
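To make the arithmetic concrete, here’s a minimal sketch with invented component percentages (the report publishes the real three figures for each country):

    # Invented component percentages for one country
    gave_money = 68        # % who gave money to charity in the past month
    volunteered_time = 41  # % who volunteered time
    helped_stranger = 62   # % who helped a stranger

    # The World Giving Index is just the unweighted mean of the three
    giving_index = (gave_money + volunteered_time + helped_stranger) / 3
    print(giving_index)  # 57.0

The three components are the survey measurements themselves; the index adds nothing but the averaging, and no dollar amount appears anywhere for an exchange rate to act on.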

The story also doesn’t mention the cautionary note in the Australasia section of the World Giving Index report

 It is important to note that these surveys were conducted before devastating floods crippled Queensland, Australia in January 2011, and the tragic earthquake that struck New Zealand in February 2011, so any change in giving behaviour after these disasters is not captured in this year’s analysis.

That’s useful context for the poll-based claim in the story

A Sunday Star-Times poll of 763 readers reflected the World Giving Index, with 53 per cent of respondents having changed the way they donate in the past 12 months. More than half of those who had changed their habits admitted to donating less money.

So, this poll actually had very different findings from the World Giving Index survey, possibly because it was asking completely different questions, but possibly because it was about a non-overlapping period of time.

And we haven’t even got to margins of error or sampling bias.

November 21, 2012

Not so much poor and huddled masses

Nice presentation of interesting results on US opinions of immigration.  Participants were given two hypothetical immigrants with characteristics chosen from these options, and asked which one they would prefer to admit, and regression models were then used to estimate the impact of each characteristic.   Country of origin had a surprisingly small impact; otherwise it was pretty much what you might expect.  The story has more details, including a comparison by political affiliation, which reveals almost no disagreement.
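For anyone wondering what the regression step looks like in this kind of paired-choice experiment, here’s a toy sketch of the general idea (invented attributes and effect sizes, not the study’s actual data or model):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_pairs = 5000

    # Difference in each characteristic between immigrant A and immigrant B:
    # +1 if only A has it, -1 if only B has it, 0 if they match.
    # Columns are invented attributes, e.g. fluent English, high-skill job, degree.
    X = rng.integers(-1, 2, size=(n_pairs, 3)).astype(float)

    # Invented "true" effects, used only to simulate responses
    true_beta = np.array([0.8, 1.0, 0.6])
    p_choose_A = 1 / (1 + np.exp(-(X @ true_beta)))
    y = rng.binomial(1, p_choose_A)  # 1 = respondent preferred A

    # Logistic regression recovers the impact of each characteristic
    fit = sm.Logit(y, X).fit(disp=0)
    print(fit.params)  # estimates should come out close to [0.8, 1.0, 0.6]

The study’s own models are more elaborate, but this is the basic logic: each characteristic’s coefficient measures how much having it shifts the odds of being the preferred applicant.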

While on the topic, you should read Eric Crampton’s proposal that anyone completing a (sufficiently real) degree in NZ should be eligible for permanent residence: not only boosting our education export industry, but attracting young, ambitious, educated immigrants. I think it’s a good idea, but I’m obviously biased.

November 20, 2012

The data speaks for itself

Stuff, September 19

The myth of feckless youth which does not understand the value of money should be laid to rest, new research from online deposit bank RaboDirect suggests.

Stuff, today

The overwhelming majority of young Kiwis do not believe in long-term financial planning and most give no thought to budgeting.

In both cases the actual stories are better than the leads would suggest.

November 19, 2012

The importance of friends

A Herald story today tells us, four times, “22 per cent of women feel they have stronger relationship with girlfriends than their partners.” According to a 2009 survey (PDF), 18% of adult Kiwis were not married, living together, or even dating. That could potentially explain a lot of the 22%: if the un-partnered women counted themselves in, they alone could account for up to 18 of those 22 percentage points.

At least that one was for a charitable cause. The other survey story was for an online dating service that relies on recommendations from (Facebook) friends. Unsurprisingly, the survey they commissioned could be read as supporting this strategy:

43 per cent of people trusted their friends and families when it comes to dating advice, tips and recommendations.

and

The survey found singles were less inclined to go to a professional matchmaker for help, with only 1 per cent saying they would trust them

November 18, 2012

Why was this survey done?

Some surveys are done to find out information.  Others, perhaps not so much.

Today’s example is from the Herald, where the most informative sentence in the story is at the end:

The survey of 500 men and women in New Zealand and 1000 in Australia was commissioned by Bendon in conjunction with the release of a range of cleavage-enhancing bras.

November 11, 2012

Numbers don’t have to make sense?

Stuff is reporting that Vodafone (which sells communications services, including specific teleworking products) has done a survey about working from home in NZ.

As usual, we don’t know how the survey participants were recruited, just that they were “476 people of all ages, regions and industries.”  If they really were from all industries, it’s not surprising that many of them didn’t work from home: it’s pretty difficult for a barista, or a hospital nurse, or a factory worker, or a forester.  Anyway, in this case the big problem is that we don’t have the questions, so it’s hard to work out what’s actually going on in the apparently contradictory responses.  For example

  • 61 per cent reported they couldn’t work away from the office because their manager liked to be able to “brief off work” immediately
  • 37 per cent of people felt their managers were not supportive of them working away from the office.

So, by subtraction, at least 24% of people couldn’t work away from the office because of their manager’s working style but still felt that their managers were supportive of them working away from the office?  As the story points out, we also don’t know whether the managers really were opposed to people working from home, or whether the employees just thought they were.
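The subtraction is just the usual bound on overlapping groups: if 61% are in group A (“blocked by the manager’s working style”) and only 37% are in group B (“felt their manager was unsupportive”), then at least 61 - 37 = 24% must be in A but not in B. As a sketch:

    def min_in_a_but_not_b(pct_a, pct_b):
        """Lower bound on the percentage in group A but not in group B."""
        return max(0, pct_a - pct_b)

    blocked_by_manager = 61  # couldn't work away from the office
    felt_unsupported = 37    # felt their manager was not supportive

    print(min_in_a_but_not_b(blocked_by_manager, felt_unsupported))  # 24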

Comparing this survey with last week’s Randstad survey reported in the Herald: the Randstad survey found “67.89 per cent said the option of working from home was either appealing or very appealing” (no, I don’t know why the Herald quoted the results to the nearest tenth of a respondent), but the Vodafone poll found “77 per cent of people saying that if they were changing jobs, the ability to work from anywhere would make the job more attractive.” Clearly, sampling error is not the biggest problem in interpreting these surveys.
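For the record, sampling error alone can’t bridge that gap. Here’s a sketch using the Vodafone sample of 476 (the Randstad sample size isn’t reported, so I’ve assumed the same):

    import math

    def se_diff(p1, n1, p2, n2):
        """Standard error of the difference between two independent proportions."""
        return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

    p_vodafone, n_vodafone = 0.77, 476
    p_randstad, n_randstad = 0.6789, 476  # sample size assumed, not reported

    diff = p_vodafone - p_randstad
    moe = 1.96 * se_diff(p_vodafone, n_vodafone, p_randstad, n_randstad)
    print(f"difference: {100 * diff:.1f} +/- {100 * moe:.1f} percentage points")
    # difference: 9.1 +/- 5.6 percentage points

The nine-point gap is bigger than the sampling noise, so the explanation lies elsewhere: different questions, different recruitment, different populations.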

The last line of the Stuff story is “Meanwhile, 42 per cent of people, who worked away from the office regularly, said they often did overtime.”  As written, this says 42% worked away from the office regularly, which seems to contradict the earlier results and the general tone of the story.  Here I don’t think it’s the survey’s fault. It seems more likely that an editing error added the commas and changed the implied denominator. Grammar can make a difference in statistics, too.

October 10, 2012

Ignorance surveys

In the previous post I was sceptical about the importance of young Kiwis not being on first-name terms with their zucchini.  A post by Mark Liberman on Language Log suggests that ignorance surveys are even worse than I realized.

There’s the well-known problem of misreporting:

A new survey conducted by Chicago’s McCormick Tribune Freedom Museum, which has yet to open, finds that only 28 percent of Americans are able to name one of the constitutional freedoms, yet 52 percent are able to name at least two Simpsons family members.

when in fact the figure in that survey was 73%, not 28%. What’s new is how bad the coding of responses can be, even in very respectable surveys.

The way it works is that the survey designers craft a question like this one (asked at a time when William Rehnquist was the Chief Justice of the United States):

“Now we have a set of questions concerning various public figures. We want to see how much information about them gets out to the public from television, newspapers and the like….
What about William Rehnquist – What job or political office does he NOW hold?”

Answers scored as incorrect included:

Supreme Court justice. The main one.
He’s the senior judge on the Supreme Court.
He is the Supreme Court justice in charge.
He’s the head of the Supreme Court.
He’s top man in the Supreme Court.

Mark Liberman concludes

  • When you read or hear in the mass media that “Only X% of Americans know Y”, don’t believe it without checking the references — it’s probably false even as a report of the survey statistics.
  • When you read survey results claiming that “Only X% of Americans know Y”, don’t believe the claims unless the survey publishes (a) the exact questions asked; (b) the specific coding instructions used to score the answers; (c) a measure of inter-annotator agreement in blind tests; and (d) the raw response transcripts.

That might be going a bit far, but at least (a) and (b) are really important. If you call the elongated green vegetable a ‘courgette’, is that scored as right or wrong? What if you are from the US and call it ‘summer squash’, or from South Africa and (according to Wikipedia) call it ‘baby marrow’?
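Criterion (c) may sound fussy, but it’s a standard check in content analysis: have two people code the same answers blind, then measure their agreement beyond what chance would produce. A minimal sketch using Cohen’s kappa, with invented codings:

    from collections import Counter

    def cohens_kappa(coder1, coder2):
        """Cohen's kappa: agreement between two coders, corrected for chance."""
        n = len(coder1)
        observed = sum(a == b for a, b in zip(coder1, coder2)) / n
        freq1, freq2 = Counter(coder1), Counter(coder2)
        expected = sum(freq1[k] * freq2[k] for k in freq1) / n ** 2
        return (observed - expected) / (1 - expected)

    # Two coders scoring ten Rehnquist answers as Correct or Incorrect
    coder1 = ["C", "C", "I", "C", "I", "C", "C", "I", "C", "C"]
    coder2 = ["C", "I", "I", "C", "I", "C", "C", "C", "C", "C"]
    print(round(cohens_kappa(coder1, coder2), 2))  # 0.47

If agreement between trained coders is only modest, headlines about what percentage of the public “got it wrong” are partly measuring the coders.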