Posts filed under Polls (132)

February 12, 2013

Bogus poll ballot-box stuffing

From Ubermotive.com, an Australian blog

Last week, News Corporation ran a story about the shock results of an online poll that indicated over 90% of the 16,000 respondents wanted to see a raise in speed limits, despite the poll being paired with an article asking motorists to slow down in consideration for elderly drivers.

Well, I have some news for you: Over 15,000 of those votes were mine.

Also see the ABC Media Watch story.

Everyone seems to think News Corporation should be worried, or at least embarrassed, but if you pay any attention to bogus polls you’ve already given up on the idea that public opinion is real and important and measurable.

In related news, the Association of Market Research Organisations has just released another draft of their Guidelines on Political Polling.  Their website currently just has the December draft, but I assume the new draft will be there soon.  The guidelines look sensible to me, with my only reservation being the attempt to maintain a “poll” vs “survey” distinction, where “poll” means a real poll and “survey” means a possibly bogus poll.

January 26, 2013

Bogus poll headlines return

It’s been a long time since we last had a headline based on a bogus poll, but this week the Herald gives us a link on the Life & Style page

Poll: Cafe made right call
A poll shows almost 90 per cent of people think the café did the right thing, asking the mother to remove her crying baby…

and the headline for the story “Crying baby debate: Cafe made right call”.

The first question to ask about “almost 90 per cent of people” is “which people?”, and the second is “how were they sampled?”

The poll (which comes with a bogosity disclaimer) is in the Bay of Plenty Times

[Image: Bay of Plenty Times clicky poll]

In the Herald, the story is accompanied by another bogus poll, one that gives a quite different impression (and which doesn’t come with a disclaimer)

[Image: NZ Herald bogus poll]

The difference will partly reflect the different audiences, partly the different phrasing of the question, and partly the fact that bogus polls are completely useless as data-gathering methods.

The lack of agreement is still pretty dramatic.


January 9, 2013

Ask Nate Silver anything

Nate Silver is doing a Q&A session at reddit.com (part of their running feature “I Am A …, Ask Me Anything”).


December 10, 2012

[Video] Nate Silver talks about his new book: The Signal and the Noise

Nate Silver joins Google’s Chief Economist Hal Varian to talk about his new book “The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t” and answer Googler questions.

December 4, 2012

Making Nate Silver cry

Stuff tells us

Polls have Labour closing in on Nats

A One News-Colmar Brunton poll released last night and taken a week after Labour’s leadership spat, saw the party’s vote lift 3 percentage points to 35 per cent, with the Greens up one on 13 per cent.

and later on

The One News poll of 1000 had a margin of error of 3.1 per cent.

If you have been paying attention, you know that (a) the margin of error for a change between two polls is about 1.4 times larger than the margin of error for a single proportion, and (b) much more importantly, the useful thing to do with election polls is not to report each one separately, but to take some sort of average.
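As a rough sketch of that arithmetic (assuming simple random sampling, the worst case p = 0.5, a 95% confidence level, and independent polls):

```python
from math import sqrt

def moe(n, p=0.5, z=1.96):
    """Maximum 95% margin of error for a single polled proportion."""
    return z * sqrt(p * (1 - p) / n)

single = moe(1000)         # ~0.031, the quoted "3.1 per cent"
change = sqrt(2) * single  # ~0.044: variances of two independent polls add,
                           # so the MOE for a change is about 1.4x larger

print(f"single poll: ±{single:.1%}; change between polls: ±{change:.1%}")
```

On that basis, Labour’s 3-point lift is within the range that sampling variation alone could produce.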

Did they have another poll? Well, the story goes on to say

 Meanwhile, a 3News-Reid Research poll, also released yesterday, showed Labour on the rise at 34.6 per cent, up 1.6 percentage points. The Greens improved 1.3 to 12.9 per cent, while National was down 1.8 at 47 per cent.

It doesn’t make as good a lead, because Labour was only up 1.6%, not 3% in this poll.  But if there are two polls, perhaps there are even more out there.

The poll of polls at pundit.co.nz shows a steady trend towards higher support for Labour, and a trend towards lower support for the Greens (with a bit more variation around the trend).  Their last update (including these two polls) put Labour up by 1% and Greens up by 0.2%.  But that would be less newsworthy.
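For a flavour of what a poll of polls does, here is a minimal sketch of inverse-variance (precision) weighting; the Reid Research sample size of 1000 is an assumption, since the story doesn’t report it:

```python
def pooled(polls):
    """Inverse-variance weighted average of poll proportions.
    polls: list of (proportion, sample_size) pairs."""
    weights = [n / (p * (1 - p)) for p, n in polls]
    return sum(w * p for w, (p, _n) in zip(weights, polls)) / sum(weights)

# Labour's share in the two polls; the Reid Research n = 1000 is hypothetical.
labour = [(0.35, 1000), (0.346, 1000)]
print(f"pooled Labour estimate: {pooled(labour):.1%}")  # about 34.8%
```

Real trackers such as pundit.co.nz’s go further, smoothing over time and adjusting for each pollster’s house effects, but the principle is the same: averaging dilutes any one poll’s wobble.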

November 27, 2012

Stats Chat at national journalism teachers’ conference

Polls and surveys making various claims land on journalists’ desks every day – but not all of them are newsworthy (and the bogus, PR-driven ones that make it into print are subsequently shredded in this very forum).

The statschat.org.nz team is always keen to help people understand the difference between a reliable poll and something that should be filed in the bin. So we’re delighted that two of our members, Andrew Balemi, a polls and surveys expert from the Department of Statistics at The University of Auckland, and adviser Julie Middleton, have been given an hour at this Wednesday’s Journalists Training Organisation/Journalism Education Association conference to talk about polls and surveys.

They’re not going to teach anyone to crunch numbers. What’s far more important is knowing the right questions to ask about a poll or survey to determine whether it should be taken seriously.

This is the hand-out we are providing – we have only an hour, so the list of questions isn’t complete, but it gives you an idea of how we encourage journalists to think.

Questions a reporter should ask of a poll or survey

Why is the poll/survey being done?
What do the pollsters want to find out?

Who is doing the survey?
Who paid for the survey?
Who carried out the work?
Is the person/company that carried out the survey a member of the Market Research Society of New Zealand? (ie, is it subject to a code of ethics?)

What we’re looking for: Evidence of lobbying, misinformation, public-relations or marketing spin … or just a company hoping to get editorial when it should buy an ad.

How representative is the sample?
How were those to be polled/surveyed chosen?
From what area (nation, state, or region) or group (teachers, National voters etc) were these people chosen?
How were the interviews conducted? (Internet survey, face-to-face, by phone)…

What we’re looking for: Evidence that the poll/survey is based on a genuinely representative random sample and has been conducted according to sound statistical principles. Be wary if you can’t get the original research questionnaire, raw data and a full explanation of methods. (See the sketch after this list.)

If possible, ask about the broader picture
Does this study reflect the findings of other polls/surveys on this topic?

What we’re looking for: Evidence of similar findings elsewhere. 

Is this poll/survey worth reporting?
If you get positive responses to the above, yes. If not, this question becomes a philosophical one: Do you ignore accuracy for a sexy subject? Or run a story based on a bogus survey with a long list of caveats?

Don’t be afraid to ask professional statisticians for advice and help. They will generally be flattered – and pleased that you are taking such care.
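To see how badly self-selection can distort results (the point of the “representative sample” questions above), here is a toy simulation; every number in it is an assumption chosen for illustration, not an estimate from any real poll:

```python
import random

random.seed(1)
TRUE_SUPPORT = 0.30   # assumed true level of support in the population
POPULATION = 100_000

def bogus_poll():
    """Self-selected 'clicky' poll: supporters are assumed to be ten times
    as likely to bother responding as everyone else."""
    votes = []
    for _ in range(POPULATION):
        supporter = random.random() < TRUE_SUPPORT
        respond_prob = 0.10 if supporter else 0.01
        if random.random() < respond_prob:
            votes.append(supporter)
    return sum(votes) / len(votes)

def random_sample_poll(n=1000):
    """A genuinely random sample of n people."""
    return sum(random.random() < TRUE_SUPPORT for _ in range(n)) / n

print(f"true support:       {TRUE_SUPPORT:.0%}")
print(f"random sample says: {random_sample_poll():.0%}")  # close to 30%
print(f"bogus poll says:    {bogus_poll():.0%}")          # around 80%
```

A bigger sample doesn’t help: ten times as many self-selected clicks just estimate the wrong number more precisely.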

November 14, 2012

How good were US election predictions?

Neil Sinhababu has compiled a list of US election predictions by professional pundits, ranked on how well they predicted the overall Electoral College results, the ten marginal states, and the popular vote (as a tie-breaker).

Remember that people such as Sam Wang, Simon Jackman, and Nate Silver obtain their predictions deterministically from poll data and publish them in fine detail, so anyone with additional sources of information not represented in opinion polls should be able to do better on average: they can use the poll summaries as inputs to their own thinking. On the whole, though, the people with additional sources of information mostly did worse.  If we say that Florida was too close to call, we find ten predictions that were otherwise accurate: one was the Intrade betting market, seven were deterministic models, and only two were individuals.
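For concreteness, here is a hypothetical sketch of that kind of ranking: electoral-vote error first, then missed calls among the ten marginal states, with popular-vote error breaking ties. The scoring order follows Sinhababu’s description; the predictions below are invented, and only the 2012 outcome (332 electoral votes and about 51.1% of the popular vote for Obama) is real:

```python
ACTUAL = {"ev": 332, "swing_correct": 10, "pv": 51.1}  # 2012 outcome

# Invented predictions, for illustration only.
predictions = {
    "poll-based model":      {"ev": 332, "swing_correct": 10, "pv": 50.8},
    "well-connected pundit": {"ev": 301, "swing_correct": 8,  "pv": 50.0},
}

def score(p):
    # Rank lexicographically; smaller is better on every component.
    return (abs(p["ev"] - ACTUAL["ev"]),
            ACTUAL["swing_correct"] - p["swing_correct"],
            abs(p["pv"] - ACTUAL["pv"]))

for name in sorted(predictions, key=lambda k: score(predictions[k])):
    print(name, score(predictions[name]))
```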

In the future, as happened with baseball after Moneyball, journalists should start to use the statistical predictions more effectively.  They probably won’t beat the models by much, but they should be able to avoid doing much worse and sometimes do slightly better.


November 8, 2012

Journalism and data analysis

The occasion is Nate Silver and the data-based predictions of the US election, but Mark Coddington raises a much more general point about the difference between ways of knowing things in journalism and science.

The journalistic norm of objectivity is more than just a careful neutrality or attempt to appear unbiased; for journalists, it’s the grounds on which they claim the authority to describe reality to us. And the authority of objectivity is rooted in a particular process.

But science finds things out differently, so journalists and scientists have difficulty communicating with each other.  In political journalism, the journalist gets access to insider information from multiple sources, cross-checks it, evaluates it for reliability, and tells us things we didn’t know.  In data-based journalism there aren’t inside secrets. Anyone could put together these information sources, and quite a few people did.  It doesn’t take any of the skills and judgment that journalists learn; it takes different skills and different sorts of judgment.

TL;DR: Political journalists are skeptical of Nate Silver because they don’t understand and don’t trust the means by which he knows what he knows. And they don’t understand it because it’s completely different from how journalists have always known things, and how they’ve claimed authority to declare those things to the public.


November 7, 2012

Now that’s over with

XKCD: [comic]

October 19, 2012

Surveys and political identification (yet again)

I’ve written before about the problem of getting actual opinions rather than social or political identifications in surveys.  US late-night TV host Jimmy Kimmel had a great demonstration: he went out on the streets of LA on the afternoon before the second presidential debate and asked people who they thought won “last night’s debate”.  There was no shortage of people with confident opinions (presumably these are a selected subset of the most entertaining victims, but the point still holds).  (via)