March 15, 2013

Policing the pollsters … your input sought

This is from Kiwiblog:

A group of New Zealand’s leading political pollsters, in consultation with other interested parties, have developed draft NZ Political Polling Guidelines.

The purpose is to ensure that Association of Market Research Organisations and Market Research Society of New Zealand members conducting political polls, and media organisations publishing poll results, adhere to the highest “NZ appropriate” standards. The guidelines are draft, and comments, questions and recommendations back to the working group are welcome.

This code seeks to document best practice guidelines for the conducting and reporting of political polls in New Zealand. It is proposed that the guidelines, once approved and accepted, will be binding on companies that are members of AMRO and on researchers that are members of MRSNZ.


Atakohu Middleton is an Auckland journalist with a keen interest in the way the media uses/abuses data. She happens to be married to a statistician.

Comments

  • Megan Pledger

    Here is what I wrote over on Kiwiblog:

    Self-reported ethnicity should be a variable used in post-stratification weighting, and the design effect must not be greater than 7. If it is, then the poll should be released but not reported on. If you do not post-stratify by ethnicity, then the story must contain the proportions of Maori, Pacific and Asian people in the *pre* post-stratified sample and their proportions in the general population. The design effect should be in the report.
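
    A minimal sketch of what that combination could look like – single-variable post-stratification weights plus Kish's design-effect check – with made-up numbers and column names, not any pollster's actual scheme:

    ```python
    import pandas as pd

    # Hypothetical poll: one row per respondent, self-reported ethnicity.
    sample = pd.DataFrame(
        {"ethnicity": ["European", "Maori", "European", "Asian", "Pacific", "European"]}
    )

    # Assumed population shares (in practice, from Statistics New Zealand).
    population = {"European": 0.70, "Maori": 0.15, "Asian": 0.10, "Pacific": 0.05}

    # Post-stratification weight = population share / sample share for each group.
    sample_share = sample["ethnicity"].value_counts(normalize=True)
    sample["weight"] = [population[e] / sample_share[e] for e in sample["ethnicity"]]

    # Kish's approximate design effect from unequal weights:
    # deff = n * sum(w^2) / (sum w)^2, which equals 1 when all weights are equal.
    w = sample["weight"]
    deff = len(sample) * (w**2).sum() / w.sum() ** 2
    print(f"design effect = {deff:.2f}")
    ```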

    If there are multiple people in the household who are eligible to be quota sampled (e.g. two women of the same age live in the household), then they must be randomly selected, not self-selected. In the report, the quota sampling method must be explicitly stated.

    The “story” should state that it only counts homes with landlines and give an estimate of the size of the non-landline-owning population.

    The poll should contain at least 1000 people who are either decided or likely to vote. A smaller sample gives a margin of error above 3%, which is too imprecise when the quantities that decide control of the House are estimated at close to the maximum ME. In the month prior to an election, samples should have a 1% ME.
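
    As a back-of-envelope check on those thresholds (the 1.96 is the usual 95% confidence multiplier; this is the textbook simple-random-sample formula, not any pollster's in-house method):

    ```python
    import math

    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        """95% margin of error for a simple random sample; worst case is p = 0.5."""
        return z * math.sqrt(p * (1 - p) / n)

    print(f"{margin_of_error(1000):.1%}")  # about 3.1% for n = 1000
    # Sample size needed for a 1% ME: n = (z / ME)^2 * p * (1 - p)
    print(round((1.96 / 0.01) ** 2 * 0.25))  # about 9604 respondents
    ```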

    The story must say whether the undecideds were removed when calculating statistics and how many undecideds there were. The story must also say how many people refused to participate in the survey, plus how many refused any particular question reported on.

    The poll can only ask questions about voter preference and the demographics necessary for weighting. It cannot ask any other questions, i.e. no push-polling.

    Adjust the weighting for households with multiple landlines and for household size.
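
    A sketch of the standard selection-probability adjustment, assuming the interviewer records the number of landlines and eligible adults in each household (the function and its inputs are illustrative):

    ```python
    def selection_weight(n_landlines: int, n_eligible_adults: int) -> float:
        """Weight inversely proportional to a respondent's chance of selection.

        A household with more landlines is more likely to be dialled, and a
        person in a larger household is less likely to be the one interviewed.
        """
        prob_selection = n_landlines / n_eligible_adults
        return 1 / prob_selection

    print(selection_weight(1, 1))  # 1.0
    print(selection_weight(2, 1))  # 0.5: two lines, one adult, over-sampled
    print(selection_weight(1, 3))  # 3.0: one line, three adults, under-sampled
    ```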

    The polling company must
    a) give all phone numbers an equal chance of being selected, e.g. not give certain areas a higher chance of being polled unless that area is the only focus of the poll,
    b) state that i) the decision to release a poll should have been made prior to data collection and ii) every poll selected to be released should be released, i.e. no hiding “bad” polls,
    c) supply a surveying report on application that explains the survey in enough detail that it could be replicated, including the method of weighting,
    d) do call-backs on a specified schedule so that every phone number is treated the same, and report the schedule, and
    e) otherwise warrant that they have acted in good faith to produce an accurate and unbiased poll, and state any known biases in the report, and also in the story if they are large or potentially misleading.

    My understanding of quota sampling is potentially out of date, but I would suggest removing it from the term “scientific poll”.

    I’d change this
    “When weighting to correct for demographic non-response, the calculated sample weights should be based on known or estimated population characteristics (for example, from Statistics New Zealand or the Electoral Commission). Weighting should not be based on previous voting behaviour, which is subject to memory accuracy.”
    to
    “When weighting to correct for demographic non-response, the calculated sample weights should be based on known or estimated population characteristics (for example, from Statistics New Zealand or the Electoral Commission). Weighting should not be based on previous voting behaviour, which is subject to memory accuracy, unless the information was collected at the previous time.”

    I could probably keep going but that’s enough for now.

    12 years ago

    • Thomas Lumley

      Megan,

      You know more about this than I do, but is it really a good idea to require all opinion polls to ask an ethnicity question compatible with the Census question?

      I can certainly see requiring post-stratification on ethnicity if ethnicity is measured.

      12 years ago

  • Andrew

    Here’s an interesting wee point. I’ve found that, compared to the census, respondents in telephone surveys will indicate they identify with more ethnic groups on average.

    12 years ago

  • Andrew

    Something else about ethnicity – a lot of people assume that minority ethnic groups are underrepresented in phone polls due to non-coverage. However, I’ve found that the practice of selecting one person per household is a bigger factor. When results have been weighted to adjust for the probability of selection, it can be amazing how closely ethnic profiles then compare with the Census.

    In my view a blanket criterion to weight by ethnicity would be fairly poor practice. Depending on the degree to which specific groups are underrepresented, it could just mean a reduction in the effective sample size for very little benefit. Additionally, ethnicity is multi-response in NZ, which can introduce a whole lot of complexities to weighting (e.g. you can get some weird interactive effects with age, gender, and household-size weights).

    Anywho… I agree the ethnic profile should be an important consideration in political polls. How ethnicity is treated, though, is too complex to be boiled down to a blanket criterion.

    Polling companies have put significant resource into finding ways to deal with such issues. I’m not surprised they don’t want to publicly document every tiny aspect of their polls – that would just be giving all that time and hard work to their competitors.

    12 years ago

    • mpledger

      If you’re only polling 1000 people then trying to do anything with ethnicity is going to be a major problem, because you’re going to have to go to age*sex*ethnicity bins and the numbers are going to get too small, e.g. older Asians. And that introduces its own bias.

      I don’t see a problem with using prioritised ethnicity in place of multiple measures of ethnicity, which cuts down the complexity.
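
      For readers who haven’t met the term, a sketch of how prioritised ethnicity collapses a multi-response answer to one category (the priority order shown is my understanding of the Statistics New Zealand convention, so treat it as an assumption):

      ```python
      # Assumed priority order: Maori, then Pacific, then Asian, then European/Other.
      PRIORITY = ["Maori", "Pacific", "Asian", "European/Other"]

      def prioritised_ethnicity(responses: list[str]) -> str:
          """Collapse a multi-response ethnicity answer to a single category."""
          for group in PRIORITY:
              if group in responses:
                  return group
          return "Other"

      print(prioritised_ethnicity(["European/Other", "Maori"]))  # Maori
      ```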

      I agree that people will respond differently to answering ethnicity for a poll and for the census, but I think it would be worth doing at least once by a polling company so they can actually see the bias they have rather than be in total ignorance.

      Another Andrew said
      “Everyone’s trading bias for variance at some point, it’s just done at different places in the analyses”
      http://andrewgelman.com/

      So basically the first Andrew said he can live with the bias because he’d rather keep the standard error low (the standard error shrinks as the effective sample size grows). The interesting thing is that polls use simple random sample measures of variance, i.e. the margin of error, not the version that would be correct for the way the sample was actually done, so whether the effective sample size is high or low doesn’t really show in the reported figures as it stands.

      I mean, the real standard error for an estimate isn’t the margin of error for (most) polls, it’s sqrt(design effect) * ME. But the latter is never used.
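
      In numbers, with a purely illustrative design effect of 1.5:

      ```python
      import math

      n = 1000
      srs_me = 1.96 * math.sqrt(0.25 / n)     # the ME polls report, about 3.1%
      deff = 1.5                              # illustrative design effect
      adjusted_me = math.sqrt(deff) * srs_me  # what the design actually implies
      print(f"reported: {srs_me:.1%}, design-adjusted: {adjusted_me:.1%}")
      ```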

      I get why polling companies don’t want to say how they do things *but* the public needs to know that the polls are being done correctly, especially when there is reason to believe the polling company may have a vested interest in the result.

      The problem is that poorly done sampling throws up “interesting results” and “interesting results” sell newspapers, so there is an incentive to do poor sampling.

      12 years ago

  • Andrew

    I wasn’t saying I could live with the bias – I was saying weighting by ethnicity needs to be done carefully and is too complex for a blanket requirement.

    I think you’re making a few assumptions about the motivations of pollsters. The ones I’ve met want their results to be closest come Election Day, and none to my knowledge have a vested interest in the results.

    The public do need to know that the polls are done well, I agree. However, we’re not talking about academic journals here – what you’re suggesting gets included in every story and report is only useful to a very small minority, and will make reports less accessible/more confusing. Perhaps a technical report published once per year is more realistic, but again pollsters could end up throwing away their investments.

    Polling companies do sometimes calculate the MoE allowing for design effect, and should at least know what it is, but it seems to be an industry standard to report the simple random MoE. If the adjusted MoE were a requirement for all surveys (not just political polls), I’d support reporting that – although it would mean delays in delivering reports to clients. Also, to my knowledge only two of the public polls approximate a random probability sample – so the MoE only really applies to two of them.

    I’d also support publishing response rates if everyone used the same formula, including those doing quota surveys. However, I’ve seen response rates calculated in some very odd ways.

    I can honestly say that in the surveys I’ve run, we put huge emphasis on good sampling practices and quality fieldwork.

    12 years ago

  • mpledger

    The thing is I didn’t even know that only two polling companies used a survey that “approximates a random probability sample”.

    Who does what type of sampling, the very basics of surveying, is never reported. Yet, they all report margin of error (as if the poll were a simple random sample).

    It’s not even clear to me if they adjust for household size. I did think not, but the guidelines above sort of imply that they do.

    If they use quota samples, I don’t even know if they randomly select from people in the house or just take whoever answers the phone (if they are in scope).

    All this stuff creates biases.

    It’s not for the fun of making people jump through hoops that journals require certain standards for reporting a survey – it’s a guarantee that the survey is as correct as practically possible. You’d think giving people information so they can work out how to use their vote in deciding who governs the country deserves just as much correctness.

    “and none to my knowledge have a vested interest in the results”

    So no pollster belongs to a political party?

    ~~~~~~~~~~

    Actually, one further thing – all information must be untied from the phone number prior to analysis, and all data must be deleted (with zero chance of re-creation) after the analysis is complete, i.e. no creating databases of people’s information and political leanings.

    12 years ago

    • Megan Pledger

      Now, I’ll say why I wrote the above…

      I have just read “Dirty Politics” (finished 10 minutes ago). The most interesting thing for statos was page 102: Curia say they are doing polling, but some of the time they are actually canvassing people in order to keep the info and use it to their political advantage, e.g. sending advertising, getting people out to vote.

      It happened to me. Curia rang me up, asked for me by name and asked me to do a poll. It was a little while after I had hung up that it occurred to me that I had been asked for by name. Polls don’t need people to have names linked to opinions, they just need opinions. So I knew something flaky was going on but didn’t get the big picture until I read that part of the book.

      All I can say is that people are under no obligation to do polls, even ones that are 100% pure. (Sorry to those pollsters who are 100% pure.) And if you are ever asked for by name, refuse the poll.

      10 years ago