November 3, 2011

For whom the belle polls

TV3 on Tuesday reported that “early results in a poll suggest Labour’s Phil Goff won the debate with Prime Minister John Key last night.” The poll was by RadioLIVE and Horizon.

The TV piece concluded by lambasting a recent One News text poll, saying: “A One News text poll giving the debate to Mr Key 61-39 has been widely discredited, since it cost 75c to vote.”

This text poll should be lambasted if it is used to make inference about the opinions of the population of eligible willing voters. Self-selection is the major problem here: those who can be bothered have selected themselves.

The problem: there is no way of ascertaining that this sample of people is representative of willing voters. Only the interested and motivated who text have answered, and the pollsters clearly have no information about the not-so-interested, not-so-motivated non-texters.

The industry standard is to randomly select from all eligible willing voters and to adjust for non-response. The initial selection is random, and non-response is reduced as much as possible, to ensure the sample is as representative of the population as possible. The sample is drawn from a sampling frame which, hopefully, is a comprehensive list of the population you wish to talk to. For CATI (computer-assisted telephone interviewing) polls, the sampling frame is domestic landline (traditional) phone numbers.
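
To make the contrast concrete, here's a minimal simulation – with entirely made-up numbers, purely for illustration – of why a huge self-selected sample can still miss badly while a modest random sample from a decent frame does not:

```python
import random

random.seed(1)

# Hypothetical electorate: overall support for a candidate is about 50%,
# but the highly motivated minority (the people who bother to text in) lean 65%.
N = 100_000
population = []
for _ in range(N):
    motivated = random.random() < 0.2        # assume 20% are highly motivated
    p_support = 0.65 if motivated else 0.46
    population.append((motivated, random.random() < p_support))

# Self-selected "text poll": only the motivated respond, however many there are.
text_poll = [support for motivated, support in population if motivated]

# Industry-standard approach: a random sample of 1,000 from the whole frame.
random_sample = random.sample(population, 1000)

print("True support:       %.3f" % (sum(s for _, s in population) / N))
print("Text-poll estimate: %.3f (n = %d)" % (sum(text_poll) / len(text_poll), len(text_poll)))
print("Random sample:      %.3f (n = 1000)" % (sum(s for _, s in random_sample) / 1000))
```

The text poll's sample is roughly twenty times bigger, yet its estimate reflects only the people who chose to take part; the random sample sits close to the true figure.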

With election polls this has not been so much of a problem, because landline-less voters have tended to be in lower socio-economic groups, which also tend to have lower voter participation. The people we wish to talk to are eligible willing voters – so the polls have not been unduly biased by leaving out these landline-less people.

However, as people move away from landlines to mobile phones, CATI interviewing has been increasingly criticised. Hence alternatives have been developed, such as panel polls and prediction markets like IPredict – and the latter will be the subject of a post for another day.

But let’s go back to the Horizon panel poll mentioned above. It claims that it can be trusted because it samples from a large pool of potential panellists who have been recruited and can win prizes for participating. The Horizon poll adjusts for biases by reweighting the sample so that it looks more like the underlying New Zealand adult population – which is good practice in general.
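
For what it's worth, the mechanics of that reweighting are simple enough. Here's a minimal post-stratification sketch with made-up age-group shares (the real Horizon scheme will be more elaborate than this):

```python
# Hypothetical shares, purely for illustration.
population_share = {"18-29": 0.21, "30-49": 0.35, "50-64": 0.25, "65+": 0.19}
sample_share     = {"18-29": 0.10, "30-49": 0.30, "50-64": 0.32, "65+": 0.28}

# Each respondent in a group gets weight = population share / sample share,
# so under-represented groups count for more and over-represented ones for less.
weights = {group: population_share[group] / sample_share[group] for group in population_share}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
```

The catch is that weighting can only fix imbalances in the characteristics you measure – which is exactly the problem with a self-selected frame, as discussed next.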

However, the trouble is that this large sampling frame of potential panellists has been self-selected. So who do they represent?

To illustrate, it’s hard to imagine people from more affluent areas feeling the need to get rewards for being on a panel. Also, you enrol via the internet, which clearly skews towards the IT-savvy. Here the sampling frame is biased, with little or no known way to adjust for the biases brought about by this self-selection. The panellists may be weighted to look like the population, but they may be fundamentally different in their political outlook.

Panel polls are being increasingly used by market researchers and polling companies. With online panel polls it’s easier to obtain samples, collect information and transfer it, without all the bother involved in traditional polling techniques like CATI.

I believe the industry has been seduced by these features at the expense of representativeness – the bedrock of all inference. Until such time as we can ensure representativeness, I remain sceptical about any claims from panel polls.

I believe the much-maligned telephone (CATI) interviewing, which is by no means perfect, remains the best of a bad lot.

Andrew Balemi is a Professional Teaching Fellow in the Department of Statistics at The University of Auckland. He is a former Head of Marketing Science at market research company Colmar Brunton. See all posts by Andrew Balemi »

Comments

  • Coral Grant

    Loved this post. I have really had enough of the texting polls that have sprung up on TV of late; to me they provide no relevant information because, as you state, we have no idea who they represent.

  • Megan Pledger

    CATI polls also have a great many biases…
    In order to be surveyed by CATI you
    – need to have a *land-line* phone
    – need to be at home when the surveyor rings
    – need to answer the phone yourself (rather than letting it ring or leaving it to someone else in the household)
    – need to agree to do the survey, and have good hearing and reasonable English skills

    And the bigger the household, the less likely you are to be surveyed: a 1/8 chance in a household with eight people aged 18+, versus 100% in a household with just one.
    (As I understand it, the polls don’t commonly adjust for household size in the weights.)
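
    To make that arithmetic concrete, here's a minimal sketch of the household-size design weight – illustrative only, not any polling company's actual procedure:

```python
# If one adult (18+) is picked at random per contacted household, the chance of
# being the one surveyed is 1 / (number of adults), so the correcting design
# weight is simply the number of adults in the household.
respondents = [
    {"household": "lives alone",      "adults_18_plus": 1},  # selected with probability 1
    {"household": "eight-adult flat", "adults_18_plus": 8},  # selected with probability 1/8
]

for r in respondents:
    selection_prob = 1 / r["adults_18_plus"]
    design_weight = 1 / selection_prob                       # equals adults_18_plus
    print(f"{r['household']}: selection prob {selection_prob:.3f}, weight {design_weight:.0f}")
```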

    All these things mean that CATI surveys don’t really represent the young, the busy, the poor and people in large households.

    While the Horizon polls are not that great, I don’t think people should be that confident about CATI polling surveys either.

    • Andrew Balemi

      I totally agree with the points you make – CATI is by no means perfect. However, we have extensive knowledge about its inadequacies, whereas the panel polls (which are being used a lot now) have yet to address theirs…

      • Megan Pledger

        I don’t think CATI research firms address their inadequacies either – they know ’em, they’re just happy to gloss over ’em.

        I don’t know of any private research companies that will tell you
        1) how they select respondents
        2) how they weight respondents based on their selection probability
        3) how they weight respondents based on non-response, and
        4) the total number of phone numbers used to get their 1000 completed surveys (given it’s a one-shot attempt to get someone in a household, I would guess 3 phone numbers for every completed survey, but I wouldn’t be surprised if it’s higher).

        And these are things you need to know in order to judge the quality of their methods.
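
        To make items 2) and 3) concrete, here's how those two pieces would combine for one respondent – hypothetical figures only, not any company's actual numbers – along with the yield implied by the phone-number guess in 4):

```python
# Hypothetical figures, purely to show how the weighting pieces combine.
adults_in_household = 3                       # one respondent picked from three adults
selection_prob = 1 / adults_in_household
design_weight = 1 / selection_prob            # item 2): weight for the selection probability

group_response_rate = 0.40                    # assumed response rate for this person's group
nonresponse_weight = 1 / group_response_rate  # item 3): inflate under-responding groups

final_weight = design_weight * nonresponse_weight
print("final weight:", final_weight)          # 3 * 2.5 = 7.5

# Item 4): if roughly 3 numbers are dialled per completed survey, 1,000 completes
# imply about 3,000 numbers and a completion yield of around 33%.
print("implied yield: %.0f%%" % (100 * 1000 / 3000))
```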

        • Thomas Lumley

          Colmar Brunton tell you 1), 2) and 3).

          It’s not true that it’s a one-shot attempt to get someone from a household: they make up to five call-backs.

          As far as I can tell, they don’t say how many numbers they need to sample to get a respondent, which I agree would be good to know. It’s not quite the data you need, though: what you actually need is the number of households sampled to get a respondent, and there’s probably no reliable way to tell whether a phone that isn’t answered belongs to a household.

  • A follow-up on the criticism of CATI polls: if CATI is so flawed, how did so many CATI polling companies achieve such good results?

    http://en.wikipedia.org/wiki/Opinion_polling_for_the_New_Zealand_general_election,_2011
