December 8, 2016

Understanding risk

The Office of the PM’s Chief Science Advisor has two reports out on “Making decisions in the face of uncertainty: Understanding risk” (part 1, part 2). These aren’t completely new (part 1 came out in May), but I don’t think they’ve been on StatsChat before, and they’re good.

A quote from the second part, more generally relevant than just to statistics:

 In forming their views and assimilating information, most people follow the lead of credible experts – but they define and choose ‘experts’ based on whom they perceive as sharing their values. Experts are not immune to bias, and, as explained in Part 1 of this series, the actuarial approach itself is not free from value judgments. Biases and values are inherent in the risk assessment process, beginning with what we recognise as a hazard. They can influence the priority given to the study of specific risks and thereby generate data necessary to promote action on those risks.

Scientists are human, with their own biases and values. But modern science has largely evolved as a set of internationally recognized processes designed to minimize such biases, at least in the collection and analysis of the data. A core value judgment that remains in the processes of science is in the evaluation of the sufficiency of evidence on which to draw a conclusion. Because this judgment can be subject to bias, it is important to have independent replication and aggregation of scientific evidence from different studies and sources in order to reach a scientific consensus.

 Public trust in science and scientists may be becoming increasingly tenuous as the issues become ever more complex and contested. Scientists must find better ways to interact with decision makers and the public in order to bolster confidence in the authority of their expertise and the legitimacy of the advice that they provide.

Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

Comments

  • steve curtis

    No more statistical models, as we saw in the US election, where they blew up in the media’s face.

    8 years ago

  • Thomas Lumley

    If there was some other source of information that wasn’t objectively less accurate than the polls, you might have a point.

    8 years ago

    • steve curtis

      I see it as them applying sports forecasting methods (which are often the day job that brings in the money) to a political contest. That produced the 80%+ chance of a Clinton win when it was actually a 65% chance of Trump winning.
      I understand the analysis of the UK general election polling found that one of the reasons for getting it wrong was that the 70+ age group was included with the 60-70 group, when those over 70 voted more strongly for the Conservatives. The US result seems to be that smaller counties went more strongly for Trump than the polls were measuring. That is, except for the USC/LA Times tracking poll, which showed the strength of Trump’s support (but not the national vote).

      8 years ago

    • Joseph Delaney

      I would argue that thinking more carefully about the modeling assumptions of the statistical models would have been a potential source of improvement. Nate Silver did better than most: based on a quick read of his site after the election, it looks like he managed this by assuming correlated errors (thus amplifying variance). (See the simulation sketch after the comments.)

      No model on such small data is ever going to be “right” (a platonic ideal in any case), but it’s not a bad idea to put more thought into the variance as well as the mean. Some forecasts were very certain of a result, and while it’s true that Trump could have won even if the odds were 99% against him (randomness is tricky that way), it doesn’t seem like the results support a wild outlier as the cause.

      8 years ago

      • Megan Pledger

        I found it interesting that the probability of Clinton winning was never reported with a 95% confidence interval. My suspicion is that the intervals were either unreasonably large or unbelievably small, so that no one wanted to report them because it would have shown how bad the estimates were anyway.

        8 years ago

        • Thomas Lumley

          For a one-off event, a Bayesian credible interval (since that’s how they were working) doesn’t add anything to just having the probability. The expected utility of any decision depends only on the point probability. (There’s a small numerical illustration of this after the comments.)

          8 years ago

        • Thomas Lumley

          (I think I agree confidence intervals would be useful, but it’s not as obvious as it sounds)

          8 years ago
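
To illustrate Joseph Delaney’s point about correlated errors amplifying variance, here is a minimal simulation sketch. All of the numbers (state margins, electoral-vote counts, error sizes) are made up, and the model is a deliberately crude stand-in rather than any forecaster’s actual method. The idea is just that adding a shared national error component, which can swing many states at once, typically pulls the headline win probability back towards 50% compared with treating each state’s polling error as independent.

# Illustrative sketch only: hypothetical numbers, not any forecaster's model.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000
n_states = 50
electoral_votes = rng.integers(3, 30, size=n_states)    # made-up electoral-vote counts
polled_margin = rng.normal(0.03, 0.05, size=n_states)   # made-up polled margins for one candidate

def win_probability(shared_sd, state_sd):
    # True margin = polled margin + a national error shared by all states
    # + an independent state-specific error.
    shared = rng.normal(0, shared_sd, size=(n_sims, 1))
    local = rng.normal(0, state_sd, size=(n_sims, n_states))
    true_margin = polled_margin + shared + local
    ev_won = (true_margin > 0).astype(int) @ electoral_votes
    return np.mean(ev_won > electoral_votes.sum() / 2)

print("independent errors only:", win_probability(0.00, 0.04))
print("with a shared error too:", win_probability(0.03, 0.04))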
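
And to illustrate the point about credible intervals for a one-off event: the expected utility of a decision is linear in the win probability, so it depends only on the posterior mean of that probability. Two posteriors with the same mean but very different widths (and hence very different 95% intervals) give the same answer. The utilities and Beta posteriors below are invented purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
u_win, u_lose = 100.0, -40.0                 # hypothetical utilities of some decision

def expected_utility(alpha, beta, n_draws=1_000_000):
    # Average the utility over posterior draws of the win probability.
    p = rng.beta(alpha, beta, size=n_draws)
    return np.mean(p * u_win + (1 - p) * u_lose)

print("narrow posterior Beta(70, 30):", expected_utility(70, 30))
print("wide posterior Beta(7, 3):    ", expected_utility(7, 3))
print("plug in the point probability:", 0.7 * u_win + 0.3 * u_lose)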