Posts written by Thomas Lumley (2534)


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

February 21, 2022

Is the vaccine still working?

Back in November, I wrote

The Covid vaccine is safe and effective and it’s good that most eligible people are getting it. But how much protection does it give? If you look at the NZ statistics on who gets Covid, it seems to be extraordinarily effective: the chance of ending up with (diagnosed) Covid for an unvaccinated person is about 20 times higher than for a vaccinated person.

That’s probably an overestimate.

The issue was that during the Auckland Delta outbreak, unvaccinated people were probably more likely to be exposed to Covid than vaccinated people, and this was exaggerating the (real, but smaller) benefit of the vaccine.

Things have changed. The case diagnoses for vaccinated and unvaccinated people are about equal as a proportion of the population.  Partly this is because the vaccine is less effective against infection with Omicron, but now I think the social factors may well be leading to an underestimate of the vaccine benefit.  The point of the traffic-light system was to reduce virus exposure for unvaccinated people, both so they would be less likely to pass the virus on and so they’d be less likely to end up in hospital.  Reports in the news about unvaccinated people and about businesses that don’t like the system suggest that it does at least reduce the presence of unvaccinated adults in crowded indoor public settings.  You could reasonably expect, then, that unvaccinated adults are less exposed than vaccinated adults and that their equal case rate shows the vaccine is working.

In the absence of any other information it would be hard to decide how much to believe this explanation, but we do have other information. Other countries, with more cases and more data, have better estimates of the benefit of the vaccine than you can get from the published NZ data.  The vaccine does reduce infections with Omicron.  It doesn’t work as well as it did against Delta, and the benefit falls off more rapidly with time, but there is a benefit.  From the overseas data we’d expect the vaccine to be working in New Zealand too, and the data we have are consistent with that expectation.
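To see how differential exposure can hide a real vaccine benefit, here is a small Python sketch. The risk ratio and exposure ratio are made-up illustrative numbers, not estimates from any actual data:

```python
def naive_ve(rate_vax, rate_unvax):
    """Vaccine effectiveness as 1 - relative risk, ignoring exposure."""
    return 1 - rate_vax / rate_unvax

true_rr = 0.5            # hypothetical per-exposure risk ratio: true VE = 50%
exposure_ratio = 0.5     # suppose unvaccinated people get half the exposure
observed_rr = true_rr / exposure_ratio   # = 1.0, i.e. equal case rates

print(naive_ve(observed_rr, 1.0))        # 0.0: vaccine looks useless
print(1 - observed_rr * exposure_ratio)  # 0.5: exposure-corrected VE
```

With equal case rates the naive estimate says the vaccine does nothing; once you allow for the unvaccinated being half as exposed, the assumed 50% benefit reappears.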

Even if we weren’t preventing cases of Omicron, there are at least two arguments for continuing to have vaccine rules. First, Delta is not actually gone — it’s still 5-7% of sequenced cases in MIQ and the community. It’s a small fraction of the outbreak, but the numbers haven’t gone down much. Second, hospitalisation matters. As you may remember, we had hospitals even before Covid.  They’re important for treating everything from cancer to car crashes.  Keeping them available for non-Covid uses has always been a key motivation of the Covid strategy.

The numbers don’t decide anything; whether to change the rules is a policy question. But the inputs to policy should be the best estimates we can get of vaccine effectiveness, not the crude case counts.

February 16, 2022

Briefly

  • Stats NZ had to take down NZ.Stat, one of the main public interfaces to official statistics.  They’re being very helpful by email to people who need the data, but it’s a problem — and it’s not really the right interface for people who just wanted to look up a few numbers.  Eric Crampton wrote about why this matters (feel free to ignore the comments about wellbeing indicators)
  • The NZ Open Source Association awards include one to the Ministry of Health for the Covid trace app, and to the University of Auckland Computational Evolution group for their phylogenetic inference software, BEAST
  • Measuring things you don’t have any real way to interpret, from XKCD
  • “Creepiness” Is the Wrong Way to Think About Privacy from Slate. It’s a useful heuristic, but it’s not an analysis.  As an illustration of how intuitions can be non-generalisable, the chair of George W. Bush’s bioethics council thought eating ice-cream in public was offensive.
  • The power of selection bias: “In a series of tweets with an authentic February 7 timestamp, the self-described ‘industry insider working deep within Nintendo’ showed an apparently deep foreknowledge of details that Nintendo wouldn’t officially reveal until the evening of February 9, two days later.” He did it by making lots of predictions and then deleting all the ones that didn’t pan out.
  • Tim Harford explains Arrow’s Impossibility Theorem: it’s hard to take a set of individual preferences and turn them into a group decision
February 13, 2022

Community Covid Testing

For the past couple of years I’ve been arguing against Covid testing for people who don’t have symptoms and aren’t at high risk of exposure: they’ll have only a minute chance of testing positive, so we won’t learn anything, and we have better uses for the testing resources.  The only country that’s been doing systematic surveillance of Covid has been the UK, where the background prevalence has been, let’s say, somewhat higher than it had been here.

New Zealand is now getting a substantial Covid outbreak.  We’ll be over 1000 new cases some day soon, and it will start to matter for hospital planning purposes whether we’re detecting 20% of infections or 10% or 1% — because hospital numbers follow infection numbers with a long enough lag that the information is useful.
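A back-of-the-envelope sketch of why the ascertainment fraction matters for planning; the infection-hospitalisation ratio here is purely hypothetical, not an estimate for Omicron in NZ:

```python
# Hospitalisations track infections, not reported cases, so the same case
# count implies very different hospital loads under different detection rates.
daily_cases = 1000
ihr = 0.01  # hypothetical infection-hospitalisation ratio

for ascertainment in (0.20, 0.10, 0.01):
    infections = daily_cases / ascertainment
    hospitalisations = infections * ihr
    print(f"{ascertainment:.0%} detected -> {infections:,.0f} infections, "
          f"~{hospitalisations:,.0f} hospitalisations/day")
```

The same 1000 reported cases means 50 or 1000 expected hospitalisations a day depending on whether we are catching a fifth of infections or a hundredth.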

We’ve got two possible approaches to estimating the population Covid burden. One is wastewater testing, the other is random sampling.  Both approaches will keep working no matter how high the Covid prevalence is and no matter what fraction of infections are diagnosed and reported.  Sampling is more expensive, but has the advantage that it actually counts people rather than counting viruses and extrapolating to people.  Using both would probably help balance their pros and cons.

Sampling doesn’t have to be ‘simple random sampling’. If we know there’s more Covid in Auckland than in Oamaru, we can sample at a higher rate in Auckland and a lower rate in Oamaru.  We can also do adaptive sampling, where you take more samples in places where you find a hotspot.  Statistical ecologists trying to count plant and animal populations have studied this sort of problem quite a lot over the years — and statistical ecology is, fortunately, an area where NZ has expertise. But even simple random sampling would work, and would give us an estimate of infections and symptomatic cases across the country, and help plan the short to medium term response.
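As a sketch of how a stratified prevalence estimate works, here is a minimal Python version; the regions, sample sizes, and counts are invented for illustration:

```python
# Stratified estimate: sample at different rates in different regions, then
# weight each stratum's sample prevalence by its share of the population.
strata = {
    # region: (population, n_sampled, n_positive) -- hypothetical numbers
    "Auckland": (1_700_000, 2000, 40),
    "Oamaru":   (14_000,    100,  0),
}

total_pop = sum(pop for pop, _, _ in strata.values())
prevalence = sum((pop / total_pop) * (pos / n)
                 for pop, n, pos in strata.values())
print(f"estimated prevalence: {prevalence:.2%} "
      f"(~{prevalence * total_pop:,.0f} current infections)")
```

The sampling rate per region can vary freely, because each stratum is weighted by its population share rather than by its share of the sample.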

February 10, 2022

Briefly

  • Good discussion of the overinterpretation of opinion polls from Mediawatch. Hayden Donnell jokingly says “That is, of course, except for Mediawatch, which is the only truly objective outlet in town” — but like StatsChat, Mediawatch has the luxury of not commenting on stories where we don’t have anything to say.
  • In contrast to well-conducted opinion polls, Twitter polls are completely valueless as a way of collecting and summarising popular opinion. This means that while they’re fine for entertainment (yay  @nickbakernz) and collecting reckons from your friends, it’s probably not a good idea to rage-retweet batshit political polls.  Let them get 37:0 in favour of banning arithmetic or whatever, rather than 37:1000 against.
  • A summary of where the various non-profit Covid vaccines have got to, from Hilda Bastian
  • One of the repeated themes of this blog is that you need to measure the right things if you’re going to base decisions on them.  The “Drug Harm Index” may not qualify here because it’s not clear decisions are made based on it, but it’s still worth looking at whether it measures harm the right way.  As Russell Brown points out, the index would say “that cannabis is New Zealand’s most harmful drug – accounting for $626 million in “community harm” every year. Would you be surprised if I told you more than a third of that was lost GST?”
  • According to the MoH vaccination data, the vaccine roll-out for kids is going well on average, with 43% having had their first shot, but the differences by ethnicity are about the same as they were for adults. At the start of the Delta outbreak in August  (according to Hannah Martin at Stuff)  just over 40% of Aucklanders had had a first dose, 33% of Pacific people and 28% of Māori. That’s almost creepily close to the current situation with 5-11 year olds across the country now — the percentage for Māori being slightly lower this time.  Equity being a priority doesn’t seem to have had much impact.
  • Interesting post from Pew Research on writing survey questions: in particular, ‘agree:disagree’ questions give you more ‘agree’ results than forced choice “pineapple or pepperoni” questions on the same issues.
  • In New Zealand there are some issues with denominators for vaccination rates — the population that’s used undercounts minority groups.  This seems to be much worse in the UK: from Paul Mainwood on Twitter
February 7, 2022

Testing numbers

The Herald and the Spinoff both commented on the Covid testing results yesterday. The Spinoff had a quick paragraph

While the tally of new cases is down, the test positivity rate is up. Yesterday’s report saw 21,471 tests and 243 positive cases – a one in 88 result; today it was 16,873 tests and 208 new cases: a one in 81 result.

and the Herald had a detailed story with quotes from experts

Experts believe Covid fatigue and a perception that Omicron is less of a threat than Delta are to blame for low testing numbers at the start of the community outbreak.

There were 100,000 fewer tests administered in the week following Omicron community transmission than the week following Delta transmission, Ministry of Health data shows.

They’re both right, but the Ministry of Health is not giving out the most helpful numbers or comparisons to understand how much of a problem this really is.
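The Spinoff’s “one in 88” and “one in 81” figures are easy to check; positivity is just positives divided by tests:

```python
# Checking the Spinoff's arithmetic on the two days of reported testing.
days = [("yesterday", 21_471, 243), ("today", 16_873, 208)]

for label, tests, positives in days:
    positivity = positives / tests
    print(f"{label}: positivity {positivity:.2%}, "
          f"i.e. one in {tests / positives:.0f}")
```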

There are three basic reasons for testing: regular surveillance for people in certain high-risk jobs, testing of contacts, and testing of people with symptoms.  The number of surveillance tests is pretty much uninformative — it’s just a policy choice — but the proportion of positive tests is a strong signal.  The number of tests done for (not yet symptomatic) close contacts tells us about the effectiveness of contact tracing and about the number of cases in recent days (which we knew), but it doesn’t tell us much else, and the positivity rate will mostly depend on who we define as close contacts rather than on anything about the epidemic.  The number of tests prompted by symptoms actually is an indicator of willingness to test, and the test positivity rate is an indicator of Covid prevalence, but only up to a point.

There’s another external factor confusing the interpretation of changes in symptomatic testing: the seasonal changes in the rate of other illnesses.  When Delta appeared, testing was higher than when Omicron appeared.  That could be partly because people (wrongly) thought Omicron didn’t matter, or (wrongly) thought it couldn’t be controlled, or (perhaps correctly) worried that their employers would be less supportive of being absent, or thought the public health system didn’t care as much or something.  It will also be partly because fewer people have colds in December than in August.

As a result of much collective mahi and good luck, most of the people getting tested because of symptoms actually have some other viral upper-respiratory illness, not Covid.  At times of year when there is more not-actually-Covid illness, testing rates should be higher. August is winter and kids had been at school and daycare; it’s the peak season for not-actually-Covid. December, with school out and after a long lockdown to suppress various other viruses, is low season for not-actually-Covid. Fewer tests in December is not a surprise.

Not only will more colds mean more testing, they will also mean a lower test positivity rate — at the extreme if there were no other illnesses, everyone with symptoms would have Covid. The two key testing statistics, counts and positivity rate, are hard to interpret in comparisons between now and August.
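A toy calculation (with made-up numbers) makes the point: holding Covid infections fixed, more background illness means more tests and lower positivity:

```python
# Same number of symptomatic Covid infections in both scenarios; only the
# amount of background (non-Covid) illness changes.
covid_symptomatic = 200  # hypothetical

for other_illness in (2_000, 10_000):  # low season vs high season for colds
    tests = covid_symptomatic + other_illness
    positivity = covid_symptomatic / tests
    print(f"{tests} symptomatic tests, positivity {positivity:.1%}")
```

Covid is identical in the two scenarios, but the winter one has five times the tests and less than a quarter the positivity, which is why comparing August and December is treacherous.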

It would help some if the Ministry of Health reported test numbers and results by reason for testing: contacts, symptoms, regular surveillance. It would help to compare symptomatic testing rates with independent estimates of the background rate of symptoms (eg from Flutracker).  But it’s always going to be hard to interpret differences over long periods of time — differences over a few weeks are easier to interpret, preferably averaged over more than one day of reporting to reduce random noise.

None of this is to disagree with the call for people with symptoms to get tested.  We know not everyone with symptoms is tested; it’s probably been a minority throughout the pandemic. Getting the rate up would help flatten the wave of Omicron along with masks and vaccines and everything else.

February 6, 2022

How many omicrons (recap)

Now that we’re at Waitangi weekend we can confirm that New Zealand modellers and epidemiologists, none of whom expected 50,000 cases per day at this point, were correct.  Unfortunately, the Herald has

Questioned on earlier figures that up to 50,000 new cases would be emerging by Waitangi Day – and 80,000 a day a few weeks later – Hipkins described the calculations as useful, saying it was better to have some modelling than none.

Further down, the Herald piece admits that these figures didn’t come from the New Zealand modellers that the Minister is paying and being advised by, but from IHME in Seattle. It’s worse than that, though. The only place I saw tens of thousands of cases as a description of the modelling by the IHME in Seattle was in a Herald headline.

All the other reporting of it that I saw at least said “infections”, even if they weren’t clear enough that this wasn’t remotely the same as cases. 

The IHME model prediction for reported cases today, Sunday 6 February, was actually 332 (or 202 with good mask use), even though the projection for infections by tomorrow was nearly 50,000.

The uncertainty interval for that projected 332 went from 85 to nearly 800, so the actual figure was well inside the predicted range.

You might think that this sort of accuracy still isn’t very good. Projecting the timing of the epidemic is hard — think of the exponential-spread cartoon from Toby Morris and Siouxsie Wiles.

Especially early on in an outbreak, individual choices and luck can make a big difference to how fast the outbreak spreads.  Eventually it will be overall patterns of vaccination and masking and distancing and isolation that matter for the overall outbreak size. The models will be more accurate as the outbreak gets bigger and less random, and they will likely be more accurate about total outbreak size than about timing.
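A toy branching-process simulation shows the point about early randomness: with the same mean reproduction number, the day a simulated outbreak reaches 1,000 cases can shift by weeks depending only on chance. All parameters are illustrative, not fitted to NZ data:

```python
import random

def days_to_reach(threshold, r=1.3, seed=0, max_gens=200):
    """Generations until the outbreak first reaches `threshold` cases.

    Offspring per case ~ Binomial(2, r/2), which has mean r.
    Returns None if the outbreak dies out or never gets there.
    """
    rng = random.Random(seed)
    cases = 5  # a small seeding event
    for gen in range(max_gens):
        if cases >= threshold:
            return gen
        if cases == 0:
            return None  # the outbreak fizzled
        cases = sum(rng.random() < r / 2 for _ in range(2 * cases))
    return None

# Same parameters every run; only the random draws differ.
print([days_to_reach(1_000, seed=s) for s in range(5)])
```

Once case numbers are large the growth rate settles down to its mean, which is why models get more accurate as the outbreak grows.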

I’m not a fan of the IHME models — they have notoriously been overly optimistic in the medium to long term in the US — but Michael Baker and the Otago group think they’re reasonable, and you should arguably listen to them rather than me on this topic.  We’ll find out soon. Whatever you think of them in general, though, the modellers certainly didn’t predict 50,000 cases by today, and shouldn’t be criticised for failing to predict something that didn’t happen.

 

February 1, 2022

Pie charts, Oz edition

From The Australian (via Luke Wihone on Twitter)

There are two issues here. First, they are called percentages for a reason — they should add up to 100. This is what it looks like with the missing 16% included:

Even if you decided to rescale the percentages to give a two-candidate pie, though, the graph is wrong. This is what it would actually look like:

That’s Australia. A graph like this one used in New Zealand politics would seem to come under the  Advertising Standards Authority decision saying misleading graphs are not actually misleading if they have the numbers written on them.  As I said at the time, I think this is bad as a matter of political norms and factually incorrect as to the impact of graphics. Maybe we can get it changed.
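For what it’s worth, the checks involved are simple enough to automate; here is a sketch with hypothetical percentages, not the figures from The Australian:

```python
# Two sanity checks for a pie chart: the labels should sum to 100%,
# and each wedge should subtend pct/100 * 360 degrees.
slices = {"Candidate A": 47, "Candidate B": 37}  # hypothetical numbers

total = sum(slices.values())
print(f"labels sum to {total}%, so {100 - total}% is missing")

# Add the remainder explicitly, then compute each wedge's correct angle.
slices["Other / undecided"] = 100 - total
for name, pct in slices.items():
    print(f"{name}: {pct}% -> {pct / 100 * 360:.1f} degrees")
```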

January 31, 2022

Net approval

There has been quite a bit of fuss on Twitter about this headline, and to a lesser extent the reporting it leads to.  The controversy is over the ‘net approval’ metric — proportion approving minus proportion disapproving — which is relatively new in NZ politics (and which, annoyingly, is not in the “full results” summary of the 1News Kantar poll at 1News).  You might not guess from the headline that the poll gives Labour+Greens a majority in Parliament and Ardern twice the “preferred PM” percentage of anyone else.

Net approval is a commonly-reported summary for polls about the US president. According to Wikipedia, it dates back to 1937. That in itself is valuable for the US — continuity makes it easier to do long-term comparisons — and attitudes to the President, separately from his party, seem to be a useful aspect of public mood to measure.  In the US, it isn’t usual to compare the net approval of the President and the Leader of the Opposition; they don’t have one. You do sometimes get net approval ratings for Presidential candidates, but they seem to be less common than just ‘approval’ or ‘would vote for’ or more detailed breakdowns.

There’s a weaker case for personal approval ratings here than in the US, since people don’t vote for a Prime Minister separately from a party — if anything, it might be more interesting to get personal approval for electorate MPs — but it’s not irrelevant. You could argue, and some of the people complaining certainly did, that Jacinda Ardern has made her party more popular than it would be under Generic Replacement Prime Minister, and that Judith Collins made her party less popular than it would have been under Generic Replacement Leader.  That’s a meaningful question on which net approval provides some limited data, in a different way than “preferred Prime Minister” does. However, I would argue that net personal approval is more useful as a comparison over time than a comparison between government and opposition, because the level of “Don’t Care” will intrinsically tend to be higher for leaders who aren’t actually in government. As the Herald says

Just 10 per cent gave no answer or said they didn’t know, which is probably to be expected given Ardern has been Prime Minister for four years – most people have an opinion on her.

I’ve got no problem with net approval being reported. It’s definitely true that it has gone down for Ardern, though it’s not clear how much is a reduction in approval and how much is an increase in disapproval. I don’t think the headline is appropriate given how new ‘net approval’ is, and given the problems of comparing opposition and government net approval.  It’s clear that Luxon’s approval is up, and that National’s support is up, though more at the expense of ACT than Labour.  The second headline, if you click through from the front page, is more reasonable —  Jacinda Ardern’s personal approval rating plummets in new 1News poll, but Christopher Luxon won’t be getting too excited — though even there I’d be happier if the headline was about one of the familiar metrics or at least said ‘net’.
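A minimal sketch (with invented poll numbers) of the net-approval arithmetic, and of how a large “don’t know” share flatters a less familiar leader:

```python
def net_approval(approve, disapprove):
    """Net approval: percent approving minus percent disapproving."""
    return approve - disapprove

# Hypothetical numbers: a PM everyone has a view on, vs a newer opposition leader.
pm = {"approve": 45, "disapprove": 45, "dont_know": 10}
opposition = {"approve": 35, "disapprove": 25, "dont_know": 40}

print(net_approval(pm["approve"], pm["disapprove"]))                  # 0
print(net_approval(opposition["approve"], opposition["disapprove"]))  # 10
```

The opposition leader comes out ahead partly because 40% of respondents have no view either way, which is exactly the comparability problem with government-versus-opposition net approval.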

Briefly

  • The Financial Times reports that the head of Turkey’s official statistics agency has been sacked, and suggests that it’s because the government doesn’t like the inflation data.  This is counterproductive; the reasons that inflation estimates are useful rely on people believing them.
  • David Epstein has a nice post about the ‘everything in your fridge causes and prevents cancer’ problem
  • Entirely separately from the question of how it should be headlined, here’s a Twitter thread about the accuracy of the IHME Covid predictions (for the USA).
  • From Russell Brown, a post criticising the ‘Drug Harm Index’ 
  • Via Tobias Schneider on Twitter, some interesting beliefs about NATO membership from this report. The Saudi Arabia, South Africa, and China samples are acknowledged to skew wealthy/educated; the others are supposed to be representative. Yes, 11% of Russian respondents say they think Russia is in NATO
  • A pointlessly bad graph from the White House — why would anyone make an obviously distorted y-axis like this when it doesn’t convey a particularly misleading impression?
  • A graph of Google mobility data (from @thoughtfulnz on Twitter) showing the number of people out and about in retail or recreation locations was a bit higher than pre-Covid, then decreased to about pre-Covid levels after the Omicron traffic lights introduction.  From a public health point of view, we could do with being less normal and more like the US and UK, which are much lower than pre-Covid
January 27, 2022

How many omicrons?

Radio NZ has a headline Omicron: Modelling suggests NZ could face peak of 80,000 daily infections, and the report starts “New Zealand could be facing 50,000 daily Omicron infections by Waitangi weekend”. This is technically correct, but in this context that is not the best kind of correct.

First, this is a model for infections, not cases.  It includes asymptomatic infections (which are definitely a thing) and infections that just don’t get reported. The modelled peak for cases is a couple of weeks later, and about a factor of 7 lower.  So 50,000 daily infections by Waitangi weekend, peaking at 80,000 a few weeks later means 425 daily cases by Waitangi weekend, peaking around 11,000 daily cases by late March, if we believe the model.  Given that we have been seeing reporting of cases, not infections, for the past two years, it’s misleading to lead with a number that peaks weeks earlier and is an order of magnitude higher.
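The infections-to-cases conversion above is simple arithmetic:

```python
# The modelled case peak is about a factor of 7 below the infection peak.
infection_peak = 80_000
ratio = 7  # modelled infections per reported case, roughly
case_peak = infection_peak / ratio
print(f"{case_peak:,.0f} daily cases at the peak")  # roughly 11,000
```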

Is it realistic that so many cases get unreported? It’s not clear. The best data on this, according to Trevor Bedford, who knows from Covid, is from the UK, where they have a mail-out prevalence survey.  He estimates that the UK reports about 3 in 10 cases, and thinks it would be a bit lower for the US.  I’d be surprised if it’s lower than the UK here, at least for the next few weeks. So, that conflicts a bit with the IHME infections model.

So, is the model right? Well, on the one hand, it’s a serious effort at modelling and should be taken seriously.  On the other hand, it’s a model for everywhere in the world, so the amount of attention given to New Zealand data and outcomes will be quite limited.  The NZ modellers put rather more effort into modelling New Zealand data and New Zealand policies.

The reasons that New Zealand eventually controlled our Delta outbreak were specific to New Zealand: lots of new vaccinations, quite good adherence to interventions, being happy to take it outside, being on a small island in the tropics, whatever.  This sort of thing is hard for a worldwide model to pick up.  As Radio NZ says, the model has a prediction if we use masks, and a prediction if everyone gets boosted; these are lower.  It doesn’t have a prediction that accounts for capacity restrictions or vaccination of children. It’s a model where ‘flattening the curve’ fails completely.

Looking at the model in more detail, it does seem that there are some issues with the NZ data feeds. The model for testing looks like this:

That’s clearly wrong in two ways: first, it’s not going to be steady like that. More importantly, it’s too low by about a factor of 50. Here’s what the Ministry of Health says daily testing data looks like:

The vaccination model is also somewhat out of date

It projects vaccinations as stopping in mid-November. They didn’t.

What can we say about the projections? Well, Victoria, with a slightly higher population, somewhat weaker restrictions, and a not wildly different vaccination rate, peaked at about 14,000 cases per day.  So that’s clearly in the plausible range, and would be bad enough.  It’s not out of the question that things get as bad as the IHME estimate, but I think it’s unrealistic to think of it as a most likely projection. And it certainly doesn’t need the confusion of ‘infections’ and ‘cases’.