February 7, 2022

Testing numbers

The Herald and the Spinoff both commented on the Covid testing results yesterday. The Spinoff had a quick paragraph

While the tally of new cases is down, the test positivity rate is up. Yesterday’s report saw 21,471 tests and 243 positive cases – a one in 88 result; today it was 16,873 tests and 208 new cases: a one in 81 result.

and the Herald had a detailed story with quotes from experts

Experts believe Covid fatigue and a perception that Omicron is less of a threat than Delta are to blame for low testing numbers at the start of the community outbreak.

There were 100,000 fewer tests administered in the week following Omicron community transmission than the week following Delta transmission, Ministry of Health data shows.

They’re both right, but the Ministry of Health is not giving out the most helpful numbers or comparisons for understanding how much of a problem this really is.
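The Spinoff’s one-in-N figures are just tests divided by positive cases; a quick check of the arithmetic, using the counts from the quote above:

```python
# Reported daily totals (from the Spinoff paragraph quoted above)
tests_day1, cases_day1 = 21_471, 243
tests_day2, cases_day2 = 16_873, 208

# "One in N" is tests per positive case
print(tests_day1 / cases_day1)  # about 88
print(tests_day2 / cases_day2)  # about 81

# The same information expressed as a test positivity percentage
print(100 * cases_day1 / tests_day1)  # about 1.1%
print(100 * cases_day2 / tests_day2)  # about 1.2%
```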

There are three basic reasons for testing: regular surveillance for people in certain high-risk jobs, testing of contacts, and testing of people with symptoms.  The number of surveillance tests is pretty much uninformative — it’s just a policy choice — but the proportion of positive tests is a strong signal.  The number of tests done for (not yet symptomatic) close contacts tells us about the effectiveness of contact tracing and about the number of cases in recent days (which we knew), but it doesn’t tell us much else, and the positivity rate will mostly depend on who we define as close contacts rather than on anything about the epidemic.  The number of tests prompted by symptoms actually is an indicator of willingness to test, and the test positivity rate is an indicator of Covid prevalence, but only up to a point.

There’s another external factor confusing the interpretation of changes in symptomatic testing: the seasonal changes in the rate of other illnesses.  When Delta appeared, testing was higher than when Omicron appeared.  That could be partly because people (wrongly) thought Omicron didn’t matter, or (wrongly) thought it couldn’t be controlled, or (perhaps correctly) worried that their employers would be less supportive of being absent, or thought the public health system didn’t care as much or something.  It will also be partly because fewer people have colds in December than in August.

As a result of much collective mahi and good luck, most of the people getting tested because of symptoms actually have some other viral upper-respiratory illness, not Covid.  At times of year when there is more not-actually-Covid illness, testing rates should be higher. August is winter and kids had been at school and daycare; it’s the peak season for not-actually-Covid. December, with school out and after a long lockdown to suppress various other viruses, is low season for not-actually-Covid. Fewer tests in December is not a surprise.

Not only will more colds mean more testing, they will also mean a lower test positivity rate — at the extreme, if there were no other illnesses, everyone with symptoms would have Covid. The two key testing statistics, counts and positivity rate, are hard to interpret in comparisons between now and August.

It would help some if the Ministry of Health reported test numbers and results by reason for testing: contacts, symptoms, regular surveillance. It would help to compare symptomatic testing rates with independent estimates of the background rate of symptoms (eg from Flutracker).  But it’s always going to be hard to interpret differences over long periods of time — differences over a few weeks are easier to interpret, preferably averaged over more than one day of reporting to reduce random noise.
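Averaging over several reporting days is straightforward; a minimal sketch of a trailing 7-day moving average, with illustrative daily counts:

```python
def moving_average(xs, window=7):
    """Trailing moving average to smooth day-to-day reporting noise."""
    return [sum(xs[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(xs))]

# Illustrative daily case counts with reporting noise (not real data)
daily_cases = [243, 208, 230, 190, 260, 215, 240, 255, 225]
print(moving_average(daily_cases))
```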

None of this is to disagree with the call for people with symptoms to get tested.  We know not everyone with symptoms is tested; it’s probably been a minority throughout the pandemic. Getting the rate up would help flatten the wave of Omicron along with masks and vaccines and everything else.

February 6, 2022

How many omicrons (recap)

Now that we’re at Waitangi weekend, we can confirm that New Zealand modellers and epidemiologists, none of whom expected 50,000 cases per day at this point, were correct.  Unfortunately, the Herald has

Questioned on earlier figures that up to 50,000 new cases would be emerging by Waitangi Day – and 80,000 a day a few weeks later – Hipkins described the calculations as useful, saying it was better to have some modelling than none.

Further down, the Herald piece admits that these figures didn’t come from the New Zealand modellers that the Minister is paying and being advised by, but from IHME in Seattle. It’s worse than that, though. The only place I saw tens of thousands of cases as a description of the modelling by the IHME in Seattle was in a Herald headline.

All the other reporting of it that I saw at least said “infections”, even if they weren’t clear enough that this wasn’t remotely the same as cases. 

As you can see, the IHME model prediction for reported cases today, Sunday 6 February, was actually 332 (or 202 with good mask use), even though the projection for infections by tomorrow was nearly 50,000.

The uncertainty interval for that projected 332 went from 85 to nearly 800, so the actual figure was well inside the predicted range.

You might think that this sort of accuracy still isn’t very good. Projecting the timing of the epidemic is hard — think of the exponential-spread cartoon from Toby Morris and Siouxsie Wiles.

Especially early on in an outbreak, individual choices and luck can make a big difference to how fast the outbreak spreads.  Eventually it will be overall patterns of vaccination and masking and distancing and isolation that matter for the overall outbreak size. The models will be more accurate as the outbreak gets bigger and less random, and they will likely be more accurate about total outbreak size than about timing.

I’m not a fan of the IHME models — they have notoriously been overly optimistic in the medium to long term in the US — but Michael Baker and the Otago group think they’re reasonable, and you should arguably listen to them rather than me on this topic.  We’ll find out soon. Whatever you think of them in general, though, the modellers certainly didn’t predict 50,000 cases by today, and shouldn’t be criticised for failing to predict something that didn’t happen.

 

February 2, 2022

United Rugby Championship Predictions for Week 13

Team Ratings for Week 13

The basic method is described on my Department home page.
Here are the team ratings prior to this week’s games, along with the ratings at the start of the season.

Team Current Rating Rating at Season Start Difference
Leinster 14.71 14.79 -0.10
Munster 9.92 10.69 -0.80
Ulster 7.14 7.41 -0.30
Glasgow 4.87 3.69 1.20
Edinburgh 3.66 2.90 0.80
Bulls 2.57 3.65 -1.10
Connacht 2.28 1.72 0.60
Stormers 1.40 0.00 1.40
Sharks 1.12 -0.07 1.20
Ospreys 0.18 0.94 -0.80
Cardiff Rugby -0.93 -0.11 -0.80
Scarlets -1.39 -0.77 -0.60
Benetton -3.05 -4.50 1.40
Lions -3.66 -3.91 0.30
Dragons -6.46 -6.92 0.50
Zebre -16.31 -13.47 -2.80

 

Performance So Far

So far there have been 64 matches played, 43 of which were correctly predicted, a success rate of 67.2%.
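The quoted success rate is just correct predictions divided by matches played; for the figures above:

```python
# Matches played and correct predictions so far (from the text above)
correct, played = 43, 64
success_rate = 100 * correct / played
print(f"{success_rate:.1f}%")  # 67.2%
```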
Here are the predictions for last week’s games.

Game Date Score Prediction Correct
1 Dragons vs. Benetton Jan 29 13 – 13 3.80 FALSE
2 Ulster vs. Scarlets Jan 29 27 – 15 15.70 TRUE
3 Cardiff Rugby vs. Leinster Jan 30 29 – 27 -10.50 FALSE
4 Connacht vs. Glasgow Jan 30 20 – 42 6.50 FALSE
5 Ospreys vs. Edinburgh Jan 30 23 – 19 2.80 TRUE
6 Sharks vs. Stormers Jan 30 22 – 22 5.70 FALSE
7 Lions vs. Bulls Jan 30 10 – 34 1.10 FALSE
8 Zebre vs. Munster Jan 30 17 – 34 -20.30 TRUE

 

Predictions for Week 13

Here are the predictions for Week 13. The prediction is my estimated expected points difference with a positive margin being a win to the home team, and a negative margin a win to the away team.

Game Date Winner Prediction
1 Ulster vs. Connacht Feb 05 Ulster 9.90
2 Lions vs. Bulls Feb 06 Bulls -1.20
3 Stormers vs. Sharks Feb 06 Stormers 5.30
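The predictions above look consistent with a simple margin model: home rating minus away rating plus a home-ground advantage of roughly 5 points. The 5-point figure is inferred from the published table, not taken from the author’s method description, so treat this sketch as a reconstruction:

```python
HOME_ADVANTAGE = 5.0  # points; an assumption inferred from the published predictions

def predicted_margin(home_rating, away_rating, home_advantage=HOME_ADVANTAGE):
    """Expected points difference; positive favours the home team."""
    return home_rating - away_rating + home_advantage

# Ulster (7.14) at home to Connacht (2.28): published prediction is 9.90
print(round(predicted_margin(7.14, 2.28), 2))  # about 9.9, matching to rounding
```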

 

Top 14 Predictions for Round 17

Team Ratings for Round 17

The basic method is described on my Department home page.
Here are the team ratings prior to this week’s games, along with the ratings at the start of the season.

Team Current Rating Rating at Season Start Difference
Stade Toulousain 7.83 6.83 1.00
La Rochelle 7.48 6.78 0.70
Bordeaux-Begles 6.87 5.42 1.50
Lyon Rugby 5.26 4.15 1.10
Racing-Metro 92 4.76 6.13 -1.40
Clermont Auvergne 4.45 5.09 -0.60
Montpellier 4.11 -0.01 4.10
Castres Olympique 0.92 0.94 -0.00
RC Toulonnais 0.05 1.82 -1.80
Stade Francais Paris -0.10 1.20 -1.30
Section Paloise -2.19 -2.25 0.10
Brive -2.94 -3.19 0.20
USA Perpignan -4.09 -2.78 -1.30
Biarritz -5.05 -2.78 -2.30

 

Performance So Far

So far there have been 105 matches played, 79 of which were correctly predicted, a success rate of 75.2%.
Here are the predictions for last week’s games.

Game Date Score Prediction Correct
1 Bordeaux-Begles vs. Castres Olympique Jan 30 23 – 10 12.40 TRUE
2 Brive vs. Biarritz Jan 30 33 – 10 7.60 TRUE
3 La Rochelle vs. Montpellier Jan 30 23 – 29 11.00 FALSE
4 Stade Toulousain vs. Racing-Metro 92 Jan 30 15 – 20 10.60 FALSE
5 USA Perpignan vs. Lyon Rugby Jan 30 23 – 28 -2.60 TRUE
6 Section Paloise vs. Clermont Auvergne Jan 31 28 – 20 -1.00 FALSE
7 Stade Francais Paris vs. RC Toulonnais Jan 31 26 – 24 6.80 TRUE

 

Predictions for Round 17

Here are the predictions for Round 17. The prediction is my estimated expected points difference with a positive margin being a win to the home team, and a negative margin a win to the away team.

Game Date Winner Prediction
1 USA Perpignan vs. Stade Toulousain Feb 06 Stade Toulousain -5.40
2 Lyon Rugby vs. Stade Francais Paris Feb 06 Lyon Rugby 11.90
3 Montpellier vs. Section Paloise Feb 06 Montpellier 12.80
4 Racing-Metro 92 vs. Brive Feb 06 Racing-Metro 92 14.20
5 RC Toulonnais vs. Castres Olympique Feb 06 RC Toulonnais 5.60
6 Biarritz vs. La Rochelle Feb 07 La Rochelle -6.00
7 Clermont Auvergne vs. Bordeaux-Begles Feb 07 Clermont Auvergne 4.10

 

Rugby Premiership Predictions for Round 15

Team Ratings for Round 15

The basic method is described on my Department home page.
Here are the team ratings prior to this week’s games, along with the ratings at the start of the season.

Team Current Rating Rating at Season Start Difference
Exeter Chiefs 4.27 7.35 -3.10
Saracens 3.60 -5.00 8.60
Wasps 1.80 5.66 -3.90
Leicester Tigers 1.69 -6.14 7.80
Sale Sharks 1.66 4.96 -3.30
Gloucester 1.28 -1.02 2.30
Harlequins 1.26 -1.08 2.30
Northampton Saints 0.62 -2.48 3.10
Bristol -1.85 1.28 -3.10
London Irish -3.56 -8.05 4.50
Bath -4.99 2.14 -7.10
Newcastle Falcons -7.24 -3.52 -3.70
Worcester Warriors -10.15 -5.71 -4.40

 

Performance So Far

So far there have been 82 matches played, 42 of which were correctly predicted, a success rate of 51.2%.
Here are the predictions for last week’s games.

Game Date Score Prediction Correct
1 Bath vs. Harlequins Jan 29 21 – 17 -2.50 FALSE
2 London Irish vs. Exeter Chiefs Jan 30 18 – 14 -4.30 FALSE
3 Newcastle Falcons vs. Gloucester Jan 30 22 – 32 -3.20 TRUE
4 Worcester Warriors vs. Northampton Saints Jan 30 13 – 29 -5.10 TRUE
5 Sale Sharks vs. Leicester Tigers Jan 31 35 – 26 3.80 TRUE
6 Wasps vs. Saracens Jan 31 26 – 20 2.10 TRUE

 

Predictions for Round 15

Here are the predictions for Round 15. The prediction is my estimated expected points difference with a positive margin being a win to the home team, and a negative margin a win to the away team.

Game Date Winner Prediction
1 Bristol vs. Newcastle Falcons Feb 06 Bristol 9.90
2 Exeter Chiefs vs. Wasps Feb 06 Exeter Chiefs 7.00
3 Gloucester vs. London Irish Feb 06 Gloucester 9.30
4 Harlequins vs. Sale Sharks Feb 06 Harlequins 4.10
5 Leicester Tigers vs. Worcester Warriors Feb 06 Leicester Tigers 16.30
6 Saracens vs. Bath Feb 06 Saracens 13.10

 

February 1, 2022

Pie charts, Oz edition

From The Australian (via Luke Wihone on Twitter)

There are two issues here. First, they are called percentages for a reason — they should add up to 100. This is what it looks like with the missing 16%

Even if you decided to rescale the percentages to give a two-candidate pie, though, the graph is wrong. This is what it would actually look like
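The rescaling itself is simple arithmetic. A sketch with hypothetical shares (the actual poll figures aren’t reproduced here), chosen so the two candidates sum to 84%, leaving the missing 16%:

```python
# Hypothetical two-candidate shares that leave 16% unaccounted for
share_a, share_b = 47.0, 37.0
remainder = 100 - (share_a + share_b)
print(remainder)  # 16.0

# Two-candidate rescaling: divide each share by the two-candidate total
total = share_a + share_b
print(round(100 * share_a / total, 1))  # 56.0
print(round(100 * share_b / total, 1))  # 44.0
```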

That’s Australia. A graph like this one used in New Zealand politics would seem to come under the Advertising Standards Authority decision saying misleading graphs are not actually misleading if they have the numbers written on them.  As I said at the time, I think this is bad as a matter of political norms and factually incorrect as to the impact of graphics. Maybe we can get it changed.

January 31, 2022

Net approval

There has been quite a bit of fuss on Twitter about this headline, and to a lesser extent the reporting it leads to.  The controversy is over the ‘net approval’ metric — proportion approving minus proportion disapproving — which is relatively new in NZ politics (and which is annoyingly not in the “full results” summary of the 1News Kantar poll at 1News).  You might not guess from the headline that the poll gives Labour+Greens a majority in Parliament and Ardern twice the “preferred PM” percentage of anyone else.

Net approval is a commonly-reported summary for polls about the US president. According to Wikipedia, it dates back to 1937. That in itself is valuable for the US — continuity makes it easier to do long-term comparisons — and attitudes to the President, separately from his party, seem to be a useful aspect of public mood to measure.   In the US, it isn’t usual to compare the net approval of the President and the Leader of the Opposition; they don’t have one. You do sometimes get net approval ratings for Presidential candidates, but they seem to be less common than just ‘approval’ or ‘would vote for’ or more detailed breakdowns.

There’s a weaker case for personal approval ratings here than in the US, since people don’t vote for a Prime Minister separately from a party — if anything, it might be more interesting to get personal approval for electorate MPs — but it’s not irrelevant. You could argue, and some of the people complaining certainly did, that Jacinda Ardern has made her party more popular than it would be under Generic Replacement Prime Minister, and that Judith Collins made her party less popular than it would have been under Generic Replacement Leader.  That’s a meaningful question on which net approval provides some limited data, in a different way than “preferred Prime Minister” does. However, I would argue that net personal approval is more useful as a comparison over time than a comparison between government and opposition, because the level of “Don’t Care” will intrinsically tend to be higher for leaders who aren’t actually in government. As the Herald says

Just 10 per cent gave no answer or said they didn’t know, which is probably to be expected given Ardern has been Prime Minister for four years – most people have an opinion on her.
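The comparability problem is easy to illustrate with made-up splits: a leader most people haven’t formed a view on can post a net approval close to that of a leader with far higher approval:

```python
def net_approval(approve, disapprove):
    """Net approval: percent approving minus percent disapproving."""
    return approve - disapprove

# Hypothetical splits (approve, disapprove, don't know; each row sums to 100)
pm = net_approval(approve=50, disapprove=40)          # 10% don't know
opposition = net_approval(approve=25, disapprove=20)  # 55% don't know

print(pm, opposition)  # 10 and 5: close, despite very different approval levels
```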

I’ve got no problem with net approval being reported. It’s definitely true that it has gone down for Ardern, though it’s not clear how much is a reduction in approval and how much is an increase in disapproval. I don’t think the headline is appropriate given how new ‘net approval’ is, and given the problems of comparing opposition and government net approval.  It’s clear that Luxon’s approval is up, and that National’s support is up, though more at the expense of ACT than Labour.  The second headline, if you click through from the front page, is more reasonable —  Jacinda Ardern’s personal approval rating plummets in new 1News poll, but Christopher Luxon won’t be getting too excited — though even there I’d be happier if the headline was about one of the familiar metrics or at least said ‘net’.

Briefly

  • The Financial Times reports that the head of Turkey’s official statistics agency has been sacked, and suggests that it’s because the government doesn’t like the inflation data.  This is counterproductive; the reasons that inflation estimates are useful rely on people believing them.
  • David Epstein has a nice post about the ‘everything in your fridge causes and prevents cancer’ problem
  • Entirely separately from the question of how it should be headlined, here’s a Twitter thread about the accuracy of the IHME Covid predictions (for the USA).
  • From Russell Brown, a post criticising the ‘Drug Harm Index’ 
  • Via Tobias Schneider on Twitter, some interesting beliefs about NATO membership from this report. The Saudi Arabia, South Africa, and China samples are acknowledged to tilt wealthy/educated; the others are supposed to be representative. Yes, 11% of Russian respondents say they think Russia is in NATO
  • A pointlessly bad graph from the White House — why would anyone make an obviously distorted y-axis like this when it doesn’t convey a particularly misleading impression?
  • A graph of Google mobility data (from @thoughtfulnz on Twitter) showing the number of people out and about in retail or recreation locations was a bit higher than pre-Covid, then decreased to about pre-Covid levels after the Omicron traffic lights introduction.  From a public health point of view, we could do with being less normal and more like the US and UK, which are much lower than pre-Covid

January 27, 2022

How many omicrons?

Radio NZ has a headline Omicron: Modelling suggests NZ could face peak of 80,000 daily infections, and the report starts “New Zealand could be facing 50,000 daily Omicron infections by Waitangi weekend”. This is technically correct, but in this context that is not the best kind of correct.

First, this is a model for infections, not cases.  It includes asymptomatic infections (which are definitely a thing) and infections that just don’t get reported. The modelled peak for cases is a couple of weeks later, and about a factor of 7 lower.  So 50,000 daily infections by Waitangi weekend, peaking at 80,000 a few weeks later, means 425 daily cases by Waitangi weekend, peaking around 11,000 daily cases by late March, if we believe the model.  Given that we have been seeing reporting of cases, not infections, for the past two years, it’s misleading to headline a number that arrives twice as soon and is an order of magnitude higher.

Is it realistic that so many cases go unreported? It’s not clear. The best data on this, according to Trevor Bedford, who knows from Covid, is from the UK, where they have a mail-out prevalence survey.  He estimates that the UK reports about 3 in 10 cases, and thinks it would be a bit lower for the US.  I’d be surprised if it’s lower than the UK here, at least for the next few weeks. So, that conflicts a bit with the IHME infections model.
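The tension is easy to quantify with the approximate figures above: the model’s factor-of-7 gap between infections and cases implies a much lower reporting fraction than Bedford’s UK estimate:

```python
peak_infections = 80_000   # IHME modelled peak, daily infections
model_case_factor = 7      # model's case peak is about 7x lower (and later)

peak_cases = peak_infections / model_case_factor
print(round(peak_cases, -3))  # about 11,000 daily cases

# Implied reporting fraction vs Bedford's UK estimate
print(1 / model_case_factor)  # about 0.14: model implies ~1 in 7 infections reported
print(3 / 10)                 # 0.3: estimated fraction of cases reported in the UK
```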

So, is the model right? Well, on the one hand, it’s a serious effort at modelling and should be taken seriously.  On the other hand, it’s a model for everywhere in the world, so the amount of attention given to New Zealand data and outcomes will be quite limited.  The NZ modellers put rather more effort into modelling New Zealand data and New Zealand policies.

The reasons that New Zealand eventually controlled our Delta outbreak were specific to New Zealand: lots of new vaccinations, quite good adherence to interventions, being happy to take it outside, being on a small island in the tropics, whatever.  This sort of thing is hard for a worldwide model to pick up.  As Radio NZ says, the model has a prediction if we use masks, and a prediction if everyone gets boosted; these are lower.  It doesn’t have a prediction that accounts for capacity restrictions or vaccination of children. It’s a model where ‘flattening the curve’ fails completely.

Looking at the model in more detail, it does seem that there are some issues with the NZ data feeds. The model for testing looks like this:

That’s clearly wrong in two ways: first, it’s not going to be steady like that. More importantly, it’s too low by about a factor of 50. Here’s what the Ministry of Health says daily testing data looks like

The vaccination model is also somewhat out of date

It projects vaccinations as stopping in mid-November. They didn’t.

What can we say about the projections? Well, Victoria, with a slightly higher population, somewhat weaker restrictions, and a not wildly different vaccination rate, peaked at about 14,000 cases per day.  So that’s clearly in the plausible range, and would be bad enough.  It’s not out of the question that things get as bad as the IHME estimate, but I think it’s unrealistic to treat it as the most likely projection. And it certainly doesn’t need the confusion of ‘infections’ and ‘cases’.

January 24, 2022

United Rugby Championship Predictions for Week 12

Team Ratings for Week 12

The basic method is described on my Department home page.
Here are the team ratings prior to this week’s games, along with the ratings at the start of the season.

Team Current Rating Rating at Season Start Difference
Leinster 15.38 14.79 0.60
Munster 10.22 10.69 -0.50
Ulster 7.47 7.41 0.10
Edinburgh 3.77 2.90 0.90
Connacht 3.60 1.72 1.90
Glasgow 3.55 3.69 -0.10
Sharks 1.63 -0.07 1.70
Bulls 1.38 3.65 -2.30
Stormers 0.89 0.00 0.90
Ospreys 0.07 0.94 -0.90
Cardiff Rugby -1.61 -0.11 -1.50
Scarlets -1.72 -0.77 -1.00
Lions -2.47 -3.91 1.40
Benetton -3.39 -4.50 1.10
Dragons -6.12 -6.92 0.80
Zebre -16.61 -13.47 -3.10

 

Performance So Far

So far there have been 56 matches played, 40 of which were correctly predicted, a success rate of 71.4%.
Here are the predictions for last week’s games.

Game Date Score Prediction Correct
1 Lions vs. Sharks Jan 23 37 – 47 2.20 FALSE
2 Bulls vs. Stormers Jan 23 26 – 30 6.70 FALSE

 

Predictions for Week 12

Here are the predictions for Week 12. The prediction is my estimated expected points difference with a positive margin being a win to the home team, and a negative margin a win to the away team.

Game Date Winner Prediction
1 Dragons vs. Benetton Jan 29 Dragons 3.80
2 Ulster vs. Scarlets Jan 29 Ulster 15.70
3 Cardiff Rugby vs. Leinster Jan 30 Leinster -10.50
4 Connacht vs. Glasgow Jan 30 Connacht 6.50
5 Ospreys vs. Edinburgh Jan 30 Ospreys 2.80
6 Sharks vs. Stormers Jan 30 Sharks 5.70
7 Lions vs. Bulls Jan 30 Lions 1.10
8 Zebre vs. Munster Jan 30 Munster -20.30