February 2, 2022

Top 14 Predictions for Round 17

Team Ratings for Round 17

The basic method is described on my Department home page.
Here are the team ratings prior to this week’s games, along with the ratings at the start of the season.

Team    Current Rating    Rating at Season Start    Difference
Stade Toulousain 7.83 6.83 1.00
La Rochelle 7.48 6.78 0.70
Bordeaux-Begles 6.87 5.42 1.50
Lyon Rugby 5.26 4.15 1.10
Racing-Metro 92 4.76 6.13 -1.40
Clermont Auvergne 4.45 5.09 -0.60
Montpellier 4.11 -0.01 4.10
Castres Olympique 0.92 0.94 -0.00
RC Toulonnais 0.05 1.82 -1.80
Stade Francais Paris -0.10 1.20 -1.30
Section Paloise -2.19 -2.25 0.10
Brive -2.94 -3.19 0.20
USA Perpignan -4.09 -2.78 -1.30
Biarritz -5.05 -2.78 -2.30


Performance So Far

So far there have been 105 matches played, 79 of which were correctly predicted, a success rate of 75.2%.
Here are the predictions for last week’s games.

Game    Match    Date    Score    Prediction    Correct
1 Bordeaux-Begles vs. Castres Olympique Jan 30 23 – 10 12.40 TRUE
2 Brive vs. Biarritz Jan 30 33 – 10 7.60 TRUE
3 La Rochelle vs. Montpellier Jan 30 23 – 29 11.00 FALSE
4 Stade Toulousain vs. Racing-Metro 92 Jan 30 15 – 20 10.60 FALSE
5 USA Perpignan vs. Lyon Rugby Jan 30 23 – 28 -2.60 TRUE
6 Section Paloise vs. Clermont Auvergne Jan 31 28 – 20 -1.00 FALSE
7 Stade Francais Paris vs. RC Toulonnais Jan 31 26 – 24 6.80 TRUE


Predictions for Round 17

Here are the predictions for Round 17. The prediction is my estimated expected points difference, with a positive margin being a win to the home team and a negative margin a win to the away team.
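
For readers who want to reproduce the margins, the published predictions are consistent with a simple recipe: home rating minus away rating plus a fixed home-advantage term. The home-advantage value of about 6.5 points is backed out from the numbers in this post, not taken from the rating system's documentation, so treat this as a sketch rather than the official method.

```python
# Sketch: predicted margin = home rating - away rating + home advantage.
# HOME_ADVANTAGE is an assumption, backed out from the published margins
# for this competition (about 6.5 points); it is not an official figure.
HOME_ADVANTAGE = 6.5

def predicted_margin(home_rating: float, away_rating: float) -> float:
    """Expected points difference; positive favours the home team."""
    return home_rating - away_rating + HOME_ADVANTAGE

# Montpellier (4.11) at home to Section Paloise (-2.19):
print(round(predicted_margin(4.11, -2.19), 2))  # 12.8, matching the table
```

The same recipe with a smaller home-advantage term (roughly 4.5 points) reproduces the Premiership and Currie Cup margins further down.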

Game    Match    Date    Winner    Prediction
1 USA Perpignan vs. Stade Toulousain Feb 06 Stade Toulousain -5.40
2 Lyon Rugby vs. Stade Francais Paris Feb 06 Lyon Rugby 11.90
3 Montpellier vs. Section Paloise Feb 06 Montpellier 12.80
4 Racing-Metro 92 vs. Brive Feb 06 Racing-Metro 92 14.20
5 RC Toulonnais vs. Castres Olympique Feb 06 RC Toulonnais 5.60
6 Biarritz vs. La Rochelle Feb 07 La Rochelle -6.00
7 Clermont Auvergne vs. Bordeaux-Begles Feb 07 Clermont Auvergne 4.10


Rugby Premiership Predictions for Round 15

Team Ratings for Round 15

The basic method is described on my Department home page.
Here are the team ratings prior to this week’s games, along with the ratings at the start of the season.

Team    Current Rating    Rating at Season Start    Difference
Exeter Chiefs 4.27 7.35 -3.10
Saracens 3.60 -5.00 8.60
Wasps 1.80 5.66 -3.90
Leicester Tigers 1.69 -6.14 7.80
Sale Sharks 1.66 4.96 -3.30
Gloucester 1.28 -1.02 2.30
Harlequins 1.26 -1.08 2.30
Northampton Saints 0.62 -2.48 3.10
Bristol -1.85 1.28 -3.10
London Irish -3.56 -8.05 4.50
Bath -4.99 2.14 -7.10
Newcastle Falcons -7.24 -3.52 -3.70
Worcester Warriors -10.15 -5.71 -4.40


Performance So Far

So far there have been 82 matches played, 42 of which were correctly predicted, a success rate of 51.2%.
Here are the predictions for last week’s games.

Game    Match    Date    Score    Prediction    Correct
1 Bath vs. Harlequins Jan 29 21 – 17 -2.50 FALSE
2 London Irish vs. Exeter Chiefs Jan 30 18 – 14 -4.30 FALSE
3 Newcastle Falcons vs. Gloucester Jan 30 22 – 32 -3.20 TRUE
4 Worcester Warriors vs. Northampton Saints Jan 30 13 – 29 -5.10 TRUE
5 Sale Sharks vs. Leicester Tigers Jan 31 35 – 26 3.80 TRUE
6 Wasps vs. Saracens Jan 31 26 – 20 2.10 TRUE


Predictions for Round 15

Here are the predictions for Round 15. The prediction is my estimated expected points difference, with a positive margin being a win to the home team and a negative margin a win to the away team.

Game    Match    Date    Winner    Prediction
1 Bristol vs. Newcastle Falcons Feb 06 Bristol 9.90
2 Exeter Chiefs vs. Wasps Feb 06 Exeter Chiefs 7.00
3 Gloucester vs. London Irish Feb 06 Gloucester 9.30
4 Harlequins vs. Sale Sharks Feb 06 Harlequins 4.10
5 Leicester Tigers vs. Worcester Warriors Feb 06 Leicester Tigers 16.30
6 Saracens vs. Bath Feb 06 Saracens 13.10


February 1, 2022

Pie charts, Oz edition

From The Australian (via Luke Wihone on Twitter)

There are two issues here. First, they are called percentages for a reason: they should add up to 100. This is what it looks like with the missing 16%:

Even if you decided to rescale the percentages to give a two-candidate pie, though, the graph is wrong. This is what it would actually look like:

That’s Australia. A graph like this one, used in New Zealand politics, would seem to come under the Advertising Standards Authority decision saying that misleading graphs are not actually misleading if they have the numbers written on them. As I said at the time, I think this is bad as a matter of political norms and factually incorrect as to the impact of graphics. Maybe we can get it changed.

January 31, 2022

Net approval

There has been quite a bit of fuss on Twitter about this headline, and to a lesser extent about the reporting it leads to. The controversy is over the ‘net approval’ metric — proportion approving minus proportion disapproving — which is relatively new in NZ politics (and which, annoyingly, is not in the “full results” summary of the 1News Kantar poll). You might not guess from the headline that the poll gives Labour+Greens a majority in Parliament and Ardern twice the “preferred PM” percentage of anyone else.
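
For concreteness, the metric is just a subtraction; the figures in the sketch below are made up for illustration, not taken from the poll.

```python
# Net approval: proportion approving minus proportion disapproving.
# The example figures are hypothetical, not from the 1News Kantar poll.
def net_approval(approve_pct: float, disapprove_pct: float) -> float:
    """Net approval in percentage points; 'don't know' answers drop out."""
    return approve_pct - disapprove_pct

print(net_approval(48, 37))   # 11
print(net_approval(30, 42))   # -12
```

Note that the ‘don’t know’ group affects the two inputs but vanishes from the output, which matters for the government-vs-opposition comparison discussed below.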

Net approval is a commonly-reported summary for polls about the US president. According to Wikipedia, it dates back to 1937. That in itself is valuable for the US — continuity makes long-term comparisons easier — and attitudes to the President, separately from his party, seem to be a useful aspect of public mood to measure. In the US, it isn’t usual to compare the net approval of the President and the Leader of the Opposition; they don’t have one. You do sometimes get net approval ratings for Presidential candidates, but they seem to be less common than just ‘approval’ or ‘would vote for’ or more detailed breakdowns.

There’s a weaker case for personal approval ratings here than in the US, since people don’t vote for a Prime Minister separately from a party — if anything, it might be more interesting to get personal approval for electorate MPs — but it’s not irrelevant. You could argue, and some of the people complaining certainly did, that Jacinda Ardern has made her party more popular than it would be under Generic Replacement Prime Minister, and that Judith Collins made her party less popular than it would have been under Generic Replacement Leader. That’s a meaningful question on which net approval provides some limited data, in a different way than “preferred Prime Minister” does. However, I would argue that net personal approval is more useful as a comparison over time than as a comparison between government and opposition, because the level of ‘Don’t Know’ will intrinsically tend to be higher for leaders who aren’t actually in government. As the Herald says:

Just 10 per cent gave no answer or said they didn’t know, which is probably to be expected given Ardern has been Prime Minister for four years – most people have an opinion on her.

I’ve got no problem with net approval being reported. It’s definitely true that it has gone down for Ardern, though it’s not clear how much is a reduction in approval and how much is an increase in disapproval. I don’t think the headline is appropriate given how new ‘net approval’ is, and given the problems of comparing opposition and government net approval. It’s clear that Luxon’s approval is up, and that National’s support is up, though more at the expense of ACT than Labour. The second headline, if you click through from the front page, is more reasonable — “Jacinda Ardern’s personal approval rating plummets in new 1News poll, but Christopher Luxon won’t be getting too excited” — though even there I’d be happier if the headline were about one of the familiar metrics, or at least said ‘net’.

Briefly

  • The Financial Times reports that the head of Turkey’s official statistics agency has been sacked, and suggests that it’s because the government doesn’t like the inflation data.  This is counterproductive; the reasons that inflation estimates are useful rely on people believing them.
  • David Epstein has a nice post about the ‘everything in your fridge causes and prevents cancer’ problem
  • Entirely separately from the question of how it should be headlined, here’s a Twitter thread about the accuracy of the IHME Covid predictions (for the USA).
  • From Russell Brown, a post criticising the ‘Drug Harm Index’ 
  • Via Tobias Schneider on Twitter, some interesting beliefs about NATO membership from this report. The Saudi Arabia, South Africa, and China samples are acknowledged to skew wealthy and educated; the others are supposed to be representative. Yes, 11% of Russian respondents say they think Russia is in NATO.
  • A pointlessly bad graph from the White House — why would anyone make an obviously distorted y-axis like this when it doesn’t convey a particularly misleading impression?
  • A graph of Google mobility data (from @thoughtfulnz on Twitter) shows that the number of people out and about in retail and recreation locations was a bit higher than pre-Covid, then decreased to about pre-Covid levels after the Omicron traffic-light settings were introduced. From a public health point of view, we could do with being less normal and more like the US and UK, which are much lower than pre-Covid.
January 27, 2022

How many omicrons?

Radio NZ has a headline Omicron: Modelling suggests NZ could face peak of 80,000 daily infections, and the report starts “New Zealand could be facing 50,000 daily Omicron infections by Waitangi weekend”. This is technically correct, but in this context that is not the best kind of correct.

First, this is a model for infections, not cases. It includes asymptomatic infections (which are definitely a thing) and infections that just don’t get reported. The modelled peak for cases is a couple of weeks later, and about a factor of 7 lower. So 50,000 daily infections by Waitangi weekend, peaking at 80,000 a few weeks later, means 425 daily cases by Waitangi weekend, peaking around 11,000 daily cases by late March, if we believe the model. Given that we have been seeing reporting of cases, not infections, for the past two years, it’s misleading to quote a number that arrives twice as soon and is an order of magnitude higher.
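
The factor-of-7 conversion can be checked directly; the ascertainment factor below is the one read off the model as described above, not an independently measured quantity.

```python
# Back-of-envelope: translate modelled daily infections into expected
# reported cases, using the roughly factor-of-7 gap described above
# (an assumption read off the model, not a measured ascertainment rate).
ASCERTAINMENT_FACTOR = 7

def expected_daily_cases(daily_infections: float) -> int:
    """Expected reported cases for a given number of modelled infections."""
    return round(daily_infections / ASCERTAINMENT_FACTOR)

print(expected_daily_cases(80_000))  # about 11,400 at the modelled peak
```
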

Is it realistic that so many cases go unreported? It’s not clear. The best data on this, according to Trevor Bedford, who knows from Covid, is from the UK, where they have a mail-out prevalence survey. He estimates that the UK reports about 3 in 10 cases, and thinks it would be a bit lower for the US. I’d be surprised if it’s lower than the UK here, at least for the next few weeks. So that conflicts a bit with the IHME infections model.

So, is the model right? Well, on the one hand, it’s a serious effort at modelling and should be taken seriously.  On the other hand, it’s a model for everywhere in the world, so the amount of attention given to New Zealand data and outcomes will be quite limited.  The NZ modellers put rather more effort into modelling New Zealand data and New Zealand policies.

The reasons that New Zealand eventually controlled our Delta outbreak were specific to New Zealand: lots of new vaccinations, quite good adherence to interventions, being happy to take it outside, being on a small island in the tropics, whatever.  This sort of thing is hard for a worldwide model to pick up.  As Radio NZ says, the model has a prediction if we use masks, and a prediction if everyone gets boostered; these are lower.  It doesn’t have a prediction that accounts for capacity restrictions or vaccination of children. It’s a model where ‘flattening the curve’ fails completely.

Looking at the model in more detail, it does seem that there are some issues with the NZ data feeds. The model for testing looks like this:

That’s clearly wrong in two ways. First, it’s not going to be steady like that; more importantly, it’s too low by about a factor of 50. Here’s what the Ministry of Health says daily testing data looks like:

The vaccination model is also somewhat out of date:

It projects vaccinations as stopping in mid-November. They didn’t.

What can we say about the projections? Well, Victoria, with a slightly higher population, somewhat weaker restrictions, and a not wildly different vaccination rate, peaked at about 14,000 cases per day. So that’s clearly in the plausible range, and would be bad enough. It’s not out of the question that things get as bad as the IHME estimate, but I think it’s unrealistic to treat it as the most likely projection. And it certainly doesn’t need the confusion of ‘infections’ and ‘cases’.

January 24, 2022

United Rugby Championship Predictions for Week 12

Team Ratings for Week 12

The basic method is described on my Department home page.
Here are the team ratings prior to this week’s games, along with the ratings at the start of the season.

Team    Current Rating    Rating at Season Start    Difference
Leinster 15.38 14.79 0.60
Munster 10.22 10.69 -0.50
Ulster 7.47 7.41 0.10
Edinburgh 3.77 2.90 0.90
Connacht 3.60 1.72 1.90
Glasgow 3.55 3.69 -0.10
Sharks 1.63 -0.07 1.70
Bulls 1.38 3.65 -2.30
Stormers 0.89 0.00 0.90
Ospreys 0.07 0.94 -0.90
Cardiff Rugby -1.61 -0.11 -1.50
Scarlets -1.72 -0.77 -1.00
Lions -2.47 -3.91 1.40
Benetton -3.39 -4.50 1.10
Dragons -6.12 -6.92 0.80
Zebre -16.61 -13.47 -3.10


Performance So Far

So far there have been 56 matches played, 40 of which were correctly predicted, a success rate of 71.4%.
Here are the predictions for last week’s games.

Game    Match    Date    Score    Prediction    Correct
1 Lions vs. Sharks Jan 23 37 – 47 2.20 FALSE
2 Bulls vs. Stormers Jan 23 26 – 30 6.70 FALSE


Predictions for Week 12

Here are the predictions for Week 12. The prediction is my estimated expected points difference, with a positive margin being a win to the home team and a negative margin a win to the away team.

Game    Match    Date    Winner    Prediction
1 Dragons vs. Benetton Jan 29 Dragons 3.80
2 Ulster vs. Scarlets Jan 29 Ulster 15.70
3 Cardiff Rugby vs. Leinster Jan 30 Leinster -10.50
4 Connacht vs. Glasgow Jan 30 Connacht 6.50
5 Ospreys vs. Edinburgh Jan 30 Ospreys 2.80
6 Sharks vs. Stormers Jan 30 Sharks 5.70
7 Lions vs. Bulls Jan 30 Lions 1.10
8 Zebre vs. Munster Jan 30 Munster -20.30


Currie Cup Predictions for Round 3

Team Ratings for Round 3

The basic method is described on my Department home page.
Here are the team ratings prior to this week’s games, along with the ratings at the start of the season.

Team    Current Rating    Rating at Season Start    Difference
Bulls 9.32 7.25 2.10
Sharks 3.07 4.13 -1.10
Western Province 0.56 1.42 -0.90
Pumas -0.83 -3.31 2.50
Cheetahs -2.30 -2.70 0.40
Griquas -4.25 -4.92 0.70
Lions -5.56 -1.88 -3.70


Performance So Far

So far there have been 6 matches played, 5 of which were correctly predicted, a success rate of 83.3%.
Here are the predictions for last week’s games.

Game    Match    Date    Score    Prediction    Correct
1 Lions vs. Pumas Jan 20 9 – 50 6.10 FALSE
2 Sharks vs. Griquas Jan 20 24 – 23 13.90 TRUE
3 Western Province vs. Bulls Jan 20 21 – 40 -1.50 TRUE


Predictions for Round 3

Here are the predictions for Round 3. The prediction is my estimated expected points difference, with a positive margin being a win to the home team and a negative margin a win to the away team.

Game    Match    Date    Winner    Prediction
1 Griquas vs. Pumas Feb 03 Griquas 1.10
2 Bulls vs. Cheetahs Feb 03 Bulls 16.10
3 Sharks vs. Western Province Feb 03 Sharks 7.00


January 19, 2022

Coffee and houses

The idea of cutting down on lattes to be able to afford a house has cropped up again. The proximate cause is a Newshub story that doesn’t quite go there — but it does talk about rent costs and mortgage rates and about satisfying a home lender under the new CCCFA credit provisions, so it’s pretty close.

Now, first, I will agree that there are almost certainly people out there who haven’t emotionally grasped that buying 200 flat whites, one per day, costs (say) $900 that you could have spent on a $900 thing instead.  I don’t know if those people are likely to be helped by the story, but maybe it’s worth a try. At the level of housing, though, $900 in a year — or even two coffees every single day, for (say) $3300 — gets you nowhere in comparison with housing price inflation.  The same is true for avocados — maybe avocado toast in a cafe costs more than a coffee, but you don’t have it every day.
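
The arithmetic is easy to check; the $4.50 per-coffee price in the sketch below is an assumption, chosen so that 200 coffees come to the $900 quoted.

```python
# Checking the flat-white arithmetic. PRICE is an assumed $4.50 per
# coffee, backed out of the $900-for-200-coffees figure above.
PRICE = 4.50

one_a_day = 200 * PRICE        # roughly one coffee per working day for a year
two_a_day = 2 * 365 * PRICE    # two coffees every single day

print(one_a_day)  # 900.0
print(two_a_day)  # 3285.0 -- rounded above to "(say) $3300"
```

Either way, the total is two to three orders of magnitude short of recent annual house-price inflation.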

You might say that coffee (or avocado) is just one example, and that the point is to pay continuous and obsessive attention to shaving the costs of everything you buy. But to keep up with the rising cost of a mortgage deposit, many people would have to save more than their entire discretionary income; shaving pennies isn’t going to get you there.

Perhaps most importantly, though, these approaches can’t work for most people because the housing crisis in New Zealand isn’t due to a shortage of money to spend on housing. We’re collectively spending too much money on housing. Cutting down on coffee or avocado or any other discretionary spending, so as to put more money into the real-estate sector, isn’t going to make housing more affordable on average, even if everyone does it.

Vaccination: survey vs data

This showed up on my Twitter feed this morning, originally from here. It triggered a certain amount of wailing and gnashing of teeth from Americans.

The basic pattern looks plausible; about two-thirds of the US population vaccinated. If you look carefully at the graph, you see something else: the ‘not vaccinated’ group are broken down by attitude. This can’t be an all-ages picture: if anyone is doing large-scale surveys of attitudes to Covid vaccination among six-year-olds around the world it’s (a) a revolution in survey methods that we should know more about and (b) not all that relevant to whether the six-year-olds get vaccinated.

As the description at the link says, this was based on surveys of adults that were supposed to be nationally representative. They clearly weren’t. Based on doses delivered, the USA reached 75% vaccination for adults by October; Australia is currently over 95% in adults. The qualitative message might be true, but the numbers aren’t right.

We saw recently how two big non-random US surveys had overestimated vaccination rates, the opposite problem. Why do people do this when we already know the answer? The surveys are (potentially) useful because they ask other questions: they can break down vaccination by other attitudes and circumstances of the respondent, which the CDC data cannot do. It still matters whether the answers are right, though.