Posts filed under Risk (222)

October 17, 2012

Consenting intellectual S&M activity

That’s how Ben Goldacre described the process of criticism and debate that’s fundamental to science, at a TED talk last year. At this time of year we expose a lot of innocent young students to this process: yesterday it was the turn of the statistical consulting course, next month it’s BSc(Hons) and MSc research projects, and then the PhD students.

Here’s Ben Goldacre’s whole talk.


October 15, 2012

Reporting risk safely

An interesting post on how media reporting of risk could actually make us less safe. (via)

October 10, 2012

Classification problems

I was interested in how the new psychoactive substances laws were going to handle the problem of, on one hand, the unsafe legal highs that they do want to ban, and on the other hand, the potentially psychoactive substances that they don’t want to have to regulate. Safety testing for new medications is a complicated scientific and statistical problem and hard to get right even when you aren’t trying to gerrymander it.

The regulatory impact statement says they are just going to do all this by fiat. Alcohol, tobacco, caffeine (and presumably kava) will be exempted so they can be handled by existing law; currently-banned drugs will still be banned;  and things like nutmeg and a range of ornamental plants will be classified by fiat as not psychoactive if anyone raises the issue.  In the case of any ambiguity, the regulator will get to just decide. I suppose that’s the only practical way to do it, given the goals.

The headlines so far have been about the cost of approval, which is about twice what MEDSAFE charges for new medications. That’s  not unreasonable considering that legal highs are likely to be less chemically and biologically familiar than most medications.  However, the costs are basically irrelevant unless the safety criteria are written loosely enough that some psychoactive compound could conceivably pass them.  Since the criteria don’t have to be consistent with any of the rest of drug and food laws, and it’s unlikely that anyone will come up with the testing budget, there’s no upside to making them realistic.

It will still be interesting to see how the criteria end up being written, and whether caffeine, nutmeg, (or, in the other direction, some cannabis preparation) would be able to pass them. Obviously alcohol and tobacco wouldn’t.

October 9, 2012

False positives and copyright

Any binary decision requires us to consider both the probability of getting it right and the consequences of getting it wrong.  Many legal systems have traditionally felt that wrongful convictions are worse than wrongful acquittals, and this forms part of the support for the presumption of innocence.

In other areas of the law, the incentives are different.  In automated detection of unauthorized copying, and resulting ‘takedown’ notices under laws such as the US DMCA, there is effectively no risk to the copyright holder from false positives, so there is not much incentive to avoid them.
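
To see how the incentives play out, here’s a toy expected-cost calculation (my own illustration; the function and numbers are invented for the example, not taken from any real takedown system):

```python
# Toy model: a rights-holder decides whether to send a takedown notice,
# given an automated matcher's probability p that a file actually infringes.
def expected_costs(p, cost_false_positive, cost_false_negative):
    """Expected cost per file of sending vs. not sending a notice."""
    send = (1 - p) * cost_false_positive  # wrongly taking down legal content
    keep = p * cost_false_negative        # leaving infringing content up
    return send, keep

# When false positives cost the sender nothing, sending always wins,
# no matter how unreliable the automated matching is:
for p in (0.9, 0.5, 0.1, 0.01):
    send, keep = expected_costs(p, cost_false_positive=0.0, cost_false_negative=1.0)
    print(f"p = {p}: {'send' if send < keep else 'do not send'}")
```

Any positive cost for false positives would move the break-even point; with a cost of zero there isn’t one.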

An interesting example (via the far-from-unbiased BoingBoing) is this takedown notice, one of the stream routinely posted by Google at ChillingEffects.  The first few pages just show torrent sites that posted unauthorised copies of MS Office and deserve what’s coming to them, but if you scroll down to Copyright Claim #2, it starts to look different: (more…)

October 1, 2012

Computer hardware failure statistics

Ed Nightingale, John Douceur, and Vince Orgovan at Microsoft Research have analyzed hardware failure data from a million ordinary consumer PCs, using data from automated crash-reporting systems. (via)

Their main finding is that if something goes wrong with your computer, you should panic immediately, rather than being relieved when it seems to recover. Machines that accumulated at least 5 days of full-time use over eight months had a 1/470 chance of a hard disk failure, but those that had one hard disk failure had a 30% chance of a second failure, and those with a second failure had nearly a 60% chance of a third failure. Do you feel lucky?
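
To make the escalation explicit, here are those figures as relative risks (just arithmetic on the numbers quoted above):

```python
p_first = 1 / 470               # disk failure, machines with 5+ days of use
p_second, p_third = 0.30, 0.60  # conditional on one and on two previous failures

print(f"a first failure makes a second about {p_second / p_first:.0f} times as likely")  # ~140
print(f"a second failure makes a third about {p_third / p_second:.0f} times as likely")  # ~2
```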

It’s obvious that computers that have had one failure are basically doomed, but this still leaves open an interesting statistical question. Does the risk of a second failure increase because the first failure damages the computer, or because the first failure picks out a set of computers that were always a bit dodgy? I think the researchers missed something here: they tested whether the times between failures have an exponential distribution (the distribution for events that don’t have any memory), and found that they didn’t. That doesn’t distinguish between the situation where each computer has its own constant risk of failure and the situation where each machine starts off the same but some have risk increasing over time.
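
Here’s a minimal simulation sketch of the problem (my illustration, with invented parameters): a ‘dodgy machines’ model, where each machine has its own constant risk, and a ‘damage’ model, where identical machines wear out, both produce times between failures that reject the exponential distribution, so the test doesn’t tell you which mechanism is at work.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20_000

# Frailty: each machine has its own constant failure rate (gamma-distributed
# across machines), so each machine is memoryless but the pooled gaps are not.
rates = rng.gamma(shape=2.0, scale=0.5, size=n)
gaps_frailty = rng.exponential(1.0 / rates)

# Damage: machines start identical, but the hazard increases with time
# (a Weibull with shape > 1 is a standard wear-out model).
gaps_damage = rng.weibull(1.5, size=n)

for name, gaps in [("frailty", gaps_frailty), ("damage", gaps_damage)]:
    # KS test against an exponential with the same mean (approximate, since
    # the scale is estimated from the data, but the deviations here are large)
    d, p = stats.kstest(gaps, "expon", args=(0, gaps.mean()))
    print(f"{name}: KS statistic = {d:.3f}, p = {p:.1g}")  # both reject
```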

For computers, it doesn’t matter very much which of these possibilities is true, but in some other contexts it does. For example, if young people sent to prison are more likely to reoffend, we want to know whether the prison exposure was partly responsible, or whether these particular people were likely to reoffend anyway. Unfortunately, this turns out to be hard.

September 20, 2012

Roundup scare

You’ll probably be seeing local stories about GM corn and the weedkiller Roundup coming out soon. Here’s an overseas example. I was asked by the NZ Science Media Centre for comment on the research paper, and said

I do not think the herbicide risks look convincing, especially with respect to cancer.  There is no consistent pattern in deaths with dose of either Roundup or GM corn: this is not just showing a threshold, as the authors suggest, since in all six of their comparisons the highest-dose group has lower mortality than lower-dose groups.  The hypothesis of hormone-related cancer differences is not supported by the multivariate biochemical analysis, which found differences in salt excretion but not in testosterone or estradiol.  The strongest conclusion that could be drawn from this study is that it would be worth studying a larger group of controls than just 10 and (since there is no sign of dose-response) just a single low dose of Roundup or GM corn.

The researchers say “It is noteworthy that the first two male rats that died in both GM treated groups had to be euthanized due to kidney Wilm’s tumors”. This is noteworthy, but perhaps not in the way the researchers mean: increases in human Wilms’ tumor from GM corn or herbicide residues would already be obvious even at rates hundreds of times lower than reported in these rats.

At the time, I had only read the research paper, not any of the media stories.  There is a detail in the story I linked above that is absolutely outrageous:

Breaking with a long tradition in scientific journalism, the authors allowed a selected group of reporters to have access to the paper, provided they signed confidentiality agreements that prevented them from consulting other experts about the research before publication.

That is, we’ll let you have a scoop provided we can make sure there’s no risk of getting it right or disagreeing with us. Embargoes on stories about scientific papers are standard, but one of the justifications is precisely to provide journalists with time to get the facts right. It would be interesting to know how many of the journalists who signed these agreements were willing to admit to it in their stories.


September 19, 2012

They don’t reverse into mountains

That’s how I first heard the theory that it’s safer to sit at the back of a plane than at the front. It turns out there’s something to it: back in 2007, Popular Mechanics magazine looked at data on all the US plane crash fatalities over 35 years and found a roughly 40% higher risk of death in front of the wing.

Stuff is now reporting on a British TV stunt, where a plane full of crash-test dummies was deliberately crashed in the Sonoran Desert.  The test found that the front of the plane experienced higher accelerations, so you would be better off near the back. It also found that the brace position helped.  This, of course, applies to only one form of plane crash — the ‘controlled flight into terrain’, and only to a subset of those.  In some crashes no-one dies; in others everyone dies.

The conclusion that economy class is safer in general is much more dubious. It can’t be much safer, since the chance of dying in a ‘fatal air incident’ is very, very, very small wherever you sit (Wikipedia claims about 1 per 10 million journeys). Crashes are only a subset of fatal incidents, and the benefit of sitting near the back must be substantially smaller even than this. As a comparison, on an 8+ hour flight, the chance of pulmonary embolism is about 16 times higher than the chance of dying in a fatal air incident, and nearly 50 times higher for a 12+ hour flight, so a relatively small reduction in risk in first class would outweigh any crash benefit.
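
As a back-of-envelope check, using the numbers quoted above (and generously attributing the whole 40% to seat choice):

```python
p_fatal = 1e-7                # ~1 fatal incident per 10 million journeys (Wikipedia)
excess_front = 0.4 * p_fatal  # upper bound on the front-vs-back difference
p_pe_8h = 16 * p_fatal        # pulmonary embolism, 8+ hour flight

print(f"front-of-wing excess risk: about {excess_front:.0e} per journey")  # 4e-08
print(f"PE risk, 8+ hour flight:   about {p_pe_8h:.0e}")                   # 2e-06, 40x larger
```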


September 13, 2012

Huffing coverage

As you will doubtless have heard, 63 people, mostly kids and youth, have died from ‘huffing’ over the past 12 years.  That’s about the same as the number of youths who commit suicide in six months.

Suicide coverage in the media is restrained by awareness of the risks of over-publicising it. Along those lines, the NZ Drug Foundation (on Twitter) points us to the Chief Coroner’s recommendations (page 3) for media coverage of ‘volatile substance abuse’:

The reporting of all volatile substance abuse is recognised  as being of a highly sensitive nature. Reporting has the potential to assist in the reduction of abuse, or conversely increase the incidence by promoting use and the availability of products that may be used. Although there are no inhalant specific media guidelines, the following considerations based on those expressed by the 1985 Senate Select Committee on Volatile Fumes in Canberra, Australia may be a useful guide:
• The products subject to abuse should not be named and the methods used should not be described or depicted.
• Reports of inhalant abuse should be factual and not sensationalised or glamourised.
• The causes of volatile substance abuse are complex and varied. Reporting on deaths should not be superficial.
• Stories should include local contact details for further information or support.

The Drug Foundation say they have a site with resources, but it seems to be down at the moment (up again now).

August 23, 2012

Where does 80m of molten rock end up?

Wherever it wants.

Apparently, if the next Auckland volcano is in the worst possible location, 500,000 people might need to be evacuated. Even in a better location it will be no fun at all, with the only redeeming feature being that even Peter Thompson will have to concede that it’s not the best time to buy a house. It’s good that research and planning is underway, so the mayor’s office will have a set of contingency plans filed under “V” for when the next eruption happens, but the need for public panic awareness is perhaps less than for the Alpine Fault.

From a statistical viewpoint, there are two factors that go into how much you should do to prepare for an emergency: how likely it is, and how much the preparation will help.   The earthquake wins on both of these: it’s about ten times as likely in the next 50 years, and we can do a lot more to reduce the damage.  With the Alpine Fault, we need to decide how much to spend on strengthening roads, bridges, and houses, and making water and sewer systems less likely to break.  Public discussion and pressure on the government are important.  On the other hand, if a river of molten rock heads south from One Tree Hill, or a chunk of tuff the size of a refrigerator lands on my roof, my house isn’t going to survive no matter how good the building standards are.
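
Put as arithmetic, the expected value of preparation is the probability of the event times the damage averted. A hypothetical sketch (the roughly 10:1 probability ratio comes from the comparison above, via the ~30% 50-year quake figure from the Alpine Fault post below; the damage-averted fractions are invented):

```python
def prep_value(p_event, fraction_of_damage_averted):
    # expected benefit of preparing, per unit of potential damage
    return p_event * fraction_of_damage_averted

# The quake is about ten times as likely over 50 years, and strengthening
# roads, bridges, and pipes averts far more of the damage than anything you
# can do about a vent opening under your house (fractions are made up):
print(prep_value(0.30, 0.5))   # Alpine Fault quake: 0.15
print(prep_value(0.03, 0.05))  # Auckland volcano:   0.0015
```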

In terms of things we can actually influence, we might want to worry more about which bits of Auckland will be under water in 50-100 years, not which bits will be under lava.

June 28, 2012

Alpine fault: can we panic now?

The Herald has a good report of research to be published in Science tomorrow, studying earthquakes on the Alpine fault.  By looking at a river site where quakes disrupted water flow, interrupting peat deposition with a layer of sediment, the researchers could get a history of large quakes going back 8000 years. They don’t know exactly how big any of the quakes were, but they were big enough to rupture the surface and affect water flow, so at the least they would mess up roads and bridges, and disrupt tourism.

Based on this 8000-year history, it seems that the Alpine fault is relatively regular in how often it has earthquakes: more so than the San Andreas Fault in California, for example.  Since the fault has major earthquakes about every 330 years, and the most recent one was 295 years ago, it’s likely to go off soon.  Of course, ‘soon’ here doesn’t mean “before the Super 15 final”; the time scales are a bit longer than that.

We can look at some graphs to get a rough idea of the risk over different time scales.  I’m going to roughly approximate the distribution of the times between earthquakes by a log-normal distribution, that is, the logarithm of the times has a Normal distribution.

This is a simple and reasonable model for time intervals, and it also has the virtue of giving the same answers that the researchers gave to the press.  Using the estimates of mean and variation in the paper, the distribution of times to the next big quake looks like the first graph.  The quake is relatively predictable, but “relatively” in this sense means “give or take a century”.

Now, by definition, the next big quake hasn’t happened yet, so we can throw away the part of this distribution that’s less than zero, and rescale the distribution so it still adds up to 1, getting the second graph. The chance of a big quake is a bit less than 1% per year — not very high, but certainly worth doing something about. For comparison, that’s about 2-3 times a middle-aged woman’s annual risk of being diagnosed with breast cancer.

The Herald article (and the press release) quote a 30% chance over 50 years, which matches this lognormal model.  At 80 years there’s a roughly 50:50 chance, and if we wait long enough the quake has to happen eventually.
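
If you want to check the arithmetic, here’s a sketch of the conditional calculation. The mean recurrence of about 330 years is from the paper; the spread is my reading of ‘give or take a century’ (I’ve assumed a standard deviation of 120 years), so the outputs only roughly match the published figures:

```python
import numpy as np
from scipy import stats

mean, sd = 330.0, 120.0  # recurrence interval in years (sd is an assumption)
sigma = np.sqrt(np.log(1 + (sd / mean) ** 2))
mu = np.log(mean) - sigma ** 2 / 2  # lognormal parameters from mean and sd
recurrence = stats.lognorm(s=sigma, scale=np.exp(mu))

elapsed = 295.0  # years since the most recent big quake

def p_within(years):
    # condition on no quake yet: drop the part of the distribution below
    # `elapsed` and rescale the rest, as described above
    upper = recurrence.cdf(elapsed + years) - recurrence.cdf(elapsed)
    return upper / recurrence.sf(elapsed)

print(f"within 50 years: {p_within(50):.0%}")  # about 30%
print(f"within 80 years: {p_within(80):.0%}")  # roughly 50:50
print(f"within 1 year:   {p_within(1):.1%}")   # a bit under 1%
```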

The risk of a major quake in any given year isn’t all that high, but the problem isn’t going away and the quake is going to make a serious mess when it happens.


[Update: Stuff also has an article. They quote me (via the Science Media Centre), but I’m just describing one of the graphs in the paper: Figure 3B, if you want to go and check that I can do simple arithmetic.]