Does success in education rely on having certain genes?
If you have read media stories recently that say ‘yes’, you’d better read this article from the Genetic Literacy Project …
This is a common question from our students. Unfortunately, our perspective does not always lend itself easily to life outside research and academia: what I look for in a curriculum vitae and in a job interview usually relates to hiring someone who will become an academic staff member. However, fellow statistician Kylie Maxwell, the Young Statisticians representative on the New Zealand Statistical Association executive committee, has posted her own experience as part of the International Year of Statistics.
The Herald today ran this story claiming that people think New Zealand is a racist country, based on the results of a survey run for TV3’s new show The Vote. Viewers voted through Facebook, Twitter, The Vote website or by text.
I haven’t watched The Vote, but I would like to know whether its journalist presenters, presumably fans of accuracy, point out that such self-selecting polls are unscientific – the polite term for bogus. The best thing you can say is that such polls allow viewers to feel involved.
But that’s not a good thing if claims made as a result of these polls lead to way off-beam impressions being planted in the public consciousness; that’s often the way urban myths are born and prejudice stoked.
I’m not saying that racism doesn’t exist in New Zealand, but polls like this offer no insight into the issue or, worse, distort the truth.
It’s disappointing to see the Herald, which still, presumably, places a premium on accuracy, has swallowed The Vote press release whole, without pointing out its shortcomings or doing its homework to see what reliable surveys exist. TV3 must be very pleased with the free publicity, though.
Statistical decision theory is about making decisions in the presence of uncertainty. We can’t know everything, but we still need to make choices. In decision theory we assume that the world isn’t out to get us — if cigarette smoke is toxic, it is so regardless of whether or not we study it, and whether or not we’re trying to stamp it out. Murphy’s Law is true, but only as an engineering design principle, not a fact about the malevolence of Nature.
Game theory is the evil twin of decision theory — it’s about making choices in the presence of competition, when the other players aren’t precisely out to get you, but they are out to do the best for themselves. There are a few examples of game theory in medical statistics: How do you set up regulations so that making effective drugs is more profitable than making ineffective ones? How do you use new antibiotics, given that resistance will inevitably develop? Typically, though, game theory works best in ecology, where natural selection ensures that organisms behave as if they were trying to maximise their numbers of descendants given the behaviour of other organisms.
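The contrast with decision theory can be made concrete with the textbook prisoner’s dilemma (a standard illustration, not an example from the post): each player’s payoff depends on what the other does, and defecting is the best response to either choice, even though mutual cooperation pays both players more. A minimal sketch in Python:

```python
# Prisoner's dilemma payoffs (standard textbook values):
# payoffs[(my_move, other_move)] is my payoff.
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(other_move):
    """My payoff-maximising move, given the other player's move."""
    return max(["cooperate", "defect"],
               key=lambda me: payoffs[(me, other_move)])

# Defection is the best response whatever the other player does --
# unlike in decision theory, where Nature isn't playing against you.
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```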
A UCLA professor teaching a course in behavioural ecology decided to try to make his students really appreciate the problems of cooperation and competition that arise in game theory:
A week before the test, I told my class that the Game Theory exam would be insanely hard—far harder than any that had established my rep as a hard prof. But as recompense, for this one time only, students could cheat. They could bring and use anything or anyone they liked, including animal behavior experts. (Richard Dawkins in town? Bring him!) They could surf the Web. They could talk to each other or call friends who’d taken the course before. They could offer me bribes. (I wouldn’t take them, but neither would I report it to the Dean.) Only violations of state or federal criminal law such as kidnapping my dog, blackmail, or threats of violence were out of bounds.
Here’s a fun link which talks about the difference between truly random numbers and pseudo-random numbers. When we teach this, we often mention generation of random numbers (or at least the random number seed) from a radioactive source as one way of getting truly random numbers. Here is someone actually doing it. The sequel is well worth a watch too if you have the time.
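The key property of pseudo-random numbers is determinism: re-seed the generator and it replays exactly the same “random” sequence, which is why an unpredictable seed (for example, from a radioactive source) matters. A minimal illustration in Python:

```python
import random

# A pseudo-random generator is a deterministic algorithm:
# the same seed always reproduces the same sequence.
random.seed(42)
first_run = [random.randint(0, 9) for _ in range(5)]

random.seed(42)
second_run = [random.randint(0, 9) for _ in range(5)]

print(first_run == second_run)  # True: the sequences are identical
```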
What are Kiwi kids’ most common food allergies? What time do they go to sleep at night? How long can they stand on their left leg with their eyes closed?
Many thousands of students aged between 10 and 18 (Year 5 to Year 13) are due to start answering these questions – and a host of others about their lives – on Monday May 6, the first day of the new term and the day CensusAtSchool 2013 begins.
So far, 461 schools have registered to take part. The 32-question survey, available in English and Māori, aims to raise students’ interest in statistics and provide a fascinating picture of what they are thinking, feeling and doing. Teachers will administer the census in class between May 6 and June 14.
“A good way to engage students in mathematics and statistics is to start from a place that’s familiar to them – their own lives and the lives of their friends,” says co-director Rachel Cunliffe, a University of Auckland-trained statistician. “Students love taking part in the activities and then, in class with their teachers, becoming ‘data detectives’ to see what stories are in the results – and not just in their own classroom, but across the country.”
This year, students are being asked for the first time about food allergies to reflect the lack of data on the issue, says Cunliffe. “Students will be able to explore the dataset to compare the prevalence of self-reported allergies for different ages, ethnicities and sexes.”
Westlake Girls High School maths teacher Dru Rose is planning for about 800 Year 9 and 10 students to take part. She’s keen to see the data that will emerge from two other new questions about how many hours of homework students did the night before, and how many hours of sleep they had. “It’s real-life stuff,” she says. “We’ll be able to examine the data and see if there are any links.”
Andrew Tideswell, manager of the Statistics New Zealand Education Team, says our statistics curriculum is world-leading, and CensusAtSchool helps teachers and students get the most out of it.
“By engaging in CensusAtSchool, students have an experience that mirrors the structure of the national census, and it encourages them to think about the need for information and ways we might use it to solve problems,” he says. “Students develop the statistical literacy they need if New Zealand is to be an effective democracy where citizens can use statistics to make informed decisions.”
CensusAtSchool, now in its sixth edition, is a biennial collaborative project involving teachers, the University of Auckland’s Department of Statistics, Statistics New Zealand and the Ministry of Education. It is part of an international effort to boost statistical capability among young people, and is carried out in Australia, the United Kingdom, Canada, the US, Japan and South Africa.
The United States has surprisingly low social mobility: in every country, the children of the rich are more likely to be rich than the children of the poor, but the US is even worse than most Western countries.
Felix Salmon links to some graphs by Evan Soltas, looking at mobility in terms of education, with data from the US General Social Survey. He finds that people whose fathers did not go to university are much less likely to go to university themselves (unsurprising), and that this is true at all levels of income (more interesting).
I’ve repeated what Soltas did, but smoothing[1] the relationships to remove the visual noise, and also restricting to people aged 25-40 (rather than 18+).
In each panel, black is less than high school, dark red is high school, light brown is university or junior college and yellow is postgraduate. These are plotted by family income (in inflation-adjusted US dollars). The left panel is for people whose fathers had at least a junior college degree; the right is those whose fathers didn’t.
The difference is striking, and as Soltas says, may imply a greater long-term value for encouraging education than people had thought.
[1] For people who want the technical details: A sampling-weighted local-linear smoother using a Gaussian kernel with bandwidth $10000, i.e., svysmooth() in the R survey package. Bandwidth chosen using the ‘Goldilocks’ method[2].
[2] What? $3000 is too wiggly, $30000 is too smooth, $10000 is just right.
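For readers without the R survey package, the idea behind svysmooth() can be sketched in plain NumPy (a simplified stand-in, not the package’s implementation; the function name and the toy data below are made up for illustration): at each grid point, fit a straight line by least squares, weighting each observation by a Gaussian kernel times its sampling weight, and keep the fitted intercept.

```python
import numpy as np

def local_linear(x, y, grid, bandwidth, weights=None):
    """Sampling-weighted local-linear smoother with a Gaussian kernel.
    At each grid point x0, fit a weighted straight line in (x - x0)
    and return its intercept as the smoothed value."""
    if weights is None:
        weights = np.ones_like(x, dtype=float)
    fitted = []
    for x0 in grid:
        # Gaussian kernel weights times the sampling weights
        k = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2) * weights
        X = np.column_stack([np.ones_like(x), x - x0])
        Xw = X * k[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        fitted.append(beta[0])  # intercept = estimate at x0
    return np.array(fitted)

# Toy data on an income-like scale, just to exercise the smoother
rng = np.random.default_rng(1)
x = rng.uniform(0, 100_000, 500)
y = np.sin(x / 20_000) + rng.normal(0, 0.3, 500)
grid = np.linspace(10_000, 90_000, 9)
smooth = local_linear(x, y, grid, bandwidth=10_000)
```

Shrinking the bandwidth makes the curve wigglier (less smoothing); growing it flattens the curve toward a single straight line — which is the trade-off the ‘Goldilocks’ footnote is describing.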
From Bradley Voytek, apparently from Reddit, but unfortunately not further sourced there either.
I think the legend at the bottom right just makes this perfect.
There’s a new UK report by Ben Goldacre, “Building Evidence into Education”, which has been welcomed by the Teacher Development Trust.
Part of the introduction is worth quoting in detail:
Before we get that far, though, there is a caveat: I’m a doctor. I know that outsiders often try to tell teachers what they should do, and I’m aware this often ends badly. Because of that, there are two things we should be clear on.
Firstly, evidence based practice isn’t about telling teachers what to do: in fact, quite the opposite. This is about empowering teachers, and setting a profession free from governments, ministers and civil servants who are often overly keen on sending out edicts, insisting that their new idea is the best in town. Nobody in government would tell a doctor what to prescribe, but we all expect doctors to be able to make informed decisions about which treatment is best, using the best currently available evidence. I think teachers could one day be in the same position.
Secondly, doctors didn’t invent evidence based medicine. In fact, quite the opposite is true: just a few decades ago, best medical practice was driven by things like eminence, charisma, and personal experience. We needed the help of statisticians, epidemiologists, information librarians, and experts in trial design to move forwards. Many doctors – especially the most senior ones – fought hard against this, regarding “evidence based medicine” as a challenge to their authority.
In retrospect, we’ve seen that these doctors were wrong. The opportunity to make informed decisions about what works best, using good quality evidence, represents a truer form of professional independence than any senior figure barking out their opinions. A coherent set of systems for evidence based practice listens to people on the front line, to find out where the uncertainties are, and decide which ideas are worth testing. Lastly, crucially, individual judgement isn’t undermined by evidence: if anything, informed judgement is back in the foreground, and hugely improved.
This is the opportunity that I think teachers might want to take up.
As the Novopay debacle continues, Stuff and the Herald are both reporting a survey done of its members by the Post-Primary Teachers Association. At Stuff, the story begins:
Nearly 36 per cent of secondary school staff are not reporting their Novopay glitches, a survey has found, casting doubt on the Government’s claims of an improvement in the payroll system.
The Post Primary Teachers’ Association found that 38.2 per cent of staff were underpaid, overpaid or not paid at all during the February 20 pay cycle.
That compares with only 1.9 per cent of staff who logged problems with the system, as reported by Novopay Minister Steven Joyce using PricewaterhouseCoopers figures.
In the Herald:
Up to 1600 teachers did not report complaints through official channels over mistakes with their pay administered through the Novopay pay roll system, according to a union survey.
The Post Primary Teachers Association surveyed 4500 teachers for the pay period ending February 20 and found 36 per cent had not formally reported errors with their pay because they were either “too embarrassed” or feared putting school administrators under more pressure.
In this case the PPTA report is easily available (59 page PDF), so we can find out what was actually done. The union surveyed all its (roughly 18000) members, using an online poll. They received 4659 responses from members, of whom 1712 were affected.
Obviously, teachers who had experienced problems would be more likely to respond, especially if the reason they hadn’t complained to the local administrators was because they didn’t want to put them under more pressure. The PPTA report handles this issue very well. On page 13 they give calculated Novopay error rates under the assumption that 100% of those with problems responded, and under the assumption that the responses are representative. This gives upper and lower bounds, and the lower bound is substantially higher than Novopay is claiming.
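The bounding arithmetic uses only the figures quoted above (a back-of-envelope sketch; the report’s page-13 rates are calculated more carefully, so they won’t match exactly):

```python
members = 18_000    # roughly all PPTA members surveyed
responses = 4_659   # responses received
affected = 1_712    # respondents reporting a Novopay problem

# Upper bound: responses are representative of all members
upper = affected / responses   # about 37%

# Lower bound: every affected member responded, so all
# non-respondents were paid correctly
lower = affected / members     # about 9.5%

print(f"error rate between {lower:.1%} and {upper:.1%}")
```

Even the lower bound, where response bias is assumed to be as extreme as possible, is well above the 1.9% figure attributed to Novopay.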
In the media stories, things are a bit confused. The 36% or 38% are proportions assuming the responses were representative. The numbers in the vicinity of 1600 look like the number assuming that everyone adversely affected responds, perhaps minus an estimate of how many of them Novopay reported. I haven’t been able to reconcile them with the PPTA report. In any case, neither paper accurately described how the data were collected, even though this was made clear by the PPTA.