Statistical evidence and cheating at chess
At the Zadar Open chess competition last month, a player who had previously been at the low end of the chess master range did extraordinarily well, playing at the level of the world’s very best. Or at the level of a good computer program. There was absolutely no physical evidence to suggest that he had been cheating, but his level of improvement and the agreement between his moves and those produced by top computer programs are striking. On the other hand, if you are going to allow accusations in the absence of any corroborating physical evidence, it’s also essentially impossible for an innocent person to mount a defense.
KW Regan, who is a computer scientist and chess master, has analysed historical chess competition data, looking at agreement between actual moves and those the computer would recommend, and he claims the Zadar Open results should happen less often than once in a million matches. In his letter to the Association of Chess Professionals, he raises these questions:
1. What procedures should be instituted for carrying out statistical tests for cheating with computers at chess and for disseminating their results? Under whose jurisdiction should they be maintained?
2. How should the results of such tests be valued? Under what conditions can they be regarded as primary evidence? What standards should there be for informing different stages of both investigative and judicial processes?
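For a rough sense of the kind of calculation behind a “less often than once in a million” claim, here is a toy sketch in Python. It is not Regan’s actual method, and all the numbers (moves analysed, baseline agreement rate, observed agreement) are made up purely for illustration.

# Toy sketch only, not Regan's model: a one-sided binomial tail probability
# asking how surprising a given rate of agreement with the engine's top move
# would be if the player were performing at an assumed historical baseline.
from scipy.stats import binom

n_moves = 300            # hypothetical number of analysed moves
baseline_rate = 0.55     # hypothetical historical engine-agreement rate
observed_matches = 220   # hypothetical agreement observed in the suspect games

# P(agreement >= observed | baseline): the chance of seeing at least this
# much agreement from a player performing at their usual level
p_value = binom.sf(observed_matches - 1, n_moves, baseline_rate)
print(f"tail probability: {p_value:.1e}")

The real analysis is considerably more careful than a fixed coin-flip model: it models the probability of each move as a function of the position and the player’s rating, and it has to allow for the many players and tournaments being screened.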
There’s a New York Times story, and Prof Regan also has a blog post. (via)
Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.
“he claims the Zadar Open results should happen less often than once in a million matches”
That’s really high. ;-)