Posts filed under Social Media (95)

June 1, 2013

Information and its consequences, on BBC radio

From BBC Radio: listen online

On Start the Week, Emily Maitlis talks to the Executive Chairman of Google, Eric Schmidt, about the digital future: a future where everyone is connected, but where ideas of privacy, security and community are transformed. Former WikiLeaks employee James Ball asks how free we are online. The curator Honor Harger looks to art to understand this new world of technology. And worried about this brave new world? David Spiegelhalter offers a guide to personal risk and the numbers behind it.

(via @cjbayesian)

May 2, 2013

Why does no-one listen to us?

Dan Kahan, a researcher in the Cultural Cognition Project at Yale Law School, has an interesting post on “the science communication problem”:

The motivation behind this research has been to understand the science communication problem. The “science communication problem” (as I use this phrase) refers to the failure of valid, compelling, widely available science to quiet public controversy over risk and other policy relevant facts to which it directly speaks. The climate change debate is a conspicuous example, but there are many others.

April 27, 2013

Facebook data analysis and visualisation

From the Stephen Wolfram blog: lots of analysis of Facebook friend data, with well-designed graphs. For example, this graph shows how the median age of your ‘friends’ is related to your own age.

[Graph from the Wolfram blog: median age of Facebook friends vs. your own age]

Those under 40 have Facebook friends of about the same age as themselves, but after that the age distribution of friends levels off and becomes much more variable.
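
(An aside on how such a summary is put together: the sketch below is not Wolfram’s code, just a minimal illustration on simulated data. The graph is essentially a grouped summary: for each value of your own age, the median and quartiles of your friends’ ages.)

```python
# A minimal sketch, not Wolfram's analysis: simulated (user_age, friend_age) pairs,
# summarised by the user's age to get the median and quartiles of friends' ages.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
user_age = rng.integers(13, 70, size=20_000)
# Toy assumption: friends are close in age for young users, more spread out later.
friend_age = np.clip(user_age + rng.normal(0, 2 + user_age / 10), 13, 90)

pairs = pd.DataFrame({"user_age": user_age, "friend_age": friend_age})
summary = pairs.groupby("user_age")["friend_age"].quantile([0.25, 0.5, 0.75]).unstack()
print(summary.head())  # one row per user age: lower quartile, median, upper quartile
```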

New science journalists

Three journalists who have just finished internships in science journalism with NPR News in Washington, DC, might be useful to follow in the future:

April 26, 2013

That TV3 poll on racism? Bogus!

The Herald today ran this story claiming that people think New Zealand is a racist country, based on the results of a survey run for TV3’s new show The Vote. Viewers voted through Facebook, Twitter, or The Vote website, or by text.

I haven’t watched The Vote, but I would like to know whether its journalist presenters, presumably fans of accuracy, point out that such self-selecting polls are unscientific – the polite term for bogus. The best thing you can say is that such polls allow viewers to feel involved.

But that’s not a good thing if claims made as a result of these polls lead to way off-beam impressions being planted in the public consciousness; that’s often the way urban myths are born and prejudice stoked.

I’m not saying that racism doesn’t exist in New Zealand, but polls like this offer no insight into the issue or, worse, distort the truth.

It’s disappointing to see that the Herald, which still, presumably, places a premium on accuracy, has swallowed The Vote press release whole, without pointing out its shortcomings or doing its homework to see what reliable surveys exist. TV3 must be very pleased with the free publicity, though.

April 25, 2013

Internet searches reveal drug interactions?

The New York Times has a story about finding interactions between common medications using internet search histories. The research, published in the Journal of the American Medical Informatics Association, looks at search histories containing searches for two medication names and also for possible symptoms. For example, their primary success was finding that people who searched for information on paroxetine (an antidepressant) and pravastatin (a cholesterol-lowering drug) were more likely to search for information on a set of symptoms that can be caused by high blood sugar. These two drugs are now known to interact to cause high blood sugar in some people, although this wasn’t known at the time the internet searches took place.

This approach is promising, but like so many approaches to medication safety it is limited by the huge number of possibilities. The researchers knew where to look: they knew which drugs to examine and which symptoms to follow. With thousands of different medications, leading to millions of possible interacting pairs, and dozens or hundreds of sets of symptoms, it becomes much harder to know what’s going on.
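
To make the comparison concrete, here’s a rough sketch of the idea (my illustration, not the published method; the user histories and symptom terms are invented): among users whose searches mention both drugs, how often do hyperglycaemia-related terms also show up, compared with users who searched for only one of them?

```python
# Toy data and term lists, invented to illustrate the shape of the comparison.
from typing import Iterable, List, Set

HYPERGLYCAEMIA_TERMS = {"thirst", "frequent urination", "blurred vision", "fatigue"}

def symptom_rate(histories: Iterable[Set[str]], required: Set[str]) -> float:
    """Among histories containing all the required search terms, the fraction
    that also contain at least one hyperglycaemia-related term."""
    relevant = [h for h in histories if required <= h]
    if not relevant:
        return float("nan")
    return sum(1 for h in relevant if h & HYPERGLYCAEMIA_TERMS) / len(relevant)

# Each set is one hypothetical user's search terms over the study period.
histories: List[Set[str]] = [
    {"paroxetine", "pravastatin", "thirst"},
    {"paroxetine", "headache"},
    {"pravastatin", "fatigue"},
    {"paroxetine", "pravastatin", "blurred vision"},
    {"paroxetine", "pravastatin", "recipes"},
]

both = symptom_rate(histories, {"paroxetine", "pravastatin"})
alone = symptom_rate([h for h in histories if "pravastatin" not in h], {"paroxetine"})
print(f"searched both drugs: {both:.2f}; searched paroxetine without pravastatin: {alone:.2f}")
```

Scaling this from one known drug pair and one known symptom set to millions of candidate pairs is exactly where the problem described above comes in.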

Drug safety is hard.

April 8, 2013

All clinical trial results should be published

If you’re one of the 40,000 or so people who have signed the AllTrials petition, you will have received an email from Ben Goldacre asking for more help.

The Declaration of Helsinki, the major document on research ethics in medicine, already states:

30. Authors, editors and publishers all have ethical obligations with regard to the publication of the results of research. Authors have a duty to make publicly available the results of their research on human subjects and are accountable for the completeness and accuracy of their reports. They should adhere to accepted guidelines for ethical reporting. Negative and inconclusive as well as positive results should be published or otherwise made publicly available. Sources of funding, institutional affiliations and conflicts of interest should be declared in the publication. Reports of research not in accordance with the principles of this Declaration should not be accepted for publication.

The petition is trying to get these principles enforced. Publication bias isn’t just a waste of the voluntary participation of (mostly sick) people in research. Publication bias means we don’t know which treatments really work.

In my first job (as a lowly minion) in medical statistics, my boss was Dr John Simes, an oncologist. Back in the 1980s he had shown that publication bias in cancer trials gave the false impression that a more toxic chemotherapy regimen for ovarian cancer had substantial survival benefits to weigh against the side-effects. Looking at all registered (published and unpublished) trials showed the survival benefit was small and quite possibly non-existent. The specific treatment regimens he studied have long been outmoded, but his message is still vitally important.

These examples illustrate an approach to reviewing the clinical trial literature, which is free from publication bias, and demonstrate the value and importance of an international registry of all clinical trials.
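
A toy simulation (not Simes’ analysis; all numbers invented) shows the mechanism: if trials only get “published” when they show a statistically significant benefit, the published literature suggests a benefit even when the true effect is zero, whereas pooling all registered trials does not.

```python
# Toy publication-bias simulation: the true treatment effect is zero, but only
# "significant" positive trials make it into the published set.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_per_arm = 500, 50
se = np.sqrt(2 / n_per_arm)  # standard error of a difference in means (unit-variance arms)

effects = np.array([
    rng.normal(0.0, 1, n_per_arm).mean() - rng.normal(0.0, 1, n_per_arm).mean()
    for _ in range(n_trials)
])
published = effects[effects / se > 1.96]  # only "significantly beneficial" trials get published

print(f"all registered trials: mean estimated effect {effects.mean():+.3f} ({n_trials} trials)")
print(f"published trials only: mean estimated effect {published.mean():+.3f} ({published.size} trials)")
```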

Nearly thirty years later, we are still missing information about the benefits and risks of drugs.

For example, influenza researchers have used detailed simulation models to assess control strategies for pandemic flu. These simulation models need data about the effectiveness of drugs and vaccines. When the next flu pandemic hits, we really need these models to be accurate, so it’s especially disturbing that Tamiflu is one of the drugs with substantial unpublished clinical trial data.

April 1, 2013

Briefly

Despite the date, this is not in any way an April Fools post.

  • “Data is not killing creativity, it’s just changing how we tell stories”, from TechCrunch
  • Turning free-form text into journalism: Jacob Harris writes about an investigation into food recalls (nested HTML tables are not an open data format either)
  • Green labels look healthier than red labels, from the Washington Post. When I see this sort of research I imagine the marketing experts thinking “how cute, they figured that one out after only four years”
  • Frances Woolley debunks the recent stories about how Facebook likes reveal your sexual orientation (with comments from me). It’s amazing how little you get from the quoted 88% accuracy, even if you pretend the input data are meaningful. There are some measures of accuracy that you shouldn’t be allowed to use in press releases.
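
One back-of-the-envelope way to see the point about headline accuracy (numbers invented for illustration, not taken from the Facebook study): when the outcome is uncommon, a classifier that is right 88% of the time can still be wrong about most of the people it labels as positive.

```python
# Invented numbers: an 88%-accurate classifier applied to a 5% base-rate outcome.
prevalence = 0.05    # assumed fraction of people actually in the predicted group
sensitivity = 0.88   # true positive rate
specificity = 0.88   # true negative rate

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)
ppv = true_pos / (true_pos + false_pos)  # P(actually positive | labelled positive)
accuracy = prevalence * sensitivity + (1 - prevalence) * specificity

print(f"overall accuracy: {accuracy:.0%}")      # 88%
print(f"positive predictive value: {ppv:.0%}")  # about 28%: most people flagged are misclassified
```

Quoting accuracy without the base rate (or sensitivity and specificity separately) tells you very little about how often the individual predictions are right.
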
March 17, 2013

Briefly

  • When data gets more important, there’s more incentive to fudge it. From the Telegraph: “senior NHS managers and hospital trusts will be held criminally liable if they manipulate figures on waiting times or death rates.”
  • A new registry for people with rare genetic diseases, emphasizing the ability to customise what information is revealed and to whom.
  • Wall St Journal piece on Big Data. Some concrete examples, not just the usual buzzwords.
  • Interesting visualisations from RevDanCat.

March 16, 2013

Do scientists read newspapers or blogs?

A new paper surveyed neuroscientists in Germany and the US about where they get information on science-related news stories.

Based on the response of some 250 scientists (fairly evenly divided between the countries), the researchers found that scientists tended to give more weight to the influence of traditional media. For instance, more than 90 percent of neuroscientists in both countries said they relied on traditional journalist sources – both in print and online – to follow news about scientific events compared to around 20 percent for blogs.

Not surprisingly, the internet coverage of this paper has been fairly hostile (traditional media seems not to have covered it).

There’s a good summary of the reaction by science writer Deborah Blum, but count me on the bemused side. I do use traditional media to learn that particular science stories exist, but rarely to find out more about them.