Everything We Know Is Wrong

On Tuesday 26th August 2014, BBC Radio 4 broadcast a programme examining the validity of scientific research. The programme is Everything We Know Is Wrong, available on BBC iPlayer here.

Jolyon Jenkins presents, among other things, results from research by Dr. John Ioannidis of Stanford University. His paper “Why Most Research Findings are False” for the journal PLoS Medicine surveyed 50 of the most cited (more than 1,000 citations) papers in medical research. For each paper, he checked whether a subsequent, larger study had backed up its findings. Five out of six of the papers whose results were based on non-randomised data had been shown to be wrong, yet these papers were still being cited. In addition, about 25% of the claims based on randomised data have been shown to be wrong or wildly exaggerated.

This issue of reproducibility matters to me, and I feel it is a major problem for educational research. As a numerical analyst, when results are presented in a paper it is generally possible to go away, write some computer code, and reproduce the computations and results, checking their validity. This does not seem possible with any of the educational research papers I have read. For a start, data is often anonymised (even when it comes from the public domain) and so cannot be verified. Moreover, there are so many factors at play in a classroom that it is hard to strip out everything apart from the factor the researcher is interested in. Another problem is mentioned in the programme: many papers seem to use very small samples, which to me throws into question the power of the test. Has the conclusion presented in a paper really been found, or did it arise by chance?
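The small-sample concern can be made concrete with a quick simulation. This is a hypothetical sketch, not anything from the programme: it assumes a modest true effect (Cohen's d = 0.4) and the conventional significance level of 0.05, and estimates how often a two-sample t-test actually detects that effect at different sample sizes.

```python
# Sketch: why small samples undermine the power of a test.
# All numbers here are illustrative assumptions, not data from any study.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def estimated_power(n, effect=0.4, alpha=0.05, trials=2000):
    """Estimate by simulation the probability that a two-sample t-test
    detects a true effect of size `effect` with n subjects per group."""
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n)   # no effect in the control group
        treated = rng.normal(effect, 1.0, n)  # genuine shift in the treated group
        if ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / trials

print(f"n = 10 per group:  power ≈ {estimated_power(10):.2f}")
print(f"n = 100 per group: power ≈ {estimated_power(100):.2f}")
```

With only ten subjects per group, the test misses a real effect most of the time, which also means that the significant results that do get published from such studies are disproportionately flukes or exaggerations.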

Of course, there are ethical considerations (leading to the use of anonymity) and cost considerations at play here: the larger a sample, the larger the cost of the study, and so the less likely it is to be funded, especially if the outcome is unclear.

John Ioannidis also discusses how the culture of scientific research influences what is published. Having spent a few years researching a topic, it is not ideal to have nothing to publish, and so, if the original outcome hasn’t been realised, researchers may start looking for something in the data to publish, even if the study was not designed to investigate it. I am sure this happens in the field of education.

Academic researchers, in all fields, build their careers on their publications. This leads to pressure to publish results (potentially limiting the time spent verifying a theory) and also means that previously published results are very rarely checked – you are not going to get a blockbuster publication by redoing and verifying the work of others. Even if some work has been checked and shown to be wrong, this fact may not be published. Suppose that a new researcher, R, has shown that a result due to a big name in the field doesn’t hold. How would publicly criticising an acknowledged expert affect this researcher’s career? The big name may even be on the refereeing panel for R’s paper – is this going to be fair?

All of these issues should be considered when reading a piece of educational research. It is important not to take research at face value without first passing a critical eye over it.