It is worth reiterating that, contrary to popular depictions, science does not rely on authority as an indicator of truth.
The video reminds me of an xkcd comic illustrating the problem with relying on statistical significance when studies showing no effect go unreported.
In this analogy, the study showing a link between green jelly beans and acne reports a result that would appear by chance less than 5% of the time if there were no real link (p < 0.05). This would be convincing evidence of a link between green jelly beans and acne, except that the 19 studies showing no link between jelly beans of other colors and acne went unreported and were discarded. If all 20 results were reported, it would be clear that one "significant" result out of 20 studies is exactly what coincidence predicts: 1/20 = 5%.
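The arithmetic above can be checked with a quick simulation (a sketch of my own, not taken from the comic; the sample sizes and trial count are arbitrary). Under the null hypothesis, a p-value is uniformly distributed on [0, 1], so each study has a 5% chance of coming out "significant" purely by chance:

```python
import random

random.seed(0)

ALPHA = 0.05      # significance threshold used in each study
N_STUDIES = 20    # one study per jelly bean color, none with a real effect

def run_batch():
    """Simulate 20 null studies; return how many come out 'significant'.

    Under the null hypothesis the p-value of each study is uniform on
    [0, 1], so drawing random.random() and comparing it to ALPHA models
    a study with no real effect.
    """
    return sum(random.random() < ALPHA for _ in range(N_STUDIES))

trials = 100_000
hits = [run_batch() for _ in range(trials)]

# Average number of false positives per batch of 20 studies (expect ~1).
print(sum(hits) / trials)

# Fraction of batches in which at least one color looks "significant".
print(sum(1 for h in hits if h > 0) / trials)
```

On average, one of the 20 colors comes out "significant" even though none has any real effect, and most batches contain at least one such false positive.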
Scientific studies in real life can be even worse. Companies, and even university researchers, are not obligated to publish studies whose results show no effect (studies with "null results"). This means that researchers could run the hypothetical green-jelly-bean study 20 times until they get the result they want, by coincidence. What normally happens involves no ill intent but has the same effect: the hypothetical green-jelly-bean study is run independently by 20 different research teams, possibly years apart, each unaware of the others because studies with negative results are not published. Only the team with the positive result publishes, but that result is actually a coincidence. See the concept of publication bias at Wikipedia.
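The 20-teams scenario has a simple closed form. If each null study has a 5% false-positive rate, the chance that at least one of 20 independent teams gets a publishable "positive" result is:

```python
# Probability that at least one of 20 independent null studies reports
# p < 0.05 (and so is the only one that gets published):
p_at_least_one = 1 - (1 - 0.05) ** 20
print(round(p_at_least_one, 2))  # → 0.64
```

So even with no effect anywhere, there is roughly a 64% chance that some team ends up with a "significant" finding to publish.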