I gave a talk yesterday that was an opinionated survey of seven causes of the replication crisis in psychology, and seven actions we could all take today to avoid it in future. All the slides are on GitHub. In brief:
1. Publication bias
Publication bias comes in part from null results being uninterpretable with traditional statistics: a non-significant p-value cannot tell you whether there is evidence for the null or simply not enough data. Use Bayes Factors instead; they can provide evidence for the null, and they are easy to compute in R (see the sketch below).
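As a minimal sketch of what this looks like, using the BayesFactor package and simulated data (the groups and numbers below are made up purely for illustration):

```r
# Minimal sketch: a two-sample Bayes factor t-test with the BayesFactor package.
# install.packages("BayesFactor") if needed. Data are simulated for illustration only.
library(BayesFactor)

set.seed(1)
control   <- rnorm(40, mean = 100, sd = 15)   # hypothetical control group
treatment <- rnorm(40, mean = 100, sd = 15)   # hypothetical treatment group, no true effect

bf <- ttestBF(x = control, y = treatment)     # Bayes factor for alternative vs. null (BF10)
bf
1 / bf                                        # BF01: evidence for the null over the alternative
```

A BF01 noticeably greater than 1 is positive evidence for the null, which is exactly what a non-significant p-value cannot give you.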
2. Small sample size
Most of us do not collect enough data in our experiments. Use a power calculation to work out an appropriate sample size. This is easy to do in R.
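A power calculation needs an assumed effect size; the d = 0.5 below is purely illustrative, not a recommendation:

```r
# Sketch of an a priori power calculation using base R (no extra packages needed).
# With sd = 1, delta is in standard deviation units, i.e. Cohen's d.
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.8,
             type = "two.sample", alternative = "two.sided")
# The n in the output is the required sample size per group (roughly 64 here).
```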
3. Misunderstanding statistics
No one in psychology really understands p-values. Also, p-values between .04 and .05 are strangely over-represented in the literature, yet values in this range provide only very weak evidence against the null. Use Bayes Factors instead.
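One way to see how weak that evidence is (not from the talk, just an illustration): the Vovk-Sellke bound gives the maximum Bayes factor against the null that a given p-value can possibly support, and for p-values just under .05 that maximum is only about 2.5:

```r
# Vovk-Sellke maximum p-ratio: an upper bound on the Bayes factor against the null
# that a given p-value (for p < 1/e) can support, under any reasonable alternative.
vs_bound <- function(p) 1 / (-exp(1) * p * log(p))

vs_bound(0.049)   # ~2.5 : a "just significant" result is weak evidence at best
vs_bound(0.01)    # ~8
vs_bound(0.001)   # ~53
```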
4. Low reproducibility
If you run a different experiment from mine, and do a different analysis, is it really surprising that you get a different answer? Ensure your work is reproducible by publishing your raw data, analysis scripts, stimuli, and experiment code.
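An analysis script worth publishing can be as simple as one file that goes from raw data to reported results with no manual steps. A sketch (the file and column names here are hypothetical):

```r
# analysis.R -- sketch of a self-contained, shareable analysis script.
# File and variable names are hypothetical; the point is: raw data in, reported numbers out.
data <- read.csv("data/raw_data.csv")             # raw data committed alongside the script

result <- t.test(score ~ condition, data = data)  # the analysis reported in the paper
print(result)

dir.create("output", showWarnings = FALSE)
capture.output(result, file = "output/t_test_result.txt")  # save the exact output, no retyping
```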
5. ‘p’ hacking
Common flexible-analysis practices, such as testing for significance after every 10 participants and stopping as soon as the result is significant, can drive the false positive rate up to around 60% when several such practices are combined. Pre-register your next big study, so you don't fool yourself.
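A small simulation makes the optional-stopping part of the problem concrete; the exact rate depends on how often and how long you keep testing, but it is always well above the nominal 5%:

```r
# Simulate optional stopping under the null (no true effect anywhere).
# Test after every 10 participants per group and stop as soon as p < .05,
# giving up at 100 per group. The false positive rate should be 5%; it is not.
set.seed(123)

one_study <- function(batch = 10, max_n = 100) {
  a <- c(); b <- c()
  while (length(a) < max_n) {
    a <- c(a, rnorm(batch))    # both groups come from the same distribution
    b <- c(b, rnorm(batch))
    if (t.test(a, b)$p.value < .05) return(TRUE)  # "significant" -- stop and write it up
  }
  FALSE
}

mean(replicate(5000, one_study()))   # long-run false positive rate, well above .05
```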
6. Poor project management
Most psychologists do not have adequate private archiving and record-keeping within their own labs. Use a version control system (e.g. git, with hosting on GitHub) to improve project management in your lab.
7. Publication norms
Pressure to publish a large number of papers leads to many poor outputs rather than a few good ones. Publish fewer, better papers. If you are a manager, focus hiring, promotion, and appraisal less on volume and more on quality.
CC-BY-SA 4.0