Thursday, February 14, 2019

Seven ways to fix the replication crisis

I gave a talk yesterday that was an opinionated survey of seven causes of the replication crisis in psychology, and seven actions we could all take today to avoid it in future. All the slides are on GitHub. In brief:

1. Publication bias
Publication bias comes in part from null results being meaningless under traditional significance testing. Use Bayes Factors instead: they can provide evidence for the null, and they are easy to compute in R.
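As a rough sketch of how easy this is (the data here are made up for illustration), the BayesFactor package's ttestBF function does it in a couple of lines:

    # install.packages("BayesFactor")  # once
    library(BayesFactor)

    # Hypothetical reaction times from two groups
    control <- c(512, 480, 534, 501, 495, 523, 488, 507)
    drug    <- c(498, 475, 520, 491, 502, 515, 479, 500)

    # Bayes Factor for the alternative over the null;
    # invert it to express evidence for the null
    bf <- ttestBF(x = control, y = drug)
    bf
    1 / bf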

2. Small sample size
Most of us do not collect enough data in our experiments. Use a power calculation to work out an appropriate sample size. This is easy to do in R.
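For instance, base R's power.t.test will tell you how many participants you need per group; the effect size below (d = 0.5, a conventional "medium" effect) is just an illustrative choice:

    # Per-group n to detect d = 0.5 with 80% power at alpha = .05
    power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.8)
    # n = 63.77 -- so about 64 participants per group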

3. Misunderstanding statistics
No-one in psychology really understands p-values. Also, p-values between .04 and .05 are strangely common in psychology, yet values in this range provide only very weak evidence against the null. Use Bayes Factors instead.
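One way to quantify "very weak" (not in the original talk, but a standard result): the Vovk-Sellke bound, 1 / (-e * p * ln p), gives the maximum possible Bayes Factor against the null for a given p-value (valid for p < 1/e):

    # Maximum Bayes Factor against the null for a given p
    max_bf <- function(p) 1 / (-exp(1) * p * log(p))

    max_bf(0.049)  # ~2.5: weak evidence at best
    max_bf(0.01)   # ~8
    max_bf(0.001)  # ~53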

4. Low reproducibility
If you run a different experiment to me, and do a different analysis, is it that surprising that you get a different answer? Ensure your work is reproducible by publishing your raw data, analysis scripts, stimuli, and experiment code.
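For the analysis-scripts part, the ideal is a single script that regenerates every reported statistic from the raw data. A minimal sketch (the file and column names are hypothetical):

    # analyse.R -- rerun this to reproduce the reported results
    library(BayesFactor)

    raw   <- read.csv("data/raw_trials.csv")  # the published raw data
    means <- aggregate(rt ~ subject + condition, raw, mean)

    # The test reported in the paper, recomputable by anyone
    print(ttestBF(formula = rt ~ condition, data = means))

    write.csv(means, "data/processed_means.csv", row.names = FALSE)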

5. p-hacking
Flexible analysis practices, like testing for significance after every 10 participants and stopping as soon as the result is significant, inflate the false positive rate; in combination, such practices can push it to around 60%. Pre-register your next big study, so you don't fool yourself.
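You can see the optional-stopping part of this for yourself with a short simulation (the design below, peeking every 10 participants up to 100 with no true effect, is just illustrative; this stopping rule alone roughly quadruples the false positive rate, and other flexible choices push it higher still):

    # False positive rate of "test every 10 participants, stop when p < .05"
    set.seed(1)
    false_pos <- replicate(10000, {
      x <- rnorm(100)  # the true effect is zero
      any(sapply(seq(10, 100, by = 10),
                 function(n) t.test(x[1:n])$p.value < .05))
    })
    mean(false_pos)  # roughly .19, against the nominal .05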

6. Poor project management
Most psychologists do not have adequate archiving and record-keeping within their own labs. Use a version control system such as git (e.g. hosted on GitHub) to improve project management in your lab.
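If your lab already works in R and RStudio, the usethis package will set this up from the console (a sketch; it assumes git is installed and a GitHub account is configured):

    # install.packages("usethis")  # once
    library(usethis)

    use_git()     # put the current project under git version control
    use_github()  # create a matching GitHub repository and push to it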

7. Publication norms
Pressure to publish lots of papers leads to lots of poor outputs, rather than a few good ones. Publish fewer, better papers. If you are a manager, focus hiring, promotion, and appraisal less on volume and more on quality.

CC-BY-SA 4.0

Monday, February 11, 2019

PsychoPy on Raspberry Pi

The problem
 
My department is fortunate to have several multi-seat testing rooms for psychological research. The downside is that the computers inside them are ageing: several-year-old desktop machines with integrated graphics that originally ran Windows 7 but have since been upgraded to Windows 10.

Since that conversion, PsychoPy, a great open-source experiment generator, has been experiencing intermittent graphics-related freezes. It's not all machines, and not all the time, but sporadically they hang for 4-5 seconds before updating the screen. This is bad news for some experiments. PsychoPy does not officially support integrated graphics, so our attempts to get this resolved with the developers have so far met with limited success.

Some solutions I didn't go for

1. Upgrade the machines

A £30 discrete graphics card would probably do the trick, but with the number of machines we have across the department, that's still quite a cost overall.

2. Boot to Linux

We've never been able to replicate this hanging issue on any Linux machine, so it seems Windows-specific. Unfortunately, booting from USB is disabled on these machines.

3. Use Linux laptops

Our lab probably only tests six people at any one time, so we could buy a set of laptops for this purpose and just move them into the testing rooms when we test. This would work, but is potentially a bit expensive (perhaps £2000).

The solution I'm now trying:

4. Use Raspberry Pis

Raspberry Pis are cheap, and the testing rooms already have monitors, keyboards, and mice in them (connected to the desktop machines). So, the total cost per seat is £51.75. That's for a Raspberry Pi 3, case, official power supply, 8GB SD card, and HDMI-to-DVI cable.

The PsychoPy programs I've tested so far on this setup work fine.