This is also reported on (wonderfully) by Donald Clark at http://donaldclarkplanb.blogspot.ca/2013/01/cool-research-happy-sheets-hopeless.html and the original slides are at http://www.boozallenlearning.com/whitepapers/IEL12_Failure-Triggered-Training_Bliton-Gluck.pdf
A report on a presentation by Bliton and Gluck on a fascinating and rather brilliant study of the effectiveness of training 500 people in an organization about the dangers of phishing.
The study split the subjects into three groups: a control group with no intervention, one that received some nicely presented informational text that was actually pasted from a wiki, and one that received a carefully designed, pedagogically sound interactive tutorial.
Of those receiving some form of tuition, the post-test results were much as expected, with a statistically significant gain shown by those who got the interactive tutorial. The evaluations of the training were great which, given that the creators of the training are professionals, is what might be expected. So far, so good. This was a successful training exercise that proved the tuition had worked, and that interactive tutorials are a worthwhile investment because they produce better results. If most of us at Athabasca University got results like this, we would consider our job well done and congratulate ourselves on being great educators. Such things are among the main ways that we typically measure the success of our courses.
This is where it gets fun.
What they did next was to test the effectiveness of the training by sending mock phishing emails to all the subjects. To their great surprise, there was no statistically significant difference in failure rate between the two groups that had received the intervention and, more surprising still, no significant difference from the control group. I’ll reiterate that so that you can dwell a little more on its full import: the control group that had received no training did just as well (or badly) as those that had received the training. In fact, though not to a statistically significant extent, those who had received no training actually appeared to do slightly better than the rest.
What Bliton and Gluck did next was even smarter: those who had been fooled by the phishing attack were informed of their ‘failure’ and received remedial training. This was repeated twice more, at intervals based on what we know about how memory decays (spaced learning theory). With each round of remedial training for those who ‘failed’ the test, the number of victims in each group fell dramatically in the next round until, in the final run, almost no one was caught out.
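To make the shape of that iterative protocol concrete, here is a minimal, purely illustrative sketch of the loop: send a mock phish, retrain only those who fall for it, wait a spaced interval, and repeat. This is not the authors’ actual procedure or data; the group size, failure probabilities, and the effect of each remedial round are invented numbers chosen only to show how repeated failure-triggered training could drive the victim count down over successive rounds.

```python
import random

random.seed(42)

GROUP_SIZE = 167           # roughly 500 subjects split across three groups (assumption)
BASE_FAILURE_RATE = 0.45   # assumed chance of falling for the first mock phish
REMEDIAL_EFFECT = 0.15     # assumed reduction factor applied after each remedial round

def run_campaign(rounds=4):
    # Every subject starts with the same baseline susceptibility.
    susceptibility = [BASE_FAILURE_RATE] * GROUP_SIZE
    for r in range(rounds):
        # Simulate one mock phishing email sent to everyone.
        failures = [i for i, p in enumerate(susceptibility) if random.random() < p]
        print(f"Round {r + 1}: {len(failures)} of {GROUP_SIZE} caught by the mock phish")
        # Only those who failed receive remedial training before the next round,
        # which (in this toy model) sharply lowers their susceptibility.
        for i in failures:
            susceptibility[i] *= REMEDIAL_EFFECT

run_campaign()
```

Running the sketch shows the pattern the study reports: a large number caught in the first round, dwindling to almost none by the last, because the training is triggered by, and spaced around, actual failure rather than delivered once up front.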
The notions that 1) teaching is equivalent to learning and that 2) the ability to pass a test after training translates into genuine competence without further reinforcement and reflection are bizarre, given that the evidence against them is not exactly new (it is well over a hundred years since the earliest spaced learning theories and studies showed very similar results). Yet they remain deeply embedded in our educational systems, both in industry and academia.
The slides are great, but I hope that Bliton and Gluck publish the full study. Apart from anything else, it’s not entirely clear what intervals were used between the rounds in this case – they just say ‘over a period of months’ – which is interesting given that some variants of the theory suggest that the positive effects can be gained with intervals of only 10 minutes between reinforcements (one of many good reasons to include time for reflection after the event in any learning activity). This is exactly the kind of research that we need to shake educational traditionalists out of their complacency.
Address of the bookmark: http://www.daveswhiteboard.com/archives/4932