A school in the UK has been testing spaced learning (a technique based on neuroscientific findings about memory formation) to condense a four-month GCSE science module into 90 minutes – 20 minutes of intensive narrated PowerPoint, then 10 minutes of basketball, repeated three times – with great success. In fact, over a quarter of students did better in the tests using this method than they did after subsequently taking the traditional four-month module. The suggestion is that an entire GCSE could be passed by most students with just a few days of study and, strikingly, that further study might actually be harmful in a significant number of cases.
Ignoring things like the Hawthorne effect and assuming these results are meaningful, there are two main conclusions to be drawn here. The first is positive: that spaced learning works pretty well and that we can learn a lot from neuroscience. The second is appalling: that GCSEs (qualifications usually taken by English students at the age of 16) are almost totally useless as a means of gauging knowledge and understanding. Of course, we suspected that already.
Tests and exams are so embedded in our educational systems that we sometimes think they tell us something useful about the effectiveness of teaching and learning strategies. Alas, they tell us little. What they do tell us is that, somehow, a particular instance of a particular intervention may have helped some people to pass the test. If we see enough similar interventions in enough contexts to identify a pattern, then we can start to say with some assurance whether a particular kind of intervention might help some people pass some kinds of test. We might even be able to generalise a little about the shared characteristics of such people, which might in turn help us to tailor our teaching so that different learners pass tests more effectively. Whether the test itself tells us anything useful, however, remains a significant question.
In this particular context, there is some evidence that spaced learning may be an effective approach to passing some GCSEs. But even here there are some nasty issues: the fact that many of the same students actually did worse after following this process and then studying for a further four months suggests that:
- whatever they learnt was not persistent, and/or
- what they learnt later reduced their ability to pass the exam.
If the former is true, spaced learning may have its uses, but they are pretty limited. If the latter is true then, given the advantages conferred by having already had a successful go at the tests, it is either a sign of some truly appalling teaching or, more likely, a sign that the students carried on learning and subsequently knew too much to pass. This sounds bizarre, but I have some anecdotal evidence for it. I remember looking through model exam questions and answers for a GCSE-equivalent computing course with my son a few years ago, and being horrified that he was being penalised for knowing too much: many of the questions ignored complexities and ambiguities in favour of repeating what the book (sometimes absolutely wrongly) stated. One that stands out in my poor memory is that markers were explicitly told to penalise students for stating (correctly) that TCP and IP are protocols, while rewarding the incorrect answer of TCP/IP (which is actually a suite of protocols, not a protocol itself). A student with curiosity and an interest in the subject who had explored even a little further than the book would therefore have received lower marks than those who had memorised just what was needed. It is not surprising, then, that a relatively surface approach would be more successful in such instances: knowing a small number of the right kind of facts to answer test questions would, at least sometimes, be more useful than actually understanding the subject.
So, in the context of exam-passing at least, spaced learning is either useless in the long term, or part of the reason for its success is that it emphasises surface-level memory skills at the expense of depth of real learning. Interesting, but not revolutionary.
There are some occasions in life when this kind of learning can be useful (I'd like to try spaced learning as a means of learning to play a song, for instance) but not enough to warrant its wholesale adoption. More significantly, I think it is yet another damning indictment of tests and exams as the primary driver and means of evaluating the success of our educational system. There are huge opportunities to rethink what we are assessing and how we do it, and we must work on these urgently. Assessment is such a driver in our systems that, if we do it wrong, we run a big risk of setting inauthentic goals and encouraging weak learning strategies that must be unlearnt as we enter real life.