An interesting article (thanks to Mary Pringle for alerting me to this).
The claimed finding is that fact retrieval is improved through taking tests. Or, to put it another way that better reflects what is actually being researched here, taking tests improves the ability of students to take similar tests. Hmm. That’s news?
Any sensible pedagogical design will include something like tests (maybe not with that name) as an integral part of the learning process. Testing is an essential part of the metacognitive process, fits well with work on learning cycles by Lewin, Kolb and others over the past hundred years or so, and aligns perfectly with a constructivist view of knowledge. We need opportunities to connect new knowledge with old and to apply it. Testing is not the end of the process, but it is an important step along the way. It forces us to confront our beliefs, reflect on our knowledge, apply it, think twice about what we know and don’t know, identify the flaws, take remedial action, and do all of this in a (typically) ‘safe’ context before we have to apply our knowledge for real.
As with almost all such articles in which people attempt to ‘experiment’ with different approaches to education, there appear, from what is reported in the NYT, to be several gaping methodological flaws:
- The amount of time spent on task seems to have been largely ignored. By my reckoning, the control group in the first experiment (reading only) spent 5 minutes on the task, the repeat-reading group spent about 20 minutes, the concept-mapping group spent an unspecified amount of time (probably extended by the extra cognitive load of the diagramming process, so the time actually spent thinking about what was being learned may not have been that great), and the test group spent at least half an hour, all of it relating to the content to be learned. The conclusion that might be drawn from this is that the longer one spends thinking about something one has learned, the better one will have learned it. Indeed, looking at the results with this in mind, it is surprising that the control groups did not do worse than they did.
- To make the findings even less reliable, it appears that no account was taken of the fact that those taking tests were practicing the very skills needed to do better on tests – exam technique can be learned just like any other skill.
- It is not clear whether or not feedback was given on the results of the test. If it was, the simple fact that whoever or whatever provided the feedback showed some care would have had a notable effect on learning. Even if it was not, the tests would have highlighted to the learners what they did and didn’t know more effectively than concept maps or reading – that’s why we give learners opportunities to practice applying their knowledge.
What this study does suggest (on the face of it, it is not a true experiment, given the lack of proper control for highly significant variables) is that the simple application of concept mapping does not greatly improve fact learning as a matter of course. This is obvious. We know that tools do not improve learning: it is not the tools themselves but how they are used that turns them into a learning technology. The devil is in the detail: how much preparation was provided for those using concept maps? How much time and effort went into the mechanical process of map construction relative to time spent reflecting on what had been learned? Were users of concept maps given sufficient training to use the tools to identify gaps in knowledge as well as connections? Were they able to get feedback or share maps with others?
Perhaps I am being unfair to the researchers, and I’m looking forward to seeing the real article to find out more about how the study was conducted. Sadly, Science does not make articles published online ahead of print available as part of AU’s access package, so we will have to wait until it appears in the journal itself before we can read it.
Address of the bookmark: http://www.nytimes.com/2011/01/21/science/21memory.html?ref=science