Given its appearance in Huffpost Weird News, this is a surprisingly acute, perceptive and level-headed analysis of the much-headlined claim that 10% of US college graduates believe Judge Judy serves on the US Supreme Court. As the article rightly shows, this is palpable and scurrilous nonsense. It does show that a few American college graduates don’t know who serves on the Supreme Court (which is not exactly a critical life skill) but, given that over 60% got the answer correct and over 20% picked someone who did formerly serve, the results seem quite encouraging. The article makes the point that Judge Judy is referred to in the poll simply as Judith Sheindlin, which is not the name she is popularly known by, so there is no evidence at all that anyone actually believed her to be a Supreme Court judge. It was just a wrong and pretty random guess that no one would have made if she had been referred to as ‘Judge Judy’. I’d go further. Most people would only know Judge Judy’s real name if they happened to be fans, in which case they would instantly recognize this as a misdirection and so be able to pick between the three remaining alternatives, one of which even I (with no interest in or knowledge of parochial US trivia) recognize as wrong. So it is quite possible that a large proportion of correct or nearly correct answers were actually due to people watching too much mind-numbing daytime TV. Great.
What it does show in quite sharp relief is how dumb multiple choice questions tend to be. If this were given as a quiz question in a course (not improbable – most are very much like it, and quite a few are worse) it would provide no evidence whatsoever that any given individual actually knew the answer. This is not even a test of recall, let alone of higher order knowledge. A wrong answer does not indicate belief that it is true, but a correct answer does not reliably indicate a true belief either. Individually, multiple choice questions are completely useless as indicators of knowledge; in aggregate, they are not much better.
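To put a rough number on why a correct answer is such weak evidence, here is a back-of-envelope sketch (my own illustration, not from the article), assuming four-option questions and a student who is purely guessing. The pass threshold and quiz length are arbitrary choices for the example:

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.25) -> float:
    """Probability of getting at least k of n four-option questions
    right by pure guessing (binomial distribution, p = 1/4 per item)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# On a 10-question quiz, a student who knows nothing at all
# still clears a 40% threshold by luck alone:
chance = p_at_least(4, 10)
print(f"P(score >= 4/10 by guessing) = {chance:.3f}")
```

Roughly one guesser in five clears that bar with zero knowledge, which is why a single correct answer tells you almost nothing about any given individual.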
As long as they are not used to judge performance or grade students, objective quizzes can be useful formative learning tools. Treated as fun interactive tools, they can encourage reflection, provide a sense of control over the process, and support confidence. They can also, in aggregate, provide oblique clues to teachers about where issues in their teaching might lie. In a very small subset of subject matter (e.g. some sub-areas of math problem solving), given enough of them, they might coarsely differentiate between total incompetence and minimal competence. There are also a few ways to improve their reliability – adding a confidence weighting, for example, can help better distinguish between pure guesses and actual semi-recollection, and adaptive quizzes can focus in a bit more on misconceptions, if they are very carefully designed. But, if we are honest, the only reason they are ever used summatively in education or other fields of learning is that they are easy to mark, not that they are reliable indicators of knowledge or performance, and not that they help students to learn: in fact, when given as graded tests, they do exactly the opposite. I guess a secondary driver might be that it is easy to generate meaningful-looking (but largely meaningless) statistics from them. Neither reason seems compelling.
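The confidence-weighting idea mentioned above can be sketched very simply. This is one illustrative scheme of my own devising (the payoff values are arbitrary, not a standard): confident answers gain or lose more than hesitant ones, so guessing at high confidence becomes a losing strategy and an admitted guess costs nothing:

```python
def weighted_score(correct: bool, confidence: str) -> float:
    """Score one answer under a hypothetical confidence-weighting scheme.
    confidence is one of 'guess', 'unsure', 'sure'. The payoffs below are
    made-up example values: a confident wrong answer is penalized heavily,
    while an admitted guess carries no penalty."""
    payoffs = {
        "guess":  (0.5, 0.0),    # (points if correct, penalty if wrong)
        "unsure": (1.0, -0.5),
        "sure":   (1.5, -1.5),
    }
    gain, loss = payoffs[confidence]
    return gain if correct else loss

# Four answers: a confident hit, an honest guess, a hesitant hit,
# and a confident miss. The miss wipes out the confident hit,
# separating semi-recollection from blind luck:
quiz = [(True, "sure"), (False, "guess"), (True, "unsure"), (False, "sure")]
total = sum(weighted_score(c, conf) for c, conf in quiz)
print(f"weighted total = {total}")
```

The point is not these particular numbers but the incentive structure: a student can no longer profit from confidently guessing, which is exactly the distortion plain MCQ scoring invites.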
Apart from their uselessness at performing the task they are meant to perform, there are countless other reasons that graded objective tests are a bad idea, from the terrible systemic effects of teaching to the test, to the extrinsic motivation they rely on that kills the love of learning in most learners, to their total lack of authenticity. It is not hard to understand why they are so popular, but it is very hard to understand why teachers and others who see their job as being to inspire, motivate and support would do this to students to whom they owe a duty of care.
Address of the bookmark: http://www.huffingtonpost.com/entry/polls-judge-judy-supreme-court_us_569e98b3e4b04c813761bbe8