Guesses and Hype Give Way to Data in Study of Education – NYTimes.com

This is a report on the What Works Clearinghouse, a set of ‘evidence-based’ experimental studies of things that affect learning outcomes in US schools, measured in the traditional ‘did they do better on the tests’ manner. It’s a great series of reports.

I have a number of big concerns with this approach, however, quite apart from the simplistic measurements of learning outcomes that ignore what is arguably the most important role of education – it is about changing how you think, not just about knowing stuff or acquiring specific skills. There is not much measurement of that, except indirectly through the acquisition of the metaskill of passing tests, which seems counter-productive to me. What bothers me more, though, is the naive analogy between education and clinical practice. The problem is an old one that Checkland expressed quite nicely when talking of soft systems:

“Thus, if a reader tells the author ‘I have used your methodology and it works’, the author will have to reply ‘How do you know that better results might not have been obtained by an ad hoc approach?’ If the assertion is: ‘The methodology does not work’ the author may reply, ungraciously but with logic, ‘How do you know the poor results were not due simply to your incompetence in using the methodology?’”

Not only can good methodologies be used badly; bad methodologies can also be used well. Teaching and learning are creative acts, each transaction unique and unrepeatable. The worst textbook in the world can be saved by the best teacher; the best methodology can be wrecked by an incompetent or uncaring implementation. Viewed by statistical evidence alone, lectures are rubbish, but most of us who have been educated for long enough using such methods can probably identify at least the odd occasion when our learning has been transformed by one. Equally, if we have been subjected to a poorly conducted active learning methodology, we may have been untouched or, worse, put off learning about the subject. It ain’t what you do, it’s the way that you do it.

Comparing education with medicine is a category mistake. It would be better to compare it with music or painting, for instance. ‘Experimental studies show that children make better art with pencils than with paints’ might be an interesting finding as a statistical oddity, but it would be a crass mistake to therefore no longer allow children to have access to paintbrushes. ‘On average, children playing violins make a horrible noise’ would not be a reason to stop children from learning to play the violin, though it is undoubtedly true. But it is no more ridiculous than telling us that ‘textbook X leads to better outcomes than textbook Y’, that a particular pedagogy is more effective than another, or that a particular piece of educational software produces no measurable improvement over not using it. Interestingly, the latter point is made in a report from the ‘What Works Clearinghouse’ site at http://ies.ed.gov/ncee/pubs/20094041/pdf/20094041.pdf which, amongst other interesting observations, makes the point that the only thing that does make a statistical difference in the study is student/teacher ratios. Low ratios (smaller classes) allow teachers to exhibit artistry, to adapt to learners’ needs, to demonstrate caring for individuals’ learning more easily. This is not about a method that works – it is about enabling multiple methods, adapted to needs. It is about allowing the teacher to be an artist, not an assembly worker implementing a fixed set of techniques.

I am not against experimental studies as long as we are very clear and critical in our interpretation of them and do not over-generalize the results. It would be very useful to know that something really does not ever work for anyone, but I’m not aware of many unequivocal examples of this. Even reward and punishment, which fails in the overwhelming majority of cases, has at least some evidence of success in some cases for some people – very few, but enough to show it is not always wrong.

Even doing nothing, which, surely, must be a prime candidate for universal failure, sometimes works very well. I was once in a maths class at school taken by a teacher who, for the last few months of the two-year course, was taken ill. His replacements (for some time we had a different teacher every week, most of whom were not maths teachers and knew nothing of the syllabus) did very little more than sit at the front of the class and keep order while we studied the textbook and chatted amongst ourselves. The average class grade in the national exams sat at the end of it all was considerably higher than had ever been achieved in that school previously – over half of us got A grades where, in the past, twenty percent would have been a good showing. Of course, ‘nothing’ does not begin to describe what actually happened in the class in the absence of a teacher. The textbook itself was a teacher and, more importantly, we were one another’s teachers. Our sick teacher had probably inspired us, and the very fact that we were left adrift probably pulled us closer together and made us focus differently than we would have done in the presence of a teacher. Maybe we benefited from the diversity of stand-in teachers. We were probably the kind of group that would benefit from being given more control over our own learning – we were the top set in a school that operated a streaming policy so, had it happened to a different group, the results might have been disastrous. Perhaps we were just a statistically improbable group of maths genii (not so for me, certainly, so we might rule that one out!). Maybe the test was easier that year (unlikely, as about half a dozen other groups didn’t show such improvement, but perhaps we just happened to have learned the right things for that particular test).

I don’t know. And that is the point: the process of learning is hugely complex, multi-faceted, influenced by millions of small and large factors. Again, this is more like art than medicine. The difference between a great painting and a mediocre one is, in many cases, quantitatively small, and often a painting that disobeys the ‘rules’ may be far greater than one that keeps to them. The difference between a competent musician and a maestro is not that great, viewed objectively. In fact, many of my favourite musicians have objectively poor technique, but I would listen to them any day rather than a ‘perfect’ rendition of a MIDI file played by an unerring computer. The same is true of great teaching, although this doesn’t mean it is necessarily the result of a single great teacher – the role may be distributed among other learners, creators of content, designers of education systems, etc. I’m fairly sure that, on average, removing a teacher from a classroom at a critical point would not be the best way to ensure high grades in exams, but in this case it appeared to work, for reasons that are unclear but worth investigating. An experimental study might have overlooked us and, even if it did not, would tell us very little about the most important thing here: why it worked.

We can use experimental studies as a starting point for exploring how and why things fail and how and why they succeed. They are the beginning of a design process, or steps along the way, but they are not the end. It is useful to know that low student/teacher ratios are a strong predictor of success, but only because it encourages us to investigate why that is so. It is even more interesting to investigate why it does not always appear to work. Unlike clinical studies, the answer is seldom reducible to science and definitely not to statistics, but knowing such things can make us better teachers.

I look forward to the corollary of the What Works Clearinghouse – the Why it Works Clearinghouse.

Address of the bookmark: http://www.nytimes.com/2013/09/03/science/applying-new-rigor-in-studying-education.html?_r=0

LinkedIn launches LinkedIn for Education

This is about connecting with people who are at colleges or who you went to college with, rather than being a service for academics like academia.edu or others of that ilk. It’s an incremental change from the existing ways LinkedIn already pulls together people who claim the same institutional background, but an interesting development nonetheless.

 

Address of the bookmark: http://pro.gigaom.com/blog/linkedin-launches-linkedin-for-education/

Killing stupid software patents is really easy, and you can help

I’ve very rarely come across a software patent that is not really stupid, that does not harm everyone apart from patent trolls and lawyers, and that is not anticipated by prior art. This article explains how anyone can easily put a stop to them before they do any damage. Great stuff.

Address of the bookmark: http://boingboing.net/2013/07/24/killing-stupid-software-patent.html

Doug Engelbart, American inventor and computing legend, has passed away — Tech News and Analysis

Sad news of the death, at 88, of one of the greatest thinkers and inventors of the past century. Although the headlines all proclaim him as the inventor of the mouse, that was only one of his achievements, and many of the others were more profoundly influential. Among the things that he invented or played a significant role in inventing were the first working hypertext (and hence the Web), the word processor, the Internet (his lab was the second node on its forerunner, the ARPANET), email, video conferencing, and windowing systems like those of the Mac and Windows, along with much else besides. A modest and inspiring genius whose vision of augmenting, not replacing, human intellect reverberates loudly to this day.

Address of the bookmark: http://gigaom.com/2013/07/03/doug-engelbart-american-inventor-computing-legend-passes-away/

The Psychology of Hiring: Why Brainteasers Don't Belong in Job Interviews : The New Yorker

An interesting article that makes a very straightforward and obvious point, with some evidence: brainteasers in job interviews do little more than demonstrate the candidate’s ability to do brainteasers in job interviews. They do not predict success in the jobs they are filtering for. The parallel implications relating to typical exam processes and practices in educational systems are clear.

Address of the bookmark: http://www.newyorker.com/online/blogs/elements/2013/06/why-brainteasers-dont-belong-in-job-interviews.html

Julian Dibbell » A Rape in Cyberspace

I have made use of this or its influential ancestor article in a few courses that I have taught over the past decade or so and, after a long period of forgetting about it, have done so again recently. Rereading it, I was as affected by it now as I was the first time I read it. Though it relates to events that occurred in the largely superseded technology of the MOO, Dibbell’s detailed descriptions and rich reflections are as relevant in an era of social networks, MMORPGs, Q&A sites, web forums and immersive worlds as they were when he first wrote them. Maybe more so.

It’s a long, harrowing, but rewarding read, not for the easily offended, unravelling the unpleasant story of Mr Bungle and his reincarnation as Dr Jest, the things he did to other characters in the MOO, and the responses of the other inhabitants of the MOO to ‘him’ (I may give away too much with those quotes). It challenges notions of identity, self, and the nature of human engagement as well as offering a fascinating meditation on ethics, consensus and social contracts in both meat-space and cyber-space. No unequivocal answers, but many challenging questions. The denouement that was not there in the original piece is worth waiting for, and makes the whole episode even more ugly and even more thought-provoking than it appears from the start. 

Address of the bookmark: http://www.juliandibbell.com/articles/a-rape-in-cyberspace/

Students riot after teachers try to stop them from cheating on exams

If someone had made this up I might have thought they had gone a little too far down the satirical path to be entirely believable. And yet…

‘Outside, more than 2,000 people had gathered to vent their rage, smashing cars and chanting: “We want fairness. There is no fairness if you do not let us cheat.” The protesters claim cheating is endemic in China and that sitting the exams without help puts their children at a disadvantage.’

One parent assaulted an invigilator who had refused a bribe after confiscating a cellphone hidden in a student’s underwear. The invigilators were holed up in the examination halls and had to send calls for help over the Internet. Radio transmitters and receivers were confiscated (some hidden inventively in erasers), and at least two groups trying to communicate with examinees were found in a nearby hotel. I don’t know whether they found all of them. Probably not, if they were anything like those discussed at http://www.china.org.cn/english/China/172006.htm, which reports on things like earpieces that had to be surgically removed when they got stuck or, most awe-inspiring of all, an ‘interphone’ that exploded inside a student’s abdomen.

A study at http://ojs.library.ubc.ca/index.php/cjhe/article/view/183537/183482 suggests that 58% of Canadian students cheated in high school exams, though the numbers fall as level of study increases, with ‘only’ 9% of graduate students admitting to cheating in exams. The level of cheating in coursework is significantly higher across the board. These are sobering figures, given that the results are self-reported and may thus give an optimistic picture.

From ingenious uses of high-tech cameras and transmitters, watches that display books’ worth of notes, and hidden earpieces, to bottles of water with crib sheets printed on the inside of the label or notes engraved on fingernails, cheating technologies are big business. There are some amazingly smart tools and methods available online, such as those at http://24kupi.com, http://www.cheat-on-exam.com and http://www.wikihow.com/Cheat-On-a-Test (which, for any students thinking this might be a good idea, invigilators know about too). However, with embeddable technologies, tattooed circuits, and increasingly tiny smart devices, the possibilities are growing fast.

This is an arms race that no one can win. Cheats get smarter at least as fast as institutions get wiser but some will always be caught and all will live in fear of being caught. However, the value of a qualification is directly proportional to its validity so, if that is called into question, everyone loses – cheats, institutions, non-cheats and society as a whole. It is more than a bit worrying that there are medical professionals, safety inspectors and architects who cheated in their exams, especially as the evidence suggests this attitude persists throughout cheats’ careers. Endemic cheating is a tragedy of the commons. If you cannot trust a qualification then there is no point in having one and all become valueless.

Can we do something about it? Yes, but it requires a concerted effort, and better detection technologies are only a small part of the answer. It is perfectly possible to design assignments that are engaging, personal, relevant and largely cheat-proof. I’ve yet to find a foolproof method that cannot be foiled by a determined cheat who employs someone else to impersonate them and take a whole course on their behalf. However, we can stop or render harmless simpler contract cheating, plagiarism, collusion, bribes and other common methods of cheating through simple process design. Courses where no student ever does the same thing, where learning is linked to personal interests and aspirations, where each part is interconnected with every other and the output of one part is the input of the next are both more engaging and more cheat-proof. Amazingly, I have had students who attempt to cheat even then but, because of the built-in checks of the design, they fail anyway. Multiple examiners and public displays of work are a good idea too – non-cheating students can usually be relied upon to point out examples of cheating even if the examiners miss them. We can get rid of the traditional regurgitation format of exams, or make use of alternative and less spoofable variations like oral exams, especially those that require students to draw on unique coursework experience rather than uniform replication of process and content. We can help educate students in how not to cheat and make a point of reminding them that it is a bad thing to do. And we can get to know our students better, both to reduce the likelihood of cheating and to discover it more easily should it occur. Most of these methods cost time, effort, and money when compared with the common industrial one-size-fits-all models they are up against. But they all lead to better learning, provide more reliable discrimination of competence and greater immunity to cheating, and are fairer to everyone. If we stack that up against the staggeringly high costs of endemic cheating, they begin to look like much more efficient alternatives.

Address of the bookmark: http://www.theprovince.com/news/Students+China+riot+after+teachers+stop+them+from/8554083/story.html

The Roots of Grades-and-Tests

Excellent dismissal by Alfie Kohn of the massive systematic idiocy of grading and testing. Some great arguments made, but I think the main one is summarized most succinctly thus: 

“Extrinsic inducements, of which G&T is the classic example in a school setting, are devices whereby those with more power induce those with less to do something.  G&T isn’t needed for assessment, but it is very nearly indispensable for compelling students to do what they (understandably) may have very little interest in doing. “

We have to work out better ways of teaching than this. It is not right for an educational institution to continue to do something so antagonistic to learning.

Address of the bookmark: http://www.alfiekohn.org/teaching/gradesandtests.htm

Learning Locker

Very interesting new development, not quite finished yet but showing great promise – a simple means to aggregate content from your learning journey, supporting open standards. This is not so much a personal learning environment as a bit of glue to hold it together. The team putting it together have some great credentials, including one of the co-founders of Elgg (used here on the Landing) and the creator of the Curatr social learning platform.

Currently it appears that its main open standard is the TinCan API (xAPI), the successor to SCORM, but there are bigger plans afoot. I think that this kind of small, powerful service that disaggregates learning journeys from monolithic systems (including those such as the Landing, Moodle, MOOCs and Blackboard-based systems) is going to be a vital disruptive component in enabling richer, more integrated learning in the 21st Century.
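To give a flavour of what that standard involves, here is a minimal sketch of sending a single xAPI statement (actor, verb, object) to a learning record store such as Learning Locker. The endpoint URL, credentials and activity identifiers are made-up placeholders rather than real Learning Locker values, and a production client would of course need proper credential handling and error recovery.

```typescript
// Minimal sketch: recording a learning activity as an xAPI statement.
// The LRS endpoint and the "key:secret" credentials below are placeholders.

interface XapiStatement {
  actor: { mbox: string; name?: string };
  verb: { id: string; display: Record<string, string> };
  object: { id: string; definition?: { name: Record<string, string> } };
  timestamp?: string;
}

const statement: XapiStatement = {
  actor: { mbox: "mailto:learner@example.com", name: "Example Learner" },
  verb: {
    // A standard ADL verb identifier: the learner "experienced" something.
    id: "http://adlnet.gov/expapi/verbs/experienced",
    display: { "en-US": "experienced" },
  },
  object: {
    id: "https://example.com/activities/bookmarked-article",
    definition: { name: { "en-US": "Read a bookmarked article" } },
  },
  timestamp: new Date().toISOString(),
};

async function sendToLrs(stmt: XapiStatement): Promise<void> {
  // Statements are POSTed to the LRS's /statements resource, with a
  // version header and (usually) HTTP Basic authentication.
  const response = await fetch("https://lrs.example.com/xAPI/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.3",
      Authorization: "Basic " + Buffer.from("key:secret").toString("base64"),
    },
    body: JSON.stringify(stmt),
  });
  if (!response.ok) {
    throw new Error(`LRS rejected the statement: ${response.status}`);
  }
}

sendToLrs(statement).catch(console.error);
```

The appeal of the standard is precisely that any tool can emit statements like this and any conforming record store can collect them, which is what makes the ‘bit of glue’ role described above possible.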

This is the description of the tool from the site itself:

“It’s never been easier to be a self-directed learner. Whether you’re in school or at work, you’re always learning. And it’s not just courses that teach. The websites you visit, the blogs you write, the job you do; it’s all activity that contributes to your personal growth.

Right now you’re letting the data all this activity creates slip through your fingers. You could be taking control of your learning; storing your experience, making sense of what you do and showing off what you know.

Learning Locker helps you to aggregate and use your learning data in an environment that you control. You can use this data to help quantify your abilities, to help you reach personal targets and to let others share in what you do.

It’s time to take your data out of the hands of archaic learning management systems that you can’t reach. We use new technologies, like the xAPI, to help you take control of your learning. It’s your data. Own it.”

Address of the bookmark: http://www.learninglocker.net/

Wheel on SAMR and Bloom's Digital Taxonomy

A brave or, more accurately, foolhardy attempt to marry Bloom’s (unempirical and unsubtle) taxonomy with the (equally unempirical but worthy of reflection) SAMR model, which categorizes technologies in terms of their relative transformative capacity, with examples of appropriate iPad tools to cover each segment of both wheels. Like most such models, it is way too neat. You simply cannot categorize things that relate to the complex world of learning in such coarse and simple ways – in the case of both Bloom and SAMR, it ain’t what you do so much as the way that you do it that makes all the difference in the world, and the tools linked to are mostly much more interesting (and, conversely, much more boring) than the diagram suggests. However, like many such models, it is not a bad bit of scaffolding, or at least a springboard for reflection that encourages one to think about things that might otherwise be missed, especially if you are not an expert in pedagogy or technology.

Address of the bookmark: http://www.educatorstechnology.com/2013/05/a-new-wonderful-wheel-on-samr-and.html?utm_source=dlvr.it&utm_medium=linkedin