Demotivating students with participation grades

Alfie Kohn has posted another great article on ways we demotivate students. This time he is talking about the practice of ‘cold calling’ in classrooms, through which teachers coerce students who have not volunteered into speaking, rightly observing that this is morally repugnant and reflects an inappropriate and mistaken behaviourist paradigm. As he puts it, “The goal is to produce a certain observable behavior; the experience of the student — his or her inner life — is irrelevant.” A very bad lesson to teach children. But it is not limited to children, and not limited to classrooms.

Online, it is way too common for teachers to achieve much the same results – with much the same moral repugnancy and with much the same behaviourist underpinnings – through ‘participation’ grades. We really need to stop doing this. It is disempowering, unfair (especially as, rather than grading terminal outcomes, one typically grades learning behaviours) and demotivating. It also too often leads to shallow dialogues, so it’s not as great for learning as it might be, but that’s the least of the problems with it.

Ideally we should help to create circumstances where students actually want to contribute and see value in doing so, regardless of grades. If it has no innate value and grades are needed to motivate engagement, there is something terribly wrong. There are lots of ways of doing that – not making everyone do the same thing, offering diverse opportunities for dialogue, for instance. I find student and tutor blog posts and the like are good for this, because they open up opportunities for voluntary engagement where topics are interesting, rather than having to follow a hierarchical threaded flow in a discussion forum. Allowing students a strong say in how they contribute can help – if they pick the topics and methods, they are far more likely to join in. Asking questions that matter to different students in different ways can help – choice is necessary for control, and is way easier to do in an asynchronous environment where multiple simultaneous threads can coexist. Splitting classes into smaller, mutually supportive groups (ideally letting students pick them for themselves) can be beneficial, especially when combined with pyramiding so each group contributes back to a larger group without the fear and power inequalities larger groups entail.

If grades are needed to enforce participation, it’s a failure of teaching. Getting it right is an art and I freely admit that I have never perfected that art, but I am quite certain that grading participation is not the solution. There are no simple formulae that suit every circumstance and every student, but being aware of the problems, rather than relying on a knee-jerk participation grade (especially when, as is all too common, there are no course learning outcomes that such a grade addresses), is a step in the right direction. Of course, if there actually is an explicit outcome that students should be able to argue, debate, discuss, and so on, then it is much less of an issue. That’s what the students (presumably) signed on to learn about, though a lot of care is still needed to ensure that all students have an equal chance, that there is enough scaffolding, reflection and support available to ensure they are not graded on ‘raw’ untutored interaction, and that the interaction becomes a learning experience that is reflected upon, not just accomplished.

In case you are wondering how I deal with grading based on social interactions, my usual approach is to allow students to (optionally) treat their contributions as evidence of learning outcomes, typically in a reflective portfolio, and to encourage them to reflect on dialogues in which they may or may not have directly participated. This allows those that are comfortable contributing to do so, and for it to be rewarded if they wish, but does not pressure anyone to contribute for the sake of it, as there are always other ways to show competence. There’s still a reward lurking in there somewhere, so it is not perfect, but at least it provides choices, which is a start.

Address of the bookmark: http://www.alfiekohn.org/blogs/hands

Course Exam: Religious Studies (RELS) 211

One of what I hope will be a continuing series of interviews with AU faculty about their courses in AUSU’s Voice Magazine. This one is concerned with the intriguingly titled Death and Dying in World Religions, explained by the author and coordinator, Dr. Shandip Saha. It provides fascinating glimpses into the course rationale, process and pedagogy, as well as some nice insights into what drives and interests Dr Saha. There are some nice innovative aspects, such as formally arranged phone conversations between tutor and student at key points – low tech, high engagement, great for building empathy while doing much to assure high-quality results. It does make me wonder, when tutors inevitably therefore get to know a lot about their students and their thinking, why an exam is still necessary. My inclination, in the next revision, would be to scrap that or make it more reflective (‘what I did on my course’ kind of thing) as it offers nothing much to an otherwise great-sounding course apart from a lot of stress and effort for all concerned. The course subject matter and pedagogy itself sounds brilliant and I really like Dr Saha’s attitude and approach to its design and implementation.

I would love to see more of these. It’s a great way of sharing knowledge and reducing the distance. One of the fascinating things about our virtual institution is that, in some ways, we have far greater opportunities to learn from one another than those in conventional institutions, where geographical isolation means people seldom get a chance to see how those in other centres and faculties think and work, and the local is always more salient than the remote. Online learning can and should break down boundaries. Apart from places like here on the Landing, where a few dozen courses have a pitch, we don’t normally take enough advantage of this. I would encourage any AU faculty who are running courses that are even a little out of the ordinary to share a bit about them with the rest of us via blogs on the Landing, even if the courses themselves don’t actually use the site. Or maybe even to contact Marie Well at the Voice Magazine to volunteer an interview!

Address of the bookmark: https://www.voicemagazine.org/articles/articledisplay.php?ART=11137

Dumb poll illustrates flaws in objective tests

Given its appearance in Huffpost Weird News, this is a surprisingly acute, perceptive and level-headed analysis of the much-headlined claim that 10% of US college graduates believe Judge Judy serves on the US Supreme Court. As the article rightly shows, this is palpable and scurrilous nonsense. It does show that a few American college graduates don’t know who serves on the Supreme Court (which is not exactly a critical life skill) but, given that over 60% got the answer correct and over 20% picked someone who did formerly serve, the results seem quite encouraging. The article makes the point that Judge Judy is referred to in the poll simply as Judith Sheindlin, which is not the name she is popularly known by, so there is no evidence at all that anyone actually believed her to be a supreme court judge. It was just a wrong and pretty random guess that no one would have got wrong if she had been referred to as ‘Judge Judy’. I’d go further. Most people would only know Judge Judy’s real name if they happened to be fans, in which case they would instantly recognize this as a misdirection and so be able to pick between the three remaining alternatives, one of which even I (with no interest in or knowledge of parochial US trivia) recognize as wrong. So it is quite possible that a large proportion of correct or nearly correct answers were actually due to people watching too much mind-numbing daytime TV. Great.

What it does show in quite sharp relief is how dumb multiple choice questions tend to be. If this were given as a quiz question in a course (not improbable – most are very much like it, and quite a few are worse) it would provide no evidence whatsoever that any given individual actually knew the answer. This is not even a test of recall, let alone higher order knowledge. A wrong answer does not indicate belief that it is true, but a correct answer does not reliably indicate a true belief either. Individually, multiple choice questions are completely useless as indicators of knowledge; in aggregate, they are not much better.

As long as they are not used to judge performance or grade students, objective quizzes can be useful formative learning tools. Treated as fun interactive tools, they can encourage reflection, provide a sense of control over the process, and support confidence. They can also, in aggregate, provide oblique clues to teachers about where issues in teaching might lie. In a very small subset of subject matter (e.g. some sub-areas of math problem solving), given enough of them, they might coarsely differentiate between total incompetence and minimal competence. There are also a few ways to improve their reliability – adding a confidence weighting, for example, can help better distinguish between pure guesses and actual semi-recollection, and adaptive quizzes can focus in a bit more on misconceptions, if they are very carefully designed. But, if we are honest, the only reason they are ever used summatively in education or other fields of learning is because they are easy to mark, not because they are reliable indicators of knowledge or performance, and not because they help students to learn: in fact, when given as graded tests, they do exactly the opposite. I guess a secondary driver might be that it is easy to generate meaningful-looking (but largely meaningless) statistics from them. Neither reason seems compelling.
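To make the confidence-weighting idea concrete, here is a minimal sketch of one way a quiz might score answers so that confident guessing carries a cost. The function name, the confidence levels, and the particular weights are my own illustration, not taken from the article or from any real quiz engine:

```python
def confidence_weighted_score(correct: bool, confidence: str) -> float:
    """Score one multiple-choice answer, weighted by stated confidence.

    A confident correct answer earns more than a hesitant one, while a
    confident wrong answer is penalized, so guessing at high confidence
    is a worse strategy than honestly declaring uncertainty.
    """
    weights = {"guess": 0.5, "fairly sure": 1.0, "certain": 1.5}
    w = weights[confidence]
    # Mild penalty for wrong answers, scaled by claimed confidence.
    return w if correct else -w * 0.5

# Aggregate over a short quiz: one confident hit, one confident miss,
# one lucky guess.
answers = [(True, "certain"), (False, "certain"), (True, "guess")]
total = sum(confidence_weighted_score(c, conf) for c, conf in answers)
# total is 1.5 - 0.75 + 0.5 = 1.25
```

The point of a scheme like this is not that any particular set of weights is right, but that the score begins to separate semi-recollection from pure chance, which a flat right/wrong mark cannot do.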

Apart from their uselessness at performing the task they are meant to perform, there are countless other reasons that graded objective tests are a bad idea, from the terrible systemic effects of teaching to the test, to the extrinsic motivation they rely on that kills the love of learning in most learners, to their total lack of authenticity. It is not hard to understand why they are so popular, but it is very hard to understand why teachers and others who see their job as being to inspire, motivate and support would do this to students to whom they owe a duty of care.

Address of the bookmark: http://www.huffingtonpost.com/entry/polls-judge-judy-supreme-court_us_569e98b3e4b04c813761bbe8

Brain Based Learning and Neuroscience – What the Research Says!

Will Thalheimer provides a refreshing look at the over-hyping of (and quite pernicious lies about) neuroscience and brain-based learning. As he observes, neuroscience is barely out of diapers yet in terms of actual usable results for educators, and those actually researching in the field have no illusions that it is anywhere close yet (though they are very hopeful). What the research says is pretty close to nothing, when it comes to learning practice.

I am a little sceptical about whether neuroscience will ever be really valuable in education. This is not to say it is valueless – far from it. We have already had some useful insights into memory and have a better idea of some of the things that reduce or increase the effectiveness of brain functioning (sleep, exercise, etc), as well as a clearer notion of the mechanisms behind learning. Such things are good to know and can lead to some improvements in learning. The trouble is, though, that most researchers in the area are doing reductive science – seeking repeatable mechanisms and processes that underlie phenomena we see. This is of very little value when dealing with complex adaptive systems and emergence. Stuart Kauffman demonstrates that there are two main reasons reductive explanations fail to give us any help at all with understanding emergent systems: epistemological emergence and ontological emergence. Epistemological emergence means that it is impossible in principle to predict emergent features from constituent parts. Ontological emergence means that completely different kinds of causality occur in and between emergent phenomena than in and between their constituent parts, so knowledge of how the constituent parts work has no bearing at all on higher levels of causality in emergent phenomena. It’s a totally different kind of knowledge.

Knowing how the brain works in education is useful in much the same way that knowing about movements of water molecules in clouds is useful in meteorology. There are insights to be gained, explanations even, but they are of relatively little practical value in predicting the weather, let alone in predicting the precise shape of a specific cloud. Worse, in education, we don’t have a very precise idea of what kind of cloud shape we are seeking, most of the time. In fact, when we act like we do (learning objectives and associated assessment) we usually miss a great deal of the important stuff.

But it is worse than that. Those of us concerned with education are not just predicting or explaining phenomena, but orchestrating them. You can no more extrapolate how to teach from knowing how the brain works than you can extrapolate how to paint a masterpiece from knowing what paint is composed of. They are not even in the same family of phenomena. This doesn’t mean that a painter cannot learn useful things about paint that can assist the process – how fast it dries, its colour fastness, its viscosity, etc, and it does open up potential avenues for designing new kinds of paint. But we still need to know what to do with it once we know that. So, yes, brain science has value in education. Just not that much.

Address of the bookmark: http://www.willatworklearning.com/2016/01/brain-based-learning-and-neuroscience-what-the-research-says.html

Reimagining Online Education ~ Stephen Downes

Stephen Downes provides a typically wise critique of another of those really dumb ‘reimagining education’ pieces that does not reimagine education at all – it just reinforces what is already wrong with it. His points are all sound and worth reflecting on. Though it is a little strained, I quite like Stephen’s metaphor:

“Education doesn’t need more features. It needs authentic propulsion and sound aerodynamic design. Sadly, most educational professionals don’t study aerodynamics, they study ornithology.”

I could extend the metaphor a little further. While many educators are stuck on ornithology (and some stopped looking any further than the Archaeopteryx), I think many educational researchers, at least in e-learning, are looking at more ways to tweak propeller-driven biplanes or trying to make airport check-ins more efficient. Some are looking at jet planes and rocket ships. Some are exploring helicopters and hovercraft. Perhaps a few are wondering how to build personal teleporters.

What education actually needs, though, is a thorough critical reconsideration of the entire transport system, taking into consideration what people want from it, why they choose to travel in the first place, their levels of comfort, their levels of risk, what the constraints are, what effects it has on the broader ecosystem, how it affects people’s psyches, how it stimulates them, and how it changes social patterns, amongst many other things. There’s a really important place for bicycles, buses, trains, ships, boats, footpaths, skateboards, snowmobiles, gliders, skis, horse-drawn buggies, hoverboards and all the rich diversity of transportation devices and infrastructure we have invented and will invent. It’s not one science: it’s a host of technologies and, above all, it’s a system invented by and for humans.

Bearing that in mind, education really needs a better metaphor than travel from A to B. At the very least, there is an indefinitely large range of more important and interesting stuff happening between A and B than ever happens at the destination, there’s a great deal of important stuff to say about the comfort and stimulation of the passengers in transit, and, often, ‘B’ is not where they want or need to be anyway.

Address of the bookmark: http://www.downes.ca/post/64800

Virtual Canuck | Teaching and Learning in a Net-Centric World

Terry Anderson has, after many years, moved his much-loved Virtual Canuck site to a shiny new system with its own domain, and it’s looking very good.

There’s masses of stuff here for anyone with an interest in distance and online education, and quite a few other things that relate to Terry’s diverse interests, from music to Unitarianism. Don’t miss his latest post on the new IRRODL special issue on MOOCs – some great commentary on and summaries of articles.

Address of the bookmark: http://virtualcanuck.ca/

Teaching

I have received awards for my teaching at the University of Brighton and Athabasca University, and am a National Teaching Fellow of the Higher Education Academy, UK.

In the past I have taught courses on a wide range of computing and education topics, including information technology, learning technology, networking, pedagogy, research methods, and learning design.

At Athabasca, I currently teach the following courses in the School of Computing and Information Systems:
Undergraduate
COMP 266 – Introduction to Web Programming
COMP 282 – Social Aspects of Games
COMP 283 – Effective Use of Facts and Myths in Computer Games (temporary stand-in)
COMP 350 – Green Computing (under development)
COMP 470 – Web Server Management
Graduate
COMP 602 – Enterprise Information Management
COMP 607 – Ethical, Legal and Social Issues in Information Technology
COMP 635 – Green ICT Strategies
COMP 650 – Social Computing

I am also supervising a number of undergraduate and graduate projects, essays, theses and dissertations.

Some thoughts on the future of universities (interview with me in The Voice Magazine)

Part 2 of a longer interview with me, the largest part of which is concerned with my thoughts on the future of universities. Because there has been a small stir lately around an Educause Review article on a similar topic (worth reading – a useful perspective that might make some conversations easier), I thought it might be worth sharing. There are some broadly similar ideas, albeit from a somewhat different angle, as well as a couple that are not there in the Educause article (notably related to the fact that institutions and teacher-controlled activities are not the only fruit, and what that implies for universities), and my summary is much shorter!

The editor, Karl, disagreed with me in his editorial, I think because he misunderstood what I was calling for, and so I wrote a brief follow-up, again published by the Voice Magazine, on the letters page of the current issue, which presents it using a slightly different set of metaphors.

Disclaimer: this is far from my final, complete and considered view on the topic. It’s just a brief and spontaneous answer to a question that I might answer at least slightly differently on any given day of the week. There will be a chapter by me and Terry Anderson coming out in the forthcoming second edition of the SAGE Handbook of E-learning Research that provides a more rigorous and careful prediction of the future of online learning, in which we attempt to explore not so much the digital wonders to come (though there is a bit of that) but the pedagogical character and organizational form it will possess. One of the central points we make in this is that a central characteristic of that future will be diversity. There are not only many possible futures. There will be many actual futures.

Address of the bookmark: https://www.voicemagazine.org/archives/articledisplay.php?ART=10944&issue=2342