Prizes as Curriculum • How my school gets students to “behave”

A harrowing report on systematic child abuse in an American school. What makes it particularly tragic is that the teachers inflicting these abuses are not bad people: they genuinely believe that they are doing good or, if not good, then at least doing their best to help.

Louisa was a warm and well-meaning person. After this incident, she wanted to reflect on what had happened—it had been an upsetting day for all. Louisa asked herself certain questions and didn’t ask others. In the end, she was able to justify her decision in a way that enabled her to see her decision as a moral one. “Eric has problems entertaining himself, and that’s something we need to support him with. Maybe something is going on at home,” she sighed.

Very sad. We must change this reward and punishment culture. It does not work.

Address of the bookmark: http://www.rethinkingschools.org/archive/30_03/30-3_lagerwerff.shtml

Exams as the mothers of invention

I’m often delighted by the inventiveness and determination of exam cheats. It would be wonderful were such creativity and enthusiasm put into learning whatever it is that exams are supposed to be assessing. Tragically, though, the inherently demotivating nature of exams (it’s all about extrinsic motivation, which diminishes intrinsic motivation in various ways) makes this an uphill struggle. I particularly like the ingenious but not very smart approaches mentioned in this article:

“One test taker apparently hid his or her mother under the desk, from where she fed the student answers, while in a second case, someone outside the test taker’s room communicated answers by coughing Morse code.”

Of course, the smart ones are not so easily discovered.

This is an endless and absurd arms race that no one can win. The inventiveness and determination of exam cheats is nearly but not quite matched by the inventiveness and determination of exam proctors. My favourite recent example is the Indian Army’s reported efforts to prevent exam cheating by making examinees remove all their clothes and sit in an open field, surrounded by uniformed guards. It is hard to believe this could happen, but the source seems reliable enough and there are videos to prove it. I’m prepared to bet that they didn’t stop cheating altogether, though.

I’ve found one and only one absolutely foolproof method of preventing cheating in proctored exams: don’t give them in the first place, and challenge yourself to think of smarter ways of judging competence instead. Everyone is better off that way. But, if you are determined to give them, despite the overwhelming evidence that they are demotivating, unfair, unreliable, unkind and costly, at least don’t make it possible for the answers to be given in Morse code.

Address of the bookmark: https://www.insidehighered.com/quicktakes/2016/03/30/examity-shares-data-cheaters

The LMS of the future is yours! | Michael Goudzwaard

I think this is, from a quick skim through, the beginnings of a very good idea. An LMS that does almost nothing. Quoting directly:

“What would this LMS look like? In my view, it would have three things:

1) a course roster with stellar SIS integration

2) a gradebook

3) a rock-star LTI and API

That’s it! Oh, except it would also be open source, students would control their own data, including publishing any of their work or evaluations to the block chain, and you could host it locally, distributed, or in the cloud. Never mind the pesky privacy laws (or lack thereof) in the country hosting your server, because the LMS is back on campus. Not connected to the internet? That’s okay too, because there is a killer app that syncs like a boss (like Evernote. Has Evernote ever given you a sync error? No, I didn’t think so.)

Who wins with the new LMS? Students because they own and control their data and it costs less to buy and run. Instructors because they have a solid core with the option to plug any LTI into a class hub. Institutions because costs are lower and the system more secure.
Who loses? The EdTech companies. Or do they? Without standard wiki features and discussion portals, startups and the old standard-bearers can invest their R&D and venture funds in really great tools.”

The principle is a little like that of Elgg, which consists of a very small core, with everything else coming from plugins that use the API.

It seems to me that, though this concept allows its users (teachers, students, admins alike) to do what they like with tools and data, it is still firmly based around the assumption of a traditional classroom model, and seems, as much as the traditional LMS, to reinforce that view. It’s still a course and grading management system, not a learning management system. It needs something that goes beyond the classroom, even in a traditional institutional setting. It needs much more flexible groupings, networks and sets.

With that in mind, this lightweight LMS still seems heavier than needed. A SIS might well provide course information already, which would make the roster redundant; if not, a plugin or service could be written rather than including it in the core. I am not at all sure that an integral gradebook is needed either, for much the same reason. It might, instead, benefit from a standards-based open source learning record store using xAPI (Tin Can), like Learning Locker, or perhaps an integration with an OpenBadges backpack. Either of these, combined with APIs that allow integration with systems like SISs (which could make badges look like grades, or identify relevant learning records), could serve the necessary functions and allow a great deal more openness. Perhaps integrated support for some kinds of grouping and networking would help satisfy the needs of those who want to build institutional courses. All that is really needed is that rockstar API to pull it all together. This begins to sound a lot more like Elgg, and something that could, in principle, be implemented within it.
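To make the gradebook-free idea a little more concrete, here is a minimal sketch of how such a core might record achievement by sending an xAPI statement to an external LRS such as Learning Locker. The endpoint URL, credentials and activity IDs are hypothetical placeholders of my own; the statement structure follows the xAPI 1.0.3 specification.

```python
# A sketch of recording a learning event in a standards-based LRS.
# The endpoint and credentials below are invented for illustration.
import requests

LRS_ENDPOINT = "https://lrs.example.edu/data/xAPI"  # hypothetical LRS URL
LRS_AUTH = ("lrs_key", "lrs_secret")                # hypothetical credentials

# An xAPI statement is essentially actor + verb + object, expressed as JSON.
statement = {
    "actor": {"objectType": "Agent",
              "name": "Example Student",
              "mbox": "mailto:student@example.edu"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"objectType": "Activity",
               "id": "https://courses.example.edu/activities/unit-3-quiz",
               "definition": {"name": {"en-US": "Unit 3 quiz"}}},
}

# POST it to the LRS; the LRS replies with a list of stored statement ids.
response = requests.post(
    f"{LRS_ENDPOINT}/statements",
    json=statement,
    auth=LRS_AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()
print("Stored statement:", response.json()[0])
```

The point of the sketch is that the "gradebook" becomes nothing more than a thin client of an open standard: grades, badges and other learning records all flow through the same API, so the core itself stays tiny.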

The blockchain idea is a good one: being able to free data from a central machine is much to be wished for. But it bothers me that privacy laws are seen as pesky things to be circumvented. They are pesky, for sure, but with good reason. We cannot force students to part with private data where laws do not protect them (I do have at least one course that requires this, but it is a condition of enrolment because we are actually studying such things). What people do of their own accord is, of course, just fine, but the tacit assumption that this LMS-lite continues to reinforce is that learning happens in courses that lead directly to accreditation. That’s not about people doing things of their own accord.

With that in mind, I can foresee a few interesting issues with authorization too, whatever path is taken. The mechanisms for deciding who allows what to be seen by whom might turn out to be quite complex, because of the tension between the hierarchical roles implied by this system and the individual access authority implied by the freedom to use anything from anywhere. That tension is sharpened by the balkanization of today’s social media space, which is likely to form a good part of the basis of actual learning activities. Anything that is not public is going to have to interface with this in some quite tricky ways.
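To illustrate the tension rather than resolve it, here is a small sketch of an access check in which hierarchical course roles and individual, owner-granted permissions coexist. All of the names and rules here are invented for illustration, not drawn from the proposal.

```python
# A toy model of two competing sources of authority over one resource:
# course hierarchy (roles) versus learner control (individual grants).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Resource:
    owner: str
    course: Optional[str] = None                   # None = outside any course
    shared_with: set = field(default_factory=set)  # individual grants by owner

COURSE_STAFF_ROLES = {"instructor", "teaching_assistant"}  # hierarchical rule

def can_view(user: str, role: Optional[str], resource: Resource) -> bool:
    """Resolve the two models of authority into a single yes/no decision."""
    if user == resource.owner:           # owners always see their own work
        return True
    if user in resource.shared_with:     # individual authority: explicit grant
        return True
    if resource.course is not None and role in COURSE_STAFF_ROLES:
        return True                      # hierarchical authority: course staff
    return False

# A personal reflection kept outside any course is invisible to an instructor
# unless the learner explicitly shares it: two models living in one system.
portfolio = Resource(owner="alice", shared_with={"bob"})
print(can_view("bob", None, portfolio))            # True: individual grant
print(can_view("carol", "instructor", portfolio))  # False: no course context
```

Even in this drastically simplified form, the precedence question (does a role ever override an owner's choices?) has no neutral answer, which is exactly where the real complexity would lie.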

For all its embedded assumptions, I like the idea. Building an Elgg-like system with integral LTI, especially if it could support more learner-centric technologies like xAPI, OpenBadges and so on, seems like a sensible way to go.

Address of the bookmark: http://mgoudz.com/2016/02/26/the-lms-of-the-future-is-yours/

Study suggests high school students hold negative views of online education

This is a report on a poll of soon-to-be US high school graduates with aspirations to enter higher education, revealing that an overwhelming majority want to take most of their college courses in person. Indeed, only just over a third wanted to take any online courses at all, while a measly 6% would be happy with half or more being online.

As the article rightly notes…

“Poulin warned against reading too much into those results. He argued that since many people associate the term ‘online learning’ with massive open online courses and diploma mills, there are bound to be misperceptions. Studies that have looked at student outcomes from online courses have found them to be generally equal to those from face-to-face courses.”

It is also worth noting that those polled had largely had no experience of online learning and have actively been taught not to learn that way. Schools teach a way of teaching, not just what is deliberately taught, and, especially in places like the US where standardized testing dominates actual learning, it is mostly a pretty terrible way of teaching. Overall, the poll says more about students’ attitudes and the failings of US schools than about online learning. But, to be fair, though the US is notably weak in this regard, the same systemic problems significantly affect the vast majority of educational systems in the world, including Canada’s.

One thing the study does tell us about online education is that a lot of work is needed to get the message across to kids that online learning can be incredibly enriching and empowering, in ways most face-to-face learning can only aspire to. Many adults who have spent time out of school (our dominant demographic) realize this, but school kids are so submerged in the teacher-tells-student-does model of education that it is understandably hard for them to think of it any other way.

An immediately obvious way to address this lack of knowledge would be to get a few of our better courses into schools, but I doubt that would help at all, and it might even make things worse. The trouble is that the methodologies and schedules of school teaching would mostly crowd it out: when our schools continue to teach dependence and teacher control, online learning – which thrives on freedom and independence – is likely to be swamped by the power-crazed structures that surround it. Faced with a scheduled, regimented system and compelling demands from local, discipline-maintaining teachers, it would be all too easy for online work to be treated as something to fit into the small gaps between more obviously pressing demands.

Far better would be for schools to spend more time supporting self-guided learning, which would better equip students to take control of their own learning paths, in life and in further learning. Whether online or face to face, one of my biggest challenges as a university professor has always been to unteach the terribly dependent ways of learning that school students have been forced to adopt.

Another thing that would help would be for those of us in the profession of distance teaching to more forthrightly and single-mindedly design and promote the experience of online learning as a way to reduce distance: to show that it can be more personal, more social and more engaging than what is found in traditional large lectures and big, faceless institutions, and not far off what is found in smaller seminars and tutorials (better in some ways, worse in others). As the article suggests, many students are put off by the apparent isolation, and few realize that it does not have to be that way. Sadly, there is still too much of it that is that way, albeit far less commonly in places like AU.

But it is not just about courses and teaching, which make up only a small, if prominent, part of the learning experience at a traditional collocation-based university. Most online institutions don’t do anything like enough to go beyond the course to engage students in the richer, broader academic community. Thanks to the Landing, we at Athabasca are further advanced that way than most, but it only reaches a fraction of all staff and students, and is all too often seen by students as just an extension of the course environment. There are lots of ways the Landing could work better and do more but, really, it is only a small part of the solution. We need to embed the kinds of interactions and engagement it provides everywhere in our online spaces. If we don’t, we will eternally be stuck in a cold-start regime where students don’t come because there is no one there, and too many school kids (and others) will continue to miss out on the richness and value of online learning.


Address of the bookmark: https://www.insidehighered.com/news/2016/02/17/study-suggests-high-school-students-hold-negative-views-online-education

Demotivating students with participation grades

Alfie Kohn has posted another great article on ways we demotivate students. This time he is talking about the classroom practice of ‘cold calling’, through which teachers coerce students who have not volunteered into speaking, rightly observing that this is morally repugnant and reflects an inappropriate and mistaken behaviourist paradigm. As he puts it, “The goal is to produce a certain observable behavior; the experience of the student — his or her inner life — is irrelevant.” A very bad lesson to teach children. But it is not limited to children, and not limited to classrooms.

Online, it is way too common for teachers to achieve much the same results – with much the same moral repugnance and much the same behaviourist underpinnings – through ‘participation’ grades. We really need to stop doing this. It is disempowering, unfair (especially as, rather than grading terminal outcomes, one typically grades learning behaviours) and demotivating. It also too often leads to shallow dialogue, so it is not as good for learning as it might be, but that is the least of its problems.

Ideally, we should help to create circumstances where students actually want to contribute and see value in doing so, regardless of grades. If it has no innate value and grades are needed to motivate engagement, there is something terribly wrong. There are lots of ways of doing that: not making everyone do the same thing and offering diverse opportunities for dialogue, for instance. I find student and tutor blog posts and the like are good for this, because they open up opportunities for voluntary engagement where topics are interesting, rather than having to follow a hierarchical threaded flow in a discussion forum. Allowing students a strong say in how they contribute can help – if they pick the topics and methods, they are far more likely to join in. Asking questions that matter to different students in different ways can help – choice is necessary for control, and is way easier to offer in an asynchronous environment where multiple simultaneous threads can coexist. Splitting classes into smaller, mutually supportive groups (ideally letting students pick them for themselves) can be beneficial, especially when combined with pyramiding, so each group contributes back to a larger group without the fear and power inequalities larger groups entail.

If grades are needed to enforce participation, it’s a failure of teaching. Getting it right is an art and I freely admit that I have never perfected it, but I am quite certain that grading participation is not the solution. There are no simple formulae that suit every circumstance and every student, but simply being aware of the problems – rather than relying on a knee-jerk participation grade, especially when, as is all too common, there are no course learning outcomes that such a grade addresses – is a step in the right direction. Of course, if there actually is an explicit outcome that students should be able to argue, debate, discuss, etc., then it is much less of an issue: that is what the students (presumably) signed on to learn. Even then, a lot of care is needed to ensure that all students have an equal chance, that there is enough scaffolding, reflection and support available so they are not graded on ‘raw’, untutored interaction, and that the interaction becomes a learning experience that is reflected upon, not just accomplished.

In case you are wondering how I deal with grading based on social interactions, my usual approach is to allow students to (optionally) treat their contributions as evidence of learning outcomes, typically in a reflective portfolio, and to encourage them to reflect on dialogues in which they may or may not have directly participated. This allows those that are comfortable contributing to do so, and for it to be rewarded if they wish, but does not pressure anyone to contribute for the sake of it, as there are always other ways to show competence. There’s still a reward lurking in there somewhere, so it is not perfect, but at least it provides choices, which is a start.

Address of the bookmark: http://www.alfiekohn.org/blogs/hands

Course Exam: Religious Studies (RELS) 211

One of what I hope will be a continuing series of interviews with AU faculty about their courses in AUSU’s Voice Magazine. This one is concerned with the intriguingly titled Death and Dying in World Religions, explained by its author and coordinator, Dr. Shandip Saha. It provides fascinating glimpses into the course rationale, process and pedagogy, as well as some nice insights into what drives and interests Dr. Saha. There are some nice innovative aspects, such as formally arranged phone conversations between tutor and student at key points – low tech, high engagement, great for building empathy while doing much to assure high-quality results. It does make me wonder, when tutors therefore inevitably get to know a lot about their students and their thinking, why an exam is still necessary. My inclination, in the next revision, would be to scrap it or make it more reflective (a ‘what I did on my course’ kind of thing), as it offers little to an otherwise great-sounding course apart from a lot of stress and effort for all concerned. The course’s subject matter and pedagogy sound brilliant, and I really like Dr. Saha’s attitude and approach to its design and implementation.

I would love to see more of these. It’s a great way of sharing knowledge and reducing the distance. One of the fascinating things about our virtual institution is that, in some ways, we have far greater opportunities to learn from one another than those in conventional institutions, where geographical isolation means people seldom get a chance to see how those in other centres and faculties think and work, and the local is always more salient than the remote. Online learning can and should break down boundaries. Apart from places like here on the Landing, where a few dozen courses have a pitch, we don’t normally take enough advantage of this. I would encourage any AU faculty who are running courses that are even a little out of the ordinary to share a bit about them with the rest of us via blogs on the Landing, even if the courses themselves don’t actually use the site. Or maybe even to contact Marie Well at the Voice Magazine to volunteer an interview!

Address of the bookmark: https://www.voicemagazine.org/articles/articledisplay.php?ART=11137

Dumb poll illustrates flaws in objective tests

Given its appearance in HuffPost Weird News, this is a surprisingly acute, perceptive and level-headed analysis of the much-headlined claim that 10% of US college graduates believe Judge Judy serves on the US Supreme Court. As the article rightly shows, this is palpable and scurrilous nonsense. It does show that a few American college graduates don’t know who serves on the Supreme Court (which is not exactly a critical life skill) but, given that over 60% got the answer correct and over 20% picked someone who did formerly serve, the results seem quite encouraging. The article makes the point that Judge Judy is referred to in the poll simply as Judith Sheindlin, which is not the name she is popularly known by, so there is no evidence at all that anyone actually believed her to be a Supreme Court judge. It was just a wrong and pretty random guess that no one would have got wrong if she had been referred to as ‘Judge Judy’. I’d go further. Most people would only know Judge Judy’s real name if they happened to be fans, in which case they would instantly recognize this as a misdirection and so be able to pick between the three remaining alternatives, one of which even I (with no interest in or knowledge of parochial US trivia) recognize as wrong. So it is quite possible that a large proportion of correct or nearly correct answers were actually due to people watching too much mind-numbing daytime TV. Great.

What it does show in quite sharp relief is how dumb multiple-choice questions tend to be. If this were given as a quiz question in a course (not improbable – most are very much like it, and quite a few are worse), it would provide no evidence whatsoever that any given individual actually knew the answer. This is not even a test of recall, let alone of higher-order knowledge. A wrong answer does not indicate belief that it is true, but a correct answer does not reliably indicate a true belief either. Individually, multiple-choice questions are completely useless as indicators of knowledge; in aggregate, they are not much better.

As long as they are not used to judge performance or grade students, objective quizzes can be useful formative learning tools. Treated as fun interactive tools, they can encourage reflection, provide a sense of control over the process, and support confidence. They can also, in aggregate, provide oblique clues to teachers about where issues in teaching might lie. In a very small subset of subject matter (e.g. some sub-areas of math problem solving), given enough of them, they might coarsely differentiate between total incompetence and minimal competence. There are also a few ways to improve their reliability – adding a confidence weighting, for example, can help better distinguish between pure guesses and actual semi-recollection, and adaptive quizzes can focus in a bit more on misconceptions, if they are very carefully designed. But, if we are honest, the only reason they are ever used summatively in education or other fields of learning is that they are easy to mark, not because they are reliable indicators of knowledge or performance, and not because they help students to learn: in fact, when given as graded tests, they do exactly the opposite. I guess a secondary driver might be that it is easy to generate meaningful-looking (but largely meaningless) statistics from them. Neither reason seems compelling.
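For what it’s worth, here is a toy sketch of the confidence-weighting idea mentioned above: the learner declares a confidence level with each answer, and wrong answers given at high confidence are penalized more heavily than hedged ones, so blind guessing no longer pays. The point values are invented for illustration and not taken from any particular instrument.

```python
# Illustrative scheme: reward and penalty both grow with stated confidence,
# so a high-confidence wrong answer costs as much as a correct one earns.
SCORING = {
    "low":    (1,  0),   # (points if correct, points if wrong)
    "medium": (2, -1),
    "high":   (3, -3),
}

def score_response(correct: bool, confidence: str) -> int:
    gain, loss = SCORING[confidence]
    return gain if correct else loss

# One lucky high-confidence hit, two high-confidence misses, one cautious hit:
answers = [(True, "high"), (False, "high"), (False, "high"), (True, "low")]
print(sum(score_response(ok, conf) for ok, conf in answers))  # 3 - 3 - 3 + 1 = -2
```

A confident guesser ends up below zero while a cautious semi-rememberer stays positive, which is exactly the distinction a raw right/wrong score cannot make.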

Apart from their uselessness at performing the task they are meant to perform, there are countless other reasons that graded objective tests are a bad idea, from the terrible systemic effects of teaching to the test, to the extrinsic motivation they rely on that kills the love of learning in most learners, to their total lack of authenticity. It is not hard to understand why they are so popular, but it is very hard to understand why teachers and others who see their job as being to inspire, motivate and support would do this to students to whom they owe a duty of care.

Address of the bookmark: http://www.huffingtonpost.com/entry/polls-judge-judy-supreme-court_us_569e98b3e4b04c813761bbe8

Brain Based Learning and Neuroscience – What the Research Says!

Will Thalheimer provides a refreshing look at the over-hyping of (and quite pernicious lies about) neuroscience and brain-based learning. As he observes, neuroscience is barely out of diapers in terms of actual usable results for educators, and those actually researching in the field have no illusions that it is anywhere close yet (though they are very hopeful). When it comes to learning practice, what the research says is pretty close to nothing.

I am a little sceptical about whether neuroscience will ever be really valuable in education. This is not to say it is valueless – far from it. We have already had some useful insights into memory and have a better idea of some of the things that reduce or increase the effectiveness of brain functioning (sleep, exercise, etc.), as well as a clearer notion of the mechanisms behind learning. Such things are good to know and can lead to some improvements in learning. The trouble, though, is that most researchers in the area are doing reductive science – seeking repeatable mechanisms and processes that underlie the phenomena we see. This is of very little value when dealing with complex adaptive systems and emergence. Stuart Kauffman demonstrates that there are two main reasons reductive explanations fail to give us any help at all with understanding emergent systems: epistemological emergence and ontological emergence. Epistemological emergence means that it is impossible in principle to predict emergent features from constituent parts. Ontological emergence means that completely different kinds of causality occur in and between emergent phenomena than in and between their constituent parts, so knowledge of how the constituent parts work has no bearing at all on the higher levels of causality in emergent phenomena. It is a totally different kind of knowledge.

Knowing how the brain works in education is useful in much the same way that knowing about movements of water molecules in clouds is useful in meteorology. There are insights to be gained, explanations even, but they are of relatively little practical value in predicting the weather, let alone in predicting the precise shape of a specific cloud. Worse, in education, we don’t have a very precise idea of what kind of cloud shape we are seeking, most of the time. In fact, when we act like we do (learning objectives and associated assessment) we usually miss a great deal of the important stuff.

But it is worse than that. Those of us concerned with education are not just predicting or explaining phenomena, but orchestrating them. You can no more extrapolate how to teach from knowing how the brain works than you can extrapolate how to paint a masterpiece from knowing what paint is composed of. They are not even in the same family of phenomena. This doesn’t mean that a painter cannot learn useful things about paint that can assist the process – how fast it dries, its colour fastness, its viscosity, etc, and it does open up potential avenues for designing new kinds of paint. But we still need to know what to do with it once we know that. So, yes, brain science has value in education. Just not that much.

Address of the bookmark: http://www.willatworklearning.com/2016/01/brain-based-learning-and-neuroscience-what-the-research-says.html