Git for teachers — Medium

This is a nice set of reflections on the potential value of GitHub to teachers. The title is broader, referring to the Git version control system, an open source tool with many compatible implementations, but most of the article is about GitHub, a closed commercial system that packages up Git with a deeply social workflow and a friendly interface, making the bulk of its money from those who want support for closed projects and teams rather than open source goodness. Ben rightly points out that a source control system is great for text but less great for binary files and that, despite the quite friendly interface, there is a considerable learning curve to using it effectively, especially if you are not used to the complexities of writing Markdown. Essentially, though it is a soft tool that can be repurposed and reassembled in many different ways, it is built for programmers, and structured in ways that support application development, not other things.

Ben’s suggestions are (typically) thought-provoking and good…

  1. An open source, freely-available content control tool designed for people working with non-code. It’s okay for it to not know about fancy file formats like Word, but it should be able to handle more than line-by-line changes. Perhaps call it scribe.
  2. A proprietary, beautifully designed ecosystem built around it. A ScribeHub.

Nice idea and, as he observes, one that some people have already tried and failed to do; he provides some good examples of tools that go part of the way. There’s a good discussion of some of the issues in the follow-up comments to a post by David Wiley a while back. That said, the big advantage of GitHub is that it does already exist (and is thriving) and does get used for much more than just coding. I really like some of the innovative uses of GitHub for things like journal production: https://github.com/ReScience/, for instance, uses it to make articles and research into living documents, updated as reviews and replications come in. But, as Ben says, it is not optimal for anything other than coding and text documentation and, though there are some great exemplars, it is not likely to hit the mainstream as an alternative means of production outside the coding and documentation community for some time, if ever. Also, much as I love GitHub for its innovative and smooth community integration, it is a commercial monolith. Such things should be distributed and open.

What makes GitHub so cool

Perhaps the biggest differentiating feature of GitHub, the one that makes it stand out from other similar tools, is the combination of (for the unpaid variant) required openness and the ability for anyone at all to make a ‘pull request’. Anyone can make a copy (a ‘fork’) of an existing GitHub project and (here is the good bit), if they make changes that would be useful in the upstream project, submit a pull request to the author(s) so that their changes can be incorporated (merged) back into the main branch of the code. GitHub provides tools that, at least for text, make such merging relatively pain-free. Through this mechanism, many loosely coupled people can cooperate on complex projects without the need for formal teams, coordination mechanisms, or complex project management. GitHub does, of course, have rich communication tools for discussing such changes and passing them back and forth, so it can be used very effectively by closed teams as well as in a more open, networked community, but its central social motif is the network, not the group.
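The fork-and-merge cycle can be sketched with plain Git commands. This is a simplified local simulation: the repository names and file contents are invented for illustration, and on GitHub the fork and pull request steps happen through its web interface or API rather than on the command line. Two local repositories stand in for the upstream project and the contributor’s fork:

```shell
#!/bin/sh
# A purely local sketch of the fork-and-merge mechanics described above.
set -e
rm -rf /tmp/fork-demo && mkdir -p /tmp/fork-demo && cd /tmp/fork-demo

# The original ('upstream') project, with one file and one commit
git init -q upstream
cd upstream
echo "Original course notes" > README.md
git add README.md
git -c user.name=author -c user.email=author@example.com \
    commit -qm "Initial commit"
cd ..

# A contributor copies the project (the 'fork') and improves it on a branch
git clone -q upstream fork
cd fork
git checkout -qb improve-notes
echo "A useful improvement from a contributor" >> README.md
git -c user.name=contributor -c user.email=contrib@example.com \
    commit -qam "Add a useful improvement"
cd ..

# Accepting the 'pull request' amounts to the upstream author
# fetching the contributor's branch and merging it in
cd upstream
git fetch -q ../fork improve-notes
git merge -q FETCH_HEAD
cat README.md   # now contains both the original line and the contribution
```

The point the workflow illustrates is that the contributor never needs write access to the upstream repository: the upstream author stays in control of what gets merged, which is what makes the loosely coupled cooperation possible.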

An idea

I have been thinking for some time about building a programming course that uses GitHub or, perhaps better, an open variant such as GitLab, or a related coding support tool with similar intent like Phabricator. The basic idea would be that the course itself would evolve through pull requests – if students or others have ideas for improvements, they would simply implement them and submit a pull request which the course owner could choose to merge or reject. Others could, of course, build their own versions of the course at will. I don’t think this is particularly original in itself – many have built OERs this way – but it makes sense to me as both a way of actually hosting a course, and as a way of building in student participation in the development and evolution of a course. Amongst other things, it opens up the potential for students to customize courses for their particular needs: if the basic model contains stuff that is irrelevant or already known, they could adapt it to the way they want it and, of course, share that with others. This in turn opens up some interesting options for scalability and personalization (the good sort). Rather than providing a single, monolithic MOOC, courses could branch off into many related versions, each with its own communities and interests. Someone might, for example, adapt the structure for a different language, focus down on a particular element, restructure it for different pedagogical designs, or extend it for more or less advanced learners. As the ‘course’ itself would be hosted on GitHub (or whatever) there would be no need for additional tools, and the course communities/cohorts could relatively easily blend with one another, or overlap. There would be evolutionary competition between the various branches, perhaps, leading to ever better (or, more accurately, better adapted) versions of the courses.

At least a part of the assessment of the course would be based around taking an existing codebase (in some possible variants, perhaps the code used for the system employed to host the course?) and making improvements to it. Credit would be awarded to those whose pull requests were accepted. One particularly nice thing about that is that all work would, by its very nature, be original. There would be no value at all in simply copying what someone else had done, and success would be measured according to real-world metrics: it would have to be good enough to enter production. It might get a bit complicated as the course matured and there were fewer obvious things to be improved, but I have yet to come across any perfect software beyond the very trivial and, with a plugin/service-based architecture, the potential for improvements could be virtually limitless. There’s scope here for most skill levels, apart from absolute beginners. And even relative newbies could contribute to things like documentation.

The idea appeals to me though, as others have found when trying to do something similar with OERs, the complexities mount up pretty quickly. One of the issues is that, unlike in the case of most programming code, one size does not fit all: it is not about producing one useful course or toolset. We are not talking about building an open textbook here, but a course that is suitable for many people in many different contexts. It is therefore likely that forks would be more useful than merges for most people. In the coding community, this can be a problem – you wind up with many similar forks of code, each of which goes its own (increasingly incompatible) way, diluting the value of and community around the original and making it difficult to choose between them (for instance, the many forks of MySQL or the two major branches of Open/LibreOffice). Big products can spawn so many forks and pull requests that the original authors can be overwhelmed. For courses, forking would allow for the kind of repurposing – contextualization around individuals and communities – that makes OERs worthwhile in the first place. More than with open source applications, though, there would also be issues with diluting the learning community: this might be a benefit for something like a MOOC, where numbers are too large to be managed in the first place, but not so good for smaller courses. There’s a balance to be sought. Having recently tried (and I am still trying) to incorporate changes from a main branch into a modified version of an OER course, I can verify that it can be fiendishly complex. I want to maintain our own localizations but the updates are great and, in some cases, necessary. Merging is really difficult, because there is a great deal more involved than simple text, hierarchical directories, and a few dependencies.

A system that would do as Ben suggests for complex media would be a great help in such things. Among those rich media, I would love it if it could cope with, say, an exported Moodle course, where it is not just content but process and structure that needs to be tracked, and where changes to structure could greatly impact the meaning and value of the content. The complex, soft dependencies and the need for narrative flow make such things structurally very different from the relatively constrained ways that programs can change, so I don’t have a clear idea of how that could be done right now. It would certainly be possible to use an XML interchange format to track such things, but those are made for machines, not people, to read. In fact, the only human-friendly way that I can think of for dealing with it would be to build it into the authoring environment itself – to have a Git-like thing at the back end of (say) Moodle. At Athabasca we do kind-of the same sort of thing using Alfresco to track changes, but the process is clunky, discontinuous, lacks the elegant cooperation of GitHub, is very document-centric (no fine-grained merging at all), and is very much a group, not a network, environment: teams exist in rigid organizational hierarchies, with roles that are anything but open and that limit what people can do, and with only a single, unforked course as the outcome.

Perhaps such a project – to build that friendlier front end – might be what course takers might use as raw material. Early, and more advanced, takers of the course would be building the infrastructure for later students. I rather like the idea.

Address of the bookmark: https://medium.com/@benwerd/git-for-teachers-e993d2ca423d#.nqby85xqs

Prizes as Curriculum • How my school gets students to “behave”

A harrowing report on systematic child abuse in an American school. What’s particularly tragic about that is that the teachers who are inflicting such abuses are not bad people: they genuinely believe that they are doing good or, if not good, then at least they are doing their best to help.

Louisa was a warm and well-meaning person. After this incident, she wanted to reflect on what had happened—it had been an upsetting day for all. Louisa asked herself certain questions and didn’t ask others. In the end, she was able to justify her decision in a way that enabled her to see her decision as a moral one. “Eric has problems entertaining himself, and that’s something we need to support him with. Maybe something is going on at home,” she sighed.

Very sad. We must change this reward and punishment culture. It does not work.

Address of the bookmark: http://www.rethinkingschools.org/archive/30_03/30-3_lagerwerff.shtml

Exams as the mothers of invention

I’m often delighted by the inventiveness and determination of exam cheats. It would be wonderful were such creativity and enthusiasm put into learning whatever it is that exams are supposed to be assessing but, tragically, the inherently demotivating nature of exams (it’s all about extrinsic motivation and various ways they diminish intrinsic motivation) means that this is a bit of an uphill struggle. I particularly like the ingenious but not very smart approaches mentioned in this article:

“One test taker apparently hid his or her mother under the desk, from where she fed the student answers, while in a second case, someone outside the test taker’s room communicated answers by coughing Morse code.”

Of course, the smart ones are not so easily discovered.

This is an endless and absurd arms race that no one can win. The inventiveness and determination of exam cheats is nearly but not quite matched by the inventiveness and determination of exam proctors. My favourite recent example is the Indian Army’s reported efforts to prevent exam cheating by making examinees remove all their clothes and sit in an open field, surrounded by uniformed guards. It is hard to believe this could happen but the source seems reliable enough and there are videos to prove it. I’m prepared to bet that they didn’t stop cheating altogether, though.

I’ve found one and only one absolutely foolproof method of preventing cheating in proctored exams: don’t give them in the first place, and challenge yourself to think of smarter ways of judging competence instead. Everyone is better off that way. But, if you are determined to give them, despite the overwhelming evidence that they are demotivating, unfair, unreliable, unkind and costly, don’t make it possible for the answers to be given in Morse code.

Address of the bookmark: https://www.insidehighered.com/quicktakes/2016/03/30/examity-shares-data-cheaters

The LMS of the future is yours! | Michael Goudzwaard

I think this is, from a quick skim through, the beginnings of a very good idea. An LMS that does almost nothing. Quoting directly:

“What would this LMS look like? In my view, it would have three things:

1) a course roster with stellar SIS integration

2) a gradebook

3) a rock-star LTI and API

That’s it! Oh, except it would also be open source, students would control their own data, including publishing any of their work or evaluations to the block chain, and you could host it locally, distributed, or in the cloud. Never mind the pesky privacy laws (or lack thereof) in the country hosting your server, because the LMS is back on campus. Not connected to the internet? That’s okay too, because there is a killer app that syncs like a boss (like Evernote. Has Evernote ever given you a sync error? No, I didn’t think so.)

Who wins with the new LMS? Students because they own and control their data and it costs less to buy and run. Instructors because they have a solid core with the option to plug any LTI into a class hub. Institutions because costs are lower and the system more secure.
Who loses? The EdTech companies. Or do they? Without standard wiki features and discussion portals, startups and the old standard barriers can invest their R&D and venture funds in really great tools.”

The principle is a little like that of Elgg, which consists of a very small core, with everything else coming from plugins that use the API.

It seems to me that, though this concept allows its users (teachers, students, admins alike) to do what they like with tools and data, it is still firmly based around the assumption of a traditional classroom model, and seems, as much as the traditional LMS, to reinforce that view. It’s still a course and grading management system, not a learning management system. It needs something that goes beyond the classroom, even in a traditional institutional setting. It needs much more flexible groupings, networks and sets.

With that in mind, this lightweight LMS still seems heavier than needed. A SIS might well provide course information, so the roster might be redundant. If not, a plugin or service could be written, rather than including it in the core. I am not at all sure that an integral gradebook is needed either, for much the same reason. It might, instead, benefit from a standards-based open source learning record store using xAPI (Tin Can), like Learning Locker. Or, perhaps, an integration of an OpenBadges backpack. Either, along with APIs that allow integration with things like SISs that could make badges look like grades or that could identify relevant learning records, could serve the necessary functions and allow a great deal more openness. Perhaps integrated support for some kinds of grouping and networking would help satisfy the needs of those who want to build institutional courses. All that is really needed is that rock-star API to pull it all together. This begins to sound a lot more like Elgg, and something that could, in principle, be implemented within it.
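To make the learning-record-store idea a little more concrete, an xAPI record is essentially an “actor verb object” statement in JSON, along these lines (an illustrative sketch: the learner, activity ID, and course name here are invented; only the “completed” verb URI is a standard ADL-defined verb):

```json
{
  "actor": {
    "objectType": "Agent",
    "name": "Example Learner",
    "mbox": "mailto:learner@example.com"
  },
  "verb": {
    "id": "http://adlnet.gov/expapi/verbs/completed",
    "display": { "en-US": "completed" }
  },
  "object": {
    "objectType": "Activity",
    "id": "http://example.com/courses/intro-programming/unit-1",
    "definition": {
      "name": { "en-US": "Introduction to Programming, Unit 1" }
    }
  },
  "timestamp": "2016-04-01T12:00:00Z"
}
```

Because the activity ID can point at anything with a URI – a badge, a blog post, a forum discussion – statements like this are not tied to courses or gradebooks at all, which is exactly the kind of openness a minimal core plus rock-star API would need.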

The blockchain idea is a good one: being able to free data from a central machine is much to be wished for. But it bothers me that privacy laws are seen as pesky and that they should be circumvented. They are pesky, for sure, but with good reason. We cannot force students to part with private data where laws do not protect them (I do have at least one course that does this, but it’s one of the conditions of enrolling because we are actually studying such things). What people do of their own accord is, of course, just fine, but the tacit assumption that this LMS-lite continues to reinforce is that learning happens in courses that lead directly to accreditation. That’s not about people doing things of their own accord.

With that in mind I can foresee a few interesting issues with authorization too, whatever path is taken. The mechanisms for deciding who allows what to be seen by whom might turn out to be quite complex because of the tension between hierarchical roles implied by this system and individual access authority implied by the freedom to use anything from anywhere, especially given the balkanization of social media space that currently exists and that is likely to form a good part of the basis of actual learning activities. Anything that is not public is going to have to interface with this in some quite tricky ways.

For all its embedded assumptions, I like the idea. Building an Elgg-like system with integral LTI, especially if it could support more learner-centric technologies like xAPI, OpenBadges and so on, seems like a sensible way to go.

Address of the bookmark: http://mgoudz.com/2016/02/26/the-lms-of-the-future-is-yours/

Study suggests high school students hold negative views of online education

This is a report on a poll of soon-to-be US high school graduates with aspirations to enter higher education, revealing an overwhelming majority want to take most of their college courses in person. Indeed, only just over a third wanted to do any online courses at all, while a measly 6% would be happy with half or more being online.

As the article rightly notes…

“Poulin warned against reading too much into those results. He argued that since many people associate the term “online learning” with massive open online courses and diploma mills, there are bound to be misperceptions. Studies that have looked at student outcomes from online courses have found them to be generally equal to those from face-to-face courses.”

It is also worth noting that those polled had largely not had any experience of online learning and have actively been taught not to learn that way. Schools teach a way of teaching, not just what is deliberately taught. And, especially in places like the US where standardized testing dominates actual learning, it is mostly a pretty terrible way of teaching. Overall, this is a comment about students’ attitudes and the failings of US schools, not about online learning. But, to be fair, though the US is notably weak in this regard, the same systemic problems significantly affect the vast majority of educational systems in the world, including in Canada.

One thing the study does tell us about online education is that a lot of work is needed to help get the message across to kids that online learning can be incredibly enriching and empowering in ways most face-to-face learning can only aspire to. Many adults that have spent time out of school (our dominant demographic) realize this, but school kids are so submerged in the teacher-tells-student-does model of education that it is understandably hard for them to think of it any other way.

An immediately obvious way to address this lack of knowledge would be to get a few of our better courses into schools, but I doubt that would help at all, and it might even make things worse. The trouble is, the methodologies and schedules of school teaching would mostly crowd it out: when our schools continue to teach dependence and teacher control, online learning – which thrives on freedom and independence – is likely to be swamped by the power-crazed structures that surround it. Faced with a scheduled, regimented system and compelling demands from local, discipline-maintaining teachers, it would be all too easy for online work to be treated as something to fit into small gaps between more obviously pressing demands.

Far better would be for schools to spend more time supporting self-guided learning, which would better support students to take control of their own learning paths, in life and in further learning. Whether online or face to face, one of my biggest challenges as a university professor has always been to unteach the terribly dependent ways of learning school students have been forced to learn. 

Another thing that would help would be for those of us in the profession of distance teaching to more forthrightly and single-mindedly design and promote the experience of online learning as being a way to reduce distance. To show that it can be more personal, more social, more engaging than at least what is found in traditional large lectures and big, faceless institutions, and not far off what is found in smaller seminars and tutorials (better in some ways, worse in others). As the article suggests, many students are put off by the apparent isolation and few realize that it does not have to be that way. Sadly, there is still too much of it that is that way, albeit far less commonly in places like AU.

But it is not just about courses and teaching, which make up only a small, if prominent, part of the learning experience at a traditional collocated university. Most online institutions don’t do anything like enough to go beyond the course to engage students in the richer, broader academic community. Thanks to the Landing, we at Athabasca are further ahead that way than most, but it only reaches a fraction of all staff and students and is all too often seen by some students as just an extension of the course environment. There are lots of ways the Landing could work better and do more but, really, it is only a small part of the solution. We need to embed the kinds of interactions and engagement it provides everywhere in our online spaces. If we don’t, we will eternally be stuck in a cold-start regime where students don’t come because there is no one there, and too many school kids (and others) will continue to miss out on the richness and value of online learning.

Address of the bookmark: https://www.insidehighered.com/news/2016/02/17/study-suggests-high-school-students-hold-negative-views-online-education

Demotivating students with participation grades

Alfie Kohn has posted another great article on ways we demotivate students. This time he is talking about the practice of ‘cold calling’ in classrooms, through which teachers coerce students that have not volunteered to speak into speaking, rightly observing that this is morally repugnant and reflects an inappropriate and mistaken behaviourist paradigm. As he puts it, “The goal is to produce a certain observable behavior; the experience of the student — his or her inner life — is irrelevant.” A very bad lesson to teach children. But it is not limited to children, and not limited to classrooms.

Online, it is way too common for teachers to achieve much the same results – with much the same moral repugnance and much the same behaviourist underpinnings – through ‘participation’ grades. We really need to stop doing this. It is disempowering, unfair (especially as, rather than grading terminal outcomes, one typically grades learning behaviours) and demotivating. It also too often leads to shallow dialogues, so it’s not as great for learning as it might be, but that’s the least of the problems with it.

Ideally we should help to create circumstances where students actually want to contribute and see value in doing so, regardless of grades. If it has no innate value and grades are needed to motivate engagement, there is something terribly wrong. There are lots of ways of doing that – not making everyone do the same thing, offering diverse opportunities for dialogue, for instance. I find student and tutor blog posts and the like are good for this, because they open up opportunities for voluntary engagement where topics are interesting, rather than having to follow a hierarchical threaded flow in a discussion forum. Allowing students a strong say in how they contribute can help – if they pick the topics and methods, they are far more likely to join in. Asking questions that matter to different students in different ways can help – choice is necessary for control, and is way easier to do in an asynchronous environment where multiple simultaneous threads can coexist. Splitting classes into smaller, mutually supportive groups (ideally letting students pick them for themselves) can be beneficial, especially when combined with pyramiding so each group contributes back to a larger group without the fear and power inequalities larger groups entail.

If grades are needed to enforce participation, it’s a failure of teaching.  Getting it right is an art and I freely admit that I have never perfected that art, but I am quite certain that grading participation is not the solution. There are no simple formulae that suit every circumstance and every student, but being aware of the problems rather than relying on a knee-jerk participation grade, especially, as is all too common, when there are no course learning outcomes that such a grade addresses, is a step in the right direction. Of course, if there actually is an explicit outcome that students should be able to argue, debate, discuss, etc then it is much less of an issue. That’s what the students (presumably) signed on to learn about, though there is still a lot of care needed to ensure all students have an equal chance, and that there is enough scaffolding, reflection and support available to ensure they are not graded on ‘raw’ untutored interaction, and that the interaction becomes a learning experience that is reflected upon, not just accomplished.

In case you are wondering how I deal with grading based on social interactions, my usual approach is to allow students to (optionally) treat their contributions as evidence of learning outcomes, typically in a reflective portfolio, and to encourage them to reflect on dialogues in which they may or may not have directly participated. This allows those that are comfortable contributing to do so, and for it to be rewarded if they wish, but does not pressure anyone to contribute for the sake of it, as there are always other ways to show competence. There’s still a reward lurking in there somewhere, so it is not perfect, but at least it provides choices, which is a start.

Address of the bookmark: http://www.alfiekohn.org/blogs/hands

Course Exam: Religious Studies (RELS) 211

One of what I hope will be a continuing series of interviews with AU faculty about their courses in AUSU’s Voice Magazine. This one is concerned with the intriguingly titled Death and Dying in World Religions, explained by its author and coordinator, Dr. Shandip Saha. It provides fascinating glimpses into the course rationale, process and pedagogy, as well as some nice insights into what drives and interests Dr Saha. There are some nice innovative aspects, such as formally arranged phone conversations between tutor and student at key points – low tech, high engagement, great for building empathy while doing much to assure high-quality results. It does make me wonder, when tutors inevitably get to know a lot about their students and their thinking, why an exam is still necessary. My inclination, in the next revision, would be to scrap it or make it more reflective (‘what I did on my course’ kind of thing), as it offers nothing much to an otherwise great-sounding course apart from a lot of stress and effort for all concerned. The course subject matter and pedagogy sound brilliant, and I really like Dr Saha’s attitude and approach to its design and implementation.

I would love to see more of these. It’s a great way of sharing knowledge and reducing the distance. One of the fascinating things about our virtual institution is that, in some ways, we have far greater opportunities to learn from one another than those in conventional institutions, where geographical isolation means people seldom get a chance to see how those in other centres and faculties think and work, and the local is always more salient than the remote. Online learning can and should break down boundaries. Apart from places like here on the Landing, where a few dozen courses have a pitch, we don’t normally take enough advantage of this. I would encourage any AU faculty who are running courses that are even a little out of the ordinary to share a bit about them with the rest of us via blogs on the Landing, even if the courses themselves don’t actually use the site. Or maybe even to contact Marie Well at the Voice Magazine to volunteer an interview!

Address of the bookmark: https://www.voicemagazine.org/articles/articledisplay.php?ART=11137

Dumb poll illustrates flaws in objective tests

Given its appearance in Huffpost Weird News, this is a surprisingly acute, perceptive and level-headed analysis of the much-headlined claim that 10% of US college graduates believe Judge Judy serves on the US Supreme Court. As the article rightly shows, this is palpable and scurrilous nonsense. It does show that a few American college graduates don’t know who serves on the Supreme Court (which is not exactly a critical life skill) but, given that over 60% got the answer correct and over 20% picked someone who did formerly serve, the results seem quite encouraging. The article makes the point that Judge Judy is referred to in the poll simply as Judith Sheindlin, which is not the name she is popularly known by, so there is no evidence at all that anyone actually believed her to be a Supreme Court judge. It was just a wrong and pretty random guess that no one would have got wrong if she had been referred to as ‘Judge Judy’. I’d go further. Most people would only know Judge Judy’s real name if they happened to be fans, in which case they would instantly recognize this as a misdirection and so be able to pick between the three remaining alternatives, one of which even I (with no interest in or knowledge of parochial US trivia) recognize as wrong. So it is quite possible that a large proportion of correct or nearly correct answers were actually due to people watching too much mind-numbing daytime TV. Great.

What it does show in quite sharp relief is how dumb multiple choice questions tend to be. If this were given as a quiz question in a course (not improbable – most are very much like it, and quite a few are worse) it would provide no evidence whatsoever that any given individual actually knew the answer. This is not even a test of recall, let alone of higher order knowledge. A wrong answer does not indicate belief that it is true, but a correct answer does not reliably indicate a true belief either. Individually, multiple choice questions are completely useless as indicators of knowledge; in aggregate, they are not much better.

As long as they are not used to judge performance or grade students, objective quizzes can be useful formative learning tools. Treated as fun interactive tools, they can encourage reflection, provide a sense of control over the process, and support confidence. They can also, in aggregate, provide oblique clues to teachers about where issues in teaching might lie. In a very small subset of subject matter (e.g. some sub-areas of math problem solving), given enough of them, they might coarsely differentiate between total incompetence and minimal competence. There are also a few ways to improve their reliability – adding a confidence weighting, for example, can help better distinguish between pure guesses and actual semi-recollection, and adaptive quizzes can focus in a bit more on misconceptions, if they are very carefully designed. But, if we are honest, the only reason they are ever used summatively in education or other fields of learning is because they are easy to mark, not because they are reliable indicators of knowledge or performance, and not because they help students to learn: in fact, when given as graded tests, they do exactly the opposite. I guess a secondary driver might be that it is easy to generate meaningful-looking (but largely meaningless) statistics from them. Neither reason seems compelling.

Apart from their uselessness at performing the task they are meant to perform, there are countless other reasons that graded objective tests are a bad idea, from the terrible systemic effects of teaching to the test, to the extrinsic motivation they rely on that kills the love of learning in most learners, to their total lack of authenticity. It is not hard to understand why they are so popular, but it is very hard to understand why teachers and others that see their job as to inspire, motivate and support would do this to students to whom they owe a duty of care.

Address of the bookmark: http://www.huffingtonpost.com/entry/polls-judge-judy-supreme-court_us_569e98b3e4b04c813761bbe8