In-person vs online teaching

This is roughly the content of my 3-minute pitch explaining (some of) my research, which I gave at the OUNL research day in Heerlen, Netherlands, yesterday. I was allowed one slide:

in-person vs self-paced online learning

This is (very roughly) what I said:

Mediaeval scholars were faced with the problem that knowledge (doctrine, actually), often found in rare and expensive books, needed to be passed from the few to the many. Lecturing was an efficient solution, given the constraints of physics. Because everyone needed to be in the same place at the same time for this to work, we developed schools, universities, classes, courses, timetables, terms, and semesters. We built resources like libraries. We created organizational units to manage it all, like faculties and colleges. Above all, for efficiency, we needed rules of behaviour and a natural power dynamic putting the lecturer in control of every moment of the learning activity in a classroom.

Learning (like most things) works best – by far – when learners are intrinsically motivated. It barely works at all when learners are amotivated. Self-determination theory tells us that three things are needed for intrinsic motivation: support for autonomy, competence, and relatedness. The mediaeval solution was good for relatedness, but bad for competence (some found it too challenging, some not challenging enough) and terrible for autonomy. The chance of amotivation is thus very high. Many of our pedagogies, processes, and much of the art of teaching since then have been, in one way or another, attempts to deal with this one central problem. The most common solution to the resulting lack of intrinsic motivation was to apply externally regulated extrinsic motivation – rewards like grades and qualifications, rules of attendance, punishments for non-compliance, and so on – which, self-determination theory shows, is reliably fatal to intrinsic motivation, making things far worse. How crazy is it that we have to force people to do the one thing that makes us most human, to follow a drive to learn that is arguably stronger than sex or even the pursuit of food? Good teachers using well-considered teaching methods can usually overcome many of the issues, at least for many students much of the time. But that’s what good pedagogy means: it is highly situated in solving the innate problems of in-person teaching.

On the whole, for perfectly understandable reasons (much distance teaching evolved in an in-person context with which it had to interoperate), we have transferred those exact same pedagogies unthinkingly to open, self-paced, self-directed distance learning. ‘Teaching is teaching’, advocates claim, and so they try, as much as possible, to replicate online what they do in a classroom. But the motivational problems faced by distance learners are almost the exact inverse of those of in-person learners. They have lots of autonomy – you can’t really take it away – and can take different paths and pacing to gain competence (e.g. rewinding or skipping videos, re-reading text, augmenting with other resources, etc.), but they tend to suffer from reduced relatedness, especially when learning truly independently, in a self-paced modality. Given this mismatch and the lack of well evolved support and processes for this very different context, it is not surprising that there is often a high rate of attrition, especially when teachers (lacking the closeness and authority of in-person colleagues) double down on rewards and punishments through grades, even to the extent of rewarding participation, thus making it even worse.

There is no such thing as a disembodied, abstract, decontextualized pedagogy – it is all about orchestrating technologies – so any solution must be as much about building tools and structures as it is about using techniques and methods. They are entirely inseparable. A significant part of my current research is thus an attempt to design native online pedagogies, technologies, and other parts of educational systems (including credentialling) that don’t rely on reward and punishment; that are built for supporting learning in the complex, ever-changing modern world that does exist, rather than for the indoctrination of mediaeval students.


A blast from my past: Google reimplements CoFIND

While searching for a movie using Google Search last night I got (for the first time that I can recall) the option to tag the result, as described in this article. I was pleased to discover that the tool they provide for this is virtually identical (albeit with a much slicker and more refined modern interface) to the CoFIND system that underpinned my PhD, which I built over 20 years ago now. You are presented with a list of tags, and can select one or more that describe the movie, and/or suggest your own, effectively creating a multi-dimensional rating system that other users can use to judge what the movie is like. When I rated the movie last night, for instance, popular tags presented to me included ‘terrible acting’, ‘bad writing’, ‘clichéd’, ‘boring’, and so on. Having seen the movie, I agree about the bad writing and clichés – it was at the terrible end of the scale – but actually think most of the acting was fairly good, and it was not very boring. What is interestingly different about this, compared with other tagging systems currently available, is that this kind of tag is fuzzy – it represents a value statement about the movie that exists on a continuum, not a simple categorization. The sorting algorithm for the list of tags presented to you appears (like my original CoFIND) to be based mainly on simple popularity, though it is possible that (like CoFIND) it uses other metrics such as tag age and perhaps even a user model as well. It’s vastly more useful and powerful than the typical thumbs-up/thumbs-down that Google normally provides. The feature has sadly not reappeared on subsequent movie searches, so I am guessing that Google is either still testing it or trying to build up a sufficient base of recommendations by occasionally showing it to people, before opening it up to everyone.

Just in case Google or anyone else has tried to patent this, and to assert my prior art, you can find a description and screenshots (p183 and p184) of my original CoFIND system in chapter 6 of my PhD thesis, as well as in many papers before and since, not to mention in a fair few blog posts. It’s out there in the public domain for anyone to use. The interface of my system was, even by the standards of the day, pretty awful and not even a fraction as good as the one provided by Google, but those were different times: it did work in exactly the same way, though. As I developed it further, the interface actually became much worse. Over the course of a few years I experimented with quite a range of methods to get and display ratings/tags, including an ill-conceived Likert scale as well as a much more successful early use of tag clouds, all of which added complexity and reduced usability. Some of these later systems are described and discussed in my PhD too. In its final, refactored, and heavily evolved form, which postdates my PhD by several years, a version of CoFIND (last modified 2007) is actually still available. It almost reverts to the Google-style tag selection approach of the original, with the slight tweak that, in CoFIND, you can disagree about any particular tag use (for instance, if you don’t believe something to be inane then you can cast a vote against that tag). The interface remains at least as awful as the original, though, and not a patch on Google’s. The main other differences, apart from interface variations, are that the nomenclature differs (I used ‘qualities’ rather than ‘tags’), and that CoFIND could be used for anything with a URL, not just movies. If you’re interested, click on any resource link in the system and you’ll see my primitive, ugly, frame-based attempt to do very much the same as Google is doing for movies (NB unless you are logged in you cannot add new qualities but, for authorized users, a field appears at the end that is just like Google’s). Though primarily intended to share and recommend educational resources, CoFIND was very flexible and was, over the years, used for a range of other purposes, from comparing interface designs to discovering images and videos. It was always flaky, ugly, and unscalable, but it worked well enough for my research and teaching purposes, and (because it provides RSS feeds) it was my go-to tool for sharing interesting links right up until 2007, after which I reverted to more conventional but better-maintained tools like the Landing or WordPress.

A little bit of CoFIND background

I’ve written a fair bit about CoFIND, formally and informally, but not for a few years now, so here’s a little background for anyone that might be interested, and to remind myself of a little of what I learned all those years ago in the light of what I know now.

An evolving, self-organizing, social bookmarking tool

I started my PhD research in 1997 with the observation that, even then, there was a vast amount of stuff to learn from that could be easily found on the Web, but that it was really difficult to find good stuff, let alone stuff that was actually useful to a particular learner at a particular stage in their development. Remember that this was before Google even started, so things were significantly worse then than they are now. Infoseek was as good as it got.

I had also observed that, in any group of learners, people would find different things and, between them, discover a much larger range of useful resources than any one learner (or teacher) could do alone, a fact that I use in my teaching to this day. These would likely be (and, it turned out, in reality were) better than what a teacher could find alone because, though individual learners might be less able to distinguish low from high quality, they would know what worked for them, and sufficient numbers of eyes would weed out the bad stuff as long as there was a mechanism for it. This was where I came in.

The only such mechanisms widely available at the time were simple rating systems. However, learners have very different learning needs, so I immediately realized that ‘thumbs-up’ or simple Likert scales would not work. This was not about finding the one ‘best’ solution for everyone, but was instead concerned with finding a range of alternatives to fill different ecological niches, and somehow discovering the most useful solution in that niche for a given learner at a given time.  My initial idea was to make use of a crowd, not an individual curator, and to employ a process closely akin to natural evolution to kill bad suggestions and promote good ones, in order to create an ecosystem of learning resources rather than a simple database. CoFIND was a series of software solutions that explored and extended this initial idea.

CoFIND was, on the face of it, what would eventually come to be called a social bookmarking system – a means for learners to find and to share Web resources (and, later, other things) with one another, along with a mechanism for other learners to recommend or critique them. It was by no means the first social bookmarking system, but it was certainly not a common genre at the time, and I don’t think such a dedicated system had ever been used in education before (for all such assertions, I stand to be corrected), though other means of sharing links, from simple web pages or wikis or discussion forums to purpose-built teacher-curated tools, were not that uncommon. A lot of my early research involved learning about self-organization and complex systems, in particular focusing on evolution and stigmergy (self-organization through signs left in the environment). As well as the survival-of-the-fittest dynamic, evolution furnished me with many useful concepts that I made good use of, such as the importance of parcellation, the necessity of death, ways to avoid skyhooks, the benefits of spandrels, ways to leverage chance (including extinction events), and various approaches to supporting speciation. As a result of learning about stigmergy I independently developed what later came to be known as tag clouds. I don’t believe that mine were the first ever tag clouds – weighted lists of one sort or another had been around for a few years – but, though mine didn’t then use the name, they were likely the first uses of such things in educational software, and almost certainly the first with this particular theoretical model to support them (again, I am happy to be corrected).

A collaborative filter

The name CoFIND is an acronym for ‘collaborative filter in n-dimensions’. The n dimensions were substantiated through what we (my supervisors and I) called qualities. We went through a long list of possible names for these, and I was drawn for a while to calling them ‘values’, but (unfortunately) we never thought of ‘tags’ because the term was not in common use for this kind of purpose at the time. After a phase of calling them q-tags, I now call qualities by the much more accessible name of ‘fuzzy tags’. Fuzzy tags are not just binary classifications of a topic but tags that describe what we value, or don’t value, in a resource, and how much we value it. While people may sometimes disagree about binary classifications (conventional tags), it is always possible to have different opinions about the application of fuzzy tags: some may find something interesting, for instance, while others may not, and others may feel it to be quite interesting, or incredibly so. Fuzzy tags relate to fuzzy sets, which have a continuum of grades of membership, and that is where the name comes from. Different versions of CoFIND used different ways to establish the fuzziness of a tag – the Likert scale used in a few mid-period versions was my failed attempt to make it explicit, but this was a nightmare for people to actually use. The first versions used the same kind of frequency-based weighting as Google’s movie tags, but that was a bit coarse – I was uncomfortable with the averaging effect and the unbridled Matthew Effect that threatened to keep early tags at the top of the list for all time, which I rather coarsely kept in check with a simple age-related weighting that was only boosted when tags were used (the unfortunate side effect being that, if a system was not used for a few weeks, all the tags vanished in a huge extinction event, albeit that they could be revived if anyone ever used one of the dead ones again). The final version was a bit in-between, allowing an indefinitely large scale via simple up-down ratings, balanced with an algorithm that included a decaying but renewable novelty weighting that adjusted to the frequency of use of the system as a whole. This still had the peculiar effect of evening out/re-initializing all of the tags over time if no one used the system, but at least it caused fewer catastrophes.
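To make that a little more concrete, here is a minimal sketch of the kind of scoring the final version used, written in Python (which is not what CoFIND was built in) with entirely invented weights and half-life: net up/down votes combined with a novelty bonus that decays since a tag was last used but is renewed whenever anyone uses it again. It deliberately omits the adjustment to overall system activity mentioned above, so treat it as an illustration of the idea rather than a reconstruction of the actual algorithm.

```python
import math
import time

# Assumed, illustrative constants – not CoFIND's actual values.
NOVELTY_HALF_LIFE = 14 * 24 * 3600   # novelty halves every two weeks
NOVELTY_WEIGHT = 5.0                 # how much a freshly used tag is boosted

def tag_score(up_votes, down_votes, last_used, now=None):
    """Rank a fuzzy tag by net votes plus a decaying, renewable novelty bonus."""
    now = time.time() if now is None else now
    net = up_votes - down_votes
    age = max(0.0, now - last_used)
    novelty = math.exp(-math.log(2) * age / NOVELTY_HALF_LIFE)  # 1.0 when fresh
    return net + NOVELTY_WEIGHT * novelty

# Tags are then presented in descending score order:
tags = {
    "interesting": (12, 3, time.time() - 7 * 86400),   # last used a week ago
    "too difficult": (4, 1, time.time() - 3600),       # last used an hour ago
}
ranked = sorted(tags, key=lambda t: tag_score(*tags[t]), reverse=True)
print(ranked)  # recently used tags get a temporary leg-up over older ones
```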

‘Traditional’ collaborative filters simply discover whether things are likely to be more valued or less valued on a usually implicit single dimension (good-bad, liked-disliked, useful-useless, etc). CoFIND’s qualities/fuzzy tags allowed people to express in what ways they were better or worse – more interesting, less helpful, more complex, less funny, etc, just as Google’s movie tagging allows you to express what you like or dislike about a movie, not just whether you liked it or not. In many tag-based systems, people tend to use quite a few simple tags that are inherently fuzzy (e.g. Flickr photos tagged as ‘beautiful’) but they are seldom differentiated in the software from those that simply classify a resource as fitting a particular category, so they are rarely particularly helpful in finding stuff to help with, say, learning.

I was building CoFIND just as the field of collaborative filtering was coming out of its infancy, so the precise definition of the term had yet to be settled. At the time, a collaborative filter (then usually called an ‘automated collaborative filter’) was simply any system that used the prior explicit and/or implicit preferences of a number of previous users (a usually anonymous crowd) to help make better recommendations and/or filter out weaker recommendations for the current user. The PageRank algorithm that still underpins Google Search would perhaps have then been described as a collaborative filter, as was one of its likely inspirations, PHOAKS (People Helping One Another Know Stuff), which mined Usenet newsgroups for links, taking them as implicit recommendations within the newsgroup topic area. By this definition, CoFIND was in fact a semi-automated collaborative filter that combined explicit preferences with automated matching. Nowadays the term ‘collaborative filter’ tends to apply only to a specific subset of recommender systems that automatically predict future interests by matching individual patterns of behaviour with those of multiple others, whether by item (people who bought this also bought…) or by user (people whose past or expressed preferences seem to be like yours also liked…). I think that, if I built CoFIND today, I would simply refer to it more generically as a recommender system, to avoid confusion.
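For contrast with CoFIND’s explicit, visible qualities, here is a toy illustration of what the modern, narrower sense of the term usually means: item-based collaborative filtering over a user–item rating matrix, using cosine similarity between items. The data and names are invented, and real systems add normalization, implicit signals, and far more scalable maths; this is just the bare shape of ‘people who liked this also liked…’.

```python
import math
from collections import defaultdict

# Invented example data: user -> {item: rating}
ratings = {
    "ann": {"film_a": 5, "film_b": 4, "film_c": 1},
    "bob": {"film_a": 4, "film_b": 5},
    "cal": {"film_b": 2, "film_c": 5},
}

def item_vectors(ratings):
    """Pivot user->item ratings into item->user rating vectors."""
    items = defaultdict(dict)
    for user, rated in ratings.items():
        for item, score in rated.items():
            items[item][user] = score
    return items

def cosine(a, b):
    """Cosine similarity between two sparse rating vectors."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[u] * b[u] for u in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def similar_items(target, ratings):
    """Rank other items by how similarly they were rated to `target`."""
    items = item_vectors(ratings)
    return sorted(
        ((other, cosine(items[target], items[other])) for other in items if other != target),
        key=lambda pair: pair[1],
        reverse=True,
    )

print(similar_items("film_a", ratings))  # 'people who liked film_a also liked…'
```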

Disembodied user models

Rather than a collaborative filter, back in the late 90s Peter Brusilovsky saw CoFIND as a new species of educational adaptive hypermedia, as it was perhaps the first (or at least one of the first) that worked on an open corpus rather than through a closed corpus of linked resources. However, he and I were both puzzled about where to find the user model, which was part of Peter’s definition of adaptive hypermedia. I didn’t feel that it needed one, because users chose the things that mattered to them at runtime. In retrospect, I think that the trick behind CoFIND, and what still distinguishes it from almost all other systems apart from this fairly new Google tool, is that it disembodied and exposed the user model. Qualities were, in essence, the things that would normally be invisibly stored in a user model, but I made them visible, in an extreme variant of what Judy Kay later described as scrutable adaptation.  In effect, a learner chose their own learner model at the time they needed it. The reasoning behind doing so was that, for learners, past behaviour is usually a poor predictor of future needs, mainly because 1) learning changes people (so past preferences may have little bearing on future preferences), and 2) learning is driven by a vast number of things other than taste or past actions: we often have a need for it thrust upon us by an extrinsic agency, like a teacher, or a legislative demand for a driving licence, for instance. Qualities (fuzzy tags) allow us to express the current value of something to us, in a form that we can leave behind without a lot of sticky residue, and that future users can use. In fact, later versions did tend to slightly emphasize similar things to those people had added, categorized, or rated (fuzzily tagged) earlier, but this was just a pragmatic attempt to make the system more valuable as a personal bookmark store, and therefore to encourage more use of it, rather than an attempt to build a full-blown collaborative filter in the modern sense of the word.

Moving on

I still believe that, in principle, this is an excellent approach and I have been a little disappointed that more people have not taken up the idea and improved on it. The big and, at the time, insurmountable obstacles that I hit were 1) that it demands a lot of its users, who have to provide both tags and resources with little obvious personal benefit, so it is unlikely to get a lot of use; 2) that the cold-start problem that affects most collaborative filters (they rely on many users to be useful but no one will use them until they are useful) is magnified by every one of those n dimensions, so it demands a very large number of users; and 3) that it is fiendishly hard to represent the complex ecological niches effectively in an interface, making the cognitive load unusably high. Google seems to have made good progress on the last point (an evolution enabled by improved web standards and browsers combined with a simplification of the process, which together are enough to reduce the cognitive load by a sizeable amount), and has more than sufficient users to cope with the first and second points, at least with regard to movie recommendations. It remains hard to see how this would work in an educational setting in anything less than the largest of MOOCs or the most passionately focused of user bases. However, I would love to see Google extend this mechanism to OERs, courses, and other educational resources, from Quora answers to Khan Academy tutorials, because they do have the numbers, and it would work well. For the same reasons, it would also be great to see it applied to something like StackExchange or similar large-scale systems (Reddit, perhaps) where people go to seek solutions to learning problems. I doubt that I will build a new version of CoFIND as such, but the ideas behind it should live on, I think, and it’s great to see them back on a system as big as Google Search, even if it is so far only experimental and only used to recommend movies.

Power, responsibility, maps and plans: some lessons from being a Chair

Empty chair

I’ve reached the end of my first week of not Chairing the School of Computing & Information Systems here at Athabasca University, which is now in the capable hands of the very wonderful Ali Dewan.

Along with quite a few people that I know, I am amazed that I stuck it out for over 3 years. I was a most reluctant Chair in the first place, because I’d been in middle management roles before and knew much of what to expect. It’s really not my kind of thing at all. Ideologically and temperamentally I loathe hierarchies, but I’d rather be at the top or at the bottom if I have to be in one at all. However, with the help of some cajoling, I eventually convinced myself that being a Chair is essentially much the same as being a teacher, which is an activity that I both enjoy and can mostly do reasonably well. Like a teacher (at least one that does the job well), the job of a Chair is to help nurture a learning community, and to make it possible for those in that community to achieve what they most want to achieve with as few obstacles as possible. Like teaching, it is not at all about telling, but about listening, supporting, and helping others to orchestrate the process for themselves; not so much about leadership as followership, about being a supportive friend. It’s a bit about nudging and inspiring, too, about sharing the excitement of discovery and growth with other people. It’s a bit about challenging people to be who they want to be, collectively and individually. It’s a bit about solving problems, a bit about being a shoulder to cry on, a bit about being a punchbag for those needing to let off steam, an arbiter in disputes. It could be fun. And I could always give it up after a few months if it didn’t work out. That was what I convinced myself.

On the bright side, I don’t think that I broke anything vital. I did help a couple of good things to happen, and I think that most of my staff were reasonably happy and empowered, a few of them more than before. One or two were probably less happy. But, in the grand scheme of it all, I left things much the same as or a little better than I found them, despite often strenuous efforts to bring about far more exciting changes. My tenure as Chair was, on the whole, not great, but not terrible. I have been wondering a bit about why that happened, and what I could or should have done differently, which is what the next part of this post is about.

Authority vs influence, responsibility vs power

One of my most notable discoveries (more accurately, rediscoveries) is that authority and responsibility barely, if at all, correlate with power and influence. In fact, for a middle management role like this, the precise inverse is true. One of the strange paradoxes of being in a position of more responsibility and authority has been that, in many ways, I feel that I’ve actually had considerably less capacity to bring about change, or to control my own life, than I had as a plain old professor.  It’s just possible that I may have overused the joke about a Chair being the one everyone gets to sit on, but it resonated with me. And this is not to contradict Uncle Ben’s sage advice to Spiderman – it may be true that with great power comes great responsibility, but that doesn’t mean that with great responsibility comes great power.

Partly the problem was just the myriad small but draining demands that had to be met throughout the course of a typical day (most of which were insufferably tedious and mostly mindless bureaucratic tasks that anyone else could have done at least as well), as well as having to attend many more meetings, and to engage in a few much lengthier tasks like workload planning. It wore me down. I put a lot of things that were important to me, but that didn’t contribute to my role, to one side because there were too few chunks of uninterrupted time to do them. Blogging and sharing on social media, for instance.

Partly it was because I felt that my role was primarily to support those that reported to me – I had to do their bidding much more than they had to do mine. Instead of doing what I would intrinsically wish to do, much of the time I was trying to do what those that I supervised required of me. This was not just a result of my own views on leadership. I think a lot of it would have affected most people in the same position.

Partly it was because I often felt (with a little external reinforcement) that I must shut up and/or toe the line because I represented the School or the Dean or the University. Being the ‘face’ of the school meant that I often felt obliged to try to represent the opinions and demands of others, even when I disagreed with them. Often, I had to present a collective agenda, or that of an individual higher up the food chain, rather than my own, whether or not I found it dull, mistaken, or pointless. Also, being a Chair puts you in some sensitive situations where a wrong step can easily lead to litigation, grievance proceedings, or (worse) very unhappy people. I’m not naturally tactful or taciturn, to say the least, so this was tricky at times. I sometimes stayed quiet when I might otherwise have spoken out.

The upshot of it is that, as a Chair, I was directly responsible both to my Dean and to the people I supervised (not to mention more or less directly to students, visitors, admins, tech staff, VPAs, etc, etc), and I consequently felt that I had very little control over my own life at all. Admittedly it was at least partly due to my very intentional approach to the role, but I think similar issues would emerge no matter what leadership style I had adopted. There’s a surprising amount of liberty in being at the bottom of a hierarchy, at least when (like all academics) you are expected – nay, actually required – to be creative, self-starting, and largely autonomous in your work. Academic freedom is a wonderful thing, and some of it is subdued when you move a little way up the scale.

Some compensations 

There have been plentiful compensations, of course. I wouldn’t have stayed this long if it had been uniformly awful. Being a Chair made some connections easier to make, within and beyond the university, and has helped me get to know my colleagues a lot better. And I have some great colleagues: it would have been much harder to manage had I not had such a friendly, supportive, smart, creative, willing, and capable team to work with. I solved or at least made fair progress on a few problems, none huge but all annoying, and helped to lay the groundwork for some ongoing improvements. There were opportunities for creativity here and there. I will miss some of the ways I could help shape our values and systems simply thanks to being a Chair, rather than having to actually work at it. I’ll miss being the default person people came to with interesting ideas. I’ll miss the very small but not trivial stipend. I’ll miss being involved by default in most decisions that affect the school. I’ll miss the kudos. I’ll miss being a formal hub in a network, albeit a small one.

Not quite like teaching

In most ways I was right about the job being much like teaching. Most of the skills, techniques, goals, and patterns are very similar, but there’s one big difference that I had not thought enough about. On the whole, most actual teachers engage with learners over a fairly fixed period, or at least for a fixed project, and there is a clear beginning, middle, and end, with well-defined rituals, rules, and processes to mark their passage. This is even true, to an extent, of more open forms of teaching like apprenticeship and mentorship. Although this is in some ways true of any kind of project, the fact that people working together in a social group are both the focus and the object of change makes it fairly distinctive. I can’t think of many other human activities that are particularly similar to teaching in this regard, apart from perhaps some team sports or, especially, the performing arts.

To be a teacher without a specific purpose in mind is a surprisingly different kind of activity, like producing an improvised play that has no script, no plot, no beginning, and no end. Although a teacher is responsible to their students, much as I was responsible to my staff, that responsibility is tightly delimited in time and in scope, so it remains quite manageable, for the most part. In retrospect, I think I should have planned it better. I probably should have set more distinct goals, milestones, tasks, sub-projects, and so on. I should have planned for a very clear and intentional end, and set much firmer boundaries. It would not have been easy, though, as many goals emerged over the years, a lot changed when we got our new (and much upgraded) administration, and a lot depended on serendipity and opportunism. I had, at first, no idea how long I would stick with the role. Until quite some time into it, I had only a limited idea about what changes I might even be allowed to accomplish (not much, as it happens, with no budget, a freeze on course development, diminishing staff numbers, a need to fit faculty plans, etc.). It might have been difficult to plan too far ahead, though it would have been really useful to have had a map showing the directions we might have gone and the limits of the territory. I think there may be useful lessons to be learned from this about support for self-directed lifelong learning.

Lessons for learning and teaching

A curse of institutional learning can be the many scales of rigid structure it provides, which too often take agency away from learners and limit support for diversity. However, it also supports an individual learner’s agency to have a good map of the journey ahead, even if all that they are given is the equivalent of a bus route, showing only the fixed paths their learning will take. I have long grappled with the tensions and trade-offs between surfing the adjacent possible and following a planned learning path. I spent a lot of time in the late 1990s and early 2000s designing online systems that leveraged the crowd to allow learners to help one another to learn, but most of them only helped with finding what to do next, or with solving a current problem, not with charting a whole journey. Figuring out an effective way to plan ahead without sacrificing learner control was one of the big outstanding research problems left to be solved when I finished my PhD (in self-organized learning in networks) very many moons ago, and it still is. There are lots of ineffective ways that I and others have tried, of course. Obvious approaches like matching paths through collaborative filtering or similar techniques are a dead end: there are far too many extraneous variables to confound them, and far too much variation in start and end points to cater for effectively, even if you start with a huge dataset. This is not to mention the blind-leading-the-blind issues, the fact that learning changes people (so past activity poorly predicts future behaviour), and the fact that there is often a narrative context that assumes specific prior activities have occurred and that known future activities will follow. Using ontologies is even worse, because the knowledge map of a subject developed by subject experts is seldom if ever the best map for learning, and may be among the worst. The most promising approaches I have seen, and that I had a doctoral student working on myself until he had to give up in the mid 2000s, mine the plans of many experts (e.g. by looking at syllabuses) to identify common paths and branches for a particular subject, combining them with whatever other information can be gleaned to come up with a good direction for a specific learner and learning need. However, there are plenty of issues with that, too, not least of which is the fact that institutional teaching assumes a very distinctive context, and suffers from a great many constraints (from having to be squashed into a standardized length to fitting preferred teaching patterns and schedules) that learners unhindered by such arbitrary concerns would neither want nor need. Many syllabuses are actually thoughtlessly copied from the same templates (e.g. from a professional association model syllabus), or textbooks, and may be awful in the same ways. And, again, narrative matters. If you took a chunk out of one of my courses and inserted it somewhere else it would often change its meaning and value utterly.
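To give a flavour of the syllabus-mining idea, here is a deliberately naive sketch in Python, with invented data: it counts how often experts place one topic after another across several syllabuses and suggests likely next steps from wherever a learner currently is. It ignores branching, narrative context, and all the constraints mentioned above, so it is an illustration of the general shape of the approach, not what my student actually built.

```python
from collections import Counter, defaultdict

# Invented example data: each syllabus is an ordered list of topics.
syllabuses = [
    ["variables", "conditionals", "loops", "functions", "recursion"],
    ["variables", "loops", "conditionals", "functions", "objects"],
    ["variables", "conditionals", "loops", "functions", "objects"],
]

def next_topic_counts(syllabuses):
    """Count topic -> next-topic transitions across all syllabuses."""
    transitions = defaultdict(Counter)
    for plan in syllabuses:
        for current, following in zip(plan, plan[1:]):
            transitions[current][following] += 1
    return transitions

def suggest_next(topic, syllabuses, k=3):
    """Return the k topics that experts most often placed after `topic`."""
    return next_topic_counts(syllabuses)[topic].most_common(k)

print(suggest_next("loops", syllabuses))  # e.g. [('functions', 2), ('conditionals', 1)]
```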

This is a problem I would dearly love to solve. Though I stand by my teaching approaches, one of the biggest perennial complaints about the tools and methods I tend to use is that it is easy to feel lost, especially if the helping hands of others are not around when needed. There are always at least a few students who would, as a matter of principle, rather be told what to do, how to do it, and where to go next. The majority would prefer to work in an environment that avoids the need for unnecessary decisions, such as where to upload a file, that have little to do with what they are trying to learn. My role (and that of my tutors, and the design of my courses) is to help them through all that, to relieve them of their dependency on being told what to do, and to help them at least understand why things are done the way they are done. However, that can result in quite inconsistent experiences if I or the tutors drop the ball for a moment. It can be hard for people who have been taught, often over decades, that teaching is telling, and that learning can reliably be accomplished by following a set of teacher-determined steps, to be set adrift to figure it out in their own ways.

It is made far worse by the looming threat of grades that, though eliminated in my teaching itself, still lie in wait at the end of the path as extrinsic targets. Students often find it hard to know in advance how they will meet the criteria, or even whether they have met them when they reach the end. I can and do tell them all of this, of course, usually repeatedly and in many ways and using many media, but the fact that at least some remain puzzled just proves the point: teaching is not telling. Again, a lot of manual social intervention is necessary. But that leads to the issue that following one of my courses demands a big leap of faith (mainly in me) that it will turn out OK in the end. It usually takes effort and time to build such trust, which is costly for all concerned, and is easily lost with a careless word or a missed message.  It would be really useful for my students to have a better map that allows them to plan detours and take more alternative transit options for themselves, especially with overlays to show recommended routes, warnings of steep hills and traffic, and real-time information about the whereabouts of people on their network and points of interest along the way. It would, of course, also be really handy to have a big ‘you are here’ label.  I would have really liked such a map when I started out as Chair.

Moving on

Leaving the Chair role behind still feels a little like stepping off a boat after a rough voyage, and either the land or my legs feel weird, I’m not sure which. As my balance returns, I am much looking forward to catching up with things I put to one side over the past 3 years. I’m happy to be getting back to doing more of what I do best, and I hope to be once more sharing more of my discoveries and cogitations in posts like this. It’s easier to move around with your feet on the ground than when you are sitting on a chair.


The return of the weblog – Ethical Tech

Blogs have evolved a bit over the past 20 years or so, and diversified. The always terrific Ben Werdmuller here makes the distinction between thinkpieces (what I tend to think of as vaguely equivalent to keynote presentations at a conference, less than a journal article, but carefully composed and intended as a ‘publication’) and weblogging (kind of what I am doing here when I bookmark interesting things I have been reading, or simply a diary of thoughts and observations). Among the surprisingly large number of good points that he makes in such a short post is that a weblog is best seen as a single evolving entity, not as a bunch of individual posts:

Blogging is distinct from journalism or formal writing: you jot down your thoughts and hit “publish”. And then you move on. There isn’t an editorial process, and mistakes are an accepted part of the game. It’s raw.

A consequence of this frequent, short posting is that the product isn’t a single post: it’s the weblog itself. Your website becomes a single stream of consciousness, where one post can build on another. The body of knowledge that develops is a reflection of your identity; a database of thoughts that you’ve put out into the world.

This is in contrast to a series of thinkpieces, which are individual articles that live by themselves. With a thinkpiece, you’re writing an editorial; with a blog, you’re writing the book of you, and how you think.

This is a good distinction. I also think that, especially in the posts of popular bloggers like Ben, the blog also comprises the comments, trackbacks, and pings that develop around it, as well as the tweets, pins, curations, and connections made in other social media. Ideas evolve in the web of commentary and become part of the thing itself. The post is a catalyst and attractor, but it is only part of the whole, at least when it is popular enough to attract commentary.

This distributed and cooperative literary style can also be seen in other forms of interactive publication and dialogue – a Slashdot or Reddit thread, for instance, can sometimes be an incredibly rich source of knowledge, as can the dialogue around a thinkpiece or (less commonly) the comments section of online newspaper articles. What makes the latter less commonly edifying is that their social form tends to be that of the untarnished set, perhaps with a little human editorial work to weed out the more evil or stupid comments: basically, what matters is the topic, not the person. Untarnished sets are a magnet for trolls, and their impersonal nature, which obscures the individual, can lead to flaming, stupidity, and extremes of ill-informed opinion that crowd out the good stuff. Sites like Slashdot, StackExchange, and Reddit are also mostly set-based, but they use the crowd and an algorithm (a collective) to modulate the results, usually far more effectively than human editors, as well as to provide shape and structure to dialogues so that they become useful and informative. At least, they do when they work: none are close to perfect (though Slashdot, when used well, is closer than the rest, because its algorithms and processes are far more evolved and far more complex, and individuals have far more control over the modulation), but the results can often be amazingly rich.

Blogs, though, tend to develop the social form of a network, with the blogger(s) at the centre. It’s a more intimate dialogue, more personal, yet also more public as they are almost always out in the open web, demanding no rituals of joining in order to participate, no membership, no commitment other than to the person writing the blog. Unlike dedicated social networks there is no exclusion, no pressure to engage, no ulterior motives of platforms trying to drive engagement, less trite phatic dialogue, more purpose, far greater ownership and control. There are plenty of exceptions that prove the rule and plenty of ways this egalitarian structure can be subverted (I have to clean out a lot of spam from my own blogs, for instance) but, as a tendency, it makes blogs still very relevant and valuable, and may go some way to explaining why around a quarter of all websites now run on WordPress, the archetypal blogging platform.

Address of the bookmark: https://words.werd.io/the-return-of-the-weblog-f6b702a7cf99

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2740999/the-return-of-the-weblog-%E2%80%93-ethical-tech

Strategies for successful learning at AU

Earlier today I responded to a prospective student who was, amongst other things, seeking advice on strategies for success on a couple of our self-paced programming courses. My response was just a stream of consciousness off the top of my head but I think it might be useful to others. Here, then, with some very light editing to remove references to specific courses, are a few fairly random thoughts on how to succeed on a self-paced online programming course (and, for the most part, other courses) at Athabasca University. In no particular order:

  • Try to make sure that people close to you know what you are doing and, ideally, are supportive. Other people can really help, not just for the mechanical stuff but for the emotional support. Online learning, especially the self-paced form we use, can feel a bit isolating at times, but there are lots of ways to close the gap and they aren’t all found in the course materials and processes. Find support wherever you can.
  • Make a schedule and try to keep to it, but don’t blame yourself if your deadlines slip a bit here and there – just adjust the plan. The really important thing is that you should feel in control of the process. Having such control is one of the huge benefits of our way of teaching, but you need to take ownership of the process yourself in order to experience the benefits.
  • If the course provides forums or other social engagement try to proactively engage in them. Again, other people really help.
  • You will have way more freedom than those in traditional classrooms, who have to follow a teacher simply because of the nature of physics. However, that freedom is a two-edged sword as you can sometimes be swamped with choices and not know which way to go. If you are unsure, don’t be afraid to ask for help. But do take advantage of the freedom. Set your own goals. Look for the things that excite you and explore further. Take breaks if you are getting tired. Play. Take control of the learning process and enjoy the ride.
  • Enjoy the challenges. Sometimes it will be hard, and you should expect that, especially in programming courses like these. Programming can be very frustrating at times – after 35 years of programming I can still spend days on a problem that turns out to involve a misplaced semi-colon! Accept that, and accept that even the most intractable problems will eventually be solved (and it is a wonderful feeling when you do finally get it to work). Make time to sleep on it. If you’re stuck, ask for help.
  • Get your work/life/learning balance right. Be realistic in your aspirations and expect to spend many hours a week on this, but make sure you make time to get away from it.
  • Keep a learning journal, a reflective diary of what you have done and how you have addressed the struggles, even if the course itself doesn’t ask for one. There are few more effective ways to consolidate and connect your learning than to reflect on it, and it can help to mark your progress: good to read when your motivation is flagging.
  • Get used to waiting for responses and find other things to learn in the meantime. Don’t stop learning because you are waiting – move on to something else, practice something you have already done, or reflect on what you have been doing so far.
  • Programming is a performance skill that demands constant and repeated practice. You just need to do it, get it wrong, do it again, and again, and again, until it feels like second nature. In many ways it is like learning a musical instrument or maybe even driving. It’s not something you can learn simply by reading or by being told, you really have to immerse yourself in doing it. Make up your own challenges if you run out of things to do.
  • Don’t just limit yourself to what we provide. Find forums and communities with appropriate interests. I am a big fan of StackOverflow.com for help and inspiration from others, though relevant subreddits can be useful and there are many other sites and systems dedicated to programming. Find one or two that make sense to you. Again, other people can really help.

Online learning can be great fun as long as you are aware of the big differences, primarily relating to control and personal agency. Our role is to provide a bit of structure and a supportive environment to enable you to learn, rather than to tell you stuff and make you do things, which can be disconcerting at first if you are used to traditional classroom learning. This puts more pressure on you, and more onus on you to organize and manage your own learning, but don’t ever forget that you are never really alone – we are here to help.

In summary, I think it really comes down to three big things, all of which are really about motivation, and all of which are quite different when learning online compared to face-to-face:

  1. Autonomy – you are in control, but you must take responsibility for your own learning. You can always delegate control to us (or others) when the going gets hard or choices are hard to make, but you are always free to take it back again, and there will be no one standing over you making you do stuff apart from yourself.
  2. Competence – there are few things more satisfying than being able to do more today than you could do yesterday. We provide some challenges and we try to keep them difficult-but-achievable at every stage along the way, but it is a great idea for you to also seek your own challenges, to play, to explore, to discover, especially if the challenges we offer are too difficult or too boring. Reflection can help a lot with this, as a means to recognize what, how, and why you have learned.
  3. Relatedness – never forget the importance of other people. You don’t have to interact with them if you don’t want to do so (that’s another freedom we offer), but it is at the very least helpful to think about how you belong in our community, your own community, and the broader community of learners and programmers, and how what and how you are learning can affect others (directly or indirectly).

This advice is by no means comprehensive! If you have other ideas or advice, or things that have worked for you, or things that you disagree with, do feel free to share them in the comments.

SCIS makes a great showing at HCI 2017, Vancouver


Ali Dewan presenting at HCI 2017

I had the pleasure to gatecrash the HCI 2017 conference in Vancouver today, which gave me the chance to see Dr Ali Dewan present three excellent papers in a row (two with his name on them) on a variety of themes, as well as a great paper written and presented by one of our students, Miao-Han Chang.

Miao-Han Chang presenting

Both did superb jobs of presenting to a receptive crowd. Ali got particular acclaim from the audience for the first work he presented (Combinatorial Auction based Mechanism Design for Course Offering Determination, by Anton Vassiliev, Fuhua Lin & M. Ali Akber Dewan) for its broad applicability in many areas beyond scheduling courses.

Athabasca, and especially the School of Computing and Information Systems, has made a great showing at this prestigious conference, with contributions not just from Ali and Miao-Han, but also from Oscar (Fuhua) Lin, Dunwei Wen, Maiga Chang and Vive Kumar. Kurt Reifferscheid and Xiaokun Zhang also had a paper in the proceedings but were sadly not able to attend to present it.


Jon Dron and Ali Dewan at HCI 2017

Jon and Ali at the Vancouver Conference Centre after Ali’s marathon presentation stint. I detect a look of relief on Ali’s face!


Ali Dewan presenting

Papers

  • Combinatorial Auction based Mechanism Design for Course Offering Determination
    Anton Vassiliev, Fuhua Lin, M. Ali Akber Dewan, Athabasca University, Canada
  • Enhance the Use of Medical Wearables through Meaningful Data Analytics
    Kurt Reifferscheid, Xiaokun Zhang, Athabasca University, Canada
  • Classification of Artery and Vein in Retinal Fundus Images Based on the Context-Dependent Features
    Yang Yan, Changchun Normal University, P.R. China; Dunwei Wen, M. Ali Akber Dewan, Athabasca University, Canada; Wen-Bo Huang, Changchun Normal University, P.R. China
  • ECG Identification Based on PCA-RPROP
    Jinrun Yu, Yujuan Si, Xin Liu, Jilin University, P.R. China; Dunwei Wen, Athabasca University, Canada; Tengfei Luo, Jilin University, P.R. China; Liuqi Lang, Zhuhai College of Jilin University, P.R. China
  • Usability Evaluation Plan for Online Annotation and Student Clustering System – A Tunisian University Case
    Miao-Han Chang, Athabasca University, Canada; Rita Kuo, New Mexico Institute of Mining and Technology, United States; Fathi Essalmi, University of Kairouan, Tunisia; Maiga Chang, Vive Kumar, Athabasca University, Canada; Hsu-Yang Kung, National Pingtung University of Science and Technology, Taiwan

Athabasca’s bright future

Tony Bates

The always excellent Tony Bates provides a very clear summary of Ken Coates’s Independent Third-Party Review of Athabasca University, released a week or two ago, and, as usual, offers a great critical commentary as well as some useful advice on next steps.

Tony rightly points out that our problems are more internal than external, and that the solutions have to come from us, not from outside. To a large extent he hits the nail right on the head when he notes:

Major changes in course design, educational technology, student support and administration, marketing and PR are urgently needed to bring AU into advanced 21st century practice in online and distance learning. I fear that while there are visionary faculty and staff at AU who understand this, there is still too much resistance from traditionalists and those who see change as undermining academic excellence or threatening their comfort zone.

It is hard to disagree. But, though there are too many ostriches among our staff and we do have some major cultural impediments to overcome, it is far less the people who impede our progress than our design itself, and the technologies – especially the management technologies – of which it consists. That must change, as a corequisite of changing the culture that goes along with it. With some very important exceptions (more on that below), our culture is almost entirely mediated through our organizational and digital technologies, most notably in the form of very rigid processes, procedures, and rules, but also through our IT. Our IT should, but increasingly does not, embody those processes. The processes still exist, of course – it’s just that people have to perform them instead of machines. Increasingly often, to make matters worse, we shape our processes to our ill-fitting IT rather than vice versa, because the ‘technological debt’ of adapting it to our needs and therefore having to maintain it ourselves is considered too great (a rookie systems error caused by splitting IT into a semi-autonomous unit that has to slash its own costs without considering the far greater price paid by the university at large). Communication, when it occurs, is almost all explicit and instrumental. We do not yet have enough of the tacit flows of knowledge and easy communication that patch over or fix the (almost always far greater) flaws that exist in such processes in traditional bricks-and-mortar institutions. The continual partial attention and focused channels of communication resulting from working online mean that we struggle with tacit knowledge and the flexibility of embedded dialogue in ways old-fashioned universities never have to even think about. One of the big problems with being so process-driven is that, especially in the absence of richer tacit communication, it is really hard to change those processes, especially because they have evolved to be deeply entangled with one another – changing one process almost always means changing many, often in structurally separate parts of the institutional machine, and involves processes of its own that are often entangled with those we set out to change. As a result, for much of its operation, our university does what it does despite us, not because of us. Unlike traditional universities, we have nothing else to fall back on when it fails, or when things fall between the cracks. And, though we likely have far fewer than most traditional universities, there are still very many cracks to fall through.

This, not coincidentally, is exactly true of our teaching too. We are pretty darn good at doing what we explicitly intend to do: our students achieve learning outcomes very well, according to the measures we use. AU is a machine that teaches, which is fine until we want the machine to do more than it is built to do, or until other, faster, lighter, cheaper machines begin to compete with it. As well as making it really hard to make even small changes to teaching, what gets lost – and what matters about as much as what we intentionally teach – is the stuff we do not intend to teach, the stuff that makes up the bulk of the learning experience in traditional universities, the stuff where students learn to be, not just to do. It’s whole-person learning. In distance and online learning, we tend to concentrate just on the parts we can measure, and we are seldom even aware of the rest. There is a hard and rigid boundary between the directed, instrumental processes and the soft, invisible patterns of culture and belonging, beyond which we rarely cross. This absence is largely what gives distance learning a bad reputation, though it can be a strength when focused teaching of something well-defined is exactly what is needed, when students are able to make the bigger connections in other ways (true of many of our successful students), when the control that the teaching method provides is worth all the losses, and when a more immersive experience might actually get in the way. But it’s a boundary that alienates a majority of current and prospective students. A large percentage of even those we manage to enrol and keep with us would like to feel more connected, more a part of a community, more engaged, more belonging. A great many more don’t even join us in the first place because of that perceived lack, and a very large number drop out before submitting a single piece of work as a direct result.

This is precisely the boundary that the Landing is intended to be a step towards breaking down.

https://landing.athabascau.ca/file/view/410777/video-decreasing-the-distance

If we cannot figure out how to recover that tacit dimension, there is little chance that we can figure out how to teach at a distance in a way that differentiates us from the crowd and that draws people to us for the experience, rather than for the qualification. That is not quite fair: some of us will. If you get the right (deeply engaged) tutor, or join the right (social and/or open) course, or join the Landing, or participate in local meet-ups, or join other social media groups, you may get a fair bit of the tacit, serendipitous, incidental learning and knowledge construction that typifies a traditional education. Plenty of students do have wonderful experiences learning with others at AU, be it with their tutors or with other students. We often see those ones at convocation – ones for whom the experience has been deep, meaningful, and connected. But, for many of our students, and especially the ones that don’t make it to graduation (or even to the first assignment), the chances of feeling that you belong to something bigger, of learning from others around you, of being part of a richer university experience, are fairly low. Every one of our students needs to be very self-directed, compared with those in traditional institutions – that’s a sine qua non of working online – but too many get insufficient support and too little inspiration from those around them to rise beyond that or to get through the difficult parts. This is not too surprising, given that we cannot do it for ourselves either. When faced with complicated things demanding close engagement, too many of our staff fall back on the comfortable, easy solution of meeting face to face in one of our various centres rather than taking the hard way, and so the system remains broken. This can and will change.

Moving on

I am much heartened by the Coates report which, amongst other things but most prominently, and as our central value proposition, puts our leadership in online and distance education at the centre of everything. This is what I have unceasingly believed we should do since the moment I arrived. The call to action of Coates’s report is fundamentally to change our rigid dynamic, to be bold, to innovate without barriers, to evolve, to make use of our astonishingly good resources – primarily our people – to (again) lead the online learning world. As a virtual institution this should be easier for us than for others but, perversely, it is exactly the opposite. This is for the aforesaid reasons, and also because the boundaries of our IT systems create the boundaries of our thinking, and embed processes more deeply and more inflexibly than almost any bricks-and-mortar establishment could hope to do. We need soft systems, fuzzy systems, adaptable systems, agile systems for our teaching, research, and learning community development, and we need hard systems, automated systems, custom-tailored, rock-solid systems for our business processes, including the administrative and assessment-recording outputs of the teaching process. This is precisely the antithesis of what we have now. As Coates puts it:

“AU should rebrand itself as the leading Canadian centre for online learning and twenty-first century educational technology. AU has a distinct and potentially insurmountable advantage. The university has the education technology professionals needed to provide leadership, the global reputation needed to attract and hold attention, and the faculty and staff ready to experiment with and test new ideas in an area of emerging national priority. There is a critical challenge, however. AU currently lacks the ICT model and facilities to rise to this opportunity.”

We live in our IT…

We have long been challenged with our IT systems, but things were not always so bad. Our ICT model has swung through 180 degrees in the past few years, in exactly the opposite direction to one that would support continuing evolution and innovation, driven by people that know little about our core mission and that have failed to understand what makes us special as a university. The best defence offered for these poor decisions is usually that ‘most other universities are doing it,’ but we are not most other universities. ICTs are not just support tools or performance enhancers for us. We are our IT. It is our one and only face to our students and the world. Without IT, we are literally nothing. We have massively underinvested in developing our IT, and what we have done in recent years has destroyed our lead, our agility, and our morale. Increasingly, we have rented generic, closed, off-the-shelf, cloud-based applications that would be pretty awful even in a factory, that force us into behaviours that make no sense, that sap our time and will, and that are so deeply inappropriate for our unique distributed community that they stifle all progress and cut off almost all avenues of innovation in the one area in which we are best placed to innovate and lead. We have automated things that should not be automated and let fall into disrepair the things that actually give us an edge. For instance, we rent an absurdly poor CRM system to manage student interactions, building a call centre for customers when we should be building relationships with students, embedding our least savoury practices of content delivery still further, making tweaks to a method of teaching that should have died when we stopped using the postal service for course packs. Yes, when it works, it incrementally improves a broken system, so it looks OK (not great) on reports, but the system it enhances is still irrevocably broken and, by further tying it into a hard embodiment in an ill-fitting application, the chances of fixing it properly diminish further. And, of course, it doesn’t work, because we have rented an ill-fitting system designed for other things, with little or no consideration of whether it meets more than coarse functional needs. This can and must change.

Meanwhile, we have methodically starved the environments that are designed for us and through which we have innovated in the past, and that could allow us to evolve. Astonishingly, we have had no (as in zero) central IT support for research for years now, getting by on a wing and a prayer, grabbing for bits of overtime where we can, or using scarce, poorly integrated departmental resources. Even very well-funded and well-staffed projects are stifled by it because almost all of our learning technology innovations are completely reliant on access, not only to central services (class lists, user logins, LMS integration, etc), but also to the staff that are able to perform integrations, manage servers, install software, configure firewalls, etc, etc.  We have had a 95% complete upgrade for the Landing sitting in the wings for nearly 2 years, unable to progress due to lack of central IT personnel to implement it, even though we have sufficient funds to pay for them and then some, and the Landing is actively used by thousands of people. Even our mainstream teaching tools have been woefully underfunded and undermined: we run a version of Moodle that is past even its security update period, for instance, and that creaks along only thanks to a very small but excellent team supporting it. Tools supporting more innovative teaching with more tenuous uptake, such as Mahara and OpenSIM servers, are virtual orphans, riskily trundling along with considerably less support than even the Landing.

This can and will change.

… but we are based in Athabasca

There are other things in Coates’s report that are given a very large emphasis, notably advice to increase our open access, particularly through forming more partnerships with Northern Albertan colleges serving indigenous populations (good – and we will need smarter, more human, more flexible, more inclusive systems for that, too), but mainly a lot of detailed recommendations about staying in Athabasca itself. This latter recommendation seems to have been forced upon Coates, and it comes with many provisos. Coates is very cognizant of the fact that being based in the remote, run-down town of Athabasca is, has been, and will remain a huge and expensive hobble. He mostly skims over sensitive issues like the difficulty of recruiting good people to the town (a major problem that is only slightly offset by the fact that, once we have got them there, they are quite unlikely to leave), but makes it clear that it costs us very dearly in myriad other ways.

“… the university significantly underestimates the total cost of maintaining the Athabasca location. References to the costs of the distributed operation, including commitments in the Town of Athabasca, typically focus on direct transportation and facility costs and do not incorporate staff and faculty time. The university does not have a full accounting of the costs associated with their chosen administrative and structural arrangements.”

His suggestions, though making much of the value of staying in Athabasca and heavily emphasizing the importance of its continuing role in the institution, involve moving a lot of people and infrastructure out of it and doing a lot of stuff through web conferencing. He walks a tricky political tightrope, trying to avoid the hot potato of moving away while suggesting ways that we should leave. He is right on both counts.

Short circuits in our communications infrastructure

Though cost, lack of decent ICT infrastructure, and difficulties recruiting good people are factors in making Athabasca a hobble for us, the biggest problem is, again, structural. Among those living and working in the town of Athabasca itself, unlike those of us working online, all the traditional knowledge flows occur without impediment, almost always to the detriment of more inclusive online forms of communication. Face-to-face dialogue inevitably short-circuits online engagement – always has, always will. People in Athabasca, as any humans would and should, tend to talk among themselves, and tend to communicate with others online only in directed, intentional ways, as the rest of us do. This might not be so bad were it not for the fact that Athabasca is very unrepresentative of the university population as a whole, containing the bulk of our administrators, managers, and technical staff, with fewer than 10 actual faculty in the region. It is a separate subculture – it is not the university – but it has enormous sway over how we evolve. It is not too surprising that our most critical learning systems account for only about 5% of our IT budget, because that side of things is barely heard of among the decision-makers and implementers who live there, and they only indirectly face the consequences of its failings (a matter made much worse by the way we disempower the tutors who have to deal with them most of all, and filter their channels of communication through just a handful of obligated committee members). It is no surprise that channels of communication are weak when those who design and maintain them can easily bypass the problems they cause. In fact, if there were more faculty there, it would be even worse, because then we would never face any of the problems encountered by our students. Further concentrations of staff in Edmonton (where most faculty reside), St Albert (mainly our business faculty), and Calgary do not help one bit, simply building further enclaves, which again lead to short circuits in communication and isolated, self-reinforcing clusters that distort our perspectives and reduce online communication. Ideas, innovations, and concerns do not spread, because of hierarchies that isolate them, filter them as they move up through the hierarchy, and dissipate them in Athabasca. Such clustering could be a good part of the engine that drives adaptation: natural ecosystems diversify thanks to parcellation. However, that’s not how it works here, thanks to the aforementioned excess of structure and process and the fact that those clusters are far from independently evolving. They are subject to the same rules and the same selection pressures as one another, unable to evolve independently because they are rigidly, structurally, and technologically bound to the centre. This is not evolution – it is barely even design, though every part of it has been designed and top-down structures overlay the whole thing. It is a side effect of many small decisions that, taken as a whole, result in a very flawed system.

This can and must change.

The town of Athabasca and what it means to us

Athabasca high street

Though I have made quite a few day trips to Athabasca over the years, I had never stayed overnight until around convocation time this year. Though it was a busy few days so I only had a little chance to explore, I found it to be a fascinating place that parallels AU in many ways. The impression it gives is of a raw, rather broken-down and depressed little frontier town of around 4,000 souls (a village by some reckonings) and almost as many churches. It was once a thriving staging post on the way to the Klondike gold rush, when it was filled with the rollicking clamour of around 20,000 prospectors dreaming of fortunes. Many just passed through, but quite a few stayed, helping to define some of its current character but, when the gold rush died down, there was little left to sustain a population. Much of the town still feels a bit temporary, still a bit of a campground waiting to turn into a real town. Like much of Northern Alberta, its fortunes in more recent years have been significantly bound to the oil business, feeding an industry that has no viable future and the morals of an errant crow, tied to its roller coaster fortunes. There are signs that money has been around, from time to time: a few nice buildings, a bit of landscaping here and there, a memorial podium at Athabasca Landing.  But there are bigger signs that it has left.

Athabasca Landing

Today, Athabasca’s bleak main street is filled with condemned buildings, closed businesses, discount stores, and shops with ‘sale’ signs in their windows. There are two somewhat empty town centre pubs, where a karaoke night in one will denude the other of almost all its customers.

There are virtually no transit links to the outside world: one Greyhound bus from Edmonton (2 hours away) comes through it, in the dead of night, and passenger trains stopped running decades ago. The roads leading in and out are dangerous: people die way too often getting there, including one of our most valued colleagues in my own school. It is never too far from being reclaimed by the forces of nature that surround it. Moose, bear, deer, and coyotes wander fairly freely. Minus forty temperatures don’t help, nor does a river that is pushed too hard by meltwaters from the rapidly receding Athabasca Glacier and that is increasingly polluted by the side-effects of oil production.

Athabasca

So far so bleak. But there are some notable upsides too. The town is full of delightfully kind, helpful, down-to-earth people infused with that wonderful Canadian spirit of caring for their neighbours, grittily facing the elements with good cheer, getting up early, eating dinner in the late afternoon, gathering for potlucks in one another’s houses, and organizing community get-togethers. The bulk of housing is well cared-for, set in well-tended gardens, in quiet, neat little streets. I bet most people there know their neighbours and their kids play together. Though tainted by its ties with the oil industry, the town comes across as, fundamentally, a wholesome centre for homesteaders in the region, self-reliant and obstinately surviving against great odds by helping one another and helping themselves. The businesses that thrive are those selling tools, materials, and services to build and maintain your farm and house, along with stores for loading your provisions into your truck to get you through the grim winters. It certainly helps that a large number of residents are employees of the university, providing greater diversity than is typically found in such settlements, but they are frontier folk like the rest. They have to be.

It would be unthinkable to pull the university out at this point – it would utterly destroy an already threatened town and, I think, it would cause great damage to the university. This was clearly at the forefront of Coates’s mind, too. The solution is not to withdraw from this strange place, but to dilute and divert the damage it causes and perhaps, even, to find ways to use its strengths. Greater engagement with Northern communities might be one way to save it – we have some big, largely empty buildings up there that will be getting emptier, and they might not be a bad base for some face-to-face branching out, perhaps semi-autonomously, perhaps in partnership with colleges in the region. The town also has potential as a site for a research retreat, though it is not exactly a Mecca that would draw people to it, especially without transit links to sustain it. The well-designed research centre there cost a fortune to build, though, so it would be nice to get some use out of it.

Perhaps more importantly, we should not pull out because Athabasca is a part of the soul of the institution. It is a little fitting that Athabasca University has – not without resistance – had its fortunes tied to this town. Athabasca is kind-of who we are and, to a large extent, defines who we should aspire to be. As an institution we are, right now, a decaying frontier town on the edge of civilization that was once a thriving metropolis, forced to help ourselves and one another battle with the elements, a caring bunch of individuals bound by a common purpose but stuck in a wilderness that cares little for us and whose ties with the outside world are fickle, costly, and tenuous. Athabasca is certainly a hobble but it is our hobble and, if we want to move on, we need to find ways to make the best of it – to find value in it, to move people and things away from it that it impedes the most, at least where we can, but to build upon it as a mythic hub that helps to define our identity, a symbolic centre for our thinking. We can and will help ourselves and one another to make it great again. And we have a big advantage that our home town lacks: a renewable and sustainable resource and product. Very much unlike Athabasca the town, the source of our wealth is entirely in our people, and the means we have for connecting them. We have the people already: we just need to refocus on the connection.

The cost of admission to the unlearning zone

picture of a dull classroom (public domain)

I describe some of what I do as ‘unteaching’, so I find this highly critical article by Miss Smith – The Unlearning Zone – interesting. Miss Smith dislikes the terms ‘unteaching’ and ‘unlearning’ for some well-expressed aesthetic and practical reasons: as she puts it, they are terms “that would not be out of place in a particularly self-satisfied piece of poststructuralist literary analysis circa 1994.” I partially agree. However, she also seems equally unenamoured with what she thinks they stand for. I disagree with her profoundly on this so, as she claims to be new to these terms, here is my attempt to explain a little about what I mean by them, why I think they are a useful part of the educators’ lexicon, and why they are crucially important for learners’ development in general.

First the terms…

Yes, ‘unteaching’ is an ugly neologism and it doesn’t really make sense: that’s part of the appeal of using it – a bit of cognitive dissonance can be useful for drawing attention to something. However, it is totally true that someone who is untaught is just someone who has not (yet) been taught, so ‘unteaching’, seen in that light, is at best pointless, at worst self-contradictory. On the other hand, it does seem to follow pretty naturally from ‘unlearning’ which, contrary to Miss Smith’s assertion, has been in common use for centuries and makes perfect sense. Have you ever had to unlearn bad habits? Me too.

As I understand it, ‘unteach’ is to ‘teach’ as ‘undo’ is to ‘do’. Unteaching is still teaching, just as undoing is still doing, and unlearning is still learning. Perhaps ‘deteaching’ would be a better term. Whatever we choose to call it, unteaching is concerned with intentionally dismantling the taught belief that teaching is about exerting power over learners, and replacing it with the attitude that teachers are there to empower learners to learn. This is not a particularly radical idea. It is what all teachers should do anyway, I reckon. But it is worth drawing attention to it as a distinct activity because it runs counter to the tide, and the problem it addresses is virtually ubiquitous in education up to, and sometimes at, doctoral level.

Traditional teaching of the sort Miss Smith seems to defend in her critique does a lot more than teach a subject, skill, or way of thinking. It teaches that learning is a chore that is not valuable in and of itself, that learners must be forced to do it for some other purpose, often someone else’s purpose. It teaches that teaching is something done to students by a teacher: at its worst, it teaches that teaching is telling; at best, that teaching involves telling someone to do something. It’s not that (many) teachers deliberately seek these outcomes, but that they are the most likely lessons to be learned, because they are the ones that are repeated most often. The need for unteaching arises because traditional teaching – in addition to (with luck) whatever it intends to teach – teaches some terrible lessons about learning, and about the role of teaching in that process, that must be unlearned.

What is unteaching?

Miss Smith claims that unteaching means “open plan classes, unstructured lessons and bean bags.” That’s not the way I see it at all. Unlike traditional teaching, with its timetables, lesson plans, learning objectives, and uniform tests, unteaching does not have its own technologies and methods, though it does, for sure, tend to be a precursor to connectivist, social constructivist, constructionist, and other more learner-centred ways of thinking about the learning process, which may sometimes be used as part of the process of unteaching itself. Such methods, models, and attitudes emerge fairly naturally when you stop forcing people to do your bidding. However, they are just as capable of being used in a controlling way as the worst of instructivist methods: reports on such interventions that include words like ‘students must…’, ‘I make my students…’ or (less blatantly) ‘students (do X)’ far outnumber all others, and that is the very opposite of unteaching. The specific technologies (including pedagogies as much as open-plan classrooms and beanbags) are not the point. Lectures, drill-and-practice and other instructivist methods are absolutely fine, as long as:

  1. they at least attempt to do the job that students want or need,
  2. they are willingly and deliberately chosen by students,
  3. students are well-informed enough to make those choices, and
  4. students can choose to learn otherwise at any time.

No matter how cool and groovy your problem-based, inquiry-based, active methods might be, if they are imposed on students (especially with the use of threats for non-compliance and rewards for compliance – e.g. qualifications, grades, etc) then it is not unteaching at all: it’s just another way of doing the same kind of teaching that caused the problem in the first place. But if students have control – and ‘control’ includes being able to delegate control to someone else who can scaffold, advise, assist, instruct, direct, and help them when needed, as well as being able to take it back whenever they wish – then such methods can be very useful. So can lectures. To all those educational researchers that object to lectures, I ask whether they have ever found them valuable in a conference (and, if not, why did they go to a conference in the first place?). It’s not the pedagogy of lectures that is at fault. It’s the requirement to attend them and the accompanying expectation that people are going to learn what you are teaching as a result. That’s, simply put, empirically wrong. It doesn’t mean that lecturees learn nothing. Far from it. But what you teach and what they learn are different kinds of animal.

Problems with unteaching

It’s really easy to be a bad unteacher – I think that is what Miss Smith is railing against, and it’s a fair criticism. I’m often pretty bad at it myself, though I have had a few successes along the way too. Unteaching and, especially, the pedagogies that result from having done unteaching, are far more likely to go wrong, and they take a lot more emotional, intellectual, and social effort than traditional teaching because they don’t come pre-assembled. They have no convenient structures and processes in place to do the teaching for you.  Traditional teaching ‘works’ even when it doesn’t. If you throw someone into a school system, with all its attendant rewards, punishments, timetables, rules and curricula, and if you give them the odd textbook and assessment along the way, then most students will wind up learning something like what is intended to be taught by the system, no matter how awful the teachers might be. In such a system, students will rarely learn well, rarely persistently, rarely passionately, seldom kindly, and the love of learning will have been squashed out of many of them along the way (survivors often become academics and teachers themselves). But they will mostly pass tests at the end of it. With a bit of luck many might even have gained a bit of useful knowledge or skill, albeit that much will be not just wasted and forgotten as easily as a hotel room number when your stay is over, but actively disliked by the end of it. And, of course, they will have learned dependent ways of learning that will serve them poorly outside institutional systems.

To make things far worse, those very structures that assist the traditional teacher (grades, compulsory attendance, fixed outcomes, concept of failure, etc) are deeply antagonistic to unteaching and are exactly why it is needed in the first place. Unteachers face a huge upstream struggle against an overwhelming tide that threatens to drown passionate learning every inch of the way. The results of unteaching can be hard to defend within a traditional educational system because, by conventional measures, it is often inefficient and time-consuming. But conventional measures only make sense when you are trying to make everyone do the same things, through the same means, with the same ends, measured by and in order to meet the same criteria. That’s precisely the problem.

The final nail in unteaching’s coffin is that it is applied very unevenly across the educational system, so every freedom it brings is counterbalanced by a mass of reiterated antagonistic lessons from other courses and programs. Every time we unteach someone, two others reteach them.  Ideally, we should design educational systems that are friendlier to and more supportive of learner autonomy, and that are (above all else) respectful of learners as human beings. In K-12 teaching there are plenty of models to draw from, including Summerhill, Steiner (AKA Waldorf) schools, Montessori schools, Experiential Learning Schools etc. Few are even close to perfect, but most are at least no worse than their conventional counterparts, and they start with an attitude of respect for the children rather than a desire to make them conform. That alone makes them worthwhile. There are even some regional systems, such as those found in Finland or (recently) British Columbia, that are heading broadly in the right direction. In universities and colleges there are plenty of working models, from Oxford tutorials to Cambridge supervisions, to traditional theses and projects, to independent study courses and programs, to competency-based programs, to PLAR/APEL portfolios, and much more. It is not a new idea at all. There is copious literature and many theoretical models that have stood the test of time, from andragogy to communities of practice, through to teachings from Freire, Illich, Dewey and even (a bit quirkily) Vygotsky. Furthermore, generically and innately, most distance and e-learning unteaches better than its p-learning counterparts because teachers cannot exert the same level of control and students must learn to learn independently. Sadly, much of it is spoiled by coercing students with grades, thereby providing the worst of both worlds: students are forced to behave as the teacher demands in their terminal behaviours but, without physical copresence, are less empowered by guidance and emotional/social support with the process. Much of my own research and teaching is concerned with inverting that dynamic – increasing empowerment and social support through online learning, while decreasing coercion. I’d like to believe that my institution, Athabasca University, is largely dedicated to the same goal, though we do mostly have a way to go before we get it right.

Why it matters

Unteaching is to a large extent concerned with helping learners – including adult learners – to get back to the state in which most children start their school careers: driven by curiosity, personal interest, social value, joy, and delight, before that drive is schooled out of them over years of being taught dependency. Once misconceptions about what education is for, what teachers do, and how we learn have been removed, teaching can happen much more effectively: supporting, nurturing, inspiring, challenging, responding, and so on, but not controlling, not making students do things they are not ready to do for reasons that mean little to them and have even less to do with what they are learning.

However, though it is an immensely valuable terminal outcome, improved learning is perhaps not the biggest reason for unteaching. The real issue is moral: it’s simply the right thing to do. The greatest value is that students are far more likely to have been treated with the respect, care, and honour that all human beings deserve along the way. Not ‘care’ of the sort you would give to a dog when you train it to be obedient and well behaved. Care of the sort that recognizes and valorizes autonomy and diversity, that respects individuals, that cherishes their creativity and passion, that sees learners as ends in themselves, not products or (perish the thought) customers. That’s a lesson worth teaching, a way of being that is worth modelling. If that demands more effort, if it is more fallible, and if it means that fewer students pass your tests, then I’m OK with that. That’s the price of admission to the unlearning zone.

 

True costs of information technologies

Switchboard (public domain)

Microsoft unilaterally and quietly changed the spam filtering rules for Athabasca University’s O365 email system on Thursday afternoon last week. On Friday morning, among the usual 450 or so spams in my spam folder (up from around 70 per day in the old Zimbra system) were over 50 legitimate emails, including one to warn me that this was happening, claiming that our IT Services department could do nothing about it because it’s a vendor problem. Amongst junked emails were all those sent to the allstaff alias (including announcements about our new president), student work submissions, and many personal messages from students, colleagues, and research collaborators.

The misclassified emails continue to arrive, 5 days on.  I have now switched off Microsoft’s spam filter and switched to my own, and I have risked opening emails I would never normally glance at, but I have probably missed a few legitimate emails. This is perhaps the worst so far in a long line of ‘quirks’ in our new O365 system, including persistently recurring issues of messages being bounced for a large number of accounts, and it is not the first caused by filtering systems: many were affected by what seems to be a similar failure in the Clutter filter in May.
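Rescuing the misclassified messages by hand is exactly the kind of human-as-cog work I complain about below. Purely as an illustration of what ‘switching to my own’ filtering might involve – and emphatically not a description of my actual setup – here is a minimal sketch that pulls mail from trusted senders back out of a junk folder over IMAP. The host, credentials, folder name, and domain list are all placeholder assumptions.

```python
# Purely illustrative (not my actual setup): rescue legitimate messages that a
# server-side filter has dumped into a junk folder, over IMAP, using only the
# Python standard library. Host, credentials, folder name, and trusted domains
# are placeholder assumptions.
import imaplib

IMAP_HOST = "outlook.office365.com"   # assumed IMAP endpoint
JUNK_FOLDER = '"Junk Email"'          # assumed junk folder name (quoted for IMAP)
TRUSTED_DOMAINS = ["athabascau.ca"]   # senders that should never be junked

imap = imaplib.IMAP4_SSL(IMAP_HOST)
imap.login("user@example.org", "app-password")  # placeholders
imap.select(JUNK_FOLDER)
for domain in TRUSTED_DOMAINS:
    # Find junked messages from a trusted domain...
    status, data = imap.search(None, f'(FROM "{domain}")')
    if status == "OK" and data[0]:
        for num in data[0].split():
            # ...copy each back to the inbox and mark the junk copy for deletion.
            imap.copy(num, "INBOX")
            imap.store(num, "+FLAGS", "\\Deleted")
imap.expunge()
imap.logout()
```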

I assume that, on average, most other staff at AU have, like me, lost about half an hour per day so far to this one problem. We have around 1350 employees, so that’s around 675 person-hours – getting on for a hundred 7-hour working days – being lost every day it continues. This is not counting the inevitable security breaches, support calls, proactive attempts at problem solving, and so on, nor the time for recovery should it ever be fixed, nor the lost trust, lost motivation, the anger, the conversations about it, nor the people who will give up on it and redirect emails to other places (in breach of regulations and at great risk to privacy and security, but when it’s a question of being able to work vs not being able to work, no one could be blamed for that). The hours I have spent writing this might be added to that list, but this happens to relate very closely indeed to my research interests (a great case study and catalyst for refining my thoughts on this), so it might be seen as a positive side-effect and, anyway, the vast majority of that time was ‘my own’: faculty very rarely work normal 7-hour days.

Every single lost minute per person every day equates to the time of around 3 FTEs when you have 1350 employees. When O365 is running normally it costs me around five extra minutes per day, when compared with its predecessor, an ancient Zimbra system.  I am a geek that has gone out of his way to eliminate many of the ill effects: others may suffer more.  It’s mostly little stuff: an extra 10-20 seconds to load the email list, an extra 2-3 seconds to send each email, a second or two longer to load them, an extra minute or two to check the unreliable and over-spammed spam folder, etc. But we do such things many times a day. That’s not including the time to recover from interruptions to our work, the time to learn to use it, the support requests, the support infrastructure, etc, etc.
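Since the numbers matter, here is the back-of-envelope arithmetic behind those two claims, as a tiny sketch. The employee count and the 7-hour day are my own working assumptions, not official figures.

```python
# Back-of-envelope arithmetic for time lost across the institution.
# Assumptions (mine, not official data): 1,350 employees, 7-hour working days.
EMPLOYEES = 1350
HOURS_PER_DAY = 7

def daily_loss(minutes_per_person: float) -> tuple[float, float]:
    """Return (person-hours lost per day, equivalent 7-hour person-days)."""
    hours = EMPLOYEES * minutes_per_person / 60
    return hours, hours / HOURS_PER_DAY

print(daily_loss(30))  # the spam-filter fallout: 675.0 person-hours/day, ~96 person-days
print(daily_loss(1))   # a single lost minute each: 22.5 person-hours/day, ~3.2 full-time people
```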

To be fair, whether such time is truly ‘lost’ depends on the task. Those ‘lost’ seconds may be time to reflect or think of other things. The time is truly lost if we have to put effort into it (e.g. checking spam mail) or if it is filled with annoyance at the slow speed of the machine, but may sometimes simply be used in ways we would not otherwise use it.  I suspect that flittering attention while we wait for software to do its thing creates habits of mind that are both good and bad. We are likely more distracted, find it harder to concentrate for long periods, but we probably also develop different ways of connecting things and different ways of pacing our thinking. It certainly changes us, and more research is needed on how it affects us. Either way, time spent sorting legitimate emails from spam is, at least by most measures of productivity, truly time lost, and we have lost a lot of it.

Feeding the vampires

It goes without saying that, had we been in control of our own email system, none of this would have happened. I have repeatedly warned that putting one of the most central systems of our university into the hands of an external supplier, especially one with a decades-long history of poor software, broken or proprietary standards, weak security, inadequate privacy policies, vicious antagonism to competitors, and a predatory attitude to its users, is a really stupid idea. Microsoft’s goal is profit, not user satisfaction: sometimes the two needs coincide, often they do not. Breakages like this are just a small part of the problem. The worst effects are going to be on our capacity to innovate and adapt, though our productivity, engagement and workload will all suffer before the real systemic failures emerge. Microsoft had to try hard to sell it to us, but does not have to try hard to keep us using it, because we are now well and truly locked in on all sides by proprietary, standards-free tools that we cannot control, cannot replace, cannot properly understand, that change under our feet without warning, and that will inevitably insinuate themselves into our working lives. And it’s not just email and calendars (which can at least use only slightly broken standards) but completely opaque, standards-free, proprietary tools like OneDrive, OneNote and Yammer. Now that we have lost standards compliance and locked ourselves in, we have made it unbelievably difficult to ever change our minds, no matter how awful things get. And they will get more awful, and the costs will escalate. This makes me angry. I love my university and am furious when I see it being destroyed by avoidable idiocy.

O365 is only one system among many similar tools that have been foisted upon us in the last couple of years, most of which are even more awful, if marginally less critical to our survival. They have replaced old, well-tailored, mostly open tools that used to just work: not brilliantly, seldom prettily, but they did the job fast and efficiently so that we didn’t have to. Our new systems make us do the work for them. This is the polar opposite of why we use IT systems in the first place, and it all equates to truly lost time, lost motivation, lost creativity, lost opportunity.

From leave reporting to reclaiming expenses to handling research contracts to managing emails, let’s be very conservative indeed and say that these new baseline systems just cost us an average of an extra 30 minutes per working day per person on top of what we had before (for me, it is more like an hour; for others, more). If the average salary of an AU employee is $70,000/year, that’s roughly $5,400,000 per year in lost productivity. It’s much worse than that, though, because the work that we are forced to do as a result is soul-destroying, prescriptive labour, fitting us into a dominative system as cogs into a machine. I feel deeply demotivated by this, and that infects all the rest of my work. I sense similar growing disempowerment and frustration amongst most of my colleagues.
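For transparency, here is one way to arrive at a number in that ballpark. The hourly rate and the number of working days are my own guesses at reasonable assumptions, not institutional data.

```python
# Rough reconstruction of the ~$5.4M figure, under assumptions I am guessing at
# (none are official): 1,350 employees, $70,000 average salary, roughly 2,000
# paid hours and 230 working days per person per year.
EMPLOYEES = 1350
AVG_SALARY = 70_000
PAID_HOURS_PER_YEAR = 2000
WORKING_DAYS_PER_YEAR = 230

hourly_rate = AVG_SALARY / PAID_HOURS_PER_YEAR        # $35/hour
hours_lost_per_person = 0.5 * WORKING_DAYS_PER_YEAR   # 115 hours/year
annual_cost = EMPLOYEES * hours_lost_per_person * hourly_rate
print(f"${annual_cost:,.0f}")                         # ≈ $5,433,750 per year
```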

And it’s not just about the lost time of individuals. Almost always, other people in the system have to play a role that they did not play before (this is about management information systems, not just the digital tools), and there are often many iterations of double-checking and returned forms,  because people tend to be very poor cogs indeed.  For instance, the average time it takes for me to get recompense for expenses is now over 6 months, up from 2-4 weeks before. The time it takes to simply enter a claim alone is up from a few minutes to a few hours, often spread over months, and several other people’s time is also taken up by this process. Likewise, leave reporting is up from 2 minutes to at least 20 minutes, usually more, involving a combination of manual emails, tortuous per-hour entry, the ability to ask for and report leave on public holidays and weekends, and a host of other evils. As a supervisor, it is another world of pain: I have lost many hours to this, compounding the ‘mistakes’ of others with my own (when teaching computing, one of the things I often emphasize is that there is no such thing as user error: while they can make mistakes and do weird stuff we never envisaged, it is our failure to design things right that is the problem). This is not to mention the hours spent learning the new systems, or the effects on productivity, not just in time and motivation, but in preventing us from doing what we are supposed to do at all. I am doing less research, not just because my time is taken with soul-destroying cog-work, but because it is seldom worth the hassle of claiming, or trying to manage projects using badly designed tools that fit better – though not well – in a factory. Worse, it becomes part of the culture, infecting other processes like ethics reviews, student-tutor interactions, and research & development. In an age when most of the world has shaken off the appalling, inhuman, and empirically wrong ideas of Taylorism, we are becoming more and more Taylorist. As McLuhan said, we shape our tools and our tools shape us.

To add injury to insult, these awful things actually cost money to buy and to run – often a lot more money than they were planned to cost, delivering far smaller savings than promised, or even losses, even in the IT Services department, where they are justified on the grounds that they cut costs. For instance, O365 cost nearly three times the initial estimates on which decisions were based, and it appears that it has not reduced the workload for those having to support it, nor the network traffic going in and out of the university (in fact it may be much worse), all the while costing us far more per year to access than the reliable and fully-featured, elderly open source product it replaced. It also breaks a lot more. It is hard to see what we have gained here, though it is easy to see many losses.

Technological debt

The one justification for this suicidal stupidity is that our technological debt – the time taken to maintain, extend, and manage old systems – is unsustainable. So, if we just buy baseline tools without customization, especially if we outsource the entire management role to someone else, we save money because we don’t have to do that any more.

This is – with more than due respect – utter bullshit.

Yes, there is a huge investment involved over years whenever we build tools to do our jobs and, yes, if we do not put enough resources into maintaining them then we will crawl to a halt because we are doing nothing but maintenance. Yes, combinatorial complexity and path dependencies mean that the maintenance burden will always continue to rise over time, at a greater-than-linear rate. The more you create, the more you have to maintain, and the connections between the things we create add to the complexity. That’s the price of having tools that work. That’s how systems work. Get over it. That’s how all technology evolves, including bureaucratic systems. Increasing complexity is inevitable and relentless in all technological systems, notwithstanding the occasional paradigm shift that kind-of starts the ball rolling again. Anyone who has stuck around in an organization long enough to see the long-term effects of their interventions would know this.
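To make the ‘greater-than-linear’ claim concrete, here is a deliberately crude illustration – a toy model of my own, not a measurement of our actual estate: treat each tool or process as a node and count the potential pairwise interconnections that might need to be understood and maintained.

```python
# Toy illustration of why maintenance burden grows faster than linearly: if
# every pair of systems can, in principle, interact, the number of potential
# interconnections grows roughly quadratically with the number of systems.
def potential_links(n_systems: int) -> int:
    return n_systems * (n_systems - 1) // 2

for n in (5, 10, 20, 40):
    print(n, "systems ->", potential_links(n), "potential interconnections")
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780: doubling the number of systems
# roughly quadruples the connections that someone has to keep working.
```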

These new baseline systems are no different, save in one way: rather than putting the work into making the machines work for us, we instead have to evolve, maintain and manage processes in which we do the work of machines. The complexity therefore falls on every single human being who has to enact the machine, not just on developers. This is crazy. Exactly the same work has to be done, with exactly the same degree of precision as that of the machines (actually more, because we have to add procedures to deal with the errors that software is less likely to make). It’s just that now it is done by slow, unreliable, fallible, amotivated human beings. For creative or problem-solving work, it would be a good thing to take away from machines the tasks that humans should be doing. For mechanistic, process-driven work, where human error means the process breaks, it is either great madness, great stupidity, or great evil. There are no other options. At a time when our very survival is under threat, I cannot adequately express my deep horror that this is happening.

I suspect that the problem is in large part due to short-sighted local thinking, which is a commonplace failure in hierarchical systems, and one that gets worse the deeper and more divisive the hierarchies become. We only see our own problems, without understanding or caring about where we sit in the broader system. Our IT directors believe that their job is to save money in ITS (the department dealing with IT), rather than to save money for the university. But not only are they outsourcing our complex IT functions to cloud-based companies (a terrible idea, for the aforementioned reasons), they are outsourcing the work of information technologies to the rest of the university. The hierarchies mean a) that directors seldom get to see or hear of the trouble it causes, b) that they mix mainly with others at or near their hierarchical level, who do not see it either, and c) that they tend to see problems in caricature, not as detailed pictures of actual practices. As the hierarchies deepen and separate, those within a branch communicate less with others in parallel branches or with those more than a layer above or below. Messages between layers are, by design, distorted and filtered. The more layers, the greater the distortion. People take further actions based on local knowledge, and their actions affect the whole tree. Hierarchies are particularly awful when coupled with creative work of the sort we do at Athabasca, or with fields where change is frequent and necessary. They used to work OK for factories that did not vary their output much and where everything was measurable, though, in modern factories, that is rarely true any more. For a university, especially one that is online and thus lacks many of the short circuits found in physical institutions, deepening hierarchies are a recipe for disaster. I suppose it goes without saying that Athabasca University has, over the past few years, seen a huge deepening of those hierarchies.

True costs

Our university is in serious financial trouble that it would not be in were it not for these systems. Even if we had kept what we had, without upgrading, we would already be many millions of dollars better off, countless thousands of hours would not have been wasted, we would be far more motivated, we would be far more creative, and we would still have some brilliant people that we have lost as a direct result of this process. All of this would be of great benefit to our students and we would be moving forwards, not backwards. We have lost vital capacity to innovate, lost vital time to care about what we are supposed to be doing rather than working out how the machine works. The concept of a university as a machine is not a great one, though there are many technological elements and processes that are needed to make it run. I prefer to think of it like an ecosystem or an organism. As an online university, our ecosystem/body is composed of people and machines (tools, processes, methods, structures, rules, etc). The machinery is just there to support and sustain the people, so they can operate as a learning community and perform their roles in educating, researching and community engagement. The more that we have to be the machines, the less efficiently the machinery will run, and the less human we can all be. It’s brutal, ugly, and self-destructive.

When will we learn that the biggest costs of IT fall on its end users, not on IT Services? We customized and created the tools that we have now replaced for extremely good reasons: to make our university and its systems run better, faster, more efficiently, more effectively. Our ever-growing number of new off-the-shelf and outsourced systems, which take more of our time and more of our intellectual and emotional effort, have wasted and continue to waste countless millions of dollars, not to mention exacting huge costs in lost motivation, ill will, and lost creativity and caring. In the process we have lost control of our tools, lost the expertise to run them, and lost the capability to innovate in the one field in which we, as an online institution, must and should have the most expertise. This is killing us. Technological debt is not voided by replacing custom parts with generic pieces. It is transferred, at a usurious rate of interest, to those who must replace the lost functionality with human labour.

It won’t be easy to reverse this suicidal course, and I would not enjoy being the one tasked with doing so. Those who were involved in implementing these changes might find it hard to believe, because it has taken years and a great deal of pain to do so (and it is far from over yet – the madness continues), but breaking the system was hundreds of times easier than it will be to fix it. The first problem is that the proprietary junk that has been foisted upon us, especially when hosted in the cloud, is a one-way valve for our data, so it will be fiendishly hard to get it back again. Some of it will be in formats that cannot be recovered without some data loss. New ways of working that rely on new tools will have insinuated themselves, and will have to be reversed. There will be plentiful down-time, with all the associated costs. But it’s not just about data. From a systems perspective this is a Humpty Dumpty problem. When you break a complex system, from a body to an ecosystem, it is almost impossible to ever restore it to the way it was. There are countless system dependencies and path dependencies, which mean that you cannot simply start replacing pieces and assume that it will all work. The order matters. Lost knowledge cannot be regained – we will need new knowledge. If we do manage to survive this vandalism to our environment, we will have to build afresh, to create a new system, not restore the old. This is going to cost a lot. Which is, of course, exactly as Microsoft and all the other proprietary vendors of our broken tools count upon. They carefully balance the cost of leaving them against what they charge. That’s how it works. But we must break free of them because this is deeply, profoundly, and inevitably unsustainable.