Education for life or education for work? Reflections on the RBC Future Skills Report

Tony Bates extensively referenced this report from the Royal Bank of Canada on Canadian employer demands for skills over the next few years in his characteristically perceptive keynote at CNIE 2019 last week (it’s also referred to in his most recent blog post). It’s an interesting read. Central to its many findings and recommendations is the claim that the Canadian education system is inadequately designed to cope with these demands and that it needs to change. The report played a big role in Tony’s talk, though his thoughts on appropriate responses to that problem stood up in their own right, and not all of them were in perfect alignment with the report.

Tony Bates at CNIE 2019

The 43-page manifesto (including several pages of not very informative graphics) combines research findings with copious examples to illustrate its discoveries, and various calls to action based on them. Not surprisingly, I guess, for a document intended to ignite debate, it is often rather hard to tell in any detail how the research itself was conducted. The methodology section, mainly on page 33, gives little more than a broad outline of how the main clustering was performed and of the general approach to discovering information. It seems that a lot of work went into it, but it is hard to tell how that work was carried out.

A novel (-ish) finding: skillset clusters

Perhaps the most distinctive and interesting research discovery in the report is a predictive/descriptive model of skillsets needed in the workplace. By correlating occupations from the federal NOC (National Occupational Classification) with a US Labor Department dataset (O*NET), the researchers identified six distinct clusters of skillsets, whose possessors they characterize as follows (see the sketch after the list for one plausible way such a clustering might be computed):

  • solvers (engineers, architects, big data analysts, etc)
  • providers (vets, musicians, bloggers, etc)
  • facilitators (graphic designers, admin assistants, Uber drivers, etc)
  • technicians (electricians, carpenters, drone assemblers, etc)
  • crafters (fishermen, bakers, couriers, etc)
  • doers (greenhouse workers, cleaners, machine-learning trainers, etc)
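
The report does not explain how the clustering itself was done so, purely for illustration, here is a minimal sketch of one plausible approach: represent each occupation as a vector of O*NET-style skill-importance scores and run k-means over those vectors. All of the occupations, skills, and scores below are invented placeholders, not data from the report.

```python
# Hypothetical sketch: clustering occupations by skill-importance vectors.
# The scores are invented; O*NET publishes importance ratings for dozens
# of skills per occupation, and the report derived six clusters.
import numpy as np
from sklearn.cluster import KMeans

occupations = ["engineer", "vet", "admin assistant",
               "electrician", "baker", "greenhouse worker"]
# Columns: complex problem solving, social perceptiveness,
#          equipment maintenance, manual dexterity (scale 1-5)
skill_matrix = np.array([
    [4.8, 3.0, 2.0, 1.5],  # engineer
    [4.0, 4.5, 1.5, 3.0],  # vet
    [2.5, 4.2, 1.0, 1.5],  # admin assistant
    [3.5, 2.5, 4.5, 4.0],  # electrician
    [2.0, 2.5, 3.0, 4.5],  # baker
    [1.5, 2.0, 2.5, 4.0],  # greenhouse worker
])

# The report found six clusters; two suffice for this toy dataset.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(skill_matrix)
for occupation, label in zip(occupations, labels):
    print(f"{occupation}: cluster {label}")
```

On a model like this, claims about transferability are just claims about distance: two occupations in the same cluster have nearby skill vectors, so moving between them should, in principle, require learning less.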

From this, they make the interesting, if mainly anecdotally supported, assertion that there are clusters of occupations across which these skills can be more easily transferred. For instance, they reckon, a dental assistant is not too far removed from a graphic designer because both are high on the facilitator spectrum (emotional intelligence needed). They do make the disclaimer that, of course, other skills are needed, and someone with little visual appreciation might not be a great graphic designer despite being a skilled facilitator. They also note that, with training, education, apprenticeship models, etc, it is perfectly possible to move from one cluster to another, and that many jobs require skills from two or more clusters anyway (mine certainly needs high levels of all six). They also note that social skills are critical, and equally important in all occupations. So, even if their central supposition is true, it might not be very significant.

There is a somewhat intuitive appeal to this, though I see enormous overlap between all of the clusters and find some of the exemplars and descriptions of the clusters weirdly misplaced: in what sense is a carpenter not a crafter, or a graphic designer not a provider, or an electrician not a solver, for instance? It treads perilously close to the borders of x-literacies – some variants of which come up with quite similar categories – or learning style theories, in its desperate efforts to slot the world into manageable niches regardless of whether there is any point in doing so. The worst of these is the ‘doers’ category, which seems to be a thinly veiled euphemism for ‘unskilled’ (which, as they rightly point out, relates to jobs that are mostly under a great deal of threat). ‘Doing’ is definitely ripe for transfer between jobs, because mindless work in any occupation needs pretty much the same lack of skill.

My sense is that, though it might be possible to see rough patterns in the data, the categories are mostly very fuzzy and blurred, and could easily be used to label people in very unhelpful ways. This is interesting from a big-picture perspective but, when you are applying it to individual human beings, this kind of labelling can be positively dangerous. It could easily lead to a species of the same general-to-specific thinking that caused the death of many airplane pilots prior to the 1950s, until the (obvious but far-reaching) discovery that there is no such thing as an average-sized pilot. You can classify people into all sorts of types, but it is wrong to make any further assumptions about them because you have done so. This is the fundamental mistake made by learning style theorists: you can certainly identify distinct learner types or preferences, but that makes no difference whatsoever to how you should actually teach people.

Education as a feeder for the job market

Perhaps the most significant, and maybe the most controversial, findings are those leading more directly to recommendations for the educational and training sector, with a very strong emphasis on preparedness for the careers ahead. One big thing bothers me in all of this. I am 100% in favour of shifting the emphasis of educational institutions from knowledge acquisition to more fundamental and transferable capabilities: on that, the researchers of this report hit the nail on the head. However, I don’t think that the education system should be thought of, primarily, as a feeder for industry or preparation for the workplace. Sure, that’s definitely one important role for education, but I don’t think it’s the dominant one, and it’s very dangerous indeed to make it the main focus to the exclusion of the rest. Education is about learning to be a human in the context of a society; it’s about learning to be part of that culture and at least some of its subcultures (and, ideally, about understanding different cultures). It’s a huge binding force, it’s what makes us smart, individually and collectively, and it is by no means limited to things we learn in institutions or organizations. Given their huge role in shaping how we understand the world, media (including social media) should, I think, at the very least be included whenever we talk of education. In fact, as Tony noted, the shift away from institutional education is rapid and on a vast scale, bringing many huge benefits as well as great risks. Outside the institutions designed for the purpose, education is often haphazard, highly prone to abuse, susceptible to mob behaviours, and often deeply harmful (Trump, Brexit, etc, being only the most visible tips of a deep malaise). We need better ways of dealing with that, which is an issue that has informed much of my research. But education (whether institutional or otherwise) is for life, not for work.

I believe that education is (and should be) at least partly concerned with passing on what we know, who we have been, who we are, how we behave, what we value, what we share, how we differ, what drives us, and how we matter to one another. That is how it becomes a force for societal continuity and cohesion, which is perhaps its most important role (though formal education’s incidental value to the economy, especially through schools, as a means of enabling parents to work cannot be overlooked). This doesn’t have to exclude preparation for work: in fact, it cannot. Education is also about preparing people to live in a culture (or cultures), and to continue to learn and develop productively throughout their lives, evolving and enhancing that culture, which cannot be divorced from the tools and technologies (including rituals, norms, rules, methods, artefacts, roles, behaviours, etc) of which cultures largely consist – including work. Of course we need to be aware of, and incorporate into our teaching, some of the skills and knowledge needed to perform jobs, because that’s part of what makes us who we are. Equally, we need to be pushing the boundaries of knowledge ever outwards to create new tools and technologies (including those of the arts, the humanities, the crafts, literature, and so on, as well as of sciences and devices), because that’s how we evolve. Some – only some – of that will have value to the economy. And we want to nurture creativity, empathy, social skills, communication skills, problem-solving skills, self-management skills, and all those many other things that make our culture what it is and that allow us to operate productively within it, which also happen to be useful workplace skills.

But human beings are much more than their jobs. We need to know how we are governed, the tools needed to manage our lives, the structures of society. We need to understand the complexities of ethical decisions. We need to understand systems, in all their richness. We need to nurture our love of arts, sports, entertainment, family life, the outdoors, the natural and built environment, fine (and not fine) dining, being with friends, talking, thinking, creating stuff, appreciating stuff, and so on. We need to develop taste (of which Hume eloquently wrote hundreds of years ago). We need to learn to live together. We need to learn to be better people. Such things are (I think) more who we are, and more what our educational systems should focus on, than our productive roles in an economy. The things we value most are, for the most part, seldom our economic contributions to the wealth of our nation, and the wealth of a nation should never be measured in economic terms alone. Even those few who love money the most usually love the power it brings even more, and that’s not the same thing as economic prosperity for society. In fact, it is often the very opposite.

I’m not saying economic prosperity is unimportant, by any means: it’s often a prerequisite for much of the rest, and sometimes (though far from consistently) a proxy marker for them. And I’m not saying that there is no innate value in the process of achieving economic prosperity: many jobs are critical to sustaining that quality of life that I reckon matters most, and many jobs actually involve doing the very things we love most. All of this is really important, and educational systems should cater for it. It’s just that future employment should not be thought of as the main purpose driving education systems.

Unfortunately, much of our teaching actually is heavily influenced by the demands of students to be employable, heavily reinforced on all sides by employers, families, and governments, and that tends to lead to a focus on topics, technical skillsets, and subject knowledge, not so much to the exclusion of all the rest as the primary framing for it. For instance (HT to Stu Berry and Terry Anderson for drawing my attention to this), the mandates set by the BC government for its post-secondary institutions are a litany of shame, horribly focused on driving economic prosperity and feeding industry to the exclusion of almost anything else (including learning and teaching, or research for its own sake, or things that enrich us as human beings rather than as cogs in an economic machine). This report seems to take it as just such a given that the primary role of education is to drive economic prosperity. Being produced by a bank, I guess that’s not too surprising, but it’s worth reading with that bias in mind.

And now the good news

What is heartwarming about this report, though, is that employers seem to want (or think they will want) more or less exactly those things that also enrich our society and our personal lives. Look at this fascinating breakdown of the skills employers think they will need in the future (Tony used this in his slides):

Projected skills demands, from the RBC future skills report

There’s a potential bias due to the research methodology, which I suspect encouraged participants to focus on more general skills, but it’s really interesting to see what comes in the first half of the list and what dwindles into unimportance at the end.

Topping the list are active listening, speaking, critical thinking, comprehension, monitoring, social perceptiveness, coordination, time management, judgement and decision-making, active learning, service orientation, complex problem solving, writing, instructing, persuasion, learning strategies, and so on. These mostly quite abstract skills (in some cases propensities, albeit propensities that can be cultivated) can only emerge within a context, and it is not only possible but necessary to cultivate them in almost any educational intervention in any subject area, so it is not as though they are being ignored in our educational systems. More on that soon. What’s interesting to me is that they are the human things, the things that give us value regardless of economic value. I find it slightly disconcerting that ethical or aesthetic sensibilities didn’t make the list, and there’s a surprising lack of mention of physical and mental health but, on the whole, these are life skills more than just work skills.

Conventional education can and often does cultivate these skills. I am pleased to brag that, as a largely unintentional side-effect of what I think teaching in my fields should be about, these are all things I aim to cultivate in my own teaching, often to the virtual exclusion of almost everything else. Sometimes I have worried (a little) that I don’t have very high technical expectations of my students. For instance, my advanced graduate-level course in information management provides technical skills in database design and analysis that are, for the most part, not far above high-school level (albeit that many students go far beyond that); my graduate-level social computing course demands no programming skills at all (technically, they are optional); my undergraduate introduction to web programming course sometimes leads to limited programming skills that would fail to earn a passing grade in a basic computer science course (though students typically pass mine).

However (and it’s a huge HOWEVER), they have a far greater chance to acquire far more of those skills that I believe matter, and that (gratifyingly) employers seem to want, than those who focus only on mastery of the tools and techniques. My web programming students produce sites that people might actually want to visit, and they develop a vast range of reflective, critical thinking, complex problem-solving, active learning, judgement, persuasion, social perceptiveness, and other skills that are at the top of the list. My information management students get all that, and a deep understanding of the complex, social, situated nature of the information management role, with some notable systems analysis skills (not so much the formal tools as the ways of understanding and thinking in systems). My social computing students get all that, and come away with deep insights into how the systems and environments we build affect our interactions with one another, and they can be fluent, effective users and managers of such things. All of the successful ones develop social and communication skills appropriate to the field.

Above all, my target is to help students to love learning about the subjects of my courses enough to continue to learn more. For me, a mark of successful teaching is not so much that students have acquired a set of skills and knowledge in a domain but that they can, and actually want to, continue to do so, and that they have learned to think in the right ways to accomplish that. If they have those skills, then it is not that difficult to pick up specific technical skillsets as and when needed. Conveniently, and not because I planned it that way, that happens to be what employers want too.

Employers don’t (much) want science or programming skills: so what?

Even more interesting, perhaps, than the skills employers do want are the skills they do not want – from operation monitoring onwards in the list – which are often the primary focus of many of our courses. Ignoring the real nuts-and-bolts stuff at the very bottom, like installation, repairing, maintenance, and selection (more on that in a minute), it is fascinating that skills in science, programming, and technology design are hardly wanted at all by most companies, yet are massively over-represented in our teaching. The writers of the report do offer the proviso that it is not impossible that new domains will emerge that demand exactly these skills but, right now and for the foreseeable future, that’s not what matters much to most organizations. This doesn’t surprise me at all. It has long been clear that the demand for people who create the foundations is, of course, going to be vastly smaller than the demand for people who build upon them, let alone the vastly greater numbers who make use of what has been built upon them. It’s not that those skills are useless – that’s a million miles from the truth – but that there is a very limited job market for them. Again, I need to emphasize that educators should not be driven by job markets: there is great value in knowing this kind of thing regardless of our ability to apply it directly in our jobs. On the other hand, nor should we be driven by a determination to teach all there is to know about foundations, when what interests people (and employers, as it happens) is what can be done with them. And, in fact, even those building such foundations desperately need to know that too, or the foundations will be elegant but useless.

Importantly, those ‘foundational’ skills are actually often anything but, because the emergent structures that arise from them obey utterly different rules to the pieces of which they are made. Knowing how a cell works tells you nothing whatsoever about the function of a heart, let alone how you should behave towards others, because different laws and principles apply at different levels of organization. A sociologist, say, really doesn’t need to know much about brain science, even though our brains probably contribute a lot to our social systems, because it’s the wrong foundation, at the wrong level of detail. Similarly, there is not a lot of value in knowing how CPUs work if your job is to build a website, or a database system supporting organizational processes (it’s not useless, but it’s not very useful so, given limited resources, it makes little sense to focus on it). For almost all occupations (paid or otherwise) that make use of science and technology, it matters vastly more to understand the context of use, at the level of detail that matters, than it does to understand the underlying substructures. This is even true of scientists and technologists themselves: for most scientists, social and business skills will have a far greater effect on their success than fundamental scientific knowledge.

But, if students are interested in the underlying principles and technologies on which their systems are based, then of course they should have the freedom and support to learn more about them. It’s really interesting stuff, irrespective of market demand. It enriches us. Equally, they should be supported in discovering gothic literature, social psychology, the philosophy of art, the principles of graphic design, wine making, and anything else that matters to them. Education is about learning to be, not just learning to do. Nothing of what we learn is wasted or irrelevant. It all contributes to making us creative, engaged, mutually supportive human beings.

With that in mind, I do wonder a bit about some of the skills at the bottom of the list. It seems to me that all of the bottom four demand – and presuppose – just about all of those in the top 12, at least if they are done well. Similarly for a few others trailing the pack. It is odd that operation monitoring is not much desired, though monitoring is. It is strange that troubleshooting is low in the ranks but problem-solving is high: you cannot troubleshoot without solving problems. I guess it speaks to the idea of transferability and the loss of specificity in roles. My guess is that, in answering the researchers’ questions, employers were hedging their bets a bit and not assuming that specific existing job roles will be needed. But conventional teachers could, with some justification, observe that their students are already acquiring the higher-level, more important skills through doing the low-level stuff that employers don’t want as much. Though I have no sympathy at all with our collective desire to impose this on our students, I would certainly defend our teaching of things that employers don’t want, at least partly because, in the process, we are actually teaching far more. I would equally defend even the teaching of Latin or ancient Greek (as long as these are chosen by students, never when they are mandated) because the bulk of what students learn is never just the skill we claim to be teaching. It’s much like what the late, wonderful, and much-lamented Randy Pausch called a head fake – teaching one thing of secondary importance while really teaching another, deeper lesson – except that rather too many teachers tend to be as deceived as their students about the real purpose and outcomes of their teaching.

Automation and outsourcing

As the report also suggests, it may be that those skills lower in the ranking tend to be things that can often be outsourced, including (sooner or later) to machines. It’s not so much that the jobs will not be needed, but that they can be either automated or concentrated in an external service provider, reducing the overall job market for them. Yes, this is true. However, again, the methodology may have played a large role in reaching this conclusion. There is a tendency, of which we are all somewhat guilty, to look at current patterns of change (in this case the trend towards automation and outsourcing) and to assume that they will persist into the future. I’m not so sure.

Outsourcing

Take the stampede to the cloud, for instance, which is a clear underlying assumption in at least the undervaluing of programming. We have had phases of outsourcing several times before over the past 50 or 60 years of computing history. Cloud outsourcing is only new to the extent that the infrastructure to support it is much cheaper and better established than it was in earlier cycles, and there are smarter technologies available, including many that benefit from scale (e.g. AI, big data). We are currently probably at or near peak cloud but, even if it has yet to peak, it is still just a trend. It might last a little longer than the previous generations (which, of course, never actually went away – it’s just an issue of relative dominance) but it suffers from most of the problems that brought previous outsourcing hype cycles to an end. The loss of in-house knowledge, the dangers of proprietary lock-in, the surrender of control to another entity that has a different (and, inevitably, at some point conflicting) agenda, and so on, are all counter-forces that hold outsourcing in check. History and common sense suggest that there will eventually be a reversal of the trend and, indeed, we are seeing it here and there already, with the emergence of private clouds, regional/vertical cloud layers, hybrid clouds, and so on.

Big issues of privacy and security are already high on the agendas of many organizations, with an increasing number of governments starting to catch up with legislation that heavily restricts the unfettered growth of (especially) US-based hosting, with all the very many very bad implications for privacy that it entails. Increasingly, businesses are realizing that they have lost the organizational knowledge and intelligence to effectively control their own systems: decisions that used to be informed by experts are now made by middle managers with insufficient detailed understanding of the complexities, who are easy prey for cloud companies willing to exploit their ignorance. Equally, they are liable to be outflanked by those who can adapt faster and less uniformly, inasmuch as everyone gets the same tools in the cloud, so there is less to differentiate one user of it from the next. OK, I know that is a sweeping generalization – there are many ways to use cloud resources that do not rely on standard tools and services. We don’t have to buy in to the proprietary SaaS rubbish, and can simply move servers to containers and VMs while retaining control, but the cloud companies are persuasive and keen to lure us in, with offers of reduced costs, higher reliability, and increased, scalable performance that are very enticing to stressed, underfunded CIOs with immediate targets to meet.

Right now, cloud providers are riding high and making ridiculously large profits, but the same was true of IBM (and its lesser competitors) in the 60s and 70s. They were brought down (though never fully replaced) by a paradigm change that was, for the most part, a direct reaction to the aforementioned problems, plus a few that are less troublesome nowadays, like the performance and cost of leased lines. I strongly suspect something similar will happen again in a few years.

Automation and the end of all things we value

Automation – especially through the increased adoption of AI techniques – may be a different matter. It is hard to see that becoming less disruptive, albeit that the reality is, and will be, much more mundane than the hype, and there will be backlashes. However, I greatly fear that we have a lot of real stupidity yet to come in this. Take education, for instance. Many people whose opinions I otherwise respect are guilty of thinking that teachers can, to a meaningful extent, be replaced by chatbots. They are horribly misguided but, unfortunately, people are already doing it, and claiming success, not just in teaching but in fooling students that they are being taught by a real teacher. You can indeed help people to pass tests through the use of such tools. However, the only thing that tests prove about learning is that you have learned to pass them. That’s not what education is for. As I’ve already suggested, education is really not much to do with the stuff we think we teach. It is about being and becoming human. If we learn to be human from what are, in fact, really very dumb machines – with no understanding whatsoever of the words they speak, no caring for us, no awareness of the broader context of what they teach, no values to speak of at all – we will lower the bar for artificial intelligence, because we will become so much dumber ourselves. It will be like being taught by an unusually tireless and creepily supportive (because why would you train a system to be otherwise?) person.

We should not care for them, and that matters, because caring (both ways) is critical to the relationship that makes learning with others meaningful. But it will be even worse if and when we do start caring for them (remember the Tamagotchi?). When we start caring for soulless machines (I don’t mean ‘soul’ in a religious or transcendent sense), when it starts to matter to us that we are pleasing them, we will learn to look at one another in the same way and, in the process, lose our own souls. A machine, even one that fools us into thinking it is human, makes a very poor role model. Sure, let them handle helpdesk enquiries (and pass them on if they cannot help), let them supplement our real human interactions with useful hints and suggestions, let them support us in the tasks we have to perform, let them mark our tests to double-check we are being consistent: they are good at that kind of thing, and will get better. But please, please, please don’t let them replace teachers.

I am afraid of AI, not because I am bothered by the likelihood of an AGI (artificial general intelligence) superseding our dominant role on the planet: we have at least decades to think about that, and we can and will augment ourselves with dumb-but-sufficient AI to counteract any potential ill effects. The worst outcome of AI in the foreseeable future is that we devalue ourselves, that we mistake the semblance of humanity for humanity itself, that machines will become our role models. We may even think they are better than us, because they will have fewer human foibles and a tireless, on-demand semblance of caring that we will mistake for being human (a bit like obsequious serving staff seeking tips in a restaurant, but creepier, less transparent, and infinitely patient). Real humans will disappoint us. Bots will be trained to be what their programmers perceive as the best of us, even though we have no more than the glimmerings of an idea of what ‘best’ actually means (philosophers continue to struggle with this after thousands of years, and few programmers have studied philosophy at even a basic level). That way the end of humanity lies: slowly, insidiously, barely noticeably at first. Not with a bang but with an Alicebot.

Arthur C. Clarke delightfully claimed that any teacher who could be replaced by a machine should be. I fear that we are not smart enough to realize that it is, in fact, very easy to successfully replace a teacher with a machine if you don’t understand the teacher’s true role in the educational machine and don’t make massive changes to that machine. As long as we think of education as the achievement of pre-specified outcomes that we measure using primitive tools like standardized tests, exams, and other inauthentic metrics, chatbots will quite easily supersede us, despite their inadequacies. It is way too easy to mistake the weirdly evolved educational system that we are part of for education itself: we already do so in countless ways. Learning management systems, for instance, are not designed for learning: they are designed to replicate mediaeval classrooms, with all the trimmings, yet they have been embraced by nearly all institutions because they fit the system. AI bots will fit even better. If we do intend to go down this path (and many are doing so already) then please let’s think of these bots as supplemental, first-line support, and please let’s make it abundantly clear that they are limited, fixed-purpose mechanisms – not substitutes but supplements that can free us from trivial tasks to let us concentrate on being more human.

Co-ops and placements

The report makes a lot of recommendations, most of which make sense – e.g. lifelong support for learning from governments, a focus on softer, more flexible skills, a focus on adaptability, etc. Notable among these is the suggestion, as one of its calls to action, that all PSE students should engage in some form of meaningful work-integrated learning placement during their studies. This is something that we have been talking about offering to our program students in computing for some time at Athabasca University, though the demand is low because a large majority of our students are already working while studying, and it is a logistical nightmare to do this across the whole of Canada and much of the rest of the globe. Though some AU programs embed it (nursing, for instance), I’m not sure we will ever get round to it in computing. I do very much agree that co-ops and placements are typically a good idea for (at least) vocationally oriented students in conventional in-person institutions. I supervised a great many of these (for computing students) at my former university and observed the extremely positive effects they usually had, especially on those taking the more humanistic computing programs like information systems, applied computing, computer studies, and so on. When they came back from their sandwich year (UK terminology), students were nearly always far wiser, far more motivated, and far more capable of studying than the relatively few who skipped the opportunity. Sometimes they were radically transformed – I saw borderline-fail students turn into top performers more than once – and, apart from when things fell apart (not common, but not unheard of), the experience was nearly always worth more than the previous couple of years of traditional teaching put together. It was expensive and disruptive to run, demanding a lot from all academic staff and especially from those who had to organize it all, but it was worth it.

But just because it works in conventional institutions doesn’t mean that it’s a good idea: it’s a technological solution that works because conventional institutions don’t. Let’s step back from this for a moment. Learning in an authentic context, when it is meaningful and relevant to clear and pressing needs, surrounded by all the complexities of real life (notwithstanding that education should buffer some of that, and make the steps less risky or painful), in a community of practice, is a really good idea. Apprenticeship models have thousands of years of successful implementation to prove their worth, and that’s essentially what co-ops or placements achieve, albeit only in a limited (typically three-month to one-year) timeframe. It’s even a good idea when the study area and working practices do not coincide, because it allows many more connections to be made in both aspects of life. But why not extend that to all (or almost all) of the process?

To an extent, this is what we at Athabasca already do, although it tends to be more the default context than something we take intentional advantage of. Again, my courses are an exception – most of mine (and all, to some extent) rely on students having a meaningful context of their own, and give opportunities to integrate work or other interests with study by default. In fact, one of the biggest problems I face in my teaching arises on those rare occasions when students don’t have sufficient aspects of work or leisure that engage them (e.g. prisoners or visiting students from other universities), or work in contexts that cannot be used (e.g. defence workers). I have seen it work for in-person contexts, too: the Teaching Company Scheme in the UK, which later became Knowledge Transfer Partnerships, has been hugely successful over several decades, marrying workplace learning with academic input, usually leading to a highly personalized MSc or MA while offering great benefits to lecturers, employers, and students alike. These are fun, but resource-intensive, to supervise. Largely for that reason, it might in the past have been hard to scale this below graduate levels of learning, but modern technologies – shared workspaces, blogs, portfolio management tools, rich realtime meeting tools, etc – and a more advanced understanding of ways to identify and record competencies make it far more feasible.

It seems to me that what we want is not co-ops or placements but a robust (and, ideally, publicly funded) approach to integrating academic and in-context learning. Already, a lot of my graduate students and a few undergraduates are funded by their employers, working on our courses at the same time as doing their existing jobs, which seems to benefit all concerned, so there’s clearly a demand. And it’s not just an option for vocational learning. Though (working in computing) much of my teaching does have a vocational grounding, if not a vocational focus, I have come across students elsewhere across the university who are doing far less obviously job-related studies with the support of their employers. In fact, it is often a much better idea for students to learn stuff that is not directly applicable to their workplace, because the boundary-crossing it entails improves a vast range of the most important skills identified in the RBC report – creativity, communication, critical thinking, problem solving, judgement, listening, reading, and so on. Good employers see the value in that.

Conclusions

Though this is a long post, I have only cherry-picked a few of the many interesting issues that emerge from the report, but I think there are some general themes in my reactions to it that are consistent:

1: it’s not about money

Firstly, the notion that educational systems should primarily be thought of as feeders for industry is dangerous nonsense. Our educational systems are preparation for life (in society and its cultures), and work is only a part of that. Preparedness for work is better seen as a side-effect of education, not its purpose. And education is definitely not the best vehicle for driving economic prosperity. The teaching profession is almost entirely populated by extremely smart, capable people who (especially in relation to their qualifications) earn relatively little money. To cap it all, we often work longer hours, in poorer conditions, than many of our similarly capable industry colleagues. Though a fair living wage is, of course, very important to us, and we get justly upset when offered unfair wages or worsening conditions, we don’t work for pay: we are paid for our work. Notwithstanding that a lack of money is a very bad thing indeed and should be avoided like the plague, we do this precisely because we think there are some things – common things – that are much more important than money (this may also partly account for a liberal bias in the profession, though it also helps that the average IQ of teachers is a bit above the norm). And, whether explicitly or otherwise, this is inevitably part of what we teach. Education is not primarily about learning a set of skills and facts: it’s about learning to be, and the examples that teachers set, the ways they model roles, cannot help but come laden with their own values. Even if we scrupulously tried to avoid it, the fact of our existence serves as a prime example of people who put money relatively low on their list of priorities. If we have an influence (and I hope we do), we therefore encourage people to value things other than a large wage packet. So, if you are going to college or school in the hope of learning to make loads of money, you’re probably making the wrong choice. Find a rich person instead and learn from them.

2: it is about integrating education and the rest of our lives

Despite its relentless focus on improving the economy, I think this report is fundamentally right in most of the suggestions it makes about education, though it doesn’t go far enough. It is not so much that we should focus on job-related skills (whatever they might be) but that we should integrate education with, and throughout, our lives. The notion of taking someone out of their life context, inflicting a bunch of knowledge-acquisition tasks with inauthentic, teacher-led criteria for success, and subjugating them to teacher control over all that they do, is plain dumb. There may be odd occasions where retreating from and separating education from the world is worthwhile, but they are few and far between, and can be catered for on an individual-needs basis.

Our educational processes evolved in a very different context, where the primary intent was the teaching of dogma to the many by the few, and where physical constraints (rarity of books and reading skills, limited availability of scholars, limits of physical spaces) made lectures in dedicated spaces appropriate solutions to those particular technical problems. Later, education evolved to focus more on creating a pliant and capable workforce to meet the needs of employers and the military, which happened to fit fairly well with the one-to-many, top-down-control models devised to teach divinity and the like. Though those days are mostly over, we still retain strong echoes of these roles in much of our structure and processes – our pedagogies are still deeply rooted in the need to learn specific stuff, dictated and directed by others, in this weird, artificial context. Somewhere along the way (in part because higher education, at least, was formerly a scarce commodity), we turned into filters and gatekeepers for employment purposes.

But, today, we are trying to solve different problems. Modern education has tended to tread a shifting path between supporting individual development and improving our societies: these should be mutually supportive roles, though different educational systems tend to put more emphasis on one than the other. With that in mind, it no longer makes sense to routinely (in fact almost universally) take people out of their physical, social, or work context to learn stuff. There are times when that helps or may even be necessary: when we need access to expensive shared resources (that mediaeval problem again), for instance, or when we need to work with in-person communities (it is hard to teach acting unless you have an opportunity to act with other actors, for example), or when it might be notably dangerous to practice in the real world (though virtual simulations can help). But, on the whole, we learn far better in a real-world context, where we can put our learning directly into useful practice, where it has value to us and those around us.

Community matters immensely – for learning, for motivation, for diversity of ideas, for belonging, for connection, etc – and one of the greatest values of traditional education is that it provides a ready-made social context. We should not throw the baby out with the bathwater, and it is important to sustain such communities, online or in-person. But it does not have to be, and should not ever be, the only social context, and it does not need to be the main social context for learning. Pleasingly, in his own excellent keynote at CNIE, our president Neil Fassina made some very similar points. I think that Athabasca is well on course towards a much brighter future.

3: what we teach is not what you learn

Finally, the whole education system (especially in higher education) is one gigantic head fake. By and large, the subjects we teach are of relatively minor significance. We teach ways of thinking, we teach values, we teach a few facts and skills, but mainly we teach a way of being. For all that, what you actually learn is something else entirely, and it is different from what every one of your co-learners learns, because 1) you are your own main and most important teacher and 2) you are surrounded by others (in person, in artefacts they create, online) who also teach you. We need to embrace that far more than we typically do. We need to acknowledge and celebrate the differences in every single learner, not teach stuff at them in the vain belief that what we have to tell them matters more than what they want to learn, or that somehow (contrary to all evidence) everyone comes in and leaves knowing the same stuff. We’ve got to stop rewarding compliance and punishing non-compliance.

What you learn changes you. It makes you able to see things differently, do things differently, make new connections. Anything you learn. There is no such thing as useless learning. It is, though, certainly possible to learn harmful things – misconceptions, falsehoods, blind beliefs, and so on – so the most important skill is to distinguish those from the things that are helpful (not necessarily true – helpful). On the whole, I don’t like approaches to teaching that make you learn stuff faster (though they can be very useful when solving some kinds of problem) because they devalue the journey. I like approaches that help you learn better: deeper, more connected, more transformative.

This doesn’t mean that the RBC report is wrong in criticizing our current educational systems, but it is wrong to believe that the answer is to stop (or reduce) teaching the stuff that employers don’t think they need. Learners should learn whatever they want or need to learn, whenever they need to do so, and educational institutions (collectively) should support that. But that also doesn’t mean teachers should teach only what learners (or employers, or governments) think they should teach, because 1) we always teach more than that, whether we want to or not, and it all has value, and 2) none of these entities are our customers. The heartbreaking thing is that some of the lessons most of us unintentionally teach – from mindless capitulation to authority, to the terrible approaches to learning nurtured by exams, to the truly awful beliefs that people do not like, or are not able, to learn certain subjects or skills – are firmly in the harmful category.

It does mean that we need to be more aware of the hidden lessons, and of what our students are actually learning from them. We need to design our teaching in ways that allow students to make it relevant and meaningful in their lives. We need to design it so that every student can apply their learning to things that matter to them; we need to help them to reflect and connect, to adopt approaches, attitudes, and values that they can use constantly throughout their lives, in the workplace or not. We need to help them to see what they have learned in a broader social context, to pay it forward and spread their learning contagiously, both in and out of the classroom (or wherever they are doing their learning). We need to be partners and collaborators in learning, not providers. If we do that then, even if we are teaching COBOL, Italian Renaissance poetry, or some other ‘useless’ subject, we will be doing what employers seem to want and need. More importantly, we will be enriching lives, whether or not we make people fiscally richer.

In-person vs online teaching

This is roughly the content of my 3-minute pitch explaining (some of) my research, which I gave at the OUNL research day in Heerlen, Netherlands, yesterday. I was allowed one slide:

in-person vs self-paced online learning

This is (very roughly) what I said:

Mediaeval scholars were faced with the problem that knowledge (doctrine, actually), often found in rare and expensive books, needed to be passed from the few to the many. Lecturing was an efficient solution, given the constraints of physics. Because everyone needed to be in the same place at the same time for this to work, we developed schools, universities, classes, courses, timetables, terms, and semesters. We built resources like libraries. We created organizational units to manage it all, like faculties and colleges. Above all, for efficiency, we needed rules of behaviour and a natural power dynamic putting the lecturer in control of every moment of the learning activity in a classroom.

Learning (like most things) works best – by far – when learners are intrinsically motivated. It barely works at all when learners are amotivated. Self-determination theory tells us that three things are needed for intrinsic motivation: support for autonomy, competence, and relatedness. The mediaeval solution was good for relatedness, but bad for competence (some found it too challenging, some not challenging enough) and terrible for autonomy. The chance of amotivation is thus very high. Many of our pedagogies, processes, and much of the art of teaching since then have been, in one way or another, attempts to deal with this one central problem. The most common solution to the resulting lack of intrinsic motivation was to apply externally regulated extrinsic motivation – rewards like grades and qualifications, rules of attendance, punishments for non-compliance, etc – which, self-determination theory shows, is infallibly fatal to intrinsic motivation, making things far worse. How crazy is it that we have to force people to do the one thing that makes us most human, exercising a drive to learn that is arguably stronger than sex or even the pursuit of food? Good teachers using well-considered teaching methods can usually overcome many of the issues, at least for many students, much of the time. But that’s what good pedagogy means: it is highly situated in solving the innate problems of in-person teaching.

On the whole, for perfectly understandable reasons (much distance teaching evolved in an in-person context with which it had to interoperate), we have transferred those exact same pedagogies unthinkingly to open, self-paced, self-directed distance learning. ‘Teaching is teaching’, advocates claim, and so they try, as much as possible, to replicate online what they do in a classroom. But the motivational problems faced by distance learners are almost the exact inverse of those of in-person learners. They have lots of autonomy – you can’t really take it away – and can take different paths and pacing to gain competence (e.g. rewinding or skipping videos, re-reading text, augmenting with other resources, etc), but they tend to suffer from reduced relatedness, especially when learning truly independently, in a self-paced modality. Given this mismatch, and the lack of well-evolved support and processes for this very different context, it is not surprising that there is often a high rate of attrition, especially when teachers (lacking the closeness and authority of in-person colleagues) double down on rewards and punishments through grades, even to the extent of rewarding participation, thus making it even worse.

There is no such thing as a disembodied, abstract, decontextualized pedagogy – it is all about orchestrating technologies – so any solution must be as much about building tools and structures as it is about using techniques and methods. They are entirely inseparable. A significant part of my current research is thus an attempt to design native online pedagogies, technologies, and other parts of educational systems (including credentialling) that don’t rely on reward and punishment; that are built for supporting learning in the complex, ever-changing modern world that does exist, rather than for the indoctrination of mediaeval students.

A blast from my past: Google reimplements CoFIND

While searching for a movie using Google Search last night I got (for the first time that I can recall) the option to tag the result, as described in this article. I was pleased to discover that the tool they provide for this is virtually identical (albeit with a much slicker and more refined modern interface) to the CoFIND system that underpinned my PhD, which I built over 20 years ago now. You are presented with a list of tags and can select one or more that describe the movie, and/or suggest your own, effectively creating a multi-dimensional rating system that other users can use to judge what the movie is like. When I rated the movie last night, for instance, popular tags presented to me included ‘terrible acting’, ‘bad writing’, ‘clichéd’, ‘boring’, and so on. Having seen the movie, I agree about the bad writing and clichés – it was at the terrible end of the scale – but actually think most of the acting was fairly good, and it was not very boring. What is interestingly different about this, compared with other tagging systems currently available, is that this kind of tag is fuzzy – it represents a value statement about the movie that exists on a continuum, not a simple categorization. The sorting algorithm for the list of tags presented to you appears (like my original CoFIND) to be based mainly on simple popularity, though it is possible that (like CoFIND) it uses other metrics such as tag age, and perhaps even a user model as well. It’s vastly more useful and powerful than the typical thumbs-up/thumbs-down that Google normally provides. The feature has sadly not reappeared on subsequent movie searches, so I am guessing that Google is either still testing it, or trying to build up a sufficient base of recommendations by occasionally showing it to people before opening it up to everyone.

Just in case Google or anyone else has tried to patent this, and to assert my prior art, you can find a description and screenshots (p183 and p184) of my original CoFIND system in chapter 6 of my PhD thesis, as well as in many papers before and since, not to mention in a fair few blog posts. It’s out there in the public domain for anyone to use. The interface of my system was, even by the standards of the day, pretty awful, and not even a fraction as good as the one provided by Google, but those were different times: it did work in exactly the same way, though. As I developed it further, the interface actually became much worse. Over the course of a few years I experimented with quite a range of methods to get and display ratings/tags, including an ill-conceived Likert scale as well as a much more successful early use of tag clouds, all of which added complexity and reduced usability. Some of these later systems are described and discussed in my PhD too. In its final, refactored, and heavily evolved form, which postdates my PhD by several years, a version of CoFIND (last modified 2007) is actually still available, and it almost reverts to the Google-style tag selection approach of the original, with the slight tweak that, in CoFIND, you can disagree about any particular tag use (for instance, if you don’t believe a resource to be inane then you can cast a vote against that tag). The interface remains at least as awful as the original, though, and not a patch on Google’s. The main other differences, apart from interface variations, are that the nomenclature differs (I used ‘qualities’ rather than ‘tags’), and that CoFIND could be used for anything with a URL, not just movies. If you’re interested, click on any resource link in the system and you’ll see my primitive, ugly, frame-based attempt to do very much the same as Google is doing for movies (nb. unless you are logged in you cannot add new qualities but, for authorized users, a field appears at the end that is just like Google’s). Though primarily intended to share and recommend educational resources, CoFIND was very flexible and was, over the years, used for a range of other purposes, from comparing interface designs to discovering images and videos. It was always flaky, ugly, and unscalable, but it worked well enough for my research and teaching purposes and (because it provided RSS feeds) it was my go-to tool for sharing interesting links right up until 2007, after which I reverted to more conventional but better-maintained tools like the Landing or WordPress.

A little bit of CoFIND background

I’ve written a fair bit about CoFIND, formally and informally, but not for a few years now, so here’s a little background for anyone that might be interested, and to remind myself of a little of what I learned all those years ago in the light of what I know now.

An evolving, self-organizing, social bookmarking tool

I started my PhD research in 1997 with the observation that, even then, there was a vast amount of stuff to learn from that could be easily found on the Web, but that it was really difficult to find good stuff, let alone stuff that was actually useful to a particular learner at a particular stage in their development. Remember that this was before Google even started, so things were significantly worse then than they are now. Infoseek was as good as it got.

I had also observed that, in any group of learners, people would find different things and, between them, discover a much larger range of useful resources than any one learner (or teacher) could alone – a fact that I use in my teaching to this day. These would likely be (and, it turned out, actually were) better than what a teacher could find alone because, though individual learners might be less able to distinguish low from high quality, they would know what worked for them, and sufficient numbers of eyes would weed out the bad stuff, as long as there was a mechanism for it. This was where I came in.

The only such mechanisms widely available at the time were simple rating systems. However, learners have very different learning needs, so I immediately realized that ‘thumbs-up’ ratings or simple Likert scales would not work. This was not about finding the one ‘best’ solution for everyone; it was instead concerned with finding a range of alternatives to fill different ecological niches, and somehow discovering the most useful solution in each niche for a given learner at a given time. My initial idea was to make use of a crowd, not an individual curator, and to employ a process closely akin to natural evolution to kill bad suggestions and promote good ones, in order to create an ecosystem of learning resources rather than a simple database. CoFIND was a series of software solutions that explored and extended this initial idea.

CoFIND was, on the face of it, what would eventually come to be called a social bookmarking system – a means for learners to find and to share Web resources (and, later, other things) with one another, along with a mechanism for other learners to recommend or critique them. It was by no means the first social bookmarking system, but it was certainly not a common genre at the time, and I don’t think such a dedicated system had ever been used in education before (for all such assertions, I stand to be corrected), though other means of sharing links, from simple web pages or wikis or discussion forums to purpose-built, teacher-curated tools, were not that uncommon. A lot of my early research involved learning about self-organization and complex systems, in particular focusing on evolution and stigmergy (self-organization through signs left in the environment). As well as the survival-of-the-fittest dynamic, evolution furnished me with many useful concepts that I made good use of, such as the importance of parcellation, the necessity of death, ways to avoid skyhooks, the benefits of spandrels, ways to leverage chance (including extinction events), and various approaches to supporting speciation. As a result of learning about stigmergy, I independently developed what later came to be known as tag clouds. I don’t believe that mine were the first ever tag clouds – weighted lists of one sort or another had been around for a few years – but, though mine didn’t then use the name, they were likely the first uses of such things in educational software, and almost certainly the first with this particular theoretical model to support them (again, I am happy to be corrected).
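
The mechanics of a tag cloud are, at heart, simple stigmergy: the more a sign is used, the more prominent it becomes. Purely as illustration – none of this is the original CoFIND code, and the tags and counts are invented – a minimal sketch might scale font size with the log of each tag’s frequency:

```python
# Minimal tag cloud sketch: font size scales with the log of tag frequency.
# Tag names and counts are invented for illustration.
import math

tag_counts = {"interesting": 42, "helpful": 17, "boring": 5, "funny": 2}

MIN_PT, MAX_PT = 10, 32  # smallest and largest font sizes, in points

def font_size(count: int, counts: dict[str, int]) -> int:
    """Map a tag's count to a font size on a log scale."""
    lo = math.log(min(counts.values()))
    hi = math.log(max(counts.values()))
    scale = (math.log(count) - lo) / (hi - lo) if hi > lo else 0.5
    return round(MIN_PT + scale * (MAX_PT - MIN_PT))

html = " ".join(
    f'<span style="font-size:{font_size(n, tag_counts)}pt">{tag}</span>'
    for tag, n in sorted(tag_counts.items())
)
print(html)
```

The log scaling is one common design choice among several: it stops a runaway popular tag from visually drowning everything else, which matters if the cloud is meant to work as a stigmergic sign rather than a leaderboard.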

A collaborative filter

The name CoFIND is an acronym for ‘collaborative filter in n-dimensions’. The n dimensions were substantiated through what we (my supervisors and I) called qualities. We went through a long list of possible names for these, and I was drawn for a while to calling them ‘values’, but (unfortunately) we never thought of ‘tags’ because the term was not in common use for this kind of purpose at the time. After a phase of calling them q-tags, I now call qualities by the much more accessible name of ‘fuzzy tags’. Fuzzy tags are not just binary classifications of a topic but tags that describe what we value, or don’t value, in a resource, and how much we value it. While people may sometimes disagree about binary classifications (conventional tags), it is always possible to have different opinions about the application of fuzzy tags: some may find something interesting, for instance, while others may not, and others may feel it to be quite interesting, or incredibly so. Fuzzy tags relate to fuzzy sets, which have a continuum of grades of membership, and that is where the name comes from. Different versions of CoFIND used different ways to establish the fuzziness of a tag. The Likert scale used in a few mid-period versions was my failed attempt to make it explicit, but it was a nightmare for people to actually use. The first versions used the same kind of frequency-based weighting as Google’s movie tags, but that was a bit coarse: I was uncomfortable with the averaging effect and with the unbridled Matthew Effect that threatened to keep early tags at the top of the list for all time, which I rather crudely kept in check with a simple age-related weighting that was boosted only when tags were used (the unfortunate side effect of which was that, if a system was not used for a few weeks, all the tags vanished in a huge extinction event, albeit that they could be revived if anyone ever used one of the dead ones again). The final version was a bit in-between, allowing an indefinitely large scale via simple up-down ratings, balanced with an algorithm that included a decaying but renewable novelty weighting that adjusted to the frequency of use of the system as a whole. This still had the peculiar effect of evening out/initializing all of the tags over time if no one used the system, but at least it caused fewer catastrophes.
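For concreteness, here is a minimal sketch of the final version’s weighting idea as just described: up-down ratings combined with a novelty bonus that is renewed whenever a tag is used and decays in proportion to overall system activity. The original CoFIND code worked differently in detail; everything here (names, constants, the linear decay) is an illustrative assumption:

```python
class TagWeight:
    """Illustrative weighting for a fuzzy tag: rating score plus renewable novelty."""

    def __init__(self, novelty_bonus=10.0):
        self.ups = 0
        self.downs = 0
        self.novelty_bonus = novelty_bonus
        self.novelty = novelty_bonus  # new tags start with full novelty

    def rate(self, up: bool) -> None:
        if up:
            self.ups += 1
        else:
            self.downs += 1
        self.novelty = self.novelty_bonus  # using a tag renews its novelty

    def decay(self, system_activity: float, rate: float = 0.05) -> None:
        # Decay is tied to how busy the whole system is, so a quiet period no
        # longer wipes out every tag at once (the 'extinction event' problem
        # of the earlier, purely age-based weighting).
        self.novelty *= max(0.0, 1.0 - rate * system_activity)

    @property
    def weight(self) -> float:
        return (self.ups - self.downs) + self.novelty
```

A periodic job would call decay() with some measure of recent system-wide activity, so unused tags sink slowly towards their bare rating score rather than dying outright.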

‘Traditional’ collaborative filters simply discover whether things are likely to be more valued or less valued on a usually implicit single dimension (good-bad, liked-disliked, useful-useless, etc). CoFIND’s qualities/fuzzy tags allowed people to express in what ways they were better or worse – more interesting, less helpful, more complex, less funny, etc, just as Google’s movie tagging allows you to express what you like or dislike about a movie, not just whether you liked it or not. In many tag-based systems, people tend to use quite a few simple tags that are inherently fuzzy (e.g. Flickr photos tagged as ‘beautiful’) but they are seldom differentiated in the software from those that simply classify a resource as fitting a particular category, so they are rarely particularly helpful in finding stuff to help with, say, learning.
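The distinction is easy to see in data-structure terms: a conventional tag is a crisp set membership, while a fuzzy tag carries a grade of membership. A toy illustration (my own, purely to show the contrast):

```python
# A conventional tag set: a resource either has the tag or it doesn't.
crisp_tags = {"python", "tutorial", "video"}

# Fuzzy tags: each tag carries a grade of membership in [0, 1], so the same
# resource can be 'incredibly interesting' to some and barely so to others.
fuzzy_tags = {
    "interesting": 0.9,
    "helpful": 0.4,
    "funny": 0.1,
}
```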

I was building CoFIND just as the field of collaborative filtering was coming out of its infancy, so the precise definition of the term had yet to be settled. At the time, a collaborative filter (then usually called an ‘automated collaborative filter’) was simply any system that used the prior explicit and/or implicit preferences of a number of previous users (a usually anonymous crowd) to help make better recommendations and/or filter out weaker recommendations for the current user. The PageRank algorithm that still underpins Google Search would perhaps have then been described as a collaborative filter, as was one of its likely inspirations, PHOAKS (People Helping One Another Know Stuff), which mined Usenet newsgroups for links, taking each as an implicit recommendation within the newsgroup’s topic area. By this definition, CoFIND was in fact a semi-automated collaborative filter that combined explicit preferences with automated matching. Nowadays the term ‘collaborative filter’ tends to apply only to a specific subset of recommender systems that automatically predict future interests by matching individual patterns of behaviour with those of multiple others, whether by item (people who bought this also bought…) or by user (people whose past or expressed preferences seem to be like yours also liked…). I think that, if I built CoFIND today, I would simply refer to it more generically as a recommender system, to avoid confusion.
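To be clear about what the modern, narrower usage means, here is a minimal item-based collaborative filter of the ‘people who bought this also bought’ variety, using cosine similarity over a user-item rating matrix. This is a generic textbook sketch, not CoFIND and not any particular production system:

```python
import numpy as np

def item_similarity(ratings: np.ndarray) -> np.ndarray:
    """Cosine similarity between item columns of a users x items rating matrix."""
    norms = np.linalg.norm(ratings, axis=0, keepdims=True)
    norms[norms == 0] = 1.0  # guard against entirely unrated items
    normalized = ratings / norms
    return normalized.T @ normalized

def recommend(ratings: np.ndarray, user_index: int, top_n: int = 2):
    sim = item_similarity(ratings)
    user = ratings[user_index]
    scores = sim @ user          # weight each item by similarity to items the user rated
    scores[user > 0] = -np.inf   # don't re-recommend items already rated
    return np.argsort(scores)[::-1][:top_n]

ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [0, 0, 5, 4],
], dtype=float)
print(recommend(ratings, user_index=0))  # unrated items, most similar first
```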

Disembodied user models

Rather than a collaborative filter, back in the late 90s Peter Brusilovsky saw CoFIND as a new species of educational adaptive hypermedia, as it was perhaps the first (or at least one of the first) that worked on an open corpus rather than through a closed corpus of linked resources. However, he and I were both puzzled about where to find the user model, which was part of Peter’s definition of adaptive hypermedia. I didn’t feel that it needed one, because users chose the things that mattered to them at runtime. In retrospect, I think that the trick behind CoFIND, and what still distinguishes it from almost all other systems apart from this fairly new Google tool, is that it disembodied and exposed the user model. Qualities were, in essence, the things that would normally be invisibly stored in a user model, but I made them visible, in an extreme variant of what Judy Kay later described as scrutable adaptation. In effect, a learner chose their own learner model at the time they needed it. The reasoning behind doing so was that, for learners, past behaviour is usually a poor predictor of future needs, mainly because 1) learning changes people (so past preferences may have little bearing on future preferences), and 2) learning is driven by a vast number of things other than taste or past actions: we often have a need for it thrust upon us by an extrinsic agency, like a teacher, or a legislative demand for a driving licence, for instance. Qualities (fuzzy tags) allow us to express the current value of something to us, in a form that we can leave behind without a lot of sticky residue, and that future users can use. In fact, later versions did tend to slightly emphasize similar things to those that people had added, categorized, or rated (fuzzily tagged) earlier, but this was just a pragmatic attempt to make the system more valuable as a personal bookmark store, and therefore to encourage more use of it, rather than an attempt to build a full-blown collaborative filter in the modern sense of the term.
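In other words, where a conventional adaptive system would consult a stored profile, CoFIND asked the learner to assemble their model at the moment of need. A toy sketch of that runtime selection (all names are mine, for illustration only):

```python
def rank_resources(resources, chosen_qualities):
    """Rank resources against the learner's model-of-the-moment.

    resources: list of (name, {quality: fuzzy weight}) pairs.
    chosen_qualities: whichever qualities matter to this learner right now.
    """
    def score(item):
        _, qualities = item
        return sum(qualities.get(q, 0.0) for q in chosen_qualities)

    return sorted(resources, key=score, reverse=True)

resources = [
    ("Intro video", {"clear": 0.9, "detailed": 0.2, "funny": 0.7}),
    ("Formal paper", {"clear": 0.4, "detailed": 0.95}),
]
# Yesterday I wanted clarity; today I need depth. No stored profile required.
print(rank_resources(resources, ["detailed"]))
```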

Moving on

I still believe that, in principle, this is an excellent approach, and I have been a little disappointed that more people have not taken up the idea and improved on it. The big and, at the time, insurmountable obstacles that I hit were 1) that it demands a lot of its users, who must provide both tags and resources with little obvious personal benefit, so it is unlikely to get a lot of use, 2) that the cold-start problem that affects most collaborative filters (they rely on many users to be useful, but no one will use them until they are useful) is magnified exponentially by every one of those n dimensions, so it demands a very large number of users, and 3) that it is fiendishly hard to represent the complex ecological niches effectively in an interface, making the cognitive load unusably high. Google seems to have made good progress on the last point (an evolution enabled by improved web standards and browsers, combined with a simplification of the process, which together reduce the cognitive load by a sizeable amount), and it has more than sufficient numbers of users to cope with the first and second points, at least with regard to movie recommendations. It is hard to see how this would work in an educational setting in anything less than the largest of MOOCs or the most passionately focused of user bases. However, I would love to see Google extend this mechanism to OERs, courses, and other educational resources, from Quora answers to Khan Academy tutorials, because they do have the numbers, and it would work well. For the same reasons, it would also be great to see it applied to something like StackExchange or similar large-scale systems (Reddit, perhaps) where people go to seek solutions to learning problems. I doubt that I will build a new version of CoFIND as such, but the ideas behind it should live on, I think, and it’s great to see them back on a system as big as Google Search, even if it is still experimental and, so far, only used to recommend movies.

Power, responsibility, maps and plans: some lessons from being a Chair

Empty chair

I’ve reached the end of my first week of not Chairing the School of Computing & Information Systems here at Athabasca University, which is now in the capable hands of the very wonderful Ali Dewan.

Along with quite a few people that I know, I am amazed that I stuck it out for over 3 years. I was a most reluctant Chair in the first place, because I’d been in middle management roles before and knew much of what to expect. It’s really not my kind of thing at all. Ideologically and temperamentally I loathe hierarchies but, if I have to be in one at all, I’d rather be at the top or at the bottom than in the middle. However, with the help of some cajoling, I eventually convinced myself that being a Chair is essentially much the same as being a teacher, which is an activity that I both enjoy and can mostly do reasonably well. Like a teacher (at least one that does the job well), the job of a Chair is to help nurture a learning community, and to make it possible for those in that community to achieve what they most want to achieve with as few obstacles as possible. Like teaching, it is not at all about telling, but about listening, supporting, and helping others to orchestrate the process for themselves; not so much about leadership as followership, about being a supportive friend. It’s a bit about nudging and inspiring, too, and about sharing the excitement of discovery and growth with other people. It’s a bit about challenging people to be who they want to be, collectively and individually. It’s a bit about solving problems, a bit about being a shoulder to cry on, a bit about being a punchbag for those needing to let off steam, an arbiter in disputes. It could be fun. And I could always give it up after a few months if it didn’t work out. That was what I convinced myself.

On the bright side, I don’t think that I broke anything vital. I did help a couple of good things to happen, and I think that most of my staff were reasonably happy and empowered, a few of them more than before. One or two were probably less happy. But, in the grand scheme of it all, I left things much the same as or a little better than I found them, despite often strenuous efforts to bring about far more exciting changes. My tenure as Chair was, on the whole, not great, but not terrible. I have been wondering a bit about why that happened, and what I could or should have done differently, which is what the next part of this post is about.

Authority vs influence, responsibility vs power

One of my most notable discoveries (more accurately, rediscoveries) is that authority and responsibility barely, if at all, correlate with power and influence. In fact, for a middle management role like this, the precise inverse is true. One of the strange paradoxes of being in a position of more responsibility and authority has been that, in many ways, I feel that I’ve actually had considerably less capacity to bring about change, or to control my own life, than I had as a plain old professor.  It’s just possible that I may have overused the joke about a Chair being the one everyone gets to sit on, but it resonated with me. And this is not to contradict Uncle Ben’s sage advice to Spiderman – it may be true that with great power comes great responsibility, but that doesn’t mean that with great responsibility comes great power.

Partly the problem was just the myriad small but draining tasks that had to be dealt with throughout the course of a typical day (most of them insufferably tedious, mostly mindless bureaucratic chores that anyone else could have done at least as well), as well as having to attend many more meetings, and to engage in a few much lengthier tasks like workload planning. It wore me down. I put a lot of things that were important to me, but that didn’t contribute to my role, to one side because there were too few chunks of uninterrupted time to do them. Blogging and sharing on social media, for instance.

Partly it was because I felt that my role was primarily to support those that reported to me – I had to do their bidding much more than they had to do mine. Instead of doing what I would intrinsically wish to do, much of the time I was trying to do what those that I supervised required of me. This was not just a result of my own views on leadership. I think a lot of it would have affected most people in the same position.

Partly it was because I often felt (with a little external reinforcement) that I must shut up and/or toe the line because I represented the School or the Dean or the University. Being the ‘face’ of the school meant that I often felt obliged to try to represent the opinions and demands of others, even when I disagreed with them. Often, I had to present a collective agenda, or that of an individual higher up the foodchain, rather than my own, whether or not I found it dull, mistaken, or pointless. Also, being a Chair puts you in some sensitive situations where a wrong step can easily lead to litigation, grievance proceedings, or (worse) very unhappy people. I’m not naturally tactful or taciturn, to say the least, so this was tricky at times. I sometimes stayed quiet when I might otherwise have spoken out.

The upshot of it is that, as a Chair, I was directly responsible both to my Dean and to the people I supervised (not to mention more or less directly to students, visitors, admins, tech staff, VPAs, etc, etc), and I consequently felt that I had very little control over my own life at all. Admittedly it was at least partly due to my very intentional approach to the role, but I think similar issues would emerge no matter what leadership style I had adopted. There’s a surprising amount of liberty in being at the bottom of a hierarchy, at least when (like all academics) you are expected – nay, actually required – to be creative, self-starting, and largely autonomous in your work. Academic freedom is a wonderful thing, and some of it is subdued when you move a little way up the scale.

Some compensations 

There have been plentiful compensations, of course. I wouldn’t have stayed this long if it had been uniformly awful. Being a Chair made some connections easier to make, within and beyond the university, and it helped me get to know my colleagues a lot better. And I have some great colleagues: it would have been much harder to manage had I not had such a friendly, supportive, smart, creative, willing, and capable team to work with. I solved, or at least made fair progress on, a few problems, none huge but all annoying, and helped to lay the groundwork for some ongoing improvements. There were opportunities for creativity here and there. I will miss some of the ways I could help shape our values and systems simply thanks to being a Chair, rather than having to actually work at it. I’ll miss being the default person people came to with interesting ideas. I’ll miss the very small but not trivial stipend. I’ll miss being involved by default in most decisions that affect the school. I’ll miss the kudos. I’ll miss being a formal hub in a network, albeit a small one.

Not quite like teaching

In most ways I was right about the job being much like teaching. Most of the skills, techniques, goals, and patterns are very similar, but there’s one big difference that I had not thought enough about. On the whole, most actual teachers engage with learners over a fairly fixed period, or at least for a fixed project, and there is a clear beginning, middle, and end, with well defined rituals, rules, and processes to mark their passage. This is even true to an extent of more open forms of teaching like apprenticeship and mentorship. Although this in some ways relates to any kind of project, the fact that people, working together in a social group, are both the focus and the object of change, makes it fairly distinctive. I can’t think of many other human activities that are particularly similar to teaching in this regard, apart from perhaps some team sports or, especially, performing arts.

To be a teacher without a specific purpose in mind is a surprisingly different kind of activity, like producing an improvised play that has no script, no plot, no beginning, and no end. Although a teacher is responsible to their students, much as I was responsible to my staff, the responsibility is tightly delimited in time and in scope, so it remains quite manageable, for the most part. In retrospect, I think I should have planned it better. I probably should have set more distinct goals, milestones, tasks, sub-projects, etc. I should have planned for a very clear and intentional end, and set much firmer boundaries. It would not have been easy, though, as many goals emerged over the years, a lot changed when we got our new (and much upgraded) administration, and a lot depended on serendipity and opportunism. I had, at first, no idea how long I would stick with the role. Until quite some time into it, I had only a limited idea about what changes I might even be allowed to accomplish (not much, as it happens, with no budget, a freeze on course development, diminishing staff numbers, a need to fit faculty plans, etc). It might have been difficult to plan too far ahead, though it would have been really useful to have had a map showing the directions we might have gone and the limits of the territory. I think there may be useful lessons to be learned from this about support for self-directed lifelong learning.

Lessons for learning and teaching

A curse of institutional learning can be the many scales of rigid structure it imposes, which too often take agency away from learners and limit support for diversity. However, a good map of the journey ahead also supports an individual learner’s agency, even if all that they are given is the equivalent of a bus route, showing only the fixed paths their learning will take. I have long grappled with the tensions and trade-offs between surfing the adjacent possible and following a planned learning path. I spent a lot of time in the late 1990s and early 2000s designing online systems that leveraged the crowd to allow learners to help one another to learn, but most of them only helped with finding what to do next, or with solving a current problem, not with charting a whole journey. Figuring out an effective way to plan ahead without sacrificing learner control was one of the big outstanding research problems left to be solved when I finished my PhD (in self-organized learning in networks) very many moons ago, and it still is.

There are lots of ineffective ways that I and others have tried, of course. Obvious approaches like matching paths through collaborative filtering or similar techniques are a dead end: there are way too many extraneous variables to confound them, and way too much variation in start and end points to cater for effectively, even if you start with a huge dataset. This is not to mention the blind-leading-the-blind issues, the fact that learning changes people so past activity poorly predicts future behaviour, and the fact that there is often a narrative context that assumes specific prior activities have occurred and known future activities will follow. Using ontologies is even worse, because the knowledge map of a subject developed by subject experts is seldom if ever the best map for learning, and may be among the worst.

The most promising approaches I have seen, and that I had a doctoral student working on myself until he had to give up in the mid 2000s, mine the plans of many experts (e.g. by looking at syllabuses) to identify common paths and branches for a particular subject, combining them with whatever other information can be gleaned to come up with a good direction for a specific learner and learning need. However, there are plenty of issues with that, too, not least of which is the fact that institutional teaching assumes a very distinctive context and suffers from a great many constraints (from having to be squashed into a standardized length to fitting preferred teaching patterns and schedules) that learners unhindered by such arbitrary concerns would neither want nor need. Many syllabuses are actually thoughtlessly copied from the same templates (e.g. from a professional association model syllabus), or textbooks, and may be awful in the same ways. And, again, narrative matters. If you took a chunk out of one of my courses and inserted it somewhere else, it would often change its meaning and value utterly.
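To make the syllabus-mining idea concrete, here is a toy sketch: given several syllabuses treated as ordered topic lists, count how often each topic precedes each other topic and derive a crude consensus ordering. A real system would need topic matching, branching, and context, none of which this shows; all names and data are invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

# Three invented syllabuses for an introductory programming subject.
syllabuses = [
    ["variables", "conditionals", "loops", "functions", "recursion"],
    ["variables", "loops", "conditionals", "functions", "objects"],
    ["variables", "conditionals", "functions", "loops", "recursion"],
]

# Count how often topic a appears before topic b across all syllabuses.
precedes = defaultdict(int)
for syllabus in syllabuses:
    for a, b in combinations(syllabus, 2):
        precedes[(a, b)] += 1

# Rank topics by how many others they tend to precede: a crude consensus path.
topics = {t for s in syllabuses for t in s}
consensus = sorted(
    topics,
    key=lambda t: -sum(n for (a, _), n in precedes.items() if a == t),
)
print(consensus)  # ['variables', 'conditionals', 'loops', 'functions', ...]
```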

This is a problem I would dearly love to solve. Though I stand by my teaching approaches, one of the biggest perennial complaints about the tools and methods I tend to use is that it is easy to feel lost, especially if the helping hands of others are not around when needed. There are always at least a few students who would, as a matter of principle, rather be told what to do, how to do it, and where to go next. The majority would prefer to work in an environment that avoids the need for unnecessary decisions, such as where to upload a file, that have little to do with what they are trying to learn. My role (and that of my tutors, and the design of my courses) is to help them through all that, to relieve them of their dependency on being told what to do, and to help them at least understand why things are done the way they are done. However, that can result in quite inconsistent experiences if I or the tutors drop the ball for a moment. It can be hard for people who have been taught, often over decades, that teaching is telling, and that learning can reliably be accomplished by following a set of teacher-determined steps, to be set adrift to figure things out in their own ways.

It is made far worse by the looming threat of grades that, though eliminated in my teaching itself, still lie in wait at the end of the path as extrinsic targets. Students often find it hard to know in advance how they will meet the criteria, or even whether they have met them when they reach the end. I can and do tell them all of this, of course, usually repeatedly and in many ways and using many media, but the fact that at least some remain puzzled just proves the point: teaching is not telling. Again, a lot of manual social intervention is necessary. But that leads to the issue that following one of my courses demands a big leap of faith (mainly in me) that it will turn out OK in the end. It usually takes effort and time to build such trust, which is costly for all concerned, and is easily lost with a careless word or a missed message.  It would be really useful for my students to have a better map that allows them to plan detours and take more alternative transit options for themselves, especially with overlays to show recommended routes, warnings of steep hills and traffic, and real-time information about the whereabouts of people on their network and points of interest along the way. It would, of course, also be really handy to have a big ‘you are here’ label.  I would have really liked such a map when I started out as Chair.

Moving on

Leaving the Chair role behind still feels a little like stepping off a boat after a rough voyage, and either the land or my legs feel weird, I’m not sure which. As my balance returns, I am much looking forward to catching up with things I put to one side over the past 3 years. I’m happy to be getting back to doing more of what I do best, and I hope to be once more sharing more of my discoveries and cogitations in posts like this. It’s easier to move around with your feet on the ground than when you are sitting on a chair.


Strategies for successful learning at AU

Earlier today I responded to a prospective student who was, amongst other things, seeking advice on strategies for success on a couple of our self-paced programming courses. My response was just a stream of consciousness off the top of my head but I think it might be useful to others. Here, then, with some very light editing to remove references to specific courses, are a few fairly random thoughts on how to succeed on a self-paced online programming course (and, for the most part, other courses) at Athabasca University. In no particular order:

  • Try to make sure that people close to you know what you are doing and, ideally, are supportive. Other people can really help, not just for the mechanical stuff but for the emotional support. Online learning, especially the self-paced form we use, can feel a bit isolating at times, but there are lots of ways to close the gap and they aren’t all found in the course materials and processes. Find support wherever you can.
  • Make a schedule and try to keep to it, but don’t blame yourself if your deadlines slip a bit here and there – just adjust the plan. The really important thing is that you should feel in control of the process. Having such control is one of the huge benefits of our way of teaching, but you need to take ownership of the process yourself in order to experience the benefits.
  • If the course provides forums or other social engagement try to proactively engage in them. Again, other people really help.
  • You will have way more freedom than those in traditional classrooms, who have to follow a teacher simply because of the nature of physics. However, that freedom is a two-edged sword as you can sometimes be swamped with choices and not know which way to go. If you are unsure, don’t be afraid to ask for help. But do take advantage of the freedom. Set your own goals. Look for the things that excite you and explore further. Take breaks if you are getting tired. Play. Take control of the learning process and enjoy the ride.
  • Enjoy the challenges. Sometimes it will be hard, and you should expect that, especially in programming courses like these. Programming can be very frustrating at times – after 35 years of programming I can still spend days on a problem that turns out to involve a misplaced semi-colon! Accept that, and accept that even the most intractable problems will eventually be solved (and it is a wonderful feeling when you do finally get it to work). Make time to sleep on it. If you’re stuck, ask for help.
  • Get your work/life/learning balance right. Be realistic in your aspirations and expect to spend many hours a week on this, but make sure you make time to get away from it.
  • Keep a learning journal, a reflective diary of what you have done and how you have addressed the struggles, even if the course itself doesn’t ask for one. There are few more effective ways to consolidate and connect your learning than to reflect on it, and it can help to mark your progress: good to read when your motivation is flagging.
  • Get used to waiting for responses and find other things to learn in the meantime. Don’t stop learning because you are waiting – move on to something else, practice something you have already done, or reflect on what you have been doing so far.
  • Programming is a performance skill that demands constant and repeated practice. You just need to do it, get it wrong, do it again, and again, and again, until it feels like second nature. In many ways it is like learning a musical instrument or maybe even driving. It’s not something you can learn simply by reading or by being told, you really have to immerse yourself in doing it. Make up your own challenges if you run out of things to do.
  • Don’t just limit yourself to what we provide. Find forums and communities with appropriate interests. I am a big fan of StackOverflow.com for help and inspiration from others, though relevant subreddits can be useful and there are many other sites and systems dedicated to programming. Find one or two that make sense to you. Again, other people can really help.

Online learning can be great fun as long as you are aware of the big differences, primarily relating to control and personal agency. Our role is to provide a bit of structure and a supportive environment to enable you to learn, rather than to tell you stuff and make you do things, which can be disconcerting at first if you are used to traditional classroom learning. This puts more pressure on you, and more onus on you to organize and manage your own learning, but don’t ever forget that you are not ever really alone – we are here to help.

In summary, I think it really comes down to three big things, all of which are really about motivation, and all of which are quite different when learning online compared to face-to-face:

  1. Autonomy – you are in control, but you must take responsibility for your own learning. You can always delegate control to us (or others) when the going gets hard or choices are hard to make, but you are always free to take it back again, and there will be no one standing over you making you do stuff apart from yourself.
  2. Competence – there are few things more satisfying than being able to do more today than you could do yesterday. We provide some challenges and we try to keep them difficult-but-achievable at every stage along the way, but it is a great idea for you to also seek your own challenges, to play, to explore, to discover, especially if the challenges we offer are too difficult or too boring. Reflection can help a lot with this, as a means to recognize what, how, and why you have learned.
  3. Relatedness – never forget the importance of other people. You don’t have to interact with them if you don’t want to do so (that’s another freedom we offer), but it is at the very least helpful to think about how you belong in our community, your own community, and the broader community of learners and programmers, and how what you are learning, and the way you are learning it, can affect others (directly or indirectly).

This advice is by no means comprehensive! If you have other ideas or advice, or things that have worked for you, or things that you disagree with, do feel free to share them in the comments.

SCIS makes a great showing at HCI 2017, Vancouver


Ali Dewan presenting at HCI 2017

I had the pleasure to gatecrash the HCI 2017 conference in Vancouver today, which gave me the chance to see Dr Ali Dewan present three excellent papers in a row (two with his name on them) on a variety of themes, as well as a great paper written and presented by one of our students, Miao-Han Chang.

Miao-Han Chang presenting

Both did superb jobs of presenting to a receptive crowd. Ali got particular acclaim from the audience for the first work he presented (Combinatorial Auction based Mechanism Design for Course Offering Determination, by Anton Vassiliev, Fuhua Lin & M. Ali Akber Dewan) for its broad applicability in many areas beyond scheduling courses.

Athabasca, and especially the School of Computing and Information Systems, has made a great showing at this prestigious conference, with contributions not just from Ali and Miao-Han, but also from Oscar (Fuhua) Lin, Dunwei Wen, Maiga Chang and Vive Kumar. Kurt Reifferscheid and Xiaokun Zhang also had a paper in the proceedings but were sadly not able to attend to present it.


Jon Dron and Ali Dewan at HCI 2017

Jon and Ali at the Vancouver Conference Centre after Ali’s marathon presentation stint. I detect a look of relief on Ali’s face!


Ali Dewan presenting

Papers

  • Combinatorial Auction based Mechanism Design for Course Offering Determination
    Anton Vassiliev, Fuhua Lin, M. Ali Akber Dewan, Athabasca University, Canada
  • Enhance the Use of Medical Wearables through Meaningful Data Analytics
    Kurt Reifferscheid, Xiaokun Zhang, Athabasca University, Canada
  • Classification of Artery and Vein in Retinal Fundus Images Based on the Context-Dependent Features
    Yang Yan, Changchun Normal University, P.R. China; Dunwei Wen, M. Ali Akber Dewan, Athabasca University, Canada; Wen-Bo Huang, Changchun Normal University, P.R. China
  • ECG Identification Based on PCA-RPROP
    Jinrun Yu, Yujuan Si, Xin Liu, Jilin University, P.R. China; Dunwei Wen, Athabasca University, Canada; Tengfei Luo, Jilin University, P.R. China; Liuqi Lang, Zhuhai College of Jilin University, P.R. China
  • Usability Evaluation Plan for Online Annotation and Student Clustering System – A Tunisian University Case
    Miao-Han Chang, Athabasca University, Canada; Rita Kuo, New Mexico Institute of Mining and Technology, United States; Fathi Essalmi, University of Kairouan, Tunisia; Maiga Chang, Vive Kumar, Athabasca University, Canada; Hsu-Yang Kung, National Pingtung University of Science and Technology, Taiwan

Our educational assessment systems are designed to create losers

The always wonderful Alfie Kohn describes an airline survey that sought to find out how the airline compared with others, which he chose not to answer because it signalled that the airline had no interest in providing the best quality experience possible, just in doing enough to beat the competition. The thrust of his article is that much the same is true of standardized tests in schools. As Kohn rightly observes, the central purpose of testing as it tends to be used in schools and beyond is not to evaluate successful learning but to compare students (or teachers, or institutions, or regions) with one another in order to identify winners and losers.

‘When you think about it, all standardized tests — not just those that are norm-referenced — are based on this compulsion to compare. If we were interested in educational excellence, we could use authentic forms of assessment that are based on students’ performance at a variety of classroom projects over time. The only reason to standardize the process, to give all kids the same questions under the same conditions on a contrived, one-shot, high-stakes test, is if what we wanted to know wasn’t “How well are they learning?” but “Who’s beating whom?”’

It’s a good point, but I think it is not just an issue with standardized tests. The problem occurs with all the summative assessments (the judgments) we use. Our educational assessment systems are designed to create losers as much as they are made to find winners. Whether they follow the heinous practice of norm-referencing or not, they are sorting machines, built to discover competent people and to discard the incompetent. In fact, as Kohn notes, when there are too many winners we are accused of grade inflation or a dropping of standards.

Wrong Way sign

This makes no sense if you believe, as I do, that the purpose of education is to educate. In a system that demands grading, unless 100% of students that want to succeed get the best possible grades, we have failed to make the grade ourselves. The problem, though, is not so much the judgments themselves as the intimate, inextricable binding of judgmental processes with learning processes. Given enough time, effort, and effective teaching, almost anyone can achieve pretty much any skill or competence, as long as they stick at it. We have very deliberately built a system that does not aim for that at all. Instead, it aims to sort wheat from chaff. That’s not why I do the job I do, and I hope it is not why you do it either, but that’s exactly what the system is made to do. And yet we (at least I) think of ourselves as educators, not judges. These two roles are utterly separate and irreconcilably inconsistent.

Who needs 100%?

It might be argued that some students don’t actually want to get the best possible grades. True. And sure, we don’t always want or need to learn everything we could learn. If I am learning how to use a new device or musical instrument I sometimes read/watch enough to get me started and do not go any further, or skim through to get the general gist. Going for a less-than-perfect understanding is absolutely fine if that’s all you need right now. But that’s not quite how it works in formal education, in part because we punish those that make such choices (by giving lower grades) and in part because we systematically force students to learn stuff they neither want nor need to learn, at a time that we choose, using the lure of the big prizes at the end to coax them. Even those that actually do want or need to learn a topic must stick with it to the bitter end regardless of whether it is useful to do the whole thing, regardless of whether they need more or less of it, regardless of whether it is the right time to learn it, regardless of whether it is the right way for them to learn it. They must do all that we say they must do, or we won’t give them the gold star. That’s not even a good way to train a dog.

It gets worse. At least dogs normally get a second chance. Having set the bar, we normally give just a single chance at winning or, at best, an option to be re-tested (often at a price and usually only once), rather than doing the human thing of allowing people to take the time they need and learn from their mistakes until they get as good as they want or need to get. We could learn a thing or two from computer games –  the ability to repeat over and over, achieving small wins all along the way without huge penalties for losing, is a powerful way to gain competence and sustain motivation. It is better if students have some control over the pacing but, even at Athabasca, an aggressively open university that does its best to give everyone all the opportunity they need to succeed, where self-paced learners can choose the point at which they are ready to take the assessments, we still have strict cut-offs for contract periods and, like all the rest, we still tend to allow just a single stab at each assessment. In most of my own self-paced courses (and in some others) we try to soften that by allowing students to iterate without penalty until the end but, when that end comes, that’s still it. This is not for the benefit of the students: this is for our convenience. Yes, there is a cost to giving greater freedom – it takes time, effort, and compassion – but that’s a business problem to solve, not an insuperable barrier. WGU’s subscription model, for instance, in which students pay for an all-you-can-eat smorgasbord, appears to work pretty well.

Meta lessons

It might be argued that there are other important lessons that we teach when we competitively grade. Some might suggest that competition is a good thing to learn in and of itself, because it is one of the things that drives society and everyone has to do it at least sometimes. Sure, but cooperation and mutual support is usually better, or at least an essential counterpart, so embedding competition as the one and only modality seems a bit limiting. And, if we are serious about teaching people about how to compete, then that is what we should do, and not actively put them in jeopardy to achieve that: as Jerome Bruner succinctly put it, ‘Learning something with the aid of an instructor should, if instruction is effective, be less dangerous or risky or painful than learning on one’s own’ (Bruner 1966, p.44).

Others might claim that sticking with something you don’t like doing is a necessary lesson if people are to play a suitably humble/productive role in society. Such lessons have a place, I kind-of agree. Just not a central place, just not a pervasive place that underpins or, worse, displaces everything else. Yes, grit can be really useful, if you are pursuing your goals or helping others to reach theirs. By all means, let’s teach that, let’s nurture that, and by all means let’s do what we can to help students see how learning something we are teaching can help them to reach their goals, even though it might be difficult or unpleasant right now. But there’s a big difference between doing something for self or others, and subservient compliance with someone else’s demands. ‘Grit’ does not have to be synonymous with ‘taking orders’. Doing something distasteful because we feel we must, because it aligns with our sense of self-worth, because it will help those we care about, because it will lead us where we want to be, is all good. Doing something because someone else is making us do it (with the threat/reward of grades) might turn us into good soldiers, might generate a subservient workforce in a factory or coal face, might keep an unruly subjugated populace in check, but it’s not the kind of attitude that is going to be helpful if we want to nurture creative, caring, useful members of 21st Century society.

Societal roles

It might be argued that accreditation serves a powerful societal function, ranking and categorizing people in ways that (at least for the winners and for consumers of graduates) have some value. It’s a broken and heartless system, but our societies do tend to be organized around it and it would be quite disruptive if we got rid of it without finding some replacement. Without it, employers might actually need to look at evidence of what people have done, for instance, rather than speedily weeding out those with insufficient grades. Moreover, circularly enough, most of our students currently want and expect it because it’s how things are done in our culture. Even I, a critic of the system, proudly wear the label ‘Doctor’, because it confers status and signals particular kinds of achievement, and there is no doubt that it and other qualifications have been really quite useful in my career. If that were all accreditation did then I could quite happily live with it, even though the fact that I spent a few years researching something interesting about 15 years ago probably has relatively little bearing on what I do or can do now.  The problem is not accreditation in itself, but that it is inextricably bound to the learning process. Under such conditions, educational assessment systems are positively harmful to learning. They are anti-educative. Of necessity, due to the fact that they tend to determine precisely what students should do and how they should do it, they sap intrinsic motivation and undermine love of learning. Even the staunchest of defenders of tightly integrated learning and judgment would presumably accept that learning is at least as important as grading so, if grading undermines learning (and it quite unequivocally does), something is badly broken.

A simple solution?

It does not have to be this way. I’ve said it before but it bears repeating: at least a large part of the solution is to decouple learning and accreditation altogether. There is a need for some means to indicate prowess, sure. But the crude certificates we currently use may not be the best way to do that in all cases, and it doesn’t have to dominate the learning process to the point of killing love of learning. If we could drop the accreditation role during the teaching process we could focus much more on providing useful feedback, on valorizing failures as useful steps towards success, on making interesting diversions, on tailoring the learning experience to the learner’s interests and capabilities rather than to credential requirements, on providing learning experiences that are long enough and detailed enough for the students’ needs, rather than a uniform set of fixed lengths to suit our bureaucracies.

Equally, we could improve our ability to provide credentials. For those that need it, we could still offer plenty of accreditation opportunities, for example through a portfolio-based approach and/or collecting records of learning or badges along the way. We could even allow for some kind of testing – oral, written, or practical exams – for those that require it, where it is appropriate to the competence (not, as now, as a matter of course), and we could actually do it right, rather than in ways that positively enable and reward cheating. None of this has to be bound to specific courses. This decoupling would also give students the freedom to choose other ways of learning apart from our own courses, which would be quite a strong incentive for us to concentrate on teaching well. It might challenge us to come up with authentic forms of assessment that allow students to demonstrate competence through practice, or to use evidence from multiple sources, or to show their particular and unique skillset. It would almost certainly let us do both accreditation and teaching better. And it’s not as though we have no models to work from: from driving tests to diving tests to uses of portfolios in job interviews, there are plenty of examples of ways this can work already.

Apart from some increased complexity in managing such a system (which is where online tools can come in handy, and where opportunities exist for online institutions with which conventional face-to-face institutions cannot compete), this is not a million miles removed from what we do now: it doesn’t require a revolution, just a simple shift in emphasis, and a separation of two mutually inconsistent roles that are unnecessarily intertwined. Especially when processes and tools already exist for that, as they do at Athabasca University, it would not even be particularly costly. Inertia would be a bigger problem than anything else, but even big ships can eventually be steered in other directions. We just have to choose to make it so.


Reference

Bruner, J. S. (1966). Toward a Theory of Instruction. Cambridge MA: The Belknap Press of Harvard University Press.

Athabasca’s bright future

Tony Bates

The always excellent Tony Bates provides a very clear summary of Ken Coates’s Independent Third-Party Review of Athabasca University, released a week or two ago, and, as usual, offers a great critical commentary as well as some useful advice on next steps.

Tony rightly points out that our problems are more internal than external, and that the solutions have to come from us, not from outside. To a large extent he hits the nail right on the head when he notes:

Major changes in course design, educational technology, student support and administration, marketing and PR are urgently needed to bring AU into advanced 21st century practice in online and distance learning. I fear that while there are visionary faculty and staff at AU who understand this, there is still too much resistance from traditionalists and those who see change as undermining academic excellence or threatening their comfort zone.

It is hard to disagree. But, though there are too many ostriches among our staff and we do have some major cultural impediments to overcome, it is far less the people that impede our progress than our design itself, and the technologies – especially the management technologies – of which it consists. That must change, as a corequisite to changing the culture that goes along with it. With some very important exceptions (more on that below), our culture is almost entirely mediated through our organizational and digital technologies, most notably in the form of very rigid processes, procedures, and rules, but also through our IT. Our IT should, but increasingly does not, embody those processes. The processes still exist, of course – it’s just that people have to perform them instead of machines. Increasingly often, to make matters worse, we shape our processes to our ill-fitting IT rather than vice versa, because the ‘technological debt’ of adapting it to our needs, and therefore having to maintain it ourselves, is considered too great (a rookie systems error caused by splitting IT into a semi-autonomous unit that has to slash its own costs without considering the far greater price paid by the university at large). Communication, when it occurs, is almost all explicit and instrumental. We do not yet have enough of the tacit flows of knowledge and easy communication that patch over or fix the (almost always far greater) flaws that exist in such processes in traditional bricks-and-mortar institutions. The continual partial attention and focused channels of communication that result from working online mean that we struggle with tacit knowledge and the flexibility of embedded dialogue in ways that old-fashioned universities never have to even think about.

One of the big problems with being so process-driven is that, especially in the absence of richer tacit communication, it is really hard to change those processes, especially because they have evolved to be deeply entangled with one another: changing one process almost always means changing many, often in structurally separate parts of the institutional machine, and involves processes of its own that are often entangled with those we set out to change. As a result, for much of its operation, our university does what it does despite us, not because of us. Unlike traditional universities, we have nothing else to fall back on when it fails, or when things fall between cracks. And, though we likely have far fewer than most traditional universities, there are still very many cracks to fall through.

This, not coincidentally, is exactly true of our teaching too. We are pretty darn good at doing what we explicitly intend to do: our students achieve learning outcomes very well, according to the measures we use. AU is a machine that teaches, which is fine until we want the machine to do more than it is built to do, or until other, faster, lighter, cheaper machines begin to compete with it. As well as making it really hard to make even small changes to teaching, what gets lost – and what matters about as much as what we intentionally teach – is the stuff we do not intend to teach, the stuff that makes up the bulk of the learning experience in traditional universities, the stuff where students learn to be, not just to do. It’s whole-person learning. In distance and online learning, we tend to concentrate on the parts we can measure, and we are seldom even aware of the rest. There is a hard and rigid boundary between the directed, instrumental processes and the soft, invisible patterns of culture and belonging, beyond which we rarely cross. This absence is largely what gives distance learning a bad reputation, though it can be a strength when focused teaching of something well-defined is exactly what is needed, when students are able to make the bigger connections in other ways (true of many of our successful students), when the control that the teaching method provides is worth all the losses, and where a more immersive experience might actually get in the way. But it’s a boundary that alienates a majority of current and prospective students. A large percentage of even those we manage to enrol and keep with us would like to feel more connected, more a part of a community, more engaged, more belonging. A great many more don’t even join us in the first place because of that perceived lack, and a very large number drop out before submitting a single piece of work as a direct result.

This is precisely the boundary that the Landing is intended to be a step towards breaking down.

https://landing.athabascau.ca/file/view/410777/video-decreasing-the-distance

If we cannot figure out how to recover that tacit dimension, there is little chance that we can figure out how to teach at a distance in a way that differentiates us from the crowd and that draws people to us for the experience, rather than for the qualification. Not quite fair: some of us will. If you get the right (deeply engaged) tutor, or join the right (social and/or open) course, or join the Landing, or participate in local meet-ups, or join other social media groups, you may get a fair bit of the tacit, serendipitous, incidental learning and knowledge construction that typifies a traditional education. Plenty of students do have wonderful experiences learning with others at AU, be it with their tutors or with other students. We often see those ones at convocation – ones for whom the experience has been deep, meaningful, and connected. But, for many of our students, and especially the ones that don’t make it to graduation (or even to the first assignment), the chances of feeling that you belong to something bigger, of learning from others around you, of being part of a richer university experience, are fairly low. Every one of our students needs to be very self-directed, compared with those in traditional institutions – that’s a sine qua non of working online – but too many get insufficient support and too little inspiration from those around them to rise beyond that or to get through the difficult parts. This is not too surprising, given that we cannot do it for ourselves either. When faced with complicated things demanding close engagement, too many of our staff fall back on the comfortable, easy solution of meeting face to face in one of our various centres rather than taking the hard way, and so the system remains broken. This can and will change.

Moving on

I am much heartened by the Coates report which, amongst other things but most prominently and as our central value proposition, puts our leadership in online and distance education at the centre of everything. This is what I have unceasingly believed we should do since the moment I arrived. The call to action of Coates’s report is fundamentally to change our rigid dynamic, to be bold, to innovate without barriers, to evolve, to make use of our astonishingly good resources – primarily our people – to (again) lead the online learning world. As a virtual institution this should be easier for us than it would be for others but, perversely, it is exactly the opposite. This is for the aforesaid reasons, and also because the boundaries of our IT systems create the boundaries of our thinking, and embed processes more deeply and more inflexibly than almost any bricks-and-mortar establishment could hope to do. We need soft systems, fuzzy systems, adaptable systems, agile systems for our teaching, research, and learning community development, and we need hard systems, automated systems, custom-tailored, rock-solid systems for our business processes, including the administrative and assessment-recording outputs of the teaching process. This is precisely the antithesis of what we have now. As Coates puts it:

“AU should rebrand itself as the leading Canadian centre for online learning and twenty-first century educational technology. AU has a distinct and potentially insurmountable advantage. The university has the education technology professionals needed to provide leadership, the global reputation needed to attract and hold attention, and the faculty and staff ready to experiment with and test new ideas in an area of emerging national priority. There is a critical challenge, however. AU currently lacks the ICT model and facilities to rise to this opportunity.”

We live in our IT…

We have long been challenged by our IT systems, but things were not always so bad. Over the past few years, our ICT model has turned 180 degrees, heading in the exact opposite direction to one that would support continuing evolution and innovation, driven by people that know little about our core mission and that have failed to understand what makes us special as a university. The best defence offered for these poor decisions is usually that ‘most other universities are doing it’, but we are not most other universities. ICTs are not just support tools or performance enhancers for us. We are our IT. It is our one and only face to our students and the world. Without IT, we are literally nothing. We have massively underinvested in developing our IT, and what we have done in recent years has destroyed our lead, our agility, and our morale. Increasingly, we have rented generic, closed, off-the-shelf cloud-based applications that would be pretty awful even in a factory, that force us into behaviours that make no sense, that sap our time and will, and that are so deeply inappropriate for our very unique distributed community that they stifle all progress and cut off almost all avenues of innovation in the one area in which we are best placed to innovate and lead. We have automated things that should not be automated and let fall into disrepair the things that actually give us an edge. For instance, we rent an absurdly poor CRM system to manage student interactions, building a call centre for customers when we should be building relationships with students, embedding our least savoury practices of content delivery still further, making tweaks to a method of teaching that should have died when we stopped using the postal service for course packs. Yes, when it works, it incrementally improves a broken system, so it looks OK (not great) on reports, but the system it enhances is still irrevocably broken and, by further tying it to a hard embodiment in an ill-fitting application, we further diminish the chances of fixing it properly. And, of course, it doesn’t work, because we have rented an ill-fitting system designed for other things, with little or no consideration of whether it meets more than coarse functional needs. This can and must change.

Meanwhile, we have methodically starved the environments that were designed for us, through which we have innovated in the past, and that could allow us to evolve. Astonishingly, we have had no (as in zero) central IT support for research for years now, getting by on a wing and a prayer, grabbing bits of overtime where we can, or using scarce, poorly integrated departmental resources. Even very well-funded and well-staffed projects are stifled by this, because almost all of our learning technology innovations are completely reliant on access, not only to central services (class lists, user logins, LMS integration, etc) but also to the staff who are able to perform integrations, manage servers, install software, configure firewalls, etc, etc. We have had a 95%-complete upgrade for the Landing sitting in the wings for nearly two years, unable to progress for lack of central IT personnel to implement it, even though we have sufficient funds to pay for them and then some, and even though the Landing is actively used by thousands of people. Even our mainstream teaching tools have been woefully underfunded and undermined: we run a version of Moodle that is past even its security update period, for instance, and that creaks along only thanks to a very small but excellent team supporting it. Tools supporting more innovative teaching with more tenuous uptake, such as Mahara and OpenSIM servers, are virtual orphans, riskily trundling along with considerably less support than even the Landing.

This can and will change.

… but we are based in Athabasca

There are other things in Coates’s report that are given great emphasis, notably advice to increase our open access, particularly through forming more partnerships with Northern Albertan colleges serving Indigenous populations (good – and we will need smarter, more human, more flexible, more inclusive systems for that, too), but mainly a lot of detailed recommendations about staying in Athabasca itself. This latter recommendation seems to have been forced upon Coates, and it comes with many provisos. He is very cognizant of the fact that being based in the remote, run-down town of Athabasca is, has been, and will remain a huge and expensive hobble. He mostly skims over sensitive issues like the difficulty of recruiting good people to the town (a major problem that is only slightly offset by the fact that, once we have got them there, they are quite unlikely to leave), but makes it clear that being there costs us very dearly in myriad other ways:

“… the university significantly underestimates the total cost of maintaining the Athabasca location. References to the costs of the distributed operation, including commitments in the Town of Athabasca, typically focus on direct transportation and facility costs and do not incorporate staff and faculty time. The university does not have a full accounting of the costs associated with their chosen administrative and structural arrangements.”

His suggestions, though making much of the value of staying in Athabasca and heavily emphasizing the importance of its continuing role in the institution, involve moving a lot of people and infrastructure out of it and doing a lot of stuff through web conferencing. He walks a tricky political tightrope, trying to avoid the hot potato of moving away while suggesting ways that we should leave. He is right on both counts.

Short circuits in our communications infrastructure

Though cost, lack of decent ICT infrastructure, and difficulties recruiting good people all play a part in making Athabasca a hobble for us, the biggest problem is, again, structural. For those living and working in the town of Athabasca itself, unlike for those of us working online, all the traditional knowledge flows occur without impediment, almost always to the detriment of more inclusive online forms of communication. Face-to-face dialogue inevitably short-circuits online engagement – always has, always will. People in Athabasca, as any humans would and should, tend to talk among themselves, and tend to communicate with others online only in directed, intentional ways, as the rest of us do. This might not be so bad were it not for the fact that Athabasca is very unrepresentative of the university population as a whole, containing the bulk of our administrators, managers, and technical staff, with fewer than 10 actual faculty in the region. This is a separate subculture, it is not the university, but it has enormous sway over how we evolve. It is not too surprising that our most critical learning systems account for only about 5% of our IT budget: that side of things is barely heard of among the decision-makers and implementers who live there, and they only indirectly have to face the consequences of its failings (a matter made much worse by the way we disempower the tutors who have to deal with those failings most of all, and filter their channels of communication through just a handful of obligated committee members). It is no surprise that channels of communication are weak when those who design and maintain them can easily bypass the problems they cause. In fact, if there were more faculty there, it would be even worse, because then we would never face any of the problems encountered by our students.

Further concentrations of staff in Edmonton (where most faculty reside), St Albert (mainly our business faculty), and Calgary do not help one bit, simply building further enclaves, which again lead to short circuits in communication and isolated, self-reinforcing clusters that distort our perspectives and reduce online communication. Ideas, innovations, and concerns do not spread, because hierarchies isolate them, filter them as they move upwards, and dissipate them in Athabasca. Such clustering could be a good part of the engine that drives adaptation: natural ecosystems diversify thanks to parcellation. That is not how it works here, though, thanks to the aforementioned excess of structure and process, and the fact that those clusters are far from independent. They are subject to the same rules and the same selection pressures as one another, unable to evolve independently because they are rigidly, structurally, and technologically bound to the centre. This is not evolution – it is barely even design, though every part of it has been designed and top-down structures overlay the whole thing. It’s a side effect of many small decisions that, taken as a whole, result in a very flawed system.

This can and must change.

The town of Athabasca and what it means to us

Athabasca high street

Though I have made quite a few day trips to Athabasca over the years, I had never stayed overnight until around convocation time this year. It was a busy few days, so I had only a little chance to explore, but I found it to be a fascinating place that parallels AU in many ways. The impression it gives is of a raw, rather broken-down and depressed little frontier town of around 4,000 souls (a village by some reckonings) and almost as many churches. It was once a thriving staging post on the way to the Klondike gold rush, when it was filled with the rollicking clamour of around 20,000 prospectors dreaming of fortunes. Many just passed through, but quite a few stayed, helping to define some of its current character; when the gold rush died down, though, there was little left to sustain a population. Much of the town still feels a bit temporary, still a bit of a campground waiting to turn into a real town. Like much of Northern Alberta, its fortunes in more recent years have been significantly bound to the oil business – an industry with no viable future and the morals of an errant crow – and tied to its roller-coaster fortunes. There are signs that money has been around, from time to time: a few nice buildings, a bit of landscaping here and there, a memorial podium at Athabasca Landing. But there are bigger signs that it has left.

Athabasca Landing

Today, Athabasca’s bleak main street is filled with condemned buildings, closed businesses, discount stores, and shops with ‘sale’ signs in their windows. There are two somewhat empty town centre pubs, where a karaoke night in one will denude the other of almost all its customers.

There are virtually no transit links to the outside world: one Greyhound bus from Edmonton (2 hours away) comes through it, in the dead of night, and passenger trains stopped running decades ago. The roads leading in and out are dangerous: people die way too often getting there, including one of our most valued colleagues in my own school. It is never too far from being reclaimed by the forces of nature that surround it. Moose, bear, deer, and coyotes wander fairly freely. Minus forty temperatures don’t help, nor does a river that is pushed too hard by meltwaters from the rapidly receding Athabasca Glacier and that is increasingly polluted by the side-effects of oil production.

Athabasca

So far so bleak. But there are some notable upsides too. The town is full of delightfully kind, helpful, down-to-earth people infused with that wonderful Canadian spirit of caring for their neighbours, grittily facing the elements with good cheer, getting up early, eating dinner in the late afternoon, gathering for potlucks in one another’s houses, and organizing community get-togethers. The bulk of the housing is well cared for, set in well-tended gardens, in quiet, neat little streets. I bet most people there know their neighbours and their kids play together. Though tainted by its ties to the oil industry, the town comes across as, fundamentally, a wholesome centre for homesteaders in the region, self-reliant and obstinately surviving against great odds by helping one another and helping themselves. The businesses that thrive are those selling tools, materials, and services to build and maintain your farm and house, along with stores for loading your provisions into your truck to get you through the grim winters. It certainly helps that a large number of residents are employees of the university, providing greater diversity than is typically found in such settlements, but they are frontier folk like the rest. They have to be.

It would be unthinkable to pull the university out at this point – it would utterly destroy an already threatened town and, I think, it would cause great damage to the university. This was clearly at the forefront of Coates’s mind, too. The solution is not to withdraw from this strange place, but to dilute and divert the damage it causes and perhaps, even, to find ways to use its strengths. Greater engagement with Northern communities might be one way to save it – we have some big, largely empty buildings up there that will be getting emptier, and they might not be a bad place for some face-to-face branching out, perhaps semi-autonomously, perhaps in partnership with colleges in the region. The town also has potential as a site for a research retreat, though it is not exactly a Mecca that would draw people to it, especially without transit links to sustain it. Our well-designed research centre cost a fortune to build, though, so it would be nice to get some use out of it.

Perhaps more importantly, we should not pull out because Athabasca is a part of the soul of the institution. It is somehow fitting that Athabasca University has – not without resistance – had its fortunes tied to this town. Athabasca is kind of who we are and, to a large extent, defines who we should aspire to be. As an institution we are, right now, a decaying frontier town on the edge of civilization that was once a thriving metropolis: a caring bunch of individuals bound by a common purpose, forced to help ourselves and one another battle the elements, stuck in a wilderness that cares little for us, our ties with the outside world fickle, costly, and tenuous. Athabasca is certainly a hobble but it is our hobble and, if we want to move on, we need to find ways to make the best of it – to find value in it, to move away from it the people and things that it impedes the most, at least where we can, but to build upon it as a mythic hub that helps to define our identity, a symbolic centre for our thinking. We can and will help ourselves and one another to make it great again. And we have a big advantage that our home town lacks: a renewable and sustainable resource and product. Very much unlike Athabasca the town, the source of our wealth lies entirely in our people, and in the means we have for connecting them. We have the people already: we just need to refocus on the connection.

The cost of admission to the unlearning zone

Picture of a dull classroom (public domain)

I describe some of what I do as ‘unteaching’, so I find this highly critical article by Miss Smith – The Unlearning Zone – interesting. Miss Smith dislikes the terms ‘unteaching’ and ‘unlearning’ for some well-expressed aesthetic and practical reasons: as she puts it, they are terms “that would not be out of place in a particularly self-satisfied piece of poststructuralist literary analysis circa 1994.” I partially agree. However, she also seems equally unenamoured with what she thinks they stand for. I disagree with her profoundly on this so, as she claims to be new to these terms, here is my attempt to explain a little of what I mean by them, why I think they are a useful part of the educators’ lexicon, and why they are crucially important for learners’ development in general.

First the terms…

Yes, ‘unteaching’ is an ugly neologism and it doesn’t really make sense: that’s part of the appeal of using it – a bit of cognitive dissonance can be useful for drawing attention to something. However, it is totally true that someone who is untaught is just someone who has not (yet) been taught, so ‘unteaching’, seen in that light, is at best pointless, at worst self-contradictory. On the other hand, it does seem to follow pretty naturally from ‘unlearning’ which, contrary to Miss Smith’s assertion, has been in common use for centuries and makes perfect sense. Have you ever had to unlearn bad habits? Me too.

As I understand it, ‘unteach’ is to ‘teach’ as ‘undo’ is to ‘do’. Unteaching is still teaching, just as undoing is still doing, and unlearning is still learning. Perhaps ‘deteaching’ would be a better term. Whatever we choose to call it, unteaching is concerned with intentionally dismantling the taught belief that teaching is about exerting power over learners, and replacing it with the attitude that teachers are there to empower learners to learn. This is not a particularly radical idea. It is what all teachers should do anyway, I reckon. But it is worth drawing attention to it as a distinct activity because it runs counter to the tide, and the problem it addresses is virtually ubiquitous in education up to, and sometimes at, doctoral level.

Traditional teaching of the sort Miss Smith seems to defend in her critique does a lot more than teach a subject, skill, or way of thinking. It teaches that learning is a chore that is not valuable in and of itself, and that learners must be forced to do it for some other purpose, often someone else’s purpose. It teaches that teaching is something done to students by a teacher: at its worst, it teaches that teaching is telling; at best, that teaching involves telling someone to do something. It’s not that (many) teachers deliberately seek these outcomes, but that they are the most likely lessons to be learned, because they are the ones that are repeated most often. The need for unteaching arises because traditional teaching, in addition to whatever it intends (with luck) to teach, teaches some terrible lessons about learning, and about the role of teaching in learning, that must be unlearned.

What is unteaching?

Miss Smith claims that unteaching means “open plan classes, unstructured lessons and bean bags.” That’s not the way I see it at all. Unlike traditional teaching, with its timetables, lesson plans, learning objectives, and uniform tests, unteaching does not have its own technologies and methods, though it does, for sure, tend to be a precursor to connectivist, social constructivist, constructionist, and other more learner-centred ways of thinking about the learning process, which may sometimes be used as part of the process of unteaching itself. Such methods, models, and attitudes emerge fairly naturally when you stop forcing people to do your bidding. However, they are just as capable of being used in a controlling way as the worst of instructivist methods: reports on such interventions that include words like ‘students must…’, ‘I make my students…’ or (less blatantly) ‘students (do X)’ far outnumber all others, and that is the very opposite of unteaching. The specific technologies (including pedagogies as much as open-plan classrooms and beanbags) are not the point. Lectures, drill-and-practice, and other instructivist methods are absolutely fine, as long as:

  1. they at least attempt to do the job that students want or need,
  2. they are willingly and deliberately chosen by students,
  3. students are well-informed enough to make those choices, and
  4. students can choose to learn otherwise at any time.

No matter how cool and groovy your problem-based, inquiry-based, active methods might be, if they are imposed on students (especially with the use of threats for non-compliance and rewards for compliance – e.g. qualifications, grades, etc) then it is not unteaching at all: it’s just another way of doing the same kind of teaching that caused the problem in the first place. But if students have control – and ‘control’ includes being able to delegate control to someone else who can scaffold, advise, assist, instruct, direct, and help them when needed, as well as being able to take it back whenever they wish – then such methods can be very useful. So can lectures. To all those educational researchers who object to lectures, I ask whether they have ever found them valuable at a conference (and, if not, why did they go to the conference in the first place?). It’s not the pedagogy of lectures that is at fault. It’s the requirement to attend them, and the accompanying expectation that people are going to learn what you are teaching as a result. That is, simply put, empirically wrong. It doesn’t mean that lecturees learn nothing. Far from it. But what you teach and what they learn are different kinds of animal.

Problems with unteaching

It’s really easy to be a bad unteacher – I think that is what Miss Smith is railing against, and it’s a fair criticism. I’m often pretty bad at it myself, though I have had a few successes along the way too. Unteaching and, especially, the pedagogies that result from it are far more likely to go wrong, and they take a lot more emotional, intellectual, and social effort than traditional teaching, because they don’t come pre-assembled. They have no convenient structures and processes in place to do the teaching for you. Traditional teaching ‘works’ even when it doesn’t. If you throw someone into a school system, with all its attendant rewards, punishments, timetables, rules, and curricula, and if you give them the odd textbook and assessment along the way, then most students will wind up learning something like what the system intends to teach, no matter how awful the teachers might be. In such a system, students will rarely learn well, rarely persistently, rarely passionately, seldom kindly, and the love of learning will have been squashed out of many of them along the way (the survivors often become academics and teachers themselves). But they will mostly pass tests at the end of it. With a bit of luck, many might even have gained a bit of useful knowledge or skill, albeit that much of it will be not just wasted and forgotten as easily as a hotel room number when the stay is over, but actively disliked by the end. And, of course, they will have learned dependent ways of learning that will serve them poorly outside institutional systems.

To make things far worse, the very structures that assist the traditional teacher (grades, compulsory attendance, fixed outcomes, the concept of failure, etc) are deeply antagonistic to unteaching, and are exactly why it is needed in the first place. Unteachers face a huge upstream struggle against an overwhelming tide that threatens to drown passionate learning every inch of the way. The results of unteaching can be hard to defend within a traditional educational system because, by conventional measures, the process is often inefficient and time-consuming. But conventional measures only make sense when you are trying to make everyone do the same things, through the same means, to the same ends, measured by and in order to meet the same criteria. That’s precisely the problem.

The final nail in unteaching’s coffin is that it is applied very unevenly across the educational system, so every freedom it brings is counterbalanced by a mass of reiterated antagonistic lessons from other courses and programs. Every time we unteach someone, two others reteach them. Ideally, we should design educational systems that are friendlier to and more supportive of learner autonomy, and that are (above all else) respectful of learners as human beings.

In K-12 teaching there are plenty of models to draw from, including Summerhill, Steiner (AKA Waldorf) schools, Montessori schools, experiential learning schools, etc. Few are even close to perfect, but most are at least no worse than their conventional counterparts, and they start with an attitude of respect for the children rather than a desire to make them conform. That alone makes them worthwhile. There are even some regional systems, such as those found in Finland or (recently) British Columbia, that are heading broadly in the right direction. In universities and colleges there are plenty of working models, from Oxford tutorials to Cambridge supervisions, to traditional theses and projects, to independent study courses and programs, to competency-based programs, to PLAR/APEL portfolios, and much more. It is not a new idea at all. There is copious literature, and there are many theoretical models that have stood the test of time, from andragogy to communities of practice, through to teachings from Freire, Illich, Dewey, and even (a bit quirkily) Vygotsky.

Furthermore, generically and innately, most distance and e-learning unteaches better than its p-learning counterparts, because teachers cannot exert the same level of control and students must learn to learn independently. Sadly, much of it is spoiled by coercing students with grades, thereby providing the worst of both worlds: students are forced into the terminal behaviours the teacher demands but, without physical copresence, receive less of the guidance and emotional/social support that might empower them along the way. Much of my own research and teaching is concerned with inverting that dynamic – increasing empowerment and social support through online learning while decreasing coercion. I’d like to believe that my institution, Athabasca University, is largely dedicated to the same goal, though we mostly have a way to go before we get it right.

Why it matters

Unteaching is to a large extent concerned with helping learners – including adult learners – to get back to the state in which most children start their school careers: driven by curiosity, personal interest, social value, joy, and delight – a drive that is schooled out of them over years of being taught dependency. Once misconceptions about what education is for, what teachers do, and how we learn have been removed, teaching can happen much more effectively: supporting, nurturing, inspiring, challenging, responding, etc, but not controlling, not making students do things they are not ready to do, for reasons that mean little to them and have even less to do with what they are learning.

However, though it is an immensely valuable terminal outcome, improved learning is perhaps not the biggest reason for unteaching. The real issue is moral: it’s simply the right thing to do. The greatest value is that students are far more likely to have been treated with the respect, care, and honour that all human beings deserve along the way. Not ‘care’ of the sort you would give to a dog when you train it to be obedient and well behaved. Care of the sort that recognizes and valorizes autonomy and diversity, that respects individuals, that cherishes their creativity and passion, that sees learners as ends in themselves, not products or (perish the thought) customers. That’s a lesson worth teaching, a way of being that is worth modelling. If that demands more effort, if it is more fallible, and if it means that fewer students pass your tests, then I’m OK with that. That’s the price of admission to the unlearning zone.