Niggles about NGDLEs – lessons from ELF

Malcolm Brown has responded to Tony Bates and me in an Educause guest post in which he defends the concept of the NGDLE and expands a bit on the purposes behind it. This does help to clarify the intent although, as I mentioned in my earlier post, I am quite firmly in favour of the idea, so I am already converted on the main points. I don’t mind the Lego metaphor if it works, but I do think we should concentrate more on the connections than the pieces. I also see that it is fairly agnostic to pedagogy, at least in principle. And I totally agree that we desperately need to build more flexible, assemblable systems along these lines if we are to enable effective teaching, management of the learning process and, much more importantly, if we are to support effective learning. Something like the proposed environment (more of an ecosystem, I’d say) is crucial if we want to move on.


It has been done before, over ten years ago in the form of ELF, in much more depth and detail and with large government and standards bodies supporting it, and it is important to learn the lessons of what was ultimately a failed initiative. Well – maybe not failed, but certainly severely stalled. Parts persist and have become absorbed, but the real value of it was as a model for building tools for learning, and that model is still not as widespread as it should be. The fact that the Educause initiative describes itself as ‘next generation’ is perhaps the most damning evidence of its failure.


Why ELF ‘failed’

I was not a part of nor close to the ELF project but, as an outsider, I suspect that it suffered from four major and interconnected problems:

  1. It was very technically driven and framed in the language of ICTs, not educators or learners. Requirements from educators were gathered in many ways, with workshops, working groups and a highly distributed team of experts in the UK, Australia, the US, Canada, the Netherlands and New Zealand (it was a very large project). Some of the central players had a very deep understanding of the pedagogical and organizational needs of not just learners but organizations that support them, and several were pioneers in personal learning environments (PLEs) that went way beyond the institution. But the focus was always on building the technical infrastructure – indeed, it had to be, in order to operationalize it. For those outside the field, who had not reflected deeply on the reasons this was necessary, it likely just seemed like a bunch of techies playing with computers. It was hard to get the message across.
  2. It was far too ambitious, perhaps bolstered by the large amounts of funding and support from several governments and large professional bodies. The e-learning framework was just one of several strands like e-science, e-libraries and so on, that went to make up the e-framework. After a while, it simply became the e-framework and, though conceptually wonderful, in practical terms it was attempting far too much in one fell swoop. It became so broad, complex and fuzzy that it collapsed under its own weight. It was not helped by commercial interests that were keen to keep things as proprietary and closed as they could get away with. Big players were not really on board with the idea of letting thousands of small players enter their locked-in markets, which was one of the avowed intentions behind it. So, when government funding fizzled out, there was no one to take up such a huge banner. A few small flags might have been way more successful.
  3. It was too centralized (oddly, given its aggressively decentralized intent and the care taken to attempt to avoid that). With the best of intentions, developers built over-engineered standards relying on web service architectures that the rest of the world was abandoning because they were too clunky, insufficiently agile and much too troublesome to implement. I am reminded, when reading many of the documents that were produced at the time, of the ISO OSI network standards of the late 80s that took decades to reach maturity through ornate webs of committees and working groups, were beautifully and carefully engineered, and that were thoroughly trounced by the lighter, looser, more evolved, more distributed TCP/IP standards that are now pretty much ubiquitous. For large complex systems, evolution beats carefully designed engineering every single time.
  4. The fact that it was created by educators whose framing was entirely within the existing system meant that most of the pieces that claimed to relate to e-learning (as opposed to generic services) had nothing to do with learning at all, but were representative of institutional roles and structures: marking, grading, tracking, course management, resource management, course validation, curriculum, reporting and so on. None of this has anything to do with learning and, as I have argued on many occasions elsewhere, may often be antagonistic to learning. While there were also components that were actually about learning, they tended to be framed in the context of existing educational systems (writing lessons, creating formal portfolios, sequencing of course content, etc). Though very much built to support things like PLEs as well as institutional environments, the focus was the institution far more than the learner.

As far as I can tell, any implementation of the proposed NGDLE is going to run into exactly the same problems. Though the components described are contemporary and the odd bit of vocabulary has evolved, all of them can be found in the original ELF model and the approach to achieving it seems pretty much the same. Moreover, though the proposed architecture is flexible enough to support pretty much anything – as was ELF – there is a tacit assumption that this is about education as we know it, updated to support the processes and methods that have been developed since (and often in response to) the heinous mistakes we made when we designed the LMSs that dominate education today. This is not surprising – if you ask a bunch of experts for ideas you will get their expertise, but you will not get much in the way of invention or new ideas. The methodology is therefore almost guaranteed to miss the next big thing. Those ideas may come up but they will be smoothed out in an averaging process and dissenting models will not become part of the creed. This is what I mean when I criticize it as a view from the inside.

Much better than the LMS

If implemented, a NGDLE will undoubtedly be better than any LMS, with which there are manifold problems. In the first place, LMSs are uniformly patterned on mediaeval educational systems, with all their ecclesiastic origins, power structures and rituals intact. This is crazy, and actually reinforces a lot of things we should not be doing in the first place, like courses, intimately bound assessment and accreditation, and laughably absurd attempts to exert teacher control, without the slightest consideration of the fact that pedagogies determined by the physics of spaces in which we lock doors and keep learners controlled for an hour or two at a time make no sense whatsoever in online learning. In the second place, centralized systems have to maintain an uneasy and seldom great balance between catering to every need and remaining usably simple. This inevitably leads to compromises, from the small (e.g. minor formatting annoyances in discussion forums) to the large (e.g. embedded roles or units of granularity that make everything a course). While customization options can soften this a little, centralized systems are structurally flawed by their very nature. I have discussed such things in some depth elsewhere, including in both my published books. Suffice it to say, the LMS shapes us in its own image, and its own image is authoritarian, teacher-controlled and archaic. So, a system that componentizes things so that we can disaggregate any or all of it, provide local control (for teachers and other learners as well as institutions and administrators) and allow creative assemblies is devoutly to be wished for. Such a system architecture can support everything from the traditional authoritarian model to the loosest of personal learning environments, and much in between.


NGDLE is a misnomer. We have already seen that generation come and go. But, as a broad blueprint for where we should be going and what we should be doing now, both ELF and NGDLE provide patterns that we should be using and thinking about whenever we implement online learning tools and content and, for that, I welcome it. I am particularly appreciative that NGDLE provides reinvigorated support for approaches that I have been pushing for over a decade but that ICT departments and even faculty resist implacably. It’s great to be able to point to the product of so many experts and say ‘look, I am not a crank: this is a mainstream idea’. We need a sea-change in how we think of learning technologies and such initiatives are an important part of creating the culture and ethos that lets this happen. For that I totally applaud this initiative.

In practical terms, I don’t think much of this will come from the top down, apart from in the development of lightweight, non-prescriptive standards and the norming of the concepts behind it. Of current standards, I think TinCan is hopeful, though I am a bit concerned that it is becoming over-ornate in its emerging development. LTI is a good idea, sufficiently mature, and light enough to be usable but, again, in its new iteration it is aiming higher than might be wise. Caliper is OK but also showing signs of excessive ambition. Open Badges are great but I gather it is becoming less lightweight in its latest incarnation. We need more of such things, not more elaborate versions of them. Unfortunately, the nature of technology is that it always evolves towards increasing complexity. It would be much better if we stuck with small, working pieces and assembled those together rather than constantly embellishing good working tools. Unix provides a good model for that, with tools that have worked more or less identically for decades but that constantly gain new value in recombination.

Footnote: what became of ELF?

It is quite hard to find information about ELF today. It seems (as an outsider) that the project just ground to a halt rather than being deliberately killed. There were lots of exemplar projects, lots of hooks and plenty of small systems built that applied the idea and the standards, many of which are still in use today, but it never achieved traction. If you want to find out more, here is a small reading list:

  - the main site (the link to the later e-framework site leads to a broken page)
  - some of the relevant projects ELF incorporated
  - a good, brief overview from 2004 of what it involved and how it fitted together
  - spooky: this is about ‘Next Generation E-Learning Environments’ rather than digital ones but, though framed in more technical language, the ideas are the same as NGDLE
  - a slightly less technical variant (links to part 1, which explains web services for non-technical people)

See also a set of scenarios and use cases that are eerily similar to those proposed for NGDLE.

If anyone has any information about what became of ELF, or documents that describe its demise, or details of any ongoing work, I’d be delighted to learn more!



Why so many questions?


At Athabasca University, our proposed multi-million dollar investment in a student relationship management system, dubbed the ‘Student Success Centre’ (SSC), is causing quite a flood of discussion and debate among faculty and tutors at the moment. Though I do see some opportunities in this if (and only if) it is very intelligently and sensitively designed, there are massive and potentially fatal dangers in creating such a thing.  See a previous post of mine for some of my worries. I have many thoughts on the matter, but one thing strikes me as interesting enough to share more widely and, though it has a lot to do with the SSC, it also has broader implications.

Part of the justification for the SSC is that an alleged 80% of current interactions with students are about administrative rather than academic issues. I say ‘alleged’ because such things are notoriously hard to measure with any accuracy. But let’s assume that it actually is accurate.

How weird is that?

Why is it that our students (apparently) need to contact us for admin support in overwhelming numbers but actually hardly talk at all about the complicated subjects they are taking? Assuming that these 80% of interactions are not mostly to complain about things that have gone wrong (if so, an SSC is not the answer!) then it seems, on the face of it, more than a bit topsy-turvy.
One reasonable explanation might be that our course materials are so utterly brilliant that they require little further interaction, but I am not convinced that this sufficiently explains the disparity. Students are mostly spending 100+ hours on academic work for each course whereas (I hope) at most a couple of hours are spent on administrivia. No matter how amazing our courses might be, the difference is remarkable. It is doubly remarkable when you consider that a fair number of our courses do involve at least some required level of interaction which, alone, should easily account for most if not more than all of that remaining 20%. In my own courses it is a lot more than that and I am aware of many others with very active Landing groups, Moodle forums, webinar sessions, and even the occasional visit to an immersive world.
It is also possible that our administrative processes are extremely opaque and ill-explained. This certainly accords with my own experience of trying to work out something as simple as how much a course would cost or the process needed to submit project work. But, if that is the case, and assuming our distance, human-free teaching works as well as we believe it does, then why can we not a) simplify the processes and b) provide equally high quality learning materials for our admin processes so that students don’t need to bother our admin staff so much? If our course materials are so great then that would seem, on the face of it, very much more cost-effective than spending millions on a system that is at least as likely to have a negative as a positive impact and that actually increases our ongoing costs considerably. It is also quite within the capabilities of our existing skillset.
Even so, it seems very odd to me that students can come to terms with inordinately complex subjects from philosophy to biochemistry, but that they are foiled by a simple bit of bureaucracy and need to seek human assistance. It may be hard, but it is not beyond the means of a motivated learner to discover, especially given that we are specialists in producing high quality learning materials that should make such things very clear. And in motivation, I think, lies the key.

Other people matter

Other people are wonderful things when you need to learn something, pretty much across the board. Above all they matter when there is no obvious reason to be interested in or care about something for its own merits, and bureaucratic procedures are seldom very interesting. I have known only one person in my whole life who actually likes filling in forms (I think it is a meditative pursuit – my father felt much the same way about dishwashing and log sawing) but, for the most part, this is not something that excites people.
I hypothesize that our students tend to need less academic than bureaucratic help at least partly because, by and large, for the coursework they are very self-motivated people learning things that interest them whereas our bureaucracy is at most a means to an end, at worst a demotivating barrier. It would not help much to provide great teaching materials for bureaucratic procedures because 99% of students would have no intrinsic interest in learning about them, and it would have zero value to them in any future activity. Why would they bother? It is far easier to ask someone.
Our students actually like the challenge of facing and solving problems in their chosen subjects – in fact, that’s one of the great joys of learning. They don’t turn to tutors to discuss things because there are plenty of other ways of getting the help they need, both in course materials and elsewhere, and it is fun to overcome obstacles. The more successful ones tend to have supportive friends, families or colleagues, or are otherwise very single-minded. They tend to know why they are doing what they are doing. We don’t get many students that are not like this, at least on our self-paced courses, because either they don’t bother coming in the first place or they are among the scarily large percentage that drop out before starting (we don’t count them in our stats though, in fairness, neither do face-to-face universities).
But, of course, that only applies for students that do really like the process of learning and most of what they are learning, that know how to do it and/or that have existing support networks. It does not apply to those that hit very difficult or boring spots, that give up before they start, that hit busy times that mean they cannot devote the energy to the work, that need a helping hand with the process but cannot find it elsewhere, or that don’t bother even looking at a distance option at all because they do not like the isolation it (apparently) entails. For those students, other people can help a lot. Even for our own students, over half (when asked) claim that they would appreciate more human interaction. And those are the ones that have knowingly self-selected a largely isolated process and that have not already dropped out. 
Perhaps more worryingly, it raises concerns about the quality of the learning experience. Doing things alone means that you miss out on all the benefits of a supportive learning community. You don’t get to argue, to explain, to question, save in your own head or in formal, largely one-way, assignments. You don’t get multiple perspectives, different ways of seeing, opportunities to challenge and be challenged. You don’t get the motivation of writing for an audience of people that you care about. You don’t get people that care about you and the learning community providing support when times are hard, nor the pleasure of helping when others are in difficulty. You don’t get to compare yourself with others, the chance to reflect on how you differ and whether that is a good or bad thing. You don’t get to model behaviours or see those behaviours being modelled. These are just some of the notable benefits of traditional university systems that are relatively hard to come by in Athabasca’s traditional self-paced model (not in all courses, but in many). It’s not at all about asking questions and getting solutions. It’s about engaging in a knowledge creation process with other people. There are distinct benefits of being alone, notably in the high degree of control it brings, but a bit of interaction goes a long long way. It takes a very special kind of person to get by without that and the vast majority of our successful students (at least in undergraduate self-paced courses) are exactly that special kind of person. 
If it is true that only 20% of interactions are currently concerned with academic issues, that is a big reason for concern, because it means our students are missing out on an incredibly rich set of opportunities in which they can help one another as well as interact with tutors. Creating an SSC system that supports what is therefore, for those that are not happy alone (i.e. the ones we lose or never get in the first place), an impoverished experience, seems simply to ossify a process that should at least be questioned. It is not a solution to the problem – it is an exacerbation of it, further entrenching a set of approaches and methods that are inadequate for most students (the ones we don’t get or keep) in the first place.

A sustainable future?

As a university seeking sustainability we could simply continue to concentrate on addressing the needs of self-motivated, solitary students that will succeed almost no matter what we do to them, and just make the processing more cost-efficient with the SSC.  If we have enough of those students, then we will thrive for some time to come, though I can’t say it fits well with our open mission and I worry greatly about those we fail to help. If we want to get more of those self-guided students then there are lots of other things we should probably do too like dropping the whole notion of fixed-length courses (smaller chunks means the chances of hitting the motivation sweet-spot are higher) and disaggregating assessment from learning (because extrinsic motivation kills intrinsic motivation).
But, if we are sticking with the idea of traditional courses, the trouble is that we are no longer almost alone in offering such things and there is a finite market of self-motivated, truly independent learners who (if they have any sense) will find cheaper alternatives that offer the same or greater value. If all we are offering is the opportunity to learn independently and a bit of credible certification at the end of it, we will wind up competing on price with institutions and businesses that have deeper coffers, cheaper staff, and fewer constraints. In a cut-throat price war with better funded peers, we are doomed.
If we are to be successful in the future then we need to make more of the human side of our teaching, not less, and that means creating richer, more direct channels to other people in this learning community, not automating methods that were designed for the era of correspondence learning. This is something that, not coincidentally, the Landing is supposed to help with, though it is just an exemplar and at most a piece of the puzzle – we ideally want connection to be far more deeply embedded everywhere rather than in a separate site. It is also something that current pilot implementations of the SSC are antagonistic towards, thanks mainly to equating time and effort, focusing on solving specific problems rather than human connection, failing to support technological diversity, and standing as an obstacle between people that just need to talk. It doesn’t have to be built that way. It could almost as easily vanish into the background, be seamlessly hooked into our social environments like email, Moodle and the Landing, and could be an admin tool that gives support when needed but disappears when not. And there is no reason whatsoever that it needs to be used to pay tutors by the recorded minute, a bad idea that has been slung on the back of it and that has no place in our culture. Though not what the pilot systems do at all, a well-designed system like this could step in or be called upon when needed, could support analytics that would be genuinely helpful, and could improve management information, all without getting in the way of interaction at all. In fact, it could easily be used to enhance it, because it could make patterns of dialogue more visible and comprehensible.

In conclusion

At Athabasca we have some of the greatest distance educators and researchers on the planet, and that greatness rubs off on those around them. As a learning community, knowledge spreads among us and we are all elevated by it. We talk about such things in person, in meetings, via Skype, in webinars, on mailing lists, on the Landing, in pubs, in cafes, etc. And, as a result, ideas, methods and values get created, transformed and flow through our network. This makes us quite unique – as all learning communities are unique – and creates the distinctive culture and values of our university that no other university can replicate. Even when people leave, they leave traces of their ideas and values in those that remain, that get passed along for long after they have gone, become part of the rich cultural identity that defines us. It’s not mainly about our structures, processes and procedures: except when they support greater interaction, those actually get in the way much of the time. It’s about a culture and community of learning. It’s about the knowledge that flows in and through this shifting but identifiable crowd. This is a large part of what gives us our identity. It’s exactly the same kind of thing that means we can talk about (say) the Vancouver Canucks or Apple Inc as a meaningful persistent entity, even though not one of the people in the organization is the same as when it began and virtually all of its processes, locations, strategies and goals beyond the most basic have changed, likely many times. The thing is, if we hide those people behind machines and processes, separate them through opaque hierarchies, reduce the tools and opportunities for them to connect, we lose almost all of the value. The face of the organization becomes essentially the face of the designer of the machine or the process and the people are simply cogs implementing it. 
That’s not a good way forward, especially as there are likely quite a few better machine and process designers out there. Our people – staff and students – are the gold we need to mine, and they are also the reason we are worth saving. We need to be a university that takes the distance out of distance learning, that connects, inspires, supports and nurtures both its staff and its students. Only then will we justly be able to claim to have a success centre.


The cost of time

A few days back, an email was sent to our ‘allstaff’ mailing list inviting us to join in a bocce tournament. This took me a bit of time to digest, not least because I felt impelled to look up what ‘bocce’ means (it’s an Italian variant of pétanque, if you are interested). I guess this took a couple of minutes of my time in total. And then I realized I was probably not alone in this – that over a thousand people had also been reading it and, perhaps, wondering the same thing. So I started thinking about how we measure costs.

The cost of reading an email

A single allstaff email at Athabasca will likely be read by about 1200 people, give or take. If such an email takes one minute to read, that’s 1200 minutes – 20 hours – of the institution’s time being taken up with a single message. This is not, however, counting the disruption costs of interrupting someone’s train of thought, which may be quite substantial. For example, this study from 2002 reckons that, not counting the time taken to read email, it takes an average of 64 seconds to return to previous levels of productivity after reading one. Other estimates based on different studies are much higher – some studies suggest the real recovery time from interruptions to tasks could be as high as 15-20 minutes. Conservatively, though, it is probably safe to assume that, taking interruption costs into account, an average allstaff email that is read but not acted upon consumes an average of two minutes of a person’s time: in total, that’s about 40 hours of the institution’s time, for every message sent. Put another way, we could hire another member of staff for a week for the time taken to deal with a single allstaff message, not counting the work entailed by those that do act on the message, nor the effort of writing it. It would therefore take roughly 48 such messages to account for a whole year of staff time. We get hundreds of such messages each year.
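For anyone who wants to check my arithmetic, the whole calculation fits in a few lines. This is just a sketch using the rough estimates above (1200 staff, two minutes per message, a 40-hour week and a 48-week working year) – none of these are measured figures:

```python
# Back-of-the-envelope cost of a single allstaff email.
# All inputs are rough estimates from the text, not measured data.
staff = 1200               # approximate size of the allstaff list
minutes_per_message = 2    # reading time plus interruption recovery

hours_per_message = staff * minutes_per_message / 60
print(hours_per_message)   # 40.0 institutional hours per message

# One year of staff time, at roughly 40 hours/week for 48 weeks:
staff_year_hours = 40 * 48
print(staff_year_hours / hours_per_message)  # 48.0 messages per staff-year
```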
But it’s not just about such tangible interruptions. Accessing emails can take a lot of time before we even get so far as reading them. Page rendering just to view a list of messages on the web front end for our email system is an admirably efficient 2 seconds (i.e. 40 minutes of the organization’s time for everyone to be able to see a page of emails, not even to read their titles). Let’s say we all did that an average of 12 times a day – that’s 8 hours, or more than a day of the institution’s time, taken up with waiting for that page to render each day. Put another way, as we measure such things, if it took four seconds rather than two, the extra waiting would add up to a full-time position: we would have to fire someone to pay for it. As it happens, for another university for which I have an account, using MS Exchange, simply getting to the login screen of its web front end takes 4 seconds. Once logged in (a further few seconds thanks to Exchange’s insistence on forcing you to tell it that your computer is not shared even though you have told it that a thousand times before), loading the page containing the list of emails takes a further 17 seconds. If AU were using the same system, using the same metric of 12 visits each day, that could equate to around 68 hours of the institution’s time every single day, simply to view a list of emails, not including a myriad of other delays and inefficiencies when it comes to reading, responding to and organizing such messages. Of course, we could just teach people to use a proper email client and reduce the delay to one that is imperceptible, because it occurs in the background – webmail is a truly terrible idea for daily use – or simply remind them not to close their web browsers so often, or to read their emails less regularly. There are many solutions to this problem. Like all technologies, especially softer ones that can be used in millions of ways, it ain’t what you do, it’s the way that you do it.

But wait – there’s more

Email is just a small part of the problem, though: we use a lot of other websites each day. Let’s conservatively assume that, on average, everyone at AU visits, say, 24 pages in a working day (for me that figure is always vastly higher) and that each page averages out at about 5 seconds to load. That’s two minutes per person. Multiplied by 1200, it’s another week of the institution’s time ‘gone’ every day simply waiting to read a page.
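The same sort of sketch applies to general page loads; again, the page counts and load times here are illustrative assumptions rather than measurements:

```python
# Institutional cost of waiting for web pages to load each day.
staff = 1200          # rough headcount, as above
pages_per_day = 24    # assumed average page visits per person
seconds_per_load = 5  # assumed average load time per page

minutes_per_person = pages_per_day * seconds_per_load / 60
org_hours_per_day = staff * minutes_per_person / 60
print(org_hours_per_day)  # 40.0 hours per day – about one working week
```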
And then there are the madly inefficient bureaucratized processes that are dictated and mediated by poorly tailored software. When I need to log into our CRM system I reckon that simply reading my tasks takes a good five minutes. Our leave reporting system typically eats 15 minutes of my time each time I request leave (it replaces one that took 2-3 minutes).  Our finance system used to take me about half an hour to add in expenses for a conference but, since downgrading to a baseline version, now takes me several hours, and it takes even more time from others that have to give approvals along the way. Ironically, the main intent behind implementing this was to save us money spent on staffing. 
I could go on, but I think you see where this is heading. Bear in mind, though, that I am just scratching the surface. 

Time and work

My point in writing this is not to ask for more efficient computer and admin systems, though that would indeed likely be beneficial. Much more to the point, I hope that you are feeling uncomfortable or even highly sceptical about how I am measuring this. Not with the figures: it doesn’t much matter whether I am wrong with the detailed timings or even the math. It is indisputable that we spend a lot of time dealing with computer systems and the processes that surround them every day, and small inefficiencies add up. There’s nothing particularly peculiar to ICTs about this either – for instance, think of the time taken to walk from one office to another, to visit the mailroom, to read a noticeboard, to chat with a colleague, and so on. But is that actually time lost or does it even equate precisely to time spent?  I hope you are wondering about the complex issues with equating time and dollars, how we learn, why and how we account for project costs in time, the nature of technologies, the cost vs value of ICTs, the true value of bocce tournament messages to people that have no conceivable chance of participating in them (much greater than you might at first imagine), and a whole lot more. I know I am. If there is even a shred of truth in my analysis, it does not automatically lead to the conclusion that the solution is simply more efficient computer systems and organizational procedures. It certainly does bring into question how we account for such things, though, and, more interestingly, it highlights even bigger intangibles: the nature and value of work itself, the nature and value of communities of practice, the role of computers in distributed intelligence, and the meaning, identity and purpose of organizations. I will get to that in another post, because it demands more time than I have to spend right now (perhaps because I receive around 100 emails a day, on average).

Beyond the group: how education is changing and why institutions need to catch up

Understanding the ways people interact in an online context matters if we are interested in deliberate learning, because learning is almost always with and/or from other people: people inform us, inspire us, challenge us, motivate us, organize us, help us, engage with us. In the process, we learn. Intentional learning, whether informal, non-formal or formal, is now, more than ever, an activity that occurs outside the formal physical classroom. We are no longer limited to what the schools, universities, teachers and libraries in our immediate area provide for us, nor do we need to travel and pay the costs of getting to the experts in teaching and subject matter that we need. We are not limited to classes and courses any more. We don’t even need books. Anyone and everyone can be our teachers. This matters.

Traditional university education

Traditional university education is all about groups, from classes to courses to committees to cohorts (Dron & Anderson, 2014). I use the word ‘group’ in a distinctive and specific way here, following a pattern set by Wellman, Downes and others before and since. Groups have names, owners, members, roles and hierarchies. Groups have purposes and deliberate boundaries. Groups have rules and structures. Groups embody a large set of highly evolved mechanisms that have developed over millennia to deal with the problems of coordinating large numbers of people in physical spaces and, in the context in which they evolved, they are a pretty effective solution.

But there are two big problems with using groups in their current form in online learning. The first is that the online context changes group dynamics. In the past, professors were able to effectively trap students in a room for an hour or more, and to closely control their activities throughout that time. That is the context in which our most common pedagogies evolved. Even in the closest simulations of a face-to-face context (immersive worlds or webmeetings) this is no longer possible.

The second problem is more significant and follows from the first: group technologies, from committees to classrooms, were developed in response to the constraints and affordances of physical contexts that do not exist in an online and connected world. For example, it has been a long time since the ability to be in hearing range of a speaker has mattered if we wish to understand what he or she says. Teachers needed to control such groups because, apart from anything else, in a physical context, it would have been impossible to otherwise be heard without disruption. It was necessary to avoid such disruption and to coordinate behaviour because there was no other easy way to gain the efficiencies of one person teaching many (books notwithstanding). We also had to be disciplined enough to be in the same place at the same time – this involved a lot of technologies like timetables, courses, and classroom furniture. We needed to pay close attention because there was no persistence of content. The whole thing was shaped by the need to solve problems of access to rival resources in a physical space. 

We do not all have to be together in one place at one time any more. It is no longer necessary for the teacher to have to control a group because that group does not (always or in the same way) need to be controlled.

Classrooms used to be the only way to make efficient use of a single teacher with a lot of learners to cater for, but compromises had to be made: a need for discipline, a need to teach to the norm, a need to schedule and coordinate activities (not necessarily when learners needed or wanted to learn), a need to demand silence while the teacher spoke, a need to manage interactions, a perceived need to guide unwilling learners, brought on by the need to teach things guaranteed to be boring or confusing to a large segment of a class at any given time. We therefore had to invent ways to keep people engaged, either by force or by intentional processes designed to artificially enthuse. This is more than a little odd when you think about it. Given that there is hardly anything more basically and intrinsically motivating than to learn something you actually want to learn when you want to learn it, the fact that we had to figure out ways to motivate people to learn suggests something went very wrong with the process. It did not go wonderfully. A whole load of teaching had worse than no effect, and little resulted in persistent and useful learning – at least, little of what was intentionally taught. It was a compromise that had to be made, though. The educational system was a technology designed to make the best use of limited resources and the limitations imposed by physics, without which the spread of knowledge and skills would have been (and used to be and, in pockets where education is unavailable, still is) very limited.

Online learning

Those of us who are online (you and me) no longer need to make all of those compromises. There are millions of other ways to learn online with great efficiency and relevance that do not involve groups at all, from YouTube to Facebook to Reddit to StackExchange, to this post. These are under the control of the learners, each at the centre of his or her own network and in control of the flow, each able to choose which sets of people to engage with, and what to pay attention to.

Networks have no boundaries, names, roles or rules – they are just people we know.

Sets have no ties, no rituals of joining, no allegiances or social connections – they are just collections of people temporarily occupying a virtual or physical space who share similar interests without even a social network to bind them.

Sets and networks are everywhere. They are the fundamental social forms through which anyone with online access learns, and they are driven by people or crowds of people, not by designed processes and formal patterns of interaction.

Many years ago, John Chambers, then head of Cisco, was ridiculed for suggesting that e-learning would make email look like a rounding error. He was absolutely right, though, if not in quite the way he meant it: how many people reading this do not turn first to Google, Wikipedia or some other online, crowd-driven tool when needing or wanting to learn something? Who does not learn significant amounts from their friends, colleagues or people they follow through social networks or email? We are swimming in a sea of billions of teachers: those who inform, those with whom we disagree, those who act as role models, those who act as anti-models, those that inspire, those that affirm, those that support, those we doubt, those we trust. If there was ever a battle for supremacy between face-to-face and e-learning (an entirely artificial boundary) then e-learning has won hands down, many times over. Not so’s you’d know it if you look at our universities. Very oddly, even an online university like Athabasca is trapped in much the same constrained and contingent pattern of teaching, with its origins in the limitations of physical space, as its physical counterparts. It is almost as though the fact of the Internet has had no significant impact beyond making things slightly more convenient. Odd.

Replicating the wrong things

Those of us who teach entirely online are still, on the whole, making use of the single social form of the group, with all of its inherent restrictions, hierarchies and limitations inherited from its physical ancestors. Athabasca is at least a little revolutionary in providing self-paced courses at undergraduate level (albeit rarely with much social engagement at all – its inspiration is as much the book as the classroom), but it still typically keeps the rest of the trappings, and it uses groups like all the rest in most of its graduate-level courses. Rather than maintaining discipline in classrooms through conventional means, we instead make extensive use of assessments, which have become, in the absence of the traditional disciplinary hierarchies that give us power in physical spaces, our primary form of control as well as the perceived primary purpose of at least higher education (the one follows from the other). It has become a transaction: if you do what I say and learn how I tell you to learn then, if you succeed, I will give you a credential that you can use as currency towards getting a job. If not, no deal. Learning, and the entire process of education, has become secondary to the credential, and focused upon it. We do this to replicate a need that was only there in the first place thanks to physics, not because it made sense for learning.

As alternative forms of accreditation become more commonplace and more reliable, it is hard to see us sustaining this for much longer. Badges, social recommendations, commercial credits, online portfolios, direct learning record storage, and much much more are gaining credence and value.

It is hard to see what useful role a university might play when it is not the best way to learn what you want to learn and it is not the best way to gain accreditation for your skills and knowledge.

Will universities become irrelevant? Maybe not. A university education has always been about a lot more than what is taught. It is about learning ways of thinking, habits of mind, ways of building knowledge with and learning from others. It is about being with others that are learning, talking with them, socializing with them, bumping serendipitously into new ideas and ways of being. All of this is possible when you throw a bunch of smart people together in a shared space, and universities are a good gravitational force of attraction for that. It is, and has always been, about networks and sets as much as if not more than groups. The people we meet and get to know are not just networks of friends but of knowledge. The sets of people around us, explicit and implicit, provide both knowledge and direction. And such sets and nets have to form somewhere – they are not mere abstractions. Universities are good catalysts. But that is only true as long as we actually do play this role. Universities like Athabasca focus on isolated individuals or groups in boundaried courses. Only in odd spaces like here, on the Landing, or in external social sites like Twitter, Facebook or RateMyProfessor, is there a semblance of those other roles a university plays, a chance to extend beyond the closed group and credential-focused course process.

Moving on

We can still work within the old constraints, if we think it worthwhile – I am not suggesting we should suddenly drop all the highly evolved methods that worked in the past. Like a horse and cart or a mechanical watch, education still does the job it always did, in ways that more evolved methods will never quite replicate, any more than folios wholly beat scrolls or cars wholly beat horses. There will be both gains and losses as things shift. Like all technologies (Kelly, 2010), the old ways of teaching will never go away completely and will still have value for some. Indeed, they might retain quite a large niche for many years to come.

But now we can do a whole lot more as well and instead, and the new ways work better, on the whole. In a competitive ecosystem, alternatives that work better will normally come to dominate. All the pieces are in place for this to happen: it is just taking us a little while to collectively realize that we don’t need the trainer-wheels any more. Last-gasp attempts to revamp the model, like first-generation xMOOCs, merely serve to illustrate the flaws in the existing model, highlighting in sharp relief the absurdities of adopting group-based forms on an Internet-based scale. Imposing structural forms designed to keep learners on track in physical classrooms makes no sense when applied to a voluntary, uncredentialled and interest-driven course. I think we can do better than that.

The key steps are to disaggregate learning and assessment, and to do away with uniform courses with fixed schedules and pre-determined processes and outcomes. Outsiders, from MOOC providers (they are adapting fast) to publishers are beginning to realize this, as are a few universities like WGU.

It is time to surf the adjacent possible (Kauffman, 2000), to discover ways of learning with others that take advantage of the new horizons, that are not trapped like horseless carriages replicating the limitations of a bygone era. Furthermore, we need to learn to build new virtual environments and learning ecosystems in ways that do not just mimic patterns of the past, but that help people to learn in more flexible, richer ways that take advantage of the freedoms they enable – not personalized (with all the power assertion that implies) but both personal and social. If we build tools like learning management systems or the first generation xMOOC environments like edX, that are trapped into replicating traditional classroom-bound forms, we not only fail to take advantage of the wealth of the network, but we actually reinforce and ossify the very things we are reacting against rather than opening up new vistas of pedagogical opportunity. If we sustain power structures by linking learning and formal assessment, we hobble our capacity to teach. If we enclose learning in groups that are defined as much by who they exclude as who they encompass (Shirky, 2003) then we actively prevent the spread of knowledge. If we design outcome-based courses on fixed schedules, we limit the potential for individual control, and artificially constrain what need not be constrained.

Not revolution but recognition of what we already do

Any and all of this can change. There have long been methods for dealing with the issues of uniformity in course design and structure and/or tight integration of summative assessment to fixed norms, even within educational institutions. European-style PhDs (the ones without courses), portfolio-based accreditation (PLAR, APEL, etc), challenge exams, competency-based ‘courses’,  open courses with negotiable outcomes, assessments and processes (we have several at AU), whole degrees by negotiated learning outcomes, all provide different and accepted ways to do this and have been around for at least decades if not hundreds of years. Till recently these have mostly been hard to scale and expensive to maintain. Not any more. With the growth of technologies like OpenBadges, Caliper and xAPI, there are many ways to record and accredit learning that do not rely on fixed courses, pre-designed outcomes-based learning designs and restrictive groups. Toolsets like the Landing, Mahara or LPSS provide learner-controlled ways to aggregate and assemble both the process and evidence of learning, and to facilitate the social construction of knowledge – to allow the crowd to teach – without demanding the roles and embodied power structures of traditional learning environments. By either separating learning and accreditation or by aligning accreditation with individual learning and competences, it would be fairly easy to make this change and, whether we like it or not, it will happen: if universities don’t do it, someone else will. 
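To make the recording side of this concrete: technologies like xAPI express learning as simple actor–verb–object statements that can be stored in any Learning Record Store, independent of any course or institution. Below is a minimal sketch of such a statement in Python; the learner, email address and activity ID are hypothetical examples, though the verb IRI is a real one from the ADL registry.

```python
import json

# A minimal xAPI ("Tin Can") statement. Records like this decouple evidence
# of learning from any single course, LMS or institution: any conformant
# Learning Record Store can accept and aggregate them.
# The actor details and activity ID below are hypothetical examples.
statement = {
    "actor": {
        "name": "Jane Learner",                      # hypothetical learner
        "mbox": "mailto:jane@example.com",           # hypothetical address
    },
    "verb": {
        # A real verb IRI from the ADL verb registry
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/open-portfolio-review",  # hypothetical
        "definition": {"name": {"en-US": "Open portfolio review"}},
    },
}

def is_minimal_statement(s):
    """Check for the three fields every xAPI statement must carry."""
    return all(k in s for k in ("actor", "verb", "object"))

print(is_minimal_statement(statement))  # True
print(json.dumps(statement, indent=2)[:60])
```

The point of the sketch is the shape, not the plumbing: because the statement is just portable JSON, the accreditation built on top of it need not belong to whoever ran the course.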

All of traditional education is bound by historical constraint and path dependencies. It has led to a vast range of technologies to cope, such as terms and semesters, libraries, classrooms, courses, lessons, exams, grading, timetables, curricula, learning objectives, campuses, academic forms and norms in writing, disciplinary divisions and subdivisions, textbooks, rules and disciplinary procedures, avoidance of plagiarism, homework, degrees, award ceremonies and a massive range of other big and small inventions and technologies that have nothing whatsoever to do with learning.

Nothing at all.

All are contingent. They are simply a reaction to barriers and limitations that made good sense while those barriers existed. Every one of them is up for question. We need to imagine a world in which any or all of these constraints can be torn down. That is why we need to think about different social forms, that is why we continue to build the Landing, that is why we continue to explore the ways that learning is evolving outside the ivory tower, that is why we are trying to increase learner control in our courses (even if we cannot yet rid ourselves of all their constraints), and that is why we are exploring alternative and open forms of accreditation. It is not just about doing what we have always done in slightly better, more efficient ways. Ultimately, it is about expanding the horizons of education itself. Education is not about courses, awards, classes and power hierarchies. Education is about learning. More accurately, it is about technologies of learning – methods, tools, processes, procedures and techniques. These are all inventions, and inventions can be superseded and improved. Outside formal institutions, this has already begun to happen. It is time we in universities caught up.


Dron, J., & Anderson, T. (2014). Teaching crowds: social media and distance learning. Athabasca: AU Press. 

Kauffman, S. (2000). Investigations (Kindle ed.). New York: Oxford University Press. 

Kelly, K. (2010). What Technology Wants (Kindle ed.). New York: Viking. 

Shirky, C. (2003). A Group Is Its Own Worst Enemy. Retrieved from




Time to change education again: let's not make the same mistakes this time round

We might as well start with exams

In case anyone missed it, one of countless examples of mass cheating in exams is being reported quite widely, such as at

The videos are stunning (Chrome and Firefox users – look for the little shield or similar icon somewhere in or near your browser’s address field to unblock the video. IE users will probably have a bar appearing in the browser asking if you want to trust the site – you do. Opera, Konqueror and Safari users should be able to see the video right away), e.g.:

As my regular readers will know, my opinions of traditional sit-down, invigilated, written exams could not be much lower. Sitting in a high-stress environment, unable to communicate with anyone else, unable to refer to books or the Internet, with enormous pressure to perform in a fixed period to do someone else’s bidding, in an atmosphere of intense powerlessness, typically using a technology you rarely encounter anywhere else (pencil and paper), knowing your whole future depends on what you do in the next 3 hours, is a relatively unusual situation to find yourself in outside an exam hall. It is fair enough for some skills – journalism, for example, very occasionally leaves you in similar conditions. But, if it actually is an authentic skill needed for a particular field, then it should be explicitly taught and, if we are serious about it, it should probably be examined under truly authentic conditions (e.g. for a journalist, in a hotel room, cafe, press room, or trench). This is seldom done. It is not surprising, therefore, that exams are an extremely poor indicator of competence and an even worse indicator of teaching effectiveness. By and large, they assess things that we do not teach.

If that were all, I might not be so upset with the idea – it would just be weird and ineffective. However, exams are not just inefficient in a system designed to teach, they are positively antagonistic to learning. This is an incredibly wasteful tragedy of the highest order. Among the most notable of the many ways that they oppose teaching are that:

  • they shift the locus of control from the learner to the examiner
  • they shift the focus of attention from the activity to the accreditation
  • they typically punish cooperation and collaboration
  • they typically focus on content rather than performance
  • they typically reward conformity and punish creativity
  • they make punishments or rewards the reasons for performing, rather than the love of the subject
  • they are unfair – they reward exam skills more than subject skills.

In short, the vast majority of unseen written exams are deeply demotivating (naysayers, see footnote), distract attention away from learning, and fail to discriminate effectively or fairly. They make the whole process of learning inefficient, not just in the wasted time and energy involved surrounding the examination itself, but in (at the very least) doubling the teaching effort needed just to overcome their ill effects. Moreover, especially in the sciences and technologies, they have a strong tendency to reinforce and encourage ridiculous content-oriented ways of teaching that map some abstract notion of what a subject is concerned with to exercises that relate to that abstract model, rather than to applied practices, problem solving and creative synthesis – i.e. the things that really matter.  The shortest path for an exam-oriented course is usually bad teaching and it takes real creativity and a strong act of will to do otherwise. Professional bodies are at least partly culpable for such atrocities.

There is one and only one justification for 99% of unseen written exams that makes any sense at all, which is that it allows us relatively easily and with some degree of assurance (if very expensively, especially given the harmful effects on learning) to determine that the learner receiving accreditation is the one that has learned. It’s not the only way, but it is one of them. That sounds reasonable enough. However, as examples like this show in very sharp relief, exams are not particularly good at that either. If you create a technology that has the single purpose of preventing cheating, then cheats (bearing in mind that the only thing we have deliberately and single-mindedly taught them from start to finish is that the single purpose of everything they do is to pass an exam) will simply find better ways to cheat – and they do so, in spades. There is a whole industry dedicated to helping people to cheat in exams, and it evolves at least as fast as the technologies that we use to prevent it. At least twenty percent of students in North America admit to having cheated in exams at some point in the last year. Some studies show much higher rates overall – 58% of high school students in Canada, for example. It is hard to think of a more damning indictment of a broken system than this. The problem is likely even worse in other regions of the world. For instance, Davis et al (2009) reckon a whopping 83% of Chinese and 70% of Russian schoolkids cheat on exams. Let me repeat that: only 17% of Chinese schoolkids claim never to have cheated in an exam. See a previous post of mine for some intriguing examples of how that happens. When something that most people believe to be wrong is so deeply endemic, it is time to rethink the whole thing. No amount of patching over and tweaking at the edges is going to fix this.

But it’s not just exams

This is part of a much broader problem, and it is a really simple and obvious one: if you teach people that accreditation rather than learning is the purpose of education, especially if such accreditation makes a massive difference to what kind and quality of life they might have as a result of having or not having it, then it is perfectly reasonable that they should find better ways of achieving accreditation, rather than better ways of learning. Even most of our ‘best’ students, the ones that put in some of the hardest work, tend to be focused on the grades first and foremost, because that is our implicit and/or explicit subtext. To my shame, I’m as guilty as anyone of having used grades to coerce: I have been known to annoy my students with a little song that includes the lines ‘If a good mark is what you seek, blog, blog, blog, every week’. Even if we assume that students will not cheat (and, on the whole, mature students like those that predominate at Athabasca U do not cheat, giving the lie to the nonsense some have tried to promote about distance education leading to more cheating), the system still challenges teachers to come up with ways of constructively aligning assessment and learning, so that assessment actually contributes to rather than detracts from learning. With skill and ingenuity, it can be done, but it is hard work and an uphill struggle. We really shouldn’t have to be doing that in the first place, because learning is something that all humans do naturally and extremely willingly when not pressured to do so. We don’t need to be forced to do what we love to do. We love the challenge, the social value, the control it brings. In fact, forcing us to do things that we love always takes away some or all of the love we feel for them. That’s really sad. Educational systems make the rods that beat themselves.

Moving forwards a little

We can start with the simple things first. I think that there are ways to make exams much less harmful. My friend and colleague Richard Huntrods, for example, simply asks students to reflect about what they have done on his (open, flexible and learner-centred) course. The students know exactly what they will be asked to do in advance, so there is no fear of the unknown, and there is no need for frantic revising because, if they have done the work, they can be quite assured of knowing everything they need to know already. It is a bit odd not to be able to talk with others or refer to notes or the Web, but that’s about all that is inauthentic. This is a low-stress approach that demands nothing more than coming to an exam centre and writing about what they have done, which is an activity that actually contributes substantially to effective learning rather than detracting from it. It is constructively aligned in a quite exemplary way and would be part of any effective learning process anyway, albeit not at an exam centre.  It is still expensive, it still creates a bit more stress for students who have learned to fear exams, but it makes sense if we feel we don’t know our students well enough or we do not trust them enough to credit them for the work they have done. Of course, it demands a problem- or enquiry-based, student-centred pedagogy in the first place. This would not be effective for a textbook wraparound or other content-centric course. But then, we should not be writing those anyway as little is more certain to discourage a love of learning, a love of the subject, or a satisfying learning experience. 

There are plenty of exam-like things that can make sense, in the right kind of context, when approached with care: laboratory exercises, driving tests, and other experiences that closely resemble those of the practice being examined, for example, are quite sensible approaches to accreditation that are aligned with and can even be supportive of the learning process. There are also ways of doing exams that can markedly reduce the problems associated with them, such as allowing conversation and the use of the Internet, open-book papers that allow students to come and go as needed, questions that challenge students to creatively solve problems, exams that use questions created by the students themselves, oral exams that allow examiners to have a useful learning dialogue with examinees, and so on. There are different shades of grey, and not all are as awful as the worst, by any means. And there are other ways that tend to work better still – for instance, badges, portfolios, and many other approaches that allow us to demonstrate competence rather than compliance, that rely on us coming to know our students, and that allow multiple approaches and different skills to be celebrated.

And, of course, if we avoid exams altogether then we can do much more useful things, like involving students in creating the assignments; giving feedback instead of grades for work done; making the work relevant to student needs, allowing multiple paths and different evidence; giving badges for achievement, not to goad it; and so on. There’s a book or two in what we can do to limit the problems. Ultimately, though, this can only take us so far because, looming at the end of every learning path at an institution, is the accreditation. And therein lies the rub.

Moving forwards a lot

The central problem that we have to solve is not so much the exam itself as the unbreakable linkage of teaching and accreditation. Exams are just a symptom of a flawed system taken to its obvious and most absurd conclusion. But all forms of accreditation that become the purpose of learning are carts driving horses. I recognize and celebrate the value of authentic and meaningful accreditation, but there is no reason whatsoever that learning and accreditation should be two parts of the same system, let alone of the same process. If it were entirely clear that the purpose of taking a course (or any other learning activity – courses are another demon we need to think carefully about) were to learn, rather than to succeed in a test, then education would work a great deal better. We would actually be able to do things that support learning, rather than things that support credit scores; to give feedback that leads to improvement, rather than as a form of punishment or reward; to allow students to expand and explore pathways that diverge rather than converge; to get away from our needs and to concentrate on those of our students; to support people’s growth rather than to stunt it by setting false goals; to valorize creativity and ingenuity; to allow people to gain the skills they actually need rather than those we choose to teach; to empower them, rather than to become petty despots ourselves. And, in an entirely separate process of assessment that teachers may have little or nothing to do with at all, we could enable multiple ways to demonstrate learning that are entirely dissociated from the teaching process. Students might use evidence from learning activities we help them with as something to prove their competence, but our teaching would not be focused on that proof. It’s a crucial distinction that makes all the difference in the world.
This is not a revolutionary idea about credentialling – it’s exactly what many of the more successful and enlightened companies already do when hiring or promoting people: they look at the whole picture presented, take evidence from multiple sources, look at the things that matter in the context of application, and treat each individual as a human being with unique strengths, skills and weaknesses, given the evidence available. Credentials from institutions may be part of that right now, but there is no reason for that idea to persist and plenty of alternative ways of showing skills and knowledge that are becoming increasingly popular and significant, from social network recommendations to open badges to portfolios. In fact, we even have pockets of such processes well entrenched within universities. Traditional British PhDs, for example, while they are examined through the thesis and an oral exam (a challenging but flexible process), are examined on evidence that is completely unique to the individual student. Students may target the final assessment a bit, but the teaching itself is not much focused on that. Instead, it is on helping them to do what they want to do. And, of course, there are no grades involved at all – only feedback.


It’s going to be a long slow struggle to change the whole of the educational system across most of the world, especially as there’s a good portion of the world that would be delighted to have these kinds of problems in the first place: we need education before we can have cheating. But we do need to change this, and exams are a good place to start. The system changed once before, with far less research to support the change, and far weaker technologies and communication to enable it. And it changed recently: in the grand scheme of things, the first university exam of the kind we now recognize as almost universal happened the blink of an eye ago. The first written exam of the kind we use now (not counting a separate branch for the Chinese Civil Service that began a millennium before) was at the end of the 18th Century (the Cambridge Tripos), and it was only near the end of the 19th Century that written exams began to gain a serious foothold. This was within the lifetime of my grandparents. This is not a tradition steeped in history – it’s an invention that appeared long after the steam engine and only became significant as the internal combustion engine was born. I just hope institutions like ours are not heading back down the tunnel or standing still, because those heading into the light are going to succeed while those that stay in the shadows will at best become the laughing stock of the world.

On the subject of which, do watch the video. It is kind-of funny in a way, but the humour is very dark and deeply tragic. The absurdity makes me want to laugh but the reality of how this crazy system is wrecking people’s lives makes me want to cry. On balance, I am much more saddened and angered by it than amused. These are not bad people: this is a bad system. 


Davis, S., Drinan, P., and Gallant, T. (2009). Cheating in School: What We Know and What We Can Do. West Sussex, UK: Wiley-Blackwell.


I know some people will want to respond that the threat or reward of assessment is somehow motivating. If you are one of those, this postscript is for you. 

I understand what you are saying. That is what many of us were taught to believe and it is one way we justify persisting despite the evidence that it doesn’t work very well. I agree that it is motivating, after a fashion, very much like paying someone to do something you want them to do, or hitting them if they don’t. Very much indeed. You can create an association between a reward/punishment and some other activity that you want your subject to perform and, as long as that association persists, you might actually make them do it. Personally speaking, I find that quite offensive, not to mention only mildly effective at achieving its own limited ends, but each to their own. But notice how you have replaced the interest in the activity with an interest in the reward and/or the desire to avoid punishment. Countless research studies from several fields have pretty conclusively shown that both reward and punishment are strongly antagonistic to intrinsic motivation and, in many cases, actually destroy it altogether. So, you can make someone do something by destroying their love of doing it – good job. But that doesn’t make a lot of sense to me, especially as what they have learned is presumably meant to be of ongoing value and interest, to help them in their lives. It is my belief that, if you want to teach effectively, you should never make people learn anything – you should support them in doing so if that is what they want to do. It is good to encourage and enthuse them so that they want to do it and can see the value – that’s a useful teacher role – but it’s a whole different ballgame altogether to coerce them. Alas, it is very hard to avoid it altogether until we change education, and that’s one good reason (I hope you agree) we need to do that.

For further information, you could do worse than to read pretty much anything by Alfie Kohn. If you are seeking a broader range of in-depth academic work, try the Self Determination Theory site.

Defaults matter

I have often written about the subtle and not-so-subtle constraints of learning management systems (LMSs) that channel teaching down a limited number of paths, and so impose implicit pedagogies on us that may be highly counterproductive and dissuade us from teaching well – this paper is an early expression of my thoughts on the matter. I came across another example today.

When a teacher enters comments on assignments in Moodle (and in most LMSs), it is a one-time, one-way publication event. The student gets a notification and that’s it. While it is perfectly possible for a dialogue to continue via email or internal messaging, or to avoid having to use such a system altogether, or to overlay processes on top of it to soften the hard structure of the tool, the design of the software makes it quite clear this is not expected or normal. At best, it is treated as a separate process. The design of such an assignment submission system is entirely about delivering a final judgement. It is a tacit assertion of teacher power. The most we can do to subvert that in Moodle is to return an assignment for resubmission, but that carries its own meanings and, on resubmission, still returns us to the same single feedback box.

Defaults are very powerful things that profoundly shape how we behave (e.g. see here, here and here). Imagine how different the process would be if the comment box were, by default, part of a dialogue, inviting response from the student. Imagine how different it would be if the student could respond by submitting a new version (not replacing the old) or by posting amendments in a further submission, to keep going until it is just right, not as a process of replacement but of evolution and augmentation. You might think of this as being something like a journal submission system, where revisions are made in response to reviewers until the article is acceptable. But we could go further. What if it were treated as a debugging process, using approaches like those in Bugzilla or Github to track down issues and refine solutions until they were as good as they could be, incorporating feedback and help from students and others on or beyond the course? It seems to me that, if we are serious about assignments as a formative means of helping someone to learn (and we should be), that’s what we should be doing. There is really no excuse, ever, for a committed student to get less than 100% in the end. If students are committed and willing to persist until they have learned what they come here to learn, it is not ever the students’ failure when they achieve less than the best: it is the teachers’.
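To make the debugging metaphor a little more concrete, here is a rough sketch of the data model such a dialogue-by-default submission system might use. Everything here is hypothetical – these class and method names are mine for illustration, not Moodle’s actual API – but it shows the essential shift: versions accumulate rather than replace one another, feedback is a thread that anyone can extend, and the assignment closes only when the work is resolved, issue-tracker style.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class Comment:
    author: str  # teacher or student – both can post, by default
    text: str
    posted: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Submission:
    version: int
    content: str

@dataclass
class AssignmentThread:
    """An assignment modelled like an issue tracker: drafts accumulate,
    feedback is a dialogue, and the thread closes only when resolved."""
    submissions: List[Submission] = field(default_factory=list)
    comments: List[Comment] = field(default_factory=list)
    resolved: bool = False

    def submit(self, content: str) -> Submission:
        # New versions augment the history rather than replacing it
        sub = Submission(version=len(self.submissions) + 1, content=content)
        self.submissions.append(sub)
        return sub

    def comment(self, author: str, text: str) -> None:
        # The comment box is part of an ongoing dialogue, not a verdict
        self.comments.append(Comment(author, text))

    def resolve(self) -> None:
        # Closed only when the work is as good as it can be
        self.resolved = True
```

The design choice that matters is the default: nothing here ends the conversation when the teacher posts, and nothing privileges the teacher’s comments over the student’s.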

This is, of course, one of the motivations behind the Landing. In part we built this site to enable pedagogies like this that do not fit the moulds that LMSs ever-so-subtly press us into. The Landing has its own set of constraints and assumptions, but it is an alternative and complementary set, albeit one that is designed to be soft and malleable in many more ways than a standard LMS. The point, though, is not that any one system is better than any other but that all of them embed pedagogical and process assumptions, some of which are inherently incompatible.

The solution is, I think, not to build a one-size-fits-all system. Yes, we could easily enough modify Moodle to behave the way I suggest and in myriad other ways (e.g. I’d love to see dialogue available in every component, to allow student-controlled spaces wherever we need them, to allow students to add to their own courses, etc) but that doesn’t work either. The more we pack in, the softer the system becomes, and so the harder it is to operate it effectively. Greater flexibility always comes at a high price, in cognitive load, technical difficulty and combinatorial complexity. Moreover, the more we make it suit one group of people, the less well it suits others. This is the nature of monolithic systems.

There are a few existing ways to greatly reduce this problem, without massive reinvention and disruption. One is to disaggregate the pieces. We could build the LMS out of interoperable blocks so that we could, for instance, replace the standard submission system with a different one, without impacting other parts of the system. That was the goal of OKI and the now-defunct E-Framework although, in both cases, assembly was almost always a centralized IT management function and not available to those who most needed it – students and teachers. Neither has really made it to the mainstream. Sakai (an also-ran LMS that still persists) continues to use OKI technologies under the hood but the e-framework (a far better idea) seems dead in the water. These were both great ideas. There just wasn’t the will or the money, and competition from incumbents like Moodle and Blackboard was too strong. Other widget-based methods (e.g. using Wookie) offer more hope, because they do not demand significant retooling of existing systems, but they are currently far from in the ascendant and the promising EU TENCompetence project that was a leader behind this seems moribund, its site offline.

Another approach is to use modules/plugins/building blocks within an existing system. However, this can be difficult or impossible to manage in a manner that delivers control to the end user without at the same time making it difficult for those that do not want or need such control, because LMSs are monoliths that have to address the needs of many people. Not everyone needs a big toolkit and, for many, it would actively make things worse if they had one. Judicious use of templates can help with that, but the real problem is that one size does not fit all. Also, it locks you in to a particular platform, making evolution dependent on designers whose goals may not align with how you want to teach.

Bearing that in mind, another way to cope with the problem is to use multiple independent systems bound by interoperability standards – LTI, OpenBadges or TinCan, for example. With such standards, different learning platforms can become part of the same federated environment, sharing data, processing, learning paths and so on, allowing records to be kept centrally while enabling incompatible pedagogies to run independently within each system. That seems to me to be the most sensible option right now. It’s still more complex for all concerned than taking the easy path, and it increases management burden as well as replicating too much functionality for no particularly good reason. But sometimes the easy path is the wrong one, and diversity drives growth and improvement.
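To show what “sharing data across a federated environment” looks like in practice, here is a rough sketch (in Python) of the kind of machine-readable assertion the Open Badges standard uses to let a record of achievement travel between otherwise incompatible systems. This is a simplified illustration of the general shape of an Open Badges 2.0 assertion, not a complete or validated implementation of the specification – real assertions also involve hashed recipient identities, hosted BadgeClass documents and verification procedures – and the example URLs are placeholders.

```python
import json
from datetime import datetime, timezone

def make_badge_assertion(recipient_email: str, badge_url: str, evidence_url: str) -> str:
    """Build a simplified Open Badges 2.0-style assertion as JSON.

    Illustrative sketch only: fields are reduced to the minimum needed
    to show how an achievement becomes portable, system-neutral data.
    """
    assertion = {
        "@context": "https://w3id.org/openbadges/v2",
        "type": "Assertion",
        "recipient": {"type": "email", "identity": recipient_email, "hashed": False},
        "badge": badge_url,        # URL of the BadgeClass describing the achievement
        "evidence": evidence_url,  # e.g. a portfolio page showing the actual work
        "verification": {"type": "HostedBadge"},
        "issuedOn": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(assertion, indent=2)
```

Because the result is just standardized JSON hosted at stable URLs, any compliant system – an LMS, a portfolio, a social network profile – can display and verify the same credential without sharing a pedagogy, which is precisely the point of binding independent systems with interoperability standards rather than merging them.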


There is an ever-growing assortment of x-literacies. Here are just a few that have entered the realms of academic discourse:

  • Computer literacy
  • Internet literacy
  • Digital literacy
  • Information literacy
  • Network literacy
  • Technology literacy
  • Critical literacy
  • Health literacy
  • Ecological literacy
  • Systems literacy
  • Statistical literacy
  • New literacies
  • Multimedia literacy
  • Media literacy
  • Visual literacy
  • Music literacy
  • Spatial literacy
  • Physical literacy
  • Legal literacy
  • Scientific literacy
  • Transliteracy
  • Multiliteracy
  • Metamedia literacy

This list is a small subset of x-literacies: if there is some generic thing that people do that demands a set of skills, there is probably a literacy that someone has invented to match.  I’ll be arguing in this post that the majority of these x-literacies miss the point, because they focus on tools and technologies more than the reasons and contexts for using them. 

The confusion starts with the name. ‘Literacy’, literally, means the ability to read and write, so most other literacies are not. We might just as meaningfully talk about ‘multinumeracy’ or ‘digital numeracy’ as ‘multiliteracy’ or ‘digital literacy’ and, for some (e.g. ‘statistical literacy’), ‘numeracy’ would actually make far more sense. But that’s fine – words shift in meaning all the time and leave their origins behind. It is not too hard to see how the term might evolve, without bending the meaning too much, to relate to the ability to use not just text but any kind of symbol system. That sometimes makes sense – visual, media or musical literacy, for example, might benefit from this extension of meaning. But most of the literacies I list above have at best only a partial relationship to symbol systems. I think what really appeals to their inventors is that describing a set of skills as ‘x-literacy’ makes ‘x’ seem more important than just a set of skills. They bask in the reflected glory of reading and writing, which actually are awfully important. 

I’m OK with a bit of bigging up, though. The trouble is that prefixing ‘literacy’ with something else infects how we see the thing. It has certainly led to many silly educational initiatives with poorly defined goals and badly considered outcomes. This is because, all too often, it draws far too much attention to the technology and skills, and far too little to their application in a specific culture. This context-sensitive application (as I shall argue below) is actually what makes it ‘literacy’, as opposed to ‘skill’, and is in fact what makes literacy important.

So this is my rough-draft attempt to unravel the confusion so that at least I can understand it – it’s a bit of sense-making for me. Perhaps you will find it useful too. Some of this is not far off the underpinnings of the multiliteracy camp (albeit with notably different conclusions) and one of my main conclusions will be very similar to what many others have concluded too: that literacy spans many skills, tools and modalities, and is highly contextualized to a given culture at a given time. 

Culture and technology

When they pass a certain level of size and complexity, societies need more than language, ritual, stories, structures and laws passed by word of mouth (mostly things that demand physical co-presence) in order to function. They need tools to manage the complexity, to distribute cognition, replicate patterns, preserve structures, build new ones, pass ideas around, and to bind a dispersed society together. Since the invention of printing, most of the tools that play this role have been based on the technologies of text, which makes reading and writing fundamental to participation in a modern society and its numerous cultures and subcultures.

To be literate has, till recently, simply meant that you can do text. There may also be some suggestion of abilities to decipher, analyze, synthesize and appreciate: these are at least the product of literacy if not a part of it, and they are among the main reasons we need literacy. But the central point here is that people who are literate, in the traditional sense, are simply able to operate the technology of writing, whether as consumers, producers or both. The reason this is ‘literacy’, rather than simply a skillset like any other, is that text manipulation is a prerequisite for people to participate in their culture. It lets them draw on accumulated knowledge, add to it, and be able to operate the social and organizational machinery. At its most basic, this is a pragmatic need: from filling in forms and writing letters to reading signs, labels on food, news, books, contracts and so on. Beyond that, it is also a means to disseminate ideas, challenges, and creative thought in a society. It is furthermore a fundamental technology for learning, arguably second only to language itself in importance. More than that, it is a technology to think with and extend our thinking far beyond what we could manage without such assistance. It lets us offload and enhance our cognition. This remains true, despite multiple other media vying for our attention, most of which incorporate text as well as other forms. I could not do what I am doing right now without text because it is scaffolding and extending the ideas I started with. Other media and modalities can in some contexts achieve this end too and, for some purposes, might even do it better. But only text does it so sweepingly across multiple cultures, and nothing but text has such power and efficiency. In all but the most limited of cultures, text performs culture, and text makes culture: not all of it, by any means, but enough to matter more than most other learned technology skills.

Other ways to perform culture

There have for countless millennia been many other media and tools for cultural transmission and coordination, including many from way before the invention of writing. Paintings, drawings, sculpture, dance, music, rituals, maps, architecture, furniture, transport systems, sport, games, roads, numbers, icons, clothing, design, money, jewellery, weapons, decoration, litany, laws, myths, drama, boats, screwdrivers, door-knobs and many many more technologies serve (often amongst their other functions) as repositories of cognition, belief, structure and process. They are not just the signs of a culture: they play an active role in its embodiment and enactment. But text, maybe hand in hand with number, holds a special place because of its immense flexibility and ubiquitous application. Someone else can make roads or paintings or door-knobs and everyone else can benefit without needing such skills – this is one of the great benefits of distributed labour. But almost everyone needs skill in text, or at least needs to be close to someone with it. It is far from the only fruit but everyone needs it, just to participate in the cultures of a society.

Cultures and technologies

There are many senses in which we might consider technology and culture to be virtually synonymous. Both are, as Ursula Franklin puts it, ‘the way things are done around here’. Both concern process, structure and purpose. However, I think that there are many significant things about cultures  – attitudes, frames of mind, beliefs, ways of seeing, values, ideologies, for instance – that may be nurtured or enacted by technology, but that are quite distinct from it. Such things are not technological inventions – they are the consequence, precursors and shapers of inventions. Cultures may, however, be ostensively defined by technologies even if they are not functionally identical with them. Archeologists, sociologists and historians do it all the time. Things like language, clothing, architecture, tools, laws and so on are typically used to distinguish one culture from another.

One of the notable things about technologies is that they tend to evolve towards both increasing complexity and increasing specialization. This is a simple dynamic of the adjacent possible. The more we add, the more we are able to add, the more combinations and the more new possibilities that were unavailable to us before reveal themselves, so the more we diversify, subdivide, concatenate and invent. Thus it goes on ad infinitum (or at least ad singularum). Technologies tend to continuously change and evolve, in the absence of unusual forces or events that stop them. Of course, there are countless ways that technologies, notably in the form of religions, can slow this down or reverse it, as well as catastrophes that may be extrinsic or that may result from a particularly poor choice of technologies (over-cultivation of the land, development of oil-dependency, nuclear power, etc). There are also many technologies that play a stabilizing rather than a disruptive role (education systems, for example). Overall, however, viewed globally, in large cultures, the rate of technological change increases, with ever more rapid lifecycles and lifespans.  This means that skills in using technologies are increasingly deictic and increasingly short-lived or, if they survive, increasingly marginalized. In other words, they relate specifically to contexts outside of which they have different or no meaning, and those contexts keep changing thanks to the ever-expanding adjacent possible. Skills and techniques become redundant as contexts change and cultures evolve. That’s a slight over-simplification, but the broad pattern is relentless.

Towards a broader definition of ‘literacy’

Literal literacy is the ability to use a particular technology (text) to give us the ability to learn from, interact with and add to our various different cultures. The label implies more than just reading and writing: to be literate implies that, as a consequence of reading and writing, stuff has been and will be read – not just reading primers, but books, news, reports and other cultural artefacts. In the recent past, text was about the most significant way (after talking and showing) that cultural knowledge was disseminated. In recent decades, there have been plentiful other channels, including movies, radio, TV, websites, multimedia and so on. It was only natural that people would see the significance of this and begin to talk about different kinds of literacy, because these media were playing a very similar cultural role to reading and writing. The trouble is that, in doing so, the focus shifted from the cultural role to the technology itself. At its most absurd, it resulted in terms like ‘computer literacy’ that led to initiatives that were largely focused on building technical skills messily divorced from the cultures they were supporting and of little or no relevance to being an active  member of such a culture.

So here’s a tentative (re)definition of ‘literacy’ that restores the focus: literacy is the prerequisite set of technological skills needed for participation in a culture.  And, of course, we are all members of many cultures. There are other things that matter in a culture apart from technological skills, such as (for example) a playful spirit, honesty, caring for others, good judgement, curiosity, ethical sensibility, as well as an ability to interpret, synthesize, classify, analyze, remix, create and seek within the cultural context. These are probably more important foundations of most cultures than the tools and techniques used to enact them. But, though traits like these can certainly be nurtured, inculcated, encouraged, shown, practiced, learned and improved, they are not literacies. These are the values and valued traits in a culture, not the skills needed to be a part of it, though there is an intimate iterative relationship between the two. In passing, I think it is those traits and others like them that education is really aimed at developing: the rest, the literacy part, is transient and supportive. We don’t have values and propensities in order to achieve literacy. We learn most of them at least partly through the use of literacies, and literacies are there to support them and let them flourish, to provide mechanisms through which they can be exercised.

My suggestion is that, rather than defining a literacy in terms of its technologies, we should define it in terms of the particular culture it supports. If a culture exists, then there is a literacy for it, which comprises the set of skills needed to participate in that culture. There is literacy for being a Canadian, but there is equally literacy for being part of the learning technologies community (and for each of its many subcultures), being a researcher, a molecular scientist, a member of a family or of a local chess club. There is literacy for every culture we belong to. Some technological skillsets cross multiple cultures, and some are basic to them. The first of these is nearly always language. Most cultures, no matter how trivial and constrained, have their own vocabularies and acceptable/expected forms of language but, apart from cases where languages are actually a culturally distinguishing factor (e.g. many nations or tribes) they tend to inherit most of the language they use from a super-culture they are a part of. Reading and writing are equally obvious examples of skills that cross multiple cultures, as are numeracy skills. This is why they matter so much – they are foundational. Beyond that, different technologies and consequent skills may matter as much or more in different cultures. In a religious culture these might include the rules, rituals, principles, mythologies and artefacts that define the religion. In a city culture they could include knowledge of bylaws, transit systems, road layouts, map-reading, zones, and norms. In an academic culture it might relate to (for instance) methodologies, corpora, accepted tenets, writing conventions, dress standards, pedagogies, as well as the particular tools and methods relating to the subject matter. In combination, these skills are what makes someone in a given culture literate in that culture.

For instance

Is there such a thing as computer literacy? I’d say hardly at all. In fact, it makes little sense at all to think in those terms. It’s a bit like claiming there is pen literacy, table literacy or wall literacy. But there might be computing literacy, inasmuch as there may be a culture of computing. In fact, once upon a time, when dinosaurs roamed the earth and people who used computers had to program them themselves, it might have been a pretty important culture that anyone who wished to use computers for any purpose at all would need to at least dip their toes in and, most likely, become a part of. That culture is still very much there but it is no longer a prerequisite of owning a computer that one needs to be a part of it – computing culture is now the preserve of a relatively tiny band of geeks who are dwarfed in number by those that simply use computers. The average North American home has dozens of computers, but few of their users need to or want to be part of a computing culture. They just want to operate their TVs, drive their cars, use their phones, take photos, browse the Web, play the keyboard, etc. This is as it should be. Those in a computing culture are undoubtedly still an important tiny band who do important things that affect the rest of the world a lot, but they are just another twig at the end of a branch of the cultural tree, not the large stem that they once were. Within what is left of that computing culture there are a lot of overlapping computing sub-cultures: engineers, bricoleurs, hardware freaks, software specialists, interaction designers, server managers, programmers, object-oriented programmers, PHP enthusiasts, iOS/Mac users, Android/Windows users, big-endians, little-endians. Each sub-culture has its own literacy, its own language, its own technologies on which it is founded, as well as many shared commonalities and cross-cutting concerns.

Is there such a thing as ‘digital literacy’? Hardly. There is no significant distinctive thing that is digital culture, so there is no such thing as digital literacy. Again, like computing culture, once upon a time, there probably was such a thing and it might have mattered. I recall a point near the start of the 1990s, as we started to build web servers, connect Gopher servers, use email and participate in Usenet Newsgroups, at which it really did seem that we were participating in a new culture, with its own evolving values, its own technologies, its own methods, rules, and ethics. This has almost entirely evaporated now. That culture has in part been absorbed and diffused, in part branched into subcultures. Being ‘digital’ is no longer a way of defining a culture that we are a part of, no longer a way of being. Unless you are one of the very few that has not in the last decade or so bought a telephone, a TV, a washing machine, a stove, or one of countless other digital devices, you are ‘digital’. And, if there were such a thing as a digital culture, you would almost certainly be a part of the digital culture if you are reading this. This is too tenuous a thing – it has nothing to bind it apart from the use of digital devices that are almost entirely ubiquitous, at least in first world cultures, and that are too diverse to bind a culture together. There are, as a result, insufficient shared values to make it meaningful any more. It is, however, still possible to be anti-digital. Some digital luddites (I mean this non-pejoratively to refer to anyone who deliberately eschews digital technologies) do very much have cultures and probably have their own literacies. And there might well be literacies that relate to specific digital technologies and subsets of them. Twitter has a culture, for instance, that implies rules, norms, behaviours, language and methods that anyone participating should probably know. The same may be (and at some point certainly was) true of Facebook, but I think that is less obvious now.

Network culture is probably still a thing, but it is already fading in much the same way that digital culture has already faded, with ubiquity, diversity and specialization each taking bites out of it. We have seen network culture norms develop and spread. New vocabularies have been developed with subtle nuances (LOL, ROFL, LMFAO) that often branch into meanings that may only be deciphered by a few sub-cultures but that may subsequently spread into other cultures (TIL, RT, TLDR, LPT).   We have had to learn new skills, figuring out how to negotiate privacy, filter bubbles, trolls, griefing, effective tagging, filtering, sorting, unfriending and friending, and much much more, in order to participate in a social network culture, one that is (for now) still a bit distinct from other cultures. But that culture has already diversified, spread, diffused, and it is getting more diffuse every day. As it becomes larger and more diverse it ceases to be a relevant means of identifying people, and it ceases to be something we can identify with.

Much of the reason for network culture’s retreat is technological. It was enabled by an assembly of technologies and spawned new ones (norms, conventions, languages, etc) but, as they evolve, other technologies will render it irrelevant. Technologies often help to establish cultures and may even form their foundation but, as they and the cultures co-develop, the technologies that helped build those cultures stop being definitional of them. Partly this results from diffusion, as ways of thinking creep back into the broader super-culture and as more and more diverse cultures spread into it. Partly it is because new technologies take their place and diversify into niches. Partly it is because, rather than us learning to use technologies, they learn to use us. This sounds creepier than it really is: what I mean is that individual inventors see the adjacent possibles and grab them, so technologies change and, in many cases, become embedded, replacing our manual roles in them with pre-orchestrated equivalents. Take, for example, a trivial thing like emoticons, images built from arbitrary text characters, that take some of the role of phatic communication in text communication – like this :-). Emoticons are increasingly being replaced by standardized emojis, like this 🙂. Bizarrely, there are now social networks based on emoji that use no text at all. I am intrigued by the kind of culture that this will entail or support but the significant point here is that what we used to have to orchestrate ourselves is now orchestrated in the machine. Consequently, the context changes, problems are solved, and new problems emerge, often as a direct result of the solution. Like, how on earth do you communicate effectively with nothing but emojis?

Where do we go from here? 

Rather than constantly sub-divide literacies into ever more absurdly-named niches named for the tools to which they relate, or attempt to find bridging competences or values that underlie them and call those multiliteracies (or whatever), I propose that we should think of a literacy as being a highly situated set of skills that enable us to play a role as an operator in any given social machine, as creators and/or consumers of a culture – any culture and every culture. The specificity we choose should be determined by the culture that interests us, not by any predetermined formula. Each subculture has its own language, tools, methods, and signs, and each comes with a set of shared (often contested) attitudes, beliefs, values and passions, that both drive and are driven by the technologies they use. As a result, each has its own history, that branches from the histories of other subcultures, helping to make it more distinct. This chain of path dependencies helps to reinforce a culture and emphasize its differences. It can also lead to its demise.

In most if not all cases, literacy is an assembly of skills and techniques, not a single skill. ‘Literacy’ is thus simply a label for the essential skills and techniques needed to actively participate in a given culture. Such a culture may be big or small. It may span millennia or centuries but it may span only decades, years or (maybe) months or even weeks or days. It may span continents or exist only in a single room. I have, for example, been involved with courses, workshops and conferences that have evolved their own fleeting cultures, or at least something prototypical of one. In my former job I shared an office with a set of colleagues that developed a slightly different culture from that of the office next door. Of course, the vast majority of our culture was shared because we performed similar roles in the same department in the same organization, the same country, the same field, the same language, the same ethos. But there were differences that might, in some contexts and for some purposes, be important. For most contexts, they were probably not.

Researching literacies 

Assuming that we know what culture we are looking at, identifying literacy in any given culture is simply (well…simply-ish) a question of looking at the technologies that are used in that culture. While technology use is far from a complete definition of a culture, what makes it distinct from another may be described in terms of its technologies, including its rules, tools, methods, language, techniques, practices, standards and structures. This seems a straightforward way of thinking about it, if a little circular: we identify cultures by their technology uses, and define literacy by technology use in a culture. I don’t think this apparent circularity is a major issue, however, as this is an iterative process of discovery: we may start with coarse differentiators that distinguish one culture from another but, as we examine them more closely, will almost certainly find others, or find further differentiators that indicate subcultures. A range of methods and methodologies may be used here, from grounded theory to ethnography, from discourse analysis to Delphi methods, simple observation, questionnaires, interviews, focus groups, and so on. If we want to know about literacy in a culture, we have to discover what technologies are foundational in that culture.

Most of the cultures we belong to are subcultures of some other or others, while others straddle borders between different and otherwise potentially unrelated cultures.  Some skills that partially constitute a given literacy will cross many other cultural boundaries. Almost all will involve language, most will involve reading and writing, many will involve number, lots will involve visual expression, quite a few will involve more or less specific skills using machines (particularly software running on computers, some of which may be common). The ability to create will usually trump the ability to consume although, in some cultures, prosumption may be a defining or overwhelmingly common characteristic (those that emerge in social networks, for instance).

This all implies that a first concern when researching literacy for a given culture, is to identify that culture in the first place, and decide why it is of interest. While this may in some cases be obvious, there may often be subcultures and cross-cultural concerns that could make it more complex to define. One way to help separate out different cultures is to look at the skills, terminology, technologies, implicit and explicit rules, norms, and patterns of technology use in the subset of people that we are looking at. If there are patterns of differences, then there is a good chance that we have identified a cultural divide of some kind. A little more easily, we can also look both at why people are excluded from a culture, and seek to discover the things people need to learn to become a part of it – to look at the things that distinguish an outsider from an insider and how people transition from one to the other.

For example, the literacy for the culture of a country is almost entirely defined by invention. Countries are technologies, first and foremost. They have legislated (if often disputed) borders and boundaries, laws, norms, language, ways of doing things, patterns, establishments, and institutions that are almost entirely enshrined in technology. It is dead easy to spot this particular culture and mostly simple enough to figure out who is not in it and, normally, what they need to do to become a part of it. To be literate in the context of a country is to have the tools to be able to know and to actively interact with the technologies that define it. To give a simple example, although it is quite possible to be Canadian with only a limited grasp of English and/or French, part of what it means to be literate in Canadian culture is to speak one or (ideally) both languages. Other languages are a bonus, but those two are foundational. It is also possible to see similar patterns in religious cultures, academic cultures, sports cultures, sailing cultures and so on. We can see it in subcultures – for example, goths and hipsters are easily identified by a set of technologies that they use and create, because many of them are visible and definitional.  It gets trickier once we try to find subcultures of such easily identified sets but, on the whole, different technologies mark different cultures.

What makes all this technical detail worth knowing is not that different sets of people use different tools but that there are consequences of doing so. Technologies have a deep impact on attitudes, values, beliefs and relationships between people. In turn these values and beliefs equally impact the technologies that are used, developed, and valued. This is what matters and this is what is worth investigating. This is the kind of knowledge that is needed in order to effect change, whether to improve literacy within a culture or to change the culture itself. For example, imagine a university that runs on highly prescriptive processes and a reward structure based on awards for performance. You may not have to look far to find an example. Such a university might be dysfunctional on many counts, either because of lack of literacy in the technologies or because the technologies themselves are poorly considered (or both). One way to improve this would be to ensure that all its members are able to operate the processes and gain awards. This would be to improve literacy within the culture and would, consequently, reinforce it and sustain it. This might be very bad news if the surrounding context changes, making it significantly harder to adapt and change to new demands, but it would be an improvement by some measures. Another, not necessarily conflicting, approach would be to change or eliminate some of the processes, and get rid of or change the nature of rewards for performance: to modify the machinery that drives the culture. This would change the culture and thus change the literacy needed to operate within it. It might do unexpected things, especially as the existing attitudes and values may be at odds with the new culture: people within it would be literate in things that are not relevant or useful any more, while not having literacy needed to operate the new tools and structures. 
Much existing work surrounding x-literacies fails to clearly make this crucial distinction. By focusing largely on the technological requirements and ignoring the culture, we may reinforce things that are useless, redundant or possibly harmful. For instance, multimedia literacy might be great, sure. But for what and for whom? And in what forms? Different skillsets are needed in different contexts, and will have different value in different cultures.

To conclude

I have proposed that we should define literacy as the skills needed to operate the technologies that underpin a particular culture. While some of those skills are common to many cultures, the precise set and the form they take is likely different in almost every culture, and cultures evolve all the time so no literacy is forever. I think this is a potentially useful perspective.

We cannot sensibly define a set of skills or propensities without reference to the culture that they support, and we should expect differences in literacies both between different cultures and across time and space in any given culture. We can ask meaningful questions about the literacy of (say) people who use Twitter for learning and research as opposed to that needed by people who only use Twitter to stay in touch with one another. We can look at different literacies for people who are Canadian, people who are in schools, people of a particular religion, people who like a particular sport, people who research learning technologies, people in a particular office, people who live in Edmonton, not to mention their intersections and their subsets. By looking at literacy as simply a set of skills needed for a given culture we can gain large insights into the nature of that culture and its values. As a result, we can start to think more carefully about which skills are important, whether we want to simply support the acquisition of those skills, or whether we want to transform the culture itself.

This is just my little bit of sense making. I have very probably trodden territory that is very familiar to a lot of people who research such things with more rigour, and I doubt very much that any of it is at all original. But I have been bothered by this issue for a while and it now seems a little clearer to me what I think about this. I hope it has encouraged you to think about what you think too. Feel free to share your thoughts in the comment box!

Researching things that don't exist

As the end of my sabbatical is approaching fast, I am still tinkering with a research methodology based on tinkering (or the synonymous bricolage, to make it sound more academic). Tinkering is an approach to design that involves making things out of what we find around us, rather than following an engineered, designed process. This is relatively seldom seen as a valid approach to design (though there are strong arguments to be made for it), let alone to research, though it underpins much invention and discovery. Tinkering is, by definition, a step into the unknown, and research is generally concerned with knowing the unknown (or at least clarifying, confirming or denying the partly- or tentatively-known). This is not a direct path, however.

Research can take many forms but, typically and I think essentially, the sort that we do in academia is a process of discovery, rather than one of invention. This is there in the name – ‘recherche’ (the origin of the term) means to go about seeking, which implies there is something to be found. The word ‘discovery’ suggests that there is something that exists that can be discovered, whereas inventions, by definition, do not exist, so they are never exactly discovered as such.

While we can seldom substitute ‘invention’ for ‘discovery’, the borders are blurry. Did Maxwell discover his equations or did he invent them? What he discovered was something about the order of the universe that his (invented) equations express, but the equations formed an essential and inextricable part of that discovery. R&D labs get around the problem by simply using the two terms together, so that you know they are doing both. The distinction is similarly blurry in art: an artwork is normally not, at least in a traditional sense, research because, for most art, it is a form of invention rather than discovery. But sculptors often talk of discovering a form in stone or wood. And, even for the most mundane of paintings or drawings, artists are in a dialogue with their media and with what they have created, each stroke building on and being influenced by those that came before. A relative of mine recently ran an exhibition of works based on the forms suggested by blots of ink and water, which illustrates this in sharper relief than most, and I do rather like these paintings from Bradley Messer that follow the forms of wood grain. Such artists discover as much as they create and, like Maxwell’s equations, their art is an expression of their discovery, not the discovery itself, though the art is equally a means of making that discovery. Discovery is even more obvious in ‘found’ art such as that of some of the Dadaists, though the ‘art’ part of it is arguably still the invention, not the discovered object itself – Duchamp’s Fountain being the canonical example. And, as Dombois observes, there are some very important ways research and art can connect: research can inform art and be about art, and art can be about research, can support research and can arise from it. Dombois also believes art can be a means of performing research. Komar and Melamid’s ‘most-wanted paintings’ project is a good example of art not only being informed by research but itself being a form of research.
Their paintings resulted from research into what ‘the people’ wanted in their paintings. The paintings themselves challenge what collective taste means, and the value of it, changing how we know and make use of such information. And the artwork itself is the research, of which the paintings are just a part. 

Inventions (including art works) use discoveries and, from our inventions, we can make discoveries (including discoveries about our inventions). Invention makes it possible to make novel discovery, but the research is that discovery, not the inventions that lead to it. Research perceived as invention means discovering not what is there but what is not there, which is a little bizarre. More accurately, perhaps, it is seeking to discover what is latently there. It is about discovering possible futures. But even this is a bit strange, inasmuch as latent possibilities are, in many cases, infinite. I don’t think it counts as discovery if you are picking a few pieces from a limitless range of possibilities. It is creation that depends entirely on what you put into it, not on something that can be discovered in that infinity. But, perhaps, the discovery of patterns and regularities in that infinite potential palette is the research. This is because those infinite possibilities are maybe not as infinite as they seem. They are at the very least constrained by what came before, as well as by a wide range of structural constraints that we impose, or have imposed upon us. What is nice about tinkering is that, because it is concerned with using things around us, the forms we work on already have such patterns and constraints. 

Tinkering is concerned with exploring the adjacent possible. It is about looking at the things around you (which, in Internet space, means practically everywhere) and finding ways to put them together in new ways to do new things. These new things can then, themselves, create new adjacent possibles, and so it goes on. Beyond invention, tinkering is a tool for making new discoveries. It is a way of having a conversation with objects in which the tinker manipulates the objects and the objects in turn suggest ways of putting them together. It can inspire new ways of thinking. We discover what our creations reveal. Writing (such as this) is a classic example of this process. The process of writing is not one of recording thoughts so much as it is one of making new ones. We scaffold our thoughts with the words we write, pulling ourselves up by our own bootstraps as we do so in order to build further thoughts and connections.

The construction of all technologies works the same way, though it is often hidden behind walls of abstraction and deliberate design. If, rather than design-then-build, we simply tinker, then the abstraction falls away. The paths we go down are unknown and unknowable in advance, because the process of construction leads to new ideas, new concepts, new possibilities that only become visible as we build. Technologies are (all) tools to think with at least as much as they are tools to perform the tasks we build them for, and tinkering is perhaps the purest way of building them. And this is what makes tinkering a process of discovery. The focus is not on what we build, but on what we discover as a direct result of doing so – both process and product. Tinkering is a scaffold for discovery, not discovery itself. This begins to feel like something that could underpin a methodology.

With this in mind, here is an evolving set of considerations and guidelines for tinkering-based research that have occurred to me as I go along.

Exploring the possible

To be able to explore the adjacent possible, it is first necessary to explore the possible. In fact, it is necessary to be immersed in the possible. At a simple level, this is because the bigger your pile of junk, the more chances there are of finding interesting pieces and interesting combinations. But there are other sub-aspects of this that matter as much: the nature of the pile of junk, the skills to assemble the junk, and immersion in the problem space.

1) The pile of junk

Tinkering has to start with something – some tools, some pieces, some methods, some principles, some patterns. It is important that these are as diverse as possible, on the whole. If you just have a pile of engine parts then the chances are you are going to make another engine although, with a tinker-space containing sufficiently diverse patterns, you might make something else. There is a store near me that sells clocks, lights and other household objects made from bits of old electrical equipment and machinery, and it is wonderful. Similarly, some of the finest blues musicians can make infinite complexity out of just three chords and a (loosely) pentatonic scale. But having diverse objects, methods, patterns and principles certainly makes it easier than just having a subset of it all.

It is important that the majority of the junk is relatively complex and self-contained in itself – that it does something on its own, that it is already an assembly of something. Doing bricolage with nothing but raw materials is virtually impossible – they are too soft (in a technology sense). You have to start with something, otherwise the adjacent possible is way too far away and what is close is way too boring. The chances are that, unless you have a brilliant novel idea (which is a whole other territory and very rare) you will wind up making something that already exists and has probably been done better. This is still scrabbling around in the realms of the possible. The whole point is to start with something and assemble it with something else to make it better, in order to do something that has never been done before. That’s what makes it possible to discover new things. Of course, the complexity does not need to be in physical objects: you might have well-assembled theories, models, patterns, belief systems, aesthetic sensibilities and so on that could be and probably will be part of the assembly. And, since we are not just talking about physical objects but methods, principles, patterns etc, this means you need to immerse yourself in the process – to do it, read about it, talk about it, try it. 

2) The tools of assembly

It is not enough to have a great tinker-space full of bits and pieces. You need tools to assemble them. Not just physical tools, but conceptual tools, skills, abilities, etc. You can buy, make, beg, borrow or steal the tools, but skills to use them take time to develop. Of course, one of the time-honoured and useful ways to do that is to tinker, so this works pretty well. Again, this is about immersion. You cannot gain skills unless you apply them, reflect on the results, and apply them again, in a never-ending cycle.

There is a flip side to this though. If you get to be too skillful then you start to ignore things that you have discovered to be irrelevant, and irrelevant things aren’t always as irrelevant as they seem. They are only irrelevant to the path you have chosen to tread. Treading multiple paths is essential so, once you become too much of an expert, it is probably time to learn new skills. It is hard to know when you are too much of an expert. Often, the clue is that someone with no idea about the area suggests something and you laughingly tell them it cannot be done. Of course it can. This is technology. It’s about invention. You are just too smart to know it.

Being driven by your tools (including skills) is essential and a vital part of the methodology – it’s how the adjacent possible reveals itself. But it’s a balance. Sometimes you go past an adjacent possible on your way and then leave it so far behind that you forget it is there at all. It sometimes takes a beginner to see things that experts believe are not there. It can be done in all sorts of ways. For example, I know someone who, because he does not want to be trapped by his own expertise, constantly retunes his guitar to new tunings, partly to make discoveries through serendipity, partly to be a constant amateur. But, of course, a lot of his existing knowledge is reusable in the new context. You do not (and cannot) leave expertise behind when learning new things – you always bring your existing baggage. This is good – it’s more junk to play with. The trick is to have a ton of it and to keep on adding to it.

3) The problem space

While simply playing with pieces can get you to some interesting places, once you start to see the possibilities, tinkering soon becomes a problem-solving process and, as you follow a lead, the problem becomes more and more defined, almost always adding new problems with each one solved. Being immersed in a problem space is crucial, which tends to make tinkering a personal activity, not one that lends itself well to formally constructed groups. Scratching your own itch is a pretty good way to get started on the tinkering process because, having scratched one itch, it always leads to more or, at least, you notice other itches as you do so.

If you are scratching someone else’s itch then it can be too constraining. You are just solving a known problem, which seldom gets you far beyond the possible and, if it does, your obligations to the other person make it harder for you to follow the seam of gold that you have just discovered along the way that is really the point of it. It’s the unknown problems, the ones that only emerge as we cross the border of the adjacent possible, that matter here. Again, though, this is a balance. A little constraint can help to sustain a focus and doing something that is not your own idea can spark serendipitous ideas that turn out to be good.

Just because it is not really a team process doesn’t mean that other people are not important to it. Talking with others, exchanging ideas, gaining inspiration, receiving critique, seeing the world through different eyes – all this is very good. And it can also be great to work closely with a small number of others, particularly in pairs – XP (extreme programming) relies on this for its success. A small number of people need not be bogged down with process, schedules, targets, and other things that get in the way of effective tinkering; they can inspire one another, spot more solutions, and sustain motivation when the going gets rough.

The Structural Space

One of the points of bricolage is that it is structured from the bottom up, not the top down. Just because it is bottom-up structure does not mean it is not structure. This is a classic example of us shaping our tools and our tools shaping us (as McLuhan put it), or shaping our dwellings while our dwellings shape our lives (as Churchill put it a couple of decades earlier). Tinkering starts with forms that influence what we do with them, and what we do with them influences what we do next – our creations and discoveries become the raw material for further creations and discoveries. Though rejecting deliberate structured design processes, I have toyed with and tried things like prototyping, mock-ups and sketches of designs, but I have come to the opinion that they get in the way – they abstract the design too much. What matters in bricolage is picking up pieces and putting them together. Anything beyond vague ideas and principles is too top-down. You are no longer talking with the space but with a map of the space, which is not the same thing at all.


One of the big problems with tinkering is that it tends to lead to highly inefficient design, from an engineering perspective. Part of the reason for that is that path dependencies set in early on. A bad decision early can seriously constrain what you do later. One has only to look at our higher education systems, the result of massively distributed large scale tinkering over nearly a thousand years, to see the dangers here. The vast majority of what we continue to do today is mediaeval in origin and, in a lot of cases, has survived unscathed, albeit assembled with a few other things along the way.

Building from existing pieces can limit the damage – at least you don’t have to pull everything apart if it turns out that it is not a fruitful path. It is also very helpful to start with something like Lego, that is designed to be fitted together this way. Most of my work during my sabbatical has involved programming using the Elgg framework, which is very elegantly designed so that, as long as you follow the guidelines, it naturally forms into at least a decent outline structure. On the other hand, as I have found to my cost, it is easy to put enough work into something that it becomes very discouraging when you have to start again. As the example of educational systems shows, some blocks are so foundational and deeply linked with everything else that they affect everything that follows and simply cannot be removed without breaking everything.

Working together

Tinkering is quite hard to do in teams, apart from as sounding boards for reflection on a process already in motion. It is instructive to visit LegoLand to see how it can work, though. In the play spaces of LegoLand one sees kids (and more than a few adults) working alone on building things, but they are doing so in a very social space. They talk about what they are doing, see what others are doing and, sometimes, put their bits of assemblies together, making bigger and more complex artefacts. We can see similar processes at work in GitHub, a site where programmers, often working alone, post projects that others can fork and, through pull requests, return in modified form to their originators or others, with or without knowing them or interacting with them in any other way. It’s a wonderful evolutionary tinker-space. If programs are reasonably modular, people can work on different pieces independently, and these can then be assembled and reassembled by others. Inspiration, support, patterns of thinking and problem solving, as well as code, flow through the system. The tinkering of others becomes a part of your own tinker-space. It’s a learning space – a space where people learn but also a space that learns. The fundamental social forms for tinkering are not traditional, purpose-driven, structured and scheduled teams (groups), but networks and, even more, sets of people connected by nothing but shared interest and a shared space in which to tinker.


As well as resulting in inefficient systems, tinkering is not easy to plan. At the start, one never knows much more than the broad goal (which may change, or may not even be there at all) and the next steps. You can build very big systems by tinkering (back to education again, but let’s go large on this and think of the whole of Gaia) but it is very hard to do so with a fixed purpose in mind and harder still to do so to a schedule. At best, you might be able to roughly identify the kind of task and look to historical data to get some statistical approximation of how long it might take for something useful to emerge.

A corollary of the difficulty of planning (indeed, of the fact that it is counter-productive to do so) is that it is very easy to be thrown off track. Other things, especially those that involve other people who rely on you, can very quickly divert the endeavour. At the very least, time has to be set aside to tinker and, come hell or high water, that time should be used. Tinkering often involves following tenuous threads and keeping many balls in the air at once (mixing metaphors is a good form of tinkering) so distractions are anathema to the effective tinkerer. That said, coming up for a breath of air can remind you of other items in the tinker-chest that may inspire or provoke new ways of assembling things. It is a balance.

Evolution, not design

Naive creationists have in the past suggested that the improbability of finding something as complex as even a watch, let alone the massively more complex mechanisms of the simplest of organisms, means that there must be an intelligent designer. This is sillier than silly. Evolution works by a ratchet, each adaptation providing the basis for the next, with some neat possibilities emerging from combinatorial complexity as well. Given enough time and a suitable mechanism, increasingly complex systems are not just possible but overwhelmingly probable. In fact, it would be vastly more difficult to explain their absence than their existence. But they are not the result of a plan. Likewise for tinkering with technologies. If you take two complex things and put them together, there is a better than fair chance that you will wind up with something more complex that probably does more than you imagined or intended when you stuck them together. And, though maybe there is a little less chance of disaster than in the random-ish recombinations of natural evolution, the potential for the unexpected increases with the complexity. Most unexpected things are not beneficial – the bugs in every large piece of software attest to that, as do most of my attempts at physical tinkering over the course of my lifetime. However, now and then, some can lead to more actual possibles. The adjacent possible is what might happen next but, in many cases, changes simply come with baggage. Gould calls these exaptations – they are not adaptations as such, but a side-effect or consequence of adaptation. Gould uses the example of the spandrels of San Marco to illustrate this point, showing how the structure of the cathedral of San Marco, with its dome sitting on rounded arches, unintentionally but usefully created spaces where they met that proved to be the perfect place to put images of saints – in fact, they seem made for them.
But they are not – the spaces are just a by-product of the design that were co-opted by the creators of the cathedral for a useful purpose. A lot of systems work that way. It is the nature of their assembly to create both constraints and affordances, with path dependencies and patterns set early on deeply defining later growth and change. Effective tinkering involves using such spandrels, and that means having to think about what you have built. Thinking deeply.
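The ratchet argument is easy to make concrete. The toy simulation below is entirely my own illustration (the bit-string ‘genome’ and the count-the-ones fitness rule are invented for the sketch, not taken from Gould or anyone else): flip one random bit at a time, keep any change that does not slip backwards, and complexity accumulates with no plan at all.

```python
import random

def ratchet(genome_bits=32, steps=2000, seed=1):
    """Toy evolutionary ratchet: flip one random bit at a time and keep
    the change only if fitness (here, simply the count of 1-bits) does
    not decrease. Each kept adaptation becomes the base for the next."""
    rng = random.Random(seed)
    genome = [0] * genome_bits
    for _ in range(steps):
        i = rng.randrange(genome_bits)
        candidate = genome[:]
        candidate[i] ^= 1  # a random, unplanned change
        if sum(candidate) >= sum(genome):  # the ratchet: never slip back
            genome = candidate
    return sum(genome)

# With no designer at all, fitness climbs steadily toward the maximum of 32.
print(ratchet())
```

Remove the ratchet condition and the walk goes nowhere: it is the keeping of what already works, not foresight, that makes complexity probable.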

The Reflection Space

Just tinkering can be fun but, to make it a useful research process, it should involve more than just invention. It should also involve discovery. It is essential, therefore, that the process is seen as one of reflective dialogue with the creations we make. Reflection is not just part of an iterative cycle – it is embedded deeply and inextricably throughout the process. Only if we are able to constructively think about what we are doing, as well as what we have done, can this generate ideas, models, principles and foundations for further development. It is part of the dialogue with the objects (physical, conceptual, etc.) that we produce and, perhaps even more importantly, it is the real research output of the tinkering process. Reflection is the point at which we discover rather than just invent. In part it is to think about meaning and consequence, in part to discover the inevitable exaptations, in part to spot the next adjacent possible. This is not a simple collaboration. Much of the time we argue with the objects we create – they want to be one way but we want them to be another and, from that tension, we co-create something new.

We need to build stories and rich pictures as much as we need to build technologies. Indeed, it doesn’t really matter that much if we fail to produce any useful artefact through tinkering, as long as the stories have value.  From those stories spin ideas, inspirations, and repeatable patterns. Stories allow us to critique what we have done and learn from it, to see it in a broader context and, perhaps, to discover different contexts where the ideas might apply. And, of course, these stories should be shared, whether with a few friends or the world, creating further feedback loops as well as spreading around what we have discovered.

Stories don’t have to be in words. Pictures are equally and often more useful and, often most useful of all, the interactions with our creations can tell a story too. This is obviously the case in things like games, Arduino projects or interactive site development but is just as true of making things like furniture, accessories and most of the things that can be made or enhanced with Sugru.

Here are two brief stories that I hope begin to reveal a little of what I mean.

A short illustrative story

Early in my sabbatical I wrote one Elgg plugin that, as it emerged, I was very pleased with, because it scratched an itch that I have had for a long time. It allowed anyone to tag anything, and for duplicate tags used by different people to be displayed as a tag cloud instead of the normal list of tags that comes with a post. This was an assembly of many ideas, and was a conversation with the Elgg framework, which provided a lot of the structure and form of what I wanted to achieve. In doing it, I was learning how to program in Elgg but, in shaping Elgg, I was also teaching it about the theories that I had developed over many years. If it had worked, it would have given me a chance to test those theories, and the results would probably have led to some refinements, but that was really a secondary phase of the research process and not the one that I was focusing on.

Before any other human being got to use the system, the research process was shaping and refining the ideas. With each stage of development I was making discoveries. A big one was the per-post tag cloud. My initial idea had simply been to allow people to tag one another’s posts. This would have been very useful in two main ways. Firstly, it would give people the chance to meaningfully bookmark things they had found interesting. Rather than the typical approach of putting bookmarks into organized hierarchies, tags could be used to apply faceted categorizations, allowing posts to cross hierarchical boundaries easily and enabling faceted classification of the things people found interesting. Secondly, the tags would be available to others, allowing social construction of an ontology-like thing, better search and a more organized site. Tags are already very useful things but, in Elgg, they are applied by post authors and there are not enough of them for strong patterns to develop on their own in any but quite large systems. One of the first things I realized was that this meant the same tag might be used for the same post more than once. It was hard to miss, in fact, because what I saw when I ran the program was multiple tags for each post – the system I had assembled was shouting at me. Having built a tag cloud system in the 1990s, before I even knew the word ‘tag’, let alone ‘tag cloud’, I was primed to spot the opportunity for a tag cloud, which is a neat way to give shape and meaning to a social space. Individually, tags categorize into binary categories. Collectively, they become fuzzy and scalar – an individual post can be more of one tag than another, not because some individual has decided so, but because a crowd has decided so. This is more than a folksonomy. It is a kind of collaborative recommender system, a means to help people recognize not just whether something is good or bad but in what ways it is good or bad.
Already, I was thinking of my PhD work, which involved fuzzy tags I called ‘qualities’ (e.g. ‘good for beginners’, ‘comprehensive’, ‘detailed’, etc.) that allowed users of my CoFIND system not just to categorize but to rate posts, on multiple pedagogical dimensions. Higher tag weight is an implicit proxy for saying that, in the context of what is described by this tag, the post has been recommended. As I write this (writing is great tinkering – this is the power of reflection) I realize that I could explicitly separate such tags from Elgg’s native tags, which might be a neat way to overcome the limitations of the system I wrote about 15 years ago, which was a good idea but very unusable. Anyway…
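The aggregation at the heart of the per-post tag cloud is simple enough to sketch. What follows is an illustrative Python sketch, not the plugin’s actual code (Elgg plugins are written in PHP), and the function name and data shapes are my own assumptions: duplicate tags from different people become weights, so a flat tag list becomes a fuzzy, scalar description of the post.

```python
from collections import Counter

def per_post_tag_cloud(tag_events):
    """Aggregate (user, tag) pairs applied to one post into weighted cloud entries.

    tag_events: iterable of (user_id, tag) tuples. The same tag applied by
    several users raises that tag's weight in the cloud.
    """
    # Count each tag at most once per user, so one user cannot inflate a weight
    unique_votes = {(user, tag.strip().lower()) for user, tag in tag_events}
    weights = Counter(tag for _user, tag in unique_votes)
    # Heaviest first, alphabetical within ties, for a stable cloud display
    return sorted(weights.items(), key=lambda kv: (-kv[1], kv[0]))

cloud = per_post_tag_cloud([
    ("alice", "good for beginners"),
    ("bob", "good for beginners"),
    ("carol", "comprehensive"),
    ("alice", "comprehensive"),
    ("bob", "good for beginners"),  # duplicate vote from bob, ignored
])
# cloud == [("comprehensive", 2), ("good for beginners", 2)]
```

The crowd, not any individual, decides how strongly each tag applies – which is what makes the result feel more like a collaborative recommender than a folksonomy.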

It worked like a dream, exactly as I had planned, up to the point that I tried to allow people to see the things they had tagged, which was pretty central to the idea and without which the whole thing was pretty pointless: it is highly improbable that individuals would see great value in tagging things unless they could use those tags to find and organize stuff on the site. As it turns out, the Elgg developers never thought tags might be used this way, so the owner of a tag is not recorded in the system. The person who tags a post is just assumed to be the owner of the post. I’m not a great Elgg developer (which is why I did not realise this till it was too late) but I do know the one cardinal rule – you never, ever, ever mess with the core code or the data model. There was nothing I could do except start again, almost completely from scratch. That was a lot of work – weeks of effort. It was not entirely wasted – I learned a lot in the process and that was the central purpose of it all. But it was very discouraging. Since then, as I have become more immersed in Elgg, my skills have improved. I think I can now see roughly how this could be made to work. The reason I know this is that I have been tinkering with other things and, in the process, found a lightweight way of using relationships to link individuals and objects that, in the ways that matter, can behave much like tags. Now that I have the germ of an idea about how to make this pedagogically powerful, hopefully I will have time to do that.
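The relationship-based workaround can be illustrated roughly like this. This is a hedged Python sketch of the general idea, not Elgg’s API (Elgg is PHP, and all names here are my own): storing each tagging act as a (subject, verb, object) relationship preserves the tagger, which is exactly what Elgg’s native tag model throws away.

```python
class RelationshipStore:
    """A rough sketch of tagger-aware tagging via relationships.

    Elgg's native tags do not record who applied them, which is what sank
    the first version of the plugin. Recording each tagging act as a
    (tagger, verb, post) triple keeps the tagger, so users can later find
    the things they have tagged. Illustrative structure only.
    """

    def __init__(self):
        self.triples = []  # (tagger_id, "tagged:<tag>", post_id)

    def tag(self, tagger_id, post_id, tag):
        triple = (tagger_id, f"tagged:{tag}", post_id)
        if triple not in self.triples:  # same user tags a post once per tag
            self.triples.append(triple)

    def posts_tagged_by(self, tagger_id, tag=None):
        """Everything a given user has tagged, optionally filtered by tag."""
        want = None if tag is None else f"tagged:{tag}"
        return [post for who, verb, post in self.triples
                if who == tagger_id and (want is None or verb == want)]
```

Because the triples live alongside, rather than inside, the core data model, nothing in Elgg’s core code needs to change – which is what makes the approach viable at all.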

Another illustrative story

One of my little sabbatical projects (which actually turned out to be about the biggest, and it’s not over yet) was to build an OpenBadge plugin. This was actually prompted by and written for someone else. I would not have thought of it as a good itch to scratch because I happen to know something about badges and something about learning and, from what I have seen, badges (as implemented so far) are at best of mixed value in learning. In the vast majority of instances in which I have seen them used, they can be as demotivating as they are motivating. Much of the time it is worse than that: they turn into extrinsic proxies that divert motivation away from learning almost entirely. They embed power structures and create divisions. From a learning perspective, they are a pretty bad idea. On the plus side, they are a very neat way to do credentials, which is great if that is what you are aiming for, opening up the potential for much more interesting separation of teaching and accreditation, diverse learning paths, and distributed learning, so I don’t hate them. In fact, I quite like them. But their pedagogical risks mean that I don’t love them enough to have even considered writing a plugin that implements them.

Despite reservations, I said I would do it. It didn’t seem like a big task because I reckoned I could just lightly modify one of a couple of existing (non-open) badge plugins that had already been written for Elgg.  I also happened to have some parts lying round – my pedagogical principles, the Elgg framework, the Mozilla OpenBadge standard documentation, various code snippets for implementing OpenBadges – that I could throw together. Putting these pieces together made me realize early on that social badging could be a good idea that might help overcome several of my objections to their usual implementations. Because of the nature of Elgg, the obvious way to build such a plugin would be such that anyone could make a badge, and anyone could award one, making use of Elgg’s native fine-grained bottom-up permissions. This meant that the usual power relationships implied in badging would not be such a problem. This was an interesting start.

Because Elgg has no roles in its design (apart from a single admin role for the site builder and manager), and so no explicit teaching roles, this could have been tricky from a trust perspective – although its network features would mean you could trust awards by people you know, how would you trust an award from someone you don’t know and who is not playing a traditional teacher role in a power hierarchy? Even with the native Elgg option to ‘recommend’ a badge (so more people could assert its validity) this could become chaotic. But my principles told me that teacher control is a bad thing, so I was not about to add a teacher role.

After tossing this idea around for a few minutes, I came up with the idea of inheritable badges – in other words, a badge could be configured so that you could only award it if you had received it yourself. In an instant, this began to look very plausible. If you could trace the badge to someone you trust (e.g. a teacher, a friend or someone you know is trustworthy), which is exactly what Elgg would make possible by default, then you could trust anyone else who had awarded the badge to at least have the competence that the badge signifies, and so be more likely to be able to accurately recognize it in someone else. This was neat – it meant that accreditation could be distributed across a network of strangers (as in a MOOC) without the usual difficulties of the blind accrediting the blind that tend to afflict peer assessment methods in such contexts. Better still, this is a great way to signify and gain social capital, and to build deeper and richer bonds in a community of strangers. It is, I think, among the first scalable approaches to accreditation in a connectivist context, though I have not looked too deeply into the literature, so stand to be corrected.

Later, as I tinkered and became immersed in the problem, thinking how it would be used, I added a further option to let a badge creator specify a prerequisite award (any arbitrarily chosen badge) that must be held before a badge could be awarded. As well as allowing more flexibility than simple inheritance, this meant that you could introduce roles by the back door if you wished, by allowing someone to award a ‘teacher’ badge or similar, and only allowing people holding that badge to make awards of other badges. I then realized this was a generalization of the inheritance feature, so got rid of the inheritance feature and just added the option to make the prerequisite the current badge itself. It is worth noting that this was quite difficult to do – had I planned it from the start, it would have been trivial, but I had to unpick what I had done as well as build it afresh.
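The award check itself reduces to a few lines once inheritance is folded into prerequisites. The sketch below is illustrative Python (the plugin itself is PHP, and the dict structure is my assumption, not its actual data model); the point is that a badge whose prerequisite is itself reproduces the ‘inheritable’ behaviour as a special case.

```python
def can_award(badge, awarder_badges):
    """True if the would-be awarder holds this badge's prerequisite.

    A badge with no prerequisite can be awarded by anyone. Setting a
    badge's prerequisite to itself reproduces the 'inheritable'
    behaviour: only people who hold the badge can pass it on, so every
    award traces back to a trusted source.
    """
    prereq = badge.get("prerequisite")
    return prereq is None or prereq in awarder_badges

# Roles by the back door: only 'teacher' holders may award 'grader'
grader = {"name": "grader", "prerequisite": "teacher"}
# Viral, inheritable accreditation: 'helper' holders pass 'helper' on
helper = {"name": "helper", "prerequisite": "helper"}
```

Because the check only ever looks at what the awarder already holds, trust propagates along the chain of awards without any central teacher role.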

Social badging, peer assessment, scalable viral accreditation, social capital, motivation – this was looking cool. Furthermore, tinkering with an existing framework suggested other cool things. By default, it was a lot easier to build this if people could award badges to themselves. The logical next step would have been to prevent them from doing this but, as I saw it working, I realised self-badging was a very good idea! It bothered me for a moment that it might be a bit confusing, not to mention appear narcissistic, if people started awarding themselves badges. However, Elgg posts can be private, so people giving themselves badges would not have to show them to others. But they could, and that could be useful. They could make a learning contract with someone else or a group of people, and allow them to observe, thus not only improving motivation and honesty, but also building bonding social capital. So, people could set goals for themselves and award themselves badges when they accomplished them, and do so in a safe social context that they would be in control of. It might be useful in many self-directed learning contexts.

These were not ideas that simply flowed in my head from start to finish: they came about as a direct result of dialogue with what I was creating, and they could only have done so because I already had ideas and principles about things like portfolios, learning contracts and social learning floating around in my toolkit, ready to be assembled. I did include the admin option to turn off self-awarding at a system level, in case anyone disagreed with me, and because I could imagine contexts where it might get out of hand. I even (a little reluctantly) made it possible to limit badge awarding to admins only, so that there could be a ‘root’ badge or two that would provide the source of all accreditation and awarding. Even then, it could still be a far more social approach to accreditation than most, making expertise not just something that is awarded with an extrinsic badge, but also something that gives real power to its holder to play an important role in a learning community.

This is not exactly what my sponsors asked for: they wanted automation, so that an administrator could set some criteria and the system would automatically award badges when those criteria had been met. Although I reckon my social solution meets the demand for scalability that lay at the heart of that request, I realized that, with some effort, I could assemble all of this with a karma point plugin that I happened to have in my virtual toolshed, in order to enable automated badge awarding for things like posting blogs, etc. Because there was no obvious object for which such an award could be given, as it could relate to any arbitrary range of activities, I made the object providing evidence the user’s own profile. Again, this was just assembling what was there – it was an adjacent possible, so I took it. I could, had I not been lazy, have generated a page displaying all of the evidence, but I did not (though I still might – it is an adjacent possible that might be worth exploring). And so, of course, it is now possible to award a badge to a user, rather than for a specific post, which, though not normally a good idea from a motivation perspective, could have a range of uses, especially when assembled with the tabbed profile we built earlier (what I refer to in academic writings as a ‘context switcher’, which can be used as a highly flexible portfolio system).
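The automated, karma-driven awarding can be sketched along these lines. This is a hedged Python illustration of the logic, not the karma plugin’s or Elgg’s actual code; every field name here is hypothetical. The evidence points at the user’s profile because the points can come from any mix of activities, so there is no single post to attach the award to.

```python
def auto_award(user, badges, awards):
    """Award every badge whose karma threshold the user has reached.

    user: {"id": ..., "karma": ...}; badges may carry an optional
    "karma_threshold". Awards are idempotent: the same badge is never
    given to the same user twice. Illustrative structures only.
    """
    for badge in badges:
        threshold = badge.get("karma_threshold")
        if threshold is None or user["karma"] < threshold:
            continue
        # Skip badges this user already holds
        if any(a["badge"] == badge["name"] and a["to"] == user["id"]
               for a in awards):
            continue
        awards.append({
            "badge": badge["name"],
            "to": user["id"],
            # No single post to cite, so evidence is the profile itself
            "evidence": f"/profile/{user['id']}",
        })
    return awards
```

A routine run over all users after each karma-earning action would then give the administrator the hands-off behaviour my sponsors wanted, while the social machinery carries on alongside it.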

These are just a sample of many conversations I had with the tools and objects that were available to me. I influenced them, they influenced me. There were plenty of others – exaptations, like my discovery that the design I had opted for, which made awards and badges separate objects, meant that I had a way of making awards persistent and not allowing badge owners to sneakily change them afterwards, for example, thus enhancing trust in the system. Or that the Elgg permissions model made it very simple to reliably assert ownership, which is very important if you are going to distribute accreditation over multiple sites and systems. Or that the fact that it turned out to be an incredibly complex task to make it all work in an Elgg Group context was a blessing, because I therefore looked for alternatives, and found that the prerequisite functionality does the job at least as well, and much more elegantly. Or that the Elgg views system made it possible to fairly easily create OpenBadge assertions for use on other sites. The list goes on.

It was not all wonderful, though. Sometimes the conversation got weird. My plan to start with an existing badge plugin quickly bit the dust. It turns out that the badge plugins that were available were both of the kind I hate – they awarded badges to individuals, not for specific competences. To add insult to injury, they could be awarded only by the administrator, either automatically through accrued points or manually. This was exactly the kind of power structure that I wanted to get away from. From an architectural perspective, making these flawed plugins work the way I wished would have been much harder than writing the plugin from scratch. However, in the spirit of tinkering, I didn’t start completely from scratch. I looked around for a plugin that would do some of the difficult stuff for me. After playing with a few, I opted for the standard Elgg Files plugin, because that ought to have made light work of storing and organizing the badge images. In retrospect, maybe not the best plan, but it was a starting point. After a while I realized I had deleted or not used 90% of the original plugin, which was more effort than it was worth. I also got stuck in a path dependency again, when I wanted to add multiple prerequisites (i.e. you could specify more than one badge as a prerequisite): by that time, my ingenious single-prerequisite model was so firmly embedded that it would have taken more than a solid week to change it. I did not have the energy, or the time. And, relatedly, my limited Elgg skills and lack of forward planning meant that I did not always divide the code into neatly reusable chunks. This still continues to cause me trouble as I try to make the OpenBadge feature work. Reflecting on such issues is useful – I now know that multiple inheritance makes sense for this kind of system, which would not have occurred to me if I hadn’t built a system with a single-prerequisite data model.
And I have a better idea about what kind of modularity works best in an Elgg system.

Surfing the adjacent possible

Like all stories worthy of the name, my examples are highly selective and probably contain elements of fiction in some of the details of the process. Distance in time and space changes memories, so I cannot promise that everything happened in the order and manner presented here – it was certainly a lot more complicated, messy and detailed than I have described it to be. I think this fictionalizing is crucial, though. Objective reporting is exactly not what is needed in a bricolage process. It is the sense-making that matters, not religious adherence to standards of objectivity. What matters are the things we notice, the things we reflect on and the things we consider to be important. Those are the discoveries.

This is a brief and condensed set of ten of the main principles that I think matter in effective tinkering for research:

  1. do not design – just build
  2. start with pieces that are fully formed
  3. surround yourself with both quantity and diversity in tools, materials, methods, and perspectives
  4. dabble hard – gain skills, but be suspicious of expertise
  5. look for exaptations and surf the adjacent possible
  6. avoid schedules and goals, but make time and space for tinkering, and include time for daydreaming
  7. do not fear dismantling and starting afresh
  8. beware of teams, but cultivate networks: seek people, not processes
  9. talk with your creations and listen to what they have to say
  10. reflect, and tell stories about your reflections, especially to others

As I read these ideas it strikes me that this is the very antithesis of how research, at least in fields I work in, is normally done and that it would be extremely hard to get a grant for this. With a deliberate lack of process control, no clear budgets, no clear goals, this is not what grant awarders would normally relish. Whatever. It is still worth doing.

Tinkering as a research methodology offers a lot – it is a generative process of discovery that builds ideas and connections as much as it builds objects that are interesting or useful. It is far from being a random process but it is unpredictable. That is why it is interesting. I think that some aspects of it resemble systematic literature review: the discovery and selection of appropriate pieces to assemble, in particular, is something that can be systematized to some extent and, just as in a literature review, once you start with a few pieces, other pieces fall naturally into place. It is very closely related to design-based research and action research, with their formal cycles and iterative processes, although the iteration cycle in tinkering is far finer grained, it is not as rigid in its requirements, and it deliberately avoids the kind of abstractions that such methodologies thrive on. It might be a subspecies though. It definitely resembles and can benefit from soft systems methodologies, because it is the antithesis of hard systems design. Rich pictures have a useful role to play, in particular, though not at the early stages they are used in soft systems methods. And, unlike soft systems, the system isn’t the goal.

Finally, tinkering is not a solution to everything. It is a means of generating knowledge. On the whole, if the products are worthwhile, then they should probably feed into a better engineered system. Note, however, that this is not prototyping. Though products of tinkering may sometimes play the role of a prototype at a later stage in a product cycle, the point of the process is not to produce a working model of something yet to come. That would imply that we know what we are looking for and, to a large extent, how we will go about achieving it. The point is to make discoveries. 

This is not finished yet. It might just turn out to be a lazy way to do research or, perhaps, just another name for something that is already well pinned down. It certainly lacks rigour but, since the purpose is generative, I am not too concerned about that, as long as it works to produce new knowledge. I tinker on, still surfing the adjacent possible.

Three glimpses of a fascinating future

I’d normally post these three links as separate bookmarks but all three, which have popped up in the last few days, share a common theme that is worth noting:

In this, a neural network made out of the brain cells of a rat is trained to fly a flight simulator.

In this, signals are transmitted directly from one brain to another, using non-invasive technologies (well – if you can call a large cap covered in sensors and cables ‘non-invasive’!)

This reports on a DARPA neuromodulation/neuroaugmentation project to embed tiny electronic devices in brains to (amongst other things) cure brain diseases and conditions, augment brain function and interface with the outside world (including, presumably, other brains). This article contains an awesome paragraph:

“What makes all of this so much more interesting is the fact that, unlike all the other systems of the body, which tend to reject implants, the nervous system is incorporative—meaning it’s almost custom-designed to handle these technologies. In other words, the nervous system is like your desktop computer— as long as you have the right cables, you can hook up just about any peripheral device you want.”

I’m both hugely excited and deeply nervous about these developments and others like them. This is serious brain hacking. Artificial intelligence is nothing like as interesting as augmented intelligence and these experiments show different ways this is beginning to happen. It’s a glimpse into an awe-inspiring future where such things gain sophistication and ubiquity. The potential for brain cracking, manipulation, neuro-digital divides, identity breakdown, privacy intrusion, large-scale population monitoring and control, spying, mass-insanity and so on is huge and scary, as is the potential for things to go horribly wrong in so many new and extraordinary ways. But I would be one of the first to sign up for things like augmenting my feeble brain with the knowledge of billions (and maybe giving some of my knowledge back in return), getting to see the world through someone else’s eyes or even just being able to communicate instantly, silently and unambiguously with loved ones wherever they might be. This is transhumanity writ large, a cyborg future where anything might happen. Smartphones, televisions, the web, social media, all the visible trappings of our information and communication technologies that we know now, might very suddenly become amusing antiques, laughably quaint, redundant and irrelevant. A world wide web of humans and machines (biological and otherwise), making global consciousness (of a kind, at least) a reality. It is hard but fascinating to imagine what the future of learning and knowledge might be in the kind of super-connected scenario that this implies. At the very least, it would disrupt our educational systems beyond anything that has ever come before! From the huge to the trivial, everything would change. What would networked humans (not metaphorically, not through symbolic intermediaries, but literally, in real time) be like? What would it be like to be part of that network? 
In what new ways would we know one another? How would our attitudes to one another change? Where would our identities begin and end? What would happen if we connected our pets? What would be the effects of a large solar flare that wiped out electronic devices and communication once we had grown used to it all? Everything blurs, everything connects. So very, very cool. So very, very frightening.

The trouble with (most) courses

I recently did a session at the University of Brighton’s Learning and Teaching Conference on the trouble with modules – the name used for what are more commonly known as ‘courses’ in North America, ‘units’ in Australia and ‘papers’ in New Zealand. A couple of people who missed the session have asked for more detail than was shown in the slides that I posted from the session, so this post is a summary of some of the main points. It is mostly gleaned from my notes that accompanied the short presentation part, tidied up and expanded a bit for the blog. I have not gone into much detail about what would happen if we did away with courses altogether, nor described the results of any of the reflective activities that were involved in the original session, as I have no notes on those parts and not enough time to write them. It does contain a bunch of ideas and suggestions about how to overcome some of the innate weaknesses of courses, though, which I hope will have some value to somebody. If anything is unclear or arguable, I’m very happy to follow up via the comments on this post!

Why (most) courses are a bad idea

The taught university course as we know it today started out as nothing more than the study of a (single) book, in schools in pre-university times and in the early days of universities, nearly a thousand years ago. The master or lecturer would read the book and, perhaps, comment on it and discuss it with students. This made a lot of sense. Books were very expensive and rare objects, and so were scholars. It was by far the most efficient way to make use of a rival good (the teacher and/or the book) to reach as many people as possible. Whether or not it was the best way to learn, without it there would be no learning about or from the book at all. These efficiencies remained significant for the next 900 years or so after universities were invented (first in Bologna and, later, Paris, Oxford and the slow-moving flood that followed over the next few centuries, right up to the recent trend in MOOCs). The course slowly evolved into more subject-specific areas that often drew from many books and, later, papers, and the printing press made books slightly less of a luxury, but the general principle – that knowledge was thinly distributed and the most efficient way to make it available was one-to-many transmission in a physical room – continued to make sense. As universities grew, it was equally sensible that processes and architectures were designed to make this still more efficient. Timetables were used to schedule these scarce resources, lecture theatres designed to reach as many ears and eyes as possible, desks invented to take notes, blackboards invented to provide a source for them, written exams invented to make assessments easier to mark (the first were in 1789), and libraries and classification systems invented to store and retrieve books and periodicals.
And, of course, if students and teachers were not around, there was no point in scheduling classes, so courses naturally divided around the holidays of Christmas, Easter and during harvest time in the summer, when (perhaps – this is disputed) students were called back to work on farms. All of this made perfect sense and made the best use of limited means – perhaps the only means that could have worked at all. And this is what we have inherited, whether or not we observe Christian holidays, whether or not we have almost free access to a cornucopia of information on the web and mobile devices, whether or not we have sophisticated information systems that make scheduling and organization of resources more flexible, or tools to connect us with anyone, anywhere, any time around the world. Around it we have built innumerable structures – notions of course equivalence that are related to accreditation and assessment, replicability, resource allocations, pay structures, etc. – that have become very deeply embedded, not just within universities but in society as a whole. Universities have become gatekeepers that filter students as they come in and warrant their competencies as they leave, not just to become academics but to work in many occupations. And the unit of measurement is based around the course. Courses are so deeply embedded that, when people attempt educational reform, they are seldom even noticed, let alone questioned. If people want to make things better in education, they normally explicitly mean ‘better courses’. Even open and distance universities like Athabasca, which dumped prerequisites, the schedule and the traditional lecture/tutorial/seminar format, adhere to the broad pattern of course length (measured now in hours of study, like most of the rest of the world outside North America), fixed outcomes and assessments.
Likewise, companies unwisely create or purchase courses for their employees to go out and learn stuff, albeit usually with fewer institutional constraints on timing, accreditation and format. But there is no pedagogical reason whatsoever that it should be this way.

What this means

The trouble is that courses, at least as they have mostly evolved, are not pedagogically neutral technologies. This is pretty obvious to anyone who has ever created one. It is a completely insane idea that every subject can be taught in multiples of precisely the same period or requires the same amount of study as every other. Typically (varying from place to place but usually unvaryingly within a given institution) this means 10-15 weeks or some multiple of that, or 100-200 hours of student effort. Taught courses, as we know them in our institutions today, have objectives and/or outcomes, and assessments to match, which conspire to mean that the intent is that everyone learns exactly the same thing or skill, whether or not they already know it or need to know it. Courses therefore differentiate – you pass them or fail them. Maybe you pass or fail them well or badly. As an incidental peculiarity, the blame for failure to teach is transferred to the students – they fail, not their teachers. This has big implications for an individual’s sense of self-worth and for their ability to seek employment, and it impacts society (and individuals who suffer this process) deeply. Another consequence is that, thanks to the need for economies of scale and/or fitting things into timeslots or alongside other courses that might be similar, typically everyone is taught the same way on a given course, and taught the same things, whether or not it suits their needs, prior knowledge, interests and aspirations. While the notion of teaching to learning styles is palpable nonsense, there is no doubt that people have very different needs and preferences from one another, so parts of every course will bore or confuse some of their students some or all of the time, and nearly all will contain parts of little or no relevance to a learner’s needs. None of this makes any pedagogical sense whatsoever.
Bloom’s two-sigma problem (based on the finding that there is roughly a two-sigma difference between results for those taught in traditional classrooms and those taught one-to-one) is a difficult challenge to address because, quite apart from their innate peculiarities, these features of the typical pattern followed by courses lead to one extremely big elephant in the room: they are inherently demotivating.

Courses and motivation

People love to and need to learn, constantly and voraciously. It’s in our nature. If someone wants and/or needs to learn something, you have to do something pretty substantial to prevent them from doing so. Enter the taught course.

The first way that courses stand in the way of learning is, at first glance, relatively innocuous. The fixed nature and form of the course, combined with its length, necessarily mean that, for the vast majority of students, parts will be boring, parts will be irrelevant, and parts will be over-taxing. This means most students’ need for challenge at an attainable level will not be met, at least some of the time. It also means that course content, process, rules of conduct, expectations and methods are strongly determined by someone else, sapping away control. Self-determination theory, which has been validated countless times over several decades, makes it very clear that, unless people feel in control, are challenged with achievable goals and experience relatedness, they will not be intrinsically motivated, no matter what other factors motivate them. Though they often support relatedness (connection to something or someone beyond yourself), taught courses are, by and large, structured to reduce the other two of those three vital factors. It is no surprise, then, that teachers have to find ways to get around the lack of motivation engendered by the course format.

There are a few teachers, sadly, who positively relish the exercise of their power, who enjoy rewarding and punishing students, who like to apply rigid control over behaviour in the classroom, who take a kind of sick pleasure in watching students suffer, who make students do things ‘because it’s for their own good’. They need our pity and support, but should not be allowed to teach until they have overcome this sickness. Luckily, by far the majority of us do our best to inspire, to actively encourage students to reflect on and align their intrinsic hopes and desires with what we are teaching, to offer flexibility and control, to empower students, to nurture their creativity, and to give some attention to each student. That’s the pleasure most of us get from teaching.
We certainly don’t all succeed all of the time, even the best fail pretty regularly, and we could all improve, but at least we try. However, it’s an uphill battle.

This leads to the second and far more harmful effect of taught courses on motivation. Most of us who work in higher education are constrained by the nature of the course and its accreditation to apply extrinsic rewards and punishments in the form of grades, even though we know it is a truly terrible idea. The reasoning behind the use of grades as motivators is understandable: we can easily observe that extrinsic methods do, on the whole, work to some extent in the short term. Depending on the context, the effect can last from minutes to months. Indeed, behaviourists (who only ever did short-term studies) based a whole psychological movement on this idea. What is less obvious, and the most crucial structural disaster in the way the vast majority of courses are designed, is that extrinsic rewards and punishments invariably and predictably destroy any intrinsic motivation that people may already have, often irreparably. A big part of the reason is that they create a perceived locus of causality for a task or behaviour that is controlled by someone or something else, so it again comes back to an issue of control, but this time the effects are devastating, not just reducing motivation but actively militating against it. This crowding-out effect has been demonstrated over and over again in well-designed and hard-to-refute research studies for decades. In many cases, rewards and punishments don’t even achieve what they set out to do in the first place. For example, companies that offer performance-related bonuses typically get lower performance from their workers, and daycares that punish parents who are late picking up their children find that parents actually pick them up even later. Worse, once the damage is done, it is very hard, if not (sometimes) impossible, to entirely undo it. It’s as if the motivation pathways have been permanently short-circuited.
Worse still, how we are taught is often a major factor in determining how we learn, and we come to expect and (like addicts) even depend on extrinsic motivation to drive us. This is one of the reasons I sometimes describe my role as ‘un-teaching’ – there is often a lifetime of awful learning habits to undo before we can even start. 

If you are not convinced, read pretty much anything by Alfie Kohn, Edward Deci, or Richard Ryan, along with a few of the hundreds of papers they draw on. There are plenty of studies from the field of education that look at the effects of rewards and punishments and find them wanting, or worse.

Breaking the cycle

There are alternatives to typical institutional taught courses, some of them very common, others less so. The University of Brighton has a great program, the MSc/MA by Learning Objectives, in which students work with supervisors to develop a set of outcomes, a means of assessment, and a work plan to reach their goals. While there are a few time and process constraints here and there for practical reasons, they are not too onerous. Students on this program tend to pass it, not because its standards are low, but because everything is aligned with what they want and need to do. A few programs at Athabasca University have similarly flexible courses that act as a kind of catch-all to enable people to do things that matter to them. PhD programs, of the traditional variety used in the UK, have (or had – the course-based American model is sadly becoming more prevalent) no obligatory courses and are entirely customized to, and often by, the individual student, with nothing but a few processes to ensure students remain on track and supported. They can take from 2-10 years to complete. This length can be a problem, as our motivation usually changes over such a long time and extrinsic factors are often introduced that can affect it badly, but the general principle is a good one. Athabasca University’s challenge process makes it possible to completely separate accreditation from learning, which (almost) avoids the whole course problem altogether, though it unfortunately only works if you happen to have the precise set of competences provided by actual taught courses. Its self-paced undergraduate courses, though still markedly constrained by a notional equivalence to their paced brethren, free students from the tyranny of schedules, even if they do have other features that are overly limiting. PLAR/APEL processes that are common in institutions across the world separate learning from accreditation almost entirely.
And that’s not to mention a huge host of teach-yourself methods and resources, from Google Search to Wikipedia to the Khan Academy to Stack Exchange and hundreds of other fine online systems, that most of us use when we actually want and need to learn something. And, of course, there are books, which have the great benefit of allowing us to skip things, re-read things, look up references and so on, so our paths through them are seldom linear and always under our control – unless we are forced to read them because of a course.

But what about the run-of-the-mill?

Though there is much to be learned from existing methods that entirely or partially by-pass the harmful effects of taught courses, teachers in higher education operate under a set of ugly constraints that make it very difficult and often impossible for us to completely avoid their ill effects, especially when student numbers are large and things like professional standards bodies come into the picture. Until we achieve massive educational reform, which might allow us to provide multiple paths to achieving competence, that might separate learning from accreditation, that might be chunked in ways that suit the needs of learner and subject, we are mostly stuck with the offspring of a mediaeval system that has evolved to defend itself against change. Most of us have to grade things, we have to make use of learning objectives/outcomes, and we don’t have much control over course length. Often, especially in lower-level courses and/or where standards bodies are involved, we have little control over the competences that need to be attained, whether or not we are competent to teach them. Moreover, many of the most effective existing methods of teaching without courses are very resource-hungry. It would be great to apply the (UK-style) PhD process to all of our teaching but it is economically infeasible. PhDs are expensive for a very good reason – many of the economic and physical constraints that drove the development of courses in the first place have not gone away, even though some have been notably diminished. Given these issues, I will finish this post with a few general ideas, suggestions and patterns to help reduce the ill effects of courses without destroying the system of which they are a part.

Give control

Traditional teaching seems determined to take control away from learners, but we can do much to give it back. Amongst other things:

  • allow students to choose what they do and how they do it. For instance, I have a web development course that centres around a site that students build throughout the course, that is about something they choose and they care about, and a course process that encourages them to choose between (or discover for themselves or their peers) multiple resources and methods to learn the requisite skills along the way. It makes extensive use of peer support and encourages sharing of problems and solutions, so that students teach one another as a natural fall-out of the process. It uses reflection to support the process, and an assessment based on evidence (that the students select for themselves) of meeting specified learning outcomes. It’s far from perfect, and it does often cause problems (especially at first) for those who have learned dependence via our broken educational system, but it shows one way that learners can take the reins.
  • allow students to choose the learning outcomes. This is trickier to enact because of the rigid requirements we usually have to develop curricula and match them with those delivered elsewhere. However, if the outcomes we specify are not too specific, relating to broad competences, it is still possible to allow some flexibility to students to identify finer-grained outcomes that suit their needs and that are exemplars of the general overarching outcomes. I’ve found this approach easier to follow in graduate level courses in ill-defined subject areas – I don’t really have a way of doing this well for those that are constrained by disciplinary standards.
  • allow students to design their own assessments. This one is easier. Learning contracts are one way to do this, supported with scaffolding that allows students to develop their own plans for assessment. Similarly, we can ask them to provide evidence in a form that suits them (one of the best computing assignments I have ever seen was mostly done as poetry, and I once saw the ISO model of network management explained using Santa Claus’s elves). At the very least, we can offer alternative pre-written forms of assessment that students can choose between according to their preferences.
  • allow students to pick their own content. This is a trick I have used for several courses. I offer a menu of options that address the intended (broad) outcomes and negotiate which parts we/they will cover during the course. It takes a little more effort to prepare, but the payoff is large. For graduate level courses I sometimes encourage students to develop their own content that we all then use.
  • allow students to choose their own tools, media, platforms, etc. Where possible, students should not be limited in their choice of technologies needed to complete the course. This can be tricky where we are constrained by things like institutional platforms, but there are often ways to allow at least some flexibility (e.g. mobile-friendly versions, PDF and e-book formats, standard formats that allow the use of any editor or development tool, etc.).
  • allow students to pick the time and place. This is the default at Athabasca University for most courses, but can be trickier when there are timetables and constraints of working with others according to a schedule. Classroom flipping can help a bit, limiting what is done in the class to things that actually benefit from being somewhere with other people (feedback, dialogue, collaboration, problem-solving, etc), and leaving a lot to self-paced study. This is true online as well as in face-to-face teaching. Indeed, counter-intuitively, it is even one of the odd potential benefits of traditional lectures, inasmuch as they typically only take an hour of a student’s time once a week, between which students are free to learn as they please (not a completely serious point, but worth making because of the important and universally applicable lesson it reminds us of: that teaching behaviours have only a tangential relationship with learning behaviours).
  • allow students to control social interaction. I am a huge fan of learning with other people, but we all have different needs for engagement with others in our learning, and it doesn’t suit everyone equally all the time. Where possible, I try to build processes that let those who benefit from social interaction work with others, but that let those who prefer a different approach work alone, using evidence-based assessments rather than process-based ones. For instance, evidence can include help given to others or conversations with others, but can as easily come from individual work (unless social competences are on the menu for learning). I find it useful to build simple sharing (as opposed to dialogue) into the process so that even the least sociable of students share things and therefore support the learning of others.

Use better forms of extrinsic motivation

Extrinsic motivation is not all equally awful and some is barely distinguishable or even a part of intrinsic motivation. Extrinsic motivators lie on a spectrum from bad (externally imposed reward and punishment) to much better and more internally regulated varieties, such as:

  • doing things out of a sense of duty, guilt or obligation (introjected regulation) or, better,
  • doing things because they are perceived as worthwhile in themselves (identified regulation, e.g. losing weight) or, better still,
  • doing things because they are necessary steps to achieve something else we are really motivated to achieve (integrated regulation).

Deci and Ryan’s work on self-determination theory describes these distinctions in much more detail. There are plenty of ways to use this to our advantage. It can often, for instance, be useful to encourage reflection on a learning activity. This can be used to think about why we are doing something, how it relates to our needs and goals, and what it means to us. Reflection can kindle more effective forms of extrinsic motivation that are far less harmful than externally imposed rewards and punishments. It is also valuable to nurture community, so that students feel obligations to the team or to one another, and support one another when the going gets rougher. Also, seeing how others are motivated can inspire us to recognize similar motivations in ourselves. Shared reflections (e.g. via blogs) can be particularly valuable.

Grades are not always necessary. While getting rid of the need to summatively assess is seldom possible, we can often avoid the use of grades (pass/fail is a little better than a mark), and we can make it possible for students to keep at it, without grading, until it is right, thus reducing the chance of failure. My courses tend to have feedback opportunities scattered throughout, but I explicitly avoid giving any grades until the last possible moment. This can upset some students who have learned grade-dependence, so it is important that they are fully aware of the reasoning and intent, and that the feedback is good enough that they can judge for themselves how well they are doing (I don’t always get that bit right!). Of course, I am only suggesting that we lose the grades, not the useful feedback. Feedback is crucial to allowing students to feel in control – they need to know what they are doing well and what could be improved, and plentiful feedback can be hugely motivating, showing that other people care, contributing to a sense of achievement, and more. Good, descriptive feedback that focuses on the work (never the student) is a cornerstone of effective educational practice. Grades tell us little or nothing, while encouraging an extrinsic focus that is harmful to motivation.

Step outside the course

Making links beyond a single course can be very beneficial to motivation. The other day I attended an interesting presentation (at the same conference in which this post originated) by Norman Jackson, who talks about lifewide as opposed to lifelong learning, an idea that captures this principle well. Creating opportunities for students to engage in external activities like (for example) clubs, societies, geological digs, competitions, community work, conferences, charitable work, kickstarters, Wikipedia articles, coding camps and so on can fill in a lot of motivational gaps, making it easier to see the relevance of a course, to feed new ideas into a course, to gain a greater sense of personal relevance and responsibility for one’s own learning, and to expand on work done in a course in greater detail without the imposition of extrinsic motivation. Of course, students should be free to choose which of these they engage with and, better still, should find them for themselves. However, there is no harm in advertising such things, nor in designing courses that allow students to capitalize on learning from other activities within the course itself, through things such as projects, show-and-tell sessions, flexible discussions and so on. There are also often opportunities for doing things across multiple courses, using the outputs of one to feed another, or bringing together different skillsets for joint projects. Another way to reduce the harm slightly is to build multiple courses into a single overarching one, with lengths appropriate to the needs of the students and subject.

Build learning communities and spaces rather than courses

Given the wealth of potential resources and people’s time that are available for free on the Web (not to mention in libraries), there is often no need to provide much, if any, content (in the sense of stuff presenting subject matter). A couple of the most successful courses I have ever run have had no curriculum or content to speak of: just a set of broad outcomes, a very flexible and student-designed assessment, an approach to making use of the learning community, and a responsive process to make it all happen. The process can take a surprising amount of time to develop, as it is important both that it is understood well by the students (including how it is assessed, expectations, norms, etc.) and that it can be guaranteed to result in the intended outcomes (assuming these are not negotiated too). Getting that process and community right can be hard work, both in the design phase and (especially) during the course but, when it does go right, it is very rewarding. I have often learned as much if not more than my students on those courses, and they are the only courses I have ever run with more than a couple of students where I have had nothing but grade A students (moderated by external examiners as well as by peers). The massive enthusiasm and passion that results from a rich community of learners who are in control of their own learning has to be seen to be believed. The essence of the method is to let go just enough but no more: a teacher’s role is to provide plentiful prodding, ideas, critical feedback and, above all, scaffolding, so that students feel confident that they are making progress in useful directions (and get help when they are not).
It is also a bit of a juggling act to make sure that even loose outcomes are met, especially as students tend to diverge in all sorts of different directions, some of which are brilliant and worth pursuing – getting those outcomes loose enough in the first place, but sufficiently recognizable and relevant to academic careers, is a bit of an art that I am still learning. It also takes a lot of energy and dedication to make it work so, if you are having a bad week or two, things can go topsy-turvy pretty fast. It is worth putting a huge amount of effort into the first few weeks, responding enthusiastically and personally at any time of day or night that you can afford, in order to set the tone, show that you care, explain your approach and soothe any fears. Once you have established that you care, and have nurtured a strong learning community, students tend to help one another a lot and forgive you when you are less attentive later on. I try to design the process so that I can intentionally let go in later weeks too.

In conclusion

As an intrinsic design feature, traditional university taught courses and their attendant processes and regulations impose unnatural restrictions on both teachers and students, reducing control and stunting motivation. It would be great to throw off these restrictions altogether. We could make enormous gains simply by separating teaching from accreditation (at least, wherever possible – in extremely rare cases it really is true that there is only one person who can reliably judge competence, and that person is the teacher). This may soon become a necessity rather than a virtue if MOOCs continue to evolve faster than the means to reliably accredit their results. Athabasca University already has the challenge process to cope with that, though it is significantly fettered by the need to match the competences achieved with those that apply to existing courses – our challenge process is insufficiently fine-grained to allow real flexibility. There would be equally great gains if we made courses the right size (typically, though not necessarily, small) to fit the needs of different students, rather than shoehorning them to fit the needs of institutions. We have technologies that can take the hard work out of managing the ensuing complexity, so traditional timetabling woes need not impede us, and it would make it much easier to mix and match, including to accredit learning done in different ways. However, there is plenty that can be done even within the constraints of a typical university course, as long as we are aware of the dangers and take steps to reduce the harm. I hope that this little piece and this smattering of suggestions has sparked an idea or two about how we might go about doing that. Perhaps, if more of us start to question the system and apply such ideas, it might help to create a climate where bigger change is possible.
If you’re interested in finding out more, I have written about this kind of thing once or twice before, with slightly different emphases.