On the value of awards

The week before last was a bit of a gold-star week for me. Firstly, I received Athabasca University’s Craig Cunningham Memorial Award for Teaching Excellence. Secondly, Jisc named me one of the 50 top social media influencers in UK higher education (I was eligible because, though I don’t live in the UK any more, I still maintain strong informal and formal ties). It’s always nice to have one’s ego stroked, and mine was purring like a satisfied kitten for some time: the accompanying photo of one of my kittens gives a rough rendition of my state of mind. I am also very thankful to those that nominated and supported me: thank you all! Nonetheless, I have somewhat mixed feelings about both of these. Partly, it’s just because of embarrassment and a general sense of unworthiness. I know from intimate personal experience that I am at the very least as awful as I am great. Equally, I am acutely aware that there are very many people who do things far better than me in many significant ways in both areas, and who did not receive an award for it. But there’s more to my discomfort than that. In this post I am mostly going to focus on the teaching award, but some of these issues relate to being on the list of UK social media influencers too.

The teaching crowd vs the teaching star

The teaching award bothers me, mainly, because no teacher is or should be a stand-alone prima donna or primo uomo, least of all in a highly distributed teaching environment like that at Athabasca. At AU, and to an only slightly lesser extent elsewhere, teaching is always the work of a team, always the result of a much larger community than just that team, and never, ever, the sole domain of one individual. Students (especially), administrators, technicians, learning designers, editors, graphic artists, fellow academics, tutors, textbook authors, Wikipedia editors, Facebook friends and the collectively generated processes and culture that make the university what it is are at the very least as significant as any one person. To give one person an award for what we all do together therefore just doesn’t make much sense. It’s particularly ironic that I should get a teaching award in the light of a great deal of my work, which for more than 15 years has been about just that: how crowds and systems teach. The individual we label as a ‘teacher’ is just a part of a much larger teaching gestalt and need not be its star. It is true that the charismatic inspirers, visible innovators and empathetic carers do tend to be the teachers we most remember, and the ones that we tend to nominate for awards. But they also tend to be, for much the same reasons, the worst teachers for some people: love ’em or hate ’em, there’s not much in between. Truly great teachers, including all those that make up the gestalt, often disappear into the background. My friend and mentor Richard Mitchell wanted a t-shirt slogan for education conferences that summed it up nicely: ‘shut up and let them learn’ (I don’t know if he ever had it made). The point is that it should never be about teachers teaching: it is always about learners learning, and there are many ways to support that, most of the best of which are driven by the learners, not the teachers. Teachers that do that well are not always the ones that get the awards.

Competition vs caring

I was a bit disconcerted to learn, on the day of the award ceremony, that my faculty has been competitively pushing its staff for these awards for years so, for some, this was less about celebrating excellence than about winning. I don’t think academia needs to be, nor should be, gamified: it has far more than enough of that already. If these contests were simple games with clear rules that made winning and losing unequivocal and fair, I would be fine with it. But, outside such a clearly game-like context, competition is not good for motivation – whether you are a winner or a loser – and it is often destructive to communities. Like performance-related pay and grades (deeply flawed ideas), it can all too easily make the award into the goal, which takes away the love of the activity itself as well as shaping how we perform it. This can very easily turn into a bit of behaviourist nonsense that can drive action in the short term but weaken interest in the long term. It is fundamentally unfair, too, which can cause unnecessary tension and divisions in a community that, by its nature, needs to work together towards a common goal that everyone plays an important part in reaching. Giving an award is also an expression of power: a bit of behavioural shaping done to us, not with us, the use of award committees and panels notwithstanding. At the AU awards ceremony our leaders told us how proud they were of us. They meant this very kindly, and were simply following a traditional pattern and doing the right thing for the ritual purposes of the event, but it’s not a good idea. Sure, feel pride in being part of a great learning community, show interest in what we do, care about what we do together. Yes, by all means, celebrate the good things we have done, all of us, but not that we, as individuals, are therefore good. That’s too much like patting a dog on the head for behaving the way we want him to behave.

A better way?

What really made my ego purr was not the award itself but reading the generous things kind colleagues and students wrote about me in support of the nomination. Those brought tears to my eyes, and that’s what I am really grateful for. So, rather than giving one person an award, which seems a bit arbitrary and divisive, I think it would make far more sense for us all to regularly nominate at least one other person for acclaim, and to scrap giving an actual award or, if we must, to give it to everyone or to a large group. The really valuable part, from a personal perspective, is not the award as such but the kindness and affirmation from friends, students and colleagues. It’s also really nice to give such acclaim. Everyone’s a winner.

The value of awards

For all my misgivings, I think that awards do have real value, especially to those that are not in the competition themselves. Awards are good ways to make concrete the values that we (or, at least, the givers of the awards) deem to be significant. By giving an award for teaching, AU is signalling the importance of teaching to its employees and to the rest of the world, and that’s a message worth sending. Similarly for Jisc: its influential position means that a lot of attention was paid not just to the contest but, more significantly, to the criteria for success in that contest. That is really valuable. Social media activities are seldom given much weight when deciding on promotions or research excellence in academia, but they should be. By far the most significant measure of success in academia is whether our work increases the knowledge in the world, whether through research or teaching or dialogue, and social media are a great means of doing that. The most popular of my papers and books have been read by a few thousand people, and most have been read by far fewer than that. My biggest keynotes have addressed fewer than a thousand people, and some conference papers have reached no more than a few dozen readers and attendees. Some of my blog posts and shared bookmarks have had tens of thousands of readers, and most are read by thousands. There are different measures of quality for such things, for sure: most of my posts are far more like presentations intended to spark ideas than rigorous papers and books, and I doubt that any have ever been cited in academic literature. But, though they do not rival peer-reviewed papers, such posts are still useful, I think, for exactly the reasons it is useful to attend conference presentations and, in the same way, each one is an opportunity to interact directly. Blog posts themselves may not always have much academic clout compared with peer-reviewed papers but, sometimes, the dialogue that develops around them can become an incredibly significant artefact in itself, much like the glosses on mediaeval manuscripts, entering depths that can put most peer review to shame. Perhaps the Jisc list will catalyze further social media activity among those who feel that their time is better spent publishing work in journals with high impact factors and low readership. Perhaps it will encourage those outsiders to investigate what those of us who care about such things are sharing. Perhaps it will act as a pre-filter to help them to find stuff worth reading. Perhaps it will inspire innovative uses of the tools and spread good practices. Perhaps it is a good thing simply to assert that there is a community that we are part of. Awards can be catalysts for change, builders of community, and affirmations of values. That’s good.

There is, too, some value in recognizing the worth of people and what they do, for whatever reason. I find it odd that, as well as awards for specific activities, AU gives long-service awards. That rather implies that staying here might have been an achievement in itself, which further implies that it might have been a chore to stick it out for so long. That’s not a good message – I’m here because I want to be here, not because I feel I must – and it is made worse by adding a reward for it. To be fair, it is, quite literally, a token reward: a few dollars to spend in the AU store and a pin. But, as carrots go, that’s likely worse than no carrot at all: it sends the message both that it is an extrinsic reward – akin to a payment – and that we are not worth very much. I reckon a bit of applause and a handshake is more than enough acknowledgement, without muddying the waters with cold hard cash. As a ritual, though, celebrating the simple fact of our continuing community is very worthwhile. Not only is it an opportunity to meet and eat with colleagues in person – a rare thing at AU – it’s an affirmation of the value of the community itself. We need such rituals and celebrations of togetherness.

And that is, I think, the most profound value of awards in general. They are, arguably, counter-productive as ways to drive good practice or encourage better behaviour in those that compete for them. But the ceremonies associated with them and the shared values that they represent bind all of us. They symbolize what we endeavour to be, they signal the values that we cherish, they exclude those outside the community and thus contribute to the community’s internal cohesion, albeit at a potential cost of competition. On balance, for all the complexities and risks, that’s not a bad thing.

A waste of time


A while back I wrote a blog post about the apparent waste of time involved in things like reading email, loading web pages, etc. At the end of the post I suggested that the simplistic measure of time as money that I was using should be viewed with great suspicion, though it is precisely the kind of measure that we routinely use. This post is mostly about why we should be suspicious.

But first, my basic initial argument, restated and stripped to its bones, is simple. According to the vacation request form that I have to fill in (and, after taking vacation, fill in again), an Athabasca University working day is 7 hours, or 25,200 seconds, long. There are about 1,200 employees at Athabasca University so, if each employee could save 21 seconds in a day (25,200/1,200 = 21), it would be like gaining another employee. Equally, every time we do something that loses everyone 21 seconds a day for no good reason, the overall effect is the same as firing someone. I observed then that we have lately adopted a lot of ICT systems that waste a great deal more than that. Since then, things have been getting worse. We are about to move to an Office 365 system, for instance, that I am guessing will cost us the time of at least 5 people, maybe more, compared with our current aged Zimbra suite. It’s not rocket science: a minute of everyone’s day is easily accounted for in loading time alone, which (I have checked) seems to be roughly 20 seconds longer than in the old system, and most people will load it many times a day. At the start, it will take far more than that, what with training, migration, confusion and all and, if my experience of Microsoft’s Exchange system is anything to go by, it will carry on sapping minutes out of everyone’s day for the foreseeable future, thanks to poor design and buggy implementation. So far, so depressing.
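For transparency, here is that back-of-the-envelope arithmetic as a runnable sketch, in Python; the figures are the guesses above, not measurements:

```python
# Figures from the paragraph above: guesses, not measurements.
WORKING_DAY_SECONDS = 7 * 60 * 60   # a 7-hour AU working day = 25,200 seconds
EMPLOYEES = 1200                    # approximate AU headcount

# Seconds saved (or lost) per person per day that equal one employee's whole day:
print(WORKING_DAY_SECONDS / EMPLOYEES)       # 21.0

# A minute of everyone's day lost to slower loading alone:
print(EMPLOYEES * 60 / WORKING_DAY_SECONDS)  # ~2.9 employees' worth of time, before
                                             # counting training, migration, confusion
```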

But what have we actually lost?

The simplistic assumption that time is money has a little merit when tasks are routine and mechanical. If you are producing widgets then time spent not producing widgets equates directly to widgets lost, so money is lost for every second spent doing something else. Even that notion is a bit suspect, though, inasmuch as there are normally diminishing returns on working more. Even if a task requires only the slightest hint of skill or judgement, the correlation between time and money is a long, long way from linear. Far more often than not, productivity is lower if you insist on uninterrupted working or longer hours than it would be if you insisted on regular breaks and shorter hours.  At the other end of the spectrum, it is also true, even in the most creative and open occupations, that it is possible to spend so much time doing something else that you never get round to the thing that you claim to be doing, though it is very hard to pin down the actual break-even point. For instance, a poet might spend 23.5 out of every 24 hours not actually writing poetry and that might be absolutely fine. On the other hand, if a professor spends a similar amount of time not marking student work there will probably be words. For most occupations, there’s a happy balance.

But what about those enforced breaks caused by waiting for computers to do something, or playing a mechanical role in a bureaucratic system, or reading an ‘irrelevant’ all-staff email? These are the ones that relate most directly to my original point, and all are quite different cases, so I will take each in turn, as each is illustrative of some of the different ways time and value are strangely connected.

Waiting for the machine

As I wait for machines to do something I have from time to time tried to calculate the time I ‘lose’ to them. As well as time waiting for them to boot up, open a web page, open an application, convert a video or save a document, this includes various kinds of futzing, such as organizing emails or files, backing up a machine, updating the operating system, fixing things that are broken, installing tools, shuffling widgets, plugging and unplugging peripherals, and so on. On average, given that almost my entire working life is mediated through a computer, I reckon that an hour or more of every day is taken up with such things. Some days are better than others, but some are much worse. I sometimes lose whole days to this. Fixing servers can take much more. Because I work in computing and find the mental exercise valuable, futzing is not exactly ‘lost’ time for me, especially as (done well) it can save time later on. Nor, for that matter, is time spent waiting for things to happen. I don’t stop thinking simply because the machine is busy. In fact, it can often have exactly the opposite effect. I actually make a very deliberate point of setting aside time to daydream throughout my working day because that’s a crucial part of the creative, analytic and synthetic process. Enforced moments of inactivity thus do a useful job for me, like little inverted alarm clocks reminding me when to dream. Slow machines (up to a point) do not waste time – they simply create time for other activities but, as ever, there is a happy balance.

Bacn


Bacn is a bit like spam except that it consists of emails that you have chosen or are obliged to receive. Like spam, though, it is impersonal, often irrelevant, and usually annoying. Those things from mailing lists you sometimes pay attention to, calls for conference papers that might be interesting, notifications from social media systems (like the Landing) that have the odd gem, offers from stores you have shopped at, or messages to allstaff mailing lists that are occasionally very important but that are mostly not – I get a lot of bacn. Those ‘irrelevant’ allstaff emails are particularly interesting examples. They are actually very far from irrelevant, even though they may have no direct value to the work that I am doing, because they are part of the structure of the organization. They are signals passing around the synapses of the organizational brain that help give its members a sense of belonging to something bigger, even if particular signals might rarely fire their own synapses. Every one is an invitation to contribute to that bigger thing. They are the cloth that is woven of the interactions of an organization, that helps to define the boundaries of that organization and reflect back its patterns and values. The same is true of social media notifications: I only glance at the vast majority but, just now and then, I pick up something very useful and, maybe once every day or two, I may contribute to the flow myself. The flow is part of my extended brain, like an extra sense that keeps me informed about the zeitgeist of my communities and social networks and that makes me a part of them. Time spent dealing with such things is time spent situating myself in the sets, networks and groups that I belong to. Organizations, especially those that are largely online, that are seeking to reduce bacn had better beware that they don’t lose all that salty goodness, because bacn is a thin web that binds us. Especially in a distributed organization, if you lose bacn, you lose the limbic system of the organization or even, in some cases, its nervous system. Organizations are not made of processes; they are made of people, and those people have to connect, have to belong. Bacn supports belonging and connection. But, of course, it can go too far. It is always worth remembering that 21 seconds of everyone’s bacn is another person’s working day gone (for a large company, it might take a second or less) and that person might have been doing something really productive with all of that lost time. But to get rid of bacn makes no more sense than to get rid of brain cells because they don’t address your current needs. An organization, not just its members, has to think and feel, and bacn is part of that thinking and feeling. As ever, though, there is a happy balance.

Being a cog


I’ve saved this one till last because it is not like the others. Being a cog is about the kind of thing that requires individuals to do the work of a machine. For instance, leave-reporting systems that require you to calculate how much leave you have left, how many hours there are in a day, or which days are public holidays (yes, we have one of those). Or systems for reclaiming expenses that require you to know the accounting codes, tax rates, accounting regulations, and approvers for expenses (yes, we have one of those too). Or customer relationship management systems that bombard you with demands that actually have nothing to do with you or that you have already dealt with (yes – we have one of those as well). Or that demand that you record the number of minutes spent using a machine that is perfectly capable of recording those minutes itself (yup). This is real work that demands concentration and attention, but it does nothing to help with thinking or social cohesion and does nothing to help the organization grow or adapt. In fact, precisely the opposite. It is a highly demotivating drain on time and energy that saps the life out of an organization, a minute or two at a time. No one benefits from having to do work that machines can do faster, more accurately and more reliably (we used to have one of those). It is plain common sense that investing in someone who can build and maintain better cogs is a lot more efficient and effective than trying (and failing) to train everyone to act exactly like a cog. This is one of those tragedies of hierarchically managed systems. Our ICT department has been set the task of saving money and its managers only control their own staff and systems, so the only place they can make ‘savings’ is in getting rid of the support burden of making and managing cogs. I bet that looks great on paper – they can probably claim to have saved hundreds of thousands or even millions of dollars – although, actually, they have not only wasted tens of millions of dollars but have probably set the organization on a suicide run. But they could as easily have gone the other way, and it might have been just as bad. Over-zealous cog-making is harmful, both because ICT departments have a worrisome tendency to overdo it (I cannot have assignments with no marks, for example, if I wish to enter them into our records system, which I have to do because otherwise the cog that pays tutors will not turn) and because systems change, which means many of the cogs inside them have to change too, and it is not just the devil’s work but an accounting nightmare to get them all to change at the right time. Well-designed ICT systems make it easy to take out a cog or some other sub-assembly and replace it, and they use tools that make cog production fast and simple. Poorly designed systems without such flexibility enslave their users, just as much as those who have to submit to cog-retraining are enslaved when their systems change. As ever, there is a happy balance.

Wasting time?

I’m not sure that time is ever lost – it is just spent doing other things. It can certainly be wasted, though, if those other things do not make a positive difference. But it is complicated. Here are just a few of the things I have done today – not a typical day, but few of them are:

  • reading/responding to emails from staff, students and others: roughly 2.5 hours
  • writing a foreword for a book: roughly 2 hours
  • writing this post: roughly 1 hour
  • walking: roughly 45 minutes
  • making/consuming food and drink: about 30 minutes
  • reading/making notes on books and papers: roughly 1 hour
  • replying to interview questions: approximately 45 minutes
  • checking my boat didn’t die in the rainstorm: roughly half an hour
  • cleaning and tidying: maybe half an hour
  • writing a book: about 20-30 minutes
  • replying to student posts: roughly 1 hour
  • marking: roughly 1 hour
  • waiting for computers: perhaps half an hour
  • grooming/washing/etc: maybe half an hour
  • checking/listening to the news and weather: roughly 45 minutes
  • taking an afternoon nap: about half an hour
  • Skyping: roughly 15 minutes
  • deleting spam from the Elgg community site: about 10 minutes
  • drying a wet dog: about 5 minutes
  • serious thinking: roughly 12 hours

There are still a couple of hours left of my day before I read a book and eventually go to sleep. Maybe I’ll catch a movie while reading some news after preparing some more food. Maybe I’ll play some guitar or try to get the hang of the sansula one more time. With a bit of luck I might get to chat with my wife (who has been out all day but would normally figure in the list quite a bit). But I hope you get the drift. I don’t think it makes much sense to measure anyone’s life in minutes spent on activities, except for the worst things they do. Time may be worth measuring and accounting for when it is spent doing the things that make us less than human, but it would be better to not do such things in the first place. I have put off responding to the CRM system today and only spent a few minutes checking admin systems in general because, hell, it’s Monday and I have had other things to do. It is all about achieving a happy balance.

The LMS as a paywall

I was writing about openness in education in a chapter I am struggling with today, and had just read Tony Bates’s comments on iQualify, an awful cloud rental service offering a monolithic locked-in throwback that just makes me exclaim, in horror, ‘Oh good grief! Seriously?’ And it got me thinking.

Learning management systems, as implemented in academia, are basically paywalls. You don’t get in unless you pay your fees. So why not pick up on what publishers infamously already do and allow people to pay per use? In a self-paced model like that used at Athabasca it makes perfect sense: most of the infrastructure – role-based, time-limited access and so on – already exists, as of course does the content. Not every student needs 6 months of access or the trimmings of a whole course but, especially for those taking a challenge route (just the assessment), it would often be useful to have access to a course for a little while in order to get a sense of what the expectations might be, the scope of the content, and the norms and standards employed. On occasion, it might even be a good idea to interact with others. Perhaps we could sell daily, weekly or monthly passes. Or we could do it at a finer level of granularity as well, or instead: a different pass for different topics, or for different components like forums, quizzes or assignment marking. Following the publishers’ lead, such passes might together cost 10 or 20 times the cost of simply subscribing to a whole course if every option were purchased, but students could strategically pick the parts they actually need, so reducing their own overall costs.

This idea is, of course, stupid. This is not because it doesn’t make economic and practical sense: it totally does, notwithstanding the management, technical and administrative complexity it entails. It is stupid because it flips education on its head. It makes chunks of learning into profit centres rather than the stuff of life. It makes education into a product rather than celebrating its role as an agent of personal and societal growth. It reduces the rich, intricately interwoven fabric of the educational experience to a set of instrumentally-driven isolated events and activities. It draws attention to accreditation as the be-all and end-all of the process. It is aggressively antisocial, purpose-built to reduce the chances of forming a vibrant learning community. This is beginning to sound eerily familiar. Is that not exactly what, in too high a percentage of our courses, we are doing already?

If we and other universities are to survive and thrive, the solution is not to treat courses and accreditation as products or services. The ongoing value of a university is to catalyze the production and preservation of knowledge: that is what we are here for, that is what makes us worth having. Courses are just tools that support that process, though they are far from the only ones, while accreditation is not even that: it’s just a byproduct, effluent from the educational process that happens to have some practical societal value (albeit at enormous cost to learning). In physical universities there are vast numbers of alternatives that support the richer purpose of creating and sustaining knowledge: cafes, quads, hallways, common rooms, societies, clubs, open lectures, libraries, smoking areas, student accommodation, sports centres, theatres, workshops, studios, research labs and so on. Everywhere you go you are confronted with learning opportunities and people to learn with and from, and the taught courses are just part of the mix, often only a small part. At least, that is true in a slightly idealized world – sadly, the vast majority of physical universities are as stupidly focused on the tools as we are, so those benefits are an afterthought rather than the main thing to celebrate, and are often the first things to suffer when cuts come along. Online, such beyond-the-course opportunities are few and far between: the Landing is (of course) built with exactly that concern in mind, but there’s precious little sign of it anywhere else at AU, one of the most advanced online universities in the world. The nearest thing most students get to it is the odd Facebook group or Twitter interaction, which seems an awful waste to me, though a fascinating phenomenon that blurs the lines between the institution and the broader community.

It is already possible to take a high-quality course for free in almost any subject that interests you and, more damagingly, any time now there will be sources of accreditation that are as prestigious as those awarded by universities but orders of magnitude cheaper, not to mention compellingly cut-price options from universities that can leverage their size and economies of scale (and, perhaps, cheap labour) to out-price the rest of us. Competing on these grounds makes no sense for a publicly funded institution whose role is not to be an accreditation mill but to preserve, critique, observe, transform and support society as a whole. We need to celebrate and cultivate the iceberg, not just its visible tip. Our true value is not in our courses but in our people (staff and students) and the learning community that they create.

Niggles about NGDLEs – lessons from ELF

Malcolm Brown has responded to Tony Bates and me in an Educause guest post in which he defends the concept of the NGDLE and expands a bit on the purposes behind it. This does help to clarify the intent although, as I mentioned in my earlier post, I am quite firmly in favour of the idea, so I am already converted on the main points. I don’t mind the Lego metaphor if it works, but I do think we should concentrate more on the connections than the pieces. I also see that it is fairly agnostic to pedagogy, at least in principle. And I totally agree that we desperately need to build more flexible, assemblable systems along these lines if we are to enable effective teaching, management of the learning process and, much, much more importantly, if we are to support effective learning. Something like the proposed environment (more of an ecosystem, I’d say) is crucial if we want to move on.

But…

It has been done before, over ten years ago in the form of ELF, in much more depth and detail and with large government and standards bodies supporting it, and it is important to learn the lessons of what was ultimately a failed initiative. Well – maybe not failed, but certainly severely stalled. Parts persist and have become absorbed, but the real value of it was as a model for building tools for learning, and that model is still not as widespread as it should be. The fact that the Educause initiative describes itself as ‘next generation’ is perhaps the most damning evidence of its failure.


Why ELF ‘failed’

I was not a part of, nor close to, the ELF project but, as an outsider, I suspect that it suffered from four major and interconnected problems:

  1. It was very technically driven and framed in the language of ICTs, not that of educators or learners. Requirements from educators were gathered in many ways, through workshops, working groups and a highly distributed team of experts in the UK, Australia, the US, Canada, the Netherlands and New Zealand (it was a very large project). Some of the central players had a very deep understanding of the pedagogical and organizational needs not just of learners but of the organizations that support them, and several were pioneers in personal learning environments (PLEs) that went way beyond the institution. But the focus was always on building the technical infrastructure – indeed, it had to be, in order to operationalize the concept. For those outside the field, who had not reflected deeply on the reasons this was necessary, it likely just seemed like a bunch of techies playing with computers. It was hard to get the message across.
  2. It was far too ambitious, perhaps bolstered by the large amounts of funding and support from several governments and large professional bodies. The e-learning framework was just one of several strands – e-science, e-libraries and so on – that went to make up the e-framework. After a while, it simply became the e-framework and, though conceptually wonderful, in practical terms it was attempting far too much in one fell swoop. It became so broad, complex and fuzzy that it collapsed under its own weight. It was not helped by commercial interests that were keen to keep things as proprietary and closed as they could get away with. Big players were not really on board with the idea of letting thousands of small players enter their locked-in markets, which was one of the avowed intents behind it. So, when government funding fizzled out, there was no one to take up such a huge banner. A few small flags might have been far more successful.
  3. It was too centralized (oddly, given its aggressively decentralized intent and the care taken to avoid exactly that). With the best of intentions, developers built over-engineered standards relying on web service architectures that the rest of the world was abandoning because they were too clunky, insufficiently agile and much too troublesome to implement. I am reminded, when reading many of the documents that were produced at the time, of the ISO OSI network standards of the late 80s, which took decades to reach maturity through ornate webs of committees and working groups, were beautifully and carefully engineered, and were thoroughly and completely trounced by the lighter, looser, more evolved, more distributed TCP/IP standards that are now pretty much ubiquitous. For large complex systems, evolution beats carefully designed engineering every single time.
  4. Because it was created by educators whose framing was entirely within the existing system, most of the pieces that claimed to relate to e-learning (as opposed to generic services) had nothing to do with learning at all, but were representative of institutional roles and structures: marking, grading, tracking, course management, resource management, course validation, curriculum, reporting and so on. None of this has anything to do with learning and, as I have argued on many occasions elsewhere, may often be antagonistic to learning. While there were also components that were actually about learning, they tended to be framed in the context of existing educational systems (writing lessons, creating formal portfolios, sequencing course content, etc.). Though it was very much built to support things like PLEs as well as institutional environments, its focus was the institution far more than the learner.

As far as I can tell, any implementation of the proposed NGDLE is going to run into exactly the same problems. Though the components described are contemporary and the odd bit of vocabulary has evolved a bit, all of them can be found in the original ELF model and the approach to achieving it seems pretty much the same. Moreover, though the proposed architecture is flexible enough to support pretty much anything – as was ELF – there is a tacit assumption that this is about education as we know it, updated to support the processes and methods that have been developed since (and often in response to) the heinous mistakes we made when we designed the LMSs that dominate education today. This is not surprising – if you ask a bunch of experts for ideas you will get their expertise, but you will not get much in the way of invention or new ideas. The methodology is therefore almost guaranteed to miss the next big thing. Those ideas may come up but they will be smoothed out in an averaging process and dissenting models will not become part of the creed. This is what I mean when I criticize it as a view from the inside.

Much better than the LMS

If implemented, a NGDLE will undoubtedly be better than any LMS, with which there are manifold problems. In the first place, LMSs are uniformly patterned on mediaeval educational systems, with all their ecclesiastic origins, power structures and rituals intact. This is crazy, and actually reinforces a lot of things we should not be doing in the first place, like courses, intimately bound assessment and accreditation, and laughably absurd attempts to exert teacher control, without the slightest consideration of the fact that pedagogies determined by the physics of spaces in which we lock doors and keep learners controlled for an hour or two at a time make no sense whatsoever in online learning. In the second place, centralized systems have to maintain an uneasy and seldom successful balance between catering to every need and remaining usably simple. This inevitably leads to compromises, from the small (e.g. minor formatting annoyances in discussion forums) to the large (e.g. embedded roles or units of granularity that make everything a course). While customization options can soften this a little, centralized systems are structurally flawed by their very nature. I have discussed such things in some depth elsewhere, including in both my published books. Suffice to say, the LMS shapes us in its own image, and its own image is authoritarian, teacher-controlled and archaic. So, a system that componentizes things so that we can disaggregate any or all of it, provide local control (for teachers and other learners as well as institutions and administrators) and allow creative assemblies is devoutly to be wished for. Such a system architecture can support everything from the traditional authoritarian model to the loosest of personal learning environments, and much in between.

Conclusion

NGDLE is a misnomer. We have already seen that generation come and go. But, as a broad blueprint for where we should be going and what we should be doing now, both ELF and NGDLE provide patterns that we should be using and thinking about whenever we implement online learning tools and content and, for that, I welcome it. I am particularly appreciative that NGDLE provides reinvigorated support for approaches that I have been pushing for over a decade but that ICT departments and even faculty resist implacably. It’s great to be able to point to the product of so many experts and say ‘look, I am not a crank: this is a mainstream idea’. We need a sea-change in how we think of learning technologies and such initiatives are an important part of creating the culture and ethos that lets this happen. For that I totally applaud this initiative.

In practical terms, I don’t think much of this will come from the top down, apart from the development of lightweight, non-prescriptive standards and the norming of the concepts behind it. Of current standards, I think TinCan (xAPI) is hopeful, though I am a bit concerned that it is becoming over-ornate as it develops. LTI is a good idea, sufficiently mature, and light enough to be usable but, again, in its new iteration it is aiming higher than might be wise. Caliper is OK but also showing signs of excessive ambition. Open Badges are great, but I gather they are becoming less lightweight in their latest incarnation. We need more of such things, not more elaborate versions of them. Unfortunately, the nature of technology is that it always evolves towards increasing complexity. It would be much better if we stuck with small, working pieces and assembled those together rather than constantly embellishing good working tools. Unix provides a good model for that, with tools that have worked more or less identically for decades but that constantly gain new value in recombination.
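To make concrete the kind of lightness I mean, here is a minimal TinCan/xAPI statement – just an actor, a verb and an object – sketched as a Python dict (the name, email address and activity URI are purely illustrative placeholders):

```python
# A minimal xAPI ("TinCan") statement: who did what to which activity.
# The actor details and the object id below are illustrative, not real.
statement = {
    "actor": {"name": "A Learner", "mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {"id": "http://example.com/activities/intro-unit"},
}
```

Almost everything else the spec allows – results, context, attachments – is optional, which is exactly the sort of small, working piece worth preserving rather than embellishing.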

Footnote: what became of ELF?

It is quite hard to find information about ELF today. It seems (as an outsider) that the project just ground to a halt rather than being deliberately killed. There were lots of exemplar projects, lots of hooks and plenty of small systems built that applied the idea and the standards, many of which are still in use today, but it never achieved traction. If you want to find out more, here is a small reading list:

http://www.elframework.org/ – the main site (the link to the later e-framework site leads to a broken page)

http://www.elframework.org/projects.html  – some of the relevant projects ELF incorporated.

https://web.archive.org/web/20061112235250/http://www.jisc.ac.uk/uploaded_documents/Altilab04-ELF.pd – good, brief overview from 2004 of what it involved and how it fitted together

 https://web.archive.org/web/20110522062036/http://www.jisc.ac.uk/uploaded_documents/AltilabServiceOrientedFrameworks.pdf  – spooky: this is about ‘Next Generation E-Learning Environments’ rather than digital ones. But, though framed in more technical language, the ideas are the same as NGDLE.

http://www.webarchive.org.uk/wayback/archive/20110621221935/http://www.elearning.ac.uk/features/nontechguide2 – a slightly less technical variant (links to part 1, which explains web services for non-technical people)

See also https://web.archive.org/web/20090330220421/http://www.elframework.org/general/requirements/scenarios/Scenario%20Apparatus%20UK%205%20(manchester%20lipsig).doc and https://web.archive.org/web/20090330220553/http://www.elframework.org/general/requirements/use_cases/EcSIGusecases.zip, a set of scenarios and use cases that are eerily similar to those proposed for NGDLE.

If anyone has any information about what became of ELF, or documents that describe its demise, or details of any ongoing work, I’d be delighted to learn more!


Why so many questions?


At Athabasca University, our proposed multi-million dollar investment in a student relationship management system, dubbed the ‘Student Success Centre’ (SSC), is causing quite a flood of discussion and debate among faculty and tutors at the moment. Though I do see some opportunities in this if (and only if) it is very intelligently and sensitively designed, there are massive and potentially fatal dangers in creating such a thing.  See a previous post of mine for some of my worries. I have many thoughts on the matter, but one thing strikes me as interesting enough to share more widely and, though it has a lot to do with the SSC, it also has broader implications.

Part of the justification for the SSC is that an alleged 80% of current interactions with students are about administrative rather than academic issues. I say ‘alleged’ because such things are notoriously hard to measure with any accuracy. But let’s assume that it actually is accurate.

How weird is that?

Why is it that our students (apparently) need to contact us for admin support in overwhelming numbers but hardly talk at all about the complicated subjects they are taking? Assuming that these 80% of interactions are not mostly complaints about things that have gone wrong (if so, an SSC is not the answer!) then it seems, on the face of it, more than a bit topsy-turvy.
 
One reasonable explanation might be that our course materials are so utterly brilliant that they require little further interaction, but I am not convinced that this sufficiently explains the disparity. Students mostly spend 100+ hours on academic work for each course whereas (I hope) at most a couple of hours are spent on administrivia. No matter how amazing our courses might be, the difference is remarkable. It is doubly remarkable when you consider that a fair number of our courses do involve at least some required level of interaction which, alone, should easily account for most, if not all, of that remaining 20%. In my own courses it is a lot more than that, and I am aware of many others with very active Landing groups, Moodle forums, webinar sessions, and even the occasional visit to an immersive world.
 
It is also possible that our administrative processes are extremely opaque and ill-explained. This certainly accords with my own experience of trying to work out something as simple as how much a course would cost or the process needed to submit project work. But, if that is the case, and assuming our distance, human-free teaching works as well as we believe it does, then why can we not a) simplify the processes and b) provide equally high quality learning materials for our admin processes so that students don’t need to bother our admin staff so much? If our course materials are so great then that would seem, on the face of it, very much more cost-effective than spending millions on a system that is at least as likely to have a negative as a positive impact and that actually increases our ongoing costs considerably. It is also quite within the capabilities of our existing skillset.
 
Even so, it seems very odd to me that students can come to terms with inordinately complex subjects from philosophy to biochemistry, but that they are foiled by a simple bit of bureaucracy and need to seek human assistance. It may be hard, but it is not beyond the means of a motivated learner to discover, especially given that we are specialists in producing high quality learning materials that should make such things very clear. And in motivation, I think, lies the key.
 
 

Other people matter

Other people are wonderful things when you need to learn something, pretty much across the board. Above all, they matter when there is no obvious reason to be interested in or care about something for its own merits, and bureaucratic procedures are seldom very interesting. I have known only one person in my whole life who actually likes filling in forms (I think it is a meditative pursuit – my father felt much the same way about dishwashing and log sawing); this is not a thing that excites most people.
 
I hypothesize that our students tend to need less academic than bureaucratic help at least partly because, by and large, for the coursework they are very self-motivated people learning things that interest them whereas our bureaucracy is at most a means to an end, at worst a demotivating barrier. It would not help much to provide great teaching materials for bureaucratic procedures because 99% of students would have no intrinsic interest in learning about them, and it would have zero value to them in any future activity. Why would they bother? It is far easier to ask someone.
 
Our students actually like the challenge of facing and solving problems in their chosen subjects – in fact, that’s one of the great joys of learning. They don’t turn to tutors to discuss things because there are plenty of other ways of getting the help they need, both in course materials and elsewhere, and it is fun to overcome obstacles. The more successful ones tend to have supportive friends, families or colleagues, or are otherwise very single-minded. They tend to know why they are doing what they are doing. We don’t get many students that are not like this, at least on our self-paced courses, because either they don’t bother coming in the first place or they are among the scarily large percentage that drop out before starting (we don’t count them in our stats though, in fairness, neither do face-to-face universities).
 
But, of course, that only applies for students that do really like the process of learning and most of what they are learning, that know how to do it and/or that have existing support networks. It does not apply to those that hit very difficult or boring spots, that give up before they start, that hit busy times that mean they cannot devote the energy to the work, that need a helping hand with the process but cannot find it elsewhere, or that don’t bother even looking at a distance option at all because they do not like the isolation it (apparently) entails. For those students, other people can help a lot. Even for our own students, over half (when asked) claim that they would appreciate more human interaction. And those are the ones that have knowingly self-selected a largely isolated process and that have not already dropped out. 
 
Perhaps more worryingly, it raises concerns about the quality of the learning experience. Doing things alone means that you miss out on all the benefits of a supportive learning community. You don’t get to argue, to explain, to question, save in your own head or in formal, largely one-way, assignments. You don’t get multiple perspectives, different ways of seeing, opportunities to challenge and be challenged. You don’t get the motivation of writing for an audience of people that you care about. You don’t get people that care about you and the learning community providing support when times are hard, nor the pleasure of helping when others are in difficulty. You don’t get to compare yourself with others, the chance to reflect on how you differ and whether that is a good or bad thing. You don’t get to model behaviours or see those behaviours being modelled. These are just some of the notable benefits of traditional university systems that are relatively hard to come by in Athabasca’s traditional self-paced model (not in all courses, but in many). It’s not at all about asking questions and getting solutions. It’s about engaging in a knowledge creation process with other people. There are distinct benefits of being alone, notably in the high degree of control it brings, but a bit of interaction goes a long long way. It takes a very special kind of person to get by without that and the vast majority of our successful students (at least in undergraduate self-paced courses) are exactly that special kind of person. 
 
If it is true that only 20% of interactions are currently concerned with academic issues, that is a big reason for concern, because it means our students are missing out on an incredibly rich set of opportunities in which they can help one another as well as interact with tutors. Creating an SSC system that supports what is therefore, for those that are not happy alone (i.e. the ones we lose or never get in the first place), an impoverished experience, seems simply to ossify a process that should at least be questioned. It is not a solution to the problem – it is an exacerbation of it, further entrenching a set of approaches and methods that are inadequate for most students (the ones we don’t get or keep) in the first place.

A sustainable future?

As a university seeking sustainability we could simply continue to concentrate on addressing the needs of self-motivated, solitary students that will succeed almost no matter what we do to them, and just make the processing more cost-efficient with the SSC. If we have enough of those students, then we will thrive for some time to come, though I can’t say it fits well with our open mission, and I worry greatly about those we fail to help. If we want to get more of those self-guided students then there are lots of other things we should probably do too, like dropping the whole notion of fixed-length courses (smaller chunks mean the chances of hitting the motivation sweet spot are higher) and disaggregating assessment from learning (because extrinsic motivation kills intrinsic motivation).
 
But, if we are sticking with the idea of traditional courses, the trouble is that we are no longer almost alone in offering such things, and there is a finite market of self-motivated, truly independent learners who (if they have any sense) will find cheaper alternatives that offer the same or greater value. If all we are offering is the opportunity to learn independently and a bit of credible certification at the end of it, we will wind up competing on price with institutions and businesses that have deeper coffers, cheaper staff, and fewer constraints. In a cut-throat price war with better-funded peers, we are doomed.
 
If we are to be successful in the future then we need to make more of the human side of our teaching, not less, and that means creating richer, more direct channels to other people in this learning community, not automating methods that were designed for the era of correspondence learning. This is something that, not uncoincidentally, the Landing is supposed to help with, though it is just an exemplar and at most a piece of the puzzle – we ideally want connection to be far more deeply embedded everywhere rather than confined to a separate site. It is also something that current pilot implementations of the SSC are antagonistic towards, thanks mainly to equating time with effort, focusing on solving specific problems rather than on human connection, failing to support technological diversity, and standing as an obstacle between people that just need to talk. It doesn’t have to be built that way. It could almost as easily vanish into the background, be seamlessly hooked into our social environments like email, Moodle and the Landing, and be an admin tool that gives support when needed but disappears when not. And there is no reason whatsoever that it needs to be used to pay tutors by the recorded minute, a bad idea that has been slung on the back of it and that has no place in our culture. Though not at all what the pilot systems do, a well-designed system like this could step in or be called upon when needed, could support analytics that would be genuinely helpful, and could improve management information, all without getting in the way of interaction. In fact, it could easily be used to enhance interaction, because it could make patterns of dialogue more visible and comprehensible.
 

In conclusion

At Athabasca we have some of the greatest distance educators and researchers on the planet, and that greatness rubs off on those around them. As a learning community, knowledge spreads among us and we are all elevated by it. We talk about such things in person, in meetings, via Skype, in webinars, on mailing lists, on the Landing, in pubs, in cafes, etc. And, as a result, ideas, methods and values get created, transformed and flow through our network. This makes us quite unique – as all learning communities are unique – and creates the distinctive culture and values of our university that no other university can replicate. Even when people leave, they leave traces of their ideas and values in those that remain, traces that get passed along long after they have gone and become part of the rich cultural identity that defines us. It’s not mainly about our structures, processes and procedures: except when they support greater interaction, those actually get in the way much of the time. It’s about a culture and community of learning. It’s about the knowledge that flows in and through this shifting but identifiable crowd. This is a large part of what gives us our identity. It’s exactly the same kind of thing that lets us talk about (say) the Vancouver Canucks or Apple Inc as a meaningful, persistent entity, even though not one of the people in the organization is the same as when it began and virtually all of its processes, locations, strategies and goals beyond the most basic have changed, likely many times. The thing is, if we hide those people behind machines and processes, separate them through opaque hierarchies, and reduce the tools and opportunities for them to connect, we lose almost all of that value. The face of the organization becomes essentially the face of the designer of the machine or the process, and the people are simply cogs implementing it. That’s not a good way forward, especially as there are likely quite a few better machine and process designers out there. Our people – staff and students – are the gold we need to mine, and they are also the reason we are worth saving. We need to be a university that takes the distance out of distance learning, that connects, inspires, supports and nurtures both its staff and its students. Only then will we justly be able to claim to have a success centre.

 

The cost of time

A few days back, an email was sent to our ‘allstaff’ mailing list inviting us to join in a bocce tournament. This took me a bit of time to digest, not least because I felt impelled to look up what ‘bocce’ means (it’s an Italian variant of pétanque, if you are interested). I guess this took a couple of minutes of my time in total. And then I realized I was probably not alone in this – that over a thousand people had also been reading it and, perhaps, wondering the same thing. So I started thinking about how we measure costs.
 

The cost of reading an email

A single allstaff email at Athabasca will likely be read by about 1,200 people, give or take. If such an email takes one minute to read, that’s 1,200 minutes – 20 hours – of the institution’s time taken up by a single message. This does not, however, count the disruption cost of interrupting someone’s train of thought, which may be quite substantial. For example, this study from 2002 reckons that, not counting the time taken to read an email, it takes an average of 64 seconds to return to previous levels of productivity after reading one. Other estimates based on different studies are much higher – some suggest the real recovery time from interruptions could be as high as 15-20 minutes. Conservatively, though, it is probably safe to assume that, taking interruption costs into account, an allstaff email that is read but not acted upon consumes an average of two minutes of each person’s time: in total, that’s about 40 hours of the institution’s time for every message sent. Put another way, we could hire another member of staff for a week for the time taken to deal with a single allstaff message, not counting the work entailed for those that do act on the message, nor the effort of writing it. It would therefore take roughly 48 such messages to account for a whole year of staff time. We get hundreds of such messages each year.
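The same sums, as a runnable sketch using the rounded figures above:

```python
# The allstaff email cost model above, using the post's rounded estimates.
EMPLOYEES = 1200
MINUTES_PER_READER = 2                 # read time plus interruption recovery

hours_per_message = EMPLOYEES * MINUTES_PER_READER / 60
print(hours_per_message)               # 40.0 hours: roughly one working week

STAFF_YEAR_HOURS = 48 * 40             # assuming ~48 working weeks of 40 hours
print(STAFF_YEAR_HOURS / hours_per_message)  # 48.0 messages = one year of staff time
```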
 
But it’s not just about such tangible interruptions. Accessing emails can take a lot of time before we even get so far as reading them. Page rendering just to view a list of messages on our web front end for our email system is an admirably efficient 2 seconds (i.e. 40 minutes of the organization’s time for everyone to be able to see a page of emails, not even to read their titles). Let’s say we all did that an average of 12 times a day – that’s 8 hours, a full working day of the institution’s time, taken up with waiting for that page to render, every day. Put another way, as we measure such things, if rendering took four seconds rather than two, we would have to fire someone to pay for the extra waiting. As it happens, for another university for which I have an account, using MS Exchange, simply getting to the login screen of its web front end takes 4 seconds. Once logged in (a further few seconds thanks to Exchange’s insistence on forcing you to tell it that your computer is not shared even though you have told it that a thousand times before), loading the page containing the list of emails takes a further 17 seconds. If AU were using the same system, using the same metric of 12 visits each day, that could equate to around 68 hours of the institution’s time every single day, simply to view a list of emails, not including a myriad of other delays and inefficiencies when it comes to reading, responding to and organizing such messages. Of course, we could just teach people to use a proper email client and reduce the delay to one that is imperceptible, because it occurs in the background – webmail is a truly terrible idea for daily use – or simply remind them not to close their web browsers so often, or to read their emails less regularly. There are many solutions to this problem. Like all technologies, especially softer ones that can be used in millions of ways, it ain’t what you do it’s the way that you do it.

But wait – there’s more

Email is just a small part of the problem, though: we use a lot of other websites each day. Let’s conservatively assume that, on average, everyone at AU visits, say, 24 pages in a working day (for me that figure is always vastly higher) and that each page averages out at about 5 seconds to load. That’s two minutes per person. Multiplied by 1200, it’s another week of the institution’s time ‘gone’ every day, simply waiting for pages to load.
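
The same back-of-envelope model applies, as this minimal sketch shows; again, every figure is an assumption from the paragraph above rather than a measurement:

```python
# Institutional time spent waiting for pages to load, per working day,
# using the assumed figures from this post (not measurements).
STAFF = 1200
PAGES_PER_DAY = 24      # assumed average page visits per person per day
SECONDS_PER_PAGE = 5    # assumed average load time per page

daily_hours = STAFF * PAGES_PER_DAY * SECONDS_PER_PAGE / 3600
print(f"{daily_hours:.0f} hours per day waiting for pages to load")
# -> 40 hours/day: one working week of the institution's time, every day
```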
 
And then there are the madly inefficient bureaucratized processes that are dictated and mediated by poorly tailored software. When I need to log into our CRM system I reckon that simply reading my tasks takes a good five minutes. Our leave reporting system typically eats 15 minutes of my time each time I request leave (it replaces one that took 2-3 minutes).  Our finance system used to take me about half an hour to add in expenses for a conference but, since downgrading to a baseline version, now takes me several hours, and it takes even more time from others that have to give approvals along the way. Ironically, the main intent behind implementing this was to save us money spent on staffing. 
 
I could go on, but I think you see where this is heading. Bear in mind, though, that I am just scratching the surface. 

Time and work

My point in writing this is not to ask for more efficient computer and admin systems, though that would indeed likely be beneficial. Much more to the point, I hope that you are feeling uncomfortable or even highly sceptical about how I am measuring this. Not with the figures: it doesn’t much matter whether I am wrong with the detailed timings or even the math. It is indisputable that we spend a lot of time dealing with computer systems and the processes that surround them every day, and small inefficiencies add up. There’s nothing particularly peculiar to ICTs about this either – for instance, think of the time taken to walk from one office to another, to visit the mailroom, to read a noticeboard, to chat with a colleague, and so on. But is that actually time lost or does it even equate precisely to time spent?  I hope you are wondering about the complex issues with equating time and dollars, how we learn, why and how we account for project costs in time, the nature of technologies, the cost vs value of ICTs, the true value of bocce tournament messages to people that have no conceivable chance of participating in them (much greater than you might at first imagine), and a whole lot more. I know I am. If there is even a shred of truth in my analysis, it does not automatically lead to the conclusion that the solution is simply more efficient computer systems and organizational procedures. It certainly does bring into question how we account for such things, though, and, more interestingly, it highlights even bigger intangibles: the nature and value of work itself, the nature and value of communities of practice, the role of computers in distributed intelligence, and the meaning, identity and purpose of organizations. I will get to that in another post, because it demands more time than I have to spend right now (perhaps because I receive around 100 emails a day, on average).
 

Beyond the group: how education is changing and why institutions need to catch up

Understanding the ways people interact in an online context matters if we are interested in deliberate learning, because learning is almost always with and/or from other people: people inform us, inspire us, challenge us, motivate us, organize us, help us, engage with us. In the process, we learn. Intentional learning – whether informal, non-formal or formal – is now, more than ever, an activity that occurs outside the formal physical classroom. We are no longer limited to what our schools, universities, teachers and libraries in our immediate area provide for us, nor do we need to travel and pay the costs of getting to the experts in teaching and subject matter that we need. We are not limited to classes and courses any more. We don’t even need books. Anyone and everyone can be our teachers. This matters.

Traditional university education

Traditional university education is all about groups, from classes to courses to committees to cohorts (Dron & Anderson, 2014). I use the word ‘group’ in a distinctive and specific way here, following a pattern set by Wellman, Downes and others before and since. Groups have names, owners, members, roles and hierarchies. Groups have purposes and deliberate boundaries. Groups have rules and structures. Groups embody a large set of highly evolved mechanisms that have developed over millennia to deal with the problems of coordinating large numbers of people in physical spaces and, in the context in which they evolved, they are a pretty effective solution.

But there are two big problems with using groups in their current form in online learning. The first is that the online context changes group dynamics. In the past, professors were able to effectively trap students in a room for an hour or more, and to closely control their activities throughout that time. That is the context in which our most common pedagogies evolved. Even in the closest simulations of a face-to-face context (immersive worlds or webmeetings) this is no longer possible.

The second problem is more significant and follows from the first: group technologies, from committees to classrooms, were developed in response to the constraints and affordances of physical contexts that do not exist in an online and connected world. For example, it has been a long time since the ability to be in hearing range of a speaker has mattered if we wish to understand what he or she says. Teachers needed to control such groups because, apart from anything else, in a physical context, it would have been impossible to otherwise be heard without disruption. It was necessary to avoid such disruption and to coordinate behaviour because there was no other easy way to gain the efficiencies of one person teaching many (books notwithstanding). We also had to be disciplined enough to be in the same place at the same time – this involved a lot of technologies like timetables, courses, and classroom furniture. We needed to pay close attention because there was no persistence of content. The whole thing was shaped by the need to solve problems of access to rival resources in a physical space. 

We do not all have to be together in one place at one time any more. It is no longer necessary for the teacher to have to control a group because that group does not (always or in the same way) need to be controlled.

Classrooms used to be the only way to make efficient use of a single teacher with a lot of learners to cater for, but compromises had to be made: a need for discipline, a need to teach to the norm, a need to schedule and coordinate activities (not necessarily when learners needed or wanted to learn), a need to demand silence while the teacher spoke, a need to manage interactions, a perceived need to guide unwilling learners, brought on by the need to teach things guaranteed to be boring or confusing to a large segment of a class at any given time. We therefore had to invent ways to keep people engaged, either by force or by processes designed to artificially enthuse. This is more than a little odd when you think about it. Given that there is hardly anything more basically and intrinsically motivating than to learn something you actually want to learn when you want to learn it, the fact that we had to figure out ways to motivate people to learn suggests something went very wrong with the process. It did not go wonderfully. A whole load of teaching had worse than no effect and little resulted in persistent and useful learning – at least, little of what was intentionally taught. It was a compromise that had to be made, though. The educational system was a technology designed to make the best use of limited resources within the limitations imposed by physics, without which the spread of knowledge and skills would have been (and used to be and, in pockets where education is unavailable, still is) very limited.

Online learning

Those of us who are online (you and me) don’t need to make all of those compromises any more. There are millions of other ways to learn online with great efficiency and relevance that do not involve groups at all, from YouTube to Facebook to Reddit to StackExchange, to this post. These are under the control of the learners, each at the centre of his or her own network and in control of the flow, each able to choose which sets of people to engage with and what to pay attention to.

Networks have no boundaries, names, roles or rules – they are just people we know.

Sets have no ties, no rituals of joining, no allegiances or social connections – they are just collections of people temporarily occupying a virtual or physical space who share similar interests without even a social network to bind them.

Sets and networks are everywhere. They are the fundamental social forms from which anyone with online access learns, and they are driven by people – crowds of people – not by designed processes and formal patterns of interaction.

Many years ago John Chambers, then head of Cisco, was ridiculed for suggesting that e-learning would make email look like a rounding error. He was absolutely right, though, if not in quite the way he meant it: how many people reading this do not turn first to Google, Wikipedia or some other online, crowd-driven tool when needing or wanting to learn something? Who does not learn significant amounts from their friends, colleagues or people they follow through social networks or email? We are swimming in a sea of billions of teachers: those who inform, those with whom we disagree, those who act as role models, those who act as anti-models, those that inspire, those that affirm, those that support, those we doubt, those we trust. If there was ever a battle for supremacy between face-to-face and e-learning (an entirely artificial boundary) then e-learning has won hands down, many times over. Not so’s you’d know it if you look at our universities. Very oddly, even an online university like Athabasca remains largely trapped in the same constrained and contingent pattern of teaching as its physical counterparts, a pattern that has its origins in the limitations of physical space. It is largely as though the fact of the Internet has had no significant impact beyond making things slightly more convenient. Odd.

Replicating the wrong things

Those of us who teach entirely online are still, on the whole, making use of the single social form of the group, with all of its inherent restrictions, hierarchies and limitations inherited from its physical ancestors. Athabasca is at least a little revolutionary in providing self-paced courses at undergraduate level (albeit rarely with much social engagement at all – its inspiration is as much the book as the classroom), but it still typically keeps the rest of the trappings, and it uses groups like all the rest in most of its graduate-level courses. Rather than maintaining discipline in classrooms through conventional means, we instead make extensive use of assessments, which have become, in the absence of traditional disciplinary hierarchies that give us power in physical spaces, our primary form of control as well as the perceived primary purpose of at least higher education (the one follows from the other). It has become a transaction: if you do what I say and learn how I tell you to learn then, if you succeed, I will give you a credential that you can use as currency towards getting a job. If not, no deal. Learning and the entire process of education have become secondary to the credential, and focused upon it. We do this to replicate a need that was only there in the first place thanks to physics, not because it made sense for learning.

As alternative forms of accreditation become more commonplace and more reliable, it is hard to see us sustaining this for much longer. Badges, social recommendations, commercial credits, online portfolios, direct learning record storage, and much much more are gaining credence and value.

It is hard to see what useful role a university might play when it is neither the best way to learn what you want to learn nor the best way to gain accreditation for your skills and knowledge.

Will universities become irrelevant? Maybe not. A university education has always been about a lot more than what is taught. It is about learning ways of thinking, habits of mind, ways of building knowledge with and learning from others. It is about being with others that are learning, talking with them, socializing with them, bumping serendipitously into new ideas and ways of being. All of this is possible when you throw a bunch of smart people together in a shared space, and universities are a good gravitational force of attraction for that. It is, and has always been, about networks and sets as much as if not more than groups. The people we meet and get to know are not just networks of friends but of knowledge. The sets of people around us, explicit and implicit, provide both knowledge and direction. And such sets and nets have to form somewhere – they are not mere abstractions. Universities are good catalysts. But that is only true as long as we actually do play this role. Universities like Athabasca focus on isolated individuals or groups in boundaried courses. Only in odd spaces like here, on the Landing, or in external social sites like Twitter, Facebook or RateMyProfessor, is there a semblance of those other roles a university plays, a chance to extend beyond the closed group and credential-focused course process.

Moving on

We can still work within the old constraints, if we think it worthwhile – I am not suggesting we should suddenly drop, all at once, the highly evolved methods that worked in the past. Like a horse and cart or a mechanical watch, education still does the job it always did, in ways that more evolved methods will never entirely replicate, any more than folios entirely beat scrolls or cars entirely beat horses. There will be both gains and losses as things shift. Like all technologies (Kelly, 2010), the old ways of teaching will never go away completely and will still have value for some. Indeed, they might retain quite a large niche for many years to come.

But now we can do a whole lot more as well and instead, and the new ways work better, on the whole. In a competitive ecosystem, alternatives that work better will normally come to dominate. All the pieces are in place for this to happen: it is just taking us a little while to collectively realize that we don’t need the trainer-wheels any more. Last-gasp attempts to revamp the model, like first-generation xMOOCs, merely serve to illustrate the flaws in the existing model, highlighting in sharp relief the absurdities of adopting group-based forms on an Internet scale. Imposing structural forms designed to keep learners on track in physical classrooms has no sense or meaning when applied to a voluntary, uncredentialled, interest-driven course. I think we can do better than that.

The key steps are to disaggregate learning and assessment, and to do away with uniform courses with fixed schedules and pre-determined processes and outcomes. Outsiders, from MOOC providers (they are adapting fast) to publishers are beginning to realize this, as are a few universities like WGU.

It is time to surf the adjacent possible (Kauffman, 2000), to discover ways of learning with others that take advantage of the new horizons, that are not trapped like horseless carriages replicating the limitations of a bygone era. Furthermore, we need to learn to build new virtual environments and learning ecosystems in ways that do not just mimic patterns of the past, but that help people to learn in more flexible, richer ways that take advantage of the freedoms they enable – not personalized (with all the power assertion that implies) but both personal and social. If we build tools like learning management systems or the first generation xMOOC environments like edX, that are trapped into replicating traditional classroom-bound forms, we not only fail to take advantage of the wealth of the network, but we actually reinforce and ossify the very things we are reacting against rather than opening up new vistas of pedagogical opportunity. If we sustain power structures by linking learning and formal assessment, we hobble our capacity to teach. If we enclose learning in groups that are defined as much by who they exclude as who they encompass (Shirky, 2003) then we actively prevent the spread of knowledge. If we design outcome-based courses on fixed schedules, we limit the potential for individual control, and artificially constrain what need not be constrained.

Not revolution but recognition of what we already do

Any and all of this can change. There have long been methods for dealing with the issues of uniformity in course design and structure and/or tight integration of summative assessment to fixed norms, even within educational institutions. European-style PhDs (the ones without courses), portfolio-based accreditation (PLAR, APEL, etc), challenge exams, competency-based ‘courses’,  open courses with negotiable outcomes, assessments and processes (we have several at AU), whole degrees by negotiated learning outcomes, all provide different and accepted ways to do this and have been around for at least decades if not hundreds of years. Till recently these have mostly been hard to scale and expensive to maintain. Not any more. With the growth of technologies like OpenBadges, Caliper and xAPI, there are many ways to record and accredit learning that do not rely on fixed courses, pre-designed outcomes-based learning designs and restrictive groups. Toolsets like the Landing, Mahara or LPSS provide learner-controlled ways to aggregate and assemble both the process and evidence of learning, and to facilitate the social construction of knowledge – to allow the crowd to teach – without demanding the roles and embodied power structures of traditional learning environments. By either separating learning and accreditation or by aligning accreditation with individual learning and competences, it would be fairly easy to make this change and, whether we like it or not, it will happen: if universities don’t do it, someone else will. 
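
To give a concrete flavour of one of these technologies, here is a hedged sketch of the kind of portable record they enable: a minimal Open Badges assertion, roughly following the 2.0-style vocabulary, built as a plain Python dict. Every URL, badge name and email address below is an invented placeholder, not a real endpoint:

```python
# A minimal, hypothetical Open Badges assertion for illustration only.
# All identifiers below are invented placeholders.
import json

assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.edu/assertions/123",           # hypothetical assertion URL
    "recipient": {"type": "email", "hashed": False,
                  "identity": "learner@example.edu"},     # placeholder learner
    "badge": "https://example.edu/badges/negotiated-learning",  # hypothetical BadgeClass
    "verification": {"type": "hosted"},
    "issuedOn": "2015-04-01T00:00:00Z",
}
print(json.dumps(assertion, indent=2))
```

The point is not the particular vocabulary but that the credential is a small, portable, verifiable document that can live outside any single institution’s systems.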

All of traditional education is bound by historical constraint and path dependencies. It has led to a vast range of technologies to cope, such as terms and semesters, libraries, classrooms, courses, lessons, exams, grading, timetables, curricula, learning objectives, campuses, academic forms and norms in writing, disciplinary divisions and subdivisions, textbooks, rules and disciplinary procedures, avoidance of plagiarism, homework, degrees, award ceremonies and a massive range of other big and small inventions and technologies that have nothing whatsoever to do with learning.

Nothing at all.

All are contingent. They are simply a reaction to barriers and limitations that made good sense while those barriers existed. Every one of them is up for question. We need to imagine a world in which any or all of these constraints can be torn down. That is why we need to think about different social forms, that is why we continue to build the Landing, that is why we continue to explore the ways that learning is evolving outside the ivory tower, that is why we are trying to increase learner control in our courses (even if we cannot yet rid ourselves of all their constraints), that is why we are exploring alternative and open forms of accreditation. It is not just about doing what we have always done in slightly better, more efficient ways. Ultimately, it is about expanding the horizons of education itself. Education is not about courses, awards, classes and power hierarchies. Education is about learning. More accurately, it is about technologies of learning – methods, tools, processes, procedures and techniques. These are all inventions, and inventions can be superseded and improved. Outside formal institutions, this has already begun to happen. It is time we in universities caught up.

References

Dron, J., & Anderson, T. (2014). Teaching crowds: social media and distance learning. Athabasca: AU Press. 

Kauffman, S. (2000). Investigations (Kindle ed.). New York: Oxford University Press. 

Kelly, K. (2010). What Technology Wants (Kindle ed.). New York: Viking. 

Shirky, C. (2003). A Group Is Its Own Worst Enemy. Retrieved from http://www.shirky.com/writings/group_enemy.html


Time to change education again: let's not make the same mistakes this time round

We might as well start with exams

In case anyone missed it, one of countless examples of mass cheating in exams is being reported quite widely, such as at http://www.ctvnews.ca/world/hundreds-expelled-in-india-for-cheating-on-pressure-packed-exams-1.2289032.

The videos are stunning (Chrome and Firefox users – look for the little shield or similar icon somewhere in or near your browser’s address field to unblock the video. IE users will probably have a bar appearing in the browser asking if you want to trust the site – you do. Opera, Konqueror and Safari users should be able to see the video right away), e.g.:

[video embedded in the original post]

As my regular readers will know, my opinions of traditional sit-down, invigilated, written exams could not be much lower. Sitting in a high-stress environment, unable to communicate with anyone else, unable to refer to books or the Internet, with enormous pressure to perform in a fixed period to do someone else’s bidding, in an atmosphere of intense powerlessness, typically using a technology you rarely encounter anywhere else (pencil and paper), knowing your whole future depends on what you do in the next 3 hours, is a relatively unusual situation to find yourself in outside an exam hall. It is fair enough for some skills – journalism, for example, very occasionally leaves you in similar conditions. But, if it actually is an authentic skill needed for a particular field, then it should be explicitly taught and, if we are serious about it, it should probably be examined under truly authentic conditions (e.g. for a journalist, in a hotel room, cafe, press room, or trench). This is seldom done. It is not surprising, therefore, that exams are an extremely poor indicator of competence and an even worse indicator of teaching effectiveness. By and large, they assess things that we do not teach.

If that were all, I might not be so upset with the idea – it would just be weird and ineffective. However, exams are not just inefficient in a system designed to teach, they are positively antagonistic to learning. This is an incredibly wasteful tragedy of the highest order. Among the most notable of the many ways that they oppose teaching are that:

  • they shift the locus of control from the learner to the examiner
  • they shift the focus of attention from the activity to the accreditation
  • they typically punish cooperation and collaboration
  • they typically focus on content rather than performance
  • they typically reward conformity and punish creativity
  • they make punishments or rewards the reasons for performing, rather than the love of the subject
  • they are unfair – they reward exam skills more than subject skills.

In short, the vast majority of unseen written exams are deeply demotivating (naysayers, see footnote), distract attention away from learning, and fail to discriminate effectively or fairly. They make the whole process of learning inefficient, not just in the wasted time and energy involved surrounding the examination itself, but in (at the very least) doubling the teaching effort needed just to overcome their ill effects. Moreover, especially in the sciences and technologies, they have a strong tendency to reinforce and encourage ridiculous content-oriented ways of teaching that map some abstract notion of what a subject is concerned with to exercises that relate to that abstract model, rather than to applied practices, problem solving and creative synthesis – i.e. the things that really matter.  The shortest path for an exam-oriented course is usually bad teaching and it takes real creativity and a strong act of will to do otherwise. Professional bodies are at least partly culpable for such atrocities.

There is one and only one justification for 99% of unseen written exams that makes any sense at all, which is that it allows us to relatively easily and with some degree of assurance (if very expensively, especially given the harmful effects on learning) determine that the learner receiving accreditation is the one that has learned. It’s not the only way, but it is one of them. That sounds reasonable enough. However, as examples like this show in very sharp relief, exams are not particularly good at that either. If you create a technology that has a single purpose of preventing cheating, then cheats (bearing in mind that the only thing we have deliberately and single-mindedly taught them from start to finish is that the single purpose of everything they do is to pass an exam) will simply find better ways to cheat – and they do so, in spades. There is a whole industry dedicated to helping people to cheat in exams, and it evolves at least as fast as the technologies that we use to prevent it. At least twenty percent of students in North America admit to having cheated in exams at some point in the last year. Some studies show much higher rates overall – 58% of high school students in Canada, for example. It is hard to think of a more damning indictment of a broken system than this. The problem is likely even worse in other regions of the world. For instance, Davis et al (2009) reckon a whopping 83% of Chinese and 70% of Russian schoolkids cheat on exams. Let me repeat that: only 17% of Chinese students claim never to have cheated in an exam. See a previous post of mine for some intriguing examples of how that happens. When something that most people believe to be wrong is so deeply endemic, it is time to rethink the whole thing. No amount of patching over and tweaking at the edges is going to fix this.

But it’s not just exams

This is part of a much broader problem, and it is a really simple and obvious one: if you teach people that accreditation rather than learning is the purpose of education, especially if such accreditation makes a massive difference to what kind and quality of life they might have as a result of having or not having it, then it is perfectly reasonable that they should find better ways of achieving accreditation, rather than better ways of learning. Even most of our ‘best’ students, the ones that put in some of the hardest work, tend to be focused on the grades first and foremost, because that is our implicit and/or explicit subtext. To my shame, I’m as guilty as anyone of having used grades to coerce: I have been known to annoy my students with a little song that includes the lines ‘If a good mark is what you seek, blog, blog, blog, every week’. Even if we assume that students will not cheat (and, on the whole, mature students like those that predominate at Athabasca U do not cheat, putting the lie to the nonsense some have tried to promote about distance education leading to more cheating), this focus on grades still challenges teachers to come up with ways of constructively aligning assessment and learning, so that assessment actually contributes to rather than detracts from learning. With skill and ingenuity, it can be done, but it is hard work and an uphill struggle. We really shouldn’t have to be doing that in the first place because learning is something that all humans do naturally and extremely willingly when not pressured to do so. We don’t need to be forced to do what we love to do. We love the challenge, the social value, the control it brings. In fact, forcing us to do things that we love always takes away some or all of the love we feel for them. That’s really sad. Educational systems make the rods that beat themselves.

Moving forwards a little

We can start with the simple things first. I think that there are ways to make exams much less harmful. My friend and colleague Richard Huntrods, for example, simply asks students to reflect on what they have done on his (open, flexible and learner-centred) course. The students know exactly what they will be asked to do in advance, so there is no fear of the unknown, and there is no need for frantic revising because, if they have done the work, they can be quite assured of knowing everything they need to know already. It is a bit odd not to be able to talk with others or refer to notes or the Web, but that’s about all that is inauthentic. This is a low-stress approach that demands nothing more than coming to an exam centre and writing about what they have done, which is an activity that actually contributes substantially to effective learning rather than detracting from it. It is constructively aligned in a quite exemplary way and would be part of any effective learning process anyway, albeit not at an exam centre. It is still expensive, and it still creates a bit more stress for students who have learned to fear exams, but it makes sense if we feel we don’t know our students well enough or we do not trust them enough to credit them for the work they have done. Of course, it demands a problem- or enquiry-based, student-centred pedagogy in the first place. This would not be effective for a textbook wraparound or other content-centric course. But then, we should not be writing those anyway, as little is more certain to discourage a love of learning, a love of the subject, or any hope of a satisfying learning experience.

There are plenty of exam-like things that can make sense, in the right kind of context, when approached with care: laboratory exercises, driving tests, and other experiences that closely resemble those of the practice being examined, for example, are quite sensible approaches to accreditation that are aligned with and can even be supportive of the learning process. There are also ways of doing exams that can markedly reduce the problems associated with them, such as allowing conversation and the use of the Internet, open-book papers that allow students to come and go as needed, questions that challenge students to creatively solve problems, exams that use questions created by the students themselves, oral exams that allow examiners to have a useful learning dialogue with examinees, and so on. There are different shades of grey, and not all are as awful as the worst, by any means. And there are other ways that tend to work better still: badges, portfolios, and many other approaches that allow us to demonstrate competence rather than compliance, that rely on us coming to know our students, and that allow multiple approaches and different skills to be celebrated.

And, of course, if we avoid exams altogether then we can do much more useful things, like involving students in creating the assignments; giving feedback instead of grades for work done; making the work relevant to student needs, allowing multiple paths and different evidence; giving badges for achievement, not to goad it, etc, etc. There’s a book or two in what we can do to limit the problems. Ultimately, though, this can only take us so far because, looming at the end of every learning path at an institution, is the accreditation. And therein lies the rub.

Moving forwards a lot

The central problem that we have to solve is not so much the exam itself as the unbreakable linkage of teaching and accreditation. Exams are just a symptom of a flawed system taken to its obvious and most absurd conclusion. But all forms of accreditation that become the purpose of learning are carts driving horses. I recognize and celebrate the value of authentic and meaningful accreditation, but there is no reason whatsoever that learning and accreditation should be two parts of the same system, let alone of the same process. If it were entirely clear that the purpose of taking a course (or any other learning activity – courses are another demon we need to think carefully about) were to learn, rather than to succeed in a test, then education would work a great deal better. We would actually be able to do things that support learning, rather than things that support credit scores; to give feedback that leads to improvement, rather than as a form of punishment or reward; to allow students to expand and explore pathways that diverge rather than converge; to get away from our needs and to concentrate on those of our students; to support people’s growth rather than to stunt it by setting false goals; to valorize creativity and ingenuity; to allow people to gain the skills they actually need rather than those we choose to teach; to empower them, rather than to become petty despots ourselves. And, in an entirely separate process of assessment that teachers may have little or nothing to do with at all, we could enable multiple ways to demonstrate learning that are entirely dissociated from the process. Students might use evidence from learning activities we help them with as something to prove their competence, but our teaching would not be focused on that proof. It’s a crucial distinction that makes all the difference in the world. This is not a revolutionary idea about credentialling – it’s exactly what many of the more successful and enlightened companies already do when hiring or promoting people: they look at the whole picture presented, take evidence from multiple sources, look at the things that matter in the context of application, and treat each individual as a human being with unique strengths, skills and weaknesses, given the evidence available. Credentials from institutions may be part of that right now, but there is no reason for that idea to persist, and there are plenty of alternative ways of showing skills and knowledge that are becoming increasingly popular and significant, from social network recommendations to open badges to portfolios. In fact, we even have pockets of such processes well entrenched within universities. Traditional British PhDs, for example, while they are examined through the thesis and an oral exam (a challenging but flexible process), are examined on evidence that is completely unique to the individual student. Students may target the final assessment a bit, but the teaching itself is not much focused on that. Instead, it is on helping them to do what they want to do. And, of course, there are no grades involved at all – only feedback.

Conclusion

It’s going to be a long slow struggle to change the whole of the educational system across most of the world, especially as there’s a good portion of the world that would be delighted to have these kinds of problems in the first place. We need education before we can have cheating. But we do need to change this, and exams are a good place to start. It changed once before, with far less research to support the change, and far weaker technologies and communication to enable it. And it changed recently. In the grand scheme of things, the first ever university exam of the kind we now recognize as almost universal was the blink of an eye ago. The first ever written exam of the kind we use now (not counting a separate branch for the Chinese Civil Service that began a millennium before) was at the end of the 18th Century (the Cambridge Tripos) and it was only near the end of the 19th Century that written exams began to gain a serious foothold. This was within the lifetime of my grandparents. This is not a tradition steeped in history – it’s an invention that appeared long after the steam engine and only became significant as the internal combustion engine was born. I just hope institutions like ours are not heading back down the tunnel or standing still, because those heading into the light are going to succeed while those that stay in the shadows will at best become the laughing stock of the world.

On the subject of which, do watch the video. It is kind-of funny in a way, but the humour is very dark and deeply tragic. The absurdity makes me want to laugh but the reality of how this crazy system is wrecking people’s lives makes me want to cry. On balance, I am much more saddened and angered by it than amused. These are not bad people: this is a bad system. 

Reference

Davis, S., Drinan, P., & Gallant, T. (2009). Cheating in School: What We Know and What We Can Do. West Sussex, UK: Wiley-Blackwell.

Footnote

I know some people will want to respond that the threat or reward of assessment is somehow motivating. If you are one of those, this postscript is for you. 

I understand what you are saying. That is what many of us were taught to believe and it is one way we justify persisting despite the evidence that it doesn’t work very well. I agree that it is motivating, after a fashion, very much like paying someone to do something you want them to do, or hitting them if they don’t. Very much indeed. You can create an association between a reward/punishment and some other activity that you want your subject to perform and, as long as that association persists, you might actually make them do it. Personally speaking, I find that quite offensive, not to mention only mildly effective at achieving its own limited ends, but each to their own. But notice how you have replaced the interest in the activity with an interest in the reward and/or the desire to avoid punishment. Countless research studies from several fields have pretty conclusively shown that both reward and punishment are strongly antagonistic to intrinsic motivation and, in many cases, actually destroy it altogether. So, you can make someone do something by destroying their love of doing it – good job. But that doesn’t make a lot of sense to me, especially as what they have learned is presumably meant to be of ongoing value and interest, to help them in their lives. It is my belief that, if you want to teach effectively, you should never make people learn anything – you should support them in doing so if that is what they want to do. It is good to encourage and enthuse them so that they want to do it and can see the value – that’s a useful teacher role – but it’s a whole different ballgame altogether to coerce them. Alas, it is very hard to avoid it altogether until we change education, and that’s one good reason (I hope you agree) we need to do that.

For further information, you could do worse than to read pretty much anything by Alfie Kohn. If you are seeking a broader range of in-depth academic work, try the Self Determination Theory site.

Defaults matter

I have often written about the subtle and not-so-subtle constraints of learning management systems (LMSs) that channel teaching down a limited number of paths, and so impose implicit pedagogies on us that may be highly counterproductive and dissuade us from teaching well – this paper is an early expression of my thoughts on the matter. I came across another example today.

When a teacher enters comments on assignments in Moodle (and in most LMSs), it is a one-time, one-way publication event. The student gets a notification and that’s it. While it is perfectly possible for a dialogue to continue via email or internal messaging, or to avoid having to use such a system altogether, or to overlay processes on top of it to soften the hard structure of the tool, the design of the software makes it quite clear this is not expected or normal. At best, it is treated as a separate process. The design of such an assignment submission system is entirely about delivering a final judgement. It is a tacit assertion of teacher power. The most we can do to subvert that in Moodle is to return an assignment for resubmission, but that carries its own meanings and, on resubmission, still returns us to the same single feedback box.

Defaults are very powerful things that profoundly shape how we behave (e.g. see here, here and here). Imagine how different the process would be if the comment box were, by default, part of a dialogue, inviting response from the student. Imagine how different it would be if the student could respond by submitting a new version (not replacing the old) or by posting amendments in a further submission, to keep going until it is just right, not as a process of replacement but of evolution and augmentation. You might think of this as being something like a journal submission system, where revisions are made in response to reviewers until the article is acceptable. But we could go further. What if it were treated as a debugging process, using approaches like those in Bugzilla or Github to track down issues and refine solutions until they were as good as they could be, incorporating feedback and help from students and others on or beyond the course? It seems to me that, if we are serious about assignments as a formative means of helping someone to learn (and we should be), that’s what we should be doing. There is really no excuse, ever, for a committed student to get less than 100% in the end. If students are committed and willing to persist until they have learned what they come here to learn, it is not ever the students’ failure when they achieve less than the best: it is the teachers’.
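
To make the contrast concrete, here is a small and purely hypothetical sketch of an assignment modelled as an issue-tracker-style dialogue rather than a one-shot judgement; none of these classes or names come from Moodle or any real LMS:

```python
# A hypothetical sketch (not Moodle code) of assignment feedback modelled as an
# evolving, issue-tracker-style dialogue rather than a one-time judgement.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Message:
    author: str   # student or teacher: both can post
    body: str

@dataclass
class AssignmentThread:
    title: str
    status: str = "open"   # stays open until the work is as good as it can be
    messages: List[Message] = field(default_factory=list)
    versions: List[str] = field(default_factory=list)  # every version kept, not replaced

    def submit(self, student: str, work: str, note: str = "") -> None:
        self.versions.append(work)  # augmentation, not replacement
        self.messages.append(Message(student, note or "New version submitted."))

    def comment(self, author: str, body: str) -> None:
        self.messages.append(Message(author, body))  # feedback invites a reply

    def resolve(self) -> None:
        self.status = "resolved"  # closed only when the learning goal is met

thread = AssignmentThread("Essay: defaults in LMS design")
thread.submit("student", "draft 1")
thread.comment("teacher", "Strong start; the second section needs evidence.")
thread.submit("student", "draft 2", "Added citations to section 2.")
thread.resolve()
```

The design choice doing the work here is that nothing in the model is final until both parties agree it is: feedback is a turn in a conversation, and earlier drafts remain part of the record of learning.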

This is, of course, one of the motivations behind the Landing. In part we built this site to enable pedagogies like this that do not fit the moulds that LMSs ever-so-subtly press us into. The Landing has its own set of constraints and assumptions, but it is an alternative and complementary set, albeit one that is designed to be soft and malleable in many more ways than a standard LMS. The point, though, is not that any one system is better than any other but that all of them embed pedagogical and process assumptions, some of which are inherently incompatible.

The solution is, I think, not to build a one-size-fits-all system. Yes, we could easily enough modify Moodle to behave the way I suggest and in myriad other ways (e.g. I’d love to see dialogue available in every component, to allow student-controlled spaces wherever we need them, to allow students to add to their own courses, etc) but that doesn’t work either. The more we pack in, the softer the system becomes, and so the harder it is to operate it effectively. Greater flexibility always comes at a high price, in cognitive load, technical difficulty and combinatorial complexity. Moreover, the more we make it suit one group of people, the less well it suits others. This is the nature of monolithic systems.

There are a few existing ways to greatly reduce this problem, without massive reinvention and disruption. One is to disaggregate the pieces. We could build the LMS out of interoperable blocks so that we could, for instance, replace the standard submission system with a different one, without impacting other parts of the system. That was the goal of OKI and the now-defunct E-Framework although, in both cases, assembly was almost always a centralized IT management function and not available to those who most needed it – students and teachers. Neither has really made it to the mainstream. Sakai (an also-ran LMS that still persists) continues to use OKI technologies under the hood, but the E-Framework (a far better idea) seems dead in the water. These were both great ideas. There just wasn’t the will or the money, and competition from incumbents like Moodle and Blackboard was too strong. Other widget-based methods (e.g. using Wookie) offer more hope, because they do not demand significant retooling of existing systems, but they are currently far from in the ascendant, and the promising EU TENCompetence project that was a leader behind this seems moribund, its site offline.

Another approach is to use modules/plugins/building blocks within an existing system. However, this can be difficult or impossible to manage in a manner that delivers control to the end user without at the same time making it difficult for those that do not want or need such control, because LMSs are monoliths that have to address the needs of many people. Not everyone needs a big toolkit and, for many, it would actively make things worse if they had one. Judicious use of templates can help with that, but the real problem is that one size does not fit all. Also, it locks you into a particular platform, making evolution dependent on designers whose goals may not align with how you want to teach.

Bearing that in mind, another way to cope with the problem is to use multiple independent systems bound by interoperability standards – LTI, OpenBadges or TinCan, for example. With such standards, different learning platforms can become part of the same federated environment, sharing data, processing, learning paths and so on, allowing records to be kept centrally while enabling incompatible pedagogies to run independently within each system. That seems to me to be the most sensible option right now. It’s still more complex for all concerned than taking the easy path, and it increases management burden as well as replicating too much functionality for no particularly good reason. But sometimes the easy path is the wrong one, and diversity drives growth and improvement.
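
As a concrete illustration, here is a hedged sketch of the kind of record that makes such federation possible: a minimal xAPI (TinCan) statement posted to a Learning Record Store. The LRS endpoint and activity URL are invented placeholders (a real LRS would also require authentication); the verb URI and version header follow the xAPI specification:

```python
# A sketch of a minimal xAPI (TinCan) statement sent to a Learning Record Store.
# The endpoint and activity URL are placeholders; any conformant LRS accepts
# statements of this shape (with authentication added).
import json
import urllib.request

statement = {
    "actor": {"mbox": "mailto:learner@example.edu", "name": "A Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.edu/activities/essay-1"},  # hypothetical activity
}

req = urllib.request.Request(
    "https://lrs.example.edu/xAPI/statements",      # hypothetical LRS endpoint
    data=json.dumps(statement).encode(),
    headers={"Content-Type": "application/json",
             "X-Experience-API-Version": "1.0.3"},  # required by the xAPI spec
)
# urllib.request.urlopen(req)  # uncommented, this would record the statement
```

Because the statement lives in the LRS rather than inside any one LMS, each system in the federation can run its own pedagogy while the record of learning remains shared and portable.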

x-literacies

There is an ever-growing assortment of x-literacies. Here are just a few that have entered the realms of academic discourse:

  • Computer literacy
  • Internet literacy
  • Digital literacy
  • Information literacy
  • Network literacy
  • Technology literacy
  • Critical literacy
  • Health literacy
  • Ecological literacy
  • Systems literacy
  • Statistical literacy
  • New literacies
  • Multimedia literacy
  • Media literacy
  • Visual literacy
  • Music literacy
  • Spatial literacy
  • Physical literacy
  • Legal literacy
  • Scientific literacy
  • Transliteracy
  • Multiliteracy
  • Metamedia literacy

This list is a small subset of x-literacies: if there is some generic thing that people do that demands a set of skills, there is probably a literacy that someone has invented to match.  I’ll be arguing in this post that the majority of these x-literacies miss the point, because they focus on tools and technologies more than the reasons and contexts for using them. 

The confusion starts with the name. ‘Literacy’, literally, means the ability to read and write, so most other literacies are not. We might just as meaningfully talk about ‘multinumeracy’ or ‘digital numeracy’ as ‘multiliteracy’ or ‘digital literacy’ and, for some (e.g. ‘statistical literacy’), ‘numeracy’ would actually make far more sense. But that’s fine – words shift in meaning all the time and leave their origins behind. It is not too hard to see how the term might evolve, without bending the meaning too much, to relate to the ability to use not just text but any kind of symbol system. That sometimes makes sense – visual, media or musical literacy, for example, might benefit from this extension of meaning. But most of the literacies I list above have at best only a partial relationship to symbol systems. I think what really appeals to their inventors is that describing a set of skills as ‘x-literacy’ makes ‘x’ seem more important than just a set of skills. They bask in the reflected glory of reading and writing, which actually are awfully important. 

I’m OK with a bit of bigging up, though. The trouble is that prefixing ‘literacy’ with something else infects how we see the thing. It has certainly led to many silly educational initiatives with poorly defined goals and badly considered outcomes. This is because, all too often, it draws far too much attention to the technology and skills, and far too little to their application in a specific culture. This context-sensitive application (as I shall argue below) is actually what makes it ‘literacy’, as opposed to ‘skill’, and is in fact what makes literacy important.

So this is my rough-draft attempt to unravel the confusion so that at least I can understand it – it’s a bit of sense-making for me. Perhaps you will find it useful too. Some of this is not far off the underpinnings of the multiliteracy camp (albeit with notably different conclusions) and one of my main conclusions will be very similar to what many others have concluded too: that literacy spans many skills, tools and modalities, and is highly contextualized to a given culture at a given time. 

Culture and technology

When they pass a certain level of size and complexity, societies need more than language, ritual, stories, structures and laws passed by word of mouth (mostly things that demand physical co-presence) in order to function. They need tools to manage the complexity, to distribute cognition, replicate patterns, preserve structures, build new ones, pass ideas around, and to bind a dispersed society together. Since the invention of printing, most of the tools that play this role have been based on the technologies of text, which makes reading and writing fundamental to participation in a modern society and its numerous cultures and subcultures.

To be literate has, till recently, simply meant that you can do text. There may also be some suggestion of related abilities to decipher, analyze, synthesize and appreciate: these are at least the product of literacy if not a part of it, and they are among the main reasons we need literacy. But the central point here is that people who are literate, in the traditional sense, are simply able to operate the technology of writing, whether as consumers, producers or both. Why this is ‘literacy’ rather than simply a skillset like any other is that text manipulation is a prerequisite for people to participate in their culture. It lets them draw on accumulated knowledge, add to it, and be able to operate the social and organizational machinery. At its most basic, this is a pragmatic need: from filling in forms and writing letters to reading signs, labels on food, news, books, contracts and so on. Beyond that, it is also a means to disseminate ideas, challenges, and creative thought in a society. It is furthermore a fundamental technology for learning, arguably second only to language itself in importance. More than that, it is a technology to think with and extend our thinking far beyond what we could manage without such assistance. It lets us offload and enhance our cognition. This remains true, despite multiple other media vying for our attention, most of which incorporate text as well as other forms. I could not do what I am doing right now without text because it is scaffolding and extending the ideas I started with. Other media and modalities can in some contexts achieve this end too and, for some purposes, might even do it better. But only text does it so sweepingly across multiple cultures, and nothing but text has such power and efficiency. In all but the most limited of cultures, text performs culture, and text makes culture: not all of it, by any means, but enough to matter more than most other learned technology skills.

Other ways to perform culture

There have for countless millennia been many other media and tools for cultural transmission and coordination, including many from way before the invention of writing. Paintings, drawings, sculpture, dance, music, rituals, maps, architecture, furniture, transport systems, sport, games, roads, numbers, icons, clothing, design, money, jewellery, weapons, decoration, litany, laws, myths, drama, boats, screwdrivers, door-knobs and many many more technologies, serve (often amongst their other functions) as repositories of cognition, belief, structure and process. They are not just the signs of a culture: they play an active role in its embodiment and enactment. But text, maybe hand in hand with number, holds a special place because of its immense flexibility and ubiquitous application. Someone else can make roads or paintings or door-knobs and everyone else can benefit without needing such skills – this is one of the great benefits of distributed labour. But almost everyone needs skill in text, or at least needs to be close to someone with it. It is far from the only fruit but everyone needs it, just to participate in the cultures of a society.

Cultures and technologies

There are many senses in which we might consider technology and culture to be virtually synonymous. Both are, as Ursula Franklin puts it, ‘the way things are done around here’. Both concern process, structure and purpose. However, I think that there are many significant things about cultures  – attitudes, frames of mind, beliefs, ways of seeing, values, ideologies, for instance – that may be nurtured or enacted by technology, but that are quite distinct from it. Such things are not technological inventions – they are the consequence, precursors and shapers of inventions. Cultures may, however, be ostensively defined by technologies even if they are not functionally identical with them. Archeologists, sociologists and historians do it all the time. Things like language, clothing, architecture, tools, laws and so on are typically used to distinguish one culture from another.

One of the notable things about technologies is that they tend to evolve towards both increasing complexity and increasing specialization. This is a simple dynamic of the adjacent possible. The more we add, the more we are able to add, the more combinations and the more new possibilities that were unavailable to us before reveal themselves, so the more we diversify, subdivide, concatenate and invent. Thus it goes on ad infinitum (or at least ad singularum). Technologies tend to continuously change and evolve, in the absence of unusual forces or events that stop them. Of course, there are countless ways that technologies, notably in the form of religions, can slow this down or reverse it, as well as catastrophes that may be extrinsic or that may result from a particularly poor choice of technologies (over-cultivation of the land, development of oil-dependency, nuclear power, etc). There are also many technologies that play a stabilizing rather than a disruptive role (education systems, for example). Overall, however, viewed globally, in large cultures, the rate of technological change increases, with ever more rapid lifecycles and lifespans.  This means that skills in using technologies are increasingly deictic and increasingly short-lived or, if they survive, increasingly marginalized. In other words, they relate specifically to contexts outside of which they have different or no meaning, and those contexts keep changing thanks to the ever-expanding adjacent possible. Skills and techniques become redundant as contexts change and cultures evolve. That’s a slight over-simplification, but the broad pattern is relentless.

Towards a broader definition of ‘literacy’

Literal literacy is the ability to use a particular technology (text) to learn from, interact with and add to our various different cultures. The label implies more than just reading and writing: to be literate implies that, as a consequence of reading and writing, stuff has been and will be read – not just reading primers, but books, news, reports and other cultural artefacts. In the recent past, text was about the most significant way (after talking and showing) that cultural knowledge was disseminated. In recent decades, there have been plentiful other channels, including movies, radio, TV, websites, multimedia and so on. It was only natural that people would see the significance of this and begin to talk about different kinds of literacy, because these media were playing a very similar cultural role to reading and writing. The trouble is that, in doing so, the focus shifted from the cultural role to the technology itself. At its most absurd, it resulted in terms like ‘computer literacy’ that led to initiatives largely focused on building technical skills, messily divorced from the cultures they were supporting and of little or no relevance to being an active member of such a culture.

So here’s a tentative (re)definition of ‘literacy’ that restores the focus: literacy is the prerequisite set of technological skills needed for participation in a culture. And, of course, we are all members of many cultures. There are other things that matter in a culture apart from technological skills, such as (for example) a playful spirit, honesty, caring for others, good judgement, curiosity, ethical sensibility, as well as an ability to interpret, synthesize, classify, analyze, remix, create and seek within the cultural context. These are probably more important foundations of most cultures than the tools and techniques used to enact them. But, though traits like these can certainly be nurtured, inculcated, encouraged, shown, practised, learned and improved, they are not literacies. They are the values and valued traits in a culture, not the skills needed to be a part of it, though there is an intimate, iterative relationship between the two. In passing, I think it is those traits and others like them that education is really aimed at developing: the rest, the literacy part, is transient and supportive. We don’t have values and propensities in order to achieve literacy. We learn most of them at least partly through the use of literacies, and literacies are there to support them and let them flourish, to provide mechanisms through which they can be exercised.

My suggestion is that, rather than defining a literacy in terms of its technologies, we should define it in terms of the particular culture it supports. If a culture exists, then there is a literacy for it, which comprises the set of skills needed to participate in that culture. There is literacy for being a Canadian, but there is equally literacy for being part of the learning technologies community (and for each of its many subcultures), for being a researcher, a molecular scientist, a member of a family or of a local chess club. There is literacy for every culture we belong to. Some technological skillsets cross multiple cultures, and some are basic to them. The first of these is nearly always language. Most cultures, no matter how trivial and constrained, have their own vocabularies and acceptable/expected forms of language but, apart from cases where languages are actually a culturally distinguishing factor (e.g. many nations or tribes), they tend to inherit most of the language they use from a super-culture they are a part of. Reading and writing are equally obvious examples of skills that cross multiple cultures, as are numeracy skills. This is why they matter so much – they are foundational. Beyond that, different technologies and consequent skills may matter as much or more in different cultures. In a religious culture these might include the rules, rituals, principles, mythologies and artefacts that define the religion. In a city culture they could include knowledge of bylaws, transit systems, road layouts, map-reading, zones, and norms. In an academic culture they might relate to (for instance) methodologies, corpora, accepted tenets, writing conventions, dress standards, pedagogies, as well as the particular tools and methods relating to the subject matter. In combination, these skills are what make someone literate in that culture.
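
A minimal sketch of the same idea in Python, using entirely hypothetical skill sets chosen for illustration only: each culture’s literacy is its own set of required skills, and the foundational skills are simply those that turn up in the intersection of many such sets.

cultures = {
    "Canadian": {"language", "reading", "writing", "number", "civic norms"},
    "chess club": {"language", "reading", "chess notation", "club etiquette"},
    "molecular science": {"language", "reading", "writing", "number", "lab technique"},
}

# The literacy for any one culture is just its own skill set...
print(cultures["chess club"])

# ...while the foundational skills are those common to all of them.
print(set.intersection(*cultures.values()))  # {'language', 'reading'}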

For instance

Is there such a thing as computer literacy? I’d say hardly at all. In fact, it makes little sense to think in those terms: it’s a bit like claiming there is pen literacy, table literacy or wall literacy. But there might be computing literacy, inasmuch as there may be a culture of computing. In fact, once upon a time, when dinosaurs roamed the earth and people who used computers had to program them themselves, it might have been a pretty important culture that anyone who wished to use computers for any purpose would need to at least dip their toes in and, most likely, become a part of. That culture is still very much there, but being a part of it is no longer a prerequisite of owning a computer – computing culture is now the preserve of a relatively tiny band of geeks who are dwarfed in number by those that simply use computers. The average North American home has dozens of computers, but few of their users need to or want to be part of a computing culture. They just want to operate their TVs, drive their cars, use their phones, take photos, browse the Web, play the keyboard, etc. This is as it should be. Those in a computing culture are undoubtedly still an important tiny band who do important things that affect the rest of the world a lot, but they are just another twig at the end of a branch of the cultural tree, not the large stem that they once were. Within what is left of that computing culture there are a lot of overlapping computing sub-cultures: engineers, bricoleurs, hardware freaks, software specialists, interaction designers, server managers, programmers, object-oriented programmers, PHP enthusiasts, iOS/Mac users, Android/Windows users, big-endians, little-endians. Each sub-culture has its own literacy, its own language, its own technologies on which it is founded, as well as many shared commonalities and cross-cutting concerns.

Is there such a thing as ‘digital literacy’? Hardly. There is no significant, distinctive thing that is digital culture, so there is no such thing as digital literacy. Again, as with computing culture, once upon a time there probably was such a thing, and it might have mattered. I recall a point near the start of the 1990s, as we started to build web servers, connect Gopher servers, use email and participate in Usenet newsgroups, at which it really did seem that we were participating in a new culture, with its own evolving values, its own technologies, its own methods, rules, and ethics. This has almost entirely evaporated now. That culture has in part been absorbed and diffused, in part branched into subcultures. Being ‘digital’ is no longer a way of defining a culture that we are a part of, no longer a way of being. Unless you are one of the very few that has not in the last decade or so bought a telephone, a TV, a washing machine, a stove, or one of countless other digital devices, you are ‘digital’. And, if there were such a thing as a digital culture, you would almost certainly be a part of it if you are reading this. It is too tenuous a thing: it has nothing to bind it apart from the use of digital devices that are almost entirely ubiquitous, at least in first-world cultures, and that are too diverse to bind a culture together. There are, as a result, insufficient shared values to make it meaningful any more. It is, however, still possible to be anti-digital. Some digital luddites (I mean this non-pejoratively, to refer to anyone who deliberately eschews digital technologies) do very much have cultures and probably have their own literacies. And there might well be literacies that relate to specific digital technologies and subsets of them. Twitter has a culture, for instance, that implies rules, norms, behaviours, language and methods that anyone participating should probably know. The same may be (and at some point certainly was) true of Facebook, but I think that is less obvious now.

Network culture is probably still a thing, but it is already fading in much the same way that digital culture has faded, with ubiquity, diversity and specialization each taking bites out of it. We have seen network culture norms develop and spread. New vocabularies have been developed with subtle nuances (LOL, ROFL, LMFAO) that often branch into meanings that may only be deciphered by a few sub-cultures but that may subsequently spread into other cultures (TIL, RT, TLDR, LPT). We have had to learn new skills, figuring out how to negotiate privacy, filter bubbles, trolls, griefing, effective tagging, filtering, sorting, unfriending and friending, and much, much more, in order to participate in a social network culture, one that is (for now) still a bit distinct from other cultures. But that culture has already diversified, spread and diffused, and it is getting more diffuse every day. As it becomes larger and more diverse, it ceases to be a relevant means of identifying people, and it ceases to be something we can identify with.

Much of the reason for network culture’s retreat is technological. It was enabled by an assembly of technologies and spawned new ones (norms, conventions, languages, etc) but, as they evolve, other technologies will render it irrelevant. Technologies often help to establish cultures and may even form their foundation but, as they and the cultures co-develop, the technologies that helped build those cultures stop being definitional of them. Partly this results from diffusion, as ways of thinking creep back into the broader super-culture and as more and more diverse cultures spread into it. Partly it is because new technologies take their place and diversify into niches. Partly it is because, rather than us learning to use technologies, they learn to use us. This sounds creepier than it really is: what I mean is that individual inventors see the adjacent possibles and grab them, so technologies change and, in many cases, become embedded, replacing our manual roles in them with pre-orchestrated equivalents. Take, for example, a trivial thing like emoticons, images built from arbitrary text characters, that take on some of the role of phatic communication in text – like this :-). Emoticons are increasingly being replaced by standardized emojis, like this 🙂. Bizarrely, there are now social networks based on emoji that use no text at all. I am intrigued by the kind of culture that this will entail or support, but the significant point here is that what we used to have to orchestrate ourselves is now orchestrated in the machine. Consequently, the context changes, problems are solved, and new problems emerge, often as a direct result of the solution. Like, how on earth do you communicate effectively with nothing but emojis 😕?
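
As a small illustration of orchestration moving into the machine, here is a minimal Python sketch (the mapping is assumed for the example; every real chat client ships its own): the substitution that a writer once performed by hand, character by character, is now silently carried out by software.

# Hypothetical emoticon-to-emoji table, for illustration only.
EMOTICON_TO_EMOJI = {
    ":-)": "\U0001F642",  # slightly smiling face
    ":-(": "\U0001F641",  # slightly frowning face
    ";-)": "\U0001F609",  # winking face
}

def auto_substitute(message: str) -> str:
    """Replace hand-built emoticons with standardized emoji."""
    for emoticon, emoji in EMOTICON_TO_EMOJI.items():
        message = message.replace(emoticon, emoji)
    return message

print(auto_substitute("see you at the workshop :-)"))  # see you at the workshop 🙂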

Where do we go from here? 

Rather than constantly sub-dividing literacies into ever more absurdly-titled niches named for the tools to which they relate, or attempting to find bridging competences or values that underlie them and calling those multiliteracies (or whatever), I propose that we should think of a literacy as a highly situated set of skills that enables us to play a role as an operator in any given social machine, as creators and/or consumers of a culture – any culture and every culture. The specificity we choose should be determined by the culture that interests us, not by any predetermined formula. Each subculture has its own language, tools, methods and signs, and each comes with a set of shared (often contested) attitudes, beliefs, values and passions that both drive and are driven by the technologies it uses. As a result, each has its own history that branches from the histories of other subcultures, helping to make it more distinct. This chain of path dependencies helps to reinforce a culture and emphasize its differences. It can also lead to its demise.

In most if not all cases, literacy is an assembly of skills and techniques, not a single skill. ‘Literacy’ is thus simply a label for the essential skills and techniques needed to actively participate in a given culture. Such a culture may be big or small. It may span millennia or centuries, but it may span only decades, years or (maybe) months, or even weeks or days. It may span continents or exist only in a single room. I have, for example, been involved with courses, workshops and conferences that have evolved their own fleeting cultures, or at least something prototypical of one. In my former job I shared an office with a set of colleagues who developed a slightly different culture from that of the office next door. Of course, the vast majority of our culture was shared, because we performed similar roles in the same department in the same organization, the same country, the same field, the same language, the same ethos. But there were differences that might, in some contexts and for some purposes, be important. For most contexts, they were probably not.

Researching literacies 

Assuming that we know what culture we are looking at, identifying literacy in any given culture is simply (well… simply-ish) a question of looking at the technologies that are used in that culture. While technology use is far from a complete definition of a culture, what makes it distinct from another may be described in terms of its technologies, including its rules, tools, methods, language, techniques, practices, standards and structures. This seems a straightforward way of thinking about it, if a seemingly circular one: we identify cultures by their technology uses, and define literacy by technology use in a culture. I don’t think this apparent circularity is a major issue, however, as this is an iterative process of discovery: we may start with coarse differentiators that distinguish one culture from another but, as we examine them more closely, will almost certainly find others, or find further differentiators that indicate subcultures. A range of methods and methodologies may be used here, from grounded theory to ethnography, from discourse analysis to Delphi methods, simple observation, questionnaires, interviews, focus groups, and so on. If we want to know about literacy in a culture, we have to discover what technologies are foundational in that culture.

Most of the cultures we belong to are subcultures of one or more others, while some straddle borders between different and otherwise potentially unrelated cultures. Some skills that partially constitute a given literacy will cross many other cultural boundaries. Almost all will involve language, most will involve reading and writing, many will involve number, lots will involve visual expression, and quite a few will involve more or less specific skills in using machines (particularly software running on computers, some of which may be common). The ability to create will usually trump the ability to consume although, in some cultures, prosumption may be a defining or overwhelmingly common characteristic (those that emerge in social networks, for instance).

This all implies that a first concern when researching literacy for a given culture is to identify that culture in the first place, and to decide why it is of interest. While this may in some cases be obvious, there may often be subcultures and cross-cultural concerns that make it more complex to define. One way to help separate out different cultures is to look at the skills, terminology, technologies, implicit and explicit rules, norms, and patterns of technology use in the subset of people that we are looking at. If there are patterns of differences, then there is a good chance that we have identified a cultural divide of some kind (a toy sketch of this idea follows below). A little more easily, we can also look at why people are excluded from a culture, and seek to discover what people need to learn to become a part of it – to look at the things that distinguish an outsider from an insider, and at how people transition from one to the other.
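
Here is the promised toy sketch, in Python, with invented data and an arbitrary threshold, purely to illustrate the approach rather than to prescribe it: compare the sets of technologies observed in use by two groups, and treat a low overlap as a hint that a cultural divide may be present.

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of observed technologies (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

# Invented observations of what two groups in the same organization use.
group_a = {"email", "LaTeX", "version control", "citation manager"}
group_b = {"email", "spreadsheets", "group chat", "shared calendar"}

overlap = jaccard(group_a, group_b)
print(f"overlap = {overlap:.2f}")  # 0.14
if overlap < 0.5:  # arbitrary cut-off, for the sake of the example
    print("patterned difference: possibly two (sub)cultures")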

For example, the literacy for the culture of a country is almost entirely defined by invention. Countries are technologies, first and foremost. They have legislated (if often disputed) borders and boundaries, laws, norms, language, ways of doing things, patterns, establishments, and institutions that are almost entirely enshrined in technology. It is dead easy to spot this particular culture and mostly simple enough to figure out who is not in it and, normally, what they need to do to become a part of it. To be literate in the context of a country is to have the tools to be able to know and to actively interact with the technologies that define it. To give a simple example, although it is quite possible to be Canadian with only a limited grasp of English and/or French, part of what it means to be literate in Canadian culture is to speak one or (ideally) both languages. Other languages are a bonus, but those two are foundational. It is also possible to see similar patterns in religious cultures, academic cultures, sports cultures, sailing cultures and so on. We can see it in subcultures – for example, goths and hipsters are easily identified by a set of technologies that they use and create, because many of them are visible and definitional. It gets trickier once we try to find subcultures of such easily identified sets but, on the whole, different technologies mark different cultures.

What makes all this technical detail worth knowing is not that different sets of people use different tools but that there are consequences of doing so. Technologies have a deep impact on attitudes, values, beliefs and relationships between people. In turn, these values and beliefs equally shape the technologies that are used, developed, and valued. This is what matters and this is what is worth investigating. This is the kind of knowledge that is needed in order to effect change, whether to improve literacy within a culture or to change the culture itself. For example, imagine a university that runs on highly prescriptive processes and a reward structure based on awards for performance. You may not have to look far to find an example. Such a university might be dysfunctional on many counts, either because of a lack of literacy in the technologies or because the technologies themselves are poorly considered (or both). One way to improve this would be to ensure that all its members are able to operate the processes and gain awards. This would be to improve literacy within the culture and would, consequently, reinforce and sustain it. That might be very bad news if the surrounding context changes, making it significantly harder to adapt to new demands, but it would be an improvement by some measures. Another, not necessarily conflicting, approach would be to change or eliminate some of the processes, and to get rid of or change the nature of rewards for performance: to modify the machinery that drives the culture. This would change the culture and thus change the literacy needed to operate within it. It might do unexpected things, especially as the existing attitudes and values may be at odds with the new culture: people within it would be literate in things that are no longer relevant or useful, while lacking the literacy needed to operate the new tools and structures. Much existing work surrounding x-literacies fails to make this crucial distinction clearly. By focusing largely on the technological requirements and ignoring the culture, we may reinforce things that are useless, redundant or possibly harmful. Multimedia literacy might be great, sure. But for what and for whom? And in what forms? Different skillsets are needed in different contexts, and will have different value in different cultures.

To conclude

I have proposed that we should define literacy as the skills needed to operate the technologies that underpin a particular culture. While some of those skills are common to many cultures, the precise set, and the form the skills take, are likely to differ in almost every culture; and cultures evolve all the time, so no literacy is forever. I think this is a potentially useful perspective.

We cannot sensibly define a set of skills or propensities without reference to the culture that they support, and we should expect differences in literacies both between different cultures and across time and space in any given culture. We can ask meaningful questions about the literacy of (say) people who use Twitter for learning and research, as opposed to the literacy needed by people who only use Twitter to stay in touch with one another. We can look at different literacies for people who are Canadian, people who are in schools, people of a particular religion, people who like a particular sport, people who research learning technologies, people in a particular office, people who live in Edmonton, not to mention their intersections and their subsets. By looking at literacy as simply a set of skills needed for a given culture, we can gain considerable insight into the nature of that culture and its values. As a result, we can start to think more carefully about which skills are important, and about whether we want simply to support the acquisition of those skills or to transform the culture itself.

This is just my little bit of sense-making. I have very probably trodden territory that is already familiar to a lot of people who research such things with more rigour, and I doubt very much that any of it is at all original. But I have been bothered by this issue for a while, and it now seems a little clearer to me what I think. I hope it has encouraged you to think about what you think, too. Feel free to share your thoughts in the comment box!