Beyond the group: how education is changing and why institutions need to catch up

Understanding the ways people interact in an online context matters if we are interested in deliberate learning, because learning is almost always with and/or from other people: people inform us, inspire us, challenge us, motivate us, organize us, help us, engage with us. In the process, we learn. Intentional learning, whether informal, non-formal or formal, is now, more than ever, an activity that occurs outside a formal physical classroom. We are no longer limited to what our schools, universities, teachers and libraries in our immediate area provide for us, nor do we need to travel and pay the costs of getting to the experts in teaching and subject matter that we need. We are not limited to classes and courses any more. We don’t even need books. Anyone and everyone can be our teachers. This matters.

Traditional university education

Traditional university education is all about groups, from classes to courses to committees to cohorts (Dron & Anderson, 2014). I use the word ‘group’ in a distinctive and specific way here, following a pattern set by Wellman, Downes and others before and since. Groups have names, owners, members, roles and hierarchies. Groups have purposes and deliberate boundaries. Groups have rules and structures. Groups embody a large set of highly evolved mechanisms that have developed over millennia to deal with the problems of coordinating large numbers of people in physical spaces and, in the context in which they evolved, they are a pretty effective solution.

But there are two big problems with using groups in their current form in online learning. The first is that the online context changes group dynamics. In the past, professors were able to effectively trap students in a room for an hour or more, and to closely control their activities throughout that time. That is the context in which our most common pedagogies evolved. Even in the closest simulations of a face-to-face context (immersive worlds or webmeetings) this is no longer possible.

The second problem is more significant and follows from the first: group technologies, from committees to classrooms, were developed in response to the constraints and affordances of physical contexts that do not exist in an online and connected world. For example, it has been a long time since the ability to be in hearing range of a speaker has mattered if we wish to understand what he or she says. Teachers needed to control such groups because, apart from anything else, in a physical context it would otherwise have been impossible for anyone to be heard without disruption. It was necessary to avoid such disruption and to coordinate behaviour because there was no other easy way to gain the efficiencies of one person teaching many (books notwithstanding). We also had to be disciplined enough to be in the same place at the same time – this involved a lot of technologies like timetables, courses, and classroom furniture. We needed to pay close attention because there was no persistence of content. The whole thing was shaped by the need to solve problems of access to rival resources in a physical space.

We do not all have to be together in one place at one time any more. It is no longer necessary for the teacher to have to control a group because that group does not (always or in the same way) need to be controlled.

Classrooms used to be the only way to make efficient use of a single teacher with a lot of learners to cater for, but compromises had to be made: a need for discipline, a need to teach to the norm, a need to schedule and coordinate activities (not necessarily when learners needed or wanted to learn), a need to demand silence while the teacher spoke, a need to manage interactions, a perceived need to guide unwilling learners, brought on by the need to teach things guaranteed to be boring or confusing to a large segment of a class at any given time. We therefore had to invent ways to keep people engaged, either by force or by intentional processes designed to artificially enthuse. This is more than a little odd when you think about it. Given that there is hardly anything more basically and intrinsically motivating than to learn something you actually want to learn when you want to learn it, the fact that we had to figure out ways to motivate people to learn suggests something went very wrong with the process. And it did not go wonderfully: a whole load of teaching had worse than no effect, and little of it resulted in persistent and useful learning – at least, little of what was intentionally taught. It was a compromise that had to be made, though. The educational system was a technology designed to make the best use of limited resources within the limitations imposed by physics, without which the spread of knowledge and skills would have been (and used to be and, in pockets where education is unavailable, still is) very limited.

Online learning

Those of us who are online (you and me) don’t need to make all of those compromises any more. There are millions of other ways to learn online with great efficiency and relevance that do not involve groups at all, from YouTube to Facebook to Reddit to StackExchange, to this post. These are under the control of the learners, each at the centre of his or her own network and in control of the flow, each able to choose which sets of people to engage with and what to pay attention to.

Networks have no boundaries, names, roles or rules – they are just people we know.

Sets have no ties, no rituals of joining, no allegiances or social connections – they are just collections of people temporarily occupying a virtual or physical space who share similar interests without even a social network to bind them.

Sets and networks are everywhere. They are the fundamental social forms from which anyone with online access learns, and they are driven by people or crowds of people, not by designed processes and formal patterns of interaction.

Many years ago Chambers, then head of Cisco, was ridiculed for suggesting that e-learning would make email look like a rounding error. He was absolutely right, though, if not in quite the way he meant it: how many people reading this do not turn first to Google, Wikipedia or some other online, crowd-driven tool when needing or wanting to learn something? Who does not learn significant amounts from their friends, colleagues or people they follow through social networks or email? We are swimming in a sea of billions of teachers: those who inform, those with whom we disagree, those who act as role models, those who act as anti-models, those who inspire, those who affirm, those who support, those we doubt, those we trust. If there was ever a battle for supremacy between face-to-face and e-learning (an entirely artificial boundary) then e-learning has won hands down, many times over. Not so’s you’d know it if you look at our universities. Very oddly, even an online university like Athabasca remains largely trapped in the same constrained and contingent pattern of teaching as its physical counterparts, a pattern that has its origins in the limitations of physical space. It is largely as though the fact of the Internet has had no significant impact beyond making things slightly more convenient. Odd.

Replicating the wrong things

Those of us who teach entirely online are still, on the whole, making use of the single social form of the group, with all of its inherent restrictions, hierarchies and limitations inherited from its physical ancestors. Athabasca is at least a little revolutionary in providing self-paced courses at undergraduate level (albeit rarely with much social engagement at all – its inspiration is as much the book as the classroom), but it still typically keeps the rest of the trappings, and it uses groups like all the rest in most of its graduate-level courses. Rather than maintaining discipline in classrooms through conventional means, we instead make extensive use of assessments, which have become, in the absence of the traditional disciplinary hierarchies that give us power in physical spaces, our primary form of control as well as the perceived primary purpose of at least higher education (the one follows from the other). It has become a transaction: if you do what I say and learn how I tell you to learn then, if you succeed, I will give you a credential that you can use as currency towards getting a job. If not, no deal. Learning, and the entire process of education, has become secondary to the credential, and focused upon it. We do this to replicate a need that was only there in the first place thanks to physics, not because it made sense for learning.

As alternative forms of accreditation become more commonplace and more reliable, it is hard to see us sustaining this for much longer. Badges, social recommendations, commercial credits, online portfolios, direct learning record storage, and much much more are gaining credence and value.

It is hard to see what useful role a university might play when it is not the best way to learn what you want to learn and it is not the best way to gain accreditation for your skills and knowledge.

Will universities become irrelevant? Maybe not. A university education has always been about a lot more than what is taught. It is about learning ways of thinking, habits of mind, ways of building knowledge with and learning from others. It is about being with others that are learning, talking with them, socializing with them, bumping serendipitously into new ideas and ways of being. All of this is possible when you throw a bunch of smart people together in a shared space, and universities are a good gravitational force of attraction for that. It is, and has always been, about networks and sets as much as if not more than groups. The people we meet and get to know are not just networks of friends but of knowledge. The sets of people around us, explicit and implicit, provide both knowledge and direction. And such sets and nets have to form somewhere – they are not mere abstractions. Universities are good catalysts. But that is only true as long as we actually do play this role. Universities like Athabasca focus on isolated individuals or groups in boundaried courses. Only in odd spaces like here, on the Landing, or in external social sites like Twitter, Facebook or RateMyProfessor, is there a semblance of those other roles a university plays, a chance to extend beyond the closed group and credential-focused course process.

Moving on

We can still work within the old constraints, if we think it worthwhile – I am not suggesting we should suddenly drop all the highly evolved methods that worked in the past. Like a horse and cart or a mechanical watch, education still does the job it always did, in ways that more evolved methods will never quite replicate, any more than folios beat scrolls or cars beat horses. There will be both gains and losses as things shift. Like all technologies (Kelly, 2010), the old ways of teaching will never go away completely and will still have value for some. Indeed, they might retain quite a large niche for many years to come.

But now we can do a whole lot more as well and instead, and the new ways work better, on the whole. In a competitive ecosystem, alternatives that work better will normally come to dominate. All the pieces are in place for this to happen: it is just taking us a little while to collectively realize that we don’t need the trainer-wheels any more. Last-gasp attempts to revamp the model, like first-generation xMOOCs, merely serve to illustrate the flaws in the existing model, highlighting in sharp relief the absurdities of adopting group-based forms on an Internet-based scale. Imposing structural forms designed to keep learners on track in physical classrooms makes no sense when applied to a voluntary, uncredentialled and interest-driven course. I think we can do better than that.

The key steps are to disaggregate learning and assessment, and to do away with uniform courses with fixed schedules and pre-determined processes and outcomes. Outsiders, from MOOC providers (they are adapting fast) to publishers, are beginning to realize this, as are a few universities like WGU.

It is time to surf the adjacent possible (Kauffman, 2000), to discover ways of learning with others that take advantage of the new horizons, that are not trapped like horseless carriages into replicating the limitations of a bygone era. Furthermore, we need to learn to build new virtual environments and learning ecosystems in ways that do not just mimic patterns of the past, but that help people to learn in more flexible, richer ways that take advantage of the freedoms they enable – not personalized (with all the power assertion that implies) but both personal and social. If we build tools like learning management systems or first-generation xMOOC environments like edX that are trapped into replicating traditional classroom-bound forms, we not only fail to take advantage of the wealth of the network, but we actually reinforce and ossify the very things we are reacting against rather than opening up new vistas of pedagogical opportunity. If we sustain power structures by linking learning and formal assessment, we hobble our capacity to teach. If we enclose learning in groups that are defined as much by who they exclude as who they encompass (Shirky, 2003), then we actively prevent the spread of knowledge. If we design outcome-based courses on fixed schedules, we limit the potential for individual control, and artificially constrain what need not be constrained.

Not revolution but recognition of what we already do

Any and all of this can change. There have long been methods for dealing with the issues of uniformity in course design and structure and/or tight integration of summative assessment to fixed norms, even within educational institutions. European-style PhDs (the ones without courses), portfolio-based accreditation (PLAR, APEL, etc), challenge exams, competency-based ‘courses’,  open courses with negotiable outcomes, assessments and processes (we have several at AU), whole degrees by negotiated learning outcomes, all provide different and accepted ways to do this and have been around for at least decades if not hundreds of years. Till recently these have mostly been hard to scale and expensive to maintain. Not any more. With the growth of technologies like OpenBadges, Caliper and xAPI, there are many ways to record and accredit learning that do not rely on fixed courses, pre-designed outcomes-based learning designs and restrictive groups. Toolsets like the Landing, Mahara or LPSS provide learner-controlled ways to aggregate and assemble both the process and evidence of learning, and to facilitate the social construction of knowledge – to allow the crowd to teach – without demanding the roles and embodied power structures of traditional learning environments. By either separating learning and accreditation or by aligning accreditation with individual learning and competences, it would be fairly easy to make this change and, whether we like it or not, it will happen: if universities don’t do it, someone else will. 
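
To make the accreditation side of this concrete, here is a minimal sketch of what one such record can look like in practice: an Open Badges assertion (in the 2.0 JSON format), a small, portable, verifiable claim that a particular learner achieved a particular thing, pointing at learner-controlled evidence rather than at a course. It is illustrative only – the issuer, badge and portfolio URLs, the email address and the salt are hypothetical placeholders, not anything that exists at AU.

import hashlib
import json

recipient_email = "learner@example.com"  # hypothetical learner
salt = "a-random-salt"                   # protects the email when the badge is public

# The assertion links a recipient (as a salted hash), a badge class describing the
# achievement, and a pointer to whatever evidence the learner chooses to present.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://badges.example.edu/assertions/123",
    "recipient": {
        "type": "email",
        "hashed": True,
        "salt": salt,
        "identity": "sha256$" + hashlib.sha256((recipient_email + salt).encode()).hexdigest(),
    },
    "badge": "https://badges.example.edu/classes/negotiated-learning-project",
    "issuedOn": "2015-04-01T00:00:00Z",
    "evidence": "https://portfolio.example.edu/learner/project-1",
    "verification": {"type": "hosted"},
}

print(json.dumps(assertion, indent=2))

Because the assertion is just data hosted at a stable address, anyone – an institution, an employer, a peer – can verify and display it without needing access to the course, the LMS or the grade book that happened to accompany the learning.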

All of traditional education is bound by historical constraint and path dependencies. It has led to a vast range of technologies to cope, such as terms and semesters, libraries, classrooms, courses, lessons, exams, grading, timetables, curricula, learning objectives, campuses, academic forms and norms in writing, disciplinary divisions and subdivisions, textbooks, rules and disciplinary procedures, avoidance of plagiarism, homework, degrees, award ceremonies and a massive range of other big and small inventions and technologies that have nothing whatsoever to do with learning.

Nothing at all.

All are contingent. They are simply a reaction to barriers and limitations that made good sense while those barriers existed. Every one of them is up for question. We need to imagine a world in which any or all of these constraints can be torn down. That is why we need to think about different social forms, that is why we continue to build the Landing, that is why we continue to explore the ways that learning is evolving outside the ivory tower, that is why we are trying to increase learner control in our courses (even if we cannot yet rid ourselves of all their constraints), and that is why we are exploring alternative and open forms of accreditation. It is not just about doing what we have always done in slightly better, more efficient ways. Ultimately, it is about expanding the horizons of education itself. Education is not about courses, awards, classes and power hierarchies. Education is about learning or, more accurately, about technologies of learning – methods, tools, processes, procedures and techniques. These are all inventions, and inventions can be superseded and improved. Outside formal institutions, this has already begun to happen. It is time we in universities caught up.

References

Dron, J., & Anderson, T. (2014). Teaching crowds: social media and distance learning. Athabasca: AU Press. 

Kauffman, S. (2000). Investigations (Kindle ed.). New York: Oxford University Press. 

Kelly, K. (2010). What Technology Wants (Kindle ed.). New York: Viking. 

Shirky, C. (2003). A Group Is Its Own Worst Enemy. Retrieved from http://www.shirky.com/writings/group_enemy.html


Time to change education again: let's not make the same mistakes this time round

We might as well start with exams

In case anyone missed it, one of countless examples of mass cheating in exams is being reported quite widely, such as at http://www.ctvnews.ca/world/hundreds-expelled-in-india-for-cheating-on-pressure-packed-exams-1.2289032.

The videos are stunning (Chrome and Firefox users – look for the little shield or similar icon somewhere in or near your browser’s address field to unblock the embedded video; IE users will probably have a bar appearing in the browser asking if you want to trust the site – you do; Opera, Konqueror and Safari users should be able to see the video right away).

As my regular readers will know, my opinions of traditional sit-down, invigilated, written exams could not be much lower. Sitting in a high-stress environment, unable to communicate with anyone else, unable to refer to books or the Internet, with enormous pressure to perform in a fixed period to do someone else’s bidding, in an atmosphere of intense powerlessness, typically using a technology you rarely encounter anywhere else (pencil and paper), knowing your whole future depends on what you do in the next 3 hours, is a relatively unusual situation to find yourself in outside an exam hall. It is fair enough for some skills – journalism, for example, very occasionally leaves you in similar conditions. But, if it actually is an authentic skill needed for a particular field, then it should be explicitly taught and, if we are serious about it, it should probably be examined under truly authentic conditions (e.g. for a journalist, in a hotel room, cafe, press room, or trench). This is seldom done. It is not surprising, therefore, that exams are an extremely poor indicator of competence and an even worse indicator of teaching effectiveness. By and large, they assess things that we do not teach.

If that were all, I might not be so upset with the idea – it would just be weird and ineffective. However, exams are not just inefficient in a system designed to teach, they are positively antagonistic to learning. This is an incredibly wasteful tragedy of the highest order. Among the most notable of the many ways that they oppose teaching are that:

  • they shift the locus of control from the learner to the examiner
  • they shift the focus of attention from the activity to the accreditation
  • they typically punish cooperation and collaboration
  • they typically focus on content rather than performance
  • they typically reward conformity and punish creativity
  • they make punishments or rewards the reasons for performing, rather than the love of the subject
  • they are unfair – they reward exam skills more than subject skills.

In short, the vast majority of unseen written exams are deeply demotivating (naysayers, see footnote), distract attention away from learning, and fail to discriminate effectively or fairly. They make the whole process of learning inefficient, not just in the wasted time and energy surrounding the examination itself, but in (at the very least) doubling the teaching effort needed just to overcome their ill effects. Moreover, especially in the sciences and technologies, they have a strong tendency to reinforce and encourage ridiculous content-oriented ways of teaching that map some abstract notion of what a subject is concerned with to exercises that relate to that abstract model, rather than to applied practices, problem solving and creative synthesis – i.e. the things that really matter. The shortest path for an exam-oriented course is usually bad teaching, and it takes real creativity and a strong act of will to do otherwise. Professional bodies are at least partly culpable for such atrocities.

There is one and only one justification for 99% of unseen written exams that makes any sense at all, which is that they allow us, relatively easily and with some degree of assurance (if very expensively, especially given the harmful effects on learning), to determine that the learner receiving accreditation is the one who has learned. It is not the only way to do that, but it is one way. That sounds reasonable enough. However, as examples like this show in very sharp relief, exams are not particularly good at that either. If you create a technology whose single purpose is to prevent cheating, then cheats (bearing in mind that the only thing we have deliberately and single-mindedly taught them from start to finish is that the single purpose of everything they do is to pass an exam) will simply find better ways to cheat – and they do so, in spades. There is a whole industry dedicated to helping people to cheat in exams, and it evolves at least as fast as the technologies that we use to prevent it. At least twenty percent of students in North America admit to having cheated in exams at some point in the last year. Some studies show much higher rates overall – 58% of high school students in Canada, for example. It is hard to think of a more damning indictment of a broken system than this. The problem is likely even worse in other regions of the world. For instance, Davis et al (2009) reckon a whopping 83% of Chinese and 70% of Russian schoolkids cheat on exams. Let me repeat that: only 17% of Chinese people claim never to have cheated in an exam. See a previous post of mine for some intriguing examples of how that happens. When something that most people believe to be wrong is so deeply endemic, it is time to rethink the whole thing. No amount of patching over and tweaking at the edges is going to fix this.

But it’s not just exams

This is part of a much broader problem, and it is a really simple and obvious one: if you teach people that accreditation rather than learning is the purpose of education, especially if such accreditation makes a massive difference to what kind and quality of life they might have as a result of having or not having it, then it is perfectly reasonable that they should find better ways of achieving accreditation, rather than better ways of learning. Even most of our ‘best’ students, the ones that put in some of the hardest work, tend to be focused on the grades first and foremost, because that is our implicit and/or explicit subtext. To my shame, I’m as guilty as anyone of having used grades to coerce: I have been known to annoy my students with a little song that includes the lines ‘If a good mark is what you seek, blog, blog, blog, every week’. Even if we assume that students will not cheat (and, on the whole, mature students like those that predominate at Athabasca U do not cheat, putting the lie to the nonsense some have tried to promote about distance education leading to more cheating), this linkage challenges teachers to come up with ways of constructively aligning assessment and learning, so that assessment actually contributes to rather than detracts from learning. With skill and ingenuity, it can be done, but it is hard work and an uphill struggle. We really shouldn’t have to be doing that in the first place, because learning is something that all humans do naturally and extremely willingly when not pressured. We don’t need to be forced to do what we love to do. We love the challenge, the social value, the control it brings. In fact, forcing us to do things that we love always takes away some or all of the love we feel for them. That’s really sad. Educational systems make the rods that beat themselves.

Moving forwards a little

We can start with the simple things first. I think that there are ways to make exams much less harmful. My friend and colleague Richard Huntrods, for example, simply asks students to reflect on what they have done on his (open, flexible and learner-centred) course. The students know exactly what they will be asked to do in advance, so there is no fear of the unknown, and there is no need for frantic revising because, if they have done the work, they can be quite assured of knowing everything they need to know already. It is a bit odd not to be able to talk with others or refer to notes or the Web, but that’s about all that is inauthentic. This is a low-stress approach that demands nothing more than coming to an exam centre and writing about what they have done, an activity that actually contributes substantially to effective learning rather than detracting from it. It is constructively aligned in a quite exemplary way and would be part of any effective learning process anyway, albeit not at an exam centre. It is still expensive, and it still creates a bit more stress for students who have learned to fear exams, but it makes sense if we feel we don’t know our students well enough, or do not trust them enough, to credit them for the work they have done. Of course, it demands a problem- or enquiry-based, student-centred pedagogy in the first place. This would not be effective for a textbook wraparound or other content-centric course. But then, we should not be writing those anyway, as little is more certain to discourage a love of learning, a love of the subject, or a satisfying learning experience.

There are plenty of exam-like things that can make sense, in the right kind of context, when approached with care: laboratory exercises, driving tests, and other experiences that closely resemble those of the practice being examined, for example, are quite sensible approaches to accreditation that are aligned with, and can even be supportive of, the learning process. There are also ways of doing exams that can markedly reduce the problems associated with them, such as allowing conversation and the use of the Internet, open-book papers that allow students to come and go as needed, questions that challenge students to creatively solve problems, exams that use questions created by the students themselves, oral exams that allow examiners to have a useful learning dialogue with examinees, and so on. There are different shades of grey, and not all exam-like things are as awful as the worst of them. And there are other ways that tend to work better still – for instance, badges, portfolios, and many other approaches that allow us to demonstrate competence rather than compliance, that rely on us coming to know our students, and that allow multiple approaches and different skills to be celebrated.

And, of course, if we avoid exams altogether then we can do much more useful things, like involving students in creating the assignments; giving feedback instead of grades for work done; making the work relevant to student needs, allowing multiple paths and different evidence; giving badges for achievement, not to goad it; and so on. There’s a book or two in what we can do to limit the problems though, ultimately, this can only take us so far because, looming at the end of every learning path at an institution, is the accreditation. And therein lies the rub.

Moving forwards a lot

The central problem that we have to solve is not so much the exam itself as the unbreakable linkage of teaching and accreditation. Exams are just a symptom of a flawed system taken to its obvious and most absurd conclusion. But all forms of accreditation that become the purpose of learning are carts driving horses. I recognize and celebrate the value of authentic and meaningful accreditation, but there is no reason whatsoever that learning and accreditation should be two parts of the same system, let alone of the same process. If it were entirely clear that the purpose of taking a course (or any other learning activity – courses are another demon we need to think carefully about) were to learn, rather than to succeed in a test, then education would work a great deal better. We would actually be able to do things that support learning, rather than things that support credit scores; to give feedback that leads to improvement, rather than as a form of punishment or reward; to allow students to expand and explore pathways that diverge rather than converge; to get away from our needs and to concentrate on those of our students; to support people’s growth rather than to stunt it by setting false goals; to valorize creativity and ingenuity; to allow people to gain the skills they actually need rather than those we choose to teach; to empower them, rather than to become petty despots ourselves. And, in an entirely separate process of assessment that teachers may have little or nothing to do with at all, we could enable multiple ways to demonstrate learning that are entirely dissociated from the teaching process. Students might use evidence from learning activities we help them with as something to prove their competence, but our teaching would not be focused on that proof. It’s a crucial distinction that makes all the difference in the world. This is not a revolutionary idea about credentialling – it’s exactly what many of the more successful and enlightened companies already do when hiring or promoting people: they look at the whole picture presented, take evidence from multiple sources, look at the things that matter in the context of application, and treat each individual as a human being with unique strengths, skills and weaknesses, given the evidence available. Credentials from institutions may be part of that right now, but there is no reason for that idea to persist, and there are plenty of alternative ways of showing skills and knowledge that are becoming increasingly popular and significant, from social network recommendations to open badges to portfolios. In fact, we even have pockets of such processes well entrenched within universities. Traditional British PhDs, for example, while they are examined through the thesis and an oral exam (a challenging but flexible process), are assessed on evidence that is completely unique to the individual student. Students may target the final assessment a bit, but the teaching itself is not much focused on that. Instead, it is focused on helping them to do what they want to do. And, of course, there are no grades involved at all – only feedback.

Conclusion

It’s going to be a long slow struggle to change the whole of the educational system across most of the world, especially as there’s a good portion of the world that would be delighted to have these kinds of problems in the first place: we need education before we can have cheating. But we do need to change this, and exams are a good place to start. The system changed once before, with far less research to support the change, and far weaker technologies and communication to enable it. And it changed recently. In the grand scheme of things, the first ever university exam of the kind we now recognize as almost universal was the blink of an eye ago. The first written exam of the kind we use now (not counting a separate branch for the Chinese Civil Service that began a millennium before) appeared at the end of the 18th century (the Cambridge Tripos), and it was only near the end of the 19th century that written exams began to gain a serious foothold. This was within the lifetime of my grandparents. This is not a tradition steeped in history – it’s an invention that appeared long after the steam engine and only became significant as the internal combustion engine was born. I just hope institutions like ours are not heading back down the tunnel or standing still, because those heading into the light are going to succeed while those that stay in the shadows will at best become the laughing stock of the world.

On the subject of which, do watch the video. It is kind-of funny in a way, but the humour is very dark and deeply tragic. The absurdity makes me want to laugh but the reality of how this crazy system is wrecking people’s lives makes me want to cry. On balance, I am much more saddened and angered by it than amused. These are not bad people: this is a bad system. 

Reference

Davis, S., Drinan, P., & Gallant, T. (2009). Cheating in School: What We Know and What We Can Do. West Sussex, UK: Wiley-Blackwell.

Footnote

I know some people will want to respond that the threat or reward of assessment is somehow motivating. If you are one of those, this postscript is for you. 

I understand what you are saying. That is what many of us were taught to believe and it is one way we justify persisting despite the evidence that it doesn’t work very well. I agree that it is motivating, after a fashion, very much like paying someone to do something you want them to do, or hitting them if they don’t. Very much indeed. You can create an association between a reward/punishment and some other activity that you want your subject to perform and, as long as that association persists, you might actually make them do it. Personally speaking, I find that quite offensive, not to mention only mildly effective at achieving its own limited ends, but each to their own. But notice how you have replaced the interest in the activity with an interest in the reward and/or the desire to avoid punishment. Countless research studies from several fields have pretty conclusively shown that both reward and punishment are strongly antagonistic to intrinsic motivation and, in many cases, actually destroy it altogether. So, you can make someone do something by destroying their love of doing it – good job. But that doesn’t make a lot of sense to me, especially as what they have learned is presumably meant to be of ongoing value and interest, to help them in their lives. It is my belief that, if you want to teach effectively, you should never make people learn anything – you should support them in doing so if that is what they want to do. It is good to encourage and enthuse them so that they want to do it and can see the value – that’s a useful teacher role – but it’s a whole different ballgame altogether to coerce them. Alas, it is very hard to avoid it altogether until we change education, and that’s one good reason (I hope you agree) we need to do that.

For further information, you could do worse than to read pretty much anything by Alfie Kohn. If you are seeking a broader range of in-depth academic work, try the Self Determination Theory site.

Defaults matter

I have often written about the subtle and not-so-subtle constraints of learning management systems (LMSs) that channel teaching down a limited number of paths, and so impose implicit pedagogies on us that may be highly counterproductive and dissuade us from teaching well – this paper is an early expression of my thoughts on the matter. I came across another example today.

When a teacher enters comments on assignments in Moodle (and in most LMSs), it is a one-time, one-way publication event. The student gets a notification and that’s it. While it is perfectly possible for a dialogue to continue via email or internal messaging, or to avoid having to use such a system altogether, or to overlay processes on top of it to soften the hard structure of the tool, the design of the software makes it quite clear this is not expected or normal. At best, it is treated as a separate process. The design of such an assignment submission system is entirely about delivering a final judgement. It is a tacit assertion of teacher power. The most we can do to subvert that in Moodle is to return an assignment for resubmission, but that carries its own meanings and, on resubmission, still returns us to the same single feedback box.

Defaults are very powerful things that profoundly shape how we behave (e.g. see here, here and here). Imagine how different the process would be if the comment box were, by default, part of a dialogue, inviting response from the student. Imagine how different it would be if the student could respond by submitting a new version (not replacing the old) or by posting amendments in a further submission, to keep going until it is just right, not as a process of replacement but of evolution and augmentation. You might think of this as being something like a journal submission system, where revisions are made in response to reviewers until the article is acceptable. But we could go further. What if it were treated as a debugging process, using approaches like those in Bugzilla or GitHub to track down issues and refine solutions until they were as good as they could be, incorporating feedback and help from students and others on or beyond the course? It seems to me that, if we are serious about assignments as a formative means of helping someone to learn (and we should be), that’s what we should be doing. There is really no excuse, ever, for a committed student to get less than 100% in the end. If students are committed and willing to persist until they have learned what they came here to learn, it is never the students’ failure when they achieve less than the best: it is the teachers’.
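
To make the idea a little more tangible, here is a toy sketch – not Moodle’s actual data model, and not a feature of the Landing – of what an assignment might look like if it were stored as an evolving, issue-tracker-like thread: every submission is kept as a new version rather than a replacement, feedback invites a reply, and the thread is only closed when the work is as good as it can be.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Comment:
    author: str  # teacher, student or peer
    text: str

@dataclass
class SubmissionVersion:
    content: str
    comments: List[Comment] = field(default_factory=list)

@dataclass
class AssignmentThread:
    student: str
    versions: List[SubmissionVersion] = field(default_factory=list)
    resolved: bool = False  # closed only when the work is as good as it can be

    def submit(self, content: str) -> None:
        # earlier versions are kept, not replaced: evolution and augmentation
        self.versions.append(SubmissionVersion(content))

    def comment(self, author: str, text: str) -> None:
        # feedback attaches to the latest version and invites a response
        self.versions[-1].comments.append(Comment(author, text))

# A dialogue rather than a one-way verdict (the names and text are invented):
thread = AssignmentThread(student="pat")
thread.submit("First draft of my project reflection")
thread.comment("teacher", "Good start - can you link this to your portfolio evidence?")
thread.submit("Second draft, now with links to the evidence")
thread.comment("pat", "I have added the links; does this address your point?")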

This is, of course, one of the motivations behind the Landing. In part we built this site to enable pedagogies like this that do not fit the moulds that LMSs ever-so-subtly press us into. The Landing has its own set of constraints and assumptions, but it is an alternative and complementary set, albeit one that is designed to be soft and malleable in many more ways than a standard LMS. The point, though, is not that any one system is better than any other but that all of them embed pedagogical and process assumptions, some of which are inherently incompatible.

The solution is, I think, not to build a one-size-fits-all system. Yes, we could easily enough modify Moodle to behave the way I suggest and in myriad other ways (e.g. I’d love to see dialogue available in every component, to allow student-controlled spaces wherever we need them, to allow students to add to their own courses, etc) but that doesn’t work either. The more we pack in, the softer the system becomes, and so the harder it is to operate it effectively. Greater flexibility always comes at a high price, in cognitive load, technical difficulty and combinatorial complexity. Moreover, the more we make it suit one group of people, the less well it suits others. This is the nature of monolithic systems.

There are a few existing ways to greatly reduce this problem, without massive reinvention and disruption. One is to disaggregate the pieces. We could build the LMS out of interoperable blocks so that we could, for instance, replace the standard submission system with a different one, without impacting other parts of the system. That was the goal of OKI and the now-defunct E-Framework although, in both cases, assembly was almost always a centralized IT management function and not available to those who most needed it – students and teachers. Neither has really made it to the mainstream. Sakai (an also-ran LMS that still persists) continues to use OKI technologies under the hood, but the E-Framework (a far better idea) seems dead in the water. These were both great ideas; there just wasn’t the will or the money, and competition from incumbents like Moodle and Blackboard was too strong. Other widget-based methods (e.g. using Wookie) offer more hope, because they do not demand significant retooling of existing systems, but they are currently far from in the ascendant, and the promising EU TENCompetence project that was a leader behind this seems moribund, its site offline.

Another approach is to use modules/plugins/building blocks within an existing system. However, this can be difficult or impossible to manage in a manner that delivers control to the end user without at the same time making things difficult for those who do not want or need such control, because LMSs are monoliths that have to address the needs of many people. Not everyone needs a big toolkit and, for many, it would actively make things worse if they had one. Judicious use of templates can help with that, but the real problem is that one size does not fit all. It also locks you into a particular platform, making evolution dependent on designers whose goals may not align with how you want to teach.

Bearing that in mind, another way to cope with the problem is to use multiple independent systems bound by interoperability standards – LTI, OpenBadges or TinCan, for example. With such standards, different learning platforms can become part of the same federated environment, sharing data, processing, learning paths and so on, allowing records to be kept centrally while enabling incompatible pedagogies to run independently within each system. That seems to me to be the most sensible option right now. It’s still more complex for all concerned than taking the easy path, and it increases management burden as well as replicating too much functionality for no particularly good reason. But sometimes the easy path is the wrong one, and diversity drives growth and improvement.
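
As a rough illustration of what that federation looks like at the data level, here is a minimal sketch of an xAPI (TinCan) statement being sent to a Learning Record Store, so that activity happening in one system can be recorded centrally whatever pedagogy is running inside it. The LRS endpoint, credentials and activity URL are hypothetical placeholders, not a real service.

import requests  # third-party HTTP library: pip install requests

# An xAPI statement is just actor + verb + object, expressed as JSON.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Pat Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.edu/activities/open-negotiated-project",
        "definition": {"name": {"en-US": "Open negotiated project"}},
    },
}

# POST it to the (hypothetical) LRS; any compliant system could report into the
# same store, and any other could read from it, without sharing a pedagogy.
response = requests.post(
    "https://lrs.example.edu/xapi/statements",
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_user", "lrs_password"),
)
print(response.status_code)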

x-literacies

There is an ever-growing assortment of x-literacies. Here are just a few that have entered the realms of academic discourse:

  • Computer literacy
  • Internet literacy
  • Digital literacy
  • Information literacy
  • Network literacy
  • Technology literacy
  • Critical literacy
  • Health literacy
  • Ecological literacy
  • Systems literacy
  • Statistical literacy
  • New literacies
  • Multimedia literacy
  • Media literacy
  • Visual literacy
  • Music literacy
  • Spatial literacy
  • Physical literacy
  • Legal literacy
  • Scientific literacy
  • Transliteracy
  • Multiliteracy
  • Metamedia literacy

This list is a small subset of x-literacies: if there is some generic thing that people do that demands a set of skills, there is probably a literacy that someone has invented to match.  I’ll be arguing in this post that the majority of these x-literacies miss the point, because they focus on tools and technologies more than the reasons and contexts for using them. 

The confusion starts with the name. ‘Literacy’, literally, means the ability to read and write, so most other ‘literacies’ are not literally literacies at all. We might just as meaningfully talk about ‘multinumeracy’ or ‘digital numeracy’ as ‘multiliteracy’ or ‘digital literacy’ and, for some (e.g. ‘statistical literacy’), ‘numeracy’ would actually make far more sense. But that’s fine – words shift in meaning all the time and leave their origins behind. It is not too hard to see how the term might evolve, without bending the meaning too much, to relate to the ability to use not just text but any kind of symbol system. That sometimes makes sense – visual, media or musical literacy, for example, might benefit from this extension of meaning. But most of the literacies I list above have at best only a partial relationship to symbol systems. I think what really appeals to their inventors is that describing a set of skills as ‘x-literacy’ makes ‘x’ seem more important than just a set of skills. They bask in the reflected glory of reading and writing, which actually are awfully important.

I’m OK with a bit of bigging up, though. The trouble is that prefixing ‘literacy’ with something else infects how we see the thing. It has certainly led to many silly educational initiatives with poorly defined goals and badly considered outcomes. This is because, all too often, it draws attention far too much to the technology and skills, and far too little to their application in a specific culture. This context-sensitive application (as I shall argue below) is actually what makes it ‘literacy’, as opposed to ‘skill’, and is in fact what makes literacy important.

So this is my rough-draft attempt to unravel the confusion so that at least I can understand it – it’s a bit of sense-making for me. Perhaps you will find it useful too. Some of this is not far off the underpinnings of the multiliteracy camp (albeit with notably different conclusions) and one of my main conclusions will be very similar to what many others have concluded too: that literacy spans many skills, tools and modalities, and is highly contextualized to a given culture at a given time. 

Culture and technology

When they pass a certain level of size and complexity, societies need more than language, ritual, stories, structures and laws passed by word of mouth (mostly things that demand physical co-presence) in order to function. They need tools to manage the complexity, to distribute cognition, replicate patterns, preserve structures, build new ones, pass ideas around, and to bind a dispersed society together. Since the invention of printing, most of the tools that play this role have been based on the technologies of text, which makes reading and writing fundamental to participation in a modern society and its numerous cultures and subcultures.

To be literate has, till recently, simply meant that you can do text. There may also be some suggestion of abilities to decipher, analyze, synthesize and appreciate what is read: these are at least the product of literacy if not a part of it, and they are among the main reasons we need literacy. But the central point here is that people who are literate, in the traditional sense, are simply able to operate the technology of writing, whether as consumers, producers or both. The reason this is ‘literacy’ rather than simply a skillset like any other is that text manipulation is a prerequisite for people to participate in their culture. It lets them draw on accumulated knowledge, add to it, and operate the social and organizational machinery. At its most basic, this is a pragmatic need: from filling in forms and writing letters to reading signs, labels on food, news, books, contracts and so on. Beyond that, it is also a means to disseminate ideas, challenges, and creative thought in a society. It is furthermore a fundamental technology for learning, arguably second only to language itself in importance. More than that, it is a technology to think with and to extend our thinking far beyond what we could manage without such assistance. It lets us offload and enhance our cognition. This remains true, despite multiple other media vying for our attention, most of which incorporate text as well as other forms. I could not do what I am doing right now without text, because it is scaffolding and extending the ideas I started with. Other media and modalities can in some contexts achieve this end too and, for some purposes, might even do it better. But only text does it so sweepingly across multiple cultures, and nothing but text has such power and efficiency. In all but the most limited of cultures, text performs culture, and text makes culture: not all of it, by any means, but enough to matter more than most other learned technology skills.

Other ways to perform culture

There have for countless millennia been many other media and tools for cultural transmission and coordination, including many from way before the invention of writing. Paintings, drawings, sculpture, dance, music, rituals, maps, architecture, furniture, transport systems, sport, games, roads, numbers, icons, clothing, design, money, jewellery, weapons, decoration, litany, laws, myths, drama, boats, screwdrivers, door-knobs and many many more technologies serve (often amongst their other functions) as repositories of cognition, belief, structure and process. They are not just the signs of a culture: they play an active role in its embodiment and enactment. But text, maybe hand in hand with number, holds a special place because of its immense flexibility and ubiquitous application. Someone else can make roads or paintings or door-knobs and everyone else can benefit without needing such skills – this is one of the great benefits of distributed labour. But almost everyone needs skill in text, or at least needs to be close to someone with it. It is far from the only fruit, but everyone needs it, just to participate in the cultures of a society.

Cultures and technologies

There are many senses in which we might consider technology and culture to be virtually synonymous. Both are, as Ursula Franklin puts it, ‘the way things are done around here’. Both concern process, structure and purpose. However, I think that there are many significant things about cultures  – attitudes, frames of mind, beliefs, ways of seeing, values, ideologies, for instance – that may be nurtured or enacted by technology, but that are quite distinct from it. Such things are not technological inventions – they are the consequence, precursors and shapers of inventions. Cultures may, however, be ostensively defined by technologies even if they are not functionally identical with them. Archeologists, sociologists and historians do it all the time. Things like language, clothing, architecture, tools, laws and so on are typically used to distinguish one culture from another.

One of the notable things about technologies is that they tend to evolve towards both increasing complexity and increasing specialization. This is a simple dynamic of the adjacent possible. The more we add, the more we are able to add, the more combinations and the more new possibilities that were unavailable to us before reveal themselves, so the more we diversify, subdivide, concatenate and invent. Thus it goes on ad infinitum (or at least ad singularum). Technologies tend to continuously change and evolve, in the absence of unusual forces or events that stop them. Of course, there are countless ways that technologies, notably in the form of religions, can slow this down or reverse it, as well as catastrophes that may be extrinsic or that may result from a particularly poor choice of technologies (over-cultivation of the land, development of oil-dependency, nuclear power, etc). There are also many technologies that play a stabilizing rather than a disruptive role (education systems, for example). Overall, however, viewed globally, in large cultures, the rate of technological change increases, with ever more rapid lifecycles and lifespans.  This means that skills in using technologies are increasingly deictic and increasingly short-lived or, if they survive, increasingly marginalized. In other words, they relate specifically to contexts outside of which they have different or no meaning, and those contexts keep changing thanks to the ever-expanding adjacent possible. Skills and techniques become redundant as contexts change and cultures evolve. That’s a slight over-simplification, but the broad pattern is relentless.

Towards a broader definition of ‘literacy’

Literal literacy is the ability to use a particular technology (text) to give us the ability to learn from, interact with and add to our various different cultures. The label implies more than just reading and writing: to be literate implies that, as a consequence of reading and writing, stuff has been and will be read – not just reading primers, but books, news, reports and other cultural artefacts. In the recent past, text was about the most significant way (after talking and showing) that cultural knowledge was disseminated. In recent decades, there have been plentiful other channels, including movies, radio, TV, websites, multimedia and so on. It was only natural that people would see the significance of this and begin to talk about different kinds of literacy, because these media were playing a very similar cultural role to reading and writing. The trouble is that, in doing so, the focus shifted from the cultural role to the technology itself. At its most absurd, it resulted in terms like ‘computer literacy’ that led to initiatives that were largely focused on building technical skills messily divorced from the cultures they were supporting and of little or no relevance to being an active  member of such a culture.

So here’s a tentative (re)definition of ‘literacy’ that restores the focus: literacy is the prerequisite set of technological skills needed for participation in a culture.  And, of course, we are all members of many cultures. There are other things that matter in a culture apart from technological skills, such as (for example) a playful spirit, honesty, caring for others, good judgement, curiosity, ethical sensibility, as well as an ability to interpret, synthesize, classify, analyze, remix, create and seek within the cultural context. These are probably more important foundations of most cultures than the tools and techniques used to enact them. But, though traits like these can certainly be nurtured, inculcated, encouraged, shown, practiced, learned and improved, they are not literacies. These are the values and valued traits in a culture, not the skills needed to be a part of it, though there is an intimate iterative relationship between the two. In passing, I think it is those traits and others like them that education is really aimed at developing: the rest, the literacy part, is transient and supportive. We don’t have values and propensities in order to achieve literacy. We learn most of them at least partly through the use of literacies, and literacies are there to support them and let them flourish, to provide mechanisms through which they can be exercised.

My suggestion is that, rather than defining a literacy in terms of its technologies, we should define it in terms of the particular culture it supports. If a culture exists, then there is a literacy for it, which is comprised of a set of skills needed to participate in that culture. There is literacy for being a Canadian, but there is equally literacy for being part of the learning technologies community (and for each of its many subcultures), being a researcher, a molecular scientist, a member of a family or of a local chess club. There is literacy for every culture we belong to. Some technological skillsets cross multiple cultures, and some are basic to them. The first of these is nearly always language. Most cultures, no matter how trivial and constrained, have their own vocabularies and acceptable/expected forms of language but, apart from cases where languages are actually a culturally distinguishing factor (e.g. many nations or tribes) they tend to inherit most of the language they use from a super-culture they are a part of. Reading and writing are equally obvious examples of skills that cross multiple cultures, as are numeracy skills. This is why they matter so much – they are foundational. Beyond that, different technologies and consequent skills may matter as much or more in different cultures. In a religious culture these might include the rules, rituals, principles, mythologies and artefacts that define the religion. In a city culture they could include knowledge of bylaws, transit systems, road layouts, map-reading, zones, and norms. In an academic culture it might relate to (for instance) methodologies, corpora, accepted tenets, writing conventions, dress standards, pedagogies, as well as the particular tools and methods relating to the subject matter. In combination, these skills are what makes someone in a given culture literate in that culture.

For instance

Is there such a thing as computer literacy? I’d say hardly at all; in fact, it makes little sense to think in those terms. It’s a bit like claiming there is pen literacy, table literacy or wall literacy. But there might be computing literacy, inasmuch as there may be a culture of computing. In fact, once upon a time, when dinosaurs roamed the earth and people who used computers had to program them themselves, it might have been a pretty important culture that anyone who wished to use computers for any purpose at all would need to at least dip their toes in and, most likely, become a part of. That culture is still very much there, but being part of it is no longer a prerequisite of owning a computer – computing culture is now the preserve of a relatively tiny band of geeks who are dwarfed in number by those who simply use computers. The average North American home has dozens of computers, but few of their users need to or want to be part of a computing culture. They just want to operate their TVs, drive their cars, use their phones, take photos, browse the Web, play the keyboard, etc. This is as it should be. Those in a computing culture are undoubtedly still an important tiny band who do important things that affect the rest of the world a lot, but they are just another twig at the end of a branch of the cultural tree, not the large stem that they once were. Within what is left of that computing culture there are a lot of overlapping computing sub-cultures: engineers, bricoleurs, hardware freaks, software specialists, interaction designers, server managers, programmers, object-oriented programmers, PHP enthusiasts, iOS/Mac users, Android/Windows users, big-endians, little-endians. Each sub-culture has its own literacy, its own language, its own technologies on which it is founded, as well as many shared commonalities and cross-cutting concerns.

Is there such a thing as ‘digital literacy’? Hardly. There is no significant distinctive thing that is digital culture, so there is no such thing as digital literacy. Again, like computing culture, once upon a time there probably was such a thing and it might have mattered. I recall a point near the start of the 1990s, as we started to build web servers, connect Gopher servers, use email and participate in Usenet Newsgroups, at which it really did seem that we were participating in a new culture, with its own evolving values, its own technologies, its own methods, rules, and ethics. This has almost entirely evaporated now. That culture has in part been absorbed and diffused, in part branched into subcultures. Being ‘digital’ is no longer a way of defining a culture that we are a part of, no longer a way of being. Unless you are one of the very few who have not in the last decade or so bought a telephone, a TV, a washing machine, a stove, or one of countless other digital devices, you are ‘digital’. And, if there were such a thing as a digital culture, you would almost certainly be a part of it if you are reading this. This is too tenuous a thing – it has nothing to bind it apart from the use of digital devices that are almost entirely ubiquitous, at least in first-world cultures, and that are too diverse to bind a culture together. There are, as a result, insufficient shared values to make it meaningful any more. It is, however, still possible to be anti-digital. Some digital luddites (I mean this non-pejoratively, to refer to anyone who deliberately eschews digital technologies) do very much have cultures and probably have their own literacies. And there might well be literacies that relate to specific digital technologies and subsets of them. Twitter has a culture, for instance, that implies rules, norms, behaviours, language and methods that anyone participating should probably know. The same may be (and at some point certainly was) true of Facebook, but I think that is less obvious now.

Network culture is probably still a thing, but it is already fading in much the same way that digital culture has faded, with ubiquity, diversity and specialization each taking bites out of it. We have seen network culture norms develop and spread. New vocabularies have been developed with subtle nuances (LOL, ROFL, LMFAO) that often branch into meanings that may only be deciphered by a few sub-cultures but that may subsequently spread into other cultures (TIL, RT, TLDR, LPT). We have had to learn new skills, figuring out how to negotiate privacy, filter bubbles, trolls, griefing, effective tagging, filtering, sorting, unfriending and friending, and much, much more, in order to participate in a social network culture, one that is (for now) still a bit distinct from other cultures. But that culture has already diversified, spread, diffused, and it is getting more diffuse every day. As it becomes larger and more diverse it ceases to be a relevant means of identifying people, and it ceases to be something we can identify with.

Much of the reason for network culture’s retreat is technological. It was enabled by an assembly of technologies and spawned new ones (norms, conventions, languages, etc) but, as they evolve, other technologies will render it irrelevant. Technologies often help to establish cultures and may even form their foundation but, as they and the cultures co-develop, the technologies that helped build those cultures stop being definitional of them. Partly this results from diffusion, as ways of thinking creep back into the broader super-culture and as more and more diverse cultures spread into it. Partly it is because new technologies take their place and diversify into niches. Partly it is because, rather than us learning to use technologies, they learn to use us. This sounds creepier than it really is: what I mean is that individual inventors see the adjacent possibles and grab them, so technologies change and, in many cases, become embedded, replacing our manual roles in them with pre-orchestrated equivalents. Take, for example, a trivial thing like emoticons, images built from arbitrary text characters, that take some of the role of phatic communication in text communication – like this :-). Emoticons are increasingly being replaced by standardized emojis, like this 🙂. Bizarrely, there are now social networks based on emoji that use no text at all. I am intrigued by the kind of culture that this will entail or support but the significant point here is that what we used to have to orchestrate ourselves is now orchestrated in the machine. Consequently, the context changes, problems are solved, and new problems emerge, often as a direct result of the solution. Like, how on earth do you communicate effectively with nothing but emojis 😕?

Where do we go from here? 

Rather than constantly sub-divide literacies into ever more absurd niches named for the tools to which they relate, or attempt to find bridging competences or values that underlie them and call those multiliteracies (or whatever), I propose that we should think of a literacy as being a highly situated set of skills that enable us to play a role as an operator in any given social machine, as creators and/or consumers of a culture – any culture and every culture. The specificity we choose should be determined by the culture that interests us, not by any predetermined formula. Each subculture has its own language, tools, methods, and signs, and each comes with a set of shared (often contested) attitudes, beliefs, values and passions that both drive and are driven by the technologies they use. As a result, each has its own history, which branches from the histories of other subcultures, helping to make it more distinct. This chain of path dependencies helps to reinforce a culture and emphasize its differences. It can also lead to its demise.

In most if not all cases, literacy is an assembly of skills and techniques, not a single skill. ‘Literacy’ is thus simply a label for the essential skills and techniques needed to actively participate in a given culture. Such a culture may be big or small. It may span millennia or centuries, but it may span only decades, years or (maybe) months or even weeks or days. It may span continents or exist only in a single room. I have, for example, been involved with courses, workshops and conferences that have evolved their own fleeting cultures, or at least something prototypical of one. In my former job I shared an office with a set of colleagues that developed a slightly different culture from that of the office next door. Of course, the vast majority of our culture was shared because we performed similar roles in the same department in the same organization, the same country, the same field, the same language, the same ethos. But there were differences that might, in some contexts and for some purposes, be important. For most contexts, they were probably not.

Researching literacies 

Assuming that we know what culture we are looking at, identifying literacy in any given culture is simply (well…simply-ish) a question of looking at the technologies that are used in that culture. While technology use is far from a complete definition of a culture, what makes it distinct from another may be described in terms of its technologies, including its rules, tools, methods, language, techniques, practices, standards and structures. This is a straightforward way of thinking about it, if seemingly a bit circular: we identify cultures by their technology uses, and define literacy by technology use in a culture. I don’t think this apparent circularity is a major issue, however, as this is an iterative process of discovery: we may start with coarse differentiators that distinguish one culture from another but, as we examine them more closely, we will almost certainly find others, or find further differentiators that indicate subcultures. A range of methods and methodologies may be used here, from grounded theory to ethnography, from discourse analysis to Delphi methods, simple observation, questionnaires, interviews, focus groups, and so on. If we want to know about literacy in a culture, we have to discover what technologies are foundational in that culture.

Most of the cultures we belong to are subcultures of some other or others, while others straddle borders between different and otherwise potentially unrelated cultures.  Some skills that partially constitute a given literacy will cross many other cultural boundaries. Almost all will involve language, most will involve reading and writing, many will involve number, lots will involve visual expression, quite a few will involve more or less specific skills using machines (particularly software running on computers, some of which may be common). The ability to create will usually trump the ability to consume although, in some cultures, prosumption may be a defining or overwhelmingly common characteristic (those that emerge in social networks, for instance).

This all implies that a first concern when researching literacy for a given culture is to identify that culture in the first place, and to decide why it is of interest. While this may in some cases be obvious, there may often be subcultures and cross-cultural concerns that could make it more complex to define. One way to help separate out different cultures is to look at the skills, terminology, technologies, implicit and explicit rules, norms, and patterns of technology use in the subset of people that we are looking at. If there are patterns of differences, then there is a good chance that we have identified a cultural divide of some kind. A little more easily, we can also look at why people are excluded from a culture and seek to discover the things people need to learn to become a part of it – to look at the things that distinguish an outsider from an insider and how people transition from one to the other.

For example, the literacy for the culture of a country is almost entirely defined by invention. Countries are technologies, first and foremost. They have legislated (if often disputed) borders and boundaries, laws, norms, language, ways of doing things, patterns, establishments, and institutions that are almost entirely enshrined in technology. It is dead easy to spot this particular culture and mostly simple enough to figure out who is not in it and, normally, what they need to do to become a part of it. To be literate in the context of a country is to have the tools to be able to know and to actively interact with the technologies that define it. To give a simple example, although it is quite possible to be Canadian with only a limited grasp of English and/or French, part of what it means to be literate in Canadian culture is to speak one or (ideally) both languages. Other languages are a bonus, but those two are foundational. It is also possible to see similar patterns in religious cultures, academic cultures, sports cultures, sailing cultures and so on. We can see it in subcultures – for example, goths and hipsters are easily identified by a set of technologies that they use and create, because many of them are visible and definitional.  It gets trickier once we try to find subcultures of such easily identified sets but, on the whole, different technologies mark different cultures.

What makes all this technical detail worth knowing is not that different sets of people use different tools but that there are consequences of doing so. Technologies have a deep impact on attitudes, values, beliefs and relationships between people. In turn, these values and beliefs equally impact the technologies that are used, developed, and valued. This is what matters and this is what is worth investigating. This is the kind of knowledge that is needed in order to effect change, whether to improve literacy within a culture or to change the culture itself. For example, imagine a university that runs on highly prescriptive processes and a reward structure based on awards for performance. You may not have to look far to find an example. Such a university might be dysfunctional on many counts, either because of a lack of literacy in the technologies or because the technologies themselves are poorly considered (or both). One way to improve this would be to ensure that all its members are able to operate the processes and gain awards. This would be to improve literacy within the culture and would, consequently, reinforce it and sustain it. This might be very bad news if the surrounding context changes, making it significantly harder to adapt and change to new demands, but it would be an improvement by some measures. Another, not necessarily conflicting, approach would be to change or eliminate some of the processes, and to get rid of or change the nature of rewards for performance: to modify the machinery that drives the culture. This would change the culture and thus change the literacy needed to operate within it. It might do unexpected things, especially as the existing attitudes and values may be at odds with the new culture: people within it would be literate in things that are no longer relevant or useful, while not having the literacy needed to operate the new tools and structures. Much existing work surrounding x-literacies fails to make this crucial distinction clearly. By focusing largely on the technological requirements and ignoring the culture, we may reinforce things that are useless, redundant or possibly harmful. For instance, multimedia literacy might be great, sure. But for what and for whom? And in what forms? Different skillsets are needed in different contexts, and will have different value in different cultures.

To conclude

I have proposed that we should define literacy as the skills needed to operate the technologies that underpin a particular culture. While some of those skills are common to many cultures, the precise set and the form they take is likely different in almost every culture, and cultures evolve all the time so no literacy is forever. I think this is a potentially useful perspective.

We cannot sensibly define a set of skills or propensities without reference to the culture that they support, and we should expect differences in literacies both between different cultures and across time and space within any given culture. We can ask meaningful questions about the literacy of (say) a culture of people who use Twitter for learning and research, as opposed to that needed by people who only use Twitter to stay in touch with one another. We can look at different literacies for people who are Canadian, people who are in schools, people of a particular religion, people who like a particular sport, people who research learning technologies, people in a particular office, people who live in Edmonton, not to mention their intersections and their subsets. By looking at literacy as simply a set of skills needed for a given culture, we can gain considerable insight into the nature of that culture and its values. As a result, we can start to think more carefully about which skills are important, whether we want to simply support the acquisition of those skills, or whether we want to transform the culture itself.

This is just my little bit of sense making. I have very probably trodden territory that is very familiar to a lot of people who research such things with more rigour, and I doubt very much that any of it is at all original. But I have been bothered by this issue for a while and it now seems a little clearer to me what I think about this. I hope it has encouraged you to think about what you think too. Feel free to share your thoughts in the comment box!

Researching things that don't exist

As the end of my sabbatical is approaching fast, I am still tinkering with a research methodology based on tinkering (or the synonymous bricolage, to make it sound more academic). Tinkering is an approach to design that involves making things out of what we find around us, rather than following an engineered, designed process. This is relatively seldom seen as a valid approach to design (though there are strong arguments to be made for it), let alone to research, though it underpins much invention and discovery. Tinkering is, by definition, a step into the unknown, and research is generally concerned with knowing the unknown (or at least clarifying, confirming or denying the partly- or tentatively-known). This is not a direct path, however.

Research can take many forms but, typically and I think essentially, the sort that we do in academia is a process of discovery, rather than one of invention. This is there in the name – ‘recherche’ (the origin of the term) means to go about seeking, which implies there is something to be found. The word ‘discovery’ suggests that there is something that exists that can be discovered, whereas inventions, by definition, do not exist, so they are never exactly discovered as such.

While we can seldom substitute ‘invention’ for ‘discovery’, the borders are blurry. Did Maxwell discover his equations or did he invent them? What he discovered was something about the order of the universe that his (invented) equations express, but the equations formed an essential and inextricable part of that discovery. R&D labs get around the problem by simply using two terms, so that you know they are doing both. The distinction is similarly blurry in art: an artwork is normally not, at least in a traditional sense, research because, for most art, it is a form of invention rather than discovery. But sculptors often talk of discovering a form in stone or wood. And, even for the most mundane of paintings or drawings, artists are in a dialogue with their media and with what they have created, each stroke building on and being influenced by those that came before. A relative of mine recently ran an exhibition of works based on the forms suggested by blots of ink and water, which illustrates this in sharper relief than most, and I do rather like these paintings from Bradley Messer that follow the forms of wood grain. Such artists discover as much as they create and, like Maxwell’s equations, their art is an expression of their discovery, not the discovery itself, though the art is equally a means of making that discovery. Discovery is even more obvious in ‘found’ art such as that of some of the Dadaists, though the ‘art’ part of it is arguably still the invention, not the discovered object itself – Duchamp’s Fountain, for example. And, as Dombois observes, there are some very important ways research and art can connect: research can inform art and be about art, and art can be about research, can support research and can arise from it. Dombois also believes art can be a means of performing research. Komar and Melamid’s ‘most-wanted paintings’ project is a good example of art not only being informed by research but also itself being a form of research. Their paintings resulted from research into what ‘the people’ wanted in their paintings. The paintings themselves challenge what collective taste means, and the value of it, changing how we know and make use of such information. And the artwork itself is the research, of which the paintings are just a part.

Inventions (including art works) use discoveries and, from our inventions, we can make discoveries (including discoveries about our inventions). Invention makes it possible to make novel discovery, but the research is that discovery, not the inventions that lead to it. Research perceived as invention means discovering not what is there but what is not there, which is a little bizarre. More accurately, perhaps, it is seeking to discover what is latently there. It is about discovering possible futures. But even this is a bit strange, inasmuch as latent possibilities are, in many cases, infinite. I don’t think it counts as discovery if you are picking a few pieces from a limitless range of possibilities. It is creation that depends entirely on what you put into it, not on something that can be discovered in that infinity. But, perhaps, the discovery of patterns and regularities in that infinite potential palette is the research. This is because those infinite possibilities are maybe not as infinite as they seem. They are at the very least constrained by what came before, as well as by a wide range of structural constraints that we impose, or have imposed upon us. What is nice about tinkering is that, because it is concerned with using things around us, the forms we work on already have such patterns and constraints. 

Tinkering is concerned with exploring the adjacent possible. It is about looking at the things around you (which, in Internet space, means practically everywhere) and finding ways to put them together in new ways to do new things. These new things can then, themselves, create new adjacent possibles, and so it goes on. Beyond invention, tinkering is a tool for making new discoveries. It is a way of having a conversation with objects in which the tinker manipulates the objects and the objects in turn suggest ways of putting them together. It can inspire new ways of thinking. We discover what our creations reveal. Writing (such as this) is a classic example of this process. The process of writing is not one of recording thoughts so much as it is one of making new ones. We scaffold our thoughts with the words we write, pulling ourselves up by our own bootstraps as we do so in order to build further thoughts and connections.

The construction of all technologies works the same way, though it is often hidden behind walls of abstraction and deliberate design. If, rather than design-then-build, we simply tinker, then the abstraction falls away. The paths we go down are unknown and unknowable in advance, because the process of construction leads to new ideas, new concepts, new possibilities that only become visible as we build. Technologies are (all) tools to think with at least as much as they are tools to perform the tasks we build them for, and tinkering is perhaps the purest way of building them. And this is what makes tinkering a process of discovery. The focus is not on what we build, but on what we discover as a direct result of doing so – both process and product. Tinkering is a scaffold for discovery, not discovery itself. This begins to feel like something that could underpin a methodology.

With this in mind, here is an evolving set of considerations and guidelines for tinkering-based research that have occurred to me as I go along.

Exploring the possible

To be able to explore the adjacent possible, it is first necessary to explore the possible. In fact, it is necessary to be immersed in the possible. At a simple level, this is because the bigger your pile of junk, the more chances there are of finding interesting pieces and interesting combinations. But there are other sub-aspects of this that matter as much: the nature of the pile of junk, the skills to assemble the junk, and immersion in the problem space.

1) The pile of junk

Tinkering has to start with something – some tools, some pieces, some methods, some principles, some patterns. It is important that these are as diverse as possible, on the whole. If you just have a pile of engine parts then the chances are you are going to make another engine although, with a tinker-space containing sufficiently diverse patterns, you might make something else. There is a store near me that sells clocks, lights and other household objects made from bits of old electrical equipment and machinery, and it is wonderful. Similarly, some of the finest blues musicians can make infinite complexity out of just three chords and a (loosely) pentatonic scale. But having diverse objects, methods, patterns and principles certainly makes it easier than just having a subset of it all.

It is important that the majority of the junk is relatively complex and self-contained in itself – that it does something on its own, that it is already an assembly of something. Doing bricolage with nothing but raw materials is virtually impossible – they are too soft (in a technology sense). You have to start with something, otherwise the adjacent possible is way too far away and what is close is way too boring. The chances are that, unless you have a brilliant novel idea (which is a whole other territory and very rare) you will wind up making something that already exists and has probably been done better. This is still scrabbling around in the realms of the possible. The whole point is to start with something and assemble it with something else to make it better, in order to do something that has never been done before. That’s what makes it possible to discover new things. Of course, the complexity does not need to be in physical objects: you might have well-assembled theories, models, patterns, belief systems, aesthetic sensibilities and so on that could be and probably will be part of the assembly. And, since we are not just talking about physical objects but methods, principles, patterns etc, this means you need to immerse yourself in the process – to do it, read about it, talk about it, try it. 

2) The tools of assembly

It is not enough to have a great tinker-space full of bits and pieces. You need tools to assemble them. Not just physical tools, but conceptual tools, skills, abilities, etc. You can buy, make, beg, borrow or steal the tools, but skills to use them take time to develop. Of course, one of the time-honoured and useful ways to do that is to tinker, so this works pretty well. Again, this is about immersion. You cannot gain skills unless you apply them, reflect on them, and apply them again, in a never-ending cycle.

There is a flip side to this though. If you get to be too skillful then you start to ignore things that you have discovered to be irrelevant, and irrelevant things aren’t always as irrelevant as they seem. They are only irrelevant to the path you have chosen to tread. Treading multiple paths is essential so, once you become too much of an expert, it is probably time to learn new skills. It is hard to know when you are too much of an expert. Often, the clue is that someone with no idea about the area suggests something and you laughingly tell them it cannot be done. Of course it can. This is technology. It’s about invention. You are just too smart to know it.

Being driven by your tools (including skills) is essential and a vital part of the methodology – it’s how the adjacent possible reveals itself. But it’s a balance. Sometimes you go past an adjacent possible on your way and then leave it so far behind that you forget it is there at all. It sometimes takes a beginner to see things that experts believe are not there. It can be done in all sorts of ways. For example, I know someone who, because he does not want to be trapped by his own expertise, constantly retunes his guitar to new tunings, partly to make discoveries through serendipity, partly to be a constant amateur. But, of course, a lot of his existing knowledge is reusable in the new context. You do not (and cannot) leave expertise behind when learning new things – you always bring your existing baggage. This is good – it’s more junk to play with. The trick is to have a ton of it and to keep on adding to it.

3) The problem space

While simply playing with pieces can get you to some interesting places, once you start to see the possibilities, tinkering soon becomes a problem-solving process and, as you follow a lead, the problem becomes more and more defined, almost always adding new problems with each one solved. Being immersed in a problem space is crucial, which tends to make tinkering a personal activity, not one that lends itself well to formally constructed groups. Scratching your own itch is a pretty good way to get started on the tinkering process because, having scratched one itch, it always leads to more or, at least, you notice other itches as you do so.

If you are scratching someone else’s itch then it can be too constraining. You are just solving a known problem, which seldom gets you far beyond the possible and, if it does, your obligations to the other person make it harder for you to follow the seam of gold that you have just discovered along the way – which is really the point of it. It’s the unknown problems, the ones that only emerge as we cross the border of the adjacent possible, that matter here. Again, though, this is a balance. A little constraint can help to sustain a focus, and doing something that is not your own idea can spark serendipitous ideas that turn out to be good.

Just because it is not really a team process doesn’t mean that other people are not important to it. Talking with others, exchanging ideas, gaining inspiration, receiving critique, seeing the world through different eyes – all this is very good. And it can also be great to work closely with a small number of others, particularly in pairs – XP relies on this for its success. A small number of people do not need to be bogged down with process, schedules, targets, and other things that get in the way of effective tinkering; they can inspire one another, spot more solutions, and sustain motivation when the going gets rough.

The Structural Space

One of the points of bricolage is that it is structured from the bottom up, not the top down. Just because it is bottom-up structure does not mean it is not structure. This is a classic example of shaping our tools and our tools shaping us (as McLuhan put it), or shaping our dwellings while our dwellings shape our lives (as Churchill put it a couple of decades earlier). Tinkering starts with forms that influence what we do with them, and what we do with them influences what we do next – our creations and discoveries become the raw material for further creations and discoveries. Though rejecting deliberate structured design processes, I have toyed with and tried things like prototyping, mock-ups and sketches of designs, but I have come to the opinion that they get in the way – they abstract the design too much. What matters in bricolage is picking up pieces and putting them together. Anything beyond vague ideas and principles is too top-down. You are no longer talking with the space but with a map of the space, which is not the same thing at all.

Efficiency

One of the big problems with tinkering is that it tends to lead to highly inefficient design, from an engineering perspective. Part of the reason for that is that path dependencies set in early on. A bad decision early can seriously constrain what you do later. One has only to look at our higher education systems, the result of massively distributed large scale tinkering over nearly a thousand years, to see the dangers here. The vast majority of what we continue to do today is mediaeval in origin and, in a lot of cases, has survived unscathed, albeit assembled with a few other things along the way.

Building from existing pieces can limit the damage – at least you don’t have to pull everything apart if it turns out that it is not a fruitful path. It is also very helpful to start with something like Lego, that is designed to be fitted together this way. Most of my work during my sabbatical has involved programming using the Elgg framework, which is very elegantly designed so that, as long as you follow the guidelines, it naturally forms into at least a decent outline structure. On the other hand, as I have found to my cost, it is easy to put so much work into something that it becomes very discouraging when you have to start again. As the example of educational systems shows, some blocks are so foundational and so deeply linked with everything else that they affect everything that follows and simply cannot be removed without breaking everything.

Working together

Tinkering is quite hard to do in teams, other than as sounding boards for reflection on a process already in motion. It is instructive to visit LegoLand to see how it can work, though. In the play spaces of LegoLand one sees kids (and more than a few adults) working alone on building things, but they are doing so in a very social space. They talk about what they are doing, see what others are doing and, sometimes, put their bits of assemblies together, making bigger and more complex artefacts. We can see similar processes at work in GitHub, a site where programmers, often working alone, post projects that others can fork and, through pull requests, return in modified form to their originators or others, with or without knowing them or interacting with them in any other way. It’s a wonderful evolutionary tinker-space. If programs are reasonably modular, people can work on different pieces independently, and these can then be assembled and reassembled by others. Inspiration, support, patterns of thinking and problem solving, as well as code, flow through the system. The tinkering of others becomes a part of your own tinker-space. It’s a learning space – a space where people learn but also a space that learns. The fundamental social forms for tinkering are not traditional, purpose-driven, structured and scheduled teams (groups), but networks and, more predominantly, sets of people connected by nothing but shared interest and a shared space in which to tinker.

Planning

As well as resulting in inefficient systems, tinkering is not easy to plan. At the start, one never knows much more than the broad goal (that may change or may not even be there at all) and the next steps. You can build very big systems by tinkering (back to education again but let’s go large on this and think of the whole of gaia) but it is very hard to do so with a fixed purpose in mind and harder still to do so to a schedule. At best, you might be able to roughly identify the kind of task and look to historical data to help get some statistical approximation of how long it might take for something useful to emerge.

A corollary of the difficulty of planning (indeed, of the fact that it is counter-productive to do so) is that it is very easy to be thrown off track. Other things, especially those that involve other people who rely on you, can very quickly divert the endeavour. At the very least, time has to be set aside to tinker and, come hell or high water, that time should be used. Tinkering often involves following tenuous threads and keeping many balls in the air at once (mixing metaphors is a good form of tinkering), so distractions are anathema to the effective tinkerer. That said, coming up for a breath of air can remind you of other items in the tinker-chest that may inspire or provoke new ways of assembling things. It is a balance.

Evolution, not design

Naive creationists have in the past suggested that the improbability of finding something as complex as even a watch, let alone the massively more complex mechanisms of the simplest of organisms, means that there must be an intelligent designer. This is sillier than silly. Evolution works by a ratchet, each adaptation providing the basis for the next, with some neat possibilities emerging from combinatorial complexity as well. Given enough time and a suitable mechanism, exponentially increasingly complex systems are not just possible but overwhelmingly probable. In fact, it would be vastly more difficult to explain their absence than their existence. But they are not the result of a plan. Likewise for tinkering with technologies. If you take two complex things and put them together, there is a better than fair chance that you will wind up with something more complex that probably does more than you imagined or intended when you stuck them together. And, though maybe there is a little less chance of disaster than in the random-ish recombinations of natural evolution, the potential for the unexpected increases with the complexity. Most unexpected things are not beneficial – the bugs in every large piece of software attest to that, as do most of my attempts at physical tinkering over the course of my lifetime. However, now and then, some can lead to more actual possibles. The adjacent possible is what might happen next but, in many cases, changes simply come with baggage. Gould calls these exaptations – they are not adaptations as such, but a side-effect or consequence of adaptation. Gould uses the example of the Spandrels of St Marco to illustrate this point, showing how the structure of the cathedral of St Marco, with its dome sitting on rounded arches, unintentionally but usefully created spaces where they met that proved to be the perfect place to put images of saints – in fact, they seem made for them. But they are not – the spaces are just a by-product of the design that were coopted by the creators of the cathedral to a useful purpose. A lot of systems work that way. It is the nature of their assembly to create both constraints and affordances, with path dependencies and patterns early on deeply defining later growth and change. Effective tinkering involves using such spandrels, and that means having to think about what you have built. Thinking deeply.

The Reflection Space

Just tinkering can be fun but, to make it a useful research process, it should involve more than just invention. It should also involve discovery. It is essential, therefore, that the process is seen as one of reflective dialogue with the creations we make. Reflection is not just part of an iterative cycle – it is embedded deeply and inextricably throughout the process. Only if we are able to constructively think about what we are doing, as well as what we have done, can this generate ideas, models, principles and foundations for further development. It is part of the dialogue with the objects (physical, conceptual, etc) that we produce and, perhaps even more importantly, it is the real research output of the tinkering process. Reflection is the point at which we discover rather than just invent. In part it is to think about the meaning and consequence, in part to discover the inevitable exaptations, in part to spot the next adjacent possible. This is not a simple collaboration. Much of the time we argue with the objects we create – they want to be one way but we want them to be another and, from that tension, we co-create something new.

We need to build stories and rich pictures as much as we need to build technologies. Indeed, it doesn’t really matter that much if we fail to produce any useful artefact through tinkering, as long as the stories have value.  From those stories spin ideas, inspirations, and repeatable patterns. Stories allow us to critique what we have done and learn from it, to see it in a broader context and, perhaps, to discover different contexts where the ideas might apply. And, of course, these stories should be shared, whether with a few friends or the world, creating further feedback loops as well as spreading around what we have discovered.

Stories don’t have to be in words. Pictures are equally and often more useful and, often most useful of all, the interactions with our creations can tell a story too. This is obviously the case in things like games, Arduino projects or interactive site development but is just as true of making things like furniture, accessories and most of the things that can be made or enhanced with Sugru.

Here are two brief stories that I hope begin to reveal a little of what I mean.

A short illustrative story

Early in my sabbatical I wrote one Elgg plugin that, as it emerged, I was very pleased with, because it scratched an itch that I have had for a long time. It allowed anyone to tag anything, and for duplicate tags used by different people to be displayed as a tag cloud instead of the normal list of tags that comes with a post. This was an assembly of many ideas, and was a conversation with the Elgg framework, which provided a lot of the structure and form of what I wanted to achieve. In doing it, I was learning how to program in Elgg but, in shaping Elgg, I was also teaching it about the theories that I had developed over many years. If it had worked, it would have given me a chance to test those theories, and the results would probably have led to some refinements, but that was really a secondary phase of the research process and not the one that I was focusing on.

Before any other human being got to use the system, the research process was shaping and refining the ideas. With each stage of development I was making discoveries. A big one was the per-post tag cloud. My initial idea had simply been to allow people to tag one another’s posts. This would have been very useful in two main ways. Firstly, it would give people the chance to meaningfully bookmark things they had found interesting. Rather than the typical approach of putting bookmarks into organized hierarchies, tags could be used to apply faceted categorizations, allowing posts to cross hierarchical boundaries easily and enabling faceted classification of the things people found interesting. Secondly, the tags would be available to others, allowing social construction of an ontology-like thing, better search, a more organized site. Tags are already very useful things but, in Elgg, they are applied by post authors and there are not enough of them for strong patterns to develop on their own in any but quite large systems. One of the first things I realized was that this meant the same tag might be used for the same post more than once. It was hard to miss, in fact, because what I saw when I ran the program was multiple tags for each post – the system I had assembled was shouting at me. Having built a tag cloud system in the 1990s, before I even knew the word ‘tag’, let alone ‘tag cloud’, I was primed to spot the opportunity for a tag cloud, which is a neat way to give shape and meaning to a social space. Individually, tags categorize into binary categories. Collectively, they become fuzzy and scalar – an individual post can be more of one tag than another, not because some individual has decided so, but because a crowd has decided so. This is more than a folksonomy. It is a kind of collaborative recommender system, a means to help people recognize not just whether something is good or bad but in what ways it is good or bad. Already, I was thinking of my PhD work which involved fuzzy tags I called ‘qualities’ (e.g. ‘good for beginners’, ‘comprehensive’, ‘detailed’, etc) that allowed users of my CoFIND system not just to categorize but to rate posts, on multiple pedagogical dimensions. Higher tag weight is an implicit proxy for saying that, in the context of what is described by this tag, the post has been recommended. As I write this (writing is great tinkering – this is the power of reflection) I realize that I could explicitly separate such tags from Elgg’s native tags, which might be a neat way to overcome the limitations of the system I wrote about 15 years ago, which was a good idea but very unusable. Anyway…
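
To make the per-post tag cloud idea a little more concrete, here is a minimal conceptual sketch in Python. It is not the actual Elgg plugin code, and the data and names are purely illustrative; it simply shows how taggings from several people collapse into weighted, fuzzy categories for a single post rather than the usual binary tags.

```python
from collections import Counter

# A minimal conceptual sketch of the per-post tag cloud (illustrative data and
# names only - this is not the Elgg plugin code). Each tagging event records
# who applied which tag to which post; duplicate tags from different people
# become weights, turning binary categories into fuzzy, scalar ones.

taggings = [
    ("alice", 42, "good for beginners"),
    ("bob",   42, "good for beginners"),
    ("carol", 42, "comprehensive"),
    ("dave",  42, "good for beginners"),
]

def per_post_cloud(taggings, post_id):
    """Aggregate everyone's tags on one post into a weighted cloud."""
    counts = Counter(tag for user, pid, tag in taggings if pid == post_id)
    total = sum(counts.values())
    # Weight each tag by its share of all tags applied to this post.
    return {tag: count / total for tag, count in counts.items()}

print(per_post_cloud(taggings, 42))
# {'good for beginners': 0.75, 'comprehensive': 0.25}
```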

It worked like a dream, exactly as I had planned, up to the point that I tried to allow people to see the things they had tagged, which was pretty central to the idea and without which the whole thing was pretty pointless: it is highly improbable that individuals would see great value in tagging things unless they could use those tags to find and organize stuff on the site. As it turns out, the Elgg developers never thought tags might be used this way, so the owner of a tag is not recorded in the system. The person who tags a post is just assumed to be the owner of the post. I’m not a great Elgg developer (which is why I did not realise this till it was too late) but I do know the one cardinal rule – you never, ever, ever mess with the core code or the data model. There was nothing I could do except start again, almost completely from scratch. That was a lot of work – weeks of effort. It was not entirely wasted – I learned a lot in the process and that was the central purpose of it all. But it was very discouraging. Since then, as I have become more immersed in Elgg, my skills have improved. I think I can now see roughly how this could be made to work. The reason I know this is because I have been tinkering with other things and, in the process, found a lightweight way of using relationships to link individuals and objects that, in the ways that matter, can behave much like tags. Now that I have the germ of an idea about how to make this pedagogically powerful, hopefully I will have time to do that.
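
The workaround is easier to see in a sketch than in prose. The following is conceptual Python only, my reading of the idea rather than actual Elgg code, and the names are invented for illustration: each tagging is stored as a relationship that records who made it, so "the things I have tagged" becomes a simple query, which is exactly what the native tag model could not give me.

```python
# Conceptual sketch (illustrative names, not Elgg code): store each tagging
# as a relationship that remembers its owner, so tags can be queried both
# by post and by the person who applied them.

relationships = []  # each entry: (owner, relationship_name, target_post)

def tag(owner, tag_name, post_id):
    """Record that `owner` tagged `post_id` with `tag_name`."""
    relationships.append((owner, f"tagged:{tag_name}", post_id))

def things_tagged_by(owner):
    """Everything a given person has tagged - the view native tags could not provide."""
    return [(rel.split(":", 1)[1], post) for who, rel, post in relationships if who == owner]

tag("alice", "good for beginners", 42)
tag("alice", "comprehensive", 99)
print(things_tagged_by("alice"))
# [('good for beginners', 42), ('comprehensive', 99)]
```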

Another illustrative story

One of my little sabbatical projects (which actually turned out to be about the biggest, and it’s not over yet) was to build an OpenBadge plugin. This was actually prompted by and written for someone else. I would not have thought of it as a good itch to scratch, because I happen to know something about badges and something about learning and, from what I have seen, badges (as implemented so far) are at best of mixed value in learning. In the vast majority of instances in which I have seen them used, they are, at the very best, as demotivating as they are motivating. Much of the time it is worse than that: they turn into extrinsic proxies that divert motivation away from learning almost entirely. They embed power structures and create divisions. From a learning perspective, they are a pretty bad idea. On the plus side, they are a very neat way to do credentials, which is great if that is what you are aiming for, opening up the potential for much more interesting separation of teaching and accreditation, diverse learning paths, and distributed learning, so I don’t hate them. In fact, I quite like them. But their pedagogical risks mean that I don’t love them enough to have even considered writing a plugin that implements them.

Despite reservations, I said I would do it. It didn’t seem like a big task because I reckoned I could just lightly modify one of a couple of existing (non-open) badge plugins that had already been written for Elgg.  I also happened to have some parts lying round – my pedagogical principles, the Elgg framework, the Mozilla OpenBadge standard documentation, various code snippets for implementing OpenBadges – that I could throw together. Putting these pieces together made me realize early on that social badging could be a good idea that might help overcome several of my objections to their usual implementations. Because of the nature of Elgg, the obvious way to build such a plugin would be such that anyone could make a badge, and anyone could award one, making use of Elgg’s native fine-grained bottom-up permissions. This meant that the usual power relationships implied in badging would not be such a problem. This was an interesting start.

Because Elgg has no roles in its design (apart from a single admin role for the site builder and manager), and so no explicit teaching roles, this was potentially tricky from a trust perspective – although its network features would mean you could trust awards by people you know, how would you trust an award from someone you don’t know and who is not playing a traditional teacher role in a power hierarchy? Even with the native Elgg option to ‘recommend’ a badge (so more people could assert its validity) this could become chaotic. But my principles told me that teacher control is a bad thing, so I was not about to add a teacher role.

After tossing this idea around for a few minutes, I came up with the idea of inheritable badges – in other words, a badge could be configured so that you could only award it if you had received it yourself. In an instant, this began to look very plausible. If you could trace the badge to someone you trust (e.g. a teacher, a friend, or someone you know is trustworthy), which is exactly what Elgg would make possible by default, then you could trust anyone else who had awarded the badge to at least have the competence that the badge signifies, and so be more likely to be able to accurately recognize it in someone else. This was neat – it meant that accreditation could be distributed across a network of strangers (as in a MOOC) without the usual difficulties of the blind accrediting the blind that tend to afflict peer assessment methods in such contexts. Better still, this is a great way to signify and gain social capital, and to build deeper and richer bonds in a community of strangers. It is, I think, among the first scalable approaches to accreditation in a connectivist context, though I have not looked too deeply into the literature, so I stand to be corrected.

Later, as I tinkered and became immersed in the problem, thinking about how it would be used, I added a further option to let a badge creator specify a prerequisite award (any arbitrarily chosen badge) that must be held before a badge could be awarded. As well as allowing more flexibility than simple inheritance, this meant that you could introduce roles by the back door if you wished, by allowing someone to award a ‘teacher’ badge or similar, and only allowing people holding that badge to make awards of other badges. I then realized this was a generalized case of the inheritance feature, so I got rid of the inheritance feature and just added the option to make the current badge a prerequisite of itself. It is worth noting that this was quite difficult to do – had I planned it from the start, it would have been trivial, but I had to unpick what I had done as well as build it afresh.
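
The prerequisite logic is simple enough to sketch. What follows is a conceptual Python sketch only, with invented badge names and data structures rather than the plugin's actual code: a badge may name any badge, including itself, as its prerequisite, and naming itself reproduces the inheritable case in which only holders can pass it on.

```python
# A conceptual sketch of prerequisite-based awarding (illustrative badge names
# and data structures, not the plugin's actual code). A badge may name any
# badge, including itself, as its prerequisite; naming itself reproduces the
# original 'inheritable' case, where only holders may pass the badge on.

badges = {
    "elgg-basics":   {"prerequisite": "elgg-basics"},  # self-prerequisite = inheritable
    "peer-assessor": {"prerequisite": "teacher"},      # back-door role via a 'teacher' badge
    "teacher":       {"prerequisite": None},           # no prerequisite: anyone may award it
}

awards = {"alice": {"elgg-basics", "teacher"}, "bob": set()}  # who currently holds what

def may_award(awarder, badge_name):
    """An awarder may give a badge only if they hold its prerequisite (if any)."""
    prereq = badges[badge_name]["prerequisite"]
    return prereq is None or prereq in awards.get(awarder, set())

print(may_award("alice", "elgg-basics"))    # True - alice holds the badge herself
print(may_award("bob", "elgg-basics"))      # False - bob must earn it before he can award it
print(may_award("alice", "peer-assessor"))  # True - alice holds the 'teacher' prerequisite
```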

Social badging, peer assessment, scalable viral accreditation, social capital, motivation  – this was looking cool. Furthermore, tinkering with an existing framework suggested other cool things. By default, it was a lot easier to build this if people could award badges to themselves. The logical next step would have been to prevent them from doing this but, as I saw it working, I realised self-badging was a very good idea! It bothered me for a moment that it might be a bit confusing, at least, not to mention appearing narcissistic if people started awarding themselves badges. However, Elgg posts can be private, so people giving themselves badges would not have to show them to others. But they could, and that could be useful. They could make a learning contract with someone else or a group of people, and allow them to observe, thus not only improving motivation and honesty, but also building bonding social capital. So, people could set goals for themselves and award themselves badges when they accomplished them, and do so in a safe social context that they would be in control of. It might be useful in many self-directed learning contexts. 

These were not ideas that simply flowed in my head from start to finish: they came about as a direct result of dialogue with what I was creating, and could only have done so because I already had ideas and principles about things like portfolios, learning contracts and social learning floating around in my toolkit, ready to be assembled. I did include the admin option to turn off self-awarding at a system level in case anyone disagreed with me, and because I could imagine contexts where it might get out of hand. I even (a little reluctantly) made it possible to limit badge awarding to admins only, so that there could be a ‘root’ badge or two that would provide the source of all accreditation and awarding. Even then, it could still be a far more social approach to accreditation than most, making expertise not just something that is awarded with an extrinsic badge, but also something that gives real power to its holder to play an important role in a learning community.

This is not exactly what my sponsors asked for: they wanted automation, so that an administrator could set some criteria and the system would automatically award badges when those criteria had been met. Although I reckon my social solution meets the demand for scalability that lay at the heart of that request, I realized that, with some effort, I could assemble all of this with a karma point plugin that I happened to have in my virtual toolshed in order to enable automated badge awarding for things like posting blogs, etc. Because there was no obvious object for which such an award could be given, as it could relate to any arbitrary range of activities, I made the object providing evidence to be the user’s own profile. Again, this was just assembling what was there – it was an adjacent possible, so I took it. I could, if I had not been lazy, have generated a page displaying all of the evidence, but I did not (though I still might – it is an adjacent possible that might be worth exploring). And so, of course, now it is possible to award a badge to a user, rather than for a specific post, which, though not normally a good idea from a motivation perspective, could have a range of uses, especially when assembled with the tabbed profile we built earlier (what I refer to in academic writings as a ‘context switcher’ and which can be used as a highly flexible portfolio system).
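
For the automated route, a conceptual sketch shows how little machinery is involved. This is Python under my own assumptions rather than the karma plugin itself, and the point values, thresholds and profile path are all made up for illustration: activity accrues points, and crossing a threshold earns a badge, with the user's profile standing in as the evidence object.

```python
# A conceptual sketch of automated, karma-driven awarding (point values,
# thresholds and the profile path are invented for illustration - this is not
# the karma plugin). Activity accrues points; crossing a threshold earns a
# badge, with the user's profile standing in as the evidence object.

POINTS = {"blog_post": 5, "comment": 1}
THRESHOLDS = {"active-contributor": 25}

def karma(activity_log):
    """Sum the points for a user's logged activities."""
    return sum(POINTS.get(action, 0) for action in activity_log)

def auto_awards(user, activity_log):
    """Return any badges earned, each pointing at the profile as its evidence."""
    score = karma(activity_log)
    return [
        {"badge": badge, "evidence": f"/profile/{user}", "points": score}
        for badge, needed in THRESHOLDS.items()
        if score >= needed
    ]

print(auto_awards("alice", ["blog_post"] * 5 + ["comment"] * 3))
# [{'badge': 'active-contributor', 'evidence': '/profile/alice', 'points': 28}]
```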

These are just a sample of many conversations I had with the tools and objects that were available to me. I influenced them, they influenced me. There were plenty of others – exaptations like my discovery that the design I had opted for, which made awards and badges separate objects, meant that I had a way of making awards persistent and of not allowing badge owners to sneakily change them afterwards, for example, thus enhancing trust in the system. Or that the Elgg permissions model made it very simple to reliably assert ownership, which is very important if you are going to distribute accreditation over multiple sites and systems. Or that the fact that it turned out to be an incredibly complex task to make it all work in an Elgg Group context was a blessing, because I therefore looked for alternatives, and found that the prerequisite functionality does the job at least as well, and much more elegantly. Or that the Elgg views system made it possible to fairly easily create OpenBadge assertions for use on other sites. The list goes on.

It was not all wonderful, though. Sometimes the conversation got weird. My plan to start with an existing badge plugin quickly bit the dust. It turns out that the badge plugins that were available were both of the kind I hate – they awarded badges to individuals, not for specific competences. To add injury to injury, they could be awarded only by the administrator, either automatically through accrued points or manually. This was exactly the kind of power structure that I wanted to get away from. From an architectural perspective, making these flawed plugins work the way I wished would have been much harder than writing the plugin from scratch. However, in the spirit of tinkering, I didn’t start completely from scratch. I looked around for a plugin that would do some of the difficult stuff for me. After playing with a few, I opted for the standard Elgg Files plugin, because that ought to have made light work of storing and organizing the badge images. In retrospect, maybe not the best plan, but it was a starting point. After a while I realized I had deleted or not used 90% of the original plugin, which was more effort than it was worth. I also got stuck in a path dependency again, when I wanted to add multiple prerequisites (i.e. you could specify more than one badge as a prerequisite): by that time, my ingenious single-prerequisite model was so firmly embedded that it would have taken more than a solid week to change it. I did not have the energy, or the time. And, relatedly, my limited Elgg skills and lack of forward planning meant that I did not always divide the code into neatly reusable chunks. This continues to cause me trouble as I try to make the OpenBadge feature work. Reflecting on such issues is useful – I now know that multiple inheritance makes sense for this kind of system, which would not have occurred to me if I hadn’t built a system with a single-prerequisite data model. And I have a better idea about what kind of modularity works best in an Elgg system.

Surfing the adjacent possible

Like all stories worthy of the name, my examples are highly selective and probably contain elements of fiction in some of the details of the process. Distance in time and space changes memories, so I cannot promise that everything happened in the order and manner presented here – it was certainly a lot more complicated, messy and detailed than I have described it to be. I think this fictionalizing is crucial, though. Objective reporting is exactly not what is needed in a bricolage process. It is the sense-making that matters, not religious adherence to standards of objectivity. What matters are the things we notice, the things we reflect on and the things we consider to be important. Those are the discoveries.

This is a brief and condensed set of ten of the main principles that I think matter in effective tinkering for research:

  1. do not design – just build
  2. start with pieces that are fully formed
  3. surround yourself with both quantity and diversity in tools, materials, methods, and perspectives
  4. dabble hard – gain skills, but be suspicious of expertise
  5. look for exaptations and surf the adjacent possible
  6. avoid schedules and goals, but make time and space for tinkering, and include time for daydreaming
  7. do not fear dismantling and starting afresh
  8. beware of teams, but cultivate networks: seek people, not processes
  9. talk with your creations and listen to what they have to say
  10. reflect, and tell stories about your reflections, especially to others

As I read these ideas it strikes me that this is the very antithesis of how research, at least in fields I work in, is normally done and that it would be extremely hard to get a grant for this. With a deliberate lack of process control, no clear budgets, no clear goals, this is not what grant awarders would normally relish. Whatever. It is still worth doing.

Tinkering as a research methodology offers a lot – it is a generative process of discovery that builds ideas and connections as much as it builds objects that are interesting or useful. It is far from being a random process but it is unpredictable. That is why it is interesting. I think that some aspects of it resemble systematic literature review: the discovery and selection of appropriate pieces to assemble, in particular, is something that can be systematized to some extent and, just as in a literature review, once you start with a few pieces, other pieces fall naturally into place. It is very closely related to design-based research and action research, with their formal cycles and iterative processes, although the iteration cycle in tinkering is far finer grained, it is not as rigid in its requirements, and it deliberately avoids the kind of abstractions that such methodologies thrive on. It might be a subspecies though. It definitely resembles and can benefit from soft systems methodologies, because it is the antithesis of hard systems design. Rich pictures have a useful role to play, in particular, though not at the early stages they are used in soft systems methods. And, unlike soft systems, the system isn’t the goal.

Finally, tinkering is not a solution to everything. It is a means of generating knowledge. On the whole, if the products are worthwhile, then they should probably feed into a better engineered system. Note, however, that this is not prototyping. Though products of tinkering may sometimes play the role of a prototype at a later stage in a product cycle, the point of the process is not to produce a working model of something yet to come. That would imply that we know what we are looking for and, to a large extent, how we will go about achieving it. The point is to make discoveries. 

This is not finished yet. It might just turn out to be a lazy way to do research or, perhaps, just another name for something that is already well pinned down. It certainly lacks rigour but, since the purpose is generative, I am not too concerned about that, as long as it works to produce new knowledge. I tinker on, still surfing the adjacent possible.

Three glimpses of a fascinating future

I’d normally post these three links as separate bookmarks but all three, which have popped up in the last few days, share a common theme that is worth noting:

http://singularityhub.com/2014/09/04/experimental-rat-brain-fighter-pilot-may-yield-insights-into-how-the-brain-works/

In this, a neural network made out of the brain cells of a rat is trained to fly a flight simulator.

http://news.sky.com/story/1329954/world-first-as-message-sent-from-brain-to-brain

In this, signals are transmitted directly from one brain to another, using non-invasive technologies (well – if you can call a large cap covered in sensors and cables ‘non-invasive’!).

http://singularityhub.com/2014/09/03/neuromodulation-2-0-new-developments-in-brain-implants-super-soldiers-and-the-treatment-of-chronic-disease/

This reports on a DARPA neuromodulation/neuroaugmentation project to embed tiny electronic devices in brains to (amongst other things) cure brain diseases and conditions, augment brain function and interface with the outside world (including, presumably, other brains). This article contains an awesome paragraph:

“What makes all of this so much more interesting is the fact that, unlike all the other systems of the body, which tend to reject implants, the nervous system is incorporative—meaning it’s almost custom-designed to handle these technologies. In other words, the nervous system is like your desktop computer— as long as you have the right cables, you can hook up just about any peripheral device you want.”

I’m both hugely excited and deeply nervous about these developments and others like them. This is serious brain hacking. Artificial intelligence is nothing like as interesting as augmented intelligence and these experiments show different ways this is beginning to happen. It’s a glimpse into an awe-inspiring future where such things gain sophistication and ubiquity. The potential for brain cracking, manipulation, neuro-digital divides, identity breakdown, privacy intrusion, large-scale population monitoring and control, spying, mass-insanity and so on is huge and scary, as is the potential for things to go horribly wrong in so many new and extraordinary ways. But I would be one of the first to sign up for things like augmenting my feeble brain with the knowledge of billions (and maybe giving some of my knowledge back in return), getting to see the world through someone else’s eyes or even just being able to communicate instantly, silently and unambiguously with loved ones wherever they might be. This is transhumanity writ large, a cyborg future where anything might happen. Smartphones, televisions, the web, social media, all the visible trappings of our information and communication technologies that we know now, might very suddenly become amusing antiques, laughably quaint, redundant and irrelevant. A world wide web of humans and machines (biological and otherwise), making global consciousness (of a kind, at least) a reality. It is hard but fascinating to imagine what the future of learning and knowledge might be in the kind of super-connected scenario that this implies. At the very least, it would disrupt our educational systems beyond anything that has ever come before! From the huge to the trivial, everything would change. What would networked humans (not metaphorically, not through symbolic intermediaries, but literally, in real time) be like? What would it be like to be part of that network? In what new ways would we know one another, how would our attitudes to one another change? Where would our identities begin and end? What would happen if we connected our pets? What would be the effects of a large solar flare that wiped out electronic devices and communication once we had grown used to it all? Everything blurs, everything connects. So very, very cool. So very, very frightening.

The trouble with (most) courses

I recently did a session at the University of Brighton’s Learning and Teaching Conference on the trouble with modules – the name used for what are more commonly known as ‘courses’ in North America, ‘units’ in Australia and ‘papers’ in New Zealand. A couple of people who missed the session have asked for more details than were shown in the slides that I posted from the session, so this post is a summary of some of the main points. It is mostly gleaned from my notes that accompanied the short presentation part, tidied up and expanded a little for the blog. I have not gone into much detail about what would happen if we did away with courses altogether, nor described the results of any of the reflective activities that were involved in the original session as I have no notes on those parts and not enough time to write them. It does, though, contain a bunch of ideas and suggestions about how to overcome some of the innate weaknesses of courses that I hope will have some value to somebody. If anything is unclear or arguable, I’m very happy to follow up via the comments on this post!

Why (most) courses are a bad idea

The taught university course as we know it today started out as nothing more than the study of a (single) book, in schools in pre-university times and in the early days of universities, nearly a thousand years ago. The master or lecturer would read the book and, perhaps, comment on it and discuss it with students. This made a lot of sense. Books were very expensive and rare objects, and so were scholars. It was by far the most efficient way to make use of a rival good (the teacher and/or the book) to reach as many people as possible. Whether or not it was the best way to learn, without it there would be no learning about or from the book at all. These efficiencies remained significant for the next 900 years or so after universities were invented (first in Bologna and, later, Paris, Oxford and the slow-moving flood that followed over the next few centuries, right up to the recent trend in MOOCs). The course slowly evolved into more subject-specific areas that often drew from many books and, later, papers, and the printing press made books slightly less of a luxury, but the general principle, that knowledge was thinly distributed and the most efficient way to make it available was one-to-many transmission in a physical room, continued to make sense.

As universities grew, it was equally sensible that processes and architectures were designed to make this still more efficient. Timetables were used to schedule these scarce resources, lecture theatres designed to reach as many ears and eyes as possible, desks invented to take notes, blackboards invented to provide a source for them, written exams invented to make assessments easier to mark (the first were in 1789) and libraries and classification systems invented to store and retrieve books and periodicals. And, of course, if students and teachers were not around, there was no point in scheduling classes, so courses naturally divided around the holidays of Christmas, Easter and during harvest time in the summer, when (perhaps – this is disputed) students were called back to work on farms. All of this made perfect sense and made the best use of limited means – perhaps the only means that could have worked at all.

And this is what we have inherited, whether or not we observe Christian holidays, whether or not we have almost free access to a cornucopia of information on the web and mobile devices, whether or not we have sophisticated information systems that make scheduling and organization of resources more flexible, or tools to connect us with anyone, anywhere, any time around the world. Around it we have built innumerable structures – notions of course equivalence that are related to accreditation and assessment, replicability, resource allocations, pay structures, etc – that have become very deeply embedded, not just within universities but in society as a whole. Universities have become gatekeepers that filter students as they come in and warrant their competencies as they leave, not just to become academics but to work in many occupations. And the unit of measurement is based around the course. Courses are so deeply embedded that, when people attempt educational reform, they are seldom even noticed, let alone questioned. If people want to make things better in education, they normally explicitly mean ‘better courses’.
Even open and distance universities like Athabasca, that dumped prerequisites, the schedule and traditional lecture/tutorial/seminar format, adhere to the broad pattern of course length (measured now in hours of study, like most of the rest of the world outside North America), fixed outcomes and assessments. Likewise, companies unwisely create or purchase courses for their employees to go out and learn stuff, albeit usually with fewer institutional constraints on timing, accreditation and format. But there is no pedagogical reason whatsoever that it should be this way.

What this means

The trouble is that courses, at least as they have mostly evolved, are not pedagogically neutral technologies. This is pretty obvious to anyone who has ever created one. It is a completely insane idea that every subject can be taught in multiples of precisely the same period or requires the same amount of study as every other. Typically (varying from place to place but usually unvaryingly within a given institution) this means 10-15 weeks or some multiple of that, or 100-200 hours of student effort. Taught courses, as we know them in our institutions today, have objectives and/or outcomes, and assessments to match, which conspire to mean that the intent is that everyone learns exactly the same thing or skill, whether or not they already know it or even need to know it. Courses therefore differentiate – you pass them or fail them. Maybe you pass or fail them well or badly. As an incidental peculiarity, the blame for failure to teach is transferred to the students – they fail, not their teachers. This has big implications for an individual’s sense of self-worth and their ability to seek employment, and it impacts society (and individuals who suffer this process) deeply. Another consequence of this is that, thanks to the need for economies of scale and/or fitting things into timeslots or with other courses that might be similar, typically everyone is taught the same way on a given course, and taught the same things, whether or not it suits their needs, prior knowledge, interests and aspirations. While the notion of teaching to learning styles is palpable nonsense, there is no doubt that people have very different needs and preferences from one another, so parts of every course will bore or confuse some of their students some or all of the time and nearly all will contain parts of little or no relevance to a learner’s needs. None of this makes any pedagogical sense whatsoever. Bloom’s two-sigma problem (based on the fact that there is roughly a two sigma difference between results for those taught in traditional classrooms and those taught one-to-one) is a difficult challenge to address because, quite apart from their innate peculiarities, these features of the typical pattern followed by courses lead to one extremely big elephant in the room: they are inherently demotivating.

Courses and motivation

People love to and need to learn, constantly and voraciously. It’s in our nature. If someone wants and/or needs to learn something, you have to do something pretty substantial to prevent them from doing so. Enter the taught course.

The first way that courses stand in the way of learning is, at first glance, relatively innocuous. The fixed nature and form of the course combined with its length necessarily means that, for the vast majority of students, parts will be boring, parts will be irrelevant, and parts will be over-taxing. This means most students’ need for challenge at an attainable level will not be met, at least some of the time.  It means that course content, process, rules of conduct, expectations and methods are strongly determined by someone else, sapping control away. Self-determination theory, a powerful construct that has been validated countless times over several decades, makes it very clear that, unless people feel in control, are challenged with achievable goals and experience relatedness, they will not be intrinsically motivated, no matter what other factors motivate them. Though often supporting relatedness (connection to something or someone beyond yourself), taught courses are, by and large, structured to reduce two of those three vital factors. It is no surprise then that teachers have to find ways to get around the lack of motivation engendered by the course format. There are a few teachers, sadly, who positively relish the exercise of their power, who enjoy rewarding and punishing students, who like to apply rigid control over behaviour in the classroom, who take a kind of sick pleasure in watching students suffer, who make students do things ‘because it’s for their own good’. They need our pity and support, but should not be allowed to teach until they have overcome this sickness. Luckily, by far the majority of us do our best to inspire, to actively encourage students to reflect on and actively align their intrinsic hopes and desires with what we are teaching, to offer flexibility and control, to empower students, to nurture their creativity, and to give some attention to each student. That’s the pleasure most of us get from teaching. We certainly don’t all succeed all of the time, even the best fail pretty regularly, and we could all improve, but at least we try. However, it’s an uphill battle.

This leads to the second and far more harmful effect of taught courses on motivation. Most of us who work in higher education are constrained by the nature of the course and its accreditation to apply extrinsic rewards and punishments in the form of grades, even though we know it is a truly terrible idea. The reasoning behind the use of grades as motivators is understandable. We can easily observe that extrinsic methods do actually, on the whole, to some extent work, in the short term. Depending on the context, the effect can last from minutes to months. Indeed, behaviourists (who only ever did short-term studies) based a whole psychological movement on this idea. What is less obvious, and the most crucial structural disaster in the way the vast majority of courses are designed, is that such extrinsic motivators invariably and predictably destroy any intrinsic motivation that people may already have, often irreparably. A big part of the reason for this is that it creates a locus of causality for a task or behaviour that is perceived as being controlled by someone or something else, so it does again come back to an issue of control, but this time the effects are devastating, not just reducing motivation but actively militating against it. This crowding-out effect has been demonstrated over and over again in well-designed and hard-to-refute research studies for decades. In many cases, rewards and punishments don’t even achieve what they set out to do in the first place. For example, companies that offer performance-related bonuses typically get lower performance from their workers, and daycare centres that punish parents who are late picking up their children find that parents actually pick them up even later. Worse, once the damage is done, it is very hard if not (sometimes) impossible to entirely undo it. It’s like the motivation pathways have been permanently short-circuited. Worse still, how we are taught is often a major factor in determining how we learn, and we come to expect and (like addicts) even depend on extrinsic motivation to drive us. This is one of the reasons I sometimes describe my role as ‘un-teaching’ – there is often a lifetime of awful learning habits to undo before we can even start.

If you are not convinced, do check out a few of the hundreds of papers at http://www.selfdeterminationtheory.org/publications/ or read pretty much anything by Alfie Kohn, or Edward Deci, or Richard Ryan. There are plenty of studies from the field of education that look at the effects of rewards and punishments and find them worse than wanting.

Breaking the cycle

There are alternatives to typical institutional taught courses, some of them very common, others less so. The University of Brighton has a great program, the MSc/MA by Learning Objectives, in which students work with supervisors to develop a set of outcomes, a means of assessment, and a work plan to reach their goals. While there are a few time and process constraints here and there for practical reasons, they are not too onerous. Students on this program tend to pass it, not because its standards are low, but because everything is aligned with what they want and need to do. A few programs at Athabasca University have similarly flexible courses that act as a kind of catch-all to enable people to do things that matter to them. PhD programs, of the traditional variety used in the UK, have (or had – the course-based American model is sadly becoming more prevalent) no obligatory courses and are entirely customized to and often by the individual student, with nothing but a few processes to ensure students remain on track and supported. They can take from 2-10 years to complete. This length can be a problem as our motivation usually changes over such a long time and extrinsic factors are often introduced that can affect it badly, but the general principle is a good one. Athabasca University’s challenge process makes it possible to completely separate accreditation from learning, which (almost) avoids the whole course problem altogether, though it does unfortunately only work if you happen to have the precise set of competences provided by actual taught courses. Its self-paced undergraduate courses, though still markedly constrained by a notional equivalence to their paced brethren, free students from the tyranny of schedules, even if they do have other features that are overly limiting. PLAR/APEL processes that are common in institutions across the world separate learning from accreditation almost entirely. And that’s not to mention a huge host of teach-yourself methods and resources from Google Search to Wikipedia to the Khan Academy to Stack Exchange and hundreds of other fine online systems that most of us use when we actually want and need to learn something. And, of course, there are books, which have the great benefit of allowing us to skip things, re-read things, look up references and so on, so our paths through them are seldom linear and always under our control – unless we are forced to read them because of a course.

But what about the run-of-the-mill?

Though there is much to be learned from existing methods that entirely or partially by-pass the harmful effects of taught courses, teachers in higher education operate under a set of ugly constraints that make it very difficult and often impossible for us to completely avoid their ill effects, especially when student numbers are large and things like professional standards bodies come into the picture. Until we achieve massive educational reform, which might allow us to provide multiple paths to achieving competence, that might separate learning from accreditation, that might be chunked in ways that suit the needs of learner and subject, we are mostly stuck with the offspring of a mediaeval system that has evolved to defend itself against change. Most of us have to grade things, we have  to make use of learning objectives/outcomes, and we don’t have much control over course length. Often, especially in lower-level courses and/or where standards bodies are involved, we have little control over the competences that need to be attained, whether or not we are competent to teach them. Moreover, many of the most effective existing methods of teaching without courses are very resource-hungry. It would be great to apply the (UK-style) PhD process to all of our teaching but it is economically infeasible. PhDs are expensive for a very good reason – many of the economic and physical constraints that drove the development of courses in the first place have not gone away, even though some have been notably diminished. Given these issues, I will finish this post with a few general ideas, suggestions and patterns to help reduce the ill effects of courses without destroying the system of which they are a part. 

Give control

Traditional teaching seems determined to take control away from learners, but we can do much to give it back. Amongst other things:

  • allow students to choose what they do and how they do it. For instance, I have a web development course that centres around a site that students build throughout the course, that is about something they choose and they care about, and a course process that encourages them to choose between (or discover for themselves or their peers) multiple resources and methods to learn the requisite skills along the way. It makes extensive use of peer support and encourages sharing of problems and solutions, so that students teach one another as a natural fall-out of the process. It uses reflection to support the process, and an assessment based on evidence (that the students select for themselves) of meeting specified learning outcomes. It’s far from perfect, and it does often cause problems (especially at first) for those who have learned dependence via our broken educational system, but it shows one way that learners can take the reins.
  • allow students to choose the learning outcomes. This is trickier to enact because of the rigid requirements we usually have to develop curricula and match them with those delivered elsewhere. However, if the outcomes we specify are not too specific, relating to broad competences, it is still possible to allow some flexibility to students to identify finer-grained outcomes that suit their needs and that are exemplars of the general overarching outcomes. I’ve found this approach easier to follow in graduate level courses in ill-defined subject areas – I don’t really have a way of doing this well for those that are constrained by disciplinary standards.
  • allow students to design their own assessments. This one is easier. Learning contracts are one way to do this, supported with scaffolding that allows students to develop their own plans for assessment. Similarly, we can ask them to provide evidence in a form that suits them (one of the best computing assignments I have ever seen was mostly done as poetry, and I once received a great explanation of the ISO model of network management in terms of Santa Claus’s elves). At the very least, we can offer alternative pre-written forms of assessment that students can choose between according to their preferences.
  • allow students to pick their own content. This is a trick I have used for several courses. I offer a menu of options that address the intended (broad) outcomes and negotiate which parts we/they will cover during the course. It takes a little more effort to prepare, but the payoff is large. For graduate level courses I sometimes encourage students to develop their own content that we all then use.
  • allow students to choose their own tools, media, platforms, etc. Where possible, students should not be limited in their choice of technologies needed to complete the course. This can be tricky where we are constrained by things like institutional platforms, but there are often ways to allow at least some flexibility (e.g. mobile-friendly versions, PDF and e-book formats, standard formats that allow the use of any editor or development tool, etc.).
  • allow students to pick the time and place. This is the default at Athabasca University for most courses, but can be trickier when there are timetables and constraints of working with others according to a schedule. Classroom flipping can help a bit, limiting what is done in the class to things that actually benefit from being somewhere with other people (feedback, dialogue, collaboration, problem-solving, etc), and leaving a lot to self-paced study. This is true online as well as in face-to-face teaching. Indeed, counter-intuitively, it is even one of the odd potential benefits of traditional lectures, inasmuch as they typically only take an hour of a student’s time once a week, between which students are free to learn as they please (not a completely serious point, but worth pointing out because of the important and universally applicable lesson it reminds us of, that teaching behaviours only have a tangential relationship with learning behaviours).
  • allow students to control social interaction. I am a huge fan of learning with other people but we all have different needs for engagement with others in our learning, and it doesn’t suit everyone equally all the time. Where possible, I try to build processes that let those who benefit from social interaction work with others, but that let those who prefer a different approach work alone, using evidence-based assessments rather than process-based ones. For instance, evidence can include help given to others or conversations with others, but can as easily come from individual work (unless social competences are on the menu for learning). I find it useful to build simple sharing (as opposed to dialogue) into the process so that even the least sociable of students share things and therefore support the learning of others.

Use better forms of extrinsic motivation

Extrinsic motivation is not all equally awful: some forms are barely distinguishable from, or are even part of, intrinsic motivation. Extrinsic motivators lie on a spectrum from bad (externally imposed reward and punishment) to much better and more internally regulated varieties, such as:

  • doing things out of a sense of duty, guilt or obligation (introjected regulation) or, better,
  • doing things because they are perceived as worthwhile in themselves (identified regulation, e.g. losing weight) or, better still,
  • doing things because they are necessary steps to achieve something else we are really motivated to achieve (integrated regulation).

See http://www.selfdeterminationtheory.org/theory/ for more about these differentiations. There are plenty of ways to use this to our advantage. It can often, for instance, be useful to encourage reflection on a learning activity. This can be used to think about why we are doing something, how it relates to our needs and goals, and what it means to us. Reflection can kindle more effective forms of extrinsic motivation that are far less harmful than externally imposed rewards and punishments. It is also valuable to nurture community, so that students feel obligations to the team or to one another, and support one another when the going gets rougher. Also, seeing how others are motivated can inspire us to recognize similar motivations in ourselves. Shared reflections (e.g. via blogs) can be particularly valuable.

Grades are not always necessary. While getting rid of the need to summatively assess is seldom possible, we can often avoid the use of grades (pass/fail is a little better than a mark), and we can make it possible for students to keep at it without grading until it is right, thus reducing the chance of failure. My courses tend to have feedback opportunities scattered throughout but I explicitly avoid giving any grades until the last possible moment. It can upset some students who have learned grade-dependence, so it is important that they are fully aware of the reasoning and intent, and that the feedback is good enough that they can judge for themselves how well they are doing (I don’t always get that bit right!). Of course, I am only suggesting that we lose the grades, not the useful feedback. Feedback is crucial to allowing students to feel in control – they need to know what they are doing well and what could be improved, and plentiful feedback can be hugely motivating, showing that other people care, contributing to a sense of achievement, and more. Good, descriptive feedback that focuses on the work (never the student) is a cornerstone of effective educational practice. Grades tell us little or nothing, while encouraging an extrinsic focus that is harmful to motivation.

Step outside the course

Making links beyond a single course can be very beneficial to motivation. I attended an interesting presentation (at the same conference this originated in) the other day by Norman Jackson, who talks about lifewide as opposed to lifelong learning, an idea that captures this principle well. Creating opportunities for students to engage in external activities like (for example) clubs, societies, geological digs, competitions, community work, conferences, charitable work, kickstarters, Wikipedia articles, coding camps and so on can fill in a lot of motivational gaps, making it easier to see the relevance of a course, to feed new ideas into a course, to gain a greater sense of personal relevance and responsibility for one’s own learning, and to expand on work done in a course in greater detail without the imposition of extrinsic motivation. Of course, students should be free to choose which of these they engage with and, better still, should find them for themselves. However, there is no harm in advertising such things, nor in designing courses that allow students to capitalize on learning from other activities within the course itself such as projects, show-and-tell sessions, flexible discussions and so on. There are also often opportunities for doing things across multiple courses, using outputs of one to feed another, or bringing together different skillsets for joint projects. Another way to reduce the harm slightly is to build multiple courses into a single overarching one, of lengths appropriate to the needs of the students and subject.

Build learning communities and spaces rather than courses

Given the wealth of potential resources and people’s time that are available for free on the Web (not to mention in libraries) there is often no need to provide much, if any content (in the sense of stuff presenting subject matter). A couple of the most successful courses I have ever run have had no curriculum or content to speak of, just a set of broad outcomes, a very flexible and student-designed assessment, an approach to making use of the learning community and a responsive process to make it all happen. The process can take a surprising amount of time to develop, as it is important that it is both understood well by the students (including how it is assessed, expectations, norms, etc) and that it can be guaranteed to result in the intended outcomes (assuming these are not negotiated too). Getting that process and community right can be hard work both in the design phase and (especially) during the course but, when it does go right, it is very rewarding. I have often learned as much if not more than my students on those courses, and they are the only courses I have ever run with more than a couple of students where I have had nothing but grade A students (moderated by external examiners as well as by peers). The massive enthusiasm and passion that results from a rich learning community of learners who are in control of their own learning has to be seen to be believed. The essence of the method is to let go just enough but no more: a teacher’s role is to provide plentiful prodding, ideas, critical feedback and, above all, scaffolding so that students feel confident that they are making progress in useful directions (and get help when they are not). It is also a bit of a juggling act to make sure that even loose outcomes are met, especially as students tend to diverge in all sorts of different directions, some of which are brilliant and worth pursuing – getting those outcomes loose enough in the first place but sufficiently recognizable and relevant to academic careers is a bit of an art that I am still learning. It also takes a lot of energy and dedication to make it work so, if you are having a bad week or two, things can go topsy turvy pretty fast.  It is worth putting a huge amount of effort into the first few weeks, responding enthusiastically and personally at any time of day or night that you can afford in order to set the tone, show that you care, explain your approach and soothe any fears. Once you have established trust that you care, and have nurtured a strong learning community, students tend to help one another a lot and forgive you when you are less attentive later on. I try to design the process so that I can intentionally let go in later weeks too.

In conclusion

As an intrinsic design feature, traditional university taught courses and their attendant processes and regulations impose unnatural restrictions on both teachers and students, reducing control and stunting motivation. It would be great to throw off these restrictions altogether. We could make enormous gains simply through separating teaching from accreditation (at least, wherever possible – in extremely rare cases it really is true that there is only one person who can reliably judge competence and that person is the teacher). This may soon become a necessity rather than a virtue if MOOCs continue to evolve faster than the means to reliably accredit the results. Athabasca University already has the challenge process to cope with that, though it is significantly fettered by the need to match competences achieved with those that apply to existing courses – our challenge process is insufficiently fine-grained to allow real flexibility. There would be equally great gains if we made courses the right size (typically though not necessarily small) to fit the needs of different students rather than shoehorning them to fit the needs of institutions. We have technologies that can take the hard work out of managing the ensuing complexity so traditional timetabling woes need not impede us, and it would make it much easier to mix and match, including to accredit learning done in different ways. However, there is plenty that can be done even within the constraints of a typical university course, as long as we are aware of the dangers and take steps to reduce the harm. I hope that this little piece and this smattering of suggestions has sparked an idea or two about how we might go about doing that. Perhaps, if more of us start to question the system and apply such ideas, it might help to make a climate where bigger change is possible. If you’re interested in finding out more, I have written about this kind of thing once or twice before, with slightly different emphases, such as at https://landing.athabascau.ca/blog/view/177831/the-monkeys-paw-effect-in-higher-education and at https://landing.athabascau.ca/blog/view/496760/cargo-cult-courses

 

Two conferences in two days

I’ve just got back from a flying visit to the UK. The first thing I saw on arriving at the new and not at all unpleasant Heathrow Terminal 2 was Stephen Downes. Small world. We were getting luggage from different areas and lost each other in the rush to get to different places, but it was nice to see him, however briefly.

The main reasons I was in the UK were two conferences, The First European Conference on Social Media and the umpteenth Learning & Teaching Conference at the University of Brighton.  Sadly, they overlapped, which meant I only got to attend a day of each, but I managed to give two quite different sessions at both conferences. The first, at ECSM, was a traditional slide-based presentation about the Landing, why and how we built it, and what we might do differently if we started again. As an experiment, rather than my usual handful of images that sit behind most of my presentations, I threw nearly 50 slides (some with multiple build stages) at the stunned audience in 20 minutes. Quite fun. The second, at the L&T conference, was a much more discursive hour-long session  that questioned the fundamental notion of courses, which involved a few thought experiments and a lot of conversation among a very engaged crowd. 

ECSM was a very well-organized affair (disclaimer – the chairs were my friends Sue Greener and Asher Rospigliosi) which provided what I have hoped to see in a social media conference for some years but have previously been disappointed not to find: diversity. When I put together my first social computing course a few years ago I tried to offer much the same kind of range as this conference provided, but have since been a bit worried that I was defining a discipline too early in its lifecycle. This is because most social media/social computing conferences I have been involved with over the past few years have fallen heavily into computer algorithm territory, which my course touches on but doesn’t make a central focus. I have sometimes thought that they would be better named as social network analysis conferences, as variations on that theme have totally dominated the proceedings. I have come across some social media conferences that drift entirely the other way, looking at social and sociological consequences, and a few that focus on a single subject area or context (education and/or learning being the ones that usually interest me most). In contrast, ECSM was delightfully broad, with offerings across the spectrum, with coverage that I feel vindicates my choice of subject matter and approach for a social computing course. It included a lot of papers related to business, politics, media, education and other general areas, and a wide range of research attitudes and methods from the highly algorithmic to the softest and fuzziest of media analyses and critical inquiries. There were plenty of case studies from lots of contexts and demonstrations or reports on plentiful interesting systems. I think this is a sign of a maturing area of study. Though they were not keynoting, I was impressed that the conference attracted the marvellous guru couple of Jenny Preece and Ben Shneiderman. My favourite discovery of the day was that Dutch police have a room in Habbohotel. At the conference dinner I sat next to John Traxler, who was doing the next day’s keynote (that I would miss). He continues to impress me as a creative and incisive thinker. We spoke more about beer, Brighton and music than mobile and social media, but it was fun.

I was not expecting as much out of the parochial Learning & Teaching conference the next day, but I was wrong. The first keynote by Sue Clegg on the arguable failure of widening participation was thought-provoking and went down well. Though provocative, it was a bit dry for my taste – I’m not a fan of presentations read from sheets of notes. I’d rather read the notes and have a conversation. Its focus was also very UK-centric, which should have been interesting but meant little to me as I did not have sufficient background knowledge of the events and acronyms to which she referred. She also seemed unusually approving of higher education access rates in the US, ranking it highest in the world, which was more than a bit of a surprise to me: I guess it depends how you measure such things, but the OECD ranks the US well below Korea, Japan, Canada (we’re third!) and several European countries, including the UK, when it comes to higher education participation. Nonetheless, her talk was mostly tightly argued and backed up by plentiful research. I had planned to leave and return to ECSM after my session, which followed Sue Clegg’s talk, but I was enjoying meeting old friends and was sufficiently intrigued by later sessions to stay on. I am glad that I did, not just because it gave me a chance to catch up with old friends and colleagues.

The first presentation I saw was about use of the e-portfolio system Mahara for professional and personal development. The University of Brighton has a mature and well-implemented Mahara instance that is used for a great many things, from personal publication to coursework to CV writing. I was a bit sad to see that, in combination with a WordPress instance and a SharePoint system used by staff, it had pretty much replaced the innovative Elgg system, community@brighton, that was part of the inspiration for the Landing and that largely surpassed all three put together in functionality. After 8 or 9 years, the last few of those in a state of slow and painful decline, community@brighton is about to be decommissioned. Community@brighton was a little ahead of its time; it suffered greatly in an upgrade process after its first successful couple of years that resulted in the loss of a great deal of the network and communities that had thrived beforehand, and it never fully recovered the trust of its users; it was insufficiently diverse in its primary uses, being quite focused on teaching and, in its latter years, finding shared local accommodation; and it was not helped that its introduction coincided with the massive rise of Facebook (before most people realised how evil that site was). But it was a great system that was (and even as it nears extinction, possibly still is) the world’s largest social media site in an HE institution and a lot of innovative work was done on and through it.

I was interested to learn that the University of Brighton has outsourced its Mahara, Blackboard and some other systems to the cloud. Mahara runs on Amazon’s cloud service and is managed by Catalyst IT (www.catalyst-eu.net), the company behind Mahara, all for around £12,000 (roughly $CAD20,000) per year, plus fairly minimal cloud charges. This seems pretty good value to me – very hard for an internal IS team to compete with that. Similarly, though Blackboard is the work of the devil and the costs are astronomical, moving away from Blackboard would be very difficult for the University of Brighton. This is thanks to the massive investment in materials and training already sunk into it, combined with Blackboard’s strenuous efforts to encourage that dependency and notoriously bad tools for getting data out. Bearing that in mind, it makes sense for the University to move to a hosted solution, especially given the terrible performance, countless bugs, regular and irregular downtime, and the large amount of effort needed to keep it running and to answer technical problems. At least it should now perform reasonably, get timely updates, rarely go down and just work, most of the time. On a cautionary note I was, however, intrigued to learn that the university’s outsourcing of student email (to Microsoft’s Irish branch – Google was rejected due to lack of adherence to European data protection laws) had met with an unfortunate disaster, inasmuch as Microsoft changed the terms and conditions that had formerly meant students would have an email address for life, to a much more limited term. Outsourcing is fine when it works, but it always depends on another company with very different goals from one’s own. I normally prefer to keep things in-house, despite the cost. It means that you retain control of the data no matter what and, just as importantly, the knowledge to use it.

After a very fine lunch, I attended a double-length session reporting on the University of Brighton’s findings and work resulting from the very large Higher Education Academy ‘What Works’ research initiative. ‘What Works’ was focused on improving retention rates, seeking reasons for students giving up on courses and programs, and seeking ways to help them succeed. Brighton was one of the 22 institutions involved in the £1M study. A large team from Brighton gave a very lively and highly informative sequence of presentations on the background, the research and the various interventions that had been attempted following the study, not all with equal success, but all of them interesting. The huge take-home for me was the crucial importance of a culture of belonging. This was singled out in the HEA research that fed into this as the most significant factor in determining whether or not a student continues. Other factors are closely related to this – supportive peers, meaningful interactions, developing knowledge and confidence, and relevance to future goals – and all contribute to belongingness. There are also other factors like perseverance, engagement and internalization that play a role. It is intriguing to me that the research into this started with something of a blank slate, and did not draw significantly on the extensive literature on motivation outside of an educational setting. If it had done, they would probably have identified control as a major factor too although, given the context (traditional educational systems are not great for giving students control, especially to those in their early months of study), it is not surprising that it was missed. In recent years I have typically followed self-determination theory’s vocabulary of ‘relatedness’ for this aspect of motivation, but ‘belonging’ is a far better word that captures a lot of what is distinctive about the nature and value of traditional academic communities and practices. Significantly for me, that is something which we at Athabasca University tend not to do so well. With self-paced courses, a large number of visiting students and relatively limited communication tools (apart from the Landing, of course!) it is very hard for us to build that sense of belonging. When tutoring works well, it goes quite a long way to achieving it and occasionally a bit of community develops via Moodle discussions but, apart from the Landing, we do nothing much to support a wider sense of belonging. At least, not in undergraduate programs. I think we tend to do it fairly well in graduate programs, where it is easier to build more personal relationships, peer support and cohorts into the system. I intend to follow this up and explore more of the background research that led to the HEA team’s conclusions.

The afternoon ended with Pimms, but not before a closing keynote by Norman Jackson on life-wide (as distinct from life-long) learning. I found the notion of lifewide learning pleasing, concentrating on a person’s whole learning life, of which intentional academic behaviour is just a small part. The idea is related to the notion of learning trajectories as posited by Michael Eraut, with whom Jackson has worked. There was lots to like in his talk, and it drew attention away from the very course-centric view that underpins much university thinking, and that I had criticized in my own session. He had lots of nice examples based on studies and interviews with students, none of whom simply followed a ‘course’, though perhaps the examples were a little too glibly chosen – this was appreciative enquiry. He also placed a great onus on his version of ‘learning ecologies’ to describe the lifewide process. His definition of a learning ecology differs considerably from mine, and others who have used the term. As far as I could tell, the focus was very much on an individual, and his definition of a ‘learning ecology’ related to the various things that individuals do to support their learning. This is not a very rich ecology! I think that simply means that we tend to do a lot of things when learning that affect our learning in other things, all in a richly connected self-nourishing fashion. While he did, when questioned, agree that there was much richness to be gained from ‘overlapping’ ecologies and learning with and from others, I don’t think he sees the overlap as anything more than that. For me and, I think, most others who have used the term, a learning ecology has emergent patterns and behaviours that are quite different from its parts, full of rich self-organization, and it is crucial to negotiating meaning and creating knowledge in a social context. In a learning ecology, everyone’s learning affects everyone else’s, with positive and negative feedback loops creating knowledge that goes far beyond what any individual could develop alone. 

I am back in Canada now and trying to catch up with the load of things that two conferences inevitably delayed. I usually reckon that a conference takes up at least three times the time taken by the conference travel itself – preparation and recovery time are always a significant factor. In fact, it should take longer to recover because it would be great to reflect further to help consolidate and connect the learning that inevitably happens during the intensive sessions and conversations that characterize conferences: too many learning opportunities are lost when we rush back into a pile of over-delayed work after such things. At the very least, posts like this are a necessity to help make sense of it all, not an optional extra, but there is a lot more that I would like to follow up on if I had the time. It is also a pity because the weather in Vancouver is stunning (maybe too hot and dry) and I have a newly purchased but very old boat floating outside that keeps calling me. 

 

Classrooms may one day learn us – but not yet

Thanks to Jim and several others who have recently brought my attention to IBM’s rather grandiose claim that, in a few years, classrooms will learn us. The kinds of technology described in this article are not really very new. They have been just around the corner since the 60s and have been around in quantity since the early 90s when adaptive hypermedia (AH) and intelligent tutoring systems (ITS) rose to prominence, spawning a great many systems, and copious research reported on in hundreds of conferences, books and journal articles. A fair bit of my early work in the late 90s was on applying such things to an open corpus, which is the kind of thing that has blossomed (albeit indirectly) into the recently popular learning analytics movement. Learning analytics systems are essentially very similar to AH systems but mostly leave the adaptation stage of the process up to the learner and/or teacher and tend to focus more on presenting information about the learning process in a useful way than on acting on the results. I’ve maintained more than a passing interest in this area but I remain a little on the edge of the field because my ambitions for such tools have never been to direct the learning process. For me, this has always been about helping people to help one another to learn, not to tell them or advise them on how to learn, because people are, at least till now, the best teachers and an often-wasted resource. This seemed intuitively obvious to me from the start and, as a design pattern, it has served me well. Of late, I have begun to understand better why it works, hence this post.

The general principle behind any adaptive system for learning is that there are learners, some kind of content, and some means of adapting the content to the learners. This implies some kind of learner model and a means of mapping that to the content, although I believe (some disagree) that the learner model can be disembodied in constituent pieces and can even happily exist outside the systems we build, in the heads of learners. Learning analytics systems are generally all about the learner model and not much else, while adaptive systems also need a content model and a means of bringing the two together.  
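
To make that distinction concrete, here is a deliberately minimal sketch, in Python and with invented names, of the three pieces: a learner model, a content model, and an adaptation rule that maps one to the other. In these terms, a learning analytics tool would stop at the learner model and hand the decision to a human; an adaptive system applies something like the final function automatically. Real AH and ITS systems are, of course, far richer than this.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    """What the system believes about one learner (hypothetical structure)."""
    mastery: dict = field(default_factory=dict)  # concept -> estimated mastery, 0..1

@dataclass
class ContentItem:
    """One piece of content, described by the concepts it assumes and teaches."""
    title: str
    requires: set   # concepts assumed to be known already
    teaches: set    # concepts the item is intended to develop

def recommend(learner: LearnerModel, items: list, threshold: float = 0.7) -> list:
    """A toy adaptation rule: suggest items whose prerequisites look mastered
    but whose target concepts do not yet."""
    known = {c for c, m in learner.mastery.items() if m >= threshold}
    return [item for item in items
            if item.requires <= known and not (item.teaches <= known)]

# Example usage (all names invented):
learner = LearnerModel(mastery={"html": 0.9, "css": 0.8, "javascript": 0.3})
items = [
    ContentItem("Intro to CSS", requires={"html"}, teaches={"css"}),
    ContentItem("DOM scripting", requires={"html", "css"}, teaches={"javascript"}),
]
print([i.title for i in recommend(learner, items)])  # -> ['DOM scripting']
```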

Beyond some dedicated closed-corpus systems, there are some big obstacles to building effective adaptive systems for learning, or systems that support the learning process by tracking what we are doing. It’s not that these are bad ideas in principle – far from it. The problem is more to do with how they are automated and what they automate. Automation is a great idea when it works. If the tasks are very well defined and can be converted into algorithms that won’t need to be changed too much over time, then it can save a lot of effort and let us do things we could not do before, with greater efficiency. If we automate the wrong things, use the wrong data, or get the automation a little wrong, we create at least as many problems as we solve. Learning management systems are a simple case in point: they automated abstracted versions of existing teaching practice, thus making it more likely that existing practices would be continued in an online setting, even though they had in many cases emerged for pragmatic rather than pedagogic reasons that made little sense in an online environment. In fact, the very process of abstraction made this more likely to happen. Worse, we make it very much harder to back out when we automate, because we tend to harden a system, making it less flexible and less resilient. We set in stone what used to be flexible and open. It’s worse still if we centralize that, because then whole systems depend on what we have set in stone and you cannot implement big changes in any area without scrapping the whole thing. If the way we teach is wrong then it is crazy to try to automate it. Again, learning management systems show this in spades, as do many of the more popular xMOOC systems. They automate at least some of the wrong things (e.g. courses, grading, etc). So we had better be mighty sure about what we are automating and why we are doing it. And this is where things begin to look a bit worrying for IBM’s ‘vision’. At the heart of it is the assumption that classrooms, courses, grades and other paraphernalia of educational systems are all good ideas that are worth preserving. The problem here is that these evolved in an ecosystem that made them a sensible set of technologies at the time but that have very little to do with best practice or research into learning. This is not about learning – it is about propping up a poorly adapted system.

If we ignore the surrounding systems and start with a clean slate, then this should be a set of problems about learning. The first problem for learning analytics is to identify what we should be analyzing, the second is to understand what the data mean and how to process them, and the third is to decide what to do about that. Our knowledge of all three stages is rudimentary at best. There are issues concerning what to capture, what we can discover about learners through the information we capture, and how we should use that knowledge to help them learn better. Central to all of this is what we actually know about education and what we have discovered works best – not just statistically or anecdotally, but for any and all individuals. Unfortunately, in education, the empirical knowledge we have to base this on is very weak indeed.
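To make those three stages concrete, here is a bare-bones framing in code – again entirely my own, not a reference implementation; the comments mark where the contested assumptions creep in:

```python
# A bare-bones framing of the three analytics problems named above.
# Every function here embeds a contestable assumption.

def capture(activity_stream):
    """Stage 1: decide what to record. Clicks? Time on page? Forum posts?
    Whatever we keep is already a theory about what matters."""
    return [e for e in activity_stream if e.get("type") in {"view", "post", "quiz"}]

def interpret(events):
    """Stage 2: decide what the data mean. Here, crudely: more events equals
    more engagement -- a proxy, not a measure of learning."""
    return {"engagement": len(events)}

def act(model):
    """Stage 3: decide what to do about it. Even a 'neutral' dashboard is an
    intervention, because it shapes what teachers and learners attend to."""
    return "nudge the learner" if model["engagement"] < 5 else "do nothing"

print(act(interpret(capture([{"type": "view"}, {"type": "login"}, {"type": "quiz"}]))))
```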

So far, the best we can come up with that is fairly generalizable (my favourite example being spaced learning) is typically only relevant to small and trivial learning tasks like memorization or simple skill acquisition. We’re pretty good at figuring out how to teach simple things well, and ITS and AH systems have done a pretty fair job under such circumstances, where goals (seldom learning goals – more often proxies like marks on tests or retention rates) are very clear and/or learning outcomes very simple. As soon as we aim for more complex learning tasks, the vast majority of studies of education are either specific, qualitative and anecdotal, or broad and statistical, or (more often than should be the case) both. Neither is of much value when trying to create an algorithmic teacher, which is the explicit goal of AH and ITS, and is implied in the teaching/learning support systems provided by learning analytics.  
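Spaced learning is a good illustration of why the simple cases are automatable: the task (remember this item) is so well defined that a workable scheduler fits in a few lines. This is a minimal Leitner-style sketch, with intervals I have picked arbitrarily for illustration rather than taken from any particular study:

```python
# Minimal Leitner-style spaced repetition scheduler (intervals are arbitrary
# illustrative choices, not drawn from any validated research).
from datetime import date, timedelta

INTERVALS = [timedelta(days=d) for d in (1, 2, 4, 8, 16)]  # boxes 0..4

def review(box: int, answered_correctly: bool) -> tuple[int, date]:
    """Move the card up a box if answered correctly, back to box 0 if not,
    and return the new box plus the next review date."""
    new_box = min(box + 1, len(INTERVALS) - 1) if answered_correctly else 0
    return new_box, date.today() + INTERVALS[new_box]

# e.g. a card in box 1, answered correctly, moves to box 2 and is due in 4 days
print(review(1, True))
```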

There are many patterns that we do know a lot about, though they don’t help much here. We know, for example, that one-to-one mastery teaching on average works really brilliantly – Bloom’s 2-sigma challenge still stands, about 30 years after it was first made. One-to-one teaching is not a process that can be replicated algorithmically: it is simply a configuration of people that allows the participants to adapt, interact and exchange or co-develop knowledge with each other more effectively than configurations where there is less direct contact between people. It lets learners express confusion or enthusiasm as directly as possible, and lets the teacher provide tailored responses, giving full and undistracted attention. It allows teachers to directly care both for the subject and for the student, and to express that caring effectively. It allows targeted teaching to occur, however that teaching might be enacted. It is great for motivation because it ticks all the boxes on what makes us self-motivated. But it is not a process and tells us nothing at all about how best to teach nor how best to learn in any way that can be automated, save that people can, on the whole, be pretty good at both, at least on average.
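For anyone who has not met the claim, it is worth spelling out what the 2-sigma figure means (my rendering of the arithmetic, not Bloom’s exact wording): one-to-one mastery tutoring shifted average attainment about two standard deviations above that of conventional classroom instruction.

```latex
% Bloom's headline result expressed as a standardised effect size:
d = \frac{\bar{x}_{\mathrm{tutored}} - \bar{x}_{\mathrm{conventional}}}{\sigma_{\mathrm{conventional}}} \approx 2
% Under a normal approximation, \Phi(2) \approx 0.977, so the average tutored
% student performs better than roughly 98% of the conventionally taught group.
```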

We also know that social constructivist models can, on average, be effective, for probably related reasons. They can also be a complete disaster. But fans of such approaches wilfully ignore the rather obvious fact that lots of people often learn very well indeed without them – the throwaway ‘on average’ covers a massive range of differences between real people, teachers and learners, and between the same people at different times in different contexts. This shouldn’t come as a surprise, because a lot of teaching leads to some learning and most teaching is neither one-to-one nor inspired by social constructivist thinking. Personally, I have learned phenomenal amounts, been inspired and discovered many things through pretty dreadful teaching technologies and processes, including books and lectures and even examined quizzes. Why does it work? Partly because how we are taught is not the same thing at all as how we learn. How you and I learn from the same book is probably completely different in myriad ways. Partly it is because it ain’t what you do to teach but how you do it that makes the biggest difference. We do not yet have an effective algorithmic way of making or even identifying creative and meaningful decisions about what will help people to learn best – it is something that people, and only people, do well. Teachers can follow an identical course design with identical subject matter and turn it into a pile of junk or a work of art, depending on how they do it, how enthusiastic they are about it, how much eye contact they make, how they phrase it, how they pace it, their intonation, whether they turn to the wall, whether they remembered to shave, whether they stammer, etc, etc, etc – and the same differentiators may work sometimes and not others, may work for some people and not for others. Sometimes even awful teaching can lead to great learning, if the learners are interested and learn despite rather than because of the teacher, taking things into their own hands because the teaching is so awful. Teaching and learning, beyond simple memory and training tasks, are arts and not sciences. True, some techniques appear to work more often than not (but not always), but there is always a lot of mysterious stuff that is not replicable from one context to the next, save in general patterns and paradigms that are mostly not easily reduced to algorithms. It is over-ambitious to think that we can automate in software something we do not understand well enough to turn into an algorithm. Sure, we learn tricks and techniques, just like any artist, and it is possible to learn to be a good teacher just as it is possible to learn to be a good sculptor, painter or designer. We can learn much of what doesn’t work, methods for dealing with tricky situations, a few rules of thumb to help us do it better, and processes for learning from our mistakes. But, when it comes down to basics, it is a creative process that can be done well, badly or with inspiration, whether we follow rules of thumb or not, and it takes very little training to become proficient. Some of the best teachers I’ve ever known have used the worst techniques. I quite like the emphasis that Alexandra Cristea and others have put on designing good authoring environments for adaptive systems, because these then become creative tools rather than ends in themselves, but a good authoring tool has, to date, proved elusive and far too few people are working on the problem.

The proponents of learning analytics reckon they have an answer to this problem: simply provide more information, better aggregated and more easily analyzed. It is still a creative and responsive teacher doing the teaching and/or a learner doing the learning, so none of the craft or art is lost, but now they have more information – more complete, more timely, better presented – to help them with the task so that they can do it better. The trouble is that, if the information is about the wrong things, it will be worse than useless. We have very little idea what works in education from a process point of view, so we do not know what to collect or how to represent it, unless all we are doing is relying on proxies based on an underlying model that we know with absolute certainty is at least partly incorrect or, at best, massively incomplete. Unless we can get a clearer idea of how education works, we are inevitably going to be making a system that we know to be flawed more efficient than it was. Unfortunately, it is not entirely clear where the flaws lie, especially as what may be a flaw for one may not be for another, and a flaw in one context may be a positive benefit in another. When performing analytics or building adaptive systems of any kind, we focus on proxies like grades, attention, time-on-task, and so on – things that we unthinkingly value in the broken system and that mean different things to different people in different contexts. Peter Drucker made an important observation about this kind of thing:

‘Nothing is less productive than to make more efficient what should not be done at all.’

A lot of systems of this nature improve the efficiency of bad ideas. Maybe they valorize behaviourist learning models and/or mediaeval or industrial forms of teaching. Maybe they increase the focus on grading. Maybe they rely on task-focused criteria that ignore deeper connective discoveries. Maybe they contain an implied knowledge model that is based on experts’ views of a subject area, which does not normally equate to the best way to come by that knowledge. Maybe they assume that time on task matters or, just as bad, that less time spent learning means the system is working better (each is sometimes true and sometimes not). Maybe they track progress through a system that, at its most basic level, is anti-educational. I have seen all these flaws and then some. The vast majority of tools are doing education-process analytics, not learning analytics. Even those systems that use a more open form of analytics, making fewer assumptions about what should be measured and using data mining techniques to uncover hidden patterns, typically have risky systemic effects: they afford plentiful opportunities for filter bubbles, path dependencies, Matthew Effects and harmful feedback loops, for example. But there is a more fundamental difficulty for these systems. Whenever you make a model it is, of necessity, a simplification, and the rules for simplification make a difference. Models are innately biased, but we need them, so the models have to be good. If we don’t know what works in the first place, then we cannot have any idea whether the patterns we pick out and use to help people guide their learning journeys are a cause, an effect or a by-product of something else entirely. If we lack an explicit and accurate or at least useful model in the first place, we could once again be making more efficient something that should never be done at all. This is not to suggest that we should abandon the effort, because it might be a step towards finding a better model, but it does suggest we should treat all findings gathered this way with extreme scepticism and care, as steps towards a model rather than an end in themselves.
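To see how easily those feedback loops arise, consider a toy simulation (entirely hypothetical, and much cruder than any real analytics system): a recommender that favours whatever already has the most recorded engagement turns small, essentially random early differences into entrenched winners and losers.

```python
# Toy illustration of a rich-get-richer feedback loop: recommend in proportion
# to past engagement, and tiny random head starts become entrenched.
# Entirely hypothetical; no real system is this crude (or this transparent).
import random

random.seed(1)
engagement = [1.0] * 10          # ten resources, identical at the start

for _ in range(10_000):
    # recommend one resource with probability proportional to past engagement
    pick = random.choices(range(10), weights=engagement)[0]
    # assume the recommendation is usually followed, adding yet more engagement
    if random.random() < 0.9:
        engagement[pick] += 1.0

print(sorted(round(e) for e in engagement))
# some resources end up with many times the engagement of others,
# even though none was better than any other to begin with
```

Nothing in the simulation makes any resource better than any other; the inequality is produced entirely by the loop itself, which is the Matthew Effect in miniature.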

In conclusion, from a computing perspective, we don’t really know much about what to measure, we don’t have great grounds for deciding how to process what we have measured, and we don’t know much at all about how to respond to what we have processed. Real teachers and learners know this kind of thing and can make sense of the complexity because we don’t just rely on algorithms to think. Well, OK, that’s not necessarily entirely true, but the algorithms probably operate at a neural-network level as well as an abstract level, and are probably combinatorially complex in ways we are not likely to understand for quite a while yet. It’s thus a little early to be predicting a new generation of education. But it’s a fascinating area to research, full of opportunities to improve things, albeit with one important proviso: we should not be entrusting a significant amount of our learning to such systems just yet, at least not on a massive scale. If we do use them, it should be piecemeal, and we should try diverse systems rather than centralizing or standardizing in the ways that the likes of Knewton are trying to do. It’s a bit like putting a computer in charge of deciding whether or not to launch nuclear missiles. If the computer were amazingly smart, reliable and bug-free, in a way that no existing computer even approaches, it might make sense. If not – if we do not understand all the processes and ramifications of the decisions that have to be made along the way, including ways to avoid mistakes, accidents and errors – it might be better to wait. If we cannot wait, then using a lot of different systems and judging their different outputs carefully might be a decent compromise. Either way, adaptive teaching and learning systems are undoubtedly a great idea, but they are, have long been, and should remain on the fringes until we have a much clearer idea of what they are supposed to be doing.

Being-taught habits vs learning styles

In case the news has not got through to anyone yet, research into learning styles is pointless. The research that demonstrates this is legion; for just a tiny sample of the copious and damning evidence, see:

Riener, C., & Willingham, D. (2010). The myth of learning styles. Change: The Magazine of Higher Learning, 42(5), 32-35. doi:10.1080/00091383.2010.503139

Dembo, M. H., & Howard, K. (2007). Advice about the use of learning styles: A major myth in education. Journal of College Reading and Learning, 37(2).

Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. London: Learning and Skills Research Centre.

No one denies that it is possible to classify people in all sorts of ways with regard to things that might affect how they learn, nor that everyone is different, nor that there are some similarities and commonalities between how people prefer to, or habitually do, go about learning. When these elaborately constructed theories claim no more than that people are different in interesting and sometimes identifiably consistent ways, then I have little difficulty accepting them in principle, though it is always worth observing that there are well over 100 of these theories and they cannot all be right. There is typically almost nothing in any of them that could prove them wrong either. This is a hallmark of pseudo-science and should set our critical sensors on full alert. The problem comes when the acolytes of whatever nonsense model is their preferred flavour try to take the next step and tell us that this means we should teach people in particular ways to match their particular learning styles. There is absolutely no plausible evidence that knowing someone’s learning style, however it is measured, should have any influence whatsoever on how we should teach them, apart from the obvious requirement that we should cater for diversity and provide multiple paths to success. None. This is despite many decades spent trying to prove that it makes a difference. It doesn’t.

It is consequently a continual source of amazement to me when people pipe up in conversations to say that we should consider student learning styles when designing courses and learning activities. Balderdash. There is a weak case to be made that, like astrology (exactly like astrology), such theories serve the useful purpose of encouraging people to reflect on what they do and how they behave. They remind teachers to consider the possibility that there might be more than one way to learn something, and so those teachers are more likely to produce useful learning experiences that cater for diverse needs, to try different things and to build flexibility into their teaching. Great – I have no objection to that at all; it’s what we should be aiming for. But it would be a lot more efficient simply to remind people of that simple and obvious fact than to sink vast sums of money and human resources into perpetuating these foolish myths. And there is a darker side to this. If we tell people that they are (to pick a few at random) ‘visual’, ‘sensing’, ‘intuitive’ or ‘sequential’ learners, then they will inevitably be discouraged from taking different approaches. If we teach them in a way that we think fits a mythical need, we do not teach them in other ways. This is harmful: it is, in effect, a design for putting learners in a filter bubble. The worst of it is that learners then start to believe it themselves and ignore or undervalue other ways of learning.

Being-taught habits

The occasion for this rant came up in a meeting yesterday, where it was revealed that a surprising number of our students describe their learning style (by which they actually mean their learning preference) as listening to a video lecture. I’m not sure where to begin with that. I would have been flabbergasted had I not heard similar things before. Even learning-style believers would have trouble with that one. One of the main things worth noting, however, is that this is actually a description not of a learning preference but of a ‘being-taught habit’. Not as catchy, but that’s what it is.

I have spent much of my teaching career not so much teaching as unteaching: trying to break the appalling habits that our institutional education systems beat into us until we come to believe that the way we are being taught is actually a good way to learn. This is seldom the case – on the whole, educational systems have to achieve a compromise between cost-efficiency and effective teaching –  but, luckily, people are often smart enough to learn despite poor teaching systems. Indeed, sometimes, people learn because of poor teaching systems, inasmuch as (if they are interested and have not had the passion sucked out of them) they have to find alternative ways to learn, and so become more motivated and more experienced in the process of learning itself. Indeed, problem-based and enquiry-based techniques (which are in principle a good idea) sometimes intentionally make use of that kind of dynamic, albeit usually with a design that supports it and offers help and guidance where needed.

If nothing else, one of the primary functions of an educational system should be to enable people to become self-directed, capable lifelong learners. Learning the stuff itself and gaining competence in a subject area or skill in doing something is part of that – we need foundations on which to build. But it is at least as much about learning ways of learning. There are many, many ways to learn, and different ways work better for different people learning different things. We need to be able to choose from a good toolkit and use approaches that work for the job in hand, not ones that match the demands of some pseudo-scientific claptrap.

Rant over.