Researching things that don't exist

As the end of my sabbatical fast approaches, I am still tinkering with a research methodology based on tinkering (or the synonymous ‘bricolage’, to make it sound more academic). Tinkering is an approach to design that involves making things out of what we find around us, rather than following an engineered, designed process. It is relatively seldom seen as a valid approach to design (though there are strong arguments to be made for it), let alone to research, though it underpins much invention and discovery. Tinkering is, by definition, a step into the unknown, and research is generally concerned with knowing the unknown (or at least clarifying, confirming or denying the partly- or tentatively-known). This is not a direct path, however.

Research can take many forms but, typically and I think essentially, the sort that we do in academia is a process of discovery, rather than one of invention. This is there in the name – ‘recherche’ (the origin of the term) means to go about seeking, which implies there is something to be found. The word ‘discovery’ suggests that there is something that exists that can be discovered, whereas inventions, by definition, do not exist, so they are never exactly discovered as such.

While we can seldom substitute ‘invention’ for ‘discovery’, the borders are blurry. Did Maxwell discover his equations or did he invent them? What he discovered was something about the order of the universe that his (invented) equations express, but the equations formed an essential and inextricable part of that discovery. R&D labs sidestep the problem by simply using both terms, so that you know both are covered. The distinction is similarly blurry in art: an artwork is normally not, at least in a traditional sense, research because, for most art, it is a form of invention rather than discovery. But sculptors often talk of discovering a form in stone or wood. And, even for the most mundane of paintings or drawings, artists are in a dialogue with their media and with what they have created, each stroke building on and being influenced by those that came before. A relative of mine recently ran an exhibition of works based on the forms suggested by blots of ink and water, which illustrates this in sharper relief than most, and I do rather like these paintings by Bradley Messer that follow the forms of wood grain. Such artists discover as much as they create and, like Maxwell’s equations, their art is an expression of their discovery, not the discovery itself, though the art is equally a means of making that discovery. Discovery is even more obvious in ‘found’ art such as that of some of the Dadaists, though the ‘art’ part of it is arguably still the invention, not the discovered object itself – Duchamp’s Fountain being the canonical example. And, as Dombois observes, there are some very important ways research and art can connect: research can inform art and be about art, and art can be about research, can support research and can arise from it. Dombois also believes art can be a means of performing research. Komar and Melamid’s ‘most-wanted paintings’ project is a good example of art not only being informed by research but itself being a form of research.
Their paintings resulted from research into what ‘the people’ wanted in their paintings. The paintings themselves challenge what collective taste means, and the value of it, changing how we know and make use of such information. And the artwork itself is the research, of which the paintings are just a part. 

Inventions (including artworks) use discoveries and, from our inventions, we can make discoveries (including discoveries about our inventions). Invention makes it possible to make novel discoveries, but the research is that discovery, not the inventions that lead to it. Research perceived as invention means discovering not what is there but what is not there, which is a little bizarre. More accurately, perhaps, it is seeking to discover what is latently there. It is about discovering possible futures. But even this is a bit strange, inasmuch as latent possibilities are, in many cases, infinite. I don’t think it counts as discovery if you are picking a few pieces from a limitless range of possibilities. It is creation that depends entirely on what you put into it, not on something that can be discovered in that infinity. But, perhaps, the discovery of patterns and regularities in that infinite potential palette is the research. This is because those infinite possibilities are maybe not as infinite as they seem. They are at the very least constrained by what came before, as well as by a wide range of structural constraints that we impose, or have imposed upon us. What is nice about tinkering is that, because it is concerned with using things around us, the forms we work on already have such patterns and constraints.

Tinkering is concerned with exploring the adjacent possible. It is about looking at the things around you (which, in Internet space, means practically everywhere) and finding ways to put them together in new ways to do new things. These new things can then, themselves, create new adjacent possibles, and so it goes on. Beyond invention, tinkering is a tool for making new discoveries. It is a way of having a conversation with objects in which the tinkerer manipulates the objects and the objects in turn suggest ways of putting them together. It can inspire new ways of thinking. We discover what our creations reveal. Writing (such as this) is a classic example of this process. The process of writing is not one of recording thoughts so much as it is one of making new ones. We scaffold our thoughts with the words we write, pulling ourselves up by our own bootstraps as we do so in order to build further thoughts and connections.

The construction of all technologies works the same way, though it is often hidden behind walls of abstraction and deliberate design. If, rather than design-then-build, we simply tinker, then the abstraction falls away. The paths we go down are unknown and unknowable in advance, because the process of construction leads to new ideas, new concepts, new possibilities that only become visible as we build. Technologies are (all) tools to think with at least as much as they are tools to perform the tasks we build them for, and tinkering is perhaps the purest way of building them. And this is what makes tinkering a process of discovery. The focus is not on what we build, but on what we discover as a direct result of doing so – both process and product. Tinkering is a scaffold for discovery, not discovery itself. This begins to feel like something that could underpin a methodology.

With this in mind, here is an evolving set of considerations and guidelines for tinkering-based research that have occurred to me as I go along.

Exploring the possible

To be able to explore the adjacent possible, it is first necessary to explore the possible. In fact, it is necessary to be immersed in the possible. At a simple level, this is because the bigger your pile of junk, the more chances there are of finding interesting pieces and interesting combinations. But there are other aspects that matter as much: the nature of the pile of junk, the skills to assemble the junk, and immersion in the problem space.

1) The pile of junk

Tinkering has to start with something – some tools, some pieces, some methods, some principles, some patterns. It is important that these are as diverse as possible, on the whole. If you just have a pile of engine parts then the chances are you are going to make another engine although, with a tinker-space containing sufficiently diverse patterns, you might make something else. There is a store near me that sells clocks, lights and other household objects made from bits of old electrical equipment and machinery, and it is wonderful. Similarly, some of the finest blues musicians can make infinite complexity out of just three chords and a (loosely) pentatonic scale. But having diverse objects, methods, patterns and principles certainly makes it easier than just having a subset of it all.

It is important that the majority of the junk is relatively complex and self-contained – that it does something on its own, that it is already an assembly of something. Doing bricolage with nothing but raw materials is virtually impossible – they are too soft (in a technology sense). You have to start with something, otherwise the adjacent possible is way too far away and what is close is way too boring. The chances are that, unless you have a brilliant novel idea (which is a whole other territory, and very rare), you will wind up making something that already exists and has probably been done better. This is still scrabbling around in the realms of the possible. The whole point is to start with something and assemble it with something else to make it better, in order to do something that has never been done before. That’s what makes it possible to discover new things. Of course, the complexity does not need to be in physical objects: you might have well-assembled theories, models, patterns, belief systems, aesthetic sensibilities and so on that could be and probably will be part of the assembly. And, since we are not just talking about physical objects but methods, principles, patterns etc., this means you need to immerse yourself in the process – to do it, read about it, talk about it, try it.

2) The tools of assembly

It is not enough to have a great tinker-space full of bits and pieces. You need tools to assemble them. Not just physical tools, but conceptual tools, skills, abilities, etc. You can buy, make, beg, borrow or steal the tools, but the skills to use them take time to develop. Of course, one of the time-honoured and useful ways to do that is to tinker, so this works pretty well. Again, this is about immersion. You cannot gain skills unless you apply them, reflect on them, and apply them again, in a never-ending cycle.

There is a flip side to this though. If you get to be too skillful then you start to ignore things that you have discovered to be irrelevant, and irrelevant things aren’t always as irrelevant as they seem. They are only irrelevant to the path you have chosen to tread. Treading multiple paths is essential so, once you become too much of an expert, it is probably time to learn new skills. It is hard to know when you are too much of an expert. Often, the clue is that someone with no idea about the area suggests something and you laughingly tell them it cannot be done. Of course it can. This is technology. It’s about invention. You are just too smart to know it.

Being driven by your tools (including skills) is essential and a vital part of the methodology – it’s how the adjacent possible reveals itself. But it’s a balance. Sometimes you go past an adjacent possible on your way and then leave it so far behind that you forget it is there at all. It sometimes takes a beginner to see things that experts believe are not there. It can be done in all sorts of ways. For example, I know someone who, because he does not want to be trapped by his own expertise, constantly retunes his guitar to new tunings, partly to make discoveries through serendipity, partly to be a constant amateur. But, of course, a lot of his existing knowledge is reusable in the new context. You do not (and cannot) leave expertise behind when learning new things – you always bring your existing baggage. This is good – it’s more junk to play with. The trick is to have a ton of it and to keep on adding to it.

3) The problem space

While simply playing with pieces can get you to some interesting places, once you start to see the possibilities, tinkering soon becomes a problem-solving process: as you follow a lead, the problem becomes more and more defined, almost always adding new problems with each one solved. Being immersed in a problem space is crucial, which tends to make tinkering a personal activity, not one that lends itself well to formally constructed groups. Scratching your own itch is a pretty good way to get started on the tinkering process because, having scratched one itch, it always leads to more or, at least, you notice other itches as you do so.

If you are scratching someone else’s itch then it can be too constraining. You are just solving a known problem, which seldom gets you far beyond the possible and, if it does, your obligations to the other person make it harder to follow the seam of gold that you have discovered along the way – which is really the point of it. It’s the unknown problems, the ones that only emerge as we cross the border of the adjacent possible, that matter here. Again, though, this is a balance. A little constraint can help to sustain a focus, and doing something that is not your own idea can spark serendipitous ideas that turn out to be good.

Just because it is not really a team process doesn’t mean that other people are not important to it. Talking with others, exchanging ideas, gaining inspiration, receiving critique, seeing the world through different eyes – all this is very good. And it can also be great to work closely with a small number of others, particularly in pairs – extreme programming (XP) relies on this for its success. A small number of people do not need to be bogged down with process, schedules, targets, and other things that get in the way of effective tinkering, and can inspire one another, spot more solutions, and sustain motivation when the going gets rough.

The Structural Space

One of the points of bricolage is that it is structured from the bottom up, not the top down. Just because it is bottom-up structure does not mean it is not structure. This is a classic example of us shaping our tools and our tools shaping us (as McLuhan put it), or shaping our dwellings while our dwellings shape our lives (as Churchill put it a couple of decades earlier). Tinkering starts with forms that influence what we do with them, and what we do with them influences what we do next – our creations and discoveries become the raw material for further creations and discoveries. Though rejecting deliberate structured design processes, I have toyed with and tried things like prototyping, mock-ups and sketches of designs, but I have come to the opinion that they get in the way – they abstract the design too much. What matters in bricolage is picking up pieces and putting them together. Anything beyond vague ideas and principles is too top-down. You are no longer talking with the space but with a map of the space, which is not the same thing at all.

Efficiency

One of the big problems with tinkering is that it tends to lead to highly inefficient design, from an engineering perspective. Part of the reason for that is that path dependencies set in early on. A bad decision early can seriously constrain what you do later. One has only to look at our higher education systems, the result of massively distributed large scale tinkering over nearly a thousand years, to see the dangers here. The vast majority of what we continue to do today is mediaeval in origin and, in a lot of cases, has survived unscathed, albeit assembled with a few other things along the way.

Building from existing pieces can limit the damage – at least you don’t have to pull everything apart if it turns out not to be a fruitful path. It is also very helpful to start with something like Lego, which is designed to be fitted together this way. Most of my work during my sabbatical has involved programming using the Elgg framework, which is very elegantly designed so that, as long as you follow the guidelines, what you build naturally forms into at least a decent outline structure. On the other hand, as I have found to my cost, it is easy to put so much work into something that it becomes very discouraging when you have to start again. As the example of educational systems shows, some blocks are so foundational and so deeply linked with everything else that they affect everything that follows and simply cannot be removed without breaking everything.

Working together

Tinkering is quite hard to do in teams, apart from as sounding boards for reflection on a process already in motion. It is instructive to visit LegoLand to see how it can work, though. In the play spaces of LegoLand one sees kids (and more than a few adults) working alone on building things, but doing so in a very social space. They talk about what they are doing, see what others are doing and, sometimes, put their bits of assemblies together, making bigger and more complex artefacts. We can see similar processes at work in GitHub, a site where programmers, often working alone, post projects that others can fork and, through pull requests, return in modified form to their originators or others, with or without knowing them or interacting with them in any other way. It’s a wonderful evolutionary tinker-space. If programs are reasonably modular, people can work on different pieces independently, which can then be assembled and reassembled by others. Inspiration, support, patterns of thinking and problem solving, as well as code, flow through the system. The tinkering of others becomes a part of your own tinker-space. It’s a learning space – a space where people learn but also a space that learns. The fundamental social forms for tinkering are not traditional, purpose-driven, structured and scheduled teams (groups), but networks and, even more, sets of people connected by nothing but shared interest and a shared space in which to tinker.

Planning

As well as resulting in inefficient systems, tinkering is not easy to plan. At the start, one never knows much more than the broad goal (that may change or may not even be there at all) and the next steps. You can build very big systems by tinkering (back to education again but let’s go large on this and think of the whole of gaia) but it is very hard to do so with a fixed purpose in mind and harder still to do so to a schedule. At best, you might be able to roughly identify the kind of task and look to historical data to help get some statistical approximation of how long it might take for something useful to emerge.

A corollary of the difficulty of planning (indeed, of its counter-productiveness) is that it is very easy to be thrown off track. Other things, especially those that involve other people who rely on you, can very quickly divert the endeavour. At the very least, time has to be set aside to tinker and, come hell or high water, that time should be used. Tinkering often involves following tenuous threads and keeping many balls in the air at once (mixing metaphors is a good form of tinkering) so distractions are anathema to the effective tinkerer. That said, coming up for a breath of air can remind you of other items in the tinker-chest that may inspire or provoke new ways of assembling things. It is a balance.

Evolution, not design

Naive creationists have in the past suggested that the improbability of finding something as complex as even a watch, let alone the massively more complex mechanisms of the simplest of organisms, means that there must be an intelligent designer. This is sillier than silly. Evolution works by a ratchet, each adaptation providing the basis for the next, with some neat possibilities emerging from combinatorial complexity as well. Given enough time and a suitable mechanism, exponentially more complex systems are not just possible but overwhelmingly probable. In fact, it would be vastly more difficult to explain their absence than their existence. But they are not the result of a plan. Likewise for tinkering with technologies. If you take two complex things and put them together, there is a better than fair chance that you will wind up with something more complex that probably does more than you imagined or intended when you stuck them together. And, though maybe there is a little less chance of disaster than in the random-ish recombinations of natural evolution, the potential for the unexpected increases with the complexity. Most unexpected things are not beneficial – the bugs in every large piece of software attest to that, as do most of my attempts at physical tinkering over the course of my lifetime. However, now and then, some can lead to more actual possibles. The adjacent possible is what might happen next but, in many cases, changes simply come with baggage. Gould calls these exaptations – they are not adaptations as such, but a side-effect or consequence of adaptation. Gould uses the example of the spandrels of San Marco to illustrate this point, showing how the structure of the cathedral of San Marco, with its dome sitting on rounded arches, unintentionally but usefully created spaces where they met that proved to be the perfect place to put images of saints – in fact, they seem made for them.
But they are not – the spaces are just a by-product of the design, co-opted by the creators of the cathedral to a useful purpose. A lot of systems work that way. It is the nature of their assembly to create both constraints and affordances, path dependencies and patterns early on deeply defining later growth and change. Effective tinkering involves using such spandrels, and that means having to think about what you have built. Thinking deeply.

The Reflection Space

Just tinkering can be fun but, to make it a useful research process, it should involve more than just invention. It should also involve discovery. It is essential, therefore, that the process is seen as one of reflective dialogue with the creations we make. Reflection is not just part of an iterative cycle – it is embedded deeply and inextricably throughout the process. Only if we are able to constructively think about what we are doing, as well as what we have done, can this generate ideas, models, principles and foundations for further development. It is part of the dialogue with the objects (physical, conceptual, etc.) that we produce and, perhaps even more importantly, it is the real research output of the tinkering process. Reflection is the point at which we discover rather than just invent. In part it is to think about meaning and consequence, in part to discover the inevitable exaptations, in part to spot the next adjacent possible. This is not a simple collaboration. Much of the time we argue with the objects we create – they want to be one way but we want them to be another and, from that tension, we co-create something new.

We need to build stories and rich pictures as much as we need to build technologies. Indeed, it doesn’t really matter that much if we fail to produce any useful artefact through tinkering, as long as the stories have value.  From those stories spin ideas, inspirations, and repeatable patterns. Stories allow us to critique what we have done and learn from it, to see it in a broader context and, perhaps, to discover different contexts where the ideas might apply. And, of course, these stories should be shared, whether with a few friends or the world, creating further feedback loops as well as spreading around what we have discovered.

Stories don’t have to be in words. Pictures are equally useful, and often more so; often most useful of all, the interactions with our creations can tell a story too. This is obviously the case in things like games, Arduino projects or interactive site development, but is just as true of making things like furniture, accessories and most of the things that can be made or enhanced with Sugru.

Here are two brief stories that I hope begin to reveal a little of what I mean.

A short illustrative story

Early in my sabbatical I wrote one Elgg plugin that, as it emerged, I was very pleased with, because it scratched an itch that I have had for a long time. It allowed anyone to tag anything, and for duplicate tags used by different people to be displayed as a tag cloud instead of the normal list of tags that comes with a post. This was an assembly of many ideas, and was a conversation with the Elgg framework, which provided a lot of the structure and form of what I wanted to achieve. In doing it, I was learning how to program in Elgg but, in shaping Elgg, I was also teaching it about the theories that I had developed over many years. If it had worked, it would have given me a chance to test those theories, and the results would probably have led to some refinements, but that was really a secondary phase of the research process and not the one that I was focusing on.

Before any other human being got to use the system, the research process was shaping and refining the ideas. With each stage of development I was making discoveries. A big one was the per-post tag cloud. My initial idea had simply been to allow people to tag one another’s posts. This would have been very useful in two main ways. Firstly, it would give people the chance to meaningfully bookmark things they had found interesting. Rather than the typical approach of putting bookmarks into organized hierarchies, tags could be used to apply faceted categorizations, allowing posts to cross hierarchical boundaries easily and enabling faceted classification of the things people found interesting. Secondly, the tags would be available to others, allowing social construction of an ontology-like thing, better search, a more organized site. Tags are already very useful things but, in Elgg, they are applied by post authors and there are not enough of them for strong patterns to develop on their own in any but quite large systems. One of the first things I realized was that this meant the same tag might be used for the same post more than once.  It was hard to miss in fact, because what I saw when I ran the program was multiple tags for each post – the system I had assembled was shouting at me. Having built a tag cloud system in the 1990s before I even knew the word ‘tag’ let alone ‘tag cloud’ I was primed to spot the opportunity for a tag cloud, which is a neat way to give shape and meaning to a social space. Individually, tags categorize into binary categories. Collectively, they become fuzzy and scalar – an individual post can be more of one tag than another, not because some individual has decided so, but because a crowd has decided so. This is more than a folksonomy. It is a kind of collaborative recommender system, a means to help people recognize not just whether something is good or bad but in what ways it is good or bad. 
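The aggregation step is simple enough to sketch. The real plugin is written in PHP against the Elgg framework; the fragment below is just a minimal Python illustration (all names are mine, not the plugin’s) of how duplicate tags from different people become a weighted per-post cloud:

```python
from collections import Counter

def post_tag_cloud(tag_events):
    """Collapse (user, tag) pairs for a single post into relative weights.

    The same tag applied by different people is not a duplicate to be
    hidden but a vote that increases that tag's weight in the cloud.
    """
    counts = Counter(tag for _user, tag in tag_events)
    total = sum(counts.values())
    # Scale counts to relative weights (0..1) for display sizing.
    return {tag: count / total for tag, count in counts.items()}

# Two people agree the post is good for beginners; one finds it detailed.
cloud = post_tag_cloud([
    ("ann", "good-for-beginners"),
    ("bob", "good-for-beginners"),
    ("cas", "detailed"),
])
```

Scaled this way, a post can be *more* of one tag than another – the fuzzy, scalar categorization I describe above – simply because more people said so.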
Already, I was thinking of my PhD work, which involved fuzzy tags I called ‘qualities’ (e.g. ‘good for beginners’, ‘comprehensive’, ‘detailed’, etc.) that allowed users of my CoFIND system not just to categorize but to rate posts, on multiple pedagogical dimensions. A higher tag weight is an implicit proxy for saying that, in the context of what is described by this tag, the post has been recommended. As I write this (writing is great tinkering – this is the power of reflection) I realize that I could explicitly separate such tags from Elgg’s native tags, which might be a neat way to overcome the limitations of the system I wrote about 15 years ago, which was a good idea but very unusable. Anyway…

It worked like a dream, exactly as I had planned, up to the point that I tried to allow people to see the things they had tagged, which was pretty central to the idea and without which the whole thing was pretty pointless: it is highly improbable that individuals would see great value in tagging things unless they could use those tags to find and organize stuff on the site. As it turns out, the Elgg developers never thought tags might be used this way, so the owner of a tag is not recorded in the system. The person who tags a post is just assumed to be the owner of the post. I’m not a great Elgg developer (which is why I did not realise this till it was too late) but I do know the one cardinal rule – you never, ever, ever mess with the core code or the data model. There was nothing I could do except start again, almost completely from scratch. That was a lot of work – weeks of effort. It was not entirely wasted – I learned a lot in the process, and that was the central purpose of it all. But it was very discouraging. Since then, as I have become more immersed in Elgg, my skills have improved. I think I can now see roughly how this could be made to work. The reason I know this is that I have been tinkering with other things and, in the process, found a lightweight way of using relationships to link individuals and objects that, in the ways that matter, can behave much like tags. Now that I have the germ of an idea about how to make this pedagogically powerful, hopefully I will have time to do that.
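To make the workaround concrete: Elgg relationships, unlike its tags, do record who created the link, so a relationship named after a tag can stand in for an owned tag. The toy Python model below (hypothetical names throughout – this is not the Elgg API, which is PHP) shows why storing the full (user, relationship, object) triple solves the missing-owner problem:

```python
class RelationshipStore:
    """Toy triple store: every link records who made it."""

    def __init__(self):
        self.triples = set()  # of (user, relationship, object) tuples

    def tag(self, user, tag, obj):
        # Encode the tag in the relationship name, so the tagger is kept.
        self.triples.add((user, f"tagged:{tag}", obj))

    def tags_by(self, user):
        """Everything this user has tagged – the feature Elgg tags lack."""
        return {(rel.split(":", 1)[1], obj)
                for u, rel, obj in self.triples
                if u == user and rel.startswith("tagged:")}

    def taggers_of(self, obj, tag):
        """Who applied this tag to this object (feeds the per-post cloud)."""
        return {u for u, rel, o in self.triples
                if o == obj and rel == f"tagged:{tag}"}
```

Because the first element of each triple is the tagger, ‘show me what I have tagged’ becomes a trivial query instead of an impossibility.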

Another illustrative story

One of my little sabbatical projects (which actually turned out to be about the biggest, and it’s not over yet) was to build an OpenBadge plugin. This was prompted by and written for someone else. I would not have thought of it as a good itch to scratch because I happen to know something about badges and something about learning and, from what I have seen, badges (as implemented so far) are at best of mixed value in learning. In the vast majority of instances where I have seen them used, they are at best as demotivating as they are motivating. Much of the time it is worse than that: they turn into extrinsic proxies that divert motivation away from learning almost entirely. They embed power structures and create divisions. From a learning perspective, they are a pretty bad idea. On the plus side, they are a very neat way to do credentials, which is great if that is what you are aiming for, opening up the potential for much more interesting separation of teaching and accreditation, diverse learning paths, and distributed learning, so I don’t hate them. In fact, I quite like them. But their pedagogical risks mean that I don’t love them enough to have even considered writing a plugin that implements them.

Despite reservations, I said I would do it. It didn’t seem like a big task because I reckoned I could just lightly modify one of a couple of existing (non-open) badge plugins that had already been written for Elgg.  I also happened to have some parts lying round – my pedagogical principles, the Elgg framework, the Mozilla OpenBadge standard documentation, various code snippets for implementing OpenBadges – that I could throw together. Putting these pieces together made me realize early on that social badging could be a good idea that might help overcome several of my objections to their usual implementations. Because of the nature of Elgg, the obvious way to build such a plugin would be such that anyone could make a badge, and anyone could award one, making use of Elgg’s native fine-grained bottom-up permissions. This meant that the usual power relationships implied in badging would not be such a problem. This was an interesting start.

Because Elgg has no roles in its design (apart from a single admin role for the site builder and manager), and so no explicit teaching roles, this could have been potentially tricky from a trust perspective – although its network features would mean you could trust awards by people you know, how would you trust an award from someone you don’t know and who is not playing a traditional teacher role in a power hierarchy? Even with the native Elgg option to ‘recommend’ a badge (so more people could assert its validity) this could become chaotic. But my principles told me that teacher control is a bad thing so I was not about to add a teacher role.

After tossing this idea around for a few minutes, I came up with the idea of inheritable badges – in other words, a badge could be configured so that you could only award it if you had received it yourself. In an instant, this began to look very plausible. If you could trace the badge to someone you trust (e.g. a teacher, a friend, or someone you know is trustworthy), which is exactly what Elgg would make possible by default, then you could trust anyone else who had awarded the badge to at least have the competence that the badge signifies, and so be more likely to be able to accurately recognize it in someone else. This was neat – it meant that accreditation could be distributed across a network of strangers (as in a MOOC) without the usual difficulties of the blind accrediting the blind that tend to afflict peer assessment methods in such contexts. Better still, this is a great way to signify and gain social capital, and to build deeper and richer bonds in a community of strangers. It is, I think, among the first scalable approaches to accreditation in a connectivist context, though I have not looked too deeply into the literature, so I stand to be corrected.

Later, as I tinkered and became immersed in the problem, thinking how it would be used, I added a further option to let a badge creator specify a prerequisite award (any arbitrarily chosen badge) that must be held before a badge could be awarded. As well as allowing more flexibility than simple inheritance, this meant that you could introduce roles by the back door if you wished, by allowing someone to award a ‘teacher’ badge or similar, and only allowing people holding that badge to make awards of other badges. I then realized that inheritance was just a special case of this, so I got rid of the separate inheritance feature and simply added the option to make a badge a prerequisite of itself. It is worthy of note that this was quite difficult to do – had I planned it from the start, it would have been trivial, but I had to unpick what I had done as well as build it afresh.
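The prerequisite logic can be sketched in a few lines. This is an illustrative Python model of the idea, not the actual Elgg/PHP plugin code; all the names here are hypothetical:

```python
# Illustrative sketch of the prerequisite-badge model (hypothetical names,
# not the actual Elgg plugin code).

class Badge:
    def __init__(self, name, prerequisites=None):
        self.name = name
        # Prerequisites are badges an awarder must already hold.
        self.prerequisites = list(prerequisites or [])

    def make_inheritable(self):
        # 'Inheritance' is just the special case where a badge
        # is a prerequisite of itself.
        if self not in self.prerequisites:
            self.prerequisites.append(self)


def can_award(awarder_badges, badge):
    """An awarder may give `badge` only if they hold all its prerequisites."""
    return all(p in awarder_badges for p in badge.prerequisites)


# Example: an inheritable 'Elgg developer' badge, plus a role by the back door.
dev = Badge("Elgg developer")
dev.make_inheritable()

teacher = Badge("Teacher")
graded = Badge("Graded assignment", prerequisites=[teacher])

assert not can_award(set(), dev)      # strangers cannot award it
assert can_award({dev}, dev)          # holders can pass it on
assert can_award({teacher}, graded)   # role-like badge via a prerequisite
```

The point of the generalization is visible here: deleting `make_inheritable` would lose nothing, since a creator can simply list the badge among its own prerequisites.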

Social badging, peer assessment, scalable viral accreditation, social capital, motivation  – this was looking cool. Furthermore, tinkering with an existing framework suggested other cool things. By default, it was a lot easier to build this if people could award badges to themselves. The logical next step would have been to prevent them from doing this but, as I saw it working, I realised self-badging was a very good idea! It bothered me for a moment that it might be confusing, not to mention appearing narcissistic, if people started awarding themselves badges. However, Elgg posts can be private, so people giving themselves badges would not have to show them to others. But they could, and that could be useful. They could make a learning contract with someone else or a group of people, and allow them to observe, thus not only improving motivation and honesty, but also building bonding social capital. So, people could set goals for themselves and award themselves badges when they accomplished them, and do so in a safe social context that they would be in control of. It might be useful in many self-directed learning contexts. 

These ideas did not simply flow in my head from start to finish: they came about as a direct result of dialogue with what I was creating, and they could only have done so because I already had ideas and principles about things like portfolios, learning contracts and social learning floating around in my toolkit, ready to be assembled. I did include the admin option to turn off self-awarding at a system level in case anyone disagreed with me, and because I could imagine contexts where it might get out of hand. I even (a little reluctantly) made it possible to limit badge awarding to admins only, so that there could be a ‘root’ badge or two that would provide the source of all accreditation and awarding. Even then, it could still be a far more social approach to accreditation than most, making expertise not just something that is awarded with an extrinsic badge, but also something that gives real power to its holder to play an important role in a learning community.

This is not exactly what my sponsors asked for: they wanted automation, so that an administrator could set some criteria and the system would automatically award badges when those criteria had been met.  Although I reckon my social solution meets the demand for scalability that lay at the heart of that request, I realized that, with some effort, I could assemble all of this with a karma point plugin that I happened to have in my virtual toolshed in order to enable automated badge awarding for things like posting blogs, etc. Because there was no obvious object for which such an award could be given as it could relate to any arbitrary range of activities, I made the object providing evidence to be the user’s own profile. Again, this was just assembling what was there – it was an adjacent possible, so I took it. I could, if I had not been lazy, have generated a page displaying all of the evidence, but I did not (though I still might – it is an adjacent possible that might be worth exploring). And so, of course, now it is possible to award a badge to a user, rather than for a specific post which, though not normally a good idea from a motivation perspective, could have a range of uses, especially when assembled with the tabbed profile we built earlier (what I refer to in academic writings as a ‘context switcher’ and that can be used as a highly flexible portfolio system).
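The karma-based automation can be sketched in the same spirit: a check that fires when a user’s accumulated points cross a badge’s threshold, with the user’s own profile standing in as the evidence object. Again, this is an illustrative Python model with invented names, not the plugin’s actual code:

```python
# Hypothetical sketch of point-triggered badge awarding; all names invented.

def check_automatic_awards(user, auto_badges):
    """Award any threshold badge the user's karma now qualifies for.

    `auto_badges` maps badge name -> required points. Because the award
    can relate to any mix of activities (blog posts, comments, ...),
    the evidence object is the user's own profile rather than one post.
    """
    new_awards = []
    for badge_name, threshold in auto_badges.items():
        if user["karma"] >= threshold and badge_name not in user["badges"]:
            user["badges"].add(badge_name)
            new_awards.append({"badge": badge_name,
                               "evidence": user["profile_url"]})
    return new_awards


user = {"karma": 120, "badges": set(), "profile_url": "/profile/jon"}
awards = check_automatic_awards(user, {"Contributor": 100, "Expert": 500})
# At 120 karma, only the 100-point badge is triggered.
```

Tying the evidence to the profile rather than a single post is the design shortcut described above: it trades a detailed evidence page for simplicity, an adjacent possible left unexplored.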

These are just a sample of many conversations I had with the tools and objects that were available to me. I influenced them, they influenced me. There were plenty of others – exaptations like my discovery that the design I had opted for, which made awards and badges separate objects, meant that I had a way of making awards persistent and not allowing badge owners to sneakily change them afterwards, for example, thus enhancing trust in the system. Or that the Elgg permissions model made it very simple to reliably assert ownership, which is very important if you are going to distribute accreditation over multiple sites and systems. Or that the fact that it turned out to be an incredibly complex task to make it all work in an Elgg Group context was a blessing because I therefore looked for alternatives, and found that the pre-requisite functionality does the job at least as well, and much more elegantly. Or that the Elgg views system made it possible to fairly easily create OpenBadge assertions for use on other sites. The list goes on. 
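The awards-as-separate-objects exaptation amounts to snapshotting: an award freezes a copy of the badge at the moment it is given, so later edits by the badge’s owner cannot retroactively change what the award meant. A minimal Python sketch of that design choice (hypothetical names, not the plugin code):

```python
# Sketch of the award-as-separate-object design (illustrative only).
import copy
import time

class Award:
    """An award snapshots the badge at the moment it is given, so the
    badge's owner cannot sneakily change what an award means later."""
    def __init__(self, badge, recipient, awarder):
        self.badge_snapshot = copy.deepcopy(badge)  # frozen copy
        self.recipient = recipient
        self.awarder = awarder
        self.issued_on = time.time()

badge = {"name": "Elgg developer", "criteria": "Built a working plugin"}
award = Award(badge, recipient="alice", awarder="jon")

badge["criteria"] = "Just showed up"  # later tampering with the badge...
# ...does not affect the already-issued award's snapshot.
```

The same separation is what makes trustworthy OpenBadge assertions for other sites feasible: the assertion describes the award record, not the mutable badge.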

It was not all wonderful though. Sometimes the conversation got weird. My plan to start with an existing badge plugin quickly bit the dust. It turns out that the badge plugins that were available were both of the kind I hate – they awarded badges to individuals, not for specific competences. To add insult to injury, they could be awarded only by the administrator, either automatically through accrued points or manually. This was exactly the kind of power structure that I wanted to get away from. From an architectural perspective, making these flawed plugins work the way I wished would have been much harder than writing the plugin from scratch. However, in the spirit of tinkering, I didn’t start completely from scratch. I looked around for a plugin that would do some of the difficult stuff for me. After playing with a few, I opted for the standard Elgg Files plugin, because that ought to have made light work of storing and organizing the badge images. In retrospect, maybe not the best plan, but it was a starting point. After a while I realized I had deleted or left unused 90% of the original plugin, which was more effort than it was worth. I also got stuck in a path dependency again, when I wanted to add multiple prerequisites (i.e. you could specify more than one badge as a prerequisite): by that time, my ingenious single-prerequisite model was so firmly embedded that it would have taken more than a solid week to change it. I did not have the energy, or the time. And, relatedly, my limited Elgg skills and lack of forward planning meant that I did not always divide the code into neatly reusable chunks. This still continues to cause me trouble as I try to make the OpenBadge feature work. Reflecting on such issues is useful – I now know that multiple inheritance makes sense for this kind of system, which would not have occurred to me if I hadn’t built a system with a single-prerequisite data model. 
And I have a better idea about what kind of modularity works best in an Elgg system.

Surfing the adjacent possible

Like all stories worthy of the name, my examples are highly selective and probably contain elements of fiction in some of the details of the process. Distance in time and space changes memories, so I cannot promise that everything happened in the order and manner presented here – it was certainly a lot more complicated, messy and detailed than I have described it to be. I think this fictionalizing is crucial, though. Objective reporting is exactly not what is needed in a bricolage process. It is the sense-making that matters, not religious adherence to standards of objectivity. What matters are the things we notice, the things we reflect on and the things we consider to be important. Those are the discoveries. 

This is a brief and condensed set of ten of the main principles that I think matter in effective tinkering for research:

  1. do not design – just build
  2. start with pieces that are fully formed
  3. surround yourself with both quantity and diversity in tools, materials, methods, and perspectives
  4. dabble hard – gain skills, but be suspicious of expertise
  5. look for exaptations and surf the adjacent possible
  6. avoid schedules and goals, but make time and space for tinkering, and include time for daydreaming
  7. do not fear dismantling and starting afresh
  8. beware of teams, but cultivate networks: seek people, not processes
  9. talk with your creations and listen to what they have to say
  10. reflect, and tell stories about your reflections, especially to others

As I read these ideas it strikes me that this is the very antithesis of how research, at least in fields I work in, is normally done and that it would be extremely hard to get a grant for this. With a deliberate lack of process control, no clear budgets, no clear goals, this is not what grant awarders would normally relish. Whatever. It is still worth doing.

Tinkering as a research methodology offers a lot – it is a generative process of discovery that builds ideas and connections as much as it builds objects that are interesting or useful. It is far from being a random process but it is unpredictable. That is why it is interesting. I think that some aspects of it resemble systematic literature review: the discovery and selection of appropriate pieces to assemble, in particular, is something that can be systematized to some extent and, just as in a literature review, once you start with a few pieces, other pieces fall naturally into place. It is very closely related to design-based research and action research, with their formal cycles and iterative processes, although the iteration cycle in tinkering is far finer grained, it is not as rigid in its requirements, and it deliberately avoids the kind of abstractions that such methodologies thrive on. It might be a subspecies though. It definitely resembles and can benefit from soft systems methodologies, because it is the antithesis of hard systems design. Rich pictures have a useful role to play, in particular, though not at the early stages they are used in soft systems methods. And, unlike soft systems, the system isn’t the goal.

Finally, tinkering is not a solution to everything. It is a means of generating knowledge. On the whole, if the products are worthwhile, then they should probably feed into a better engineered system. Note, however, that this is not prototyping. Though products of tinkering may sometimes play the role of a prototype at a later stage in a product cycle, the point of the process is not to produce a working model of something yet to come. That would imply that we know what we are looking for and, to a large extent, how we will go about achieving it. The point is to make discoveries. 

This is not finished yet. It might just turn out to be a lazy way to do research or, perhaps, just another name for something that is already well pinned down. It certainly lacks rigour but, since the purpose is generative, I am not too concerned about that, as long as it works to produce new knowledge. I tinker on, still surfing the adjacent possible.

Three glimpses of a fascinating future

I’d normally post these three links as separate bookmarks but all three, which have popped up in the last few days, share a common theme that is worth noting:

http://singularityhub.com/2014/09/04/experimental-rat-brain-fighter-pilot-may-yield-insights-into-how-the-brain-works/

In this, a neural network made out of the brain cells of a rat is trained to fly a flight simulator.

http://news.sky.com/story/1329954/world-first-as-message-sent-from-brain-to-brain

In this, signals are transmitted directly from one brain to another, using non-invasive technologies (well – if you can call a large cap covered in sensors and cables ‘non-invasive’!)

http://singularityhub.com/2014/09/03/neuromodulation-2-0-new-developments-in-brain-implants-super-soldiers-and-the-treatment-of-chronic-disease/

This reports on a DARPA neuromodulation/neuroaugmentation project to embed tiny electronic devices in brains to (amongst other things) cure brain diseases and conditions, augment brain function and interface with the outside world (including, presumably, other brains). This article contains an awesome paragraph:

“What makes all of this so much more interesting is the fact that, unlike all the other systems of the body, which tend to reject implants, the nervous system is incorporative—meaning it’s almost custom-designed to handle these technologies. In other words, the nervous system is like your desktop computer— as long as you have the right cables, you can hook up just about any peripheral device you want.”

I’m both hugely excited and deeply nervous about these developments and others like them. This is serious brain hacking. Artificial intelligence is nothing like as interesting as augmented intelligence and these experiments show different ways this is beginning to happen. It’s a glimpse into an awe-inspiring future where such things gain sophistication and ubiquity. The potential for brain cracking, manipulation, neuro-digital divides, identity breakdown, privacy intrusion, large-scale population monitoring and control, spying, mass-insanity and so on is huge and scary, as is the potential for things to go horribly wrong in so many new and extraordinary ways. But I would be one of the first to sign up for things like augmenting my feeble brain with the knowledge of billions (and maybe giving some of my knowledge back in return), getting to see the world through someone else’s eyes or even just being able to communicate instantly, silently and unambiguously with loved ones wherever they might be. This is transhumanity writ large, a cyborg future where anything might happen. Smartphones, televisions, the web, social media, all the visible trappings of our information and communication technologies that we know now, might very suddenly become amusing antiques, laughably quaint, redundant and irrelevant. A world wide web of humans and machines (biological and otherwise), making global consciousness (of a kind, at least) a reality. It is hard but fascinating to imagine what the future of learning and knowledge might be in the kind of super-connected scenario that this implies. At the very least, it would disrupt our educational systems beyond anything that has ever come before! From the huge to the trivial, everything would change. What would networked humans (not metaphorically, not through symbolic intermediaries, but literally, in real time) be like? What would it be like to be part of that network? 
In what new ways would we know one another, and how would our attitudes to one another change? Where would our identities begin and end? What would happen if we connected our pets? What would be the effects of a large solar flare that wiped out electronic devices and communication once we had grown used to it all? Everything blurs, everything connects. So very, very cool. So very, very frightening.

The trouble with (most) courses

I recently did a session at the University of Brighton’s Learning and Teaching Conference on the trouble with modules – the name used for what are more commonly known as ‘courses’ in North America, ‘units’ in Australia and ‘papers’ in New Zealand. A couple of people who missed the session have asked for more details than were shown in the slides that I posted from the session, so this post is a summary of some of the main points. It is mostly gleaned from my notes that accompanied the short presentation, tidied up and slightly expanded for the blog. I have not gone into much detail about what would happen if we did away with courses altogether, nor described the results of any of the reflective activities that were involved in the original session, as I have no notes on those parts and not enough time to write them. It does contain a bunch of ideas and suggestions about how to overcome some of the innate weaknesses of courses, though, that I hope will have some value to somebody. If anything is unclear or arguable, I’m very happy to follow up via the comments on this post!

Why (most) courses are a bad idea

The taught university course as we know it today started out as nothing more than the study of a (single) book, in schools in pre-university times and in the early days of universities, nearly a thousand years ago. The master or lecturer would read the book and, perhaps, comment on it and discuss it with students. This made a lot of sense. Books were very expensive and rare objects, and so were scholars. It was by far the most efficient way to make use of a rival good (the teacher and/or the book) to reach as many people as possible. Whether or not it was the best way to learn, without it there would be no learning about or from the book at all. These efficiencies remained significant for the next 900 years or so after universities were invented (first in Bologna and, later, Paris, Oxford and the slow-moving flood that followed over the next few centuries, right up to the recent trend in MOOCs). The course slowly evolved into more subject-specific areas that often drew from many books and, later, papers, and the printing press made books slightly less of a luxury, but the general principle, that knowledge was thinly distributed and the most efficient way to make it available was one-to-many transmission in a physical room, continued to make sense. As universities grew, it was equally sensible that processes and architectures were designed to make this still more efficient. Timetables were used to schedule these scarce resources, lecture theatres designed to reach as many ears and eyes as possible, desks invented to take notes, blackboards invented to provide a source for them, written exams invented to make assessments easier to mark (the first were in 1789) and libraries and classification systems invented to store and retrieve books and periodicals. 
And, of course, if students and teachers were not around, there was no point in scheduling classes, so courses naturally divided around the holidays of Christmas, Easter and during harvest time in the summer, when (perhaps – this is disputed) students were called back to work on farms. All of this made perfect sense and made the best use of limited means – perhaps the only means that could have worked at all. And this is what we have inherited, whether or not we observe Christian holidays, whether or not we have almost free access to a cornucopia of information on the web and mobile devices, whether or not we have sophisticated information systems that make scheduling and organization of resources more flexible, or tools to connect us with anyone, anywhere, any time around the world. Around it we have built innumerable structures – notions of course equivalence that are related to accreditation and assessment, replicability, resource allocations, pay structures, etc – that have become very deeply embedded, not just within universities but in society as a whole. Universities have become gatekeepers that filter students as they come in and warrant their competencies as they leave, not just to become academics but to work in many occupations. And the unit of measurement is based around the course. Courses are so deeply embedded that, when people attempt educational reform, they are seldom even noticed, let alone questioned. If people want to make things better in education, they normally explicitly mean ‘better courses’. Even open and distance universities like Athabasca, that dumped prerequisites, the schedule and traditional lecture/tutorial/seminar format, adhere to the broad pattern of course length (measured now in hours of study, like most of the rest of the world outside North America), fixed outcomes and assessments.
Likewise, companies unwisely create or purchase courses for their employees to go out and learn stuff, albeit usually with fewer institutional constraints on timing, accreditation and format. But there is no pedagogical reason whatsoever that it should be this way.

What this means

The trouble is that courses, at least as they have mostly evolved, are not pedagogically neutral technologies. This is pretty obvious to anyone who has ever created one. It is a completely insane idea that every subject can be taught in multiples of precisely the same period or requires the same amount of study as every other. Typically (varying from place to place but usually unvaryingly within a given institution) this means 10-15 weeks or some multiple of that, or 100-200 hours of student effort. Taught courses, as we know them in our institutions today, have objectives and/or outcomes, and assessments to match, which conspire to mean that the intent is that everyone learns exactly the same thing or skill, whether or not they already know it or even need to know it. Courses therefore differentiate – you pass them or fail them. Maybe you pass or fail them well or badly. As an incidental peculiarity, the blame for failure to teach is transferred to the students – they fail, not their teachers. This has big implications for an individual’s sense of self worth and on their ability to seek employment, and it impacts society (and individuals who suffer this process) deeply. Another consequence of this is that, thanks to the need for economies of scale and/or fitting things into timeslots or with other courses that might be similar, typically everyone is taught the same way on a given course, and taught the same things, whether or not it suits their needs, prior knowledge, interests and aspirations. While the notion of teaching to learning styles is palpable nonsense, there is no doubt that people have very different needs and preferences from one another, so parts of every course will bore or confuse some of their students some or all of the time and nearly all will contain parts of little or no relevance to a learner’s needs. None of this makes any pedagogical sense whatsoever. 
Bloom’s two-sigma problem (based on the fact that there is roughly a two sigma difference between results for those taught in traditional classrooms and those taught one-to-one) is a difficult challenge to address because, quite apart from their innate peculiarities, these features of the typical pattern followed by courses lead to one extremely big elephant in the room: they are inherently demotivating. 

Courses and motivation

People love to and need to learn, constantly and voraciously. It’s in our nature. If someone wants and/or needs to learn something, you have to do something pretty substantial to prevent them from doing so. Enter the taught course.

The first way that courses stand in the way of learning is, at first glance, relatively innocuous. The fixed nature and form of the course combined with its length necessarily means that, for the vast majority of students, parts will be boring, parts will be irrelevant, and parts will be over-taxing. This means most students’ need for challenge at an attainable level will not be met, at least some of the time.  It means that course content, process, rules of conduct, expectations and methods are strongly determined by someone else, sapping control away. Self-determination theory, a powerful construct that has been validated countless times over several decades, makes it very clear that, unless people feel in control, are challenged with achievable goals and experience relatedness, they will not be intrinsically motivated, no matter what other factors motivate them. Though often supporting relatedness (connection to something or someone beyond yourself), taught courses are, by and large, structured to reduce two of those three vital factors. It is no surprise then that teachers have to find ways to get around the lack of motivation engendered by the course format. There are a few teachers, sadly, who positively relish the exercise of their power, who enjoy rewarding and punishing students, who like to apply rigid control over behaviour in the classroom, who take a kind of sick pleasure in watching students suffer, who make students do things ‘because it’s for their own good’. They need our pity and support, but should not be allowed to teach until they have overcome this sickness. Luckily, by far the majority of us do our best to inspire, to actively encourage students to reflect on and actively align their intrinsic hopes and desires with what we are teaching, to offer flexibility and control, to empower students, to nurture their creativity, and to give some attention to each student. That’s the pleasure most of us get from teaching. 
We certainly don’t all succeed all of the time, even the best fail pretty regularly, and we could all improve, but at least we try. However, it’s an uphill battle.

This leads to the second and far more harmful effect of taught courses on motivation. Most of us who work in higher education are constrained by the nature of the course and its accreditation to apply extrinsic rewards and punishments in the form of grades, even though we know it is a truly terrible idea. The reasoning behind the use of grades as motivators is understandable. We can easily observe that extrinsic methods do actually, on the whole, to some extent work, in the short term. Depending on the context, the effect can last from minutes to months. Indeed, behaviourists (who only ever did short-term studies) based a whole psychological movement on this idea. What is less obvious, and the most crucial structural disaster in the way the vast majority of courses are designed, is that they invariably and predictably destroy any intrinsic motivation that people may already have, often irreparably. A big part of the reason for this is that it creates a locus of causality for a task or behaviour that is perceived as being controlled by someone or something else, so it does again come back to an issue of control, but this time the effects are devastating, not just reducing motivation but actively militating against it. This crowding-out effect has been demonstrated over and over again in well-designed and hard-to-refute research studies for decades. In many cases, rewards and punishments don’t even achieve what they set out to do in the first place. For example, companies that offer performance-related bonuses typically get lower performance from their workers, and daycares that punish parents who are late picking up their children find that parents actually pick them up even later. Worse, once the damage is done, it is very hard if not (sometimes) impossible to entirely undo it. It’s like the motivation pathways have been permanently short-circuited. 
Worse still, how we are taught is often a major factor in determining how we learn, and we come to expect and (like addicts) even depend on extrinsic motivation to drive us. This is one of the reasons I sometimes describe my role as ‘un-teaching’ – there is often a lifetime of awful learning habits to undo before we can even start. 

If you are not convinced, do check out a few of the hundreds of papers at http://www.selfdeterminationtheory.org/publications/ or read pretty much anything by Alfie Kohn, or Edward Deci, or Richard Ryan. There are plenty of studies from the field of education that look at the effects of rewards and punishments and find them worse than wanting.

Breaking the cycle

There are alternatives to typical institutional taught courses, some of them very common, others less so. The University of Brighton has a great program, the MSc/MA by Learning Objectives, in which students work with supervisors to develop a set of outcomes, a means of assessment, and a work plan to reach their goals. While there are a few time and process constraints here and there for practical reasons, they are not too onerous. Students on this program tend to pass it, not because its standards are low, but because everything is aligned with what they want and need to do. A few programs at Athabasca University have similarly flexible courses that act as a kind of catch-all to enable people to do things that matter to them. PhD programs, of the traditional variety used in the UK, have (or had – the course-based American model is sadly becoming more prevalent) no obligatory courses and are entirely customized to and often by the individual student, with nothing but a few processes to ensure students remain on track and supported. They can take from 2-10 years to complete. This length can be a problem as our motivation usually changes over such a long time and extrinsic factors are often introduced that can affect it badly, but the general principle is a good one. Athabasca University’s challenge process makes it possible to completely separate accreditation from learning, which (almost) avoids the whole course problem altogether, though it does unfortunately only work if you happen to have the precise set of competences provided by actual taught courses. Its self-paced undergraduate courses, though still markedly constrained by a notional equivalence to their paced brethren, free students from the tyranny of schedules, even if they do have other features that are overly limiting. PLAR/APEL processes that are common in institutions across the world separate learning from accreditation almost entirely. 
And that’s not to mention a huge host of teach-yourself methods and resources from Google Search to Wikipedia to the Khan Academy to Stack Exchange and hundreds of other fine online systems that most of us use when we actually want and need to learn something. And, of course, there are books, which have the great benefit of allowing us to skip things, re-read things, look up references and so on, so our paths through them are seldom linear and always under our control – unless we are forced to read them because of a course. 

But what about the run-of-the-mill?

Though there is much to be learned from existing methods that entirely or partially by-pass the harmful effects of taught courses, teachers in higher education operate under a set of ugly constraints that make it very difficult and often impossible for us to completely avoid their ill effects, especially when student numbers are large and things like professional standards bodies come into the picture. Until we achieve massive educational reform, which might allow us to provide multiple paths to achieving competence, that might separate learning from accreditation, that might be chunked in ways that suit the needs of learner and subject, we are mostly stuck with the offspring of a mediaeval system that has evolved to defend itself against change. Most of us have to grade things, we have to make use of learning objectives/outcomes, and we don’t have much control over course length. Often, especially in lower-level courses and/or where standards bodies are involved, we have little control over the competences that need to be attained, whether or not we are competent to teach them. Moreover, many of the most effective existing methods of teaching without courses are very resource-hungry. It would be great to apply the (UK-style) PhD process to all of our teaching but it is economically infeasible. PhDs are expensive for a very good reason – many of the economic and physical constraints that drove the development of courses in the first place have not gone away, even though some have been notably diminished. Given these issues, I will finish this post with a few general ideas, suggestions and patterns to help reduce the ill effects of courses without destroying the system of which they are a part. 

Give control

Traditional teaching seems determined to take control away from learners, but we can do much to give it back. Amongst other things:

  • allow students to choose what they do and how they do it. For instance, I have a web development course that centres around a site that students build throughout the course, that is about something they choose and they care about, and a course process that encourages them to choose between (or discover for themselves or their peers) multiple resources and methods to learn the requisite skills along the way. It makes extensive use of peer support and encourages sharing of problems and solutions, so that students teach one another as a natural fall-out of the process. It uses reflection to support the process, and an assessment based on evidence (that the students select for themselves) of meeting specified learning outcomes. It’s far from perfect, and it does often cause problems (especially at first) for those who have learned dependence via our broken educational system, but it shows one way that learners can take the reins.
  • allow students to choose the learning outcomes. This is trickier to enact because of the rigid requirements we usually have to develop curricula and match them with those delivered elsewhere. However, if the outcomes we specify are not too specific, relating to broad competences, it is still possible to allow some flexibility to students to identify finer-grained outcomes that suit their needs and that are exemplars of the general overarching outcomes. I’ve found this approach easier to follow in graduate level courses in ill-defined subject areas – I don’t really have a way of doing this well for those that are constrained by disciplinary standards.
  • allow students to design their own assessments. This one is easier. Learning contracts are one way to do this, supported with scaffolding that allows students to develop their own plans for assessment. Similarly, we can ask for them to provide evidence in a form that suits them (one of the best computing assignments I have ever seen was mostly done as poetry, and I once had a great explanation of the ISO model of network management explained using Santa Claus’s elves). At the very least, we can offer alternative pre-written forms of assessment that students can choose between according to their preferences.
  • allow students to pick their own content. This is a trick I have used for several courses. I offer a menu of options that address the intended (broad) outcomes and negotiate which parts we/they will cover during the course. It takes a little more effort to prepare, but the payoff is large. For graduate level courses I sometimes encourage students to develop their own content that we all then use.
  • allow students to choose their own tools, media, platforms, etc. Where possible, students should not be limited in their choice of technologies needed to complete the course. This can be tricky where we are constrained by things like institutional platforms, but there are often ways to allow at least some flexibility (e.g. mobile-friendly versions, PDF and e-book formats, standard formats that allow the use of any editor or development tool, etc.).
  • allow students to pick the time and place. This is the default at Athabasca University for most courses, but can be trickier when there are timetables and constraints of working with others according to a schedule. Classroom flipping can help a bit, limiting what is done in the class to things that actually benefit from being somewhere with other people (feedback, dialogue, collaboration, problem-solving, etc), and leaving a lot to self-paced study. This is true online as well as in face-to-face teaching. Indeed, counter-intuitively, it is even one of the odd potential benefits of traditional lectures, inasmuch as they typically only take an hour of a student’s time once a week, between which students are free to learn as they please (not a completely serious point, but worth pointing out because of the important and universally applicable lesson it reminds us of, that teaching behaviours only have a tangential relationship with learning behaviours).
  • allow students to control social interaction. I am a huge fan of learning with other people but we all have different needs for engagement with others in our learning, and it doesn’t suit everyone equally all the time. Where possible, I try to build processes that let those that benefit from social interaction to work with others, but that let those that prefer a different approach to work alone, using evidence-based assessments rather than process-based ones. For instance, evidence can include help given to others or conversations with others, but can as easily come from individual work (unless social competences are on the menu for learning). I find it useful to build simple sharing (as opposed to dialogue) into the process so that even the least sociable of students share things and therefore support the learning of others.

Use better forms of extrinsic motivation

Extrinsic motivation is not all equally awful: some is barely distinguishable from, or is even a part of, intrinsic motivation. Extrinsic motivators lie on a spectrum from bad (externally imposed reward and punishment) to much better and more internally regulated varieties, such as:

  • doing things out of a sense of duty, guilt or obligation (introjected regulation) or, better,
  • doing things because they are perceived as worthwhile in themselves (identified regulation, e.g. losing weight) or, better still,
  • doing things because they are necessary steps to achieve something else we are really motivated to achieve (integrated regulation).

See http://www.selfdeterminationtheory.org/theory/ for more about these differentiations. There are plenty of ways to use this to our advantage. It can often, for instance, be useful to encourage reflection on a learning activity. This can be used to think about why we are doing something, how it relates to our needs and goals, and what it means to us. Reflection can kindle more effective forms of extrinsic motivation that are far less harmful than externally imposed rewards and punishments. It is also valuable to nurture community, so that students feel obligations to the team or to one another, and support one another when the going gets rougher. Also, seeing how others are motivated can inspire us to recognize similar motivations in ourselves. Shared reflections (e.g. via blogs) can be particularly valuable.

Grades are not always necessary. While getting rid of the need to summatively assess is seldom possible, we can often avoid the use of grades (pass/fail is a little better than a mark), and we can make it possible for students to keep at it without grading until it is right, thus reducing the chance of failure. My courses tend to have feedback opportunities scattered throughout but I explicitly avoid giving any grades until the last possible moment. It can upset some students who have learned grade-dependence, so it is important that they are fully aware of the reasoning and intent, and that the feedback is good enough that they can judge for themselves how well they are doing (I don’t always get that bit right!). Of course, I am only suggesting that we lose the grades, not the useful feedback. Feedback is crucial to allowing students to feel in control – they need to know what they are doing well and what could be improved, and plentiful feedback can be hugely motivating, showing that other people care, contributing to a sense of achievement, and more. Good, descriptive feedback that focuses on the work (never the student) is a cornerstone of effective educational practice. Grades tell us little or nothing, while encouraging an extrinsic focus that is harmful to motivation.

Step outside the course

Making links beyond a single course can be very beneficial to motivation. I attended an interesting presentation (at the same conference this originated in) the other day by Norman Jackson who talks about lifewide as opposed to lifelong learning, an idea that captures this principle well. Creating opportunities for students to engage in external activities like (for example) clubs, societies, geological digs, competitions, community work, conferences, charitable work, kickstarters, Wikipedia articles, coding camps and so on can fill in a lot of motivational gaps, making it easier to see the relevance of a course, to feed new ideas into a course, to gain a greater sense of personal relevance and responsibility for one’s own learning, to expand on work done in a course in greater detail without the imposition of extrinsic motivation. Of course, students should be free to choose which of these they engage with and, better still, should find them for themselves. However, there is no harm in advertising such things, nor in designing courses that allow students to capitalize on learning from other activities within the course itself such as projects, show-and-tell sessions, flexible discussions and so on. There are also often opportunities for doing things across multiple courses, using outputs of one to feed another, or bringing together different skillsets for joint projects. Another way to reduce the harm slightly is to build multiple courses into a single overarching one, of lengths appropriate to the needs of the students and subject.

Build learning communities and spaces rather than courses

Given the wealth of potential resources and people’s time that are available for free on the Web (not to mention in libraries) there is often no need to provide much, if any, content (in the sense of stuff presenting subject matter). A couple of the most successful courses I have ever run have had no curriculum or content to speak of, just a set of broad outcomes, a very flexible and student-designed assessment, an approach to making use of the learning community and a responsive process to make it all happen. The process can take a surprising amount of time to develop, as it is important that it is both understood well by the students (including how it is assessed, expectations, norms, etc) and that it can be guaranteed to result in the intended outcomes (assuming these are not negotiated too). Getting that process and community right can be hard work both in the design phase and (especially) during the course but, when it does go right, it is very rewarding. I have often learned as much if not more than my students on those courses, and they are the only courses I have ever run with more than a couple of students where I have had nothing but grade A students (moderated by external examiners as well as by peers). The massive enthusiasm and passion that results from a rich learning community of learners who are in control of their own learning has to be seen to be believed. The essence of the method is to let go just enough but no more: a teacher’s role is to provide plentiful prodding, ideas, critical feedback and, above all, scaffolding so that students feel confident that they are making progress in useful directions (and get help when they are not).
It is also a bit of a juggling act to make sure that even loose outcomes are met, especially as students tend to diverge in all sorts of different directions, some of which are brilliant and worth pursuing – getting those outcomes loose enough in the first place but sufficiently recognizable and relevant to academic careers is a bit of an art that I am still learning. It also takes a lot of energy and dedication to make it work so, if you are having a bad week or two, things can go topsy turvy pretty fast.  It is worth putting a huge amount of effort into the first few weeks, responding enthusiastically and personally at any time of day or night that you can afford in order to set the tone, show that you care, explain your approach and soothe any fears. Once you have established trust that you care, and have nurtured a strong learning community, students tend to help one another a lot and forgive you when you are less attentive later on. I try to design the process so that I can intentionally let go in later weeks too.

In conclusion

As an intrinsic design feature, traditional university taught courses and their attendant processes and regulations impose unnatural restrictions on both teachers and students, reducing control and stunting motivation. It would be great to throw off these restrictions altogether. We could make enormous gains simply through separating teaching from accreditation (at least, wherever possible – in extremely rare cases it really is true that there is only one person who can reliably judge competence and that person is the teacher). This may soon become a necessity rather than a virtue if MOOCs continue to evolve faster than the means to reliably accredit the results. Athabasca University already has the challenge process to cope with that, though is significantly fettered by the need to match competences achieved with those that apply to existing courses – our challenge process is insufficiently fine-grained to allow real flexibility. There would be equally great gains if we made courses the right size (typically though not necessarily small) to fit the needs of different students rather than shoehorning them to fit the needs of institutions. We have technologies that can take the hard work out of managing the ensuing complexity so traditional timetabling woes need not impede us, and it would make it much easier to mix and match, including to accredit learning done in different ways. However, there is plenty that can be done even within the constraints of a typical university course, as long as we are aware of the dangers and take steps to reduce the harm. I hope that this little piece and this smattering of suggestions has sparked an idea or two about how we might go about doing that. Perhaps, if more of us start to question the system and apply such ideas, it might help to make a climate where bigger change is possible.
If you’re interested in finding out more, I have written about this kind of thing once or twice before, with slightly different emphases, such as at https://landing.athabascau.ca/blog/view/177831/the-monkeys-paw-effect-in-higher-education and at https://landing.athabascau.ca/blog/view/496760/cargo-cult-courses 

 

Two conferences in two days

I’ve just got back from a flying visit to the UK. The first thing I saw on arriving at the new and not at all unpleasant Heathrow Terminal 2 was Stephen Downes. Small world. We were getting luggage from different areas and lost each other in the rush to get to different places, but it was nice to see him, however briefly.

The main reasons I was in the UK were two conferences, The First European Conference on Social Media and the umpteenth Learning & Teaching Conference at the University of Brighton. Sadly, they overlapped, which meant I only got to attend a day of each, but I managed to give two quite different sessions, one at each conference. The first, at ECSM, was a traditional slide-based presentation about the Landing, why and how we built it, and what we might do differently if we started again. As an experiment, rather than my usual handful of images that sit behind most of my presentations, I threw nearly 50 slides (some with multiple build stages) at the stunned audience in 20 minutes. Quite fun. The second, at the L&T conference, was a much more discursive hour-long session that questioned the fundamental notion of courses, which involved a few thought experiments and a lot of conversation among a very engaged crowd.

ECSM was a very well-organized affair (disclaimer – the chairs were my friends Sue Greener and Asher Rospigliosi) which provided what I have hoped to see in a social media conference for some years but have previously been disappointed by: diversity. When I put together my first social computing course a few years ago I tried to offer much the same kind of range as this conference provided, but have since been a bit worried that I was defining a discipline too early in its lifecycle. This is because most social media/social computing conferences I have been involved with over the past few years have fallen heavily into computer algorithm territory, which my course touches on but doesn’t make a central focus. I have sometimes thought that they would be better named as social network analysis conferences, as variations on that theme have totally dominated the proceedings. I have come across some social media conferences that drift entirely the other way, looking at social and sociological consequences, and a few that focus on a single subject area or context (education and/or learning being the ones that usually interest me most). In contrast, ECSM was delightfully broad, with offerings across the spectrum, with coverage that I feel vindicates my choice of subject matter and approach for a social computing course. It included a lot of papers related to business, politics, media, education and other general areas, and a wide range of research attitudes and methods from the highly algorithmic to the softest and fuzziest of media analyses and critical inquiries. There were plenty of case studies from lots of contexts and demonstrations or reports on plentiful interesting systems. I think this is a sign of a maturing area of study. Though they were not keynoting, I was impressed that the conference attracted the marvellous guru couple of Jenny Preece and Ben Shneiderman. My favourite discovery of the day was that Dutch police have a room in Habbo Hotel.
At the conference dinner I sat next to John Traxler, who was doing the next day’s keynote (that I would miss). He continues to impress me as a creative and incisive thinker. We spoke more about beer, Brighton and music than mobile and social media, but it was fun.

I was not expecting as much out of the parochial Learning & Teaching conference the next day, but I was wrong. The first keynote by Sue Clegg on the arguable failure of widening participation was thought-provoking and went down well. Though provocative, it was a bit dry for my taste – I’m not a fan of presentations read from sheets of notes. I’d rather read the notes and have a conversation. Its focus was also very UK-centric, which should have been interesting, but I did not have sufficient background knowledge of the events and acronyms to which she referred. She also seemed unusually approving of higher education access rates in the US, ranking it highest in the world, which was more than a bit of a surprise to me: I guess it depends how you measure such things, but the OECD ranks the US well below Korea, Japan, Canada (we’re third!) and several European countries, including the UK, when it comes to higher education participation. Nonetheless, her talk was mostly tightly argued and backed up by plentiful research. I had planned to leave and return to ECSM after my session, which followed Sue Clegg’s talk, but I was enjoying meeting old friends and sufficiently intrigued by later sessions to stay on. I am glad that I did, not just because it gave me a chance to catch up with old friends and colleagues.

The first presentation I saw was about use of the e-portfolio system Mahara for professional and personal development. The University of Brighton has a mature and well-implemented Mahara instance that is used for a great many things, from personal publication to coursework to CV writing. I was a bit sad to see that, in combination with a WordPress instance and a SharePoint system used by staff, it had pretty much replaced the innovative Elgg system, community@brighton, that was part of the inspiration for the Landing and that largely surpassed all three put together in functionality. After 8 or 9 years, the last few of those in a state of slow and painful decline, community@brighton is about to be decommissioned. Community@brighton was a little ahead of its time; it suffered greatly in an upgrade process after its first successful couple of years that resulted in the loss of a great deal of the network and communities that had thrived beforehand, and it never fully recovered the trust of its users; it was insufficiently diverse in its primary uses, being quite focused on teaching and, in its latter years, finding shared local accommodation; and it was not helped that its introduction coincided with the massive rise of Facebook (before most people realised how evil that site was). But it was a great system that was (and even as it nears extinction, possibly still is) the world’s largest social media site in an HE institution and a lot of innovative work was done on and through it.

I was interested to learn that the University of Brighton has outsourced its Mahara, Blackboard and some other systems to the cloud. Mahara runs on Amazon’s Cloud service and is managed by Catalyst IT (www.catalyst-eu.net), the company behind Mahara, all for around £12,000 (roughly $CAD20,000) per year, plus fairly minimal cloud charges. This seems pretty good value to me – very hard for an internal IS team to compete with that. Similarly, though Blackboard is the work of the devil and the costs are astronomical, moving away from Blackboard would be very difficult for the University of Brighton. This is thanks to the massive investment in materials and training already sunk into it, combined with Blackboard’s strenuous efforts to encourage that dependency and notoriously bad tools for getting data out. Bearing that in mind, it makes sense for the University to move to a hosted solution, especially given the terrible performance, countless bugs, regular and irregular downtime, and the large amount of effort needed to keep it running and to answer technical problems. At least it should now perform reasonably, get timely updates, rarely go down and just work, most of the time. On a cautionary note I was, however, intrigued to learn that the university’s outsourcing of student email (to Microsoft’s Irish branch – Google was rejected due to lack of adherence to European data protection laws) had met with an unfortunate disaster, inasmuch as Microsoft changed the terms and conditions that had formerly meant students would have an email address for life, to a much more limited term. Outsourcing is fine when it works, but it always depends on another company with very different goals from one’s own. I normally prefer to keep things in-house, despite the cost. It means that you retain control of the data no matter what and, just as importantly, the knowledge to use it.

After a very fine lunch, I attended a double-length session reporting on the University of Brighton’s findings and work resulting from the very large Higher Education Academy ‘What Works’ research initiative. ‘What Works’ was focused on improving retention rates, seeking reasons for students giving up on courses and programs, and seeking ways to help them succeed. Brighton was one of the 22 institutions involved in the £1M study. A large team from Brighton gave a very lively and highly informative sequence of presentations on the background, the research and the various interventions that had been attempted following the study, not all with equal success, but all of them interesting. The huge take-home for me was the crucial importance of a culture of belonging. This was singled out in the HEA research that fed into this as the most significant factor in determining whether or not a student continues. Other factors are closely related to this – supportive peers, meaningful interactions, developing knowledge and confidence, and relevance to future goals, and all contribute to belongingness. There are also other factors like perseverance, engagement and internalization that play a role. It is intriguing to me that the research into this started with something of a blank slate, and did not draw significantly on the extensive literature on motivation outside of an educational setting. If it had done, they would probably have identified control as a major factor too although, given the context (traditional educational systems are not great for giving students control, especially to those in their early months of study), it is not surprising that it was missed. In recent years I have typically followed self-determination theory’s vocabulary of ‘relatedness’ for this aspect of motivation, but ‘belonging’ is a far better word that captures a lot of what is distinctive about the nature and value of traditional academic communities and practices.
Significantly for me, that is something which we at Athabasca University tend not to do so well. With self-paced courses, a large number of visiting students and relatively limited communication tools (apart from the Landing, of course!) it is very hard for us to build that sense of belonging. When tutoring works well, it goes quite a long way to achieving it and occasionally a bit of community develops via Moodle discussions but, apart from the Landing, we do nothing much to support a wider sense of belonging. At least, not in undergraduate programs. I think we tend to do it fairly well in graduate programs, where it is easier to build more personal relationships, peer support and cohorts into the system. I intend to follow this up and explore more of the background research that led to the HEA team’s conclusions. 

The afternoon ended with Pimms, but not before a closing keynote by Norman Jackson on life-wide (as distinct from life-long) learning. I found the notion of lifewide learning pleasing, concentrating on a person’s whole learning life, of which intentional academic behaviour is just a small part. The idea is related to the notion of learning trajectories as posited by Michael Eraut, with whom Jackson has worked. There was lots to like in his talk, and it drew attention away from the very course-centric view that underpins much university thinking, and that I had criticized in my own session. He had lots of nice examples based on studies and interviews with students, none of whom simply followed a ‘course’, though perhaps the examples were a little too glibly chosen – this was appreciative enquiry. He also placed a great onus on his version of ‘learning ecologies’ to describe the lifewide process. His definition of a learning ecology differs considerably from mine, and others who have used the term. As far as I could tell, the focus was very much on an individual, and his definition of a ‘learning ecology’ related to the various things that individuals do to support their learning. This is not a very rich ecology! I think that simply means that we tend to do a lot of things when learning that affect our learning in other things, all in a richly connected self-nourishing fashion. While he did, when questioned, agree that there was much richness to be gained from ‘overlapping’ ecologies and learning with and from others, I don’t think he sees the overlap as anything more than that. For me and, I think, most others who have used the term, a learning ecology has emergent patterns and behaviours that are quite different from its parts, full of rich self-organization, and it is crucial to negotiating meaning and creating knowledge in a social context. 
In a learning ecology, everyone’s learning affects everyone else’s, with positive and negative feedback loops creating knowledge that goes far beyond what any individual could develop alone. 

I am back in Canada now and trying to catch up with the load of things that two conferences inevitably delayed. I usually reckon that a conference takes up at least three times the time taken by the conference travel itself – preparation and recovery time are always a significant factor. In fact, it should take longer to recover because it would be great to reflect further to help consolidate and connect the learning that inevitably happens during the intensive sessions and conversations that characterize conferences: too many learning opportunities are lost when we rush back into a pile of over-delayed work after such things. At the very least, posts like this are a necessity to help make sense of it all, not an optional extra, but there is a lot more that I would like to follow up on if I had the time. It is also a pity because the weather in Vancouver is stunning (maybe too hot and dry) and I have a newly purchased but very old boat floating outside that keeps calling me. 

 

Classrooms may one day learn us – but not yet

Thanks to Jim and several others who have recently brought my attention to IBM’s rather grandiose claim that, in a few years, classrooms will learn us. The kinds of technology described in this article are not really very new. They have been just around the corner since the 60s and have been around in quantity since the early 90s when adaptive hypermedia (AH) and intelligent tutoring systems (ITS) rose to prominence, spawning a great many systems, and copious research reported on in hundreds of conferences, books and journal articles. A fair bit of my early work in the late 90s was on applying such things to an open corpus, which is the kind of thing that has blossomed (albeit indirectly) into the recently popular learning analytics movement. Learning analytics systems are essentially very similar to AH systems but mostly leave the adaptation stage of the process up to the learner and/or teacher and tend to focus more on presenting information about the learning process in a useful way than on acting on the results. I’ve maintained more than a passing interest in this area but I remain a little on the edge of the field because my ambitions for such tools have never been to direct the learning process. For me, this has always been about helping people to help one another to learn, not to tell them or advise them on how to learn, because people are, at least till now, the best teachers and an often-wasted resource. This seemed intuitively obvious to me from the start and, as a design pattern, it has served me well. Of late, I have begun to understand better why it works, hence this post.

The general principle behind any adaptive system for learning is that there are learners, some kind of content, and some means of adapting the content to the learners. This implies some kind of learner model and a means of mapping that to the content, although I believe (some disagree) that the learner model can be disembodied in constituent pieces and can even happily exist outside the systems we build, in the heads of learners. Learning analytics systems are generally all about the learner model and not much else, while adaptive systems also need a content model and a means of bringing the two together.  
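To make that general principle concrete, here is a deliberately toy sketch of the three pieces working together: a learner model, a content model, and an adaptation step that maps one onto the other. Everything in it (the concept names, the data structures, the prerequisite-based recommendation rule) is invented for illustration only, and is not drawn from any particular AH or ITS system:

```python
# A toy illustration of the classic adaptive-systems pattern:
# a learner model, a content model, and an adaptation function
# that recommends what to study next. All names and the simple
# prerequisite/goal rule below are hypothetical examples.

learner_model = {
    "known_concepts": {"variables", "loops"},
    "goal_concepts": {"variables", "loops", "functions", "recursion"},
}

content_model = [
    {"title": "Intro to functions", "teaches": {"functions"}, "requires": {"variables"}},
    {"title": "Recursion basics", "teaches": {"recursion"}, "requires": {"functions"}},
    {"title": "Loops revisited", "teaches": {"loops"}, "requires": {"variables"}},
]

def recommend(learner, content):
    """Suggest items whose prerequisites the learner already knows
    and which teach something still missing from their goals."""
    missing = learner["goal_concepts"] - learner["known_concepts"]
    ready = [
        item for item in content
        if item["requires"] <= learner["known_concepts"]  # prerequisites met
        and item["teaches"] & missing                     # teaches a gap
    ]
    return [item["title"] for item in ready]

print(recommend(learner_model, content_model))  # ['Intro to functions']
```

Nothing in the sketch requires the learner model to live inside a single system: the same dictionaries could be assembled from fragments held in different tools, or, as suggested above, remain partly in the learner's own head, with only the adaptation step automated. A learning analytics system, by contrast, would stop after populating and displaying the learner model, leaving the `recommend` step to the learner or teacher.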

Beyond some dedicated closed-corpus systems, there are some big obstacles to building effective adaptive systems for learning, or that support the learning process by tracking what we are doing.  It’s not that these are bad ideas in principle – far from it. The problem is more to do with how they are automated and what they automate. Automation is a great idea when it works. If the tasks are very well defined and can be converted into algorithms that won’t need to be changed too much over time, then it can save a lot of effort and let us do things we could not do before, with greater efficiency. If we automate the wrong things, use the wrong data, or get the automation a little wrong, we create at least as many problems as we solve. Learning management systems are a simple case in point: they automated abstracted versions of existing teaching practice, thus making it more likely that existing practices would be continued in an online setting, even though they had in many cases emerged for pragmatic rather than pedagogic reasons that made little sense in an online environment. In fact, the very process of abstraction made this more likely to happen. Worse, we make it very much harder to back out when we automate, because we tend to harden a system, making it less flexible and less resilient. We set in stone what used to be flexible and open. It’s worse still if we centralize that, because then whole systems depend on what we have set in stone and you cannot implement big changes in any area without scrapping the whole thing. If the way we teach is wrong then it is crazy to try to automate it. Again, learning management systems show this in spades, as do many of the more popular xMOOC systems. They automate at least some of the wrong things (e.g. courses, grading, etc). So we had better be mighty sure about what we are automating and why we are doing it. And this is where things begin to look a bit worrying for IBM’s ‘vision’. 
At the heart of it is the assumption that classrooms, courses, grades and other paraphernalia of educational systems are all good ideas worth preserving. The problem is that these evolved in an ecosystem that made them a sensible set of technologies at the time, but they have very little to do with best practice or research into learning. This is not about learning – it is about propping up a poorly adapted system.

If we ignore the surrounding systems and start with a clean slate, then this should be a set of problems about learning. The first problem for learning analytics is to identify what we should be analyzing, the second is to understand what the data mean and how to process them, and the third is to decide what to do about that. Our knowledge of all three stages is intermediate at best. There are issues concerning what to capture, what we can discover about learners through the information we capture, and how we should use that knowledge to help them learn better. Central to all of this is what we actually know about education and what we have discovered works best – not just statistically or anecdotally, but for any and all individuals. Unfortunately, in education, the empirical knowledge we have to base this on is very weak indeed.

So far, the best we can come up with that is fairly generalizable (my favourite example being spaced learning) is typically only relevant to small and trivial learning tasks like memorization or simple skill acquisition. We're pretty good at figuring out how to teach simple things well, and intelligent tutoring systems (ITS) and adaptive hypermedia (AH) have done a pretty fair job under such circumstances, where goals (seldom learning goals – more often proxies like marks on tests or retention rates) are very clear and/or learning outcomes very simple. As soon as we aim for more complex learning tasks, the vast majority of studies of education are either specific, qualitative and anecdotal, or broad and statistical, or (more often than should be the case) both. Neither is of much value when trying to create an algorithmic teacher, which is the explicit goal of AH and ITS, and is implied in the teaching/learning support systems provided by learning analytics.
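Spaced learning is a good example of why such simple tasks automate so well: the whole 'teacher' reduces to a scheduling rule. Below is a hedged sketch of a Leitner-style spaced-repetition scheduler; the five boxes and the interval lengths are illustrative choices on my part, not canonical values:

```python
# box -> days until the next review; a correct answer promotes a card to a
# longer interval, a failure demotes it back to daily review
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

def review(box: int, correct: bool):
    """Return (new_box, days_until_next_review) after one recall attempt."""
    new_box = min(box + 1, 5) if correct else 1
    return new_box, INTERVALS[new_box]

box = 1
for correct in [True, True, False, True]:
    box, days = review(box, correct)
    print(box, days)  # prints: 2 3, then 3 7, then 1 1, then 2 3
```

The point is not the code but its triviality: the entire adaptive logic fits in two lines, which is precisely why memorization is where ITS-style automation shines, and why nothing comparably compact exists for complex learning.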

There are many patterns that we do know a lot about, though they don't help much here. We know, for example, that one-to-one mastery teaching on average works really brilliantly – Bloom's 2-sigma challenge still stands, about 30 years after it was first made. One-to-one teaching is not a process that can be replicated algorithmically: it is simply a configuration of people that allows the participants to adapt, interact and exchange or co-develop knowledge with each other more effectively than configurations where there is less direct contact between people. It lets learners express confusion or enthusiasm as directly as possible, and lets the teacher provide tailored responses, giving full and undistracted attention. It allows teachers to care directly both for the subject and for the student, and to express that caring effectively. It allows targeted teaching to occur, however that teaching might be enacted. It is great for motivation because it ticks all the boxes on what makes us self-motivated. But it is not a process, and it tells us nothing at all about how best to teach nor how best to learn in any way that can be automated, save that people can, on the whole, be pretty good at both, at least on average.

We also know that social constructivist models can, on average, be effective, probably for related reasons. They can also be a complete disaster. But fans of such approaches wilfully ignore the rather obvious fact that lots of people often learn very well indeed without them – the throwaway 'on average' covers a massive range of differences between real people, teachers and learners, and between the same people at different times in different contexts. This shouldn't come as a surprise, because a lot of teaching leads to some learning, and most teaching is neither one-to-one nor inspired by social constructivist thinking. Personally, I have learned phenomenal amounts, been inspired and discovered many things through pretty dreadful teaching technologies and processes, including books and lectures and even examined quizzes. Why does it work? Partly because how we are taught is not the same thing at all as how we learn. How you and I learn from the same book is probably completely different in myriad ways. Partly it is because it ain't what you do to teach but how you do it that makes the biggest difference. We do not yet have an effective algorithmic way of making, or even identifying, creative and meaningful decisions about what will help people to learn best – it is something that people, and only people, do well. Teachers can follow an identical course design with identical subject matter and turn it into a pile of junk or a work of art, depending on how they do it: how enthusiastic they are about it, how much eye contact they make, how they phrase it, how they pace it, their intonation, whether they turn to the wall, whether they remembered to shave, whether they stammer, etc., etc. And the same differentiators may work sometimes and not others, may work for some people sometimes and not for others.
Sometimes, even awful teaching can lead to great learning, if the learners are interested and learn despite rather than because of the teacher, taking things into their own hands because the teaching is so awful. Teaching and learning, beyond simple memory and training tasks, are arts and not sciences. True, some techniques appear to work more often than not (but not always), but there is always a lot of mysterious stuff that is not replicable from one context to the next, save in general patterns and paradigms that are mostly not easily reduced to algorithms. It is over-ambitious to think that we can automate in software something we do not understand well enough to turn into an algorithm. Sure, we learn tricks and techniques, just like any artist, and it is possible to learn to be a good teacher just as it is possible to learn to be a good sculptor, painter or designer. We can learn much of what doesn’t work, and methods for dealing with tricky situations, and even a few rules of thumb to help us to do it better and processes for learning from our mistakes. But, when it comes down to basics, it is a creative process that can be done well, badly or with inspiration, whether we follow rules of thumb or not, and it takes very little training to become proficient. Some of the best teachers I’ve ever known have used the worst techniques. I quite like the emphasis that Alexandra Cristea and others have put on designing good authoring environments for adaptive systems because they then become creative tools rather than ends in themselves, but a good authoring tool has, to date, proved elusive and far too few people are working on this problem.

The proponents of learning analytics reckon they have an answer to this problem, by simply providing more information, better aggregated and more easily analyzed. It is still a creative and responsive teacher doing the teaching and/or a learner doing learning, so none of the craft or art is lost,  but now they have more information, more complete, more timely, better presented, to help them with the task so that they can do it better. The trouble is that, if the information is about the wrong things, it will be worse than useless. We have very little idea what works in education from a process point of view so we do not know what to collect or how to represent it, unless all we are doing is relying on proxies that are based on an underlying model that we know with absolute certainty is at least partly incorrect or, at best, is massively incomplete. Unless we can get a clearer idea of how education works, we are inevitably going to be making a system that we know to be flawed to be more efficient than it was. Unfortunately, it is not entirely clear where the flaws lie especially as what may be a flaw for one may not be for another, and a flaw in one context may be a positive benefit in another.  When performing analytics or building adaptive systems of any kind, we focus on proxies like grades, attention, time-on-task, and so on – things that we unthinkingly value in the broken system and that mean different things to different people in different contexts.  Peter Drucker made an important observation about this kind of thing:

'Nothing is less productive than to make more efficient what should not be done at all.'

A lot of systems of this nature improve the efficiency of bad ideas. Maybe they valorize behaviourist learning models and/or mediaeval or industrial forms of teaching. Maybe they increase the focus on grading. Maybe they rely on task-focused criteria that ignore deeper connective discoveries. Maybe they contain an implied knowledge model that is based on experts’ views of a subject area, which does not normally equate to the best way to come by that knowledge. Maybe they assume that time on task matters or, just as bad, that less time spent learning means the system is working better (both and neither are true). Maybe they track progress through a system that, at its most basic level, is anti-educational. I have seen all these flaws and then some. The vast majority of tools are doing education-process analytics, not learning analytics. Even those systems that use a more open form of analytics which makes fewer assumptions about what should be measured, using data mining techniques to uncover hidden patterns, typically have risky systemic effects: they afford plentiful opportunities for filter bubbles, path dependencies, Matthew Effects and harmful feedback loops, for example. But there is a more fundamental difficulty for these systems.  Whenever you make a model it is, of necessity, a simplification, and the rules for simplification make a difference. Models are innately biased, but we need them, so the models have to be good. If we don’t know what it is that works in the first place then we cannot have any idea whether the patterns we pick out and use to help people guide their learning journeys are a cause, an effect or a by-product of something else entirely. If we lack an explicit and accurate or useful model in the first place, we could just again be making something more efficient that should never be done at all. 
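The feedback-loop risk is easy to demonstrate. The toy simulation below (an illustration of mine, not taken from any real analytics product) has a dashboard always surface the currently most-viewed resource; a one-view head start then hardens into total dominance, a deterministic caricature of the Matthew Effect:

```python
views = [10, 11]  # two equally good resources; one has a tiny head start

for _ in range(100):
    # the dashboard recommends whatever is already most viewed,
    # and the recommendation itself generates the next view
    top = views.index(max(views))
    views[top] += 1

print(views)  # prints [10, 111]: the head start absorbed every new view
```

Real systems add noise and exploration, but any analytics pipeline that feeds its own measurements back into what learners are shown has this loop somewhere inside it.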
This is not to suggest that we should abandon the effort, because it might be a step to finding a better model, but it does suggest we should treat all findings gathered this way with extreme scepticism and care, as steps towards a model rather than an end in themselves.

In conclusion, from a computing perspective, we don't really know much about what to measure, we don't have great grounds for deciding how to process what we have measured, and we don't know much at all about how to respond to what we have processed. Real teachers and learners know this kind of thing and can make sense of the complexity because we don't just rely on algorithms to think. Well, OK, that's not necessarily entirely true, but the algorithms likely operate at a neural-network level as well as an abstract level, and are probably combinatorially complex in ways we are not likely to understand for quite a while yet. It's thus a little early to be predicting a new generation of education. But it's a fascinating area to research that is full of opportunities to improve things, albeit with one important proviso: we should not be entrusting a significant amount of our learning to such systems just yet, at least not on a massive scale. If we do use them, it should be piecemeal, and we should try diverse systems rather than centralizing or standardizing in the ways that the likes of Knewton are trying to do. It's a bit like putting a computer in charge of decisions about whether or not to launch nuclear missiles. If the computer were amazingly smart, reliable and bug-free, in a way that no existing computer even approaches, it might make sense. If not, if we do not understand all the processes and ramifications of decisions that have to be made along the way, including ways to avoid mistakes, accidents and errors, it might be better to wait. If we cannot wait, then using a lot of different systems and judging their different outputs carefully might be a decent compromise. Either way, adaptive teaching and learning systems are undoubtedly a great idea, but they are, have long been, and should remain on the fringes until we have a much clearer idea of what they are supposed to be doing.

Being-taught habits vs learning styles

In case the news has not got through to anyone yet, research into learning styles is pointless. The studies proving this are legion; see, for just a tiny sample of the copious and damning evidence:

Riener, C., & Willingham, D. (2010). The myth of learning styles. Change: The Magazine of Higher Learning, 42(5), 32-35. doi:10.1080/00091383.2010.503139

Dembo, M. H., & Howard, K. (2007). Advice about the use of learning styles: A major myth in education. Journal of College Reading and Learning, 37(2).

Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. London: Learning and Skills Research Centre.

No one denies that it is possible to classify people in all sorts of ways with regard to things that might affect how they learn, nor that everyone is different, nor that there are some similarities and commonalities in how people prefer to, or habitually, go about learning. When these elaborately constructed theories claim no more than that people are different in interesting and sometimes identifiably consistent ways, then I have little difficulty accepting them in principle, though it is always worth observing that there are well over 100 of these theories and they cannot all be right. There is typically almost nothing in any of them that could prove them wrong either. This is a hallmark of pseudo-science and should set our critical sensors on full alert. The problem comes when the acolytes of whatever nonsense model is their preferred flavour try to take the next step and tell us that this means we should teach people in particular ways to match their particular learning styles. There is absolutely no plausible evidence that knowing someone's learning style, however it is measured, should have any influence whatsoever on how we should teach them, apart from the obvious requirement that we should cater for diversity and provide multiple paths to success. None. This is despite many decades spent trying to prove that it makes a difference. It doesn't.

It is consequently a continual source of amazement to me when people pipe up in conversations to say that we should consider student learning styles when designing courses and learning activities. Balderdash. There is a weak case to be made that, like astrology (exactly like astrology), such theories serve the useful purpose of encouraging people to reflect on what they do and how they behave. They remind teachers to consider the possibility that there might be more than one way to learn something, and so teachers are more likely to produce learning experiences that cater for diverse needs, to try different things and to build flexibility into their teaching. Great – I have no objection to that at all; it's what we should be aiming for. But it would be a lot more efficient simply to remind people of that simple and obvious fact than to sink vast sums of money and human resources into perpetuating these foolish myths. And there is a darker side to this. If we tell people that they are (to take a random choice) 'visual', or 'sensing' or 'intuitive' or 'sequential' learners, then they will inevitably be discouraged from taking different approaches. If we teach them in a way that we think fits a mythical need, we do not teach them in other ways. This is harmful: it effectively places learners in a filter bubble. The worst of it is that learners then start to believe it themselves, and ignore or undervalue other ways of learning.

Being-taught habits

The occasion for this rant came up in a meeting yesterday, where it was revealed that a surprising number of our students describe their learning style (by which they actually mean their learning preference) as listening to video lectures. I'm not sure where to begin with that. I would have been flabbergasted had I not heard similar things before. Even learning-style believers would have trouble with that one. One of the main things worth noting, however, is that this is actually a description not of a learning preference but of a 'being-taught habit'. Not as catchy, but that's what it is.

I have spent much of my teaching career not so much teaching as unteaching: trying to break the appalling habits that our institutional education systems beat into us until we come to believe that the way we are being taught is actually a good way to learn. This is seldom the case – on the whole, educational systems have to achieve a compromise between cost-efficiency and effective teaching –  but, luckily, people are often smart enough to learn despite poor teaching systems. Indeed, sometimes, people learn because of poor teaching systems, inasmuch as (if they are interested and have not had the passion sucked out of them) they have to find alternative ways to learn, and so become more motivated and more experienced in the process of learning itself. Indeed, problem-based and enquiry-based techniques (which are in principle a good idea) sometimes intentionally make use of that kind of dynamic, albeit usually with a design that supports it and offers help and guidance where needed.

If nothing else, one of the primary functions of an educational system should be to enable people to become self-directed, capable lifelong learners. Learning the stuff itself and gaining competence in a subject area or skill in doing something is part of that – we need foundations on which to build. But it is at least as much about learning ways of learning. There are many, many ways to learn, and different ways work better for different people learning different things. We need to be able to choose from a good toolkit and use approaches that work for the job in hand, not ones that match the demands of some pseudo-scientific claptrap.

Rant over.

 

Teaching gestalts

I’m preparing for a presentation and discussion tomorrow with some doctoral students on the orchestration of lifelong learning. Having come up with the topic some time ago on a whim I’m not entirely sure what I’ll be talking about, so this is mostly an attempt to focus my thinking a little and is very much a work in progress.

In brief, the central jumping-off point for this discussion is that teachers are not isolated actors but are instead gestalts formed from

  • numerous technologies, including pedagogies, regulations, processes, techniques and tools,
  • an uncountably large number of individuals and groups and, most notably of all,
  • learners themselves.

For it to work, everything must harmonize, or must make the right kinds of discord, to bring about learning. There are various things that shake out of this perspective, not least of which being that there are many ways to organize this teaching gestalt that do not involve an educational system of the sort we are used to, and that do not involve individuals labelled as teachers. This matters because most of the learning we do throughout our lives does not take place in, or result from, formal education.

The teaching gestalt

Even and perhaps particularly in a traditional educational system, teachers are not just the ones that stand (metaphorically or actually) in front of classes and explicitly perform an act that we label as teaching. Teachers are also the authors, editors, illustrators, designers and publishers of textbooks, the builders of websites, the writers of articles and so on. Teachers are designers of school systems, timetablers, architects, designers and furniture builders. Teachers are makers of videos, programmers of online environments, system administrators, TV producers, designers of door handles and technicians. And, above all, learners are teachers – of themselves and of one another. In short, teaching is always a distributed role.

Unpicking this a little further, almost all learning transactions involve at least two teachers – the one with knowledge of content, process, etc, and the learner. Learning is always an active process of knowledge construction, linking, and sense-making in which we constantly reflect, reorientate, examine, and adjust our knowledge in the light of new information or new ways of seeing. We always teach ourselves at least as much as we are taught. We are not given knowledge – we make it. Another person may help to guide us, shape the directions we go, correct us when we are confused or wrong, and motivate us to go the extra mile, but we are always a teacher in this process, whether we like it or not.

In an educational context, a vast array of actors add their own contributions to the teaching whole. Some, like authors of textbooks, or creators of curricula, or other students sharing ideas and (mis)conceptions, are very obviously playing a teaching role. Others are less obviously so, but they do matter. The people who made decisions about where to place a whiteboard, which tools to enable in an LMS, or what wattage of lightbulb to include in a classroom may make a huge contribution to the success or failure of a particular learning transaction. The designer of the timetable, the legislator who demanded a particular kind of content or a particular kind of behaviour, the setter of normalized tests, the curriculum designer and the person who cleaned the classroom all play significant and sometimes crucial roles as part of the teaching gestalt. Timetables teach, LMSs teach, hallways teach. In an educational system it is the system that educates, not just the individual teacher. I particularly like the timetable example because it is a great rejoinder to those who rather naively suggest that teachers should put pedagogy first. Sure: but first you must do it only at these times, over this period, for this amount of time, in this physical or virtual place, on this subject. Whatever. Anyway, within this context, the person who is performing the explicit role of a teacher is thus just one part of the teaching gestalt but, potentially, quite a special one, sometimes (but not always) second only to the learner in importance. He or she typically acts as a filter, conduit and interpreter who orchestrates this whole, who responds, gives feedback, shows caring. It's not too surprising that we label this person differently from the rest of the gestalt.

Orchestral manoeuvring

Since we are talking about a process of orchestration, it is natural to think of music at this point, and the analogy works quite well. A teacher may be an orchestrator, adapting to a context in which many constraints and structures have already been determined by others, using the tools, techniques and technologies to play a part in the construction of knowledge that is hopefully the outcome. Some are conductors, trying to elicit harmonious learning through tight control of the process. Like the best conductors, the best teachers of this sort make use of the materials they are working with, fitting the strengths and weaknesses of the players, the acoustics of the venue, the nature of the instruments, to the demands of the piece to be played and the intended audience. Other teachers are more like arrangers, who organize the pieces and leave the playing to someone else. Some are like players in a band, maybe drummers or bassists providing a rhythm to keep learners on track, or perhaps as soloists showing virtuosity and improvisational skills that inspire the learners to new heights. Some are content to play second fiddle, bringing out the best in the soloist but always in the background. And then there are the ones who sit in a recording studio who play all the instruments themselves, sometimes even making the instruments, and arrange everything the way they want it to be arranged. Some play blues, using the same three chords and often simple technique to play an infinite and subtle range of tunes. Some play classically, sticking closely to but always interpreting a score. Some are composers. Some are jazz improvisors, modern or trad. Some go for unusual scales, exotic rhythms and peculiar blends, others prefer the folk traditions that they learned as children. The sounds that musicians make are a function of many things, including most notably the instrument itself as well as the surroundings in which it is played and the reactions of an audience. 
And, in most cases, there are many instruments to consider. A lot of the process of teaching is about the technologies, tools and techniques, incredibly diverse, all of which have to work to a common purpose.

But whatever the tools, genres, blends and roles that teachers play, when it comes down to basics, teachers (that is to say, the players in the teaching gestalt) have to be skilled and creative, whatever and however they try to play. Above all, teaching (emerging from all the many contributors to that role) is a broad set of human practices, not a science, not just a set of techniques. It is, moreover, a creative, active and inventive practice that cannot be emptied of soul and programmed into a machine without losing the vitality and expression that makes it wonderful. This is not to suggest that machines cannot or should not be a big part of the process, however, any more than that an orchestra should try to play without instruments or a venue. Putting aside more blatant technologies like classrooms and LMSs, for better or worse, our educational systems are machines that, depending on your perspective and the aspect you are looking at, either enable or disable our ability to learn. Likewise, Google Search and Wikipedia (my two favourite e-learning technologies) have a very large and conspicuous machine element. And, of course, the creativity and inspiration can be distributed too. A bad teacher can be saved by a good textbook, for instance, and vice versa.

Why bother with teachers anyway?

It is tempting to say that most of the intentional learning we do is self-guided – that we teach ourselves anything from cooking to philosophy. I know it’s tempting, because I’ve been known to say it, and have read many research studies purporting to show this. However, this is nearly always massively wrong. What we actually do, in almost all cases, is to orchestrate teaching done by others. In some cases this is blatant and obvious. If we learn something by reading a Wikipedia article, or a book, or by watching a video, this is very clearly not a case of us teaching ourselves. At least, not totally. We are merely picking our teachers and exercising a bit of control over the pace, time and place that they teach us. We don’t get all the benefits of teaching that way by any means – importantly, we seldom get much in the way of feedback, for example, and any tailoring that happens is up to us. These kinds of things do not show us that they care about us. Such things are co-teachers, part of the teaching gestalt. But it is all a matter of degree: we are always our own teachers to some extent, and there are almost always others involved in teaching us, no matter how informal or formal the setting. Even when we learn by dabbling and experimenting, we are not exactly pure autodidacts. Partly this is because we often have some kind of target to aspire to because we have seen, read, heard or otherwise encountered terminal behaviours of the sort we are aiming for. For many competences, it is because the things we try to learn or learn with are typically designed by humans who have other humans in mind when they design them – this is true of learning that makes use of things like pencils, paints, cookware, computers, cars, musical instruments, exercise machines, calculators and yachts.  
Learning in a vacuum is not possible, unless we are learning about the vacuum which might be, incidentally, one of those rare occasions where no other teacher is directly involved in the process.

By way of example, in recent years,  I have been ‘teaching myself’ to play a new instrument at least once a year. I know what these instruments sound like when they are played well, so I can recognize the gaps between what I can do with them and what they can do. Many teachers have taught me. I have seen other people playing them so I have a fair idea how to hold them but, on the whole, they are designed to be held and manipulated so it seldom takes too long to figure that out by trial and error. Their designers have taught me. That said, I challenge anyone to watch someone else play the flute and, based on what you get out of that, to make the flute sound the same. It’s mighty hard. You might get the odd note and you might even figure out how to shape your mouth differently to switch octaves, but simply copying is probably not quite enough. Most instruments have quirks like that and it would not normally be very wise to simply rely on trial and error. The actual process I generally follow usually involves reading a bit about fingerings, tunings, breathing, embouchure and so on, usually with instrument in hand so that I can check what it all means, then a lot of trial and error, lots of YouTube videos and a great deal of practice until I reach a plateau, after which the cycle repeats again as I learn how to do more advanced stuff like overtones, harmonics, complex chords, intonation, picking or bowing styles, etc. I am never going to become a virtuoso this way, sure, but it is loosely structured in a way that leads to a bit more than the outcome of a chopsticks culture (this refers to Alan Kay’s delightful analogy of what happens when you simply put a computer in a classroom and hope for the best). 
Eventually I need to play with other people who play better or differently, to get a bit of coaching, to find others who will challenge me to go beyond my comfort zone, but I generally wind up being competent enough to carry a tune reasonably well before getting to that point. Part of the reason that I can do this kind of thing is that I have learned to teach myself and, of course, I am building on a foundation of existing knowledge. I can read music. I've grappled with most families of musical instrument at some point. I know the difference between 3/4 and 4/4 time, and a little bit about harmony. And I know a little about how people learn. All of this is because I have had many teachers, very few of whom were intentionally playing that role.

The unsaid

This all leads to what will, in my talk tomorrow, be the jumping-off point for the real discussion, and some questions to which I have some answers, though mostly not the best ones. What do all the things that go to make up teachers actually do? What value do professional teachers add? How can we manage our teachers? How can we replace them? As professional teachers, how can we allow our students to manage us? What aspects of educational systems teach? What alternative ways of organizing and orchestrating learning might we discover, invent or adapt? I'm particularly interested in exploring ways to overcome some of the manifestly awful teaching that our educational systems do to our students (grading, for instance) and what to do when the tunes we want to play are not in harmony with those played by the systems we are working in. But I am also interested in exploring ways that we can enable people to be better orchestrators of their own inner and outer teachers, beyond institutional contexts, beyond xMOOCs, beyond simple tutorials. I'm hoping it will be a fun discussion. How best to characterize what I'm aiming for? A bit of jazz improvisation, perhaps.

 

MOOPhD accreditation

A recent post at http://www.insidehighered.com/views/2013/06/05/essay-two-recent-discussions-massive-open-online-education reminded me that the half-formed plan that Torsten Reiners, Lincoln Wood and I dreamt up needs a bit of work.

So, to add a little kindling to get this fire burning…

Our initial ideas centred around supporting the process of doing research and writing papers for a PhD by publication. This makes sense and, we have learned, PhDs by publication are actually the norm in many countries, including Sweden, Malaysia and elsewhere, so it is, in principle, do-able and does not require us to think more than incidentally about the process of accreditation. However, there are often invisible or visible obstacles that institutions put in place to limit the flow of PhDs by publication: residency requirements, only allowing them for existing staff, high costs, and so on.

So why stop there?

Cranking the levers of this idea pump a little further, a mischievous thought occurs to me. Why not get a PhD on reputation alone? That is, after all, exactly how any doctorate is awarded, when it comes down to it: it is basically a means of using transferable reputation (think of this as more like a disease than a gift – reputations are non-rival goods), passing it on from an institution to an awardee, with a mutational process built in whereby the institution itself gets its own research reputation enhanced by a similar pass-it-on process. This system honours the institution at least as much as the awardee, so there’s a rich interchange of honour going on here. Universities are granted the right to award PhDs, typically through a government mandate, but they sustain their reputation and capacity to do so through ongoing scholarship, publication and related activities, and through the activities of those they honour. A university that awarded PhDs without itself being a significant producer of research, or that produced doctors who never achieved any further research of any note, would not get very far. So, a PhD is only a signal of the research competence in its holder because an awarding body with a high reputation believes the holder to be competent, and it sustains its own reputation through the activities of its members and alumni. That reputation occurs because of the existence of a network of peers, and the network has, till now, mostly been linked through journals, conferences and funding bodies. In other words, though institutions go to the trouble of aggregating the data, the actual vector of reputation transmission is individuals and teams that are linked via a publication process.

So why not skip the middle man? What if you could get a PhD based on the direct measures of reputation that are currently aggregated at an institutional level rather than those that have been intentionally formalized and aggregated using conventional methods?

Unpicking this a little further, the fact that someone has had papers published in journals implies that they have undergone the ordeal by fire of peer review, which should mean they are of doctoral quality. But that doesn’t mean they are any good. Journals are far from equal in their acceptance rates and the quality of their reviewers: there are those with good reputations, those with bad ones, and a lot in between. Citations by others help to assure us that a paper may have something of value in it, but citations often come as a result of criticism, and do not imply approval of the source. We need a means to gauge quality more accurately. That’s why the h-index was invented. There are lots of reasons to be critical of this and similar measures: they fail to value great contributions (Einstein would have had a very low h-index had he only published his most important papers), they embody the Matthew Effect in ways that make their real value questionable, they poorly distinguish large and small contributions to collaborative papers, and the way they rank the importance of journals and the like is positively mediaeval. It is remarkable to me to surf through Google Scholar’s rankings and find that some of the most respected people in my field have relatively low indexes while those that just plug away at good but mundane research have higher ones. Such indexes do nonetheless imply the positive judgements of many peers, with more rigour and fairness than would normally be found in a doctoral committee, and they give a usable number by which to grade contributions. So, a high h-index or i10-index (Google’s measure of the number of papers with at least 10 citations) would satisfy at least part of the need for validation of quality of research output. But, by definition, they undervalue the work of new researchers, so they would be poor discriminators if they were the only means to evaluate most doctorates.
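For readers unfamiliar with how these indexes are actually computed, here is a minimal illustrative sketch (not any official implementation): the h-index is the largest h such that the author has h papers with at least h citations each, and Google’s i10-index simply counts papers with at least 10 citations.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Google Scholar's i10-index: number of papers with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

# The 'Einstein problem' mentioned above: a handful of hugely cited
# papers still yields a low h-index...
print(h_index([15000, 9000, 4000]))       # 3
# ...while steady, mundane output beats sparse brilliance:
print(h_index([12, 11, 10, 9, 8, 7, 6]))  # 6
```

The second call shows why the measure rewards plugging away: seven modestly cited papers outscore three blockbusters, which is precisely the distortion complained about above.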
On the other hand, funding councils have already developed fairly mature processes for evaluating early-career researchers, so perhaps some use could be made of those. Indeed, the fact that someone has successfully gained funding from such a council might be used as partial evidence towards accreditation.

A PhD, even one by publication, is more than just an assortment of papers. It is supposed to show a sustained research program and an original contribution to knowledge. I hope that there are few institutions that would award a PhD to someone who had simply had a few unrelated papers published over a period of years, or to someone who had done a lot of mundane but widely cited reports with no particular research merit. So, we need a bit more than citation indexes or other evidence of being a world-class researcher to offer a credible PhD-standard alternative form of certification.

One way to do this would be to broadly mirror the PhD-by-publication process within the MOOC. We could require peer ‘marking’, by a suitable panel, of a paper linking a range of others into a coherent piece of doctoral research, perhaps defended in a public webmeeting. This would be a little like common European defence processes, in which theses are defended not just in front of professors but also any member of the public (typically colleagues, friends and families) who wants to come along. We could increase the rigour a little by requiring that those participating in such a panel have a sufficiently high h-index or i10-index of their own in a similar subject area, and/or a relevant doctorate. Eventually the system could become self-supporting, once a few graduates had emerged. In time, being part of such a panel would become a mark of prestige in itself. Perhaps, for pedagogic and systemic reasons, engagement in such a panel would be a prerequisite for making your own ‘doctoral’ defence. Your rating might carry a weighting that accorded with your own reputational index, with those starting out weighted quite low and those with doctorates, ‘real’ doctoral students and so on weighted more highly. The candidates themselves and other more experienced examiners might rate these novice examiners, so a great review from an early-career candidate might increase their own ranking. It might be possible to make use of OpenBadges for this, with badges carrying different weights according to who awarded them and for what they were awarded.
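As a thought experiment only (none of this exists, and the scores, weights and pass mark are entirely made up for illustration), the reputation-weighted aggregation sketched above might look something like this:

```python
def panel_verdict(ratings, pass_mark=7.0):
    """Aggregate panel ratings into a single weighted score.

    ratings: list of (score_out_of_10, examiner_weight) pairs, where an
    examiner's weight is hypothetically derived from their own reputation
    (e.g. higher for holders of doctorates or high citation indexes,
    lower for early-career volunteers building up their standing).
    Returns (weighted_score, passed).
    """
    total_weight = sum(weight for _, weight in ratings)
    if total_weight == 0:
        raise ValueError("panel carries no reputational weight")
    score = sum(s * w for s, w in ratings) / total_weight
    return score, score >= pass_mark

# Two senior examiners (weight 2.0) outvote one sceptical novice (weight 0.25):
score, passed = panel_verdict([(9, 2.0), (8, 2.0), (3, 0.25)])
```

The interesting design question, as noted above, is where the weights come from: if a novice examiner’s weight rises when candidates and senior examiners rate their reviews well, the panel system bootstraps its own reputational economy.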

Apart from issues of motivation, the big problem with the peer-based approach is that it could be seen as one of the blind leading the blind, as well as potentially raising ethical issues in terms of bias and lack of accountability. A ‘real’ PhD committee/panel/etc is made up of carefully chosen gurus with an established reputation or, at least, it should be. In North America these are normally the people that supervise the student, which is dodgy, but which normally works OK due to accountability and professional ethics. Elsewhere examiners are external and deliberately unconnected with the candidate, or consist of a mix of supervisors and externals. Whatever the details, the main point here is that the examiners are fully accredited experts, chosen and vetted by the institutional processes that make universities reliable judges in the first place. So, to make it more accountable, more use needs to be made of that reputational network that sustains traditional institutions, at least at the start. To make this work, we would need to get a lot of existing academics with the relevant skills on board. Once it had been rolling for a few years, it ought to become self-sustaining.

This is just the germ of an idea, but there are lots of ways we could build a very cheap system that would have at least as much validity as the accreditation procedures used by most universities. If I were an employer, I’d be a lot more impressed by someone with such a qualification than by someone with a PhD from most universities. But I’m just playing with ideas here. My intent is not to create an alternative to the educational system, though that would be very interesting and I don’t object to the idea at all, but to highlight the often weird assumptions on which our educational systems are based and ask some hard questions about them. Why and on what grounds do we set ourselves up as arbiters of competence? What value do we actually add to the process? How, given the propensities of new technologies and techniques, could we do it better?

Our educational systems are not broken at all: they are actually designed not to work. Well, ‘design’ is too strong a word as it suggests a central decision-making process has led to them, whereas they are mainly the result of many interconnected decisions (most of which made sense at the time but, in aggregate, result in strange outcomes) that stretch back to mediaeval times. Things like MOOCs (and related learning tools like Wikipedia, the Khan Academy, StackOverflow, etc) provide a good opportunity to think more clearly and concretely about how we can do it better and why we do it the way we do in the first place.

Unintelligent design and the modern MOOC

Everyone is talking about MOOCs.

Every institution of higher learning I visit or talk with seems intent on joining the MOOC scrum or, if not, is coming up with arguments why it shouldn’t. There is also a wealth of poorly considered, badly researched opinion pieces, many of them published by otherwise fairly reputable journals and news sources. I’ve been doing my bit to add poorly researched opinion too, talking in various venues about a few ideas and opinions that are not sufficiently rigorously explored to make into a decent paper. This post is still not worthy of a paper, but I think the main idea in it is worth sharing anyway. To save you the trouble of reading the whole thing, I’m going to be making the point that MOOCs disrupt because they quietly remove two of the almost-never-questioned but most-totally-nonsensical foundations on which most traditional university teaching is based – integral accreditation and fixed course lengths – and their poor completion rates therefore encourage, or force, us to ask ourselves why we do such things. My hope is that the result of such reflection will be to bring about change. To situate my opinions relative to those of others, I will start by offering a slight caricature of the three main stances that people seem to be taking on MOOCs.

Opinion 1 – it’s all rubbish and online learning is pants

The cantankerati are, of course, telling us that there is nothing new here, or that online learning isn’t as good as face-to-face, or that it is all hype, or that the learning outcomes are not as good as those at (insert preferred institution, preferably one’s alma mater, here), etc. This is a fad, they tell us. They look at things like drop-out rates, or Udacity partnering with Georgia Tech, or Coursera moving into competition with Blackboard, or the fact that millennial college students prefer traditional to online classes (err – seriously? that’s like asking iPhone users whether they prefer iPhones to Android phones) and nod their heads sagely, smugly and in an ‘I told you so’ fashion. No doubt, when the bubble bursts (as it will) they will be the first to gloat. But they are wrong about the failings of MOOCs, on most significant counts.

Opinion 2 – it’s a step in the right direction, but (insert prejudice here)

Others think that there is something worth preserving here and are trying to invent new variants – usually xOOCs of some kind, or MOOxs or, in rare cases, xOOxs – liking some aspects of the MOOC idea, such as openness or size, but not others. The acolytes of online learning (AOLs for short, oddly enough) are getting all excited about the fact that people are at last paying attention to what they have been saying for years, though most are tempering their enthusiasm with observations about the appalling pedagogies, the creation of a two-tier system of higher education, problems with accrediting MOOC learning, and high ‘dropout’ rates. They are wondering why these MOOCish upstarts haven’t read their own august works on the subject, which would obviously steer them right. They will, when pressed, grudgingly admit that these rank enthusiastic amateurs are (dammit) quite signally succeeding in ways they have only dreamed of, but they still know better. There are many of these, some of which are actually very thoughtful and penetrating and by no means unsubtle in their analysis: John Daniel’s well-informed, sagacious overview; Paul Stacey’s intelligent mourning of the overshadowing of a good idea; or Carol Edwards’s slightly jaundiced but interesting and revealing first-person report for BCIT, for instance. There are far more unsubtle and far less well-informed rants that I won’t bother linking here that complain about the pedagogies, or tell us that there is nothing new at all in this, or that think they see an alternative future, etc. Oh, alright – here’s one that I find particularly silly and here are my comments on it.

Opinion 3 – the sky is falling! The sky is falling!

There is a third group that is fairly sure that MOOCs are very important and that they are causing or, at least, catalyzing a seismic shift in education. The popular press clearly demonstrates that there’s a revolution happening, for better or worse, and most people who hold this position want to be on that bandwagon, wherever it may be going. If not, they fear they will be left in the dust. There are some notable holders of this perspective who justify and examine their beliefs in intelligent ways, such as the ever-brilliant Donald Clark, for example, who has recently written a great series of posts that are both critical and rabble-rousing.

And many in between…

Between and spanning these caricatures are some really interesting and perceptive commentaries, and only a few have as clear-cut an opinion as I portray here. Aaron Bady’s post casting a critical eye on the hype, for example, picks apart the sky falling very carefully, and situates itself a little in the ‘right direction’ camp without being too much on the ‘but…’ side of things. The recent Edinburgh report on their pilot MOOCs is a model of careful research and openness to critical and creative thinking.   George Siemens’s excellent analysis of x-vs-c MOOCs is another great piece that avoids much bias one way or the other while identifying some of the key issues for the future.

Where I sit

You could call me a fan. My PhD (completed well over 10 years ago) was largely about how large online crowds can learn together. I’ve signed up for (but not completed) quite a few MOOCs since 2008, and I’ve been a more active participant at times, playing a teaching role in a couple and helping to lead one in early 2011. I ran my first education-oriented web server offering what we would now call open educational resources in 1993. I read an average of two or three articles on MOOCs every day, maybe more. I’ve joined up with the newly formed WideWorldEd project and have been engaged in discussions and planning about MOOCs at three different institutions.

I am definitely not one of the cantankerati, though I am highly sceptical of any blanket claim that a particular flavour of teaching leads to better or worse learning than any other, be it online or not. It ain’t what you do, it’s the way that you do it.

I do not believe that the pedagogies of most MOOCs are particularly bad or retrograde. Talking heads, objective tests and other favourite tools of early xMOOC providers are not my cup of tea, and the chaos of cMOOCs (that I like a lot more) seems to favour only a few neterate winners, but most that I have seen are actually at least as good as their paid-for counterparts. There are quite a lot that do not fall neatly into either of these main camps too – e.g. http://ds106.us – and both camps share a lot in common with each other that neither camp seems particularly happy to acknowledge: connectivist networks thread through and around xMOOCs and disrupt their neat outlines, while cMOOCs often employ what look and smell a lot like instructivist lectures as significant parts of the process. But, whatever the similarities, what and how people teach is seldom what and how people actually learn so it is not that important. Quality is not a direct correlate of the pedagogies and other technologies used. In fact, it is interesting to note that a recent article on MOOC junkies highlighted the greater significance of passion in the professor, something I and many others have been saying for quite a while. It ain’t what you do, it’s the way that you do it.

For me, the sky is not falling yet though it certainly has a few more interesting colours than it had a year or two ago and there are some fascinating systemic effects that are mostly, but not all, positive. But this is not the beginning of the end of higher education as we know it. In some ways, it could be the beginning of  something much more interesting.

What really appeals to me most about MOOCs is their almost universally low completion rates. Whatever this means for MOOCs themselves, and however much it upsets their providers (not their learners), in my opinion this is by far their most positive systemic feature. While it ain’t what you do, it’s the way that you do it, there is one important proviso that needs to be added to that: there are some things you can do that will most probably, and in some cases definitely, fail to get results. And this is really what this post is about.

So, what about those completion rates?

One thing that many of the cantankerati, the fearfully curious and the AOLs amicably agree on is that the fact that most people drop out of most MOOCs shows that there is something wrong with the idea, or how it has been implemented, or both. Some MOOCs struggle to keep 2% of their students, while the best (on horse feeding, as it happens) have managed a little over 40%. The vast majority (so far) have succeeded in keeping less than 10% of their students to the bitter end. This is particularly odd given that, on most MOOCs, the majority of course-takers have at least one degree, many are educators, and quite a few have post-graduate qualifications. These are, for the most part, mature learners who know how to learn and probably think about how they do it.

For some, this is proof that online learning doesn’t work (self-evidently wrong, I’m glad to say, or I and hundreds of thousands of others would be out of a job, Wikipedia would vanish and Google Search would be largely abandoned). For others, it is proof that the pedagogies don’t work (not entirely right either, or no one would take them). The more informed, also known as those who think about it for more than two seconds, realize pretty quickly that MOOCs do not require any strong interest, let alone any significant commitment to sign up to, nor do they demand any prerequisites. So, of course, most people ‘drop out’ within the first couple of weeks, if indeed they pay any attention at all beyond spending less than a minute signing up and vaguely thinking that it might be interesting to take part. They may have insufficient interest, they may find it too hard, too easy, too boring, or too engrossing and demanding of their time. Maybe they don’t like the professor. Maybe they have better things to do. Nor is it any surprise that people whose only commitment is time might drop out after the first couple of weeks – many get what they came for and stop, or they lose interest, or get distracted, or break their computers, or simply run out of time to keep working on it. There has been a little good research and a lot of useful speculation on this, for instance at http://www.katyjordan.com/MOOCproject.html and http://blogs.kqed.org/mindshift/2013/04/why-do-students-enroll-in-but-dont-complete-mooc-courses/ and http://www.openculture.com/2013/04/10_reasons_you_didnt_complete_a_mooc.html and http://mfeldstein.com/emerging_student_patterns_in_moocs_graphical_view/ and http://donaldclarkplanb.blogspot.ca/2013/01/moocs-dropout-category-mistake-look-at.html

But there is something odder going on here that seems to be mostly slipping under the radar, apart from the odd mention here and there by people like Alan Levine and a few others.  I’ve long been bothered by the mysterious and improbable fact that, in higher education, all learning is neatly divisible into 13 (or 15, or 10, or something in that region) week chunks. This normally equates to an average of around 100 hours of study time, give or take a bit. Whatever the particular length chosen, they are almost always unaccountably multiples of chunks of the same size at any given institution, and that size is broadly comparable to other courses/modules/papers/units/etc in other institutions. It’s enough to make you wonder whether there might be a god as it suggests intelligent design may be at work here.

Actually, it’s the result of unintelligent design. This is an evolutionary process in which path dependencies pile up and push their way into adjacent possibles.

So, why do we have courses (or modules/papers/units/etc depending on your geographical region)?

Well, in the first place, it is true that some things take longer to learn than others. Not everything can be mastered by asking a question or looking it up on Wikipedia. That’s completely fair and reasonable. It doesn’t, however, explain why it takes the same amount of time (or multiples of it) for everyone, regardless of skill, experience or engagement, to master everything – Modern European Philosophy, Chemistry 101, Java Data Structures, Literary Culture & the Enlightenment, Icelandic Politics: all fit the same evenly sized periods, or multiples of them. For an explanation of that, we have to turn to a combination of harvest schedules, Christian holidays and the complexities of managing scarce physical resources that are bound by physics to a single and somewhat constrained teaching space.

The word ‘lecturer’ derives from the fact that lecturers used to read from the very valuable and scarce single copies of books held by institutions. Lecture theatres and classrooms were thus the most efficient way to get the content of books heard by the largest possible number of people. If you want to get a lot of people to listen at once then it helps if they are actually there so, if they are taking a religious holiday or helping with the harvest (this last point is a little contentious as it doesn’t fully explain a long break from July to October), there is no point in standing up and talking to an empty lecture hall. So, putting aside Easter’s irritating habit of moving around from year to year that continues to mess up university teaching schedules, this divides things up quite neatly into roughly 13 week chunks separating harvest, Christmas, and Easter breaks. The period may vary a little, but the principle is the same.

This pattern has become quite deeply set into how learning happens at most universities, even though the original reasons it occurred might have faded into insignificance had they not become firmly embedded through momentum and the power of path dependencies. Assessment became intimately linked to the schedule, with ‘mid-terms’ and ‘finals’, and then came to act as a major driver in its own right. Teacher pay and time were allocated in easily managed chunks, as were resources. Enrolments, registrations, convocations and the familiar rhythms of the university calendar helped to consolidate the pattern, largely driven by a need for efficiency and bureaucratic convenience. It is really hard to allocate teachers and students to rooms. Up to this point, there was no particular reason to divide the learning experience into modularized chunks, and many universities did (and some still do) simply have programs (or programmes or, to confuse matters, courses, lasting 3-5 years) with perhaps a few streams but without distinct modularized elements. To cap it off and set it in stone, three forces coincided. One was a laudable desire to allow students the flexibility to take some control over what they learned. Another was the need to simplify the administration of programs. The last was the need to assert equivalence between what is taught at institutions, whether for certification purposes or for credit transfer. This last force, in particular, has meant that this way of dividing learning into modular chunks of a similar length has become a worldwide phenomenon, even in countries for which Easter and Christmas have no meaning or value.

All of this happened because there had to be a means of managing scarce resources shared among many co-present people as efficiently as possible but, for centuries, there has been no good reason for picking this particular term-length apart from the force of technological momentum.  There have been innovations, here and there. Athabasca University, for instance, gives undergraduates 6 months (extendible at a price) in which to complete work in any way and timeframe that will fit their needs. Similarly, the University of Brighton runs ‘short fat’ masters modules that last for half a week, combined with a period of self-study before and after. But, in order to maintain accreditation parity, the amount of work expected of students on such courses broadly equates to what, in conventional classes, would take – yes – 13-15 weeks. Technically, thanks to a bit of reverse engineering, this translates into roughly 100 hours of study in the UK, a little more or less elsewhere, particularly where people take the insanely bad North American approach of counting teaching hours rather than study hours (what madness gripped people that made them think that was a good idea?).  Whatever the rationale, this has nothing to do with learning, nothing to do with the nature of a topic or subject area, nothing to do with the best way to teach. It’s just the way it turned out, and certification requirements reinforce that anti-educational trend.

So what?

Courses are not neutral technologies. One of the least loveable things about them is that their content, form and process are, at least ostensibly, controlled by teachers from start to finish. Courses are a power trip for educators that, in institutional incarnations, often require some quite unpleasant measures to maintain control, typically based on long-discredited models of human psychology that rely heavily on rewards and punishments – grades, attendance requirements, behavioural constraints in classrooms, etc. That is just plain stupid if you actually want people to learn and believe that it is your job to help that process. There can be few methods apart from deliberate torture and punishment that more reliably take motivated, enthusiastic learners and sap the desire to learn from them. We do this because courses are a certain length and we think that students have to engage in the whole thing or not at all.

Students, meanwhile, have little choice but to accept this or to drop out of the system, but that’s tricky because those uniform-size credentials have become the currency for gaining career advancement and getting a job in the first place.

Teachers need to work on maintaining that control because there are very few topics that can, in and of themselves, sustain a large number of individuals’ interest for 13 solid weeks, and those that do are highly unlikely to fit naturally into that precise timeframe. Sure, some students may passionately love the whole thing and may have learned to gain some immunity from the demotivating madness of it all, or the teacher may be one of those rare inspiring people who enthuses everyone she gets to teach. But, for most students, it will be, at best, a mixed bag. Even for those who enjoy much of it, some will be irrelevant, some too easy, some too complicated, some simply dull. But they have to do it because that is what the teacher demands, and teachers have to fit their courses to this absurd length limit because that is what the institutions demand, and institutions do it because that’s how it has always been done and everyone else does it.

This is not logical.

So much of what makes a great teacher is therefore the ability to overcome insanely stacked odds and work the system so that at least a fair number of people get something good out of it. Teachers have to find ways to enthuse and motivate, to design assessments that are constructively aligned, to perform magic tricks that limit the damage of grading, to build flexible activities that provide learners with a bit of self-determination and control. Sadly, many do not even do that, relying on this juggernaut and the whole unwieldy process to crush students into submission (of assignments). It really doesn’t have to work like that.

This systemic failure is tragic, but understandable and forgivable. There is massive momentum here and opposition to change is designed into the system. It would take a brave teacher to explain to administrators and examination boards that she has decided that the topic she is teaching actually only needs 4 weeks to teach. Or 33 weeks. Or whatever. And, no, it will not have any parity with other courses on the same subject: OK? I would not relish that fight. It is considerably more tragic and less easy to forgive when, without any of those constraints – no formal accreditation, no institutional timetables, no harvest, no regulations, no scarcity of resources  – a few MOOC purveyors do the same thing. What is going on in their heads? My sense is that it is the Meeowmix song…

Meeow-Mix song

Thankfully, an increasing number are not doing that at all: a glance through the range of MOOCs currently on offer via the (excellent) MOOC aggregator at http://www.class-central.com/ shows a range of lengths between 2 and 15 weeks as well as a goodly range of self-paced courses of somewhat indeterminate length. After early attempts mostly replicated university courses, the norm now appears to be around 6 weeks, and falling fast. The rough graphs below (that I created based on class-central’s data) of those starting soon and those that have already finished illustrate this trend quite nicely. Note in particular the relative drop in 10-week and higher courses and the rise in those of 4, 6 and 8 weeks. While it is far from all being down to better teaching – some of the rise in shorter courses is notably due to a trend towards samplers that are intended to draw people in to fee-paying courses – there is a pattern here. And, to counterbalance such forces, it should be remembered that a fair number of the longer courses have ambitions to reintegrate their students within their paid-for broken systems, so they are sometimes timetabled with learning as a secondary consideration and so retain their infeasible length.

MOOC lengths till now…

MOOC lengths (past)

 

 

Mooc lengths for courses about to start…

MOOC lengths (future)

 

Getting away from courses

Though the interest in MOOCs is fuelled and sustained by the fact that they are free (though sadly, increasingly not as open as they were in the halcyon days of cMOOCs), popular and online, the really interesting thing about them is the attention they are drawing to what is wrong with the notion, the form and, above all, the length of the course. This little thing is the real revolution. It radically changes the power dynamics. If people begin to disaggregate their courses, making them shorter and less teacher-controlled, they will put learners ever more in control of their own learning, giving them choices and the power to make those choices. Better still, it means that teachers are starting to create courses without unnecessary time constraints, courses that are the size they need to be for the subject being taught. Pedagogy, though still not coming first, is playing a more significant role. But this is just a step in the right direction.

The power of small things

People who question completion rates for MOOCs almost never ask those same questions about Q&A sites, Wikipedia, Khan Academy, Fixya or How-Stuff-Works tutorials, OERs and Google Search. Indeed, the notion of ‘completion’ probably means nothing significant for such just-in-time tools: they are useful or they are not, they work or they don’t, people use them or they don’t. You might waste a few minutes here and there on things that are unhelpful, and those minutes add up but, on the whole, just-in-time learning does what it says on the box. And people use these tools because they need to learn. If someone needs to or wants to learn, you have to try really hard to stop them. But just-in-time is not always the way to go.

Clubs, not courses

I am not a great programmer but it is something I have been doing from time to time for about 30 years. When I’m stuck, I increasingly turn to StackOverflow, a brilliant set of sites based around a collectivized form of discussion forum – a bit more sophisticated than Reddit, a bit less intimidating than SlashDot (which remains perhaps the greatest of all learning tools for anyone with geek tendencies, but that needs a fair bit of skill and effort to get the most out of). StackOverflow doesn’t have courses, but it does have answers, it does have discussions, and it does have some very powerful tools for finding answers that are reliable, useful and appropriate to any particular need. The need can range from the very specific and esoteric (‘why am I getting this error?’) to matters of principle (‘what methodology is best for this problem?’) to general learning (‘what’s the best way to get started in Ruby-on-Rails?’) and everything in between. It’s like having your own immensely wise team of personal tutors, without a beginning date, an end date, or a fixed schedule of activities. This is not a course – it’s more like a Massive Open Online Club, with no restrictions to membership, no commitments, no threshold to joining. Conveniently, this has the same acronym as a MOOC. In fact, just as MOOCs subtly transform the social contract that is involved with traditional courses, so these ‘clubs’ are not exactly like their hierarchical, closed, membership-based forebears. They are what Terry Anderson and I have described as sets: not exactly a network of people you know, certainly not a hierarchically organized system like a group, just a bunch of people with a shared interest, some of whom know more than others about some things.

But what about accreditation?

Why should accreditation be something that happens only in and as a result of a course? It is bizarre and open to abuse that the people who teach a course should also be its accreditors. It is strange in the extreme that they should be the ones to say that students have ‘failed’ when it is obvious that this failure is not just on the part of the students but also of their teachers, which makes those teachers very poor and biased judges of success. It might be just about acceptable if those teachers really were the only ones who know the topic of the course, but that is rare. In Eire, students have a right to write and defend a PhD (by definition a unique bit of learning) in Gaelic. Despite the fact that the number of Gaelic speakers who are also experts in many PhD topics is not likely to be huge (unless the topic is Irish history or some such), they still manage to find expert examiners for them. It can be done.

At Athabasca University we have a challenge-for-credit option for many of our courses that can be used to demonstrate competence for certification purposes. Alternatively, if the match in knowledge is not precisely tuned to the credentials we award, we and many others have PLAR or APEL processes that typically use some form of portfolio to demonstrate competence in an area. And then there are upcoming and increasingly significant trends like the move to Open Badges, closed LinkedIn endorsements, gamified learning, or good old-fashioned h-index scores, which sometimes tell us more than many of our traditional accreditation methods, at least as reliably and in some ways in greater detail.

There is seldom a good reason to closely link accreditation and learning, and every reason not to. Giving rewards or punishments for learning is the academic equivalent of celery: digesting it consumes more calories than it actually provides. It distorts motivation so much that it demotivates.

Summing up

I have no doubt that some people might bemoan the loss of attention implied by just-in-time learning or this weakly structured club-oriented perspective on learning which has no distinct beginning and no specific end. It is true that courses do sometimes include things like ‘problem solving’, ‘argument’, ‘enquiry’, ‘research’ and ‘creativity’ among their intended outcomes and, assuming they provide opportunities to exercise and develop such skills, that’s a lot better than not having them. And some (indeed, many) courses are a genuinely good idea, because it really does take x amount of time to learn some things (where x is a large number) and learning works much more smoothly when you learn with other people and have a specific goal in mind. But many are not such a good idea, and most get the value of x completely wrong. No more should we assume that a 10-week (or 100-hour) course is the right amount of time needed to learn something than we should assume that the answer to teaching is a one-hour lecture (even though it sometimes really is part of a good answer).

There are those who cynically believe that the sole purpose of going to a university is to build a network of contacts and gain credentials that will be valuable in a future career, so you can do what you like to students while they are in college and it won’t matter a bit. In fact, there’s a fair bit of research that shows that it typically doesn’t, which is yet another reason to express concern that we are not doing it right. If that were really what universities were about then I would stop teaching now because it would be boring and pointless. I think that, if we claim that what we are doing is teaching then we should at least try to do so. But accredited, fixed-length courses get in the way of doing that.

It is true that much of the really interesting learning that goes on in courses is not really about the topic, but the process of learning itself – that is why there is a vague and hard-to-pin-down notion of graduateness that makes a fair bit of sense even if it cannot be well expressed or measured, a problem that Dave Cormier and others have grappled with in interesting ways. I’m not at all against lengthy learning paths if that is what is needed to learn, nor do I object at all to letting someone guide you along that path if that is what will get you where you want to be, and I am very much in favour of learning with other people. My problem is that the fixed-size course with fixed learning outcomes and tightly integrated accreditation is not the only way, is seldom the best way, and is often the worst way to do it. The biggest thing that MOOCs are doing, and the most disruptive, is visibly disaggregating the learning process from the unholy alliance of mediaeval bureaucracy and Victorian accreditation methods. As long as MOOCs retain the form and structure of courses that are tied to these unholies, they will (from their purveyors’ rather than their students’ perspectives) mostly fail, and that is a good thing. Even cMOOCs, which deliberately eschew learning outcomes and fixed accreditation, still often fall into a trap of fixed lengths and processes. If we can learn something from that then they have served a useful purpose.

So there you have it – another long, opinionated piece about MOOCs with little empirical data and a lot of hot air. But I think the central point, that fixed course lengths and integrated accreditation lie at the heart of much that is wrong with traditional university education and that MOOCs bring that absurdity into sharp relief, is worth making. I hope you agree.

Afterword

You may have seen my recent post on MOOPhDs and might be wondering whether I am contradicting myself here. Well, maybe a little, and there was a hint of satirical intent when I first suggested the idea, which attempted to exaggerate the concept of the MOOC to show the absurdity of courses. But the MOOPhD idea grew on me and it actually makes a little sense – it does not demand fixed-length courses, it completely separates the accreditation from the process, and it is far more like an open club or support network than an open course. Indeed, the way PhDs, at least those that follow a vaguely European model, tend to be taught provides an expensive-to-implement but workable model of learning that entirely (or, following a sad trend towards greater bureaucratization in some countries, to a moderate extent) avoids courses. So, universities do know how to break the chains. Most just haven’t yet figured out how to do that for their mass-produced courses.

MOOCs are so unambitious: introducing the MOOPhD

Massive Open Online PhDs

During my recent visit to Curtin University, Torsten Reiners, Lincoln Wood and I started brainstorming what we think might be an interesting idea. In brief, it is to build and design what should eventually become a massive, open, online PhD program. Well, nearly. This is a work in progress, but we thought it might be worth sharing the idea to help spark other ideas, get feedback and maybe gather a few people around us who might be interested in it.
The starting point for this was thinking about ways of arranging crowd funding for PhD students, which evolved into thinking about other crowd-based or crowd-funded research support tools and systems. For example, we looked at possible ways not only to crowd-fund research projects but also to provide structures and tools to assist the process: forming and setting up project teams, connecting with others, providing project management support, assisting with proposal writing, presenting and sharing results, helping with the process of writing reports and papers for publication, and so on. Before long, what we were designing began to look a little like a research program. And hence, the MOOPhD (or MOOD – massive open online doctorate).
A MOOPhD is a somewhat different kind of animal from a MOOC. It is much longer and much bigger, for a start – more of a program than a course. For many students it might, amongst other things, encapsulate a variety of MOOCs that would help them to gain knowledge of the research process, including a range of research methods courses and perhaps some more specific subject-related courses.  This is quite apart from the central process of supporting the conduct of original research that would form the ‘course’ itself.
A MOOPhD will also attract a very different kind of learner from those found in most MOOCs, notwithstanding the fact that, so far, a lot of MOOC-takers already have at least a first degree, not uncommonly in the same subject area as the MOOC.
Perhaps the biggest difference between a MOOPhD and a MOOC, at least of the xMOOC variety, is the inevitable lack of certainty about the path to the destination. MOOCs usually have a fairly fixed and clear trajectory, as well as moderately fixed content and coverage.  Even cMOOCs that largely lack specified resources, outcomes and assessments, have topics and timetables mapped out in advance. While the intended outcomes of a PhD are typically pretty clear (the ability to perform original and rigorous research, to write academically sound papers and reports, to design a methodology, review literature, etc), and there are commonalities in the process and landmarks along the way, the paths to reaching those goals are anything but determined. A PhD, to a far greater degree than most courses and lower level programs, specifies a method and processes, but not the content or pathways that will be taken along the way. This raises some very interesting and challenging questions about what we mean by ‘course’ and the wisdom and validity of MOOCs in general, but discussion of that can wait for another post. Suffice to say, it is a bit different from what we have seen so far.
There are many existing sites and systems that provide at least some of the tools and methods needed. I have had peripheral involvement with a support network for students investigating learning analytics, for example, and have helped to set up a site to provide resources for graduate students and their supervisors. There are commercial sites like academia.edu and ResearchGate that connect academics, including graduate students. There are some existing MOOCs on research methods and crowd-funding sites to help with fees and kick-starting projects such as http://www.rockethub.com/ or www.razoo.com.  And, of course, there is the complete system of journal and conference reviewing that provides invaluable feedback for nascent researchers. Like all technologies, what we are thinking about involves very little if anything that is radically new, but is mostly an assembly of existing pieces. 
It is likely that, for many, a PhD or other doctorate would not be the final outcome. People would pick and choose the parts that are of value, helping them to set up projects, write papers or form networks. Others might treat it as a useful resource for a more traditional doctoral learning journey.
 

So what might a MOOPhD look like? 

A MOOPhD would, of necessity, be highly modular, offering student-controlled support for all parts of the research process: teaching about research methods, initial proposals, project management, community support, paper writing and so on. Students would choose the parts that would be of value to them at different times. Different students would have different needs and interests, and would need different support at different points along the journey. Some might just need a bit of help with writing papers; for others, the need might be for gaining specific skills such as statistical analysis or learning how to do reviews. More broadly, the role of a supervisory team in modelling practice and attitudes would be embedded throughout.
Importantly, apart from badges and certificates of ‘attendance’, a MOOPhD would not be concerned with accreditation. We would normally expect existing processes for PhDs by publication that are available at many institutions to provide the summary assessment, so the program itself would simply be preparation for that. As a result of this process, students would accrue a body of research publications that could be used as evidence of a sustained research journey, and a set of skills that would prepare them for viva voce examinations and other more formal assessment methods. This would be good for universities, as they would be able to award more PhDs without the immense resources that are normally needed, and good for students, who would need to invest less money (and maybe be surrounded by a bigger learning community).
 

Some features and tools

A MOOPhD might contain (amongst other things):
  • A community of other research students, with opportunities to build and sustain networks of both peers (other students) and established researchers
  • MOOCs to help cover research methods, subject specialisms, etc
  • A great deal of scaffolding: resources to help explain the process, information about everything from ethics to citation, means and criteria to self-assess such as wizards, forms and questionnaires, guidelines for reviewing papers, etc
  • Mentors (not exactly supervisors – the numbers involved would make individual supervision too tricky), including both experienced academics and others further on in the PhD process. Mentors might provide input to a group/action learning set of students rather than to individuals, and thus allow students to observe behaviours that the academics model.
  • Exemplars – e.g. marked-up reviews of papers. These are vital as one of the ways of allowing established academics to provide role models and show what it means to be an academic.
  • Plentiful resources and links relevant to the field (crowd-generated)
  • A filtering and search system to help identify people and things 
  • A means to provide peer review to others (akin to an online journal submission system)
  • A means to have one’s own ideas and papers reviewed by peers
  • Tutorial support – most likely a variant on action learning sets to support the process. This would cover the whole process from brainstorming, to literature review, to methodology design, to conduct and analysis of research, to evaluation etc. Ideally, each set would be facilitated by a professional academic or at least an experienced peer.
  • A professionally peer reviewed journal system, with experienced academic editorial committees and reviewers (who would only see papers already ranked highly in peer review), leading to publication
  • Support for gaining funding – including crowd funding – for the research, particularly with regard to projects needing resources not already available
  • Support for finding collaborators
  • Support for managing the process – both of the whole venture as well as specific projects
  • Non-academic support – counselling and advice
  • Tools and resources to find accreditors – this is not about providing qualifications but preparing students so that they can easily get them

Some issues

There are some complex and significant problems to solve before this becomes a reality, including:

Accreditation

The main idea behind this is to prepare students for a PhD by publication, not to award doctorates. It is essentially about managing a research learning process and helping students to publish results. However, sustaining motivation over a long period without the promise of accreditation might be an issue.

Access to resources

One of the biggest benefits of an institution for a PhD student is access to closed journals and libraries. While it is possible to pay for such access separately from a course, and a system would certainly contain links to ways of discovering open articles, this could be an obstacle. Of course, while we would not and could not condone the use of the community to share closed articles, it is hard to see how we could police such sharing. 

Ethics

Without an institutional backdrop, there would be no easy way to ensure ethical research. Resources could be provided, action learning sets could be used to discuss such concerns, and counselling might be available (perhaps at a price) to help ensure that a process would be followed that wouldn’t pose an obstacle to gaining accreditation, but it would be difficult to ensure an ethically sound process was followed. This is an area where different countries, regions and universities follow different procedures anyway, and there is only broad uniformity around the world, so some flexibility would be needed.

Governance

Beyond issues of ethics, there is a need to find solutions to disputes, grievances, allegations of cheating etc. This might be highly distributed and enabled through crowd-based processes. A similar issue relates to ‘approvals’ of research projects: there would probably need to be something akin to the typical review processes that determine whether a student’s progress and/or proposed path are sufficient. It is likely that action learning sets could play a big role in assisting this process.

Subject specificity

The skills (and resources) needed for different types of PhD can vary enormously – the skills and resources needed by a mathematician are worlds away from those needed by someone engaged in literary criticism, which are worlds away from those needed by a physicist, astronomer or biologist. It would probably be too big a task to cater for all, and some might be all but impossible (e.g. if they require access to large hadron colliders or telescopes, or involve dangerous, large-scale or simply complex experiments). To some extent this is not the huge problem it first appears to be. It is likely that most of those interested in pursuing this process would already be either working in a relevant field (and thus have resources to call upon) or already enrolled in an academic program, which would reduce some of the problem. Even so, the chances are that the most likely areas where this process could successfully be applied would be those requiring few resources beyond a good brain, commitment and a computer. There are opportunities for multiple instances of this process across multiple subject areas and disciplines. Given our interests and constraints, we would probably aim in the first instance for people interested in education, technology, business, or some combination of these. However, there is scope for a much broader diversity of systems, probably linked in some ways to gain the benefits of common shared resources and a larger community.

Cold start

As the point of this is to leverage the crowd, it will be of little value if there is not already a crowd involved. The availability of high-quality resources, links and MOOCs might be sufficient to provide an initial boost to draw people to the system, as would a team of interesting mentors and participants, but it would still take a while to pick up steam.

Trust

In some fields, students are already reluctant to share information about their research, so this might be especially tricky in an open PhD process. Building sufficient trust in action learning sets and across the broader community may be problematic. Already, the openness needed for many MOOCs poses a challenge for some, but this process would require more disclosure on an ongoing basis than normal. This might be the price to be paid for an otherwise free program. However, the anticipated high drop-out rate would make it difficult to sustain tight-knit research groups/action learning sets over a prolonged period, and we would probably need to think more about cooperative than collaborative processes, so this may be difficult to manage.

Start-up costs and maintenance

This will not be a cheap system to build, though development might be staggered. Resources would be needed for building and maintaining the server(s), creating content, managing the editing process for the journal, and so on. Potential funding models include start-up grants, company sponsorship (the value to organizations of a process like this could be immense), crowd-funding, subscription, advertising/marketing, etc. Selling lists of participants bothers me, ethically, but a voluntary entry onto a register that might be passed on to interested companies for a fee might have high value. While we might not award doctorates, those who could stay the course would clearly be very desirable potential employees or research team members.

Encouraging academics to participate

Altruism and social capital can sustain a relatively brief open course, but this kind of process would (unless a different approach can be discovered) require long-term commitment and engagement by professional academics. There may be ways to provide value to academics beyond the pleasure of contributing and learning from students. For instance, students may be expected or required to cite academics as co-authors where those academics have had some input into the process, whether in feedback along the way or in reviewing and completing papers the students have written, or academics may be granted access to data collected by students. This would provide some incentive for academics to help ensure the quality of the research, and would help students by letting them see an experienced academic’s thinking processes in action.

Summary

This is a work in progress and there are some big obstacles in the way of making it a reality. We would welcome any ideas, suggestions or expressions of interest!