If you receive an unexpected email from what you might, at first glance, assume to be me, especially if it is in atrocious English, don’t reply to it until you have looked very closely at the sender’s email address and have thought very carefully about whether I would (in a million years) ask you for whatever help it wants from you.
Being on sabbatical, my AU inbox has been delightfully uncrowded of late, so I rarely look at it until I’ve got a decent amount of work done most days, and occasionally skip checking it altogether, but a Skype alert from a colleague made me visit it in a hurry a couple of days back. I found a deluge of messages from many of my colleagues in SCIS, mostly telling me my identity had been stolen (it hadn’t), though a few asked if I really needed money, or wanted my groceries to be picked up. This would be surprising, given that I live about 1,000 km away from most of them. All had received messages in poorly written English purporting to be from me, and at least a couple of them had replied. One – whose cell number was included in his sig – got a phishing text almost immediately, again claiming to be from me: this was a highly directed and malicious attack.
The three simple tricks that made it somewhat believable were:
the fraudsters had created a (real) Gmail account using the username jondathabascauca. This is particularly sneaky inasmuch as Gmail allows you to insert arbitrary dots into the name part of your email address, so they turned this into email@example.com, which was sufficiently similar to the real thing to fool the unwary (see the sketch below this list).
the crooks simply copied and pasted the first part of my official AU page as a sig, which looks pretty odd when you examine it closely because it includes a plain-text version of the links to different sections of the actual page (they were not very careful, and probably didn’t speak English well enough to notice), but which again looks enough like a real sig to fool someone glancing at it quickly in the midst of a busy morning.
they (apparently) only sent the phishing emails to other people listed on the same departmental bio pages, rightly assuming that all recipients would know me and so would be more likely to respond. The fact that the page still (inaccurately) lists me as school Chair probably means I was deliberately singled out.
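To see why the dots matter, here is a minimal sketch (in Python, with a hypothetical function name and illustrative addresses built from the username above) of how Gmail’s dot-insensitivity lets visually distinct addresses collapse into the same account, and how normalizing before comparison exposes a lookalike:

```python
# Gmail ignores dots in the part before the @, so dotted variants of
# "jondathabascauca" all deliver to the same inbox. Stripping the dots
# before comparing addresses makes the lookalike obvious.
# (Function name and sample addresses are illustrative only.)

def normalize_gmail(address: str) -> str:
    """Strip dots from the local part of a Gmail address."""
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return f"{local}@{domain}"

# Two visually different addresses turn out to be the same account:
print(normalize_gmail("jond.athabascau.ca@gmail.com"))  # jondathabascauca@gmail.com
print(normalize_gmail("jondathabascauca@gmail.com"))    # jondathabascauca@gmail.com
```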
As far as I know they have not extended the attacks further than to my colleagues in SCIS, but I doubt that this is the end of it. If they do think I am still the Chair of the school, it might occur to them that chairs tend to be known outside their schools too.
This is not identity theft – I have experienced the real thing over the past year and, trust me, it is far more unpleasant than this – and it’s certainly not hacking. It’s just crude impersonation that relies on human fallibility and inattention to detail, and that uses nothing but public information from our website to commit good old-fashioned fraud. Nonetheless, and though I was not an intended victim, I still feel a bit violated by the whole thing. It’s mostly just my foolish pride – I don’t so much resent the attackers as the fact that some of the recipients jumped to the conclusion that I had been hacked, and that some even thought the emails were from me. If it were a real hack, I’d feel a lot worse in many ways, but at least I’d be able to do something to try to fix the problem. All that I can do about this kind of attack is to get someone else to make sure the mail filters filter them out, but that’s just a local workaround, not a solution.
We do have a team at AU that deals with such things (if you have an AU account and are affected, send suspicious emails to firstname.lastname@example.org), so this particular scam should have been stopped in its tracks, but do tell me if you get a weird email from ‘me’.
Here are my slides from E-Learn 2019, in New Orleans. The presentation was about the nature of technologies and their roles in communities (groups, networks, sets, whatever), their highly situated nature, and their deep intertwingling with culture. In general it is an argument that literacies (as opposed to skills, knowledge, etc) might most productively and usefully be seen as the hard techniques needed to operate the technologies that are required for any given culture. As well as clarifying the term and using it in the same manner as the original term “literacy”, this implies there may be an indefinitely large range of literacies because we are all members of an indefinitely large number of overlapping cultures. All sorts of possibilities and issues emerge from this perspective.
Abstract: Dozens, if not hundreds, of literacies have been identified by academic researchers, from digital- to musical- to health- to network- literacy, as well as combinatorial terms like new-, multi-, 21st Century-, and media-literacy. Proponents seek ways to support the acquisition of such literacies but, if they are to be successful, we must first agree what we mean by ‘literacy’. Unfortunately, the term is used in many inconsistent and incompatible ways, from simple lists of skills to broad characteristics or tendencies that are either ubiquitous or meaninglessly vague. I argue that ‘literacy’ is most usefully thought of as the set of learned techniques needed to participate in the technologies of a given culture. Through use and application of a culture’s techniques, increasing literacy also leads to increasing knowledge of the associated facts and adoption of the values that come with that culture. Literacy is thus contextually situated, mutates over time as a culture and its technologies evolve, and participates in that co-evolution. As well as subsuming and eliminating much of the confusion caused by the proliferation of x-literacies, this opens the door to more accurately recognizing the literacies that we wish to use, promote and teach for any given individual or group.
There are other reasons (political, aesthetic, reputational, moral, corruption/bribery/kickbacks, familiarity, etc) but I reckon those are the main ones that matter. They are all very good reasons.
Costs and debts
With each IT solution there will always be costs, both initial and ongoing. Because we are talking about technology, and all technologies evolve to greater complexity over time, the ongoing costs will inevitably escalate. It’s not optional. This is what is commonly described as ‘technical debt’, but that is a horrible misnomer. It is not a debt, but the price we pay for the solutions we need. If we don’t pay it, our IT systems decay and die, starved of their connections with the evolving business and global systems around them. It’s no more of a debt than the need to eat or receive medical care is a debt for living.
Thinking locally, not globally
When money needs to be saved in an organization, senior executives tend to look at the inevitably burgeoning cost of IT and see it as ripe for pruning. IT managers thus tend to be placed under extreme pressure to ‘save’ costs. IT managers might often be relieved about that because they are almost certainly struggling to maintain the customized apps already, unless they have carefully planned for those increased costs over years (few do). Sensibly (from their own local perspective, given what they have been charged with doing), they therefore tend to strip out customizations, then shift to baseline applications, and/or cloud-based services that offer financial savings or, at least, predictable costs, giving the illusion of control. Often, they wind up firing, repurposing, or not renewing contracts for development staff, support staff, and others with deep knowledge of the old tools and systems. This keeps the budget in check so they achieve the goals set for them.
Unfortunately, assuming that the organization continues to need to do what it has been doing up to that point, the unavoidable consequence is that things that computers used to do are now done by people in the workforce instead. When made to perform hard mechanical tasks that computers can and should do, people are invariably far more fallible, slow, inconsistent, and inefficient. Far more. They tend to be reluctant, too. To make things worse, these mundane repetitive tasks take time, and crowd out other, more important things that people need to do, such as the things they were hired for. People tend to get tired, angry, and frustrated when made to do mechanical things over which they have little agency, which reduces productivity much further than simply the time lost in doing them. To make matters even worse, there is inevitably going to be a significant learning curve, during which staff try to figure out how to do the work of machines. This tends to lead to inflated training budgets (usually involving training sessions that, as decades of research show, are rarely very effective and have to be repeated), time to read documentation, and more time taken out of the working day. Creativity, ingenuity, innovation, problem-solving, and interaction with others all suffer. The organization as a whole consequently winds up losing many times more (usually by orders of magnitude) than it saved on IT costs, though the IT budget now looks healthy again, so it is often deemed a success. This is like taking the wheels off a car and then proudly pointing to the savings in fuel that result. Unfortunately, such general malaises seldom appear in budget reports, and are rarely accounted for at all, because they get lost in the work that everyone is doing. Often, the only visible signs that it has happened are that the organization just gets slower, less efficient, less creative, more prone to mistakes, and less happy. Things start to break, people start to leave, sick days multiply. The reputation of the organization begins to suffer.
This is usually the point at which more radical, large-scale changes to the organization are proposed, again usually driven by senior management who (unless they listen very carefully to what the workforce is telling them) may well attribute the problems they are seeing to the wrong causes, like external competition. A common approach to the problem is to impose more austerity, thus delivering the killing blow to an already demoralized workforce. That’s an almost guaranteed disaster. Another common way to tackle it is to take greater risks, made all the riskier by having just converted creative, problem-solving, inquisitive workers into cogs in the machine, in the hope of opening up new sources of revenue or different goals. When done under pressure, that seldom ends well, though at least it has some chance of success, unlike austerity. This vicious cycle is hard to escape. I don’t know of any really effective way to deal with it once it has happened.
Thinking in systems
The way to avoid it in the first place is not to kill off and directly replace custom IT solutions with baseline alternatives. There are very good reasons for almost all of those customizations, and those reasons have almost certainly not gone away: all those I mentioned at the start of the post don’t suddenly cease to apply. It is therefore positively stupid to simply remove them without an extremely deep, multifaceted analysis of how they are used and who uses them, and even then only with enormous conservatism and care. However, you probably still want to get rid of them eventually anyway because, as well as being an ever-increasing cost, they have probably become increasingly out of line with how the organization and the world around it are evolving. Unless there has been a steady increase in investment in new IT staff (too rare), so much time is probably now spent keeping old systems going that there is no time to work on improvements or new initiatives. Unless more money can be put into maintaining them (a hard sell, though important to try), the trick is not to slash and burn, and definitely not to replace old customized apps with something different and less well-tailored, but to gently evolve towards whatever long-term solution seems sensible, using techniques such as those I describe below. This has a significant cost, too, but it’s not usually as high, and it can be spread over a much longer period.
If you wish to move away from reliance on a heavily customized learning management system to a more flexible and adaptive learning ecosystem made of more manageable pieces, the trick is to, first of all, build connectors into and out of your old system (if they do not already exist), to expose as many discrete services as possible, and then to make use of plugin hooks (or similar) to seamlessly replace existing functions with new ones. The same may well need to be done with the new system, if it does not already work that way. This is the most expensive part, because it normally demands development time, and what is developed will have to be maintained, but it’s worth it. What you are doing, at an abstract level, is creating boundaries around parts that can be treated as distinct (functions, components, objects, services, etc) and making sure that the signals that pass between them can be understood in the same way by subsystems on either side of the boundary.
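At the level of code, the principle might be sketched like this: define the signal that crosses the boundary as an interface, and let either side be swapped without consumers noticing. This is a minimal illustration in Python, not any real LMS’s API – every name in it is hypothetical:

```python
# A boundary as an interface: consumers depend on the signal (the
# Protocol), never on what lies behind it, so either side can be
# replaced independently. All names here are hypothetical.
from typing import Protocol

class ClassListService(Protocol):
    def class_list(self, course_id: str) -> list[str]:
        """The 'signal': course id in, enrolled usernames out."""
        ...

class LegacyLMSClassList:
    """Wraps the old system's customized enrolment code as a service."""
    def class_list(self, course_id: str) -> list[str]:
        # ... query the old LMS via its plugin hook or database here ...
        return ["alice", "bob"]

class NewEcosystemClassList:
    """Same signal, new implementation behind the same boundary."""
    def class_list(self, course_id: str) -> list[str]:
        # ... call the new system's standards-based API here ...
        return ["alice", "bob"]

def print_roster(service: ClassListService, course_id: str) -> None:
    # Consumers see only the boundary, never the implementation.
    for user in service.class_list(course_id):
        print(user)

print_roster(LegacyLMSClassList(), "COMP650")     # old side of the boundary
print_roster(NewEcosystemClassList(), "COMP650")  # new side; no consumer changes
```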
Open industry standards (APIs, protocols, etc) are almost essential here, because apps at both sides of the boundary need to speak the same language. Proprietary APIs are risky: you do not want to start doing this then have a vendor decide to change its API or its terms and conditions. It’s particularly dangerous to do this with proprietary cloud-based services, where you don’t have any control whatsoever over APIs or backends, and where sudden changes (sometimes without even a notification that they are happening) are commonplace. It’s fine to use containers or virtual machines in the cloud – they can be replaced with alternatives if things go wrong, and can be treated much like applications hosted locally – and it’s fine to use services with very well defined boundaries, with standards-based APIs to channel the signals. It is also fine to build your own, as long as you control both sides of the boundary, though maintenance costs will tend to be higher. It is not fine to use whole proprietary applications or services in the cloud because you cannot simply replace them with alternatives, and changes are not under your control. Ideally, both old and new systems should be open source so that you are not bound to one provider, you can make any changes you need (if necessary), and you can rely on having ongoing access to older versions if things change too fast.
Having done this, you have two main ways to evolve, which you can choose between according to your needs:
to gradually phase in the new tools you want and phase out the old ones you don’t want in the old system until, like the ship of Theseus, you have replaced the entire thing (a minimal sketch of how this might look in code follows these two options). This lets you retain your customizations and existing investments (especially in knowledge of those systems) for the longest time, because you can replace the parts that do not rely on them before tackling those that do. Meanwhile, those same fresh tools can start to make their appearance in whatever other new systems you are trying to build, and you can make a graceful, planned transition as and when you are ready. This is particularly useful if there is a great deal of content and learning already embedded in the system, which is invariably the case with LMSs. It means people can mostly continue to work the way they’ve always worked, while slowly learning about and transitioning to a new way of working.
to make use of some services provided by the old system to power the new one. For instance, if you have a well-established means of generating class lists or collecting assessment data that involves a lot of custom code, you can offer that as a service from the old tool to your new tool, rather than reimplementing it afresh straight away or requiring users to manually replace the custom functions with fallible human work. Eventually, once the time is right to move and you can afford it, you can then simply replace it with a different service, with virtually no disruption to anyone. This is better when you want a clean break, especially useful when the new system does things that the original could not do, though it still normally allows simultaneous operation for a while if needed, as well as the option to fall back to the old system in the event of a disaster.
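As promised above, here is a minimal sketch of the first option (hypothetical names throughout, not any particular product): a thin facade routes each function to the old or the new system, and functions are flipped one at a time – and can be flipped back – as the migration proceeds.

```python
# Ship-of-Theseus migration: a thin facade routes each function to the
# old or new system via per-function flags, so pieces can be replaced
# (or rolled back) one at a time. Everything here is hypothetical.

MIGRATED = {"discussion_forum": True, "gradebook": False}

def old_system_grades(course_id: str) -> dict:
    # ... the heavily customized code in the old LMS, kept running ...
    return {"alice": 85}

def new_system_grades(course_id: str) -> dict:
    # ... the replacement service, phased in when it is ready ...
    return {"alice": 85}

def fetch_grades(course_id: str) -> dict:
    """Callers never know which system answered."""
    if MIGRATED["gradebook"]:
        return new_system_grades(course_id)
    return old_system_grades(course_id)  # fall back until the flag flips

print(fetch_grades("COMP650"))  # served by the old system until migrated
```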
There are other hybrid alternatives, such as setting up other systems to link both, so that the systems do not interact directly but via a common intermediary. In the case of an LMS migration, this might be a learning record store (LRS) or student record system, for instance. The general principle, though, is to keep part or all of the old system running simultaneously for however long it is needed, parcellating its tools and services, while slowly transitioning to the new. Of course, this does imply extra cost in the short term, because you now have to manage at least two systems instead of one. However, by phasing it this way you greatly reduce risk, spread costs over a timeframe that you control, and allow for changes in direction (including reversal) along the way, which is always useful. The huge costs you save are those that are hidden from conventional accounting – the time, motivation, and morale of the workforce that uses the system. As a useful bonus, this service-oriented approach to building your systems also allows you to insert other new tools and implement other new ideas with a greatly diminished level of risk, with fewer recurring costs, and without the one-time investment of having to deal with your whole monolithic codebase and data. This is great if you want to experiment with innovations at scale. Once you have properly modularized your system, you can grow it and change it by a process of assembly. It often allows you to offer more control to end users, too: for instance, in our LMS example you might allow individuals to choose between different approaches to a discussion forum, or content presentation, or to insert a research-based component without so many of the risks (security, performance, reliability, etc) normally associated with implementing less well-managed code.
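To make the intermediary idea concrete, here is a hedged sketch of how either system might report activity to a shared LRS using the open xAPI standard. The statement structure and version header follow the xAPI specification, but the endpoint URL and credentials are placeholders:

```python
# Both old and new systems can emit learning activity to a shared LRS
# using the open xAPI standard, so neither needs to know about the
# other. The endpoint URL and credentials below are placeholders.
import requests

statement = {
    "actor": {"name": "A Learner", "mbox": "mailto:learner@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://lms.example.com/activities/unit-1",
               "definition": {"name": {"en-US": "Unit 1"}}},
}

response = requests.post(
    "https://lrs.example.com/xapi/statements",      # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},  # required by the xAPI spec
    auth=("lrs_user", "lrs_password"),              # placeholder credentials
)
response.raise_for_status()  # the LRS replies with the stored statement id(s)
```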
Signals and boundaries
In essence, this is all about signals and boundaries. The idea is to identify and, if they don’t exist, create boundaries between distinct parts of systems, then to focus all your management efforts on the signals that pass across them. As long as the signals remain the same from both sides, what lies on either side of the boundaries can be isolated and replaced when needed. This happens to be the main way that natural systems evolve too, from organisms to ecosystems. It has done pretty good service for a billion years or so.
Tony Bates extensively referenced this report from the Royal Bank of Canada on Canadian employer demands for skills over the next few years in his characteristically perceptive keynote at CNIE 2019 last week (it’s also referred to in his most recent blog post). It’s an interesting read. Central to its many findings and recommendations is the claim that the Canadian education system is inadequately designed to cope with these demands and that it needs to change. The report played a big role in Tony’s talk, though his thoughts on appropriate responses to that problem were valid in and of themselves, and not all were in perfect alignment with the report.
The 43-page manifesto (including several pages of not very informative graphics) combines some research findings with copious examples to illustrate its discoveries, and various calls to action based on them. Perhaps unsurprisingly for a document intended to ignite, it is often rather hard to tell in any detail how the research itself was conducted. The methodology section is mainly on page 33, but it doesn’t give much more than a broad outline of how the main clustering was performed and the general approach to discovering information. It seems that a lot of work went into it, but it is hard to tell how that work was conducted.
A novel (-ish) finding: skillset clusters
Perhaps the most distinctive and interesting research discovery in the report is a predictive/descriptive model of skillsets needed in the workplace. By correlating occupations from the federal NOC (National Occupational Classification) with a US Labor Department dataset (O*NET) the researchers abstracted and identified six distinct clusters of skillsets, the possessors of which they characterize as:
solvers (engineers, architects, big data analysts, etc)
providers
technicians
crafters
doers
facilitators
From this, they make the interesting, if mainly anecdotally supported, assertion that there are clusters of occupations across which these skills can be more easily transferred. For instance, they reckon, a dental assistant is not too far removed from a graphic designer because both are high on the facilitator spectrum (emotional intelligence needed). They do make the disclaimer that, of course, other skills are needed and someone with little visual appreciation might not be a great graphic designer despite being a skilled facilitator. They also note that, with training, education, apprenticeship models, etc, it is perfectly possible to move from one cluster to another, and that many jobs require two or more anyway (mine certainly needs high levels of all six). They also note that social skills are critical, and are equally important in all occupations. So, even if their central supposition is true, it might not be very significant.
There is a somewhat intuitive appeal to this, though I see enormous overlap between all of the clusters and find some of the exemplars and descriptions of the clusters weirdly misplaced: in what sense is a carpenter not a crafter, or a graphic designer not a provider, or an electrician not a solver, for instance? It treads perilously close to the borders of x-literacies – some variants of which come up with quite similar categories – or learning style theories, in its desperate efforts to slot the world into manageable niches regardless of whether there is any point to doing so. The worst of these is the ‘doers’ category, which seems to be a thinly veiled euphemism for ‘unskilled’ (which, as they rightly point out, relates to jobs that are mostly under a great deal of threat). ‘Doing’ is definitely ripe for transfer between jobs because mindless work in any occupation needs pretty much the same lack of skill. My sense is that, though it might be possible to see rough patterns in the data, the categories are mostly very fuzzy and blurred, and could easily be used to label people in very unhelpful ways. It’s interesting from a big-picture perspective but, when you’re applying it to individual human beings, this kind of labelling can be positively dangerous. It could easily lead to a species of the same general-to-specific thinking that caused the death of many airplane pilots prior to the 1950s, until the (obvious but far-reaching) discovery that there is no such thing as an average-sized pilot. You can classify people into all sorts of types, but it is wrong to make any further assumptions about them because you have done so. This is the fundamental mistake made by learning style theorists: you can certainly identify distinct learner types or preferences, but that makes no difference whatsoever to how you should actually teach people.
Education as a feeder for the job market
Perhaps the most significant and maybe controversial findings, though, are those leading more directly to recommendations to the educational and training sector, with a very strong emphasis on preparedness for careers ahead. One big thing bothers me in all of this. I am 100% in favour of shifting the emphasis of educational institutions from knowledge acquisition to more fundamental and transferable capabilities: on that the researchers of this report hit the nail on the head. However, I don’t think that the education system should be thought of, primarily, as a feeder for industry or preparation for the workplace. Sure, it’s definitely one important role for education, but I don’t think it’s the dominant one, and it’s very dangerous indeed to make that its main focus to the exclusion of the rest. Education is about learning to be a human in the context of a society; it’s about learning to be part of that culture and at least some of its subcultures (and, ideally, about understanding different cultures). It’s a huge binding force, it’s what makes us smart, individually and collectively, and it is by no means limited to things we learn in institutions or organizations. Given their huge role in shaping how we understand the world, at the very least media (including social media) should, I think, be included whenever we talk of education. In fact, as Tony noted, the shift away from institutional education is rapid and on a vast scale, bringing many huge benefits, as well as great risks. Outside the institutions designed for the purpose, education is often haphazard, highly prone to abuse, susceptible to mob behaviours, and often deeply harmful (Trump, Brexit, etc being only the most visible tips of a deep malaise). We need better ways of dealing with that, which is an issue that has informed much of my research. But education (whether institutional or otherwise) is for life, not for work.
I believe that education is (and should be) at least partly concerned with passing on what we know, who we have been, who we are, how we behave, what we value, what we share, how we differ, what drives us, how we matter to one another. That is how it becomes a force for societal continuity and cohesion, which is perhaps its most important role (though formal education’s incidental value to the economy, especially through schools, as a means to enable parents to work cannot be overlooked). This doesn’t have to exclude preparation for work: in fact, it cannot. It is also about preparing people to live in a culture (or cultures), and to continue to learn and develop productively throughout their lives, evolving and enhancing that culture, which cannot be divorced from the tools and technologies (including rituals, norms, rules, methods, artefacts, roles, behaviours, etc) of which the cultures largely consist, including work. Of course we need to be aware of, and incorporate into our teaching, some of the skills and knowledge needed to perform jobs, because that’s part of what makes us who we are. Equally, we need to be pushing the boundaries of knowledge ever outwards to create new tools and technologies (including those of the arts, the humanities, the crafts, literature, and so on, as well as of sciences and devices) because that’s how we evolve. Some – only some – of that will have value to the economy. And we want to nurture creativity, empathy, social skills, communication skills, problem-solving skills, self-management skills, and all those many other things that make our culture what it is and that allow us to operate productively within it, that also happen to be useful workplace skills. But human beings are also much more than their jobs. We need to know how we are governed, the tools needed to manage our lives, the structures of society. We need to understand the complexities of ethical decisions. We need to understand systems, in all their richness. We need to nurture our love of arts, sports, entertainment, family life, the outdoors, the natural and built environment, fine (and not fine) dining, being with friends, talking, thinking, creating stuff, appreciating stuff, and so on. We need to develop taste (of which Hume eloquently wrote hundreds of years ago). We need to learn to live together. We need to learn to be better people. Such things are (I think) more who we are, and more what our educational systems should focus on, than our productive roles in an economy. The things we value most are, for the most part, seldom our economic contributions to the wealth of our nation, and the wealth of a nation should never be measured in economic terms. Even those few that love money the most usually love the power it brings even more, and that’s not the same thing as economic prosperity for society. In fact, it is often the very opposite.
I’m not saying economic prosperity is unimportant, by any means: it’s often a prerequisite for much of the rest, and sometimes (though far from consistently) a proxy marker for them. And I’m not saying that there is no innate value in the process of achieving economic prosperity: many jobs are critical to sustaining that quality of life that I reckon matters most, and many jobs actually involve doing the very things we love most. All of this is really important, and educational systems should cater for it. It’s just that future employment should not be thought of as the main purpose driving education systems.
Unfortunately, much of our teaching actually is heavily influenced by the demands of students to be employable, reinforced on all sides by employers, families, and governments, and that tends to lead to a focus on topics, technical skillsets, and subject knowledge, not so much to the exclusion of all the rest, but as the primary framing for it. For instance, HT to Stu Berry and Terry Anderson for drawing my attention to the mandates set by the BC government for its post-secondary institutions, which are a litany of shame, horribly focused on driving economic prosperity and feeding industry to the exclusion of almost anything else (including learning and teaching, or research for the sake of it, or things that enrich us as human beings rather than cogs in an economic machine). This report seems to take the primary role of education as a driver of economic prosperity as just such a given. I guess, being produced by a bank, that’s not too surprising, but it’s worth viewing it with that bias in mind.
And now the good news
What is heartwarming about this report, though, is that employers seem to want (or think they will want) more or less exactly those things that also enrich our society and our personal lives. Look at this fascinating breakdown of the skills employers think they will need in the future (Tony used this in his slides):
There’s a potential bias due to the research methodology, which I suspect encouraged participants to focus on more general skills, but it’s really interesting to see what comes in the first half and what dwindles into unimportance at the end.
Topping the list are active listening, speaking, critical thinking, comprehension, monitoring, social perceptiveness, coordination, time management, judgement and decision-making, active learning, service orientation, complex problem solving, writing, instructing, persuasion, learning strategies, and so on. These mostly quite abstract skills (in some cases propensities, albeit propensities that can be cultivated) can only emerge within a context, and it is not only possible but necessary to cultivate them in almost any educational intervention in any subject area, so it is not as though they are being ignored in our educational systems. More on that soon. What’s interesting to me is that they are the human things, the things that give us value regardless of economic value. I find it slightly disconcerting that ethical or aesthetic sensibilities didn’t make the list and there’s a surprising lack of mention of physical and mental health but, on the whole, these are life skills more than just work skills.
Conventional education can and often does cultivate these skills. I am pleased to brag that, as a largely unintentional side-effect of what I think teaching in my fields should be about, these are all things I aim to cultivate in my own teaching, often to the virtual exclusion of almost everything else. Sometimes I have worried (a little) that I don’t have very high technical expectations of my students. For instance, my advanced graduate level course in information management provides technical skills in database design and analysis that are, for the most part, not far above high-school level (albeit that many students go far beyond that); my graduate level social computing course demands no programming skills at all (technically, they are optional); my undergraduate introduction to web programming course sometimes leads to limited programming skills that would fail to get them a passing grade in a basic computer science course (though they typically pass mine). However (and it’s a huge HOWEVER) they have a far greater chance to acquire far more of those skills that I believe matter, and (gratifyingly) employers seem to want, than those who focus only on mastery of the tools and techniques. My web programming students produce sites that people might actually want to visit, and they develop a vast range of reflective, critical thinking, complex problem-solving, active learning, judgment, persuasion, social perceptiveness and other skills that are at the top of the list. My information management students get all that, and a deep understanding of the complex, social, situated nature of the information management role, with some notable systems analysis skills (not so much the formal tools, but the ways of understanding and thinking in systems). My social computing students get all that, and come away with deep insights into how the systems and environments we build affect our interactions with one another, and they can be fluent, effective users and managers of such things. All of the successful ones develop social and communication skills, appropriate to the field. Above all, my target is to help students to love learning about the subjects of my courses enough to continue to learn more. For me, a mark of successful teaching is not so much that students have acquired a set of skills and knowledge in a domain but that they can, and actually want to, continue to do so, and that they have learned to think in the right ways to successfully accomplish that. If they have those skills, then it is not that difficult to figure out specific technical skillsets as and when needed. Conveniently, and not because I planned it that way, that happens to be what employers want too.
Employers don’t (much) want science or programming skills: so what?
Even more interesting, perhaps, than the skills employers do want are the skills they do not want, from Operation Monitoring onwards in the list, that are often the primary focus of many of our courses. Ignoring the real nuts-and-bolts stuff at the very bottom, like installation, repairing, maintenance, and selection (more on that in a minute), it is fascinating that skills in science, programming, and technology design are hardly wanted at all by most companies, but are massively over-represented in our teaching. The writers of the report do offer the proviso that it is not impossible that new domains will emerge that demand exactly these skills but, right now and for the foreseeable future, that’s not what matters much to most organizations. This doesn’t surprise me at all. It has long been clear that the demand for people who create the foundations is, of course, going to be vastly smaller than the demand for people who build upon them, let alone the vastly greater numbers who make use of what has been built upon them. It’s not that those skills are useless – that’s a million miles from the truth – but that there is a very limited job market for them. Again, I need to emphasize that educators should not be driven by job markets: there is great value in knowing this kind of thing regardless of our ability to apply it directly in our jobs. On the other hand, nor should we be driven by a determination to teach all there is to know about foundations, when what interests people (and employers, as it happens) is what can be done with them. And, in fact, even those building such foundations desperately need to know that too, or the foundations will be elegant but useless. Importantly, those ‘foundational’ skills are actually often anything but, because the emergent structures that arise from them obey utterly different rules to the pieces of which they are made. Knowing how a cell works tells you nothing whatsoever about the function of a heart, let alone how you should behave towards others, because different laws and principles apply at different levels of organization. A sociologist, say, really doesn’t need to know much about brain science, even though our brains probably contribute a lot to our social systems, because it’s the wrong foundation, at the wrong level of detail. Similarly, there is not a lot of value in knowing how CPUs work if your job is to build a website, or a database system supporting organizational processes (it’s not useless, but it’s not very useful so, given limited resources, it makes little sense to focus on it). For almost all occupations (paid or otherwise) that make use of science and technology, it matters vastly more to understand the context of use, at the level of detail that matters, than it does to understand the underlying substructures. This is even true of scientists and technologists themselves: for most scientists, social and business skills will have a far greater effect on their success than fundamental scientific knowledge. But, if students are interested in the underlying principles and technologies on which their systems are based, then of course they should have freedom and support to learn more about them. It’s really interesting stuff, irrespective of market demand. It enriches us. Equally, they should be supported in discovering gothic literature, social psychology, the philosophy of art, the principles of graphic design, wine making, and anything else that matters to them. Education is about learning to be, not just learning to do.
Nothing of what we learn is wasted or irrelevant. It all contributes to making us creative, engaged, mutually supportive human beings.
With that in mind, I do wonder a bit about some of the skills at the bottom of the list. It seems to me that all of the bottom four demand – and presuppose – just about all of those in the top 12. At least, they do if they are done well. Similarly for a few others trailing the pack. It is odd that operation monitoring is not much desired, though monitoring is. It is strange that troubleshooting is low in the ranks, but problem-solving is high. You cannot troubleshoot without solving problems. It’s fundamental. I guess it speaks to the idea of transferability and the loss of specificity in roles. My guess is that, in answering the questions of the researchers, employers were hedging their bets a bit and not assuming that specific existing job roles will be needed. But conventional teachers could, with some justification, observe that their students are already acquiring the higher-level, more important skills, through doing the low-level stuff that employers don’t want as much. Though I have no sympathy at all with our collective desire to impose this on our students, I would certainly defend our teaching of things that employers don’t want, at least partly because (in the process) we are actually teaching far more. I would equally defend even the teaching of Latin or ancient Greek (as long as these are chosen by students, never when they are mandated) because the bulk of what students learn is never the skill we claim to be teaching. It’s much like what the late, wonderful, and much lamented Randy Pausch called a head fake – to be teaching one thing of secondary importance while primarily teaching another deeper lesson – except that rather too many teachers tend to be as deceived as their students as to the real purpose and outcomes of their teaching.
Automation and outsourcing
As the report suggests, it may also be that those skills lower in the ranking tend to be things that can often be outsourced, including (sooner or later) to machines. It’s not so much that the jobs will not be needed, but that they can be either automated or concentrated in an external service provider, reducing the overall job market for them. Yes, this is true. However, again, the methodology may have played a large role in coming to this conclusion. There is a tendency, of which we are all somewhat guilty, to look at current patterns of change (in this case the trend towards automation and outsourcing) and to assume that they will persist into the future. I’m not so sure.
Take the stampede to move to the cloud, for instance, which is a clear underlying assumption in at least the undervaluing of programming. We’ve had phases of outsourcing several times before over the past 50 or 60 years of computing history. Cloud outsourcing is only new to the extent that the infrastructure to support it is much cheaper and better established than it was in earlier cycles, and there are smarter technologies available, including many that benefit from scale (e.g. AI, big data). We are currently probably at or near peak Cloud but, even if it has yet to peak, it is just a trend. It might last a little longer than the previous generations (which, of course, never actually went away – it’s just an issue of relative dominance), but it suffers from most of the problems that brought previous outsourcing hype cycles to an end. The loss of in-house knowledge, the dangers of proprietary lock-in, the surrender of control to another entity that has a different (and, inevitably, at some point conflicting) agenda, and so on, are all counter-forces that hold outsourcing in check. History and common sense suggest that there will eventually be a reversal of the trend and, indeed, we are seeing it here and there already, with the emergence of private clouds, regional/vertical cloud layers, hybrid clouds, and so on. Big issues of privacy and security are already high on the agendas of many organizations, with an increasing number of governments starting to catch up with legislation that heavily restricts unfettered growth of (especially) US-based hosting, with all the very many very bad implications for privacy that entails. Increasingly, businesses are realizing that they have lost the organizational knowledge and intelligence to effectively control their own systems: decisions that used to be informed by experts are now made by middle managers with insufficient detailed understanding of the complexities, who are easy prey for cloud companies willing to exploit their ignorance. Equally, they are liable to be outflanked by those who can adapt faster and less uniformly, inasmuch as everyone gets the same tools in the Cloud, so there is less to differentiate one user of it from the next. OK, I know that is a sweeping generalization – there are many ways to use cloud resources that do not rely on standard tools and services. We don’t have to buy in to the proprietary SaaS rubbish, and can simply move servers to containers and VMs while retaining control, but the cloud companies are persuasive and keen to lure us in, with offers of reduced costs, higher reliability, and increased, scalable performance that are very enticing to stressed, underfunded CIOs with immediate targets to meet. Right now, cloud providers are riding high and making ridiculously large profits, but the same was true of IBM (and its lesser competitors) in the 60s and 70s. They were brought down (though never fully replaced) by a paradigm change that was, for the most part, a direct reaction to the aforementioned problems, plus a few that are less troublesome nowadays, like the performance and cost of leased lines. I strongly suspect something similar will happen again in a few years.
Automation and the end of all things we value
Automation – especially through the increased adoption of AI techniques – may be a different matter. It is hard to see that becoming less disruptive, albeit that the reality is and will be much more mundane than the hype, and there will be backlashes. However, I greatly fear that we have a lot of real stupidity yet to come in this. Take education, for instance. Many people whose opinions I otherwise respect are guilty of thinking that teachers can be, to a meaningful extent, replaced by chatbots. They are horribly misguided but, unfortunately, people are already doing it, and claiming success, not just in teaching but in fooling students that they are being taught by a real teacher. You can indeed help people to pass tests through the use of such tools. However, the only thing that tests prove about learning is that you have learned to pass them. That’s not what education is for. As I’ve already suggested, education is really not much to do with the stuff we think we teach. It is about being and becoming human. If we learn to be human from what are, in fact, really very dumb machines with no understanding whatsoever of the words they speak, no caring for us, no awareness of the broader context of what they teach, no values to speak of at all, we will lower the bar for artificial intelligence because we will become so much dumber ourselves. It will be like being taught by an unusually tireless and creepily supportive (because why would you train a system to be otherwise?) person. We should not care for them, and that matters, because caring (both ways) is critical to the relationship that makes learning with others meaningful. But it will be even worse if and when we do start caring for them (remember the Tamagotchi?). When we start caring for soulless machines (I don’t mean ‘soul’ in a religious or transcendent sense), when it starts to matter to us that we are pleasing them, we will learn to look at one another in the same way and, in the process, lose our own souls. A machine, even one that fools us into thinking it is human, makes a very poor role model. Sure, let them handle helpdesk enquiries (and pass them on if they cannot help), let them supplement our real human interactions with useful hints and suggestions, let them support us in the tasks we have to perform, let them mark our tests to double-check we are being consistent: they are good at that kind of thing, and will get better. But please, please, please don’t let them replace teachers.
I am afraid of AI, not because I am bothered by the likelihood of an AGI (artificial general intelligence) superseding our dominant role on the planet: we have at least decades to think about that, and we can and will augment ourselves with dumb-but-sufficient AI to counteract any potential ill effects. The worst outcome of AI in the foreseeable future is that we devalue ourselves, that we mistake the semblance of humanity for humanity itself, that machines will become our role models. We may even think they are better than us, because they will have fewer human foibles and a tireless, on-demand semblance of caring that we will mistake for being human (a bit like obsequious serving staff seeking tips in a restaurant, but creepier, less transparent, and infinitely patient). Real humans will disappoint us. Bots will be trained to be what their programmers perceive as the best of us, even though we don’t have more than the glimmerings of an idea of what ‘best’ actually means (philosophers continue to struggle with this after thousands of years, and few programmers have even studied philosophy at a basic level). That way the end of humanity lies: slowly, insidiously, barely noticeably at first. Not with a bang but with an Alicebot. Arthur C. Clarke delightfully claimed that any teacher who could be replaced by a machine should be. I fear that we are not smart enough to realize that it is, in fact, very easy to successfully replace a teacher with a machine if you don’t understand the teacher’s true role in the educational machine, and you don’t make massive changes to it. As long as we think of education as the achievement of pre-specified outcomes that we measure using primitive tools like standardized tests, exams, and other inauthentic metrics, chatbots will quite easily supersede us, despite their inadequacies. It is way too easy to mistake the weirdly evolved educational system that we are part of for education itself: we already do so in countless ways. Learning management systems, for instance, are not designed for learning: they are designed to replicate mediaeval classrooms, with all the trimmings, yet they have been embraced by nearly all institutions because they fit the system. AI bots will fit even better. If we do intend to go down this path (and many are doing so already) then please let’s think of these bots as supplemental, first-line support, and please let’s make it abundantly clear that they are limited, fixed-purpose mechanisms: not substitutes but supplements that can free us from trivial tasks to let us concentrate on being more human.
Co-ops and placements
The report makes a lot of recommendations, most of which make sense – e.g. lifelong support for learning from governments, focus on softer more flexible skills, focus on adaptability, etc. Notable among these is the suggestion, as one of its calls to action, that all PSE students should engage in some form of meaningful work-integrated learning placements during their studies. This is something that we have been talking about offering to our program students in computing for some time at Athabasca University, though the demand is low because a large majority of our students are already working while studying, and it is a logistical nightmare to do this across the whole of Canada and much of the rest of the globe. Though some AU programs embed it (nursing, for instance) I’m not sure we will ever get round to it in computing. I do very much agree that co-ops and placements are typically a good idea for (at least) vocationally-oriented students in conventional in-person institutions. I supervised a great many of these (for computing students) at my former university and observed the extremely positive effects it usually had, especially on those taking the more humanistic computing programs like information systems, applied computing, computer studies, and so on. When they came back from their sandwich year (UK terminology), students were nearly always far wiser, far more motivated, and far more capable of studying than the relatively few that skipped the opportunity. Sometimes they were radically transformed – I saw borderline-fail students turn into top performers more than once – but, apart from when things fell apart (not common, but not unheard of), it was nearly always worth far more than at least the previous couple of years of traditional teaching. It was expensive and disruptive to run, demanding a lot from all academic staff and especially from those who had to organize it all, but it was worth it.
But, just because it works in conventional institutions doesn’t mean that it’s a good idea. It’s a technological solution that works because conventional institutions don’t. Let’s step back a bit from this for a moment. Learning in an authentic context, when it is meaningful and relevant to clear and pressing needs, surrounded by all the complexities of real life (notwithstanding that education should buffer some of that, and make the steps less risky or painful), in a community of practice, is a really good idea. Apprenticeship models have thousands of years of successful implementation to prove their worth, and that’s essentially what co-ops or placements achieve, albeit only in a limited (typically 3-month to 1-year) timeframe. It’s even a good idea when the study area and working practices do not coincide, because it allows many more connections to be made in both aspects of life. But why not extend that to all (or almost all) of the process? To an extent, this is what we at Athabasca already do, although it tends to be more the default context than something we take intentional advantage of. Again, my courses are an exception – most of mine (and all to some extent) rely on students having a meaningful context of their own, and give opportunities to integrate work or other interests and study by default. In fact, one of the biggest problems I face in my teaching arises on those rare occasions when students don’t have sufficient aspects of work or leisure that engage them (e.g. prisoners or visiting students from other universities), or work in contexts that cannot be used (e.g. defence workers). I have seen it work for in-person contexts, too: the Teaching Company Scheme in the UK, which later became Knowledge Transfer Partnerships, has been hugely successful over several decades, marrying workplace learning with academic input, usually leading to a highly personalized MSc or MA while offering great benefits to lecturers, employers, and students alike. They are fun, but resource-intensive, to supervise. Largely for this reason, in the past it might have been hard to make this scale below graduate level, but modern technologies – shared workspaces, blogs, portfolio management tools, rich realtime meeting tools, etc – and a more advanced understanding of ways to identify and record competencies make it far more possible. It seems to me that what we want is not co-ops or placements, but a robust (and, ideally, publicly funded) approach to integrating academic and in-context learning. Already, a lot of my graduate students and a few undergraduates are funded by their employers, working on our courses at the same time as doing their existing jobs, which seems to benefit all concerned, so there’s clearly a demand. And it’s not just an option for vocational learning. Though (working in computing) much of my teaching does have a vocational grounding, if not a vocational focus, I have come across students elsewhere across the university who are doing far less obviously job-related studies with the support of their employers. In fact, it is often a much better idea for students to learn stuff that is not directly applicable to their workplace, because the boundary-crossing it entails does more to improve a vast range of the most important skills identified in the RBC report – creativity, communication, critical thinking, problem solving, judgement, listening, reading, and so on. Good employers see the value in that.
Though this is a long post, I have only cherry-picked a few of the many interesting issues that emerge from the report. I think, though, that there are some consistent general themes in my reactions to it:
1: it’s not about money
Firstly, the notion that educational systems should be primarily thought of as feeders for industry is dangerous nonsense. Our educational systems are preparation for life (in society and its cultures), and work is only a part of that. Preparedness for work is better seen as a side-effect of education, not its purpose. And education is definitely not the best vehicle for driving economic prosperity. The teaching profession is almost entirely populated by extremely smart, capable, people who (especially in relation to their qualifications) are earning relatively little money. To cap it all, we often work longer hours, in poorer conditions than many of our similarly capable industry colleagues. Though a fair living wage is, of course, very important to us, and we get justly upset when offered unfair wages or worsening conditions, we don’t work for pay: we are paid for our work. Notwithstanding that a lack of money is a very bad thing indeed and should be avoided like the plague, we do so precisely because we think there are some things – common things – that are much more important than money (this may also partly account for a liberal bias in the profession, though it also helps that the average IQ of teachers is a bit above the norm). And, whether explicitly or otherwise, this is inevitably part of what we teach. Education is not primarily about learning a set of skills and facts: it’s about learning to be, and the examples that teachers set, the way they model roles, cannot help but come laden with their own values. Even if we scrupulously tried to avoid it, the fact of our existence serves as a prime example of people who put money relatively low on their list of priorities. If we have an influence (and I hope we do) we therefore encourage people to value things other than a large wage packet. So, if you are going to college or school in the hope of learning to make loads of money, you’re probably making the wrong choice. Find a rich person instead and learn from them.
2: it is about integrating education and the rest of our lives
Despite its relentless focus on improving the economy, I think this report is fundamentally right in most of the suggestions it makes about education, though it doesn’t go far enough. It is not so much that we should focus on job-related skills (whatever they might be) but that we should integrate education with and throughout our lives. The notion of taking someone out of their life context and inflicting on them a bunch of knowledge-acquisition tasks with inauthentic, teacher-led criteria for success, not to mention subjugating them to teacher control over all that they do, is plain dumb. There may be odd occasions where retreating from and separating education from the world is worthwhile, but they are few and far between, and can be catered for on an individual-needs basis.
Our educational processes evolved in a very different context, where the primary intent was to teach dogma to the many by the few, and where physical constraints (rarity of books and reading skills, limited availability of scholars, limits of physical spaces) made lectures in dedicated spaces appropriate solutions to those particular technical problems. Later, education evolved to focus more on creating a pliant and capable workforce to meet the needs of employers and the military, which happened to fit fairly well with the one-to-many, top-down-control models devised to teach divinity and the like. Though those days are mostly behind us, we still retain strong echoes of these roles in much of our structure and processes – our pedagogies are still deeply rooted in the need to learn specific stuff, dictated and directed by others, in this weird, artificial context. Somewhere along the way (in part because higher education, at least, was formerly a scarce commodity) we also turned into filters and gatekeepers for employment purposes.
But, today, we are trying to solve different problems. Modern education has tended to tread a shifting path between supporting individual development and improving our societies: these should be mutually supportive roles, though different educational systems tend to put more emphasis on one than the other. With that in mind, it no longer makes sense to routinely (in fact almost universally) take people out of their physical, social, or work context to learn stuff. There are times when that helps or may even be necessary: when we need access to expensive shared resources (that mediaeval problem again), for instance, or when we need to work with in-person communities (it is hard to teach acting unless you have an opportunity to act with other actors, for example), or when it might be notably dangerous to practise in the real world (though virtual simulations can help). But, on the whole, we can learn far better in a real-world context, where we can put our learning directly into useful practice, where it has value to us and those around us.
Community matters immensely – for learning, for motivation, for diversity of ideas, for belonging, for connection, etc – and one of the greatest values of traditional education is that it provides a ready-made social context. We should not throw the baby out with the bathwater: it is important to sustain such communities, online or in-person. But it does not have to be, and should never be, the only social context, and it does not need to be the main social context for learning. Pleasingly, in his own excellent keynote at CNIE, our president Neil Fassina made some very similar points. I think that Athabasca is well on course towards a much brighter future.
3: what we teach is not what you learn
Finally, the whole education system (especially in higher education) is one gigantic head fake. By and large, the subjects we teach are of relatively minor significance. We teach ways of thinking, we teach values, we teach a few facts and skills, but mainly we teach a way of being. For all that, what you actually learn is something else entirely, and it is different from what every one of your co-learners learns, because 1) you are your main and most important teacher and 2) you are surrounded by others (in person, in artefacts they create, online) who also teach you. We need to embrace that far more than we typically do. We need to acknowledge and celebrate the differences in every single learner, not teach stuff at them in the vain belief that what we have to tell them matters more than what they want to learn, or that somehow (contrary to all evidence) everyone comes in and leaves knowing the same stuff. We’ve got to stop rewarding compliance and punishing non-compliance.
What you learn changes you. It makes you able to see things differently, do things differently, make new connections. Anything you learn. There is no such thing as useless learning. It is, though, certainly possible to learn harmful things – misconceptions, falsehoods, blind beliefs, and so on – so the most important skill is to distinguish those from the things that are helpful (not necessarily true – helpful). On the whole, I don’t like approaches to teaching that make you learn stuff faster (though they can be very useful when solving some kinds of problem) because they devalue the journey. I like approaches that help you learn better: deeper, more connected, more transformative.
This doesn’t mean that the RBC report is wrong in criticizing our current educational systems, but it is wrong to believe that the answer is to stop (or reduce) teaching the stuff that employers don’t think they need. Learners should learn whatever they want or need to learn, whenever they need to do so, and educational institutions (collectively) should support that. But that also doesn’t mean teachers should teach what learners (or employers, or governments) think they should teach, because 1) we always teach more than that, whether we want to or not, and it all has value, and 2) none of these entities are our customers. The heartbreaking thing is that some of the lessons most of us unintentionally teach – from mindless capitulation to authority, to the terrible approaches to learning nurtured by exams, to the truly awful beliefs that people do not like, or are not able, to learn certain subjects or skills – are firmly in the harmful category.
It does mean that we need to be more aware of the hidden lessons, and of what our students are actually learning from them. We need to design our teaching in ways that allow students to make it relevant and meaningful in their lives, so that every one of them can apply their learning to things that matter to them. We need to help them to reflect and connect, to adopt approaches, attitudes, and values that they can constantly use throughout their lives, in the workplace or not. We need to help them to see what they have learned in a broader social context, to pay it forward and spread their learning contagiously, both in and out of the classroom (or wherever they are doing their learning). We need to be partners and collaborators in learning, not providers. If we do that then, even if we are teaching COBOL, Italian Renaissance poetry, or some other ‘useless’ subject, we will be doing what employers seem to want and need. More importantly, we will be enriching lives, whether or not we make people fiscally richer.
This is roughly the content of my 3-minute pitch to explain (some of) my research, which I gave at the OUNL research day in Heerlen, Netherlands, yesterday. I was allowed one slide:
This is (very roughly) what I said:
Mediaeval scholars were faced with the problem that knowledge (doctrine actually), often found in rare and expensive books, needed to be passed from the few to the many. Lecturing was an efficient solution, given the constraints of physics. Because everyone needed to be in the same place at the same time for this to work, we developed schools, universities, classes, courses, timetables and terms and semesters. We built resources like libraries. We created organizational units to manage it all, like faculties and colleges. Above all, for efficiency, we needed rules of behaviour and a natural power dynamic putting the lecturer in control for every moment of the learning activity in a classroom.
Learning (like most things) works best – by far – when learners are intrinsically motivated. It barely works at all when learners are amotivated. Self-determination theory tells us that three things are needed for intrinsic motivation: support for autonomy, competence, and relatedness. The mediaeval solution was good for relatedness, but bad for competence (some found it too challenging, some not challenging enough) and terrible for autonomy. The chance of amotivation is thus very high. Many of our pedagogies, processes, and much of the art of teaching since then have been, in one way or another, attempts to deal with this one central problem. The most common solution to the resulting lack of intrinsic motivation was to apply externally regulated extrinsic motivation – rewards like grades and qualifications, rules of attendance, punishments for non-compliance, etc – which, self-determination theory shows, reliably undermines intrinsic motivation, making things far worse. How crazy is it that we have to force people to do the one thing that makes us most human, exercising a drive to learn that is arguably stronger than sex or even the pursuit of food? Good teachers using well-considered teaching methods can usually overcome many of the issues, at least for many students, much of the time. But that’s what good pedagogy means: it is highly situated in solving the innate problems of in-person teaching.
On the whole, for perfectly understandable reasons (much distance teaching evolved in an in-person context with which it had to interoperate), we have transferred those exact same pedagogies unthinkingly to open, self-paced, self-directed distance learning. ‘Teaching is teaching’, advocates claim, and so they try, as much as possible, to replicate online what they do in a classroom. But the motivational problems faced by distance learners are almost the exact inverse of those of in-person learners. They have lots of autonomy – you can’t really take it away – and can take different paths and pacing to gain competence (e.g. rewinding or skipping videos, re-reading text, augmenting with other resources, etc), but they tend to suffer from reduced relatedness, especially when learning truly independently, in a self-paced modality. Given this mismatch, and the lack of well-evolved support and processes for this very different context, it is not surprising that there is often a high rate of attrition, especially when teachers (lacking the closeness and authority of in-person colleagues) double down on rewards and punishments through grades, even to the extent of rewarding participation, thus making it even worse.
There is no such thing as a disembodied, abstract, decontextualized pedagogy – it is all about orchestrating technologies – so any solution must be as much about building tools and structures as it is about using techniques and methods. They are entirely inseparable. A significant part of my current research is thus an attempt to design native online pedagogies, technologies, and other parts of educational systems (including credentialling) that don’t rely on reward and punishment; that are built for supporting learning in the complex, ever-changing modern world that does exist, rather than for the indoctrination of mediaeval students.
While searching for a movie using Google Search last night I got (for the first time that I can recall) the option to tag the result, as described in this article. I was pleased to discover that the tool they provide for this is virtually identical (albeit with a much slicker and more refined modern interface) to the CoFIND system that underpinned my PhD, which I built over 20 years ago now. You are presented with a list of tags, and can select one or more that describe the movie, and/or suggest your own, effectively creating a multi-dimensional rating system that other users can use to judge what the movie is like. When I rated the movie last night, for instance, popular tags presented to me included ‘terrible acting’, ‘bad writing’, ‘clichéd’, ‘boring’ and so on. Having seen the movie, I agree about the bad writing and clichés – it was at the terrible end of the scale – but actually think most of the acting was fairly good, and it was not very boring. What is interestingly different about this, compared with other tagging systems currently available, is that this kind of tag is fuzzy – it represents a value statement about the movie that exists on a continuum, not a simple categorization. The sorting algorithm for the list of tags presented to you appears (like my original CoFIND) to be based mainly on simple popularity, though it is possible that (like CoFIND) it uses other metrics like tag age and perhaps even a user model as well. It’s vastly more useful and powerful than the typical thumbs-up/thumbs-down that Google normally provides. The feature has sadly not reappeared on subsequent movie searches, so I am guessing that Google is either still testing it, or trying to build up a sufficient base of recommendations by occasionally showing it to people before opening it up to everyone.
Just in case Google or anyone else has tried to patent this, and to assert my prior art, you can find a description and screenshots (p183 and p184) of my original CoFIND system in chapter 6 of my PhD thesis, as well as in many papers before and since, not to mention in a fair few blog posts. It’s out there in the public domain for anyone to use. The interface of my system was, even by the standards of the day, pretty awful and not even a fraction as good as the one provided by Google, but those were different times: it did work in exactly the same way, though. As I developed it further, the interface actually became much worse. Over the course of a few years I experimented with quite a range of methods to get and display ratings/tags, including an ill-conceived Likert scale as well as a much more successful early use of tag clouds, all of which added complexity and reduced usability. Some of these later systems are described and discussed in my PhD too.
In its final, refactored, and heavily evolved form, which postdates my PhD by several years, a version of CoFIND (last modified 2007) is actually still available, one that almost reverts to the Google-style tag selection approach of the original, with the slight tweak that, in CoFIND, you can disagree about any particular tag use (for instance, if you don’t believe it to be inane then you can cast a vote against that tag). The interface remains at least as awful as the original, though, and not a patch on Google’s. The main other differences, apart from interface variations, are that the nomenclature differs (I used ‘qualities’ rather than ‘tags’), and that CoFIND could be used for anything with a URL, not just movies. If you’re interested, click on any resource link in the system and you’ll see my primitive, ugly, frame-based attempt to do very much the same as Google is doing for movies (nb. unless you are logged in you cannot add new qualities but, for authorized users, a field appears at the end that is just like Google’s). Though primarily intended to share and recommend educational resources, CoFIND was very flexible and was, over the years, used for a range of other purposes, from comparing interface designs to discovering images and videos. It was always flaky, ugly, and unscalable, but it worked well enough for my research and teaching purposes, and (because it provides RSS feeds) it was my go-to tool for sharing interesting links right up until 2007, after which I reverted to more conventional but better-maintained tools like the Landing or WordPress.
A little bit of CoFIND background
I’ve written a fair bit about CoFIND, formally and informally, but not for a few years now, so here’s a little background for anyone that might be interested, and to remind myself of a little of what I learned all those years ago in the light of what I know now.
An evolving, self-organizing, social bookmarking tool
I started my PhD research in 1997 with the observation that, even then, there was a vast amount of stuff to learn from that could be easily found on the Web, but that it was really difficult to find good stuff, let alone stuff that was actually useful to a particular learner at a particular stage in their development. Remember that this was before Google even started, so things were significantly worse then than they are now. Infoseek was as good as it got.
I had also observed that, in any group of learners, people would find different things and, between them, discover a much larger range of useful resources than any one learner (or teacher) could do alone, a fact that I use in my teaching to this day. These would likely be better (and, as it turned out, actually were) than what a teacher could find alone because, though individual learners might be less able to distinguish low from high quality, they would know what worked for them, and sufficient numbers of eyes would weed out the bad stuff, as long as there was a mechanism for it. This was where I came in.
The only such mechanisms widely available at the time were simple rating systems. However, learners have very different learning needs, so I immediately realized that ‘thumbs-up’ or simple Likert scales would not work. This was not about finding the one ‘best’ solution for everyone, but was instead concerned with finding a range of alternatives to fill different ecological niches, and somehow discovering the most useful solution in that niche for a given learner at a given time. My initial idea was to make use of a crowd, not an individual curator, and to employ a process closely akin to natural evolution to kill bad suggestions and promote good ones, in order to create an ecosystem of learning resources rather than a simple database. CoFIND was a series of software solutions that explored and extended this initial idea.
CoFIND was, on the face of it, what would eventually come to be called a social bookmarking system – a means for learners to find and to share Web resources (and, later, other things) with one another, along with a mechanism for other learners to recommend or critique them. It was by no means the first social bookmarking system, but it was certainly not a common genre at the time, and I don’t think such a dedicated system had ever been used in education before (for all such assertions, I stand to be corrected), though other means of sharing links, from simple web pages or wikis or discussion forums to purpose-built teacher-curated tools, were not that uncommon. A lot of my early research involved learning about self-organization and complex systems, in particular focusing on evolution and stigmergy (self-organization through signs left in the environment). As well as the survival-of-the-fittest dynamic, evolution furnished me with many useful concepts that I made good use of, such as the importance of parcellation, the necessity of death, ways to avoid skyhooks, benefits of spandrels, ways to leverage chance (including extinction events), and various approaches to supporting speciation. As a result of learning about stigmergy I independently developed what later came to be known as tag clouds. I don’t believe that mine were the first ever tag clouds – weighted lists of one sort or another had been around for a few years – but, though mine didn’t then use the name, they were likely the first uses of such things in educational software, and almost certainly the first with this particular theoretical model to support them (again, I am happy to be corrected).
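For illustration, the core of a tag cloud of this kind boils down to something as simple as the following sketch (the scaling and pixel sizes are invented for the example; CoFIND’s own weightings were more complex):

```python
# A minimal sketch of the tag-cloud idea: each tag's display size is
# weighted by how often the crowd has used it, so the traces people
# leave behind (stigmergy) shape what stands out to later visitors.

def tag_cloud(tag_counts, min_px=10, max_px=32):
    """Map each tag's use count to a font size, linearly scaled
    between min_px and max_px."""
    lo, hi = min(tag_counts.values()), max(tag_counts.values())
    spread = (hi - lo) or 1  # avoid division by zero when all counts match
    return {
        tag: min_px + (count - lo) * (max_px - min_px) // spread
        for tag, count in tag_counts.items()
    }

print(tag_cloud({"interesting": 42, "dull": 3, "useful": 17}))
# {'interesting': 32, 'dull': 10, 'useful': 17} – bigger means more used
```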
A collaborative filter
The name CoFIND is an acronym for ‘collaborative filter in n-dimensions’. The n dimensions were substantiated through what we (my supervisors and I) called qualities. We went through a long list of possible names for these, and I was drawn for a while to calling them ‘values’, but (unfortunately) we never thought of ‘tags’ because the term was not in common use for this kind of purpose at the time. After a phase of calling them q-tags, I now call qualities by the much more accessible name of ‘fuzzy tags’. Fuzzy tags are not just binary classifications of a topic but tags that describe what we value, or don’t value, in a resource, and how much we value it. While people may sometimes disagree about binary classifications (conventional tags), it is always possible to have different opinions about the application of fuzzy tags: some may find something interesting, for instance, while others may not, and others may feel it to be quite interesting, or incredibly so. Fuzzy tags relate to fuzzy sets, which have a continuum of grades of membership, which is where the name comes from.
Different versions of CoFIND used different ways to establish the fuzziness of a tag – the Likert scale used in a few mid-period versions was my failed attempt to make it explicit, but this was a nightmare for people to actually use. The first versions used the same kind of frequency-based weighting as Google’s movie tags, but that was a bit coarse – I was uncomfortable with the averaging effect and the unbridled Matthew Effect that threatened to keep early tags at the top of the list for all time, which I rather crudely kept in check with a simple age-related weighting that was only boosted when they were used (the unfortunate side effect being that, if a system was not used for a few weeks, all the tags vanished in a huge extinction event, albeit that they could be revived if anyone ever used one of the dead ones again). The final version was a bit in-between, allowing an indefinitely large scale via simple up-down ratings, balanced with an algorithm that included a decaying but renewable novelty weighting that adjusted to the frequency of use of the system as a whole. This still had the peculiar effect of evening out/re-initializing all of the tags over time if no one used the system, but at least it caused fewer catastrophes.
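To make that concrete, here is a minimal sketch of the final version’s general approach, with invented names and parameters (`half_life` and so on); the real algorithm also adjusted the decay to the overall activity of the system, which is omitted here:

```python
import time

# A sketch of up-down ratings combined with a decaying but renewable
# novelty weighting: popularity governs rank, but tags that no one uses
# fade gently towards zero instead of vanishing in one extinction event.

class FuzzyTag:
    def __init__(self, name, now=None):
        self.name = name
        self.votes_for = 0
        self.votes_against = 0
        self.last_used = now if now is not None else time.time()

    def use(self, agree=True, now=None):
        """Record a use of the tag; any use renews its novelty."""
        if agree:
            self.votes_for += 1
        else:
            self.votes_against += 1
        self.last_used = now if now is not None else time.time()

def score(tag, now=None, half_life=7 * 24 * 3600):
    """Net popularity times an exponentially decaying novelty weight."""
    now = now if now is not None else time.time()
    age = now - tag.last_used
    novelty = 0.5 ** (age / half_life)  # halves every week of disuse
    return (tag.votes_for - tag.votes_against) * novelty

# Tags would then be presented in descending score order:
# sorted(tags, key=score, reverse=True)
```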
‘Traditional’ collaborative filters simply discover whether things are likely to be more valued or less valued on a usually implicit single dimension (good-bad, liked-disliked, useful-useless, etc). CoFIND’s qualities/fuzzy tags allowed people to express in what ways they were better or worse – more interesting, less helpful, more complex, less funny, etc, just as Google’s movie tagging allows you to express what you like or dislike about a movie, not just whether you liked it or not. In many tag-based systems, people tend to use quite a few simple tags that are inherently fuzzy (e.g. Flickr photos tagged as ‘beautiful’) but they are seldom differentiated in the software from those that simply classify a resource as fitting a particular category, so they are rarely particularly helpful in finding stuff to help with, say, learning.
I was building CoFIND just as the field of collaborative filtering was coming out of its infancy, so the precise definition of the term had yet to be settled. At the time, a collaborative filter (then usually called an ‘automated collaborative filter’) was simply any system that used prior explicit and/or implicit preferences of a number of previous users (a usually anonymous crowd) to help make better recommendations and/or filter out weaker recommendations for the current users. The PageRank algorithm that still underpins Google Search would perhaps have then been described as a collaborative filter, as was one of its likely inspirations, PHOAKS (People Helping One Another Know Stuff), that mined Usenet newsgroups for links, taking them as an implicit recommendation within the newsgroup topic area. By this definition, CoFIND was in fact a semi-automated collaborative filter that combined explicit preferences with automated matching. Nowadays the term ‘collaborative filter’ tends to only apply to a specific subset of recommender systems that automatically predict future interests by matching individual patterns of behaviour with those of multiple others, whether by item (people who bought this also bought…) or user (people whose past or expressed preferences seem to be like yours also liked…). I think that, if I built CoFIND today, I would simply refer to it more generically as a recommender system, to avoid confusion.
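For contrast, here is a toy illustration of collaborative filtering in that modern, narrower sense – item-based recommendation from co-occurrence in user histories – which is emphatically not how CoFIND worked; all names and data are invented:

```python
from collections import defaultdict

# Item-based collaborative filtering in miniature:
# "people who liked this also liked..."

def co_occurrence(histories):
    """histories: dict of user -> set of liked items.
    Counts how often each pair of items is liked together."""
    counts = defaultdict(lambda: defaultdict(int))
    for items in histories.values():
        for a in items:
            for b in items:
                if a != b:
                    counts[a][b] += 1
    return counts

def recommend(item, counts, n=3):
    """Items most often co-liked with the given item."""
    ranked = sorted(counts[item].items(), key=lambda kv: kv[1], reverse=True)
    return [other for other, _ in ranked[:n]]

histories = {
    "u1": {"movieA", "movieB", "movieC"},
    "u2": {"movieA", "movieB"},
    "u3": {"movieB", "movieC"},
}
print(recommend("movieA", co_occurrence(histories)))  # ['movieB', 'movieC']
```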
Disembodied user models
Rather than a collaborative filter, back in the late 90s Peter Brusilovsky saw CoFIND as a new species of educational adaptive hypermedia, as it was perhaps the first (or at least one of the first) that worked on an open corpus rather than through a closed corpus of linked resources. However, he and I were both puzzled about where to find the user model, which was part of Peter’s definition of adaptive hypermedia. I didn’t feel that it needed one, because users chose the things that mattered to them at runtime. In retrospect, I think that the trick behind CoFIND, and what still distinguishes it from almost all other systems apart from this fairly new Google tool, is that it disembodied and exposed the user model. Qualities were, in essence, the things that would normally be invisibly stored in a user model, but I made them visible, in an extreme variant of what Judy Kay later described as scrutable adaptation. In effect, a learner chose their own learner model at the time they needed it. The reasoning behind doing so was that, for learners, past behaviour is usually a poor predictor of future needs, mainly because 1) learning changes people (so past preferences may have little bearing on future preferences), and 2) learning is driven by a vast number of things other than taste or past actions: we often have a need for it thrust upon us by an extrinsic agency, like a teacher, or a legislative demand for a driving licence, for instance. Qualities (fuzzy tags) allow us to express the current value of something to us, in a form that we can leave behind without a lot of sticky residue, and that future users can use. In fact, later versions did tend to slightly emphasize similar things to those people had added, categorized, or rated (fuzzily tagged) earlier, but this was just a pragmatic attempt to make the system more valuable as a personal bookmark store, and therefore to encourage more use of it, rather than an attempt to build a full-blown collaborative filter in the modern sense of the word.
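A minimal sketch of that idea, with invented names and numbers: the ‘user model’ is nothing more than the set of qualities the learner picks at the moment of need, applied to the crowd’s accumulated fuzzy tags:

```python
# The 'disembodied user model': no stored profile, no history. The learner
# chooses the qualities that matter right now, and resources are ranked
# against that momentary choice. All data here is illustrative.

resources = {
    "http://example.com/intro": {"simple": 5, "detailed": 1, "fun": 3},
    "http://example.com/deep":  {"simple": 1, "detailed": 6, "fun": 1},
}

def rank(resources, chosen_qualities):
    """Rank resources by the crowd's votes for the qualities the learner
    has selected at this moment; nothing about the learner persists."""
    def fit(tag_votes):
        return sum(tag_votes.get(q, 0) for q in chosen_qualities)
    return sorted(resources, key=lambda url: fit(resources[url]), reverse=True)

# A beginner in a hurry might choose:
print(rank(resources, {"simple", "fun"}))  # the intro page comes first
```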
I still believe that, in principle, this is an excellent approach, and I have been a little disappointed that more people have not taken up the idea and improved on it. The big and, at the time, insurmountable obstacles that I hit were 1) that it demands a lot of its users to provide both tags and resources, with little obvious personal benefit, so it is unlikely to get a lot of use, 2) that the cold-start problem that affects most collaborative filters (it relies on many users to be useful, but no one will use it until it is useful) is magnified exponentially by every one of those n dimensions, so it demands a very large number of users, and 3) that it is fiendishly hard to represent the complex ecological niches effectively in an interface, making the cognitive load unusably high. Google seems to have made good progress on the last point (an evolution enabled by improved web standards and browsers, combined with a simplification of the process, which together are enough to reduce the cognitive load by a sizeable amount), and has more than sufficient users to cope with the first and second points, at least with regard to movie recommendations. It remains hard to see how this would work in an educational setting in anything less than the largest of MOOCs or the most passionately focused of user bases. However, I would love to see Google extend this mechanism to OERs, courses, and other educational resources, from Quora answers to Khan Academy tutorials, because they do have the numbers, and it would work well. For the same reasons, it would also be great to see it applied to something like StackExchange or similar large-scale systems (Reddit perhaps) where people go to seek solutions to learning problems. I doubt that I will build a new version of CoFIND as such, but the ideas behind it should live on, I think, and it’s great to see them back on a system as big as Google Search, even if it is still only experimental and, so far, just used to recommend movies.
Along with quite a few people that I know, I am amazed that I stuck it out for over 3 years. I was a most reluctant Chair in the first place, because I’d been in middle management roles before and knew much of what to expect. It’s really not my kind of thing at all. Ideologically and temperamentally I loathe hierarchies and, if I have to be in one at all, I’d rather be at the top or at the bottom. However, with the help of some cajoling, I eventually convinced myself that being a Chair is essentially much the same as being a teacher, which is an activity that I both enjoy and can mostly do reasonably well. Like a teacher (at least one that does the job well), the job of a Chair is to help nurture a learning community, and to make it possible for those in that community to achieve what they most want to achieve with as few obstacles as possible. Like teaching, it is not at all about telling, but about listening, supporting, and helping others to orchestrate the process for themselves: not so much about leadership as followership, about being a supportive friend. It’s a bit about nudging and inspiring, too, and about sharing the excitement of discovery and growth with other people. It’s a bit about challenging people to be who they want to be, collectively and individually. It’s a bit about solving problems, a bit about being a shoulder to cry on, a bit about being a punchbag for those needing to let off steam, an arbiter in disputes. It could be fun. And I could always give it up after a few months if it didn’t work out. That was what I convinced myself.
On the bright side, I don’t think that I broke anything vital. I did help a couple of good things to happen, and I think that most of my staff were reasonably happy and empowered, a few of them more than before. One or two were probably less happy. But, in the grand scheme of it all, I left things much the same as or a little better than I found them, despite often strenuous efforts to bring about far more exciting changes. My tenure as Chair was, on the whole, not great, but not terrible. I have been wondering a bit about why that happened, and what I could or should have done differently, which is what the next part of this post is about.
Authority vs influence, responsibility vs power
One of my most notable discoveries (more accurately, rediscoveries) is that authority and responsibility barely, if at all, correlate with power and influence. In fact, for a middle management role like this, the precise inverse is true. One of the strange paradoxes of being in a position of more responsibility and authority has been that, in many ways, I feel that I’ve actually had considerably less capacity to bring about change, or to control my own life, than I had as a plain old professor. It’s just possible that I may have overused the joke about a Chair being the one everyone gets to sit on, but it resonated with me. And this is not to contradict Uncle Ben’s sage advice to Spiderman – it may be true that with great power comes great responsibility, but that doesn’t mean that with great responsibility comes great power.
Partly the problem was just the myriad small but draining demands that had to be met throughout the course of a typical day (most of which were insufferably tedious and mostly mindless bureaucratic tasks that anyone else could have done at least as well), as well as having to attend many more meetings, and to engage in a few much lengthier tasks like workload planning. It wore me down. I put to one side a lot of things that were important to me, but that didn’t contribute to my role, because there were too few chunks of uninterrupted time to do them. Blogging and sharing on social media, for instance.
Partly it was because I felt that my role was primarily to support those that reported to me – I had to do their bidding much more than they had to do mine. Instead of doing what I would intrinsically wish to do, much of the time I was trying to do what those that I supervised required of me. This was not just a result of my own views on leadership. I think a lot of it would have affected most people in the same position.
Partly it was because I often felt (with a little external reinforcement) that I must shut up and/or toe the line because I represented the School or the Dean or the University. Being the ‘face’ of the school meant that I often felt obliged to try to represent the opinions and demands of others, even when I disagreed with them. Often, I had to present a collective agenda, or that of an individual higher up the foodchain, rather than my own, whether or not I found it dull, mistaken, or pointless. Also, being a Chair puts you in some sensitive situations where a wrong step can easily lead to litigation, grievance proceedings, or (worse) very unhappy people. I’m not naturally tactful or taciturn, to say the least, so this was tricky at times. I sometimes stayed quiet when I might otherwise have spoken out.
The upshot of it is that, as a Chair, I was directly responsible both to my Dean and to the people I supervised (not to mention more or less directly to students, visitors, admins, tech staff, VPAs, etc, etc), and I consequently felt that I had very little control over my own life at all. Admittedly it was at least partly due to my very intentional approach to the role, but I think similar issues would emerge no matter what leadership style I had adopted. There’s a surprising amount of liberty in being at the bottom of a hierarchy, at least when (like all academics) you are expected – nay, actually required – to be creative, self-starting, and largely autonomous in your work. Academic freedom is a wonderful thing, and some of it is subdued when you move a little way up the scale.
There have been plentiful compensations, of course. I wouldn’t have stayed this long if it had been uniformly awful. Being a Chair made some connections easier to make, within and beyond the university, and has helped me get to know my colleagues a lot better. And I have some great colleagues: it would have been much harder to manage had I not had such a friendly, supportive, smart, creative, willing, and capable team to work with. I solved or at least made fair progress on a few problems, none huge but all annoying, and helped to lay the groundwork for some ongoing improvements. There were opportunities for creativity here and there. I will miss some of the ways I could help shape our values and systems simply thanks to being a Chair, rather than having to actually work at it. I’ll miss being the default person people came to with interesting ideas. I’ll miss the very small but not trivial stipend. I’ll miss being involved by default in most decisions that affect the school. I’ll miss the kudos. I’ll miss being a formal hub in a network, albeit a small one.
Not quite like teaching
In most ways I was right about the job being much like teaching. Most of the skills, techniques, goals, and patterns are very similar, but there’s one big difference that I had not thought enough about. On the whole, most actual teachers engage with learners over a fairly fixed period, or at least for a fixed project, and there is a clear beginning, middle, and end, with well defined rituals, rules, and processes to mark their passage. This is even true to an extent of more open forms of teaching like apprenticeship and mentorship. Although this in some ways relates to any kind of project, the fact that people, working together in a social group, are both the agents and the objects of change makes it fairly distinctive. I can’t think of many other human activities that are particularly similar to teaching in this regard, apart from perhaps some team sports or, especially, performing arts.
To be a teacher without a specific purpose in mind is a surprisingly different kind of activity, like producing an improvised play that has no script, no plot, no beginning, and no end. Although a teacher is responsible to their students, much as I was responsible to my staff, the responsibility is tightly delimited in time and in scope, so it remains quite manageable, for the most part. In retrospect, I think I should have planned it better. I probably should have set more distinct goals, milestones, tasks, sub-projects, etc. I should have planned for a very clear and intentional end, and set much firmer boundaries. It would not have been easy, though, as many goals emerged over the years, a lot changed when we got our new (and much upgraded) administration, and a lot depended on serendipity and opportunism. I had, at first, no idea how long I would stick with the role. Until quite some time into it, I had only a limited idea about what changes I might even be allowed to accomplish (not much, as it happens, with no budget, a freeze on course development, diminishing staff numbers, the need to fit faculty plans, etc). It might have been difficult to plan too far ahead, though it would have been really useful to have had a map showing the directions we might have gone and the limits of the territory. I think there may be useful lessons to be learned from this about support for self-directed lifelong learning.
Lessons for learning and teaching
A curse of institutional learning can be the many scales of rigid structure it imposes, which too often take agency away from learners and limit support for diversity. However, a good map of the journey ahead also supports an individual learner’s agency, even if all that they are given is the equivalent of a bus route, showing only the fixed paths their learning will take. I have long grappled with the tensions and trade-offs between surfing the adjacent possible and following a planned learning path. I spent a lot of time in the late 1990s and early 2000s designing online systems that leveraged the crowd to allow learners to help one another to learn, but most of them only helped with finding what to do next, or with solving a current problem, not with charting a whole journey. Figuring out an effective way to plan ahead without sacrificing learner control was one of the big outstanding research problems left to be solved when I finished my PhD (in self-organized learning in networks) very many moons ago, and it still is.
There are lots of ineffective ways that I and others have tried, of course. Obvious approaches like matching paths through collaborative filtering or similar techniques are a dead end: there are way too many extraneous variables to confound it, and way too much variation in start and end points to cater for effectively, even if you start with a huge dataset. This is not to mention the blind-leading-the-blind issues, the fact that learning changes people (so past activity poorly predicts future behaviour), and the fact that there is often a narrative context that assumes specific prior activities have occurred and known future activities will follow. Using ontologies is even worse, because the knowledge map of a subject developed by subject experts is seldom if ever the best map for learning, and may be among the worst.
The most promising approaches I have seen, and that I had a doctoral student working on myself until he had to give up in the mid 2000s, mine the plans of many experts (e.g. by looking at syllabuses) to identify common paths and branches for a particular subject, combining them with whatever other information can be gleaned to come up with a good direction for a specific learner and learning need (see the sketch after this paragraph). However, there are plenty of issues with that, too, not least of which is the fact that institutional teaching assumes a very distinctive context, and suffers from a great many constraints (from having to be squashed into a standardized length to fitting preferred teaching patterns and schedules), that learners unhindered by such arbitrary concerns would neither want nor need. Many syllabuses are actually thoughtlessly copied from the same templates (e.g. from a professional association model syllabus), or textbooks, and may be awful in the same ways. And, again, narrative matters. If you took a chunk out of one of my courses and inserted it somewhere else it would often change its meaning and value utterly.
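As a very rough sketch of that mining idea, assuming each syllabus is just an ordered list of topics (everything here – the data, the counting scheme, the function names – is invented for illustration, not the system my student built):

```python
from collections import defaultdict

# Mine several experts' syllabuses for a common ordering: count how often
# one topic precedes another, then order topics by how often they 'win'.

syllabuses = [
    ["variables", "loops", "functions", "recursion"],
    ["variables", "functions", "loops", "recursion"],
    ["variables", "loops", "recursion", "functions"],
]

def precedence_counts(syllabuses):
    """Count how often each topic appears before each other topic."""
    counts = defaultdict(int)
    for topics in syllabuses:
        for i, earlier in enumerate(topics):
            for later in topics[i + 1:]:
                counts[(earlier, later)] += 1
    return counts

def suggest_path(syllabuses):
    """Order topics by how often they precede others across syllabuses."""
    counts = precedence_counts(syllabuses)
    topics = {t for s in syllabuses for t in s}
    def wins(topic):
        return sum(n for (a, _), n in counts.items() if a == topic)
    return sorted(topics, key=wins, reverse=True)

print(suggest_path(syllabuses))
# ['variables', 'loops', 'functions', 'recursion']
```

Even this toy version hints at the real problems: it inherits whatever template-copying and institutional constraints shaped the source syllabuses, and it has no notion of the narrative context that gives each path its meaning.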
This is a problem I would dearly love to solve. Though I stand by my teaching approaches, one of the biggest perennial complaints about the tools and methods I tend to use is that it is easy to feel lost, especially if the helping hands of others are not around when needed. There are always at least a few students who would, as a matter of principle, rather be told what to do, how to do it, and where to go next. The majority would prefer to work in an environment that avoids unnecessary decisions, such as where to upload a file, that have little to do with what they are trying to learn. My role (and that of my tutors, and the design of my courses) is to help them through all that, to relieve them of their dependency on being told what to do, and to help them at least understand why things are done the way they are done. However, that can result in quite inconsistent experiences if I or my tutors drop the ball for a moment. It can be hard for people who have been taught, often over decades, that teaching is telling, and that learning can reliably be accomplished by following a set of teacher-determined steps, to be set adrift to figure things out in their own ways.
It is made far worse by the looming threat of grades that, though eliminated in my teaching itself, still lie in wait at the end of the path as extrinsic targets. Students often find it hard to know in advance how they will meet the criteria, or even whether they have met them when they reach the end. I can and do tell them all of this, of course, usually repeatedly and in many ways and using many media, but the fact that at least some remain puzzled just proves the point: teaching is not telling. Again, a lot of manual social intervention is necessary. But that leads to the issue that following one of my courses demands a big leap of faith (mainly in me) that it will turn out OK in the end. It usually takes effort and time to build such trust, which is costly for all concerned, and is easily lost with a careless word or a missed message. It would be really useful for my students to have a better map that allows them to plan detours and take more alternative transit options for themselves, especially with overlays to show recommended routes, warnings of steep hills and traffic, and real-time information about the whereabouts of people on their network and points of interest along the way. It would, of course, also be really handy to have a big ‘you are here’ label. I would have really liked such a map when I started out as Chair.
Leaving the Chair role behind still feels a little like stepping off a boat after a rough voyage, and either the land or my legs feel weird, I’m not sure which. As my balance returns, I am much looking forward to catching up with things I put to one side over the past 3 years. I’m happy to be getting back to doing more of what I do best, and I hope to be once more sharing more of my discoveries and cogitations in posts like this. It’s easier to move around with your feet on the ground than when you are sitting on a chair.
Blogs have evolved a bit over the past 20 years or so, and diversified. The always terrific Ben Werdmuller here makes the distinction between thinkpieces (what I tend to think of as vaguely equivalent to keynote presentations at a conference, less than a journal article, but carefully composed and intended as a ‘publication’) and weblogging (kind of what I am doing here when I bookmark interesting things I have been reading, or simply a diary of thoughts and observations). Among the surprisingly large number of good points that he makes in such a short post is that a weblog is best seen as a single evolving entity, not as a bunch of individual posts:
“Blogging is distinct from journalism or formal writing: you jot down your thoughts and hit “publish”. And then you move on. There isn’t an editorial process, and mistakes are an accepted part of the game. It’s raw.
A consequence of this frequent, short posting is that the product isn’t a single post: it’s the weblog itself. Your website becomes a single stream of consciousness, where one post can build on another. The body of knowledge that develops is a reflection of your identity; a database of thoughts that you’ve put out into the world.
This is in contrast to a series of thinkpieces, which are individual articles that live by themselves. With a thinkpiece, you’re writing an editorial; with a blog, you’re writing the book of you, and how you think.”
This is a good distinction. I also think that, especially in the posts of popular bloggers like Ben, the blog also comprises the comments, trackbacks, and pings that develop around it, as well as tweets, pins, curations, and connections made in other social media. Ideas evolve in the web of commentary and become part of the thing itself. The post is a catalyst and attractor, but it is only part of the whole, at least when it is popular enough to attract commentary.
This distributed and cooperative literary style can also be seen in other forms of interactive publication and dialogue – a Slashdot or Reddit thread, for instance, can sometimes be an incredibly rich source of knowledge, as can dialogue around a thinkpiece, or (less commonly) the comments section of online newspaper articles. What makes the latter less commonly edifying is that their social form tends to be that of the untarnished set, perhaps with a little human editorial work to weed out the more evil or stupid comments: basically, what matters is the topic, not the person. Untarnished sets are a magnet for trolls, and their impersonal nature, which obscures the individual, can lead to flaming, stupidity, and extremes of ill-informed opinion that crowd out the good stuff. Sites like Slashdot, StackExchange, and Reddit are also mostly set-based, but they use the crowd and an algorithm (a collective) to modulate the results, usually far more effectively than human editors, as well as to provide shape and structure to dialogues, so that they become useful and informative. At least, they do when they work: none are close to perfect (though Slashdot, when used well, is closer than the rest, because its algorithms and processes are far more evolved and far more complex, and individuals have far more control over the modulation), but the results can often be amazingly rich.
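The algorithmic part of such crowd-plus-algorithm moderation can be surprisingly simple at heart. As a toy sketch (the scoring here is invented, not Slashdot’s or Reddit’s actual algorithms), reader votes might be combined with a time decay so that good comments surface, and bad ones sink, without a human editor:

```python
# Invented crowd-moderation scoring: net votes, damped by age so that
# fresh dialogue is not buried under old high-scoring comments.

def comment_score(upvotes, downvotes, age_seconds, half_life=12 * 3600):
    decay = 0.5 ** (age_seconds / half_life)  # halves every 12 hours
    return (upvotes - downvotes) * decay

comments = [
    {"text": "insightful", "up": 40, "down": 2, "age": 6 * 3600},
    {"text": "flamebait", "up": 3, "down": 25, "age": 3600},
]
comments.sort(key=lambda c: comment_score(c["up"], c["down"], c["age"]),
              reverse=True)
print([c["text"] for c in comments])  # ['insightful', 'flamebait']
```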
Blogs, though, tend to develop the social form of a network, with the blogger(s) at the centre. It’s a more intimate dialogue, more personal, yet also more public as they are almost always out in the open web, demanding no rituals of joining in order to participate, no membership, no commitment other than to the person writing the blog. Unlike dedicated social networks there is no exclusion, no pressure to engage, no ulterior motives of platforms trying to drive engagement, less trite phatic dialogue, more purpose, far greater ownership and control. There are plenty of exceptions that prove the rule and plenty of ways this egalitarian structure can be subverted (I have to clean out a lot of spam from my own blogs, for instance) but, as a tendency, it makes blogs still very relevant and valuable, and may go some way to explaining why around a quarter of all websites now run on WordPress, the archetypal blogging platform.
Earlier today I responded to a prospective student who was, amongst other things, seeking advice on strategies for success on a couple of our self-paced programming courses. My response was just a stream of consciousness off the top of my head but I think it might be useful to others. Here, then, with some very light editing to remove references to specific courses, are a few fairly random thoughts on how to succeed on a self-paced online programming course (and, for the most part, other courses) at Athabasca University. In no particular order:
Try to make sure that people close to you know what you are doing and, ideally, are supportive. Other people can really help, not just for the mechanical stuff but for the emotional support. Online learning, especially the self-paced form we use, can feel a bit isolating at times, but there are lots of ways to close the gap and they aren’t all found in the course materials and processes. Find support wherever you can.
Make a schedule and try to keep to it, but don’t blame yourself if your deadlines slip a bit here and there – just adjust the plan. The really important thing is that you should feel in control of the process. Having such control is one of the huge benefits of our way of teaching, but you need to take ownership of the process yourself in order to experience the benefits.
If the course provides forums or other social engagement try to proactively engage in them. Again, other people really help.
You will have way more freedom than those in traditional classrooms, who have to follow a teacher simply because of the nature of physics. However, that freedom is a two-edged sword as you can sometimes be swamped with choices and not know which way to go. If you are unsure, don’t be afraid to ask for help. But do take advantage of the freedom. Set your own goals. Look for the things that excite you and explore further. Take breaks if you are getting tired. Play. Take control of the learning process and enjoy the ride.
Enjoy the challenges. Sometimes it will be hard, and you should expect that, especially in programming courses like these. Programming can be very frustrating at times – after 35 years of programming I can still spend days on a problem that turns out to involve a misplaced semi-colon! Accept that, and accept that even the most intractable problems will eventually be solved (and it is a wonderful feeling when you do finally get it to work). Make time to sleep on it. If you’re stuck, ask for help.
Get your work/life/learning balance right. Be realistic in your aspirations and expect to spend many hours a week on this, but make sure you make time to get away from it.
Keep a learning journal, a reflective diary of what you have done and how you have addressed the struggles, even if the course itself doesn’t ask for one. There are few more effective ways to consolidate and connect your learning than to reflect on it, and it can help to mark your progress: good to read when your motivation is flagging.
Get used to waiting for responses and find other things to learn in the meantime. Don’t stop learning because you are waiting – move on to something else, practice something you have already done, or reflect on what you have been doing so far.
Programming is a performance skill that demands constant and repeated practice. You just need to do it, get it wrong, do it again, and again, and again, until it feels like second nature. In many ways it is like learning a musical instrument or maybe even driving. It’s not something you can learn simply by reading or by being told, you really have to immerse yourself in doing it. Make up your own challenges if you run out of things to do.
Don’t just limit yourself to what we provide. Find forums and communities with appropriate interests. I am a big fan of StackOverflow.com for help and inspiration from others, though relevant subreddits can be useful and there are many other sites and systems dedicated to programming. Find one or two that make sense to you. Again, other people can really help.
Online learning can be great fun as long as you are aware of the big differences, primarily relating to control and personal agency. Our role is to provide a bit of structure and a supportive environment to enable you to learn, rather than to tell you stuff and make you do things, which can be disconcerting at first if you are used to traditional classroom learning. This puts more pressure on you, and more onus on you to organize and manage your own learning, but don’t ever forget that you are never really alone – we are here to help.
In summary, I think it really comes down to three big things, all of which are really about motivation, and all of which are quite different when learning online compared to face-to-face:
Autonomy – you are in control, but you must take responsibility for your own learning. You can always delegate control to us (or others) when the going gets hard or choices are hard to make, but you are always free to take it back again, and there will be no one standing over you making you do stuff apart from yourself.
Competence – there are few things more satisfying than being able to do more today than you could do yesterday. We provide some challenges and we try to keep them difficult-but-achievable at every stage along the way, but it is a great idea for you to also seek your own challenges, to play, to explore, to discover, especially if the challenges we offer are too difficult or too boring. Reflection can help a lot with this, as a means to recognize what, how, and why you have learned.
Relatedness – never forget the importance of other people. You don’t have to interact with them if you don’t want to do so (that’s another freedom we offer), but it is at the very least helpful to think about how you belong in our community, your own community, and the broader community of learners and programmers, and how what and how you are learning can affect others (directly or indirectly).
This advice is by no means comprehensive! If you have other ideas or advice, or things that have worked for you, or things that you disagree with, do feel free to share them in the comments.
I had the pleasure to gatecrash the HCI 2017 conference in Vancouver today, which gave me the chance to see Dr Ali Dewan present three excellent papers in a row (two with his name on them) on a variety of themes, as well as a great paper written and presented by one of our students, Miao-Han Chang.
Both did superb jobs of presenting to a receptive crowd. Ali got particular acclaim from the audience for the first work he presented (Combinatorial Auction based Mechanism Design for Course Offering Determination by Anton Vassiliev, Fuhua Lin & M. Ali Akber Dewan) for its broad applicability in many areas beyond scheduling courses.
Athabasca, and especially the School of Computing and Information Systems, has made a great showing at this prestigious conference, with contributions not just from Ali and Miao-Han, but also from Oscar (Fuhua) Lin, Dunwei Wen, Maiga Chang and Vive Kumar. Kurt Reifferscheid and Xiaokun Zhang also had a paper in the proceedings but were sadly not able to attend to present it.
Jon and Ali at the Vancouver Conference Centre after Ali’s marathon presentation stint. I detect a look of relief on Ali’s face!
Combinatorial Auction based Mechanism Design for Course Offering Determination Anton Vassiliev, Fuhua Lin, M. Ali Akber Dewan, Athabasca University, Canada
Enhance the Use of Medical Wearables through Meaningful Data Analytics Kurt Reifferscheid, Xiaokun Zhang, Athabasca University, Canada
Classification of Artery and Vein in Retinal Fundus Images Based on the Context-Dependent Features Yang Yan, Changchun Normal University, P.R. China; Dunwei Wen, M. Ali Akber Dewan, Athabasca University, Canada; Wen-Bo Huang, Changchun Normal University, P.R. China
ECG Identification Based on PCA-RPROP Jinrun Yu, Yujuan Si, Xin Liu, Jilin University, P.R. China; Dunwei Wen, Athabasca University, Canada; Tengfei Luo, Jilin University, P.R. China; Liuqi Lang, Zhuhai College of Jilin University, P.R. China
Usability Evaluation Plan for Online Annotation and Student Clustering System – A Tunisian University Case Miao-Han Chang, Athabasca University, Canada; Rita Kuo, New Mexico Institute of Mining and Technology, United States; Fathi Essalmi, University of Kairouan, Tunisia; Maiga Chang, Vive Kumar, Athabasca University, Canada; Hsu-Yang Kung, National Pingtung University of Science and Technology, Taiwan