My learning style

I am a visual, aural, read/write, kinaesthetic, introvert, extravert, sensing, intuitive, analytic, thinking, feeling, judging, perceiving, independent, dependent, collaborative, competitive, participant, avoidant, wholist, analytic, verbalizing, imaging, visualizing, deductive, synthetic, expansive, serialist, holist, field-dependent, field-independent, intrinsically motivated, extrinsically motivated, impulsive, reflexive, convergent, divergent, levelling, sharpening, concrete-sequential, concrete-random, abstract-sequential, abstract-random, assimilating, exploring, adaptive, innovative, reproductive, experiencing, thinking, doing, reflective, directed, self-directed, undirected, application-directed, meaning-directed, deep, surface, strategic, apathetic, elaborative, impulsive, concrete, independent, self-assertive, cerebral,  affective, type 1, type 2, type 3, global, scanning, focusing, physical, logical, social, solitary, musical-rhythmic, interpersonal, intrapersonal, spatial, body, active, common sense, dynamic, imaginative, quadrant 1, quadrant 2, quadrant 3, quadrant 4, theorizing, organizing, humanitarian, legislative, judicial, executive, tactile, pragmatic, versatile learner.

My birth sign is Aquarius, and I was born in the Year of the Rat.

Incidentally…

It appears that 97% of American teachers actually believe in learning styles, by which I mean the belief that there are persistent traits describing how people learn that can be used to determine the best way to teach them. This is despite at least most, if not all, of the many scores of such theories existing somewhere between astrology and fairies in terms of evidence for their relevance or applicability in real life learning. Though there may be ever-shifting conditions under which we may at times prefer one or other of whatever learning styles the theory we like offers – this may be a source of the persisting appeal of the idea – there is no reliable evidence that this is in any way relevant to whether or not we will learn better or worse (whatever we think that means) when offered a learning experience that is tailored to that preference. It’s not by any means for want of trying – countless studies exist, and that’s not counting probably many more that never saw the light of day because they had only null results to report and so were not deemed worthy of publication – so the obvious conclusion to be drawn is that these theories are most likely false.

It wouldn’t be so worrying were it not for the evidence that such beliefs are harmful to learners; even if there were no such evidence, the time, effort, and money put into trying to use them would be far better spent on things that might actually work.

In the extremely unlikely event that it were one day proven that an individual has a persistent style of learning that, when we teach to that style, consistently leads to improved learning (however we measure that), then it would be my duty as a teacher to try to teach them to learn in other ways, because here’s the thing: the real world in which we are and must be lifelong learners doesn’t come neatly packaged in ways that fit your learning style. We can all learn to learn in all the ways that I list above, and then some, and we can all become better and smarter by applying the right strategy at the right time. We therefore need to cultivate as many diverse learning strategies as we can, and learn when to use them. That’s just common sense which, as it happens and surprisingly enough, is itself a learning style, according to the 4MAT model.

Signals, boundaries, and change: how to evolve an information system, and how not to evolve it

[Image: primitive cell development]

For most organizations there tend to be three main reasons to implement an information system:

  1.     to do things the organization couldn’t do before
  2.     to improve things the organization already does (e.g. to make them more efficient/cheaper/better quality/faster/more reliable/etc)
  3.     to meet essential demands (e.g. legislation, keeping existing apps working, etc)

There are other reasons (political, aesthetic, reputational, moral, corruption/bribery/kickbacks, familiarity, etc) but I reckon those are the main ones that matter. They are all very good reasons.

Costs and debts

With each IT solution there will always be costs, both initial and ongoing. Because we are talking about technology, and all technologies evolve to greater complexity over time, the ongoing costs will inevitably escalate. It’s not optional. This is what is commonly described as ‘technical debt’, but that is a horrible misnomer. It is not a debt, but the price we pay for the solutions we need. If we don’t pay it, our IT systems decay and die, starved of their connections with the evolving business and global systems around them. It’s no more of a debt than the need to eat or receive medical care is a debt for living.

Thinking locally, not globally

When money needs to be saved in an organization, senior executives tend to look at the inevitably burgeoning cost of IT and see it as ripe for pruning. IT managers thus tend to be placed under extreme pressure to ‘save’ costs. IT managers might often be relieved about that because they are almost certainly struggling to maintain the customized apps already, unless they have carefully planned for those increased costs over years (few do). Sensibly (from their own local perspective, given what they have been charged with doing), they therefore tend to strip out customizations, then shift to baseline applications, and/or cloud-based services that offer financial savings or, at least, predictable costs, giving the illusion of control. Often, they wind up firing, repurposing, or not renewing contracts for development staff, support staff, and others with deep knowledge of the old tools and systems. This keeps the budget in check so they achieve the goals set for them.

Unfortunately, assuming that the organization continues to need to do what it has been doing up to that point, the unavoidable consequence is that things that computers used to do are now done by people in the workforce instead. When made to perform hard mechanical tasks that computers can and should do, people are invariably far more fallible, slow, inconsistent, and inefficient. Far more. They tend to be reluctant, too. To make things worse, these mundane repetitive tasks take time, and crowd out other, more important things that people need to do, such as the things they were hired for. People tend to get tired, angry, and frustrated when made to do mechanical things over which they have little agency, which reduces productivity much further than simply the time lost in doing them. To make matters even worse, there is inevitably going to be a significant learning curve, during which staff try to figure out how to do the work of machines. This tends to lead to inflated training budgets (usually involving training sessions that, as decades of research show, are rarely very effective and that have to be repeated), time to read documentation, and more time taken out of the working day. Creativity, ingenuity, innovation, problem-solving, and interaction with others all suffer. The organization as a whole consequently winds up losing many times more (usually by orders of magnitude) than they saved on IT costs, though the IT budget now looks healthy again so it is often deemed to be a success. This is like taking the wheels off a car then proudly pointing to the savings in fuel that result. Unfortunately, such general malaises seldom appear in budget reports, and are rarely accounted for at all, because they get lost in the work that everyone is doing. Often, the only visible signs that it has happened are that the organization just gets slower, less efficient, less creative, more prone to mistakes, and less happy. Things start to break, people start to leave, sick days multiply. The reputation of the organization begins to suffer.
 
This is usually the point at which more radical, large-scale changes to the organization are proposed, again usually driven by senior management who (unless they listen very carefully to what the workforce is telling them) may well attribute the problems they are seeing to the wrong causes, like external competition. A common approach to the problem is to impose more austerity, thus delivering the killing blow to an already demoralized workforce. That’s an almost guaranteed disaster. Another common way to tackle it is to take greater risks, made all the more risky thanks to having just converted creative, problem-solving, inquisitive workers into cogs in the machine, in the hope of opening up new sources of revenue or pursuing different goals. When done under pressure, that seldom ends well, though at least it has some chance of success, unlike austerity. This vicious cycle is hard to escape from. I don’t know of any really effective way to deal with it once it has happened.

Thinking in systems

The way to avoid it in the first place is not to kill off and directly replace custom IT solutions with baseline alternatives. There are very good reasons for almost all of those customizations, and those reasons have almost certainly not gone away: all those I mentioned at the start of the post don’t suddenly cease to apply. It is therefore positively stupid to simply remove them without an extremely deep, multifaceted analysis of how they are used and who uses them, and even then with enormous conservatism and care. However, you probably still want to get rid of them eventually anyway because, as well as being an ever-increasing cost, they have probably become increasingly out of line with how the organization and the world around it are evolving. Unless there has been a steady increase in investment in new IT staff (too rare), so much time is probably now spent keeping old systems going that there is no time to work on improvements or new initiatives. Unless more money can be put into maintaining them (a hard sell, though important to try), the trick is not to slash and burn, and definitely not to replace old customized apps with something different and less well-tailored, but to gently evolve towards whatever long-term solution seems sensible, using techniques such as those I describe below. This has a significant cost, too, but it’s not usually as high, and it can be spread over a much longer period.
 

For example…

If you wish to move away from reliance on a heavily customized learning management system to a more flexible and adaptive learning ecosystem made of more manageable pieces, the trick is, first of all, to build connectors into and out of your old system (if they do not already exist), to expose as many discrete services as possible, and then to make use of plugin hooks (or similar) to seamlessly replace existing functions with new ones. The same may well need to be done with the new system, if it does not already work that way. This is the most expensive part, because it normally demands development time, and what is developed will have to be maintained, but it’s worth it. What you are doing, at an abstract level, is creating boundaries around parts that can be treated as distinct (functions, components, objects, services, etc) and making sure that the signals that pass between them can be understood in the same way by subsystems on either side of the boundary.
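
To make that a little more concrete, here is a minimal sketch (in Python, with invented names) of the plugin-hook idea: callers ask a registry for a named capability, and which implementation answers can be swapped without the callers knowing, as long as the signal (the arguments and the shape of what comes back) stays the same on both sides of the boundary.

```python
# A minimal sketch of a plugin-hook boundary. All names are hypothetical;
# real LMS plugin systems are more elaborate, but the principle is the same.

from typing import Callable, Dict, List

_hooks: Dict[str, Callable] = {}

def register_hook(name: str, impl: Callable) -> None:
    """Attach (or replace) the implementation behind a named capability."""
    _hooks[name] = impl

def call_hook(name: str, *args, **kwargs):
    """Invoke whatever currently sits on the far side of the boundary."""
    return _hooks[name](*args, **kwargs)

# Legacy behaviour, wrapped as the default implementation.
def legacy_class_list(course_id: str) -> List[str]:
    return ["student-1", "student-2"]  # stands in for a query to the old LMS database

register_hook("class_list", legacy_class_list)

# Later, a new service can quietly take over, provided the signal stays identical.
def new_class_list(course_id: str) -> List[str]:
    return ["student-1", "student-2", "student-3"]  # stands in for a call to the new system

register_hook("class_list", new_class_list)

print(call_hook("class_list", "COMP650"))  # callers never change
```

The hook is the boundary; the function’s contract is the signal. Everything else on either side is free to change.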

Open industry standards (APIs, protocols, etc) are almost essential here, because apps at both sides of the boundary need to speak the same language. Proprietary APIs are risky: you do not want to start doing this then have a vendor decide to change its API or its terms and conditions. It’s particularly dangerous to do this with proprietary cloud-based services, where you don’t have any control whatsoever over APIs or backends,  and where sudden changes (sometimes without even a notification that they are happening) are commonplace. It’s fine to use containers or virtual machines in the cloud – they can be replaced with alternatives if things go wrong, and can be treated much like applications hosted locally – and it’s fine to use services with very well defined boundaries, with standards-based APIs to channel the signals. It is also fine to build your own, as long as you control both sides of the boundary, though maintenance costs will tend to be higher.  It is not fine to use whole proprietary applications or services in the cloud because you cannot simply replace them with alternatives, and changes are not under your control. Ideally, both old and new systems should be open source so that you are not bound to one provider, you can make any changes you need (if necessary), and you can rely on having ongoing access to older versions if things change too fast.
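
As an illustration of what a standards-based signal can look like in practice, here is a rough sketch of sending an xAPI statement to a learning record store. The endpoint, credentials, and identifiers below are placeholders, but the statement structure and the version header come from the xAPI specification, so any conformant LRS should understand the same signal, whichever vendor provides it.

```python
# A sketch of a standards-based signal crossing a boundary: an xAPI statement
# posted to a learning record store (LRS). Endpoint, credentials, and activity
# IDs are placeholders, not a real deployment.

import requests  # third-party library; pip install requests

LRS_ENDPOINT = "https://lrs.example.edu/xapi/statements"  # hypothetical LRS

statement = {
    "actor": {"mbox": "mailto:learner@example.edu", "name": "A Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://lms.example.edu/courses/COMP650/unit-3",  # hypothetical activity
        "definition": {"name": {"en-US": "Unit 3: systems thinking"}},
    },
}

response = requests.post(
    LRS_ENDPOINT,
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_user", "lrs_password"),  # placeholder credentials
    timeout=10,
)
response.raise_for_status()
print("Statement stored:", response.json())
```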
 
Having done this, you have two main ways to evolve, which you can choose between according to your needs:

  1.  to gradually phase in the new tools you want and phase out the old ones you don’t want in the old system until, like the ship of Theseus, you have replaced the entire thing. This lets you retain your customizations and existing investments (especially in knowledge of those systems) for the longest time, because you can replace the parts that do not rely on them before tackling those that do. Meanwhile, those same fresh tools can start to make their appearance in whatever other new systems you are trying to build, and you can make a graceful, planned transition as and when you are ready. This is particularly useful if there is a great deal of content and learning already embedded in the system, which is invariably the case with LMSs. It means people can mostly continue to work the way they’ve always worked, while slowly learning about and transitioning to a new way of working.
  2.  to make use of some services provided by the old system to power the new one. For instance, if you have a well-established means of generating class lists or collecting assessment data that involves a lot of custom code, you can offer that as a service from the old tool to your new tool, rather than reimplementing it afresh straight away or requiring users to manually replace the custom functions with fallible human work. Eventually, once the time is right to move and you can afford it, you can then simply replace it with a different service, with virtually no disruption to anyone. This is better when you want a clean break, especially useful when the new system does things that the original could not do, though it still normally allows simultaneous operation for a while if needed, as well as the option to fall back to the old system in the event of a disaster. (A rough sketch of this routing pattern follows the list.)
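
Here is the promised sketch of that routing pattern (all names are invented): a thin facade decides, capability by capability, whether the legacy system or its replacement answers, so migrating a capability becomes a one-line change that callers never notice.

```python
# A strangler-fig style facade, sketched with hypothetical services. The routing
# table is the only thing that changes as capabilities migrate from old to new.

from typing import Callable, Dict, List

def legacy_class_list(course_id: str) -> List[str]:
    """Stands in for the heavily customized class-list code in the old LMS."""
    return ["student-1", "student-2"]

def legacy_assessment_export(course_id: str) -> List[dict]:
    """Stands in for the old assessment-data collection service."""
    return [{"student": "student-1", "grade": 82}]

def new_assessment_export(course_id: str) -> List[dict]:
    """The eventual replacement, swapped in when the time is right."""
    return [{"student": "student-1", "grade": 82}]

ROUTES: Dict[str, Callable] = {
    "class_list": legacy_class_list,              # still served by the old system
    "assessment_export": new_assessment_export,   # already moved to the new one
}

def fetch(capability: str, course_id: str):
    """Callers never know (or care) which system is behind the boundary."""
    return ROUTES[capability](course_id)

print(fetch("class_list", "COMP650"))
print(fetch("assessment_export", "COMP650"))
```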

There are other hybrid alternatives, such as setting up other systems to link both, so that the systems do not interact directly but via a common intermediary. In the case of an LMS migration, this might be a learning record store (LRS) or student record system, for instance. The general principle, though, is to keep part or all of the old system running simultaneously for however long it is needed, parcellating its tools and services, while slowly transitioning to the new. Of course, this does imply extra cost in the short term, because you now have to manage at least two systems instead of one. However, by phasing it this way you greatly reduce risk, spread costs over a timeframe that you control, and allow for changes in direction (including reversal) along the way, which is always useful. The huge costs you save are those that are hidden from conventional accounting – the time, motivation, and morale of the workforce that uses the system. As a useful bonus, this service-oriented approach to building your systems also allows you to insert other new tools and implement other new ideas with a greatly diminished level of risk, with fewer recurring costs, and without the one-time investment of having to deal with your whole monolithic codebase and data. This is great if you want to experiment with innovations at scale. Once you have properly modularized your system, you can grow it and change it by a process of assembly. It often allows you to offer more control to end users, too: for instance, in our LMS example you might allow individuals to choose between different approaches to a discussion forum, or content presentation, or to insert a research-based component without so many of the risks (security, performance, reliability, etc) normally associated with implementing less well-managed code.

Signals and boundaries

In essence, this is all about signals and boundaries. The idea is to identify and, if they don’t exist, create boundaries between distinct parts of systems, then to focus all your management efforts on the signals that pass across them. As long as the signals remain the same on both sides, what lies on either side of the boundaries can be isolated and replaced when needed. This happens to be the way that natural systems mainly evolve too, from organisms to ecosystems. It has done pretty good service for a billion years or so.
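
In code, a boundary of this kind is simply an explicit contract. The sketch below (hypothetical names again) defines a small interface for the discussion-forum example mentioned earlier; anything that honours the contract can sit behind it, whether it wraps the old monolith, a new service, or an experimental research component.

```python
# The boundary expressed as an explicit contract; implementations are freely
# interchangeable as long as they honour the same signals. Names are invented.

from abc import ABC, abstractmethod
from typing import List

class DiscussionForum(ABC):
    """The boundary: the only signals the rest of the system may rely on."""

    @abstractmethod
    def post(self, author: str, text: str) -> None: ...

    @abstractmethod
    def threads(self) -> List[dict]: ...

class LegacyForumAdapter(DiscussionForum):
    """Wraps whatever the old LMS already does behind the contract."""
    def __init__(self):
        self._threads: List[dict] = []
    def post(self, author, text):
        self._threads.append({"author": author, "text": text})
    def threads(self):
        return self._threads

class ExperimentalForum(DiscussionForum):
    """A research-based alternative that individual users might opt into."""
    def __init__(self):
        self._threads: List[dict] = []
    def post(self, author, text):
        self._threads.append({"author": author, "text": text, "upvotes": 0})
    def threads(self):
        return self._threads

def render(forum: DiscussionForum) -> None:
    # Depends only on the boundary, never on what lies behind it.
    for thread in forum.threads():
        print(thread)

forum = LegacyForumAdapter()
forum.post("alice", "Hello, world")
render(forum)
```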

 
 

Education for life or Education for work? Reflections on the RBC Future Skills Report

Tony Bates extensively referenced this report from the Royal Bank of Canada on Canadian employer demands for skills over the next few years, in his characteristically perceptive keynote at CNIE 2019 last week (it’s also referred to in his most recent blog post). It’s an interesting read. Central to its many findings and recommendations is the claim that the Canadian education system is inadequately designed to cope with these demands and that it needs to change. The report played a big role in Tony’s talk, though his thoughts on appropriate responses to that problem were valid in their own right, and not all were in perfect alignment with the report.

[Image: Tony Bates at CNIE 2019]

The 43-page manifesto (including several pages of not very informative graphics) combines some research findings with copious examples to illustrate its discoveries and various calls to action based on them. I guess not surprisingly for a document intended to ignite, it is often rather hard to tell in any detail how the research itself was conducted. The methodology section is mainly on page 33, but it gives little more than a broad outline of how the main clustering was performed and of the general approach to gathering information. It seems that a lot of work went into it, but it is hard to tell how that work was done.

A novel (-ish) finding: skillset clusters

Perhaps the most distinctive and interesting research discovery in the report is a predictive/descriptive model of skillsets needed in the workplace. By correlating occupations from the federal NOC (National Occupational Classification) with a US Labor Department dataset (O*NET), the researchers abstracted and identified six distinct clusters of skillsets, the possessors of which they characterize as follows (a toy sketch of this general kind of clustering appears after the list):

  • solvers (engineers, architects, big data analysts, etc)
  • providers (vets, musicians, bloggers, etc)
  • facilitators (graphic designers, admin assistants, Uber drivers, etc)
  • technicians (electricians, carpenters, drone assemblers, etc)
  • crafters (fishermen, bakers, couriers, etc)
  • doers (greenhouse workers, cleaners, machine-learning trainers, etc)
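
The report does not say how its clustering was actually performed, so the following is no more than a generic, toy illustration of the idea: represent each occupation as a vector of skill-importance scores (the kind of data O*NET publishes) and let a standard clustering algorithm group similar vectors. The occupations, skills, and numbers below are invented, and the libraries used (numpy, scikit-learn) are assumptions about tooling, not anything the report mentions.

```python
# Toy illustration of clustering occupations by skill profile. The data are
# invented; a real analysis would use the full O*NET skill-importance matrix.

import numpy as np
from sklearn.cluster import KMeans  # third-party; pip install scikit-learn

occupations = ["engineer", "vet", "graphic designer", "electrician", "baker", "cleaner"]
skills = ["complex problem solving", "social perceptiveness", "equipment maintenance"]

# Rows: occupations; columns: importance of each skill (0-1), invented for illustration.
skill_matrix = np.array([
    [0.9, 0.4, 0.3],
    [0.6, 0.9, 0.2],
    [0.5, 0.8, 0.1],
    [0.7, 0.3, 0.9],
    [0.4, 0.5, 0.6],
    [0.2, 0.3, 0.5],
])

# Three clusters for this toy data; the report claims six for the full dataset.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(skill_matrix)

for occupation, label in zip(occupations, labels):
    print(f"{occupation}: cluster {label}")
```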

From this, they make the interesting, if mainly anecdotally supported, assertion that there are clusters of occupations across which these skills can be more easily transferred. For instance, they reckon, a dental assistant is not too far removed from a graphic designer because both are high on the facilitator spectrum (emotional intelligence needed). They do make the disclaimer that, of course, other skills are needed and someone with little visual appreciation might not be a great graphic designer despite being a skilled facilitator. They also note that, with training, education, apprenticeship models, etc, it is perfectly possible to move from one cluster to another, and that many jobs require two or more anyway (mine certainly needs high levels of all six). They also note that social skills are critical, and are equally important in all occupations. So, even if their central supposition is true, it might not be very significant.

There is a somewhat intuitive appeal to this, though I see enormous overlap between all of the clusters and find some of the exemplars and descriptions of the clusters weirdly misplaced: in what sense is a carpenter not a crafter, or a graphic designer not a provider, or an electrician not a solver, for instance? It treads perilously close to the borders of x-literacies – some variants of which come up with quite similar categories – or learning style theories, in its desperate efforts to slot the world into manageable niches regardless of whether there is any point to doing so. The worst of these is the ‘doers’ category, which seems to be a lightly veiled euphemism for ‘unskilled’ (which, as they rightly point out, relates to jobs that are mostly under a great deal of threat). ‘Doing’ is definitely ripe for transfer between jobs because mindless work in any occupation needs pretty much the same lack of skill. My sense is that, though it might be possible to see rough patterns in the data, the categories are mostly very fuzzy and blurred, and could easily be used to label people in very unhelpful ways. It’s interesting from a big picture perspective, but, when you’re applying it to individual human beings, this kind of labelling can be positively dangerous. It could easily lead to a species of the same general-to-specific thinking that caused the death of many airplane pilots prior to the 1950s, until the (obvious but far-reaching) discovery that there is no such thing as an average-sized pilot. You can classify people into all sorts of types, but it is wrong to make any further assumptions about them because you have done so. This is the fundamental mistake made by learning style theorists: you can certainly identify distinct learner types or preferences but that makes no difference whatsoever to how you should actually teach people.

Education as a feeder for the job market

Perhaps the most significant and maybe controversial findings, though, are those leading more directly to recommendations to the educational and training sector, with a very strong emphasis on preparedness for careers ahead. One big thing bothers me in all of this. I am 100% in favour of shifting the emphasis of educational institutions from knowledge acquisition to more fundamental and transferable capabilities: on that the researchers of this report hit the nail on the head. However, I don’t think that the education system should be thought of, primarily, as a feeder for industry or preparation for the workplace. Sure, it’s definitely one important role for education, but I don’t think it’s the dominant one, and it’s very dangerous indeed to make that its main focus to the exclusion of the rest. Education is about learning to be a human in the context of a society; it’s about learning to be part of that culture and at least some of its subcultures (and, ideally, about understanding different cultures). It’s a huge binding force, it’s what makes us smart, individually and collectively, and it is by no means limited to things we learn in institutions or organizations. Given their huge role in shaping how we understand the world,  at the very least media (including social media) should, I think, be included whenever we talk of education. In fact, as Tony noted, the shift away from institutional education is rapid and on a vast scale, bringing many huge benefits, as well as great risks. Outside the institutions designed for the purpose, education is often haphazard, highly prone to abuse, susceptible to mob behaviours, and often deeply harmful (Trump, Brexit, etc being only the most visible tips of a deep malaise). We need better ways of dealing with that, which is an issue that has informed much of my research. But education (whether institutional or otherwise) is for life, not for work.

I believe that education is (and should be) at least partly concerned with passing on what we know, who we have been, who we are, how we behave, what we value, what we share, how we differ, what drives us, how we matter to one another. That is how it becomes a force for societal continuity and cohesion, which is perhaps its most important role (though formal education’s incidental value to the economy, especially through schools, as a means to enable parents to work cannot be overlooked). This doesn’t have to exclude preparation for work: in fact, it cannot.  It is also about preparing people to live in a culture (or cultures), and to continue to learn and develop productively throughout their lives, evolving and enhancing that culture, which cannot be divorced from the tools and technologies (including rituals, norms, rules, methods, artefacts, roles, behaviours, etc) of which the cultures largely consist, including work. Of course we need to be aware of, and incorporate into our teaching, some of the skills and knowledge needed to perform jobs, because that’s part of what makes us who we are. Equally, we need to be pushing the boundaries of knowledge ever outwards to create new tools and technologies (including those of the arts, the humanities, the crafts, literature, and so on, as well as of sciences and devices) because that’s how we evolve. Some – only some – of that will have value to the economy. And we want to nurture creativity, empathy, social skills, communication skills, problem-solving skills, self-management skills, and all those many other things that make our culture what it is and that allow us to operate productively within it, that also happen to be useful workplace skills. But human beings are also much more than their jobs. We need to know how we are governed, the tools needed to manage our lives, the structures of society. We need to understand the complexities of ethical decisions. We need to understand systems, in all their richness. We need to nurture our love of arts, sports, entertainment, family life, the outdoors, the natural and built environment, fine (and not fine) dining, being with friends, talking, thinking, creating stuff, appreciating stuff, and so on. We need to develop taste (of which Hume eloquently wrote hundreds of years ago).  We need to learn to live together. We need to learn to be better people. Such things are (I think) more who we are, and more what our educational systems should focus on, than our productive roles in an economy. The things we value most are, for the most part, seldom our economic contributions to the wealth of our nation, and the wealth of a nation should never be measured in economic terms.  Even those few that love money the most usually love the power it brings even more, and that’s not the same thing as economic prosperity for society. In fact, it is often the very opposite.

I’m not saying economic prosperity is unimportant, by any means: it’s often a prerequisite for much of the rest, and sometimes (though far from consistently) a proxy marker for them. And I’m not saying that there is no innate value in the process of achieving economic prosperity: many jobs are critical to sustaining that quality of life that I reckon matters most, and many jobs actually involve doing the very things we love most. All of this is really important, and educational systems should cater for it. It’s just that future employment should not be thought of as the main purpose driving education systems.

Unfortunately, much of our teaching actually is heavily influenced by the demands of students to be employable, reinforced on all sides by employers, families, and governments, and that tends to lead to a focus on topics, technical skillsets, and subject knowledge, not so much to the exclusion of all the rest, but as the primary framing for it. For instance, HT to Stu Berry and Terry Anderson for drawing my attention to the mandates set by the BC government for its post-secondary institutions, which are a litany of shame, horribly focused on driving economic prosperity and feeding industry, to the exclusion of almost anything else (including learning and teaching, or research for its own sake, or things that enrich us as human beings rather than cogs in an economic machine). This report seems to take the primary role of education as a driver of economic prosperity as just such a given. I guess, being produced by a bank, that’s not too surprising, but it’s worth viewing it with that bias in mind.

And now the good news

What is heartwarming about this report, though, is that employers seem to want (or think they will want) more or less exactly those things that also enrich our society and our personal lives. Look at this fascinating breakdown of the skills employers think they will need in the future (Tony used this in his slides):

[Image: Projected skills demands, from the RBC Future Skills Report]

 

There’s a potential bias due to the research methodology, which I suspect encouraged participants to focus on more general skills, but it’s really interesting to see what comes in the first half and what dwindles into unimportance at the end.

Topping the list are active listening, speaking, critical thinking, comprehension, monitoring, social perceptiveness, coordination, time management, judgement and decision-making, active learning, service orientation, complex problem solving, writing, instructing, persuasion, learning strategies, and so on. These mostly quite abstract skills (in some cases propensities, albeit propensities that can be cultivated) can only emerge within a context, and it is not only possible but necessary to cultivate them in almost any educational intervention in any subject area, so it is not as though they are being ignored in our educational systems. More on that soon. What’s interesting to me is that they are the human things, the things that give us value regardless of economic value. I find it slightly disconcerting that ethical or aesthetic sensibilities didn’t make the list and there’s a surprising lack of mention of physical and mental health but, on the whole, these are life skills more than just work skills.

Conventional education can and often does cultivate these skills. I am pleased to brag that, as a largely unintentional side-effect of what I think teaching in my fields should be about, these are all things I aim to cultivate in my own teaching, often to the virtual exclusion of almost everything else. Sometimes I have worried (a little) that I don’t have very high technical expectations of my students. For instance, my advanced graduate level course in information management provides technical skills in database design and analysis that are, for the most part, not far above high-school level (albeit that many students go far beyond that); my graduate level social computing course demands no programming skills at all (technically, they are optional); my undergraduate introduction to web programming course sometimes leads to limited programming skills that would fail to get them a passing grade in a basic computer science course (though they typically pass mine). However (and it’s a huge HOWEVER) they have a far greater chance to acquire far more of those skills that I believe matter, and (gratifyingly) employers seem to want, than those who focus only on mastery of the tools and techniques. My web programming students produce sites that people might actually want to visit, and they develop a vast range of reflective, critical thinking, complex problem-solving, active learning, judgment, persuasion, social perceptiveness and other skills that are at the top of the list. My information management students get all that, and a deep understanding of the complex, social, situated nature of the information management role, with some notable systems analysis skills (not so much the formal tools, but the ways of understanding and thinking in systems). My social computing students get all that, and come away with deep insights into how the systems and environments we build affect our interactions with one another, and they can be fluent, effective users and managers of such things. All of the successful ones develop social and communication skills, appropriate to the field. Above all, my target is to help students to love learning about the subjects of my courses enough to continue to learn more. For me, a mark of successful teaching is not so much that students have acquired a set of skills and knowledge in a domain but that they can, and actually want to, continue to do so, and that they have learned to think in the right ways to successfully accomplish that. If they have those skills, then it is not that difficult to figure out specific technical skillsets as and when needed. Conveniently, and not because I planned it that way, that happens to be what employers want too.

Employers don’t (much) want science or programming skills: so what?

Even more interesting, perhaps, than the skills employers do want are the skills they do not want, from Operation Monitoring onwards in the list, which are often the primary focus of many of our courses. Ignoring the real nuts and bolts stuff at the very bottom like installation, repairing, maintenance, selection (more on that in a minute), it is fascinating that skills in science, programming, and technology design are hardly wanted at all by most companies, but are massively over-represented in our teaching. The writers of the report do offer the proviso that it is not impossible that new domains will emerge that demand exactly these skills but, right now and for the foreseeable future, that’s not what matters much to most organizations. This doesn’t surprise me at all. It has long been clear that the demand for people that create the foundations is, of course, going to be vastly smaller than the demand for people that build upon them, let alone the vastly greater numbers that make use of what has been built upon them. It’s not that those skills are useless – that’s a million miles from the truth – but that there is a very limited job market for them. Again, I need to emphasize that educators should not be driven by job markets: there is great value in knowing this kind of thing regardless of our ability to apply it directly in our jobs. On the other hand, nor should we be driven by a determination to teach all there is to know about foundations, when what interests people (and employers, as it happens) is what can be done with them. And, in fact, even those building such foundations desperately need to know that too, or the foundations will be elegant but useless. Importantly, those ‘foundational’ skills are actually often anything but, because the emergent structures that arise from them obey utterly different rules to the pieces of which they are made. Knowing how a cell works tells you nothing whatsoever about the function of a heart, let alone how you should behave towards others, because different laws and principles apply at different levels of organization. A sociologist, say, really doesn’t need to know much about brain science, even though our brains probably contribute a lot to our social systems, because it’s the wrong foundation, at the wrong level of detail. Similarly, there is not a lot of value in knowing how CPUs work if your job is to build a website, or a database system supporting organizational processes (it’s not useless, but it’s not very useful so, given limited resources, it makes little sense to focus on it). For almost all occupations (paid or otherwise) that make use of science and technology, it matters vastly more to understand the context of use, at the level of detail that matters, than it does to understand the underlying substructures. This is even true of scientists and technologists themselves: for most scientists, social and business skills will have a far greater effect on their success than fundamental scientific knowledge. But, if students are interested in the underlying principles and technologies on which their systems are based, then of course they should have freedom and support to learn more about them. It’s really interesting stuff, irrespective of market demand. It enriches us. Equally, they should be supported in discovering gothic literature, social psychology, the philosophy of art, the principles of graphic design, wine making, and anything else that matters to them. Education is about learning to be, not just learning to do. Nothing of what we learn is wasted or irrelevant. It all contributes to making us creative, engaged, mutually supportive human beings.

With that in mind, I do wonder a bit about some of the skills at the bottom of the list. It seems to me that all of the bottom four demand – and presuppose – just about all of those in the top 12. At least, they do if they are done well. Similarly for a few others trailing the pack. It is odd that operation monitoring is not much desired, though monitoring is. It is strange that troubleshooting is low in the ranks, but problem-solving is high. You cannot troubleshoot without solving problems. It’s fundamental. I guess it speaks to the idea of transferability and the loss of specificity in roles. My guess is that, in answering the questions of the researchers, employers were hedging their bets a bit and not assuming that specific existing job roles will be needed. But conventional teachers could, with some justification, observe that their students are already acquiring the higher-level, more important skills, through doing the low-level stuff that employers don’t want as much. Though I have no sympathy at all with our collective desire to impose this on our students, I would certainly defend our teaching of things that employers don’t want, at least partly because (in the process) we are actually teaching far more. I would equally defend even the teaching of Latin or ancient Greek (as long as these are chosen by students, never when they are mandated) because the bulk of what students learn is never the skill we claim to be teaching. It’s much like what the late, wonderful, and much lamented Randy Pausch called a head fake – to be teaching one thing of secondary importance while primarily teaching another deeper lesson – except that rather too many teachers tend to be as deceived as their students as to the real purpose and outcomes of their teaching.

Automation and outsourcing

As the report also suggests, it may also be that those skills lower in the ranking tend to be things that can often be outsourced, including (sooner or later) to machines. It’s not so much that the jobs will not be needed, but that they can be either automated or concentrated in an external service provider, reducing the overall job market for them. Yes, this is true. However, again, the methodology may have played a large role in coming to this conclusion. There is a tendency of which we are all somewhat guilty to look at current patterns of change (in this case the trend towards automation and outsourcing) and to assume that they will persist into the future. I’m not so sure.

Outsourcing

Take the stampede to move to the cloud, for instance, which is a clear underlying assumption in at least the undervaluing of programming. We’ve had phases of outsourcing several times before over the past 50 or 60 years of computing history. Cloud outsourcing is only new to the extent that the infrastructure to support it is much cheaper and better established than it was in earlier cycles, and there are smarter technologies available, including many that benefit from scale (e.g. AI, big data). We are currently probably at or near peak Cloud but, even if it has yet to peak, it is just a trend. It might last a little longer than the previous generations (which, of course, never actually went away – it’s just an issue of relative dominance) but it suffers from most of the problems that brought previous outsourcing hype cycles to an end. The loss of in-house knowledge, the dangers of proprietary lock-in, the surrender of control to another entity that has a different (and, inevitably, at some point conflicting) agenda, and so on, are all counter forces that hold outsourcing in check. History and common sense suggest that there will eventually be a reversal of the trend and, indeed, we are seeing it here and there already, with the emergence of private clouds, regional/vertical cloud layers, hybrid clouds, and so on. Big issues of privacy and security are already high on the agendas of many organizations, with an increasing number of governments starting to catch up with legislation that heavily restricts unfettered growth of (especially) US-based hosting, with all the very many very bad implications for privacy that entails. Increasingly, businesses are realizing that they have lost the organizational knowledge and intelligence to effectively control their own systems: decisions that used to be informed by experts are now made by middle-managers with insufficient detailed understanding of the complexities, who are easy prey for cloud companies willing to exploit their ignorance. Equally, they are liable to be outflanked by those who can adapt faster and less uniformly, inasmuch as everyone gets the same tools in the Cloud so there is less to differentiate one user of it from the next. OK, I know that is a sweeping generalization – there are many ways to use cloud resources that do not rely on standard tools and services. We don’t have to buy in to the proprietary SaaS rubbish, and can simply move servers to containers and VMs while retaining control, but the cloud companies are persuasive and keen to lure us in, with offers of reduced costs, higher reliability, and increased, scalable performance that are very enticing to stressed, underfunded CIOs with immediate targets to meet. Right now, cloud providers are riding high and making ridiculously large profits on it, but the same was true of IBM (and its lesser competitors) in the 60s and 70s. They were brought down (though never fully replaced) by a paradigm change that was, for the most part, a direct reaction to the aforementioned problems, plus a few that are less troublesome nowadays, like performance and cost of leased lines. I strongly suspect something similar will happen again in a few years.

Automation and the end of all things we value

Automation – especially through the increased adoption of AI techniques – may be a different matter. It is hard to see that becoming less disruptive, albeit that the reality is and will be much more mundane than the hype, and there will be backlashes. However, I greatly fear that we have a lot of real stupidity yet to come in this. Take education, for instance. Many people whose opinions I otherwise respect are guilty of thinking that teachers can be, to a meaningful extent, replaced by chatbots. They are horribly misguided but, unfortunately, people are already doing it, and claiming success, not just in teaching but in fooling students into believing that they are being taught by a real teacher. You can indeed help people to pass tests through the use of such tools. However, the only thing that tests prove about learning is that you have learned to pass them. That’s not what education is for. As I’ve already suggested, education is really not much to do with the stuff we think we teach. It is about being and becoming human. If we learn to be human from what are, in fact, really very dumb machines with no understanding whatsoever of the words they speak, no caring for us, no awareness of the broader context of what they teach, no values to speak of at all, we will lower the bar for artificial intelligence because we will become so much dumber ourselves. It will be like being taught by an unusually tireless and creepily supportive (because why would you train a system to be otherwise?) person. We should not care for them, and that matters, because caring (both ways) is critical to the relationship that makes learning with others meaningful. But it will be even worse if and when we do start caring for them (remember the Tamagotchi?). When we start caring for soulless machines (I don’t mean ‘soul’ in a religious or transcendent sense), when it starts to matter to us that we are pleasing them, we will learn to look at one another in the same way and, in the process, lose our own souls. A machine, even one that fools us into thinking it is human, makes a very poor role model. Sure, let them handle helpdesk enquiries (and pass them on if they cannot help), let them supplement our real human interactions with useful hints and suggestions, let them support us in the tasks we have to perform, let them mark our tests to double-check we are being consistent: they are good at that kind of thing, and will get better. But please, please, please don’t let them replace teachers.

I am afraid of AI, not because I am bothered by the likelihood of an AGI (artificial general intelligence) superseding our dominant role on the planet: we have at least decades to think about that, and we can and will augment ourselves with dumb-but-sufficient AI to counteract any potential ill effects. The worst outcome of AI in the foreseeable future is that we devalue ourselves, that we mistake the semblance of humanity for humanity itself, that machines will become our role models. We may even think they are better than us, because they will have fewer human foibles and a tireless, on-demand, semblance of caring that we will mistake for being human (a bit like obsequious serving staff seeking tips in a restaurant, but creepier, less transparent, and infinitely patient). Real humans will disappoint us. Bots will be trained to be what their programmers perceive as the best of us, even though we don’t have more than the glimmerings of an idea of what ‘best’ actually means (philosophers continue to struggle with this after thousands of years, and few programmers have even studied philosophy at a basic level). That way the end of humanity lies: slowly, insidiously, barely noticeably at first. Not with a bang but with an Alicebot. Arthur C. Clarke delightfully claimed that any teacher who could be replaced by a machine should be. I fear that we are not smart enough to realize that it is, in fact, very easy to successfully replace a teacher with a machine if you don’t understand the teacher’s true role in the educational machine, and you don’t make massive changes to it. As long as we think of education as the achievement of pre-specified outcomes that we measure using primitive tools like standardized tests, exams, and other inauthentic metrics, chatbots will quite easily supersede us, despite their inadequacies. It is way too easy to mistake the weirdly evolved educational system that we are part of for education itself: we already do so in countless ways. Learning management systems, for instance, are not designed for learning: they are designed to replicate mediaeval classrooms, with all the trimmings, yet they have been embraced by nearly all institutions because they fit the system. AI bots will fit even better. If we do intend to go down this path (and many are doing so already) then please let’s think of these bots as supplemental, first line support, and please let’s make it abundantly clear that they are limited, fixed-purpose mechanisms, not substitutes but supplements that can free us from trivial tasks to let us concentrate on being more human.

Co-ops and placements

The report makes a lot of recommendations, most of which make sense – e.g. lifelong support for learning from governments, focus on softer more flexible skills, focus on adaptability, etc. Notable among these is the suggestion, as one of its calls to action, that all PSE students should engage in some form of  meaningful work-integrated learning placements during their studies. This is something that we have been talking about offering to our program students in computing for some time at Athabasca University, though the demand is low because a large majority of our students are already working while studying, and it is a logistical nightmare to do this across the whole of Canada and much of the rest of the globe. Though some AU programs embed it (nursing, for instance) I’m not sure we will ever get round to it in computing. I do very much agree that co-ops and placements are typically a good idea for (at least) vocationally-oriented students in conventional in-person institutions. I supervised a great many of these (for computing students) at my former university and observed the extremely positive effects it usually had, especially on those taking the more humanistic computing programs like information systems, applied computing, computer studies, and so on. When they came back from their sandwich year (UK terminology), students were nearly always far wiser, far more motivated, and far more capable of studying than the relatively few that skipped the opportunity. Sometimes they were radically transformed – I saw borderline-fail students turn into top performers more than once – but, apart from when things fell apart (not common, but not unheard of), it was nearly always worth far more than at least the previous couple of years of traditional teaching. It was expensive and disruptive to run, demanding a lot from all academic staff and especially from those who had to organize it all, but it was worth it.

But, just because it works in conventional institutions doesn’t mean that it’s a good idea. It’s a technological solution that works because conventional institutions don’t. Let’s step back a bit from this for a moment. Learning in an authentic context, when it is meaningful and relevant to clear and pressing needs, surrounded by all the complexities of real life (notwithstanding that education should buffer some of that, and make the steps less risky or painful), in a community of practice, is a really good idea. Apprenticeship models have thousands of years of successful implementation to prove their worth, and that’s essentially what co-ops or placements achieve, albeit only in a limited (typically 3-month to 1-year) timeframe. It’s even a good idea when the study area and working practices do not coincide, because it allows many more connections to be made in both aspects of life. But why not extend that to all (or almost all) of the process? To an extent, this is what we at Athabasca already do, although it tends to be more the default context than something we take intentional advantage of. Again, my courses are an exception – most of mine (and all to some extent) rely on students having a meaningful context of their own, and give opportunities to integrate work or other interests and study by default. In fact, one of the biggest problems I face in my teaching arises on those rare occasions when students don’t have sufficient aspects of work or leisure that engage them (e.g. prisoners or visiting students from other universities), or work in contexts that cannot be used (e.g. defence workers). I have seen it work for in-person contexts, too: the Teaching Company Scheme in the UK, which later became Knowledge Transfer Partnerships, has been hugely successful over several decades, marrying workplace learning with academic input, usually leading to a highly personalized MSc or MA while offering great benefits to lecturers, employers and students alike. They are fun, but resource-intensive, to supervise. Largely for this reason, in the past it might have been hard to make this scale below graduate levels of learning, but modern technologies – shared workspaces, blogs, portfolio management tools, rich realtime meeting tools, etc, and a more advanced understanding of ways to identify and record competencies – make it far more possible. It seems to me that what we want is not co-ops or placements, but a robust (and, ideally, publicly funded) approach to integrating academic and in-context learning. Already, a lot of my graduate students and a few undergraduates are funded by their employers, working on our courses at the same time as doing their existing jobs, which seems to benefit all concerned, so there’s clearly a demand. And it’s not just an option for vocational learning. Though (working in computing) much of my teaching does have a vocational grounding, if not a vocational focus, I have come across students elsewhere across the university who are doing far less obviously job-related studies with the support of their employers. In fact, it is often a much better idea for students to learn stuff that is not directly applicable to their workplace, because the boundary-crossing it entails does more to improve a vast range of the most important skills identified in the RBC report – creativity, communication, critical thinking, problem solving, judgement, listening, reading, and so on. Good employers see the value in that.

Conclusions

Though this is a long post, I have only cherry-picked a few of the many interesting issues that emerge from the report. Still, I think there are some consistent general themes in my reactions to it:

1: it’s not about money

Firstly, the notion that educational systems should be primarily thought of as feeders for industry is dangerous nonsense. Our educational systems are preparation for life (in society and its cultures), and work is only a part of that. Preparedness for work is better seen as a side-effect of education, not its purpose. And education is definitely not the best vehicle for driving economic prosperity. The teaching profession is almost entirely populated by extremely smart, capable people who (especially in relation to their qualifications) are earning relatively little money. To cap it all, we often work longer hours, in poorer conditions, than many of our similarly capable industry colleagues. Though a fair living wage is, of course, very important to us, and we get justly upset when offered unfair wages or worsening conditions, we don’t work for pay: we are paid for our work. Notwithstanding that a lack of money is a very bad thing indeed and should be avoided like the plague, we do so precisely because we think there are some things – common things – that are much more important than money (this may also partly account for a liberal bias in the profession, though it also helps that the average IQ of teachers is a bit above the norm). And, whether explicitly or otherwise, this is inevitably part of what we teach. Education is not primarily about learning a set of skills and facts: it’s about learning to be, and the examples that teachers set, the way they model roles, cannot help but come laden with their own values. Even if we scrupulously tried to avoid it, the fact of our existence serves as a prime example of people who put money relatively low on their list of priorities. If we have an influence (and I hope we do) we therefore encourage people to value things other than a large wage packet. So, if you are going to college or school in the hope of learning to make loads of money, you’re probably making the wrong choice. Find a rich person instead and learn from them.

2: it is about integrating education and the rest of our lives

Despite its relentless focus on improving the economy, I think this report is fundamentally right in most of the suggestions it makes about education, though it doesn’t go far enough. It is not so much that we should focus on job-related skills (whatever they might be) but that we should integrate education with and throughout our lives. The notion of taking someone out of their life context and inflicting on them a bunch of knowledge-acquisition tasks with inauthentic, teacher-led criteria for success, not to mention subjugating them to teacher control over all that they do, is plain dumb. There may be odd occasions where retreating from the world and separating education from it is worthwhile, but they are few and far between, and can be catered for on an individual needs basis.

Our educational processes evolved in a very different context, where the primary intent was to teach dogma to the many by the few, and where physical constraints (rarity of books/reading skills, limited availability of scholars, limits of physical spaces) made lecture forms in dedicated spaces appropriate solutions to those particular technical problems. Later, education evolved to focus more on creating a pliant and capable workforce to meet the needs of employers and the military, which happened to fit fairly well with the one-to-many top-down-control models devised to teach divinity etc. Though those days are mostly ended, we still retain strong echoes of these roles in much of our structure and processes – our pedagogies are still deeply rooted in the need to learn specific stuff, dictated and directed by others, in this weird, artificial context. Somehow along the way (in part due to higher education, at least, formerly being a scarce commodity) we turned into filters and gatekeepers for employment purposes.  But, today, we are trying to solve different problems. Modern education has tended to tread a shifting path between supporting individual development and improving our societies: these should be mutually supportive roles though different educational systems tend to put more emphasis on one than the other. With that in mind, it no longer makes sense to routinely (in fact almost universally) take people out of their physical, social, or work context to learn stuff. There are times that it helps or may even be necessary: when we need access to expensive shared resources (that mediaeval problem again), for instance, or when we need to work with in-person communities (hard to teach acting unless you have an opportunity to act with other actors, for example), or when it might be notably dangerous to practice in the real world (though virtual simulations can help). But, on the whole, we can learn far better when we learn in a real world context, where we can put our learning directly into useful practice, where it has value to us and those around us. Community matters immensely – for learning, for motivation, for diversity of ideas, for belonging, for connection, etc – and one of the greatest values in traditional education is that it provides a ready-made social context. We should not throw the baby out with the bathwater and it is important to sustain such communities, online or in-person. But it does not have to be, and should not ever be, the only social context, and it does not need to be the main social context for learning. Pleasingly, in his own excellent keynote at CNIE, our president Neil Fassina made some very similar points. I think that Athabasca is well on course towards a much brighter future.

3: what we teach is not what you learn

Finally, the whole education system (especially in higher education) is one gigantic head fake. By and large, the subjects we teach are of relatively minor significance. We teach ways of thinking, we teach values, we teach a few facts and skills, but mainly we teach a way of being. For all that, what you actually learn is something else entirely, and it is different from what every one of your co-learners learns, because 1) you are your main and most important teacher and 2) you are surrounded by others (in person, in artefacts they create, online) who also teach you. We need to embrace that far more than we typically do. We need to acknowledge and celebrate the differences in every single learner, not teach stuff at them in the vain belief that what we have to tell them matters more than what they want to learn, or that somehow (contrary to all evidence) everyone comes in and leaves knowing the same stuff. We’ve got to stop rewarding compliance and punishing non-compliance.

What you learn changes you. It makes you able to see things differently, do things differently, make new connections. Anything you learn. There is no such thing as useless learning. It is, though, certainly possible to learn harmful things – misconceptions, falsehoods, blind beliefs, and so on – so the most important skill is to distinguish those from the things that are helpful (not necessarily true – helpful). On the whole, I don’t like approaches to teaching that make you learn stuff faster (though they can be very useful when solving some kinds of problem) because they devalue the journey. I like approaches that help you learn better: deeper, more connected, more transformative. This doesn’t mean that the RBC report is wrong in criticizing our current educational systems, but it is wrong to believe that the answer is to stop (or reduce) teaching the stuff that employers don’t think is needed. Learners should learn whatever they want or need to learn, whenever they need to do so, and educational institutions (collectively) should support that. But that also doesn’t mean teachers should teach what learners (or employers, or governments) think they should teach, because 1) we always teach more than that, whether we want to or not, and it all has value and 2) none of these entities are our customers. The heartbreaking thing is that some of the lessons most of us unintentionally teach – from mindless capitulation to authority, to the terrible approaches to learning nurtured by exams, to the truly awful beliefs that people do not like/are not able to learn certain subjects or skills – are firmly in the harmful category. It does mean that we need to be more aware of the hidden lessons, and of what our students are actually learning from them. We need to design our teaching in ways that allow them to make it relevant and meaningful in their lives. We need to design it so that every student can apply their learning to things that matter to them; we need to help them to reflect and connect, to adopt approaches, attitudes, and values that they can constantly use throughout their lives, in the workplace or not. We need to help them to see what they have learned in a broader social context, to pay it forward and spread their learning contagiously, both in and out of the classroom (or wherever they are doing their learning). We need to be partners and collaborators in learning, not providers. If we do that then, even if we are teaching COBOL, Italian Renaissance poetry, or some other ‘useless’ subject, we will be doing what employers seem to want and need. More importantly, we will be enriching lives, whether or not we make people fiscally richer.

The Myth of 'Learning Styles'

A straightforward journalistic article from the Atlantic that does a decent job of explaining how and why learning styles theories simply don’t work, with a particular focus on VARK. The takeaways are that, yes, people do often prefer to learn in different ways but, no, accommodating their preferences has no effect on comprehension or recall, and, even if it did, it would be doing a disservice to learners to do so because life ain’t like that (with darker implications of teaching people to believe they need something they don’t, thereby actually reducing their capacity to learn). We’ve known this for a really long time. I am still shocked that (at least as recently as 5 years ago) up to 90% of teachers actually believe in the learning styles myth. Of all the people that should know better, teachers are pretty much at the top of the list.

The article points to a good range of recent reliable sources, including:

Another Nail in the Coffin for Learning Styles? Disparities among Undergraduate Anatomy Students’ Study Strategies, Class Performance, and Reported VARK Learning Styles

Learning style, judgements of learning, and learning of verbal and visual information.

The Scientific Status of Learning Styles Theories

Matching Learning Style to Instructional Method: Effects on Comprehension

Proviso: there’s nothing wrong, and everything right, about thinking of different ways to enable people to learn stuff, and learning styles theories all encourage people to do that. As a design tool that serves as a reminder that there is seldom one best way to teach anything, I’m all in favour of anything that gets the creative juices flowing and that allows learning designers to take and apply different perspectives. This is the sort of thing that increases engagement, interest, and time on task. Even if it is barking mad or positively evil, as long as we don’t let on why we are doing it, any way we find to do this is probably fine. I could, for instance, imagine ways that a ‘men are from Mars, women are from Venus’ perspective could, despite the very unsavoury nonsense behind it, actually result in some diverse approaches to teaching that would benefit everyone, as long as we didn’t try to teach men one way and women another, of course, and as long as we didn’t let on that we had designed it with these thoughts in mind. Substitute whatever demographic divide, bias, bigotry, or preference you like – religion, weight, politics, sexuality, race, drinking habits, liking for cats or dogs, general level of fitness, whatever. As long as you keep it to yourself and only let it affect how you design your teaching, then do what works for you. There’s a slippery slope to be avoided here, and some complexities to be wary of, especially when it changes the content and intended outcomes – if, say, you chose religion as your discriminator, that does not mean you should teach both evolution and intelligent design, though there might be value in remembering that there might be a religious demographic that won’t readily accept any amount of evidence or argument, so you might want to think about how best to help them, just as you should think about how best to help people with any disability. Let’s just keep it that way, though – a dirty little in-house secret about how we design our teaching by thinking (wrongly or not) about differences between our learners – and stop inflicting the stupid notion on our students.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/4308469/the-myth-of-learning-styles

E-Learn 2019, Call for Proposals: Due July 8

https://www.aace.org/conf/elearn/call/

Call for Proposals: Due July 8

E-learn has been running since 1996 (originally under the name of WebNet) and is a great conference for researchers into online learning working in higher education and related fields. At its peak it attracted about a thousand delegates and, though numbers have tailed off a bit in recent years, it remains big enough to ensure diversity and quality but small enough that you can get to meet many of the people there. This year the conference is in one of my favourite locations to visit, New Orleans, and it runs from November 4-7, 2019.

What I like most about the conference is its diversity. It typically attracts a great range from more technical to more educationally focused researchers, with a great spread of experience from student researchers to the most famous in the field. There are also usually a lot of other interested and interesting people involved, as this pie chart suggests:

[Image: E-Learn attendees pie chart]

If you want to submit a paper or poster, acceptance rates are about par for the course: it’s certainly not trivially easy to get a paper accepted, but it’s not fiendishly hard.

Disclaimer: I’m on the Executive Committee, have co-chaired it a couple of times, and have only missed about three of the conferences in the last 20 years so I’m obviously a fan!

Originally posted at: https://landing.athabascau.ca/bookmarks/view/4213561/e-learn-2019-call-for-proposals-due-july-8

In-person vs online teaching

This is roughly the content of my 3 minute pitch to explain (some of) my research, that I gave at the OUNL research day in Heerlen, Netherlands yesterday. I was allowed one slide:

[Slide: in-person vs self-paced online learning]

This is (very roughly) what I said:

Mediaeval scholars were faced with the problem that knowledge (doctrine actually), often found in rare and expensive books, needed to be passed from the few to the many. Lecturing was an efficient solution, given the constraints of physics. Because everyone needed to be in the same place at the same time for this to work, we developed schools, universities, classes, courses, timetables and terms and semesters. We built resources like libraries.  We created organizational units to manage it all, like faculties and colleges. Above all,  for efficiency, we needed rules of behaviour and a natural power dynamic putting the lecturer in control for every moment of the learning activity in a classroom.

Learning (like most things) works best – by far – when learners are intrinsically motivated. It barely works at all when learners are amotivated. Self-determination theory tells us that three things are needed for intrinsic motivation: support for autonomy, competence, and relatedness. The mediaeval solution was good for relatedness, but bad for competence (some found it too challenging, some not challenging enough) and terrible for autonomy. The chance of amotivation is thus very high. Many of our pedagogies, processes, and much of the art of teaching since then have been, in one way or another, attempts to deal with this one central problem. The most common solution to the lack of intrinsic motivation that resulted was to apply externally regulated extrinsic motivation – rewards like grades and qualifications, rules of attendance, punishments for non-compliance, etc – which, self-determination theory shows, is infallibly fatal to intrinsic motivation, making things far worse. How crazy is it that we have to force people to do the one thing that makes us most human, satisfying a drive to learn that is arguably stronger than sex or even the pursuit of food? Good teachers using well-considered teaching methods can usually overcome many of the issues, at least for many students much of the time. But that’s what good pedagogy means. It is highly situated in solving the innate problems of in-person teaching.

On the whole, for perfectly understandable reasons (much distance teaching evolved in an in-person context with which it had to interoperate), we have transferred those exact same pedagogies unthinkingly to open, self-paced, self-directed distance learning. ‘Teaching is teaching’, advocates claim, and so they try, as much as possible, to replicate online what they do in a classroom. But the motivational problems faced by distance learners are almost the exact inverse of those of in-person learners. They have lots of autonomy – you can’t really take it away – and can take different paths and pacing to gain competence (e.g. rewinding or skipping videos, re-reading text, augmenting with other resources, etc), but tend to suffer from reduced relatedness, especially when learning truly independently, in a self-paced modality. Given this mismatch and the lack of well-evolved support and processes for this very different context, it is not surprising that there is often a high rate of attrition, especially when teachers (lacking the closeness and authority of their in-person colleagues) double down on rewards and punishments through grades, even to the extent of rewarding participation, thus making it even worse.

There is no such thing as a disembodied, abstract, decontextualized pedagogy – it is all about orchestrating technologies – so any solution must be as much about building tools and structures as it is about using techniques and methods. They are entirely inseparable. A significant part of my current research is thus an attempt to design native online pedagogies, technologies, and other parts of educational systems (including credentialling) that don’t rely on reward and punishment; that are built for supporting learning in the complex, ever-changing modern world that does exist, rather than for the indoctrination of mediaeval students.

George Siemens says he was wrong about networks. Well, not exactly wrong…

A characteristically smart and articulate post from George Siemens explaining why a view of the universe as nothing but networks all the way down – a view that he has supported in the past – is not sufficient to explain everything that matters. As George says, a systems view tends to be way more useful. It is important to observe that this is not in any way incompatible with a network-oriented view because systems are entirely about networks, network theories play a very important role in modelling and understanding systems and, in fact, network theories are just a subset of systems theories anyway so, as George points out in this essay, he was not actually wrong in the past. It’s just that (perhaps – I present a counter view at the end) he could have been more right.

Not just one theory but many

There’s a great deal of diversity in systems theories, crossing many disciplinary areas, with different standards for rigour and explanatory power, and that’s part of their strength. They offer ways of talking about systems that are appropriate to their context. What is common to all systems theories is that they are anti-reductive, focused on relationships and interactions between things over time more than their constitutive elements, but there’s a host of different ways that broad approach can be applied. Personally, I am particularly drawn to the field of self-organizing systems, which means an interest in the general areas of cybernetics, complex adaptive systems, autopoietic systems, signal/boundary systems, evolution, stigmergy, swarm intelligence, networks, etc, but there are plenty of other helpful kinds of systems theory. I have found Michael Moore’s much higher-level systems view of education, for example, to be really useful in my research and teaching, and approaches like system dynamics can be very helpful to understand why systems that surround us constantly fail. Systems models can, for instance, help to explain why incentive systems reduce motivation, or how – more generally – systems (once created) develop their own goals independently of and often in direct opposition to the people within them or their creators.

Systems views are not always presented as such. One of the most life-changing books I have ever read, for instance, is Jane Jacobs’s The Death and Life of Great American Cities, which presents a rich and poetic systems view of what makes a city area thrive or fail, and has been hugely influential in driving the development of many cities around the world, though there’s far more to it than that. There’s barely a network to be found within it, and it doesn’t draw on any formal systems theories, but it certainly contains one. Though others have developed more network-oriented systems theories out of it (notably Christopher Alexander and Bill Hillier), the power of Jacobs’s systems theory is far more to do with the richness of her storytelling and her complex, multi-layered, deeply humane analysis of human systems. The level of detailed observation and depth of insight is similar in many ways to that of Charles Darwin, another preeminent and seminal systems thinker who did not label himself as such. Neither Darwin nor Jacobs simply shows that everything affects everything else – they show why and how it does so, in wondrous and complex detail.

Some systems can be useful but rather boring, especially when they are closed. Computers, for instance, are systems of interoperating parts and layers. They are complicated, for sure, but not (in themselves) complex. This makes them, as systems in themselves, a bit dull. Sitting by itself, notwithstanding interesting ways it can be programmed to adapt, a computer is essentially a closed system that behaves in predictable ways. However, the field of information systems is much more about human systems than computers, the field of computing as a whole is rich in invention, and the field of software development using computers is fully open and truly complex, full of unexpected and emergent behaviours, combining ideas, fields, groups, individuals, and models from all over the place. Connected together, they can do very interesting and sometimes unexpected things. Computers are (mostly) boring systems, but they are part of, and are used to enact or contain, many much richer systems.
Similar things can be said of legal systems, accounting systems, most machines, many organizations, and so on. It’s true of many systems in nature, too, such as metabolic pathways or neural connections. In themselves, they are (I simplify a little) systems of interacting mechanical processes following a set of simple rules. Things only get really interesting when you look at them as subsystems of other systems, interacting with other subsystems, whether creating something planned or emergent. Of course, it’s not just about things with lots of parts. Even simple, uncomplicated systems can be complex: the classic three body problem is a good illustration of this. It’s about how those parts are configured, and their openness to energy or information from the environment.

More is different

Systems theories that go beyond mere networks are necessary because more is different, as P. W. Anderson famously demonstrated way back in 1972, and new laws, principles, patterns, and concerns occur at many different scales. Such laws and regularities are inherently unpredictable from the behaviour of their parts (see Kauffman’s Reinventing the Sacred or Humanity in a Creative Universe or even his older Investigations for a solid theoretical explanation of why this must be – it’s all about adjacent possibles) so, even if you can posit a theory that consists of networks from bottom to top, there’s limited value to be gained from doing so. It’s like string theory – if true, it probably does explain nearly everything in the whole universe, but it’s not a lot of help with your shopping or filing your tax returns. Network theory strips a lot of what is meaningful from the system it models. There is a great deal that can be learned about learning from an understanding of the dynamics of networks, but they are of limited value in helping you to, say, construct a learning plan for yourself or others, or figure out why you are procrastinating about your homework right now.

Signals and boundaries

I tweeted a rather opaque response to George’s announcement of his article, in which I mentioned signals and boundaries. That’s worth unpicking a little. The central concept comes from John Holland’s brilliant eponymous (and, sadly, last) book, Signals and Boundaries. For any system that we choose to look at, we must choose which are the boundaries that matter to us, examine the signals that pass between what is at either side of those boundaries, and consider what transformations occur within the boundaries (not necessarily how they occur), in order to understand it at an appropriate level. Though there are some consistent patterns at every scale (that Holland brilliantly reveals) we come to very different understandings depending on the boundaries we choose: the rules, the signals, the behaviour of the systems, etc are, qualitatively, profoundly different. For instance, consider the difference between anatomy and metabolic pathways in cells. You can’t have the former without the latter, but there is no conceivable way you could deduce the function or form of the heart by looking at enzymes in cells (of course, you could learn useful things about how the heart works by looking at metabolic pathways because they are subsystems or, a little more accurately, sub-sub-sub-subsystems of the heart). Choosing boundaries is a process of black-boxing wherein, once a significant boundary is chosen, we treat the internal part as a kind of ‘program’ that processes the signals it receives and evokes responses. This is what I think George is getting at when he suggests that what makes systems different is that they embody rules. This is smarter than a simpler network view in a variety of ways. It makes it easier to focus on levels that matter, using context-appropriate vocabularies and meanings, in whatever combinations are significant; it allows us to more easily combine different scales/granularities of boundaried entity; it allows us to think more deeply about qualitative as well as quantitative differences in the signals; it allows us to think about not just networks but sets, or organizational structures, or whatever is appropriate; and (arguably most usefully) it makes it far simpler to think about processes (the ‘programs’) that drive it, and how they affect one another. It does all this without losing any of the value of looking at it as a network.

Connectivism as a systems theory

In fact, though George is a little dismissive of his most famous and widely cited article on the subject, a lot of this kind of systems perspective appears within it. He talks of ecologies (archetypal systems) quite a bit, explicitly mentions systems theories as playing a foundational role in setting the agenda for the theory he expounds, spends a fair bit of time on chaos theory and self-organization (both explicitly systems fields involving systems theories), and even, as he discusses the implications towards the end, explicitly refers to connectivism as “a systems view of learning”. Though not explicitly mentioned, the theory also draws quite a bit on the field of socially distributed cognition, which is essentially a systems view of knowledge. So, though George may have meandered off the path a bit along the way and got caught up in trying to make everything look like a network from time to time, the version of Connectivism that most people adopt is based on this paper, which is and has always been about a systems theory, rather than a network theory. Even its central message supports this view. Of the eight most oft-quoted principles at the centre of the essay, only three are explicitly about connections. The rest are concerned with processes, axioms, and attitudes that relate to what’s inside the black boxes (the network nodes). These might, charitably, be seen as supportive of networks, but are far more to do with how to learn in and as part of a self-organizing complex adaptive system rather than how the network itself embodies learning. That’s a big part of what makes it useful: we need such theories to make sense of the changing context in which we find ourselves, in which older theories (especially those embedded in a view of education as a formal process involving a teacher) seem inadequate. It also prevents it from being a complete theory of learning – there are other theories and models that take a different systems view (or even, perhaps, a non-systems view) that may be more appropriate, at least in combination with it, to some circumstances – but that’s no bad thing. In fact, it is kind of implied in one of the central axioms of the theory itself: “Learning and knowledge rests in diversity of opinions“.  This has been one of mine.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/4119883/george-siemens-says-he-was-wrong-about-networks-well-not-exactly-wrong

Premature optimism

Despite careful future-proofing and a structure that was deliberately built to evolve over time so that it would remain current, my elderly Social Computing course has pretty much reached the end of its useful life, so I have started to revise and refactor it. While doing so, I came across this 2007 article that I had bookmarked for the course. The headline of the article was “Checkmate? MySpace, Bebo and SixApart To Join Google OpenSocial (confirmed).” The answer to the question posed in the headline was, as we now know, a very resounding NO.

Involving every social network of note, apart from Facebook, in a consortium, as well as having the support of many other huge industry players, OpenSocial seemed to me, and to almost everyone else in the field, to be the beginning of something amazing. At the time, I blogged about this article thus:

This is probably the biggest thing ever to happen in the world of social software.
Wow.
MySpace, Bebo and SixApart are in on the deal that already includes Orkut, Salesforce, LinkedIn, Ning, Hi5, Plaxo, Friendster, Viadeo and Oracle (yes, Oracle). As the article says, checkmate for Facebook, but it can’t be long before they join in.
I can hardly wait to start playing.
The range of possible educational uses is staggeringly large. Maybe not as big as the invention of the Web itself, but potentially as transforming. I think that we have just seen the start of a new era.

I couldn’t have been more wrong. Facebook did not join in at all and it was anything but killed off by the consortium. In fact, the Evil One took the precise opposite course, ruthlessly locking more and more and more in, sucking in more and more from other systems, and giving less and less back, until they pretty much owned the space. Facebook always fought dirtier, with a more single minded focus on one and only one thing (building a social network), regardless of the consequences or moral imperatives, than anyone else. Of the list of prominent OpenSocial members back in 2007, a few just about limp along, but many are dead, including Google’s own Orkut and Google Plus. Friendster died, had a brief revamp as a gaming network then died again. MySpace and Hi5 limp along miserably. Ning did what seemed to be a really bad thing by completely closing its (originally beautiful, elegant, crowd-sourced, evolving) system and converting it into a paid social media hosting service, but it still survives in that role. SixApart is an empty shell company doing nothing of note. The rest barely register, then or now.

A fair number of social software companies fought back by trying to use Facebook’s own evil strategies, mostly without much success – I was particularly sad that Twitter slowly removed most means to use its data in any meaningful or useful way outside the application. A number of them survived by being what Terry Anderson and I call sets rather than networks, thereby avoiding head-on confrontation by not being perceived as social networks. Though many had a social network, that wasn’t their primary role. Many of these remain hugely successful – indeed, YouTube remains perhaps the only centralized social medium to resoundingly beat Facebook in user numbers, though Wikipedia comes pretty close by some measures. Reddit continues to thrive largely unaffected by the evil giant, and the set-oriented face of Twitter continues to do pretty well, even if its social networking side waned long ago. Though too seldom recognized as social media in commentaries on the subject, the success of Amazon and eBay is largely down to their clever use of social software: they are vast, and support vast communities. There are also lots of systems that are doing comfortably well in their non-competing niches, such as Pinterest, Tumblr, Medium, and many others. A few vertical social networks, with specific foci (LinkedIn being by far the biggest and most successful example), continue to do very well: in my own fields of education and technology, Academia.edu, GitHub, ResearchGate and StackOverflow are doing fine, for instance (though they tend to be quite set-oriented, which helps), and various MOOC providers and MOOC-ish providers like the Khan Academy are thriving through the use of social software. A few did too well and got taken over by bigger companies, including by the evil cousins Facebook (WhatsApp and Instagram) and Microsoft (GitHub and LinkedIn). This is tragic.

OpenSocial is not exactly dead: there is still a group working on it within the W3C and there are a few implementations still available in minor social systems like MySpace and Hi5. However, it has not progressed significantly since 2013, and Apache closed down Shindig, the main reference implementation, several years ago. Other related standards, like OpenID, OAuth, and even the venerable RSS, are still widely used but slowly decaying, albeit with an enormous momentum that means they won’t be easy to kill for a long time to come.

Those of us who continue to dream of an open, distributed, social Web appear to lurk around the periphery. Mastodon continues to grow, Solid looks promising, though I would certainly not put any money on either of them coming to challenge the monoliths in any serious way. However, the biggest distributed social web system, by orders of magnitude, is sitting in front of us, hiding in plain view, and it is still growing very successfully. The open source WordPress powers about a third of all websites, and is rich in social features right out of the box. To put that in perspective, there are as many or more sites running on WordPress than there are sites of any description running on any of the major web servers (obviously, WordPress can run on any major web server). There are plugins to support most distributed standards and protocols, from WebMention to Solid, and much in between, and it supports basics like RSS, in-site collaboration, and public comments (including trackbacks and pingbacks) out of the box. There’s plentiful support, mainly through plugins or manual embedding, for mashups. Sure, it is a long way from the vision that many of us have of a fully distributed open social web, but it does much of the job well enough. And, yes, many WordPress sites are not particularly, if at all, social, but the majority of them have at least some support for engagement, and virtually all are an active part of the Web itself, linking to one another and other sites in many ways, including blogrolls and embedded feeds. A vast number, again most likely the majority, provide hooks or feeds into more than one of the monoliths which, though bad in itself, sneaks in distribution via the back door because the posts themselves remain independent and not locked in. My own site feeds its posts into Twitter, for example, and has the usual set of links that allow its pages to be shared via various social media. It also automatically sucks in a few of my RSS feeds from the Landing and elsewhere, so it is already a mashup. A fair number use BuddyPress, which explicitly overlays a social network onto the system, though they are all at the shallowest end of the long tail.

WordPress itself is inelegant from a software perspective (and it is built on similarly inelegant systems like PHP and MySQL) but, like the tools it is made from, it is very well evolved indeed. It just works. It is one of the most manageable server-based apps I have ever used and demands little skill of its users for authoring. It has an incredibly rich developer community that provides tens of thousands of themes and plugins that can be used to make it do almost anything. Its hybrid open/proprietary model is about the most sensible I have seen. Automattic (the company responsible for it) do try to sell you their hosted services, especially through the JetPack bundle of plugins that it comes with by default, but not objectionably, and they very actively support the open source code and its self-hosting users. Most of their services provide an acceptable free tier and, of course, you don’t have to use them at all as there are many alternatives available. Automattic make their money through providing high quality, convenient tools and services at fair prices, not by locking you in. The plugin marketplace is wide open, with a good balance of open source and commercial options that again provide plentiful choice, and there’s a lot more to be found outside the plugin site hosted by WordPress themselves. And, yes, there are even a couple of OpenSocial plugins, albeit feebly implementing a tiny subset of the standard. It’s not the future we all dreamed of, but it’s as good as it gets right now.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/4013155/premature-optimism

A blast from my past: Google reimplements CoFIND

While searching for a movie using Google Search last night I got (for the first time that I can recall) the option to tag the result, as described in this article. I was pleased to discover that the tool they provide for this is virtually identical (albeit with a much slicker and more refined modern interface overhaul) to the CoFIND system that underpinned my PhD, which I built over 20 years ago now. You are presented with a list of tags, and can select one or more that describe the movie, and/or suggest your own, effectively creating a multi-dimensional rating system that other users can use to judge what the movie is like. When I rated the movie last night, for instance, popular tags presented to me included ‘terrible acting’, ‘bad writing’, ‘clichéd’, ‘boring’ and so on. Having seen the movie, I agree about the bad writing and clichés – it was at the terrible end of the scale – but actually think most of the acting was fairly good, and it was not very boring. What is interestingly different about this, compared with other tagging systems currently available, is that this kind of tag is fuzzy – it represents a value statement about the movie that exists on a continuum, not a simple categorization. The sorting algorithm for the list of tags presented to you appears (like my original CoFIND) to be based mainly on simple popularity, though it is possible that (like CoFIND) it uses other metrics like tag age and perhaps even a user model as well. It’s vastly more useful and powerful than the typical thumbs-up/thumbs-down that Google normally provides. The feature has sadly not reappeared on subsequent movie searches, so I am guessing that Google is either still testing it or trying to build up a sufficient base of recommendations by occasionally showing it to people, before opening it up to everyone.

Just in case Google or anyone else has tried to patent this, and to assert my prior art, you can find a description and screenshots (pp. 183 and 184) of my original CoFIND system in chapter 6 of my PhD thesis as well as in many papers before and since, not to mention in a fair few blog posts. It’s out there in the public domain for anyone to use. The interface of my system was, even by the standards of the day, pretty awful and not even a fraction as good as the one provided by Google, but those were different times: it did work in exactly the same way, though. As I developed it further, the interface actually became much worse. Over the course of a few years I experimented with quite a range of methods to get and display ratings/tags, including an ill-conceived Likert scale as well as a much more successful early use of tag clouds, all of which added complexity and reduced usability. Some of these later systems are described and discussed in my PhD too. In its final, refactored, and heavily evolved form, which postdates my PhD by several years, a version of CoFIND (last modified 2007) is actually still available; it almost reverts to the Google-style tag selection approach of the original, with the slight tweak that, in CoFIND, you can disagree about any particular tag use (for instance, if you don’t believe it to be inane then you can cast a vote against that tag). The interface remains at least as awful as the original, though, and not a patch on Google’s. The main other differences, apart from interface variations, are that the nomenclature differs (I used ‘qualities’ rather than ‘tags’), and that CoFIND could be used for anything with a URL, not just movies. If you’re interested, click on any resource link in the system and you’ll see my primitive, ugly, frame-based attempt to do very much the same as Google is doing for movies (nb. unless you are logged in you cannot add new qualities but, for authorized users, a field appears at the end that is just like Google’s). Though primarily intended to share and recommend educational resources, CoFIND was very flexible and was, over the years, used for a range of other purposes from comparing interface designs to discovering images and videos. It was always flaky, ugly, and unscalable, but it worked well enough for my research and teaching purposes, and (because it provides RSS feeds) it was my go-to tool for sharing interesting links right up until 2007, after which I reverted to more conventional but better-maintained tools like the Landing or WordPress.

A little bit of CoFIND background

I’ve written a fair bit about CoFIND, formally and informally, but not for a few years now, so here’s a little background for anyone that might be interested, and to remind myself of a little of what I learned all those years ago in the light of what I know now.

An evolving, self-organizing, social bookmarking tool

I started my PhD research in 1997 with the observation that, even then, there was a vast amount of stuff to learn from that could be easily found on the Web, but that it was really difficult to find good stuff, let alone stuff that was actually useful to a particular learner at a particular stage in their development. Remember that this was before Google even started, so things were significantly worse then than they are now. Infoseek was as good as it got.

I had also observed that, in any group of learners, people would find different things and, between them, discover a much larger range of useful resources than any one learner (or teacher) could do alone, a fact that I use in my teaching to this day. These would likely be better (and, as it turned out, really were better) than what a teacher could find alone because, though individual learners might be less able to distinguish low from high quality, they would know what worked for them, and sufficient numbers of eyes would weed out the bad stuff as long as there was a mechanism for it. This was where I came in.

The only such mechanisms widely available at the time were simple rating systems. However, learners have very different learning needs, so I immediately realized that ‘thumbs-up’ or simple Likert scales would not work. This was not about finding the one ‘best’ solution for everyone, but was instead concerned with finding a range of alternatives to fill different ecological niches, and somehow discovering the most useful solution in that niche for a given learner at a given time.  My initial idea was to make use of a crowd, not an individual curator, and to employ a process closely akin to natural evolution to kill bad suggestions and promote good ones, in order to create an ecosystem of learning resources rather than a simple database. CoFIND was a series of software solutions that explored and extended this initial idea.

CoFIND was, on the face of it, what would eventually come to be called a social bookmarking system – a means for learners to find and to share Web resources (and, later, other things) with one another, along with a mechanism for other learners to recommend or critique them. It was by no means the first social bookmarking system, but the genre was certainly not common at the time, and I don’t think such a dedicated system had ever been used in education before (for all such assertions, I stand to be corrected), though other means of sharing links, from simple web pages or wikis or discussion forums to purpose-built teacher-curated tools, were not that uncommon. A lot of my early research involved learning about self-organization and complex systems, in particular focusing on evolution and stigmergy (self-organization through signs left in the environment). As well as the survival-of-the-fittest dynamic, evolution furnished me with many useful concepts that I made good use of, such as the importance of parcellation, the necessity of death, ways to avoid skyhooks, benefits of spandrels, ways to leverage chance (including extinction events), and various approaches to supporting speciation. As a result of learning about stigmergy I independently developed what later came to be known as tag clouds. I don’t believe that mine were the first ever tag clouds – weighted lists of one sort or another had been around for a few years – but, though mine didn’t then use the name, they were likely the first uses of such things in educational software, and almost certainly the first with this particular theoretical model to support them (again, I am happy to be corrected).
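
Just for illustration: a tag cloud is, at bottom, nothing more than a weighted list. A minimal sketch in Python, using a simple log scaling of my own choosing (this is not the weighting CoFIND actually used), might look like this:

```python
import math

def tag_cloud(tag_counts, min_px=10, max_px=32):
    """Map raw tag-usage counts to font sizes on a log scale.

    The log scaling is an illustrative choice: heavily used tags stand out
    without completely drowning the rarely used ones.
    """
    logs = {tag: math.log(count + 1) for tag, count in tag_counts.items()}
    lo, hi = min(logs.values()), max(logs.values())
    span = (hi - lo) or 1.0
    return {tag: round(min_px + (value - lo) / span * (max_px - min_px))
            for tag, value in logs.items()}

# Usage: three tags with very different levels of use.
print(tag_cloud({"interesting": 40, "too advanced": 5, "funny": 12}))
# e.g. {'interesting': 32, 'too advanced': 10, 'funny': 19}
```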

A collaborative filter

The name CoFIND is an acronym for ‘collaborative filter in n-dimensions’. The n dimensions were substantiated through what we (my supervisors and I) called qualities. We went through a long list of possible names for these, and I was drawn for a while to calling them ‘values’, but (unfortunately) we never thought of ‘tags’ because the term was not in common use for this kind of purpose at the time. After a phase of calling them q-tags, I now call qualities by the much more accessible name of ‘fuzzy tags’. Fuzzy tags are not just binary classifications of a topic but tags that describe what we value, or don’t value, in a resource, and how much we value it. While people may sometimes disagree about binary classifications (conventional tags), it is always possible to have different opinions about the application of fuzzy tags: some may find something interesting, for instance, while others may not, and others may feel it to be quite interesting, or incredibly so. Fuzzy tags are to do with fuzzy sets, which have a continuum of grades of membership, and that is where the name comes from. Different versions of CoFIND used different ways to establish the fuzziness of a tag – the Likert scale used in a few mid-period versions was my failed attempt to make it explicit, but this was a nightmare for people to actually use. The first versions used the same kind of frequency-based weighting as Google’s movie tags, but that was a bit coarse – I was uncomfortable with the averaging effect and with the unbridled Matthew Effect that threatened to keep early tags at the top of the list for all time, which I rather coarsely kept in check with a simple age-related weighting that was only boosted when tags were used (the unfortunate side effect being that, if a system was not used for a few weeks, all the tags vanished in a huge extinction event, albeit that they could be revived if anyone ever used one of the dead ones again). The final version was somewhere in between, allowing an indefinitely large scale via simple up-down ratings, balanced with an algorithm that included a decaying but renewable novelty weighting that adjusted to the frequency of use of the system as a whole. This still had the peculiar effect of evening out/initializing all of the tags over time if no one used the system, but at least it caused fewer catastrophes.
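
To make that final version a little more concrete, here is a minimal Python sketch of the general idea. The names and the particular decay formula are my own illustrative guesses, not the original CoFIND code: the point is simply that each quality’s rank combines its net up/down votes with a novelty bonus that decays while the tag sits unused, is refreshed whenever someone uses it, and decays faster the busier the system as a whole has been.

```python
import time

class Quality:
    """A fuzzy tag ('quality') with up/down votes and a renewable novelty bonus."""
    def __init__(self, name, now=None):
        self.name = name
        self.ups = 0
        self.downs = 0
        self.last_used = now if now is not None else time.time()

    def use(self, up=True, now=None):
        """Record a vote for (or against) this quality and refresh its novelty."""
        if up:
            self.ups += 1
        else:
            self.downs += 1
        self.last_used = now if now is not None else time.time()

def rank_score(quality, system_uses_per_day, now=None, novelty_weight=5.0):
    """Net popularity plus a novelty bonus that decays with idle time.

    The decay is scaled by overall system activity, so a quiet system does not
    wipe out all of its tags in one 'extinction event' (an illustrative guess
    at the behaviour described above, not the actual algorithm).
    """
    now = now if now is not None else time.time()
    idle_days = (now - quality.last_used) / 86400.0
    decay_rate = max(system_uses_per_day, 0.1)  # busier system => faster decay
    novelty = novelty_weight / (1.0 + idle_days * decay_rate)
    return (quality.ups - quality.downs) + novelty

# Usage: order the qualities offered to a learner.
qualities = [Quality("interesting"), Quality("too advanced"), Quality("badly written")]
qualities[0].use(up=True)
qualities[1].use(up=False)
ranked = sorted(qualities, key=lambda q: rank_score(q, system_uses_per_day=20), reverse=True)
print([q.name for q in ranked])
```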

‘Traditional’ collaborative filters simply discover whether things are likely to be more valued or less valued on a usually implicit single dimension (good-bad, liked-disliked, useful-useless, etc). CoFIND’s qualities/fuzzy tags allowed people to express in what ways they were better or worse – more interesting, less helpful, more complex, less funny, etc, just as Google’s movie tagging allows you to express what you like or dislike about a movie, not just whether you liked it or not. In many tag-based systems, people tend to use quite a few simple tags that are inherently fuzzy (e.g. Flickr photos tagged as ‘beautiful’) but they are seldom differentiated in the software from those that simply classify a resource as fitting a particular category, so they are rarely particularly helpful in finding stuff to help with, say, learning.
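
A tiny sketch of that distinction, with made-up names and an aggregation rule of my own choosing rather than CoFIND’s actual maths: a conventional tag either applies to a resource or it doesn’t, whereas a fuzzy tag carries a grade of membership that can be built up from many, possibly disagreeing, opinions.

```python
# Conventional tags: binary membership - a resource is either in the set or not.
resource_tags = {"python", "video"}

# Fuzzy tags: a grade of membership in [0, 1], here aggregated from votes for
# and against the quality (an illustrative choice, not CoFIND's exact formula).
def fuzzy_grade(votes_for, votes_against):
    total = votes_for + votes_against
    return votes_for / total if total else 0.0

resource_qualities = {
    "interesting": fuzzy_grade(votes_for=12, votes_against=3),   # 0.8
    "too advanced": fuzzy_grade(votes_for=2, votes_against=9),   # ~0.18
}
print(resource_qualities)
```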

I was building CoFIND just as the field of collaborative filtering was coming out of its infancy, so the precise definition of the term had yet to be settled. At the time, a collaborative filter (then usually called an ‘automated collaborative filter’) was simply any system that used prior explicit and/or implicit preferences of a number of previous users (a usually anonymous crowd) to help make better recommendations and/or filter out weaker recommendations for the current users. The PageRank algorithm that still underpins Google Search would perhaps have then been described as a collaborative filter, as was one of its likely inspirations, PHOAKS (People Helping One Another Know Stuff), that mined Usenet newsgroups for links, taking them as an implicit recommendation within the newsgroup topic area. By this definition, CoFIND was in fact a semi-automated collaborative filter that combined explicit preferences with automated matching. Nowadays the term ‘collaborative filter’ tends to only apply to a specific subset of recommender systems that automatically predict future interests by matching individual patterns of behaviour with those of multiple others, whether by item (people who bought this also bought…) or user (people whose past or expressed preferences seem to be like yours also liked…). I think that, if I built CoFIND today, I would simply refer to it more generically as a recommender system, to avoid confusion.
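
For anyone unfamiliar with that narrower modern sense, here is a bare-bones sketch of user-based collaborative filtering, with toy data and names invented purely for illustration: score the items a user has not yet rated by the ratings of the users whose rating patterns most resemble theirs.

```python
from math import sqrt

# Toy ratings matrix: user -> {item: rating}.
ratings = {
    "alice": {"item1": 5, "item2": 3, "item3": 4},
    "bob":   {"item1": 4, "item2": 2, "item3": 5, "item4": 4},
    "carol": {"item2": 5, "item4": 1},
}

def similarity(a, b):
    """Cosine similarity over the items two users have both rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = sqrt(sum(a[i] ** 2 for i in common))
    norm_b = sqrt(sum(b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def recommend(user, ratings, top_n=2):
    """Score unseen items by similarity-weighted ratings from other users."""
    scores, weights = {}, {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], their_ratings)
        for item, rating in their_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
                weights[item] = weights.get(item, 0.0) + sim
    ranked = sorted(((score / weights[item], item)
                     for item, score in scores.items() if weights[item]), reverse=True)
    return [item for _, item in ranked[:top_n]]

print(recommend("alice", ratings))  # e.g. ['item4']
```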

Disembodied user models

Rather than a collaborative filter, back in the late 90s Peter Brusilovsky saw CoFIND as a new species of educational adaptive hypermedia, as it was perhaps the first (or at least one of the first) that worked on an open corpus rather than through a closed corpus of linked resources. However, he and I were both puzzled about where to find the user model, which was part of Peter’s definition of adaptive hypermedia. I didn’t feel that it needed one, because users chose the things that mattered to them at runtime. In retrospect, I think that the trick behind CoFIND, and what still distinguishes it from almost all other systems apart from this fairly new Google tool, is that it disembodied and exposed the user model. Qualities were, in essence, the things that would normally be invisibly stored in a user model, but I made them visible, in an extreme variant of what Judy Kay later described as scrutable adaptation.  In effect, a learner chose their own learner model at the time they needed it. The reasoning behind doing so was that, for learners, past behaviour is usually a poor predictor of future needs, mainly because 1) learning changes people (so past preferences may have little bearing on future preferences), and 2) learning is driven by a vast number of things other than taste or past actions: we often have a need for it thrust upon us by an extrinsic agency, like a teacher, or a legislative demand for a driving licence, for instance. Qualities (fuzzy tags) allow us to express the current value of something to us, in a form that we can leave behind without a lot of sticky residue, and that future users can use. In fact, later versions did tend to slightly emphasize similar things to those people had added, categorized, or rated (fuzzily tagged) earlier, but this was just a pragmatic attempt to make the system more valuable as a personal bookmark store, and therefore to encourage more use of it, rather than an attempt to build a full-blown collaborative filter in the modern sense of the word.

Moving on

I still believe that, in principle, this is an excellent approach and I have been a little disappointed that more people have not taken up the idea and improved on it. The big and, at the time, insurmountable obstacles that I hit were 1) that it demands a lot of its users to provide both tags and resources, with little obvious personal benefit, so it is unlikely to get a lot of use, 2) that the cold-start problem that affects most collaborative filters (it relies on many users to be useful but no one will use it until it is useful) is magnified exponentially by every one of those n dimensions, so it really demands a very large number of users, and 3) that it is fiendishly hard to represent the complex ecological niches effectively in an interface, making the cognitive load unusably high. Google seems to have made good progress on the last point (an evolution enabled by improved web standards and browsers combined with a simplification of the process, which together are enough to reduce the cognitive load by a sizeable amount), and has more than sufficient numbers of users to cope with the first and second points, at least with regard to movie recommendations. It remains hard to see how this would work in an educational setting in anything less than the largest of MOOCs or the most passionately focused of user bases. However, I would love to see Google extend this mechanism to OERs, courses, and other educational resources, from Quora answers to Khan Academy tutorials, because they do have the numbers, and it would work well. For the same reasons, it would also be great to see it applied to something like StackExchange or similar large-scale systems (Reddit perhaps) where people go to seek solutions to learning problems. I doubt that I will build a new version of CoFIND as such, but the ideas behind it should live on, I think, and it’s great to see them back on a system as big as Google Search, even if it is so far only experimental and only used to recommend movies.

A Universal Moral Code?

It appears that there may be a universal moral code, at least across 60 very different cultures, according to this large metastudy of anthropological literature. The authors focus explicitly and exclusively on manifestations of cooperative behaviour, so the level of abstraction is fairly high. I’m not totally convinced that it constitutes anything as formulaic as a code, and it contributes little or nothing to philosophical or pragmatic debates about ethical behaviour, but it is nonetheless a very interesting discovery.

The seven moral behaviours/values that the authors hypothesized would be universal, based on their theory of morality-as-cooperation (a game-theory inspired model) are:

  • allocation of resources to kin (family values),
  • coordination to mutual advantage (group loyalty),
  • social exchange (reciprocity),
  • contest between hawks (showing bravery) and doves (showing respect),
  • division (fairness), and
  • possession (property rights).

The hypothesis was confirmed by the analysis. Fascinatingly, one almost-exception was found relating to property rights. In Chuuk society, openly stealing from others is valorized as a form of bravery, albeit that other indicators show that property rights are normally respected by most Chuuk people most of the time, so all that this shows is that bravery is sometimes considered a more important moral value than respect for other people’s property. I suspect similar behaviours might be found among gangs in many cultures, where such actions may signify group loyalty, bravery, respect for other group members, and so on. Assuming that people generally behave well according to their social norms (which they manifestly do), moral issues are only ever a matter of deliberation when they come into conflict with one another so this is not so much an exception as a proof of the rule.

The authors wisely note the limitations of the study, which uses papers that were not originally intended to explore ethical issues, that only looks at 60 cultures, and that uses a methodology that is almost guaranteed to introduce bias, albeit that they took sensible precautions to limit the worst effects of this. They cannot claim that these are the only 7 universals, by any means: these are just the ones that they looked for. Nor can they even reliably claim universality, though this is a decent sample so any exceptions are likely to be quite exceptional, and the results do support their theory.  Because their focus is solely on cooperative strategies, there is nothing relating to pretty big ethical questions on which most societies are likely to agree, like whether it is OK to kill other people, or eat them, or lie to them, and so on. There’s still a lot of scope for variation in ethical beliefs and behaviours within this broad framework.

Nonetheless, this provides a rare chunk of empirical evidence to support there being some universality to at least broad groups of moral behaviours and values. Mostly, and unsurprisingly given the game-theoretical basis of the model, the concerns addressed are what you might predict if you were thinking about how a complex society might develop methods of cooperation, given a few basic evolutionary assumptions about gene preservation, an innate urge to hang around with others of your species, and a limit on resources. Such patterns are likely to be innate simply due to the inevitable consequences of a large group of reproducing social animals living together with limited resources. This implies that we might see exactly the same universals in other social species in such circumstances too, at least in those with the capacity for complex thought, like most mammals and higher avians. I can’t immediately think of any obvious real-life exceptions, though there are certainly differences in the significance and influence of each value in different social species. Also, at least some of the values may not translate well to truly eusocial creatures like naked mole rats, nor to species where territory or other forms of ownership mean very little (some fish, for instance), nor to those that do not normally spend a lot of time together (cats or octopuses, for example).

There are potential conflicts between several of the values, the most obvious being bravery/respect and fairness/property rights, though it is possible to imagine conflicts between any or all of them, as the example of the Chuuk people illustrates. The fact that these might be universal ethical patterns does not imply that there are therefore any universal solutions to ethical dilemmas. Nor does universality have any bearing on the fact-value gap or the naturalistic fallacy: universality does not imply rightness. It does, though, provide a promising theoretical model that may be useful when imagining alien intelligences, including those we might one day design ourselves.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/3982628/a-universal-moral-code