Cognitive prostheses and the future of the human race

head made of cogs

I have lived in Canada for over 16 years, so I was surprised when, a few months ago, a conference-friend whom I had not seen for many years contacted me to ask whereabouts in Malaysia I lived. I believe that they were misremembering a connection from a conversation long ago in which I’d mentioned a recent visit (it was in fact a day-trip from Singapore) and combining that with the accurate recollection that I no longer lived in the UK.

Not long after, I was on a panel discussing the impact of ChatGPT during which I prompted ChatGPT to introduce me.  Here was its first attempt in response to the prompt “tell me about Jon Dron”. I’ve highlighted things that are true in green, and things that are false in red (emphasized for those unable to see the colours):

Jon Dron is a Canadian educator, researcher, and writer. He is known for his work in the field of e-learning and collaborative technologies. Dron has a PhD in Educational Technology from the University of Calgary, and has held academic positions at several universities in Canada, including Athabasca University, where he is currently a professor.

Dron has written extensively on topics such as online learning, social media, and the use of technology in education. He is the author of several books, including “Teaching Crowds: Learning and Social Media” and “E-Learning Theory and Practice.”

In addition to his academic work, Dron is also a musician and has released several albums of ambient and experimental music.

I’d say that there is not much difference between the human and machine recollections. I would almost certainly make at least as many mistakes if I were asked to confidently describe a person I don’t know particularly well. In fact, I might make similar mistakes (not, please note, hallucinations) about quite close friends. Most of us don’t have eidetic memories: we reinvent recollections as much as we recall them. While there are surely many profound differences between how humans and large language models (LLMs) like ChatGPT process information, this is at least circumstantial evidence that some of the basic principles underlying artificial neural networks and biological neural networks are probably pretty similar. True, AIs do not know when they are making things up (or telling the truth, for that matter) but, in fairness, much of the time, neither do we. With a lot of intentional training we may be able to remember lines in a play or how to do long division but, usually, our recollections are like blurry JPEGs rather than RAW images.

Even for things we have intentionally learned to do or recall well, it is unusual for that training to stick without continual reinforcement, and mistakes are easily made. A few days ago I performed a set of around 30 songs (neither ambient nor experimental), most of which I had known for decades, all of which I had carefully practiced in the days leading up to the event to be sure I could play them as I intended. Here is a picture of me singing at that gig, drawn by my 6-year-old grandchild who was in attendance:

grandpa singing in the square

 

Despite my precautions and ample experience, in perhaps a majority of songs, I variously forgot words, chords, notes, and, in a couple of cases, whole verses. Combined with errors of execution (my fingers are not robotic, my voice gets husky) there was, I think, only one song in the whole set that came out more or less exactly as I intended. I have made such mistakes in almost every gig I have ever played. In fact, in well over 40 years as a performer, I have never played the same song in exactly the same way twice, though I have played some of them well over 10,000 times. Most of the variations are a feature, not a bug: they are where the expression lies. A performance is a conversation between performer, instruments, setting, and audience, not a mechanical copy of a perfect original. Nonetheless, my goal is usually to at least play the right notes and sing the right words, and I frequently fail to do that. Significantly, I generally know when I have done it wrong (typically a little before, in a dread realization that just makes things worse) and adapt fairly seamlessly on the fly so, on the whole, you probably wouldn’t even notice it has happened. In that respect, I play much like ChatGPT responds to prompts: I fill in the things I don’t know with something more or less plausible. These creative adaptations are no more hallucinations than the false outputs of LLMs.

The fact that perfect recall is so difficult to achieve is why we need physical prostheses, to write things down, to look things up, or to automate them. Given LLMs’ weaknesses in accurate recall, it is slightly ironic that we often rely on computers for that. It is, though, considerably more difficult for LLMs to do this because they have no big pictures, no purposes, no plans, not even broad intentions. They don’t know whether what they are churning out is right or wrong, so they don’t know to correct it. In fact, they don’t even know what they are saying, period. There’s no reflection, no metacognition, no layers of introspection, no sense of self, nothing to connect concepts together, no reason for them to correct errors that they cannot perceive.

Things that make us smart

How difficult can it be to fix this? I think we will soon be seeing a lot more solutions to this problem because, if we can look stuff up, then so can machines, and more reliable information from other systems can be used to feed the input or improve the output of the LLM (Bing, for instance, has been doing so for a while now, to an extent). A much more intriguing possibility is that an LLM itself or a subsystem of it might not only look things up but also write and/or sequester code it needs to do things it is currently incapable of doing, extending its own capacity by assembling and remixing higher-level cognitive structures. Add a bit of layering, then throw in an evolutionary algorithm to kill off the less viable or effective, and you’ve got a machine that can almost intentionally learn, and know when it has made a mistake.
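
To make that a little more concrete, here is a minimal, purely illustrative sketch – nothing in it is a real system, and every name in it is an invented stand-in – of the kind of loop I mean: something playing the role of a language model drafts small “tools” it does not yet have, each candidate is checked against more reliable reference information, and the less viable ones are culled, round after round.

```python
# A deliberately tiny sketch. llm_draft_tool() stands in for a language model
# writing a small piece of code it currently lacks, and score() stands in for
# checking candidates against more reliable reference information.
import random
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Tool:
    name: str
    run: Callable[[float], float]
    fitness: float = 0.0

def llm_draft_tool(seed: int) -> Tool:
    """Stand-in for an LLM drafting a candidate capability it does not yet have."""
    slope = random.Random(seed).uniform(0.5, 2.0)
    return Tool(name=f"scaler_{seed}", run=lambda x, m=slope: m * x)

def score(tool: Tool, cases: List[Tuple[float, float]]) -> float:
    """Check a candidate against known-good reference answers: the 'look it up' step."""
    return -sum(abs(tool.run(x) - y) for x, y in cases)

def evolve(generations: int = 5, population: int = 8) -> Tool:
    cases = [(1.0, 1.5), (2.0, 3.0), (4.0, 6.0)]   # reliable external knowledge: y = 1.5x
    pool = [llm_draft_tool(i) for i in range(population)]
    for _ in range(generations):
        for tool in pool:
            tool.fitness = score(tool, cases)
        pool.sort(key=lambda t: t.fitness, reverse=True)
        survivors = pool[: population // 2]        # cull the less viable or effective
        newcomers = [llm_draft_tool(random.randrange(10**6))
                     for _ in range(population - len(survivors))]
        pool = survivors + newcomers
    for tool in pool:                              # final scoring pass
        tool.fitness = score(tool, cases)
    return max(pool, key=lambda t: t.fitness)

if __name__ == "__main__":
    best = evolve()
    print(best.name, best.fitness)
```

All the interesting work is, of course, hidden inside the stand-ins; the point is only the shape of the loop, in which generated components survive or die according to how well they agree with something more dependable than the model itself.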

Such abilities are a critical part of what makes humans smart, too. When discussing neural networks it is a bit too easy to focus on the underlying neural correlates of learning without paying much (if any) heed to the complex emergent structures that result from them – the “stuff” of thought – but those structures are the main things that make it work for humans. Like the training sets for large language models, the intelligence of humans is largely built from the knowledge gained from other humans through language, pedagogies, writing, drawing, music, computers, and other mediating technologies. Like an LLM, the cognitive technologies that result from this (including songs) are parts that we assemble and remix in order to analyze, synthesize, and create. Unlike most if not all existing LLMs, though, the ways we assemble them – the methods of analysis, the rules of logic, the pedagogies, the algorithms, the principles, and so on (that we have also learned from others) – are cognitive prostheses that play an active role in the assembly, allowing us to build, invent, and use further cognitive prostheses and so to recursively extend our capabilities far beyond the training set, as well as to diagnose our own shortfalls.

Like an LLM, our intelligence is also fundamentally collective, not just in what happens inside brains, but because our minds are extended, through tools, gadgets, rules, language, writing, structures, and systems that we enlist from the world as part of (not only adjuncts to) our thinking processes. Through technologies, from language to screwdrivers, we literally share our minds with others. For those of us who use them, LLMs are now as much parts of us as our own creative outputs are parts of them.

All of this means that human minds are part-technology (largely but not wholly instantiated in biological neural nets) and so our cognition is about as artificial as that of AIs. We could barely even think without cognitive prostheses like language, symbols, logic, and all the countless ways of doing and using technologies that we have devised, from guitars to cars. Education, in part, is a process of building and enlisting those cognitive prostheses in learners’ minds, and of enabling learners to build and enlist their own, in a massively complex, recursive, iterative, and distributed process, rich in feedback loops and self-organizing subsystems.

Choosing what we give up to the machine

There are many good ways to use LLMs in the learning process, as part of what students do. Just as it would be absurd to deny students the use of pens, books, computers, the Internet, and so on, it is absurd to deny them the use of AIs, including in summative assessments. These are now part of our cognitive apparatus, so we should learn how to participate in them wisely. But I think we need to be extremely cautious in choosing what we delegate to them, above all when using them to replace or augment some or all of the teaching role.

What makes AIs different from technologies of the past is that they perform a broadly similar process of cognitive assembly as we do ourselves, allowing us to offload much more of our cognition to an embodied collective intelligence created from the combined output of countless millions of people. Only months after the launch of ChatGPT, this is already profoundly changing how we learn and how we teach. It is disturbing and disruptive in an educational context for a number of reasons, such as that:

  • it may make it unnecessary for us to learn its skills ourselves, and so important aspects of our own cognition, not just things we don’t need (but which are they?), may atrophy;
  • if it teaches, it may embed biases from its training set and design (whose?) that we will inherit;
  • it may be a bland amalgam of what others have written, lacking originality or human quirks, and that is what we, too, will learn to do;
  • if we use it to teach, it may lead students towards an average or norm, not a peak;
  • it renders traditional forms of credentialling learning largely useless.

We need solutions to these problems or, at least, to understand how we will successfully adapt to the changes they bring, or whether we even want to do so. Right now, an LLM is not a mind at all, but it can be a functioning part of one, much as an artificial limb is a functioning part of a body or a cyborg prosthesis extends what a body can do. Whether we feel that any particular limb it (partly) replicates needs replacing, which system we should replace it with, and whether it is a good idea in the first place are among the biggest questions we have to answer. But I think there’s an even bigger problem we need to solve: the nature of education itself.

AI teachers

There are no value-free technologies, at least insofar as they are enacted and brought into being through our participation in them, and the technologies that contribute to our cognition, such as teaching, are the most value-laden of all, communicating not just the knowledge and skills they purport to provide but also the ways of thinking and being that they embody. It is not just what they teach or how effectively they do so, but how they teach, and how we learn to think and behave as a result, that matters.

While AI teachers might well make it easier to learn to do and remember stuff, building hard cognitive technologies (technique, if you prefer) is not the only thing that education does. Through education, we learn values, ways of connecting, ways of thinking, and ways of being with others in the world. In the past this has come for free when we learn the other stuff, because real human teachers (including textbook authors, other students, etc) can’t help but model and transmit the tacit knowledge, values, and attitudes that go along with what they teach. This is largely why in-person lectures work. They are hopeless for learning the stuff being taught, but the fact that students physically attend them makes them great for sharing attitudes and enthusiasm, for bringing people together, and for letting us see how other people think through problems and react to ideas. It is also why recordings of online lectures are much less successful: they do none of that, albeit that the benefits of being able to repeat and rewind somewhat compensate for the losses.

What happens, though, when we all learn how to be human from something that is not (quite) human? The tacit curriculum – the stuff through which we learn ways of being, not just ways of doing –  for me looms largest among the problems we have to solve if we are to embed AIs in our educational systems, as indeed we must. Do we want our children to learn to be human from machines that haven’t quite figured out what that means and almost certainly never will?

Many AI-Ed acolytes tell the comforting story that we are just offloading some of our teaching to the machine, making teaching more personal, more responsive, cheaper, and more accessible to more people, freeing human teachers to do more of the human stuff. I get that: there is much to be said for making the acquisition of hard skills and knowledge easier, cheaper, and more efficient. However, it is local thinking writ large. It solves the problems we face today that are caused by how we have chosen to teach, with all the centuries-long path dependencies and counter-technologies that entails, replacing technologies without wondering why they exist in the first place.

Perhaps the biggest of the problems that the entangled technologies of education systems cause is the devastating effect of tightly coupled credentials (and their cousins, grades) on intrinsic motivation. Much of the process of good teaching is one of reigniting that intrinsic motivation or, at least, supporting the development of internally regulated extrinsic motivation, and much of the process of bad teaching is about going with the flow and using threats and rewards to drive the process. As long as credentials remain the primary reason for learning, and as long as they remain based on proof of easily measured learning outcomes provided through end-products like assignments and inauthentic tests, then an AI that offers a faster, more efficient, and better tailored way of achieving them will crowd out the rest. Human teaching will be treated as a minor and largely irrelevant interruption or, at best, a feel-good ritual with motivational perks for those who can afford it. And, as we are already seeing, students coerced to meet deadlines and goals imposed on them will use AIs to take shortcuts. Why do it yourself when a machine can do it for you?

The future

As we start to build AIs more like us, with metacognitive traits, self-set purposes, and the capacity for independent learning, the problem is just going to get bigger. Whether they are better or worse (they will be both), AIs will not be the same as us, yet they will increasingly seem so, and increasingly play human roles in the system. If the purpose of education is seen as nothing but short-term achievement of explicit learning outcomes and getting the credentials arising from that, then it would be better to let the machines achieve them so that we can get on with our lives. But of course that is not the purpose. Education is for preparing people to live better lives in better societies. It is why the picture of me singing above delights me more than anything ever created by an AI. It is why education is and must remain a fundamentally human process. Almost any human activity can be replaced by an AI, including teaching, but education is fundamental to how we become who we are. That’s not the kind of thing that I think we want to replace.

Our minds are already changing as they extend into the collective intelligence of LLMs – they must – and we are only at the very beginning of this story. Most of the changes that are about to occur will be mundane and complex, and the process will be punctuated but gradual, so we won’t really notice what has been happening until it has happened, by which time it may be too late. It is probably not an exaggeration to say that, unless environmental or other disasters bring it all to a halt, this is a pivotal moment in our history.

It is much easier to think locally, to think about what AIs can do to support or extend what we do now, than it is to imagine how everything will change as a result of everyone doing that at scale. It requires us to think in systems, which is not something most of us are educated or prepared to do. But we must do that, now, while we still can. We should not leave it to AIs to do it for us.

There’s much more on many of the underpinning ideas mentioned in this post, including references and arguments supporting them, in my freely downloadable or cheap-to-purchase latest book (of three, as it happens), How Education Works.

The artificial curriculum

evolving into a robot

“Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings” by Simone Grassini is a well-researched, concise but comprehensive overview of the state of play for generative AI (GAI) in education. It gives a very good overview of current uses, by faculty and students, and provides a thoughtful discussion of issues and concerns arising. It addresses technical, ethical, and pragmatic concerns across a broad spectrum. If you want a great summary of where we are now, with tons of research-informed suggestions as to what to do about it, this is a very worthwhile read.

However, underpinning much of the discussion is an implied (and I suspect unintentional) assumption that education is primarily concerned with achieving and measuring explicit specified outcomes. This is particularly obvious in the discussions of ways GAIs can “assist” with instruction. I have a problem with that.

There has been an increasing trend in recent decades towards the mechanization of education: modularizing rather than integrating, measuring what can be easily measured, creating efficiencies, focusing on an end goal of feeding industry, and so on. It has resulted in a classic case of the McNamara Fallacy, that starts with a laudable goal of measuring success, as much as we are able, and ends with that measure defining success, to the exclusion of anything we do not or cannot measure. Learning becomes the achievement of measured outcomes.

It is true that consistent, measurable, hard techniques must be learned to achieve almost anything in life, and that it takes sustained effort and study, which educators can and should help with, to achieve most of them. Measurable learning outcomes and what we do with them matter. However, the more profound and, I believe, the more important ends of education, regardless of the subject, are concerned with ways of being in the world, with other humans. It is the tacit curriculum that ultimately matters more: how education affects the attitudes, the values, the ways we can adapt, how we can create, how we make connections, pursue our dreams, live fulfilling lives, and engage with our fellow humans as parts of cultures and societies.

By definition, the tacit curriculum cannot be meaningfully expressed in learning outcomes or measured on a uniform scale. It can be expressed only obliquely, if it can be expressed at all, in words. It is largely emergent and relational, expressed in how we are, interacting with one another, not as measurable functions that describe what we can do. It is complex, situated, and idiosyncratic. It is about learning to be human, not achieving credentials.

Returning to the topic of AI, to learn to be human from a blurry JPEG of the web, or from autotune for knowledge, seems to me to be a very bad idea indeed, especially given that models will increasingly be trained on the output of models trained on earlier training sets.
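
To illustrate why that recursion worries me, here is a toy simulation – my own sketch, nothing more – in which the “model” is just a Gaussian distribution repeatedly re-fitted to a small sample of its own output. Real generative models are incomparably more complex, but the way diversity leaks away, generation by generation, captures the basic dynamic, sometimes described as model collapse.

```python
import random
import statistics

def fit(sample):
    """'Train': estimate a mean and standard deviation from the available data."""
    return statistics.mean(sample), statistics.pstdev(sample)

def publish(mean, stdev, n, rng):
    """'Generate': draw n new items from the current model."""
    return [rng.gauss(mean, stdev) for _ in range(n)]

rng = random.Random(1)
data = [rng.gauss(0.0, 1.0) for _ in range(20)]   # generation 0: human-made data

for generation in range(1, 31):
    mean, stdev = fit(data)                        # train on whatever is out there
    data = publish(mean, stdev, 20, rng)           # the web fills up with model output
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mean:+.3f} stdev={stdev:.3f}")
```

In run after run, the standard deviation tends to drift toward zero: each generation learns from a slightly blurrier, narrower copy of the one before.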

The real difficulty that teachers face is not that students solve the problems set to them using large language models, but that in so doing they bypass the process, thus avoiding the tacit learning outcomes we cannot or choose not to measure. And the real difficulty that those students face is that, in delegating the teaching process to an AI, their teachers are bypassing the teaching process, thus failing to support the learning of those tacit outcomes or, at best, providing an averaged-out caricature of them. If we heedlessly continue along this path, it will wind up with machines teaching machines, with humans largely playing the roles of cogs and switches in them.

Some might argue that, if the machines do a good enough job of mimicry, then it really doesn’t matter that they happen to be statistical models with no feelings, no intentions, no connection, and no agency. I disagree. Just as it makes a difference whether a painting ascribed to Picasso is a fake or not, or whether a letter is faxed or delivered through the post, or whether this particular guitar was played by John Lennon, it matters that real humans are on each side of a learning transaction. It means something different for an artifact to have been created by another human, even if the form of the exchange, in words or whatever, is the same. Current large language models have flaws, confidently spout falsehoods, fail to remember previous exchanges, and so on, so they are easy targets for criticism. However, I think it will be even worse when AIs are “better” teachers. When what they seem to be is endlessly tireless, patient, respectful, and responsive; when the help they give is unerringly accurate, personal, and targeted; when they accurately draw on knowledge no one human could ever possess, they will not be modelling human behaviour. The best-case scenario is that they will not be teaching students how to be, they will just be teaching them how to do, and that human teachers will provide the necessary tacit curriculum to support the human side of learning. However, the two are inseparable, so that is not particularly likely. The worst scenarios are that they will be teaching students how to be machines, or how to be an average human (with significant biases introduced by their training), or both.

And, frankly, if AIs are doing such a good job of it then they are the ones who should be doing whatever it is that they are training students to do, not the students. This will most certainly happen: it already is (witness the current actors and screenwriters strike). For all the disruption that results, it’s not necessarily a bad thing, because it increases the adjacent possible for everyone in so many ways. That’s why the illustration to this post is made to my instructions by Midjourney, not drawn by me. It does a much better job of it than I could do.

In a rational world we would not simply incorporate AI into teaching as we have always taught. It makes no more sense to let it replace teachers than it does to let it replace students. We really need to rethink what and why we are teaching in the first place. Unfortunately, such reinvention is rarely if ever how technology works. Technology evolves by assembly with and in the context of other technology, which is how come we have inherited mediaeval solutions to indoctrination as a fundamental mainstay of all modern education (there’s a lot more about such things in my book, How Education Works if you want to know more about that). The upshot will be that, as we integrate rather than reinvent, we will keep on doing what we have always done, with a few changes to topics, a few adjustments in how we assess, and a few “efficiencies”, but we will barely notice that everything has changed because students will still be achieving the same kinds of measured outcomes.

I am not much persuaded by most apocalyptic visions of the potential threat of AI. I don’t think that AI is particularly likely to lead to the world ending with a bang, though it is true that more powerful tools do make it more likely that evil people will wield them. Artificial General Intelligence, though, especially anything resembling consciousness, is very little closer today than it was 50 years ago and most attempts to achieve it are barking in the wrong forest, let alone up the wrong tree. The more likely and more troubling scenario is that, as it embraces GAIs but fails to change how everything is done, the world will end with a whimper, a blandification, a leisurely death like that of lobsters in water coming slowly to a boil. The sad thing is that, by then, with our continued focus on just those things we measure, we may not even notice it is happening. The sadder thing still is that, perhaps, it already is happening.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/19390937/the-artificial-curriculum

Technological distance – my slides from OTESSA ’23

Technological Distance

Here are the slides from my talk today at OTESSA ’23. Technological distance is a way of understanding distance that fits with modern complexivist models of learning such as Connectivism, Heutagogy, Networks/Communities of Practice/Rhizomatic Learning, and so on. In such a model, there are potentially thousands of distances – whether understood as psychological, transactional, social, cognitive, physical, temporal, or whatever – so conventional views of distance as a gap between learner and teacher (or institution or other students) are woefully inadequate.

I frame technological distance as a gap between the technologies learners have (including cognitive gadgets, skills, techniques, and so on, as well as physical, organizational, or procedural technologies) and those they need in order to learn. It is a little bit like Vygotsky’s Zone of Proximal Development but re-imagined and extended to incorporate all the many technologies, structures, and people who may be involved in the teaching gestalt.

The model of technology that I use to explain the idea is based on the coparticipation perspective presented in my book that, with luck, should be out within the next week or two. The talk ends with a brief discussion of the main implications for those whose job it is to teach.

Thanks to MidJourney for collaborating with me to produce the images used in the slides.

people as interlocking cogs

On the Misappropriation of Spatial Metaphors in Online Learning | OTESSA Journal

This is a link to my latest paper, published in the closing days of 2022. The paper started as a couple of blog posts that I turned into a paper which nearly made an appearance in the Distance Education in China journal before a last-minute regime change in the editorial staff led to it being dropped. It was then picked up by the OTESSA Journal after I shared it online, so you might have seen some of it before. My thanks to all the many editors, reviewers (all of whom gave excellent suggestions and feedback that I hope I’ve addressed in the final version), and online commentators who have helped to make it a better paper. Though it took a while, I have really enjoyed the openness of the process, which has been quite different from any that I’ve followed in the past.

The paper begins with an exploration of the many ways that environments are both shaped by and shape how learning happens, both online and in-person. The bulk of the paper then presents an argument to stop using the word “environment” to describe online systems for learning. Partly this is because online “environments” are actually parts of the learner’s environment, rather than vice versa. Mainly, it is because of the baggage that comes with the term, which leads us to (poorly) replicate solutions to problems that don’t exist online, in the process creating new problems that we fail to adequately solve because we are stuck in ways of thinking and acting shaped by the metaphors on which they are based. My solution is not particularly original, but it bears repeating. Essentially, it is to disaggregate the services needed to support learning (there is a toy sketch of the idea just after the list below) so that:

  • they can be assembled into learners’ environments (their actual environments) more easily;
  • they can be adapted and evolve as needed; and, ultimately,
  • online learning institutions can be reinvented without all the vast numbers of counter-technologies and path dependencies inherited from their in-person counterparts that currently weigh them down.
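
Here is the toy sketch promised above (mine, not anything proposed in the paper, and every name in it is invented): small, independent services share a minimal interface, and each learner assembles, adapts, and swaps them within an environment that belongs to them, rather than being placed inside a monolithic “environment” controlled by the institution.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Service:
    """Any self-contained capability: discussion, portfolio, peer review, and so on."""
    name: str

    def describe(self) -> str:
        return f"{self.name} service"

@dataclass
class LearnerEnvironment:
    """The learner's actual environment: an assembly they control."""
    owner: str
    services: Dict[str, Service] = field(default_factory=dict)

    def add(self, service: Service) -> None:
        self.services[service.name] = service     # assemble

    def swap(self, old_name: str, new: Service) -> None:
        self.services.pop(old_name, None)         # adapt and evolve as needed
        self.services[new.name] = new

    def inventory(self) -> List[str]:
        return [s.describe() for s in self.services.values()]

# Two learners compose different environments from the same pool of services.
pool = {s.name: s for s in (Service("discussion"), Service("portfolio"), Service("peer-review"))}
alice = LearnerEnvironment("alice")
alice.add(pool["discussion"])
alice.add(pool["portfolio"])
bob = LearnerEnvironment("bob")
bob.add(pool["peer-review"])
bob.swap("peer-review", Service("journal"))
print(alice.inventory())
print(bob.inventory())
```

The design point is simply that the environment is the learner’s own assembly, composed from services, rather than a place into which the learner is put.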

My own views have shifted a little since writing the paper. I stick by my belief that 1) it is a mistake to think of online systems as generally analogous to the physical spaces that we inhabit, and 2) that a single application, or suite of applications, should not be seen as an environment, as such (at most, as in some uses of VR, it might be seen as a simulation of one). However, there are (shifting) boundaries that can be placed around the systems that an organization and/or an individual uses for which the metaphor may be useful, at the very least to describe the extent to which we are inside or outside it, and that might frame the various kinds of distance that may exist within it and from it. I’m currently working on a paper that expands on this idea a bit more.

Abstract

In online educational systems, teachers often replicate pedagogical methods, and online institutions replicate systems and structures used by their in-person counterparts, the only purpose of which was to solve problems created by having to teach in a physical environment. Likewise, virtual learning environments often attempt to replicate features of their physical counterparts, thereby weakly replicating in software the problems that in-person teachers had to solve. This has contributed to a vicious circle of problem creation and problem solving that benefits no one. In this paper I argue that the term ‘environment’ is a dangerously misleading metaphor for the online systems we build to support learning, that leads to poor pedagogical choices and weak digital solutions. I propose an alternative metaphor of infrastructure and services that can enable more flexible, learner-driven, and digitally native ways of designing systems (including the tools, pedagogies, and structures) to support learning.

Full citation

Dron, J. (2022). On the Misappropriation of Spatial Metaphors in Online Learning. The Open/Technology in Education, Society, and Scholarship Association Journal, 2(2), 1–15. https://doi.org/10.18357/otessaj.2022.2.2.32

Originally posted at: https://landing.athabascau.ca/bookmarks/view/16550401/my-latest-paper-on-the-misappropriation-of-spatial-metaphors-in-online-learning

Some meandering thoughts on ‘good’ and ‘bad’ learning

There has been an interesting brief discussion on Twitter recently that has hinged around whether and how people are ‘good’ at learning. As Kelly Matthews observes, though, Twitter is not the right place to go into any depth on this, so here is a (still quite brief) summary of my perspective on it, with a view to continuing the conversation.

Humans are nearly all pretty good at learning because that’s pretty much the defining characteristic of our species. We are driven by an insatiable urge to learn from the moment of our birth (at least). Also, though I’m keeping an open mind about octopuses and crows, we seem to be better at it than at least most other animals. Our big advantage is that we have technologies, from language to the Internet, to share and extend our learning, so we can learn more, individually and collectively, than any other species. It is difficult or impossible to fully separate individual learning from collective learning because our cognition extends into and is intimately a part of the cognition of others, living and dead.

However, though we learn nearly all that we know, directly or indirectly, from and with other people, what we learn may not be helpful, may not be as effectively learned as it should be, and may not much resemble what those whose job is to teach us intend. What we learn in schools and universities might include a dislike of a subject, how to conceal our chat from our teacher, how to meet the teacher’s goals without actually learning anything, how to cheat, and so on. Equally, we may learn falsehoods, half-truths, and unproductive ways of doing stuff from the vast collective teacher that surrounds us as well as from those designated as teachers.

For instance, among the many unintended lessons that schools and colleges too often teach is the worst one of all: that (despite our obvious innate love of it) learning is an unpleasant activity, so extrinsic motivation is needed for it to occur. This results from the inherent problem that, in traditional education, everyone is supposed to learn the same stuff in the same place at the same time. Students must therefore:

  1. submit to the authority of the teacher and the institutional rules, and
  2. be made to engage in some activities that are insufficiently challenging, and some that are too challenging.

This undermines two of the three essential requirements for intrinsic motivation: support for autonomy and for competence (Ryan & Deci, 2017). Pedagogical methods are solutions to problems, and the amotivation inherently caused by the system of teaching is (arguably) the biggest problem that they must solve. Thus, what passes as good teaching is largely to do with solving the problems caused by the system of teaching itself. Good teachers enthuse, are responsive, and use approaches such as active learning, problem or inquiry-based learning, ungrading, and so on, largely to restore agency and flexibility in a dominative and inflexible system. Unfortunately, such methods rely on the technique and passion of talented, motivated teachers with enough time and attention to spend on supporting their students. Less good and/or time-poor teachers may not achieve great results this way. In fact, as we measure such things, on average, such pedagogies are less effective than harder, dominative approaches like direct instruction (Hattie, 2013) because, by definition, most teachers are average or below average. So, instead of helping students to find their own motivation, many teachers and/or their institutions typically apply extrinsic motivation, such as grades, mandatory attendance, and classroom rules, to do the job of motivating their students for them. These do work, in the sense of achieving compliance and, on the whole, they do lead to students getting a normal bell-curve of grades that is somewhat better than the one produced by more liberative approaches. However, the cost is huge. The biggest cost is that extrinsic motivation reliably undermines intrinsic motivation and, often, kills it for good (Kohn, 1999). Students are thus taught to dislike or, at best, feel indifferent to learning, and so they learn to be satisficing, ineffective learners, doing for the credentials what they might otherwise do for the love of it and, too often, forgetting what they learned the moment that goal is achieved. But that’s not the only problem.

When we learn from others – not just those labelled as teachers but the vast teaching gestalt of all the people around us and before us who create(d) stuff, communicate(d), share(d), and contribute(d) to what and how we learn – we typically learn, as Paul (2021) puts it, not just the grist (the stuff we remember) but the mill (the ways of thinking, being, and learning that underpin them). When the mill is inherently harmful to motivation, it will not serve us well in our future learning.

Furthermore, in good ways and bad, this is a ratchet at every scale. The more we learn, individually and collectively, the more new stuff we are able to learn. New learning creates new adjacent possible empty niches (Kauffman, 2019) for us to learn more, and to apply that learning to learn still more, to connect stuff (including other stuff we have learned) in new and often unique ways. This is, in principle, very good. However, if what and how we learn is unhelpful, incorrect, inefficient, or counter-productive, the ratchet takes us further away from stuff we have bypassed along the way. The adjacent possibles that might have been available with better guidance remain out of our reach and, sometimes, even harder to get to than if the ratchet hadn’t lifted us high enough in the first place. Not knowing enough is a problem but, if there are gaps, then they can be filled. If we have taken a wrong turn, then we often have to unlearn some or all of what we have learned before we can start filling those gaps. It’s difficult to unlearn a way of learning. Indeed, it is difficult to unlearn anything we have learned. Often, it is more difficult than learning it in the first place.

That said, it’s complex and entangled. For instance, if you are learning the violin then there are essentially two main ways to angle the wrist of the hand that fingers the notes, and the easiest, most natural way (for beginners) is to bend your hand backwards from the wrist, especially if you don’t hold the violin with your chin, because it supports the neck more easily and, in first position, your fingers quickly learn to hit the right bit of the fingerboard, relative to your hand. Unfortunately, this is a very bad idea if you want a good vibrato, precision, delicacy, or the ability to move further up the fingerboard: the easiest way to do that kind of thing is to keep your wrist straight or slightly angled in from the wrist, and to support the violin with your chin. It’s more difficult at first, but it takes you further. Once the ‘wrong’ way has been learned, it is usually much more difficult to unlearn than if you were starting from scratch the ‘right’ way. Habits harden. Complexity emerges, though, because many folk violin styles make a positive virtue of holding the violin the ‘wrong’ way, and it contributes materially to the rollicking rhythmic styles that tend to characterize folk fiddle playing around the world. In other words, ‘bad’ learning can lead to good – even sublime – results. There is similarly plenty of space for idiosyncratic technique in many of the most significant things we do, from writing to playing hockey to programming a computer and, of course, to learning itself. The differences in how we do such things are where creativity, originality, and personal style emerge, and you don’t necessarily need objectively great technique (hard technique) to do something amazing. It ain’t what you do, it’s the way that you do it, that’s what gets results. To be fair, it might be a different matter if you were a doctor who had learned the wrong names for the bones of the body or an accountant who didn’t know how to add up numbers. Some hard skills have to be done right: they are foundations for softer skills. This is true of just about every skill, to a greater or lesser extent, from writing letters and spelling to building a nuclear reactor and, indeed, to teaching.

There’s much more to be said on this subject and my forthcoming book includes a lot more about it! I hope this is enough to start a conversation or two, though.

References

Hattie, J. (2013). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Taylor & Francis.

Kauffman, S. A. (2019). A World Beyond Physics: The Emergence and Evolution of Life. Oxford University Press.

Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A’s, praise, and other bribes (Kindle). Mariner Books.

Paul, A. M. (2021). The Extended Mind: The Power of Thinking Outside the Brain. HarperCollins.

Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. Guilford Publications.

 

Slides from my ICEEL 22 Keynote, November 20, 2022

ICEEL 22 keynote

Here are the slides (11.2MB PDF) from my opening keynote yesterday at the 6th International Conference on Education and E-Learning, held online, hosted this year in Japan. In it I discussed a few of the ideas from my forthcoming book, How Education Works: Teaching, Technology, and Technique, and some of their consequences.

Title: It ain’t what you do, it’s the way that you do it, that’s what gets results

Abstract: In an educational system, no teacher ever teaches alone. Students teach themselves and, more often than not, teach one another. Textbook authors and illustrators, designers of open educational resources, creators of curricula, and so on play obvious teaching roles. However, beyond those obvious teachers there are always many others, from legislators to software architects, from professional bodies to furniture manufacturers. All of these teachers matter, not just in what they do but in how they do it: the techniques matter at least as much as the tools and methods. The resulting complex collective teacher is deeply situated and, for any given learner, inherently unpredictable in its effects. In this talk I will provide a theoretical model to explain how these many teachers may work together or in opposition, how educational systems evolve, and the nature of learning technologies. Along the way I will use the model to explain why there is and can be no significant difference between outcomes for online and in-person teaching, why teaching to perceived learning styles is doomed to fail, why small group tutoring will always (on average) be better than classroom teaching, and why quantitative research methods have little value in educational research.

So, this is a thing…

Students are now using AIs to write essays and assignments for credit, and they are (probably) getting away with it. This particular instance may be fake, but the tools are widely available and it would be bizarre were no one to be using them for this purpose. There are already far too many sites providing stuff like product reviews and news stories (re)written by AIs, and AIs are already being used for academic paper writing. In fact, systems for doing so, like CopyMatic or ArticleGenerator, are now commodity items. So the next step will be that we will develop AIs to identify the work of other AIs (in fact, that is already a thing, e.g. here and here), and so it will go on, and on, and on.

This kind of thing will usually evade plagiarism checkers with ease, and may frequently fool human markers. For those of us working in educational institutions, I predict that traditionalists will demand that we double down on proctored exams, in a vain attempt to defend a system that is already broken beyond repair. There are better ways to deal with this: getting to know students, making each learning journey (and outputs) unique and personal, offering support for motivated students rather than trying to ‘motivate’ them, and so on. But that is not enough.

I am rather dreading the time when an artificial student takes one of my courses. The systems are probably too slow, quirky, and expensive right now for real-time deep fakes driven by plausible GANs to fool me, at least for synchronous learning, but I think it could already convincingly be done for asynchronous learning, with relatively little supervision.  I think my solution might be to respond with an artificial teacher, into which there has been copious research for some decades, and of which there are many existing examples.

To a significant extent, we already have artificial students, and artificial teachers teaching them. How ridiculous is that? How broken is the system that not only allows it but actively promotes it?

These tools are out there, getting better by the day, and it makes sense for all of us to be using them. As they become more and more ubiquitous, just as we accommodated pocket calculators in the teaching of math, so we will need to accommodate these tools in all aspects of our education. If an AI can produce a plausible new painting in any artist’s style (or essay, or book, or piece of music, or video) then what do humans need to learn, apart from how to get the most out of the machines? If an AI can write a better essay than me, why should I bother? If a machine can teach as well as me, why teach?

This is a wake-up call. Soon, if not already, most of the training data for the AIs will be generated by AIs. Unchecked, the result is going to be a set of ever-worse copies of copies that become what the next generation consumes and learns from, in a vicious spiral that leaves us at best stagnant, at worst something akin to the Eloi in H.G. Wells’s The Time Machine. If we don’t want this to happen then it is time for educators to reclaim, to celebrate, and (perhaps a little) to reinvent our humanity. We need, more and more, to think of education as a process of learning to be, not of learning to do, except insofar as the doing contributes to our being. It’s about people, learning to be people, in the presence of and through interaction with other people. It’s about creativity, compassion, and meaning, not the achievement of outcomes a machine could replicate with ease. I think it should always have been this way.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/15164121/so-this-is-a-thing

Over 150 AU staff signed this letter to the Government of Alberta and AU’s Board of Governors

Today I sent this letter from staff at Athabasca University to the Albertan Advanced Education Minister and Board of Governors of the University, cc’d to various government & opposition politicians in Alberta, and a few selected journalists:

I strongly support the university’s continuing presence in the town of Athabasca, but not the forced relocation of any staff to the area. As an online community, I believe it to be in the interests of all staff and students of the university, including residents of the town of Athabasca, that all university staff who can and who wish to work from home, whatever their role, should be allowed to make that home wherever they choose.

The 149 signatories to the letter included academic staff (46%), managers (12%), administration staff (12%), professional staff (33%), RAs (1%) and tutors/academic experts (7%). 48% live in the region of Edmonton, 19% in the region of Calgary, 15% in rural Alberta outside Athabasca region, 8% in the region of Athabasca, 5% in Ontario, 3% in BC, 1% in Nova Scotia, and 1% in Saskatchewan. A further 3 staff signed the letter anonymously, and a number of others expressed general agreement with the main points made but, for various reasons, chose not to sign. One more signed today, after I had sent the letter.

How this came about

For context, the Government of Alberta has made a number of demands, under threat of withdrawal of funding, that would require 500 additional staff to move to Athabasca (notably including all the executive staff), that would force us to end our near-virtual strategy, and that would require us to change our focus from teaching anyone and everyone to teaching Albertans, with an initial deadline of 2024/25. This is our president’s explanation and response. Perhaps as a result of public outrage, the minister responsible has since claimed the deadlines are negotiable, and suggested that a little flexibility might be allowed (given that the demands are literally impossible to meet), but he has not stepped back on the basic requirements, and has repeatedly emphasized that he will force all of the executive team to work on the Athabasca campus, despite also claiming he will not force anyone to work there, among other contradictions.

I sent an email to an assortment of staff that I know a week ago today, asking them to sign the statement above and pass it along to other staff members. I did not want to use any official channels to send it for fear that it would be seen as being driven by those with partisan positions to defend (none-the-less, I did receive one anonymous comment from someone who did not sign it because they had received it via their boss and assumed it was driven from the top – it was not!). Because of the viral approach to dissemination, I am fairly certain that it failed to reach all AU staff, and the signatories are almost certainly skewed to people I know, and those who know people I know. I suspect that some groups (especially tutors and administrators) are under-represented.  I therefore have no way of telling what percentage of recipients actually signed the letter, but those who signed make up around an eighth of the workforce in total. The Board of Governors is required to give a response to the Government of Alberta’s demands tomorrow, August 31st, so I had to pull all of this together hastily, otherwise I am confident that the letter would have gained more signatures.

A brief summary of the comments

As well as signing the letter, I also asked the staff to (optionally) provide comments. I am not going to include in full here the 20 or so pages of comments that I received from scores of staff members, though they are full of fantastic ideas, expressions of concern, (sometimes heart-rending) stories, as well as expressions of caring for one another, for their communities, for the university, and for the students they work for. Once they are fully anonymized, I may share them later. However, I will attempt a summary now.

Many – including those living in the Athabasca region – speak of how much they value being able to work from home, and that they would reluctantly seek new employment if that option were not available. For example, one employee writes:
I am a resident of Athabasca and I choose to live here; I have proven (since March 2020) that my job can be successfully executed virtually from my home office. My work-life balance has improved significantly because I can work from home.
Even faculty – who would not be required to move – speak of resigning were this to occur.
Some mention the importance of understanding the needs of our students, or express concern about the effects that the disruption caused by this initiative would cause.
Many mention difficulties they would face working at Athabasca. Often, this is due to the needs of their families, especially with regard to job opportunities and health. This is a particularly poignant comment that expresses several of the concerns shared by many:
I initially applied for a position with AU because it was in a small community that I wanted to raise my family in. However, my spouse was not able to find work after he was laid off with the decline in oil and gas and my son needed specialized services that the town did not have. Therefore, I applied for an Edmonton position so my spouse could find work to help support the family and my son could access the services he required.
Another, living in Athabasca, writes:
If I were place based at the AU Campus, I would have to use my vacation to care for [my sick child] which would significantly decrease the amount of vacation available to me if not completely exhaust my allowable annual vacation.
Another writes:
I am struggling with this forced relocation as I will not be forced to relocate away from my children. My husband would be out of a job. We would make a loss on our home if we were forced to sell to relocate. I have been going through cancer treatment and my oncologist and medical Team are located in Edmonton and I would jeopardize my health moving away from my health care team.
Some express the concern that AU would suffer from a hugely diminished job pool. For example:
Allowing work from home and not forcing employees to relocate to another province means retaining staff, retaining expertise, widening the applicant pool so as to entice top talent across Canada, and positions AU as a leading employer. AU students can take courses anywhere in the world — AU staff should be able to work from anywhere in Canada.
Others observe the need for big improvements to infrastructure, services, and transport links for the town to accommodate greater numbers, though a couple suggest they might accept incentives to move there. Quite a few think that it would do active harm to the town were substantial numbers to relocate. As one staff member puts it:
Placing all your eggs in one basket (or relying on one or two industries) will not provide the economic security and stability required for long-term success.
Several explicitly draw attention to the point made in the letter that the executive team should not have to live there.
Some ask that the government should stop interfering with the operations of the university. Many would like to be more involved in conversations being held privately between the Board of Governors and the Government of Alberta, asking for their voices to be heard by all parties in the dispute.
Some challenge the notion that AU should be required to bear the burden of supporting the town. For example, one writes:
AU is one of Alberta’s four CARUs and as such, its mandate should be about education and research, not about economic development of a region. No other company or university has such mandate or responsibility.
Some provide suggestions for ways we can expand on what we are already doing to provide services to the region, and to take more advantage of our unique location for research: there are many good suggestions and reports of existing initiatives among the comments, such as this:
FST is home to Science Outreach Athabasca which is an organization supported by faculty and members of the town of Athabasca that has been engaging the community of Athabasca for 20 years and hosted over 120 public talks, science camps, nature hikes, butterfly counts, and other activities. We also host lab sessions for junior high and high school students in Athabasca schools which our faculty volunteer to do. Our research activity in FST has been growing in environmental science and computational biology with three research chairs and recruitments of new faculty to increase our capacity in remediation, long-term monitoring, aquatic systems, rural sustainability, and regenerative design, to name a few.
A few express concern with intimidation they have faced when attempting to voice opinions not held by those with louder voices and political positions to defend. Though mostly not included in the comments, personal messages to me expressed relief and gratitude at being allowed to express opinions they were afraid to share with colleagues and town residents, because of fears of reprisal or ostracization. One, that is included, put it well:
I’m tired of my voice not being heard which is why I decided to compose this letter. I’m tired of being told, I’m tired of the lobbyist/activists, Municipal and Provincial Governments not respecting the voices on the “other side”

This is the comment I received after sending the letter today, which is quite representative of several others:

Athabasca University is an online university and has been operating efficiently with the work from home environment and I believe will continue to do so with the a near virtual environment. I support the near virtual initiative.

The full range of comments is far richer, far more nuanced, and far more varied than what I have been able to summarize here and I apologize to the many dozens of people who provided them for not doing them as much justice as they deserve.

I hope that the recipients read and act on the letter. At the very least, they will have a far better idea of the needs, concerns, and feelings of a significant portion of AU staff than they had before, and I hope that will colour their judgment.

Thank you, everyone who signed, and thank you to all who will read it. I will be circulating the full letter and addendum to as many of those who signed it as possible over the next day or two.

We shape our buildings and, afterwards, our buildings shape us: some lessons in how not to build an online university, and some ideas for doing it better

My heart briefly leapt to my throat when I saw Thursday’s Globe & Mail headline that the Albertan government had (allegedly) dropped its insane plan to force Athabasca University to move 65% of its workforce to the town of Athabasca. It seemed that way, given that the minister for post-secondary education was referring to his demands and accompanying threat as only a ‘suggestion’ (broadly along the lines of Putin’s ‘suggestion’ that Ukraine should be part of Russia, perhaps). However, other reports have said that he has denied any change in his requirements, albeit that he now claims it is open to negotiation. A ham-fisted negotiation tactic or just plain confused? I hope so, but I doubt it. I think that this is just a ploy to push the real agenda through with little resistance, and largely unnoticed. In the Globe & Mail article, the minister goes on to say “I would indeed like to see, at a bare minimum, senior executives and administrative staff be based in the town, as they have been for the past several decades.” A majority of what might be described as administrative staff do probably live in Athabasca anyway, and there is no reason for any of them to leave, so that’s just gaining a few easy election points from town voters. If the government actually wanted to help the town it would invest in the infrastructure and support needed to allow it to thrive, which it has signally failed to do for several decades, at least. No, his main target is clearly the senior executives: basically, he and the UCP want to put a team of executive lackeys in charge so that they can push their agenda through unopposed by anyone they care about. They have already sacked the incumbent and installed a chair of the board of governors who will do their bidding, and they have increased representation on the board from the town of Athabasca, so this is the obvious next step. The execs won’t have to be fired. If they are required to move to Athabasca, most of what is probably the best executive team ever assembled in this or any other Albertan university will resign. Whoever replaces them will do the UCP’s dirty work, largely free from media oversight. Job done, bad press averted.

The UCP will, I am very sad to say, appear to have support from our own professional and faculty union (AUFA), even though most of us will, whether weakly or strongly, oppose it. This is because AUFA has a small but disproportionately powerful caucus in Athabasca, members of which have been deeply involved with an activist group called KAAU (Keep Athabasca in Athabasca University), which actually paid an insider lobbyist to start this fracas in the first place. Seriously. A casual observer might perceive at least a portion of the union’s leadership as putting the interests of the town ahead of the interests of the university. At best, their loyalties appear to be divided. The evidence for this is all too apparent in press statements and blog posts on the subject. Though most of us (including me) support the continuing presence of AU in Athabasca, these posts do not represent the views of most of those in the union, only those in charge of it. Only around 20% of AUFA members actually live in Athabasca, a percentage that has steadily fallen over the course of the last two decades, and almost all of those are professional members, not academics. Most members who had the chance to leave over the past 20 years did so. This is a point worth dwelling on.

We shape our buildings…

Athabasca High Street at peak season

Athabasca is a tiny, inclement (-40°C in winter, bugs in summer) northern town over 180 km from the nearest international airport. There is one (private) bus from Edmonton, leaving late at night, that arrives in town at 2:46 am after a 3+ hour journey on a small, treacherous road. When it got too big for its Edmonton home, the university was (disastrously) moved there by a conservative government in 1984, ostensibly to fill a gap left by the closure of the town’s main employer, but more likely due to the property interests held there by those behind the plan. About half the faculty resigned rather than work there. Ironically, the first president of AU deliberately named the university after a geographical feature of Alberta (the Athabasca River) precisely to avoid associating it with any city or region, so that local politics wouldn’t interfere with its mission. We might have been named after a mountain had the University of Alberta not happened to be demolishing Athabasca Hall (a students’ residence) at the time, which left the name free for us to use. It had nothing whatsoever to do with the town. It is possible that the president who named it was even unaware of the town’s existence or, at least, considered it to be too insignificant to be an issue.

Whatever charms the town may have (and it has a few), Athabasca has been a hobble for AU from the very start. I wrote about this at some length 5 years ago, just as we were on the cusp of making the massive changes we have been implementing ever since, but I would like to focus on two particularly relevant aspects in this post: the effects on the hiring pool, and the short-circuiting of communication with the rest of the university.

Firstly, it is really difficult to attract good employees to the town. Some residents of Athabasca will say that they feel insulted by this, believing that it implies that they are not the best and brightest. This is either disingenuous or a confirmation that they are, in fact, not the best and brightest, because all it means is that we have fewer good people to choose from. There are, of course, some incredibly smart, talented, creative people who live in Athabasca. But, equally, some are not: we have too often had to pick the best of a not-too-great bunch. The more people we expect to live in Athabasca, the bigger that problem becomes. The undesirability of the place is confirmed by the KAAU itself, whose biggest complaint – the one that (at least on the face of it) drove their lobbying and union discontent in the first place – is that people have been leaving the town in droves since they were no longer required to stay, which pretty much says all that needs to be said. It is also notable that faculty and tutors have never, in all the university’s history, successfully been required to work in the town, because it would be impossible to recruit sufficient numbers of sufficient quality, a fact that all parties involved in this (including the minister) acknowledge. We should get the best possible staff for almost every role – we all play some role in our distributed teaching model – but it is true in spades for our executive team who, more than anyone else, have to be the most excellent that we can get. Right now, we have the best executive team that has ever been assembled at AU, bar none, and that is only possible because – for the first time ever – none of them have had to live in Athabasca.

…and our buildings shape us

Athabasca has, overwhelmingly, been home for staff who support, but do not directly implement, the university’s mission. Historically, these staff (predominantly administrators) have had extremely privileged access to the leaders of the university compared with the rest of us. Even if they didn’t bump into them socially or in the canteens and halls, they would talk to people that did. And they would be the ones attending meetings in person while the rest of us phoned in or, in latter years, struggled with webmeeting systems that never really worked properly for in-person attendees, despite absurdly expensive equipment designed to support them. Fixing this was never a particularly high priority because those with the power to do so were the ones attending in person, and it was just fine for them. Inevitably, Athabasca residents had a much better idea of what was going on and who was doing what than anyone else. More problematically, they had far greater influence over it: they didn’t ask for this, but they certainly got it. It is no wonder that they are now peeved, because most of their power, influence, and control over everything has been massively diminished since most of the execs left town. Their perception – voiced on many occasions by the Athabasca-dominated union – that too much has recently been happening without consultation and that there is not enough communication from our leaders is, objectively speaking, completely false: in fact, communication is far better than it has ever been for those of us (the majority of staff) living remotely. They just no longer have a direct line themselves. I think this is the root of most of the union troubles of the last few years, whether consciously or not, and of the current troubles with the Albertan government.

In-person communities short-circuit online communities. I’ve seen it in teaching contexts a thousand times over: it just takes one group to branch off in person to severely damage or destroy a previously successful online community. Without fail, online communication becomes instrumental and intermittent. Tacit knowledge, in particular, disappears (except within the in-person group). Researchers like me (and many others at AU, including our president, in some of his former roles) have spent a great deal of time trying to make native online tools, systems, and working/teaching approaches that reduce these effects, but with only limited success. Combining fully online and in-person communities invariably wrecks the online community. Only when the community is fully online, or when the online community is just an extension of the in-person community, can it thrive. Without the best research-driven online tools and processes (most of which are not implemented at AU), hybrids are a disaster, and they are not much improved even with the best we have to offer.

In the past, the problem was partially offset by the fact that we had a few smaller learning centres elsewhere, in St Albert, Edmonton, and Calgary (and, formerly, Fort McMurray), that were visited by the execs with varying frequency. However, this created what were, in many ways, bigger problems. It was incredibly inefficient, environmentally damaging, and expensive, wasting a lot of time and energy for all concerned. More significantly, although it helped to keep the exec team a little more in touch with others around the university and helped to fill gaps in online communication for those living nearby, it actually exacerbated the problem for our online community, because it created yet more in-person enclaves and cliques that developed independently of one another, sharing very little with the rest. Our business school, for instance, lived an almost entirely separate life from the rest of the university, on its own campus in St Albert (a satellite city attached to Edmonton), running its own largely independent communications and IT infrastructure but frequently meeting in person. As a result, we never developed the kind of unified online culture needed to sustain us.

Even more importantly, few of those with the power to change it ever learned what remote working was like for our students, so we didn’t create that online culture or community for them, either. Because of the inequalities that ensued, those of us who did know what it was like were not able to adequately influence the rest (especially the executive team) to get something done about it, because we were crowded out by the clamour of local communities. It’s not that the problem was unrecognized: it’s just that the immediate operational concerns of in-person employees always came first. This was – and remains – a huge mistake. Too few of our students feel they belong, too many barely if ever interact with another student, too few see anything of the university beyond the materials provided for the courses they take. We have some excellent teaching processes, but processes (even the best) are only a part of what makes for a rewarding education. Yes, we do have plentiful support of all kinds, teaching approaches that should (for some but not all faculties) provide opportunities to develop relationships with human tutors, and the occasional opportunity to engage more broadly (mainly through the Landing), but many students completely bypass all of that. The need for it is beyond obvious, as evidenced by the large number of Discords, Facebook groups, subreddits, and so on that they set up themselves to support one another. However, these are just more isolated enclaves, more subcultures, more virtual islands, without a single unifying culture to knit them together.

Online communication at AU has, as a direct result of its physical campuses, always tended to be extremely instrumental and terse, if it happened at all. When I arrived 15 years ago, most of my colleagues hardly ever communicated online with colleagues outside of a formal, intentional context. Those of us who did were yet another little clique. Emails (which were and remain the most commonly used tech) were only sent if there was a purpose, and most of the tacit knowledge that, more than anything else, makes a traditional institution work despite its typically dire organization was absent. In its place the university developed a very rigid, unforgiving, impersonal set of procedures for pretty much everything, including our teaching. If there was no procedure then it didn’t happen. There were gigantic gaps. The teaching staff – especially tutors but also most of the faculty – were largely unable to share in the culture and the admin-focused tacit knowledge that resided largely in one remote location. This was the largest part of what drove Terry Anderson and me to create the Landing: it was precisely to support the tacit, the informal, the in-between, the ad hoc, the cultural, the connective aspects of a university that were missing. We touted it as a space between the formal spaces, actively trying to cultivate and nurture a diverse set of reasons to be there, to make others visible. Treating it as a space was, though, a mistake. Though it did (and does) help a little, the Landing was just another place to visit: it therefore has not (or has not yet) fulfilled our vision for it to seep into the cracks and to make humans visible in all of our systems. And we were not able to support the vital soft, human processes that had to accompany the software because we were just academics and researchers, not bosses: technologies are the tools, structures, and systems and what we do with them, but what we do with them is what matters most. We need much more, and much better, and we need to embed it everywhere, in order to get rid of the short circuits of in-person cliques and online islands. A further death knell for our online community was sounded by the (Athabasca-dominated) union, which one day chose – without consultation – to kill off the only significant way for AUFA members to communicate more informally, its mailing list, only reluctantly bringing it back (after about two years of complaints) in a diluted, moderated, half-assed format that did not challenge their power. From an informal means of binding us, it became another instrumental tool.

Moving on

Despite the problems, it would be a senseless waste to pull out of Athabasca. We need a place for the library, for archives, for outreach into communities in the region, for labs, for astronomy, and to support research based in the region, of which there is already a growing amount. Virtually no one at the university thinks for a moment that we should leave the town. We are just doubling down on things to which it is best suited, rather than making it a centre of all our operations.  If people want to live there, they can. We can make a difference to an under-served region in our research, our outreach, and our facilities, and we are constantly doing more to make that happen, as a critical part of our reinvention of the university. It has symbolic value, too, as the only physical space that represents the university, albeit that few people ever see it.

However…

Athabasca should never become the seat of power, whether because of the number of co-located workers there or because it is where the exec team are forced to live. I am not singling the town out for special treatment in this: nowhere should play this role. We are and must be an online community, first and foremost. This is especially the case for our exec team. In fact, the more distributed they are the better. They will not walk the talk and fix what is broken unless they live with the consequences, and they are the last people who should be clustered together, especially with a particular employee demographic. Distributing them brings benefits to the university and to the communities to which we belong, including Athabasca.

By far the greatest threat from the Albertan government’s intrusions and our own union’s efforts to restore their personal power is to the identity and culture – the very soul – of the institution itself. Slowly (too slowly) and a bit intermittently, we have, in recent years, been staggering towards a unified, online-native culture that embraces the whole institution. It has not been easy, especially thanks to the Athabascan resistance. But, regardless of their interference, we have made other mistakes. Our near-virtual implementation was the work of a large group representing the whole university, but one that lacked well-defined leadership or a clear mandate, that rushed development due to the pandemic, and whose report to the university ignored most of what its own investigations of needs had found. The result was a hasty and incomplete implementation that has caused some unrest, most notably among those at Athabasca who are used to the comforts and conveniences of in-person working. For the majority of us who were already working online before the pandemic, things have, for the most part, got better, but the benefits are very uneven. Too often we have poorly replicated in-person processes and methods to accommodate the newcomers, leading to (for instance) endless ineffectual meetings and yet more procedures. The near-virtual strategy remains a work in progress, and things will improve, but it got off to a stumbling, over-hasty start.

With limited funds, we have signally failed to put enough effort into developing the technical infrastructure needed to support our nascent online community, which has contributed to the multiple failings of the near-virtual plan (it was one of the main needs identified by the near-virtual committee, yet it appears in no meaningful way in the plan itself). I think we really should have focused on creating workable technologies to support our own community before working on teaching and administrative systems (or at least at the same time) but, after a decade of neglect while we were on the verge of bankruptcy, I guess we did need to fix those pretty urgently because they are what our students depend on. It’s just a bit tricky to pull yourself up by your own bootstraps if you are still using off-the-shelf tools designed to support in-person organizations (and commercial ones at that) rather than those designed for a virtual institution, especially when the more important human and organizational aspects are still rooted firmly in place-based thinking. I wrote about one aspect of that the other day. This won’t be a problem for long, I hope. The fruits of the reinvention of our student-facing systems – which is taking up the bulk of our development resources right now – should start to appear around the end of this year, if the Albertan government or our own union doesn’t destroy it first. I hope that we can then get round to fixing our own house because, if we don’t, we will be easy prey for the next politician seeking easy votes and/or a sly buck from their investments.

Shaping our lives

The title of this post is a quote from Churchill. In fact, he liked it so much that he used variants on the phrase (sometimes preferring ‘dwellings’ to ‘buildings’) a number of times over the course of decades. I could equally have used Culkin’s (usually misattributed to McLuhan) ‘we shape our tools and then our tools shape us’ because, as the first president of the university recognized many decades ago, we exist as a university within our communications network, not in a physical or even a virtual space.

The recursive dynamic implied by Churchill’s and Culkin’s aphorisms applies to any complex adaptive system. In most systems – natural ecosystems, money markets, ant trails, cities, and so on – this leads to metastability and adaptation, as agents adapt to their environments and, in the process, change those environments, in an endless emergent cycle of evolution. However, the large and slow-moving elements of any complex system influence the small and fast-moving far more than vice versa, and humans are the only creatures we know of who can deliberately mess with this dynamic by making radical and rapid changes to the large and slow-moving parts of the spaces in which they dwell. In the past, this happened to Athabasca University through the machinations of a small number of self-serving politicians and geographically located cliques, not through educators. If we can prevent government interference and diminish the significance of those cliques then we can change that, and we have been doing so, rebuilding our systems to serve the needs of staff and students, not those of a few land developers or groups of local residents. This is not the time to stop. We are on the verge of creating a viable community and infrastructure for learning that could scale more or less indefinitely, where everyone – especially the students – can feel a part of something wonderful. Not cogs in machines, not products, but parts of an organic, evolving whole to which we all belong, and to which we all contribute. This matters: to our staff, to our students, to the people of Alberta, to the people of Canada, to the world. We should not be condemned to merely serve a small part of the economic needs of a small community, nor even of a province or country. If we follow that path then we will whimperingly shrink into a minor anachronistic irrelevance that appears as no more than a footnote in the annals of history, out-competed by countless others. Athabasca University matters most because it (not quite alone, but as part of a small, select pack of open and distance institutions) is beating a path that others can follow: an open, expansive, human-centred path towards a better future for us all. Let’s not let this die.

Learning, Technology, and Technique | Canadian Journal of Learning and Technology

This is my latest paper, Learning, Technology, and Technique, in the current issue of the Canadian Journal of Learning and Technology (Vol. 48 No. 1, 2022).

Essentially, because this was what I was invited to do, the paper shrinks my article Educational technology: what it is and how it works (itself a very condensed summary of my forthcoming book, due out Spring 2023) from over 10,000 words down to under 4,000 words that, I hope, more succinctly capture most of the main points of the earlier paper. I’ve learned quite a bit from the many responses I received to the earlier paper, and from the many conversations that ensued – thank you, all who generously shared their thoughts – so it is not quite the same as the original. I hope this one is better. In particular, I think/hope that this paper is much clearer than the older one about the nature and importance of technique, and about the distinction between soft and hard technologies, both of which seemed to be the most misunderstood aspects of the original. There is, of course, less detail in the arguments, and a few aspects of the theory (notably relating to distributed cognition) are more focused on pragmatic examples, but most are still there, or implied. It is also a fully open paper, not just available for online reading, so please freely download it and share it as you will.

Here’s the abstract:

To be human is to be a user, a creator, a participant, and a co-participant in a richly entangled tapestry of technologies – from computers to pedagogical methods – that make us who we are as much as our genes. The uses we make of technologies are themselves, nearly always, also technologies, techniques we add to the entangled mix to create new assemblies. The technology of greatest interest is thus not any of the technologies that form that assembly, but the assembly itself. Designated teachers are never alone in creating the assembly that teaches. The technology of learning almost always involves the co-participation of countless others, notably learners themselves but also the creators of systems, artifacts, tools, and environments with and in which it occurs. Using these foundations, this paper presents a framework for understanding the technological nature of learning and teaching, through which it is possible to explain and predict a wide range of phenomena, from the value of one-to-one tutorials, to the inadequacy of learning style theories as a basis for teaching, and to see education not as a machine made of methods, tools, and systems but as a complex, creative, emergent collective unfolding that both makes us, and is made of us.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/14622408/my-latest-paper-learning-technology-and-technique-now-online-in-the-canadian-journal-of-learning-and-technology