10 minute chats on Generative AI – a great series, now including an interview with me

This is a great series of brief interviews between Tim Fawns and an assortment of educators and researchers from across the world on the subject of generative AI and its impact on learning and teaching.

The latest (tenth in the series) is with me.

Tim asked us all to come up with 3 key statements beforehand that he used to structure the interviews. I only realized that I had to do this on the day of the interview so mine are not very well thought-through, but there follows a summary of very roughly what I would have said about each if my wits were sharper. The reality was, of course, not quite like this. I meandered around a few other ideas and we ran out of time, but I think this captures the gist of what I actually wanted to convey:

Key statement 1: Most academics are afraid of AIs being used by students to cheat. I am afraid of AIs being used by teachers to cheat.

For much the same reasons that many of us balk at students using, say, ChatGPT to write part or all of their essays or code, I think we should be concerned when teachers use it to replace or supplement their teaching, whether it be for writing course outlines, assessing student work, or acting as intelligent tutors (to name but a few common uses). The main thing that bothers me is that human teachers (including other learners, authors, and many more) do not simply help learners to achieve specified learning outcomes. In the process, they model ways of thinking, values, attitudes, feelings, and a host of other hard-to-measure tacit and implicit phenomena that relate to ways of being, ways of interacting, ways of responding, and ways of connecting with others. There can be huge value in seeing the world through another’s eyes, in interacting with them, adapting your responses, seeing how they adapt to yours, and so on. This is a critical part of how we learn the soft stuff, the ways of doing things, the meaning, the social value, the connections with our own motivations, and so on. In short, education is as much about being a human being, living in human communities, as it is about learning facts and skills. Even when we are not interacting but, say, simply reading a book, we are learning not just the contents but the ways the contents are presented, the quirks, the passions, the ways the authors think of their readers, their implicit beliefs, and so on.

While a generative AI can mimic this pretty well, it is by nature a kind of average, a blurry reconstruction mashed up from countless examples of the work of real humans. It is human-like, not human. It can mimic a wide assortment of nearly-humans without identity, without purpose, without persistence, without skin in the game. As things currently stand (though this will change) it is also likely to be pretty bland – good enough, but not great.

It might be argued that this is better than nothing at all, or that it augments rather than replaces human teachers, or it helps with relatively mundane chores, or it provides personalized support and efficiencies in learning hard skills, or it allows teachers to focus on those human aspects, or even that using a generative AI is a good way of learning in itself. Right now and in the near future, this may be true because we are in a system on the verge of disruption, not yet in the thick of it, and we come to it with all our existing skills and structures intact. My concern is what happens as it scales and becomes ubiquitous; as the bean-counting focus on efficiencies that relate solely to measurable outcomes increasingly crowds out the time spent with other humans; as the generative AIs feed on one another, becoming more and more divorced from their human originals; as the skills of teaching that are replaced by AIs atrophy in the next generation; as time we spend with one another is replaced with time spent with not-quite-human simulacra; as the AIs themselves become more and more a part of our cognitive apparatus in both what is learned and how we learn it. There are monkey’s paws all the way down the line: for everything that might be improved, there are at least as many things that can and will get worse.

Key statement 2: We and our technologies are inherently intertwingled so it makes no more sense to exclude AIs from the classroom than it would to exclude, say, books or writing. The big questions are about what we need to keep.

Our cognition is fundamentally intertwingled with the technologies that we use, both physical and cognitive, and those technologies are intertwingled with one another, and that’s how our collective intelligence emerges. For all the vital human aspects mentioned above, a significant part of the educational process is concerned with building cognitive gadgets that enable us to participate in the technologies of our cultures, from poetry and long division to power stations and web design. Through that participation our cognition is highly distributed, and our intelligence is fundamentally collective. Now that generative AIs are part of that, it would be crazy to exclude them from classrooms or from their use in assessments. It does, however, raise more than a few questions about what cognitive activities we still need to keep for ourselves.

Technologies expand or augment what we can do unaided. Writing, say, allows us (among other things) to extend our memories. This creates many adjacent possibles, including sharing them with others, and allowing us to construct more complex ideas using scaffolding that would be very difficult to construct on our own because our memories are not that great.

Central to the nature of writing is that, as with most technologies, we don’t just use it but we participate in its enactment, performing part of the orchestration ourselves (for instance, we choose what words and ideas we write – the soft stuff), but also being part of its orchestration (e.g. we must typically spell words and use grammar sufficiently uniformly that others can understand them – the hard stuff).

In the past, we used to do nearly all of that writing by hand. Handwriting was a hard skill that had to be learned well enough that others could read what we had written, a process that typically required years of training and practice, demanding mastery of a wide range of technical proficiencies from spelling and punctuation to manual dexterity and the ability to sharpen a quill/fill a fountain pen/insert a cartridge, etc. To an increasingly large extent we have now offloaded many of those hard skills, first to typewriters and now to computers. While some of the soft aspects of handwriting have been lost – the cognitive processes that affect how we write and how we think, the expressiveness of the never-perfect ways we write letters on a page, etc – this was a sensible thing to do. From a functional perspective, text produced by a computer is far more consistent, far more readable, far more adaptable, far more reusable, and far more easily communicated. Why should we devote so much effort and time to learning to be part of a machine when a machine can do that part for us, and do it better?

Something that can free us from having to act as an inflexible machine seems, by and large, like a good thing. If we don’t have to do it ourselves then we can spend more time and effort on what we do, how we do it, the soft stuff, the creative stuff, the problem-solving stuff, and so on. It allows us to be more capable, to reach further, to communicate more clearly. There are some really big issues here: the constraints of handwriting – the relative difficulty of making corrections, the physicality of the movements, the ways our brains are changed by the practice – result in different ways of thinking, some of which may be very valuable. But, as Postman wrote, all technologies are Faustian bargains involving losses and harms as well as gains and benefits. A technology that thrives is usually (at least in the short term) one in which the gains are perceived to outweigh the losses. And, even when largely replaced, old technologies seldom if ever die, so it is usually possible to retrieve what is lost, at least until the skills atrophy, components are no longer made, or they are designed to die (old printers with chip-protected cartridges that are no longer made, for instance).

What is fundamentally different about generative AIs, however, is that they allow us to offload exactly the soft, creative, problem-solving aspects of our cognition, that technologies normally support and expand, to a machine. They provide extremely good pastiches of human thought and creativity that can act well enough to be considered as drop-in replacements. In many cases, they can do so a lot better – from the point of view of someone seeing only the outputs – than an average human. An AI image generator can draw a great deal better than me, for instance. But, given that these machines are now part of our extended, intertwingled minds, what is left for us? What parts of our minds should they or will they replace? How can we use them without losing the capacity to do at least some of the things they do as well as or better than us? What happens if we lack those cognitive gadgets we never installed in our minds because AIs did it for us? This is not the same as, say, not knowing how to make a bow and arrow or write in cuneiform. Even when atrophied, such skills can be recovered. This is the stuff that we learn the other stuff for. It is especially important in the field of education which, traditionally at least, has been deeply concerned with cultivating the hard skills largely if not solely so that we can use them creatively, socially, and productively once they are learned. If the machines are doing that for us, what is our role? This is not (yet) Kurzweil’s singularity, the moment when machines exceed our own intelligence and start to develop on their own, but it is the (drawn-out, fragmented) moment that machines have become capable of participating in soft, creative technologies on at least an equal footing with humans. That matters. This leads to my final key statement.

Key statement 3: AIs create countless new adjacent possible empty niches. They can augment what we can do, but we need to go full-on Amish when deciding whether they should replace what we already do.

Every new creation in the world opens up new and inherently unprestatable adjacent possible empty niches for further creation, not just in how it can be used as part of new assemblies but in how it connects with those that already exist. It’s the exponential dynamic ratchet underlying natural evolution as much as technology, and it is what results in the complexity of the universe. The rapid acceleration in use and complexity of generative AIs – itself enabled by the adjacent possibles of the already highly disruptive Internet – that we have seen over the past couple of years has resulted in a positive explosion of new adjacent possibles, in turn spawning others, and so on, at a hitherto unprecedented scale and speed.

This is exactly what we should expect in an exponentially growing system. It makes it increasingly difficult to predict what will happen next, or what skills, attitudes, and values we will need to deal with it, or how we will be affected by it. As the number of possible scenarios increases at the same exponential rate, and the time between major changes gets ever shorter, patterns of thinking, ways of doing things, skills we need, and the very structures of our societies must change in unpredictable ways, too. Occupations, including in education, are already being massively disrupted, for better and for worse. Deeply embedded systems, from assessment for credentials to the mass media, are suddenly and catastrophically breaking. Legislation, regulations, resistance from groups of affected individuals, and other checks and balances may slightly alter the rate of change, but likely not enough to matter. Education serves both a stabilizing and a generative role in society, but educators are at least as unprepared and at least as disrupted as anyone else. We don’t – in fact we cannot – know what kind of world we are preparing our students for, and the generative technologies that now form part of our cognition are changing faster than we can follow. Any AI literacies we develop will be obsolete in the blink of an eye. And, remember, generative AIs are not just replacing hard skills. They are replacing the soft ones, the things that we use our hard skills to accomplish.

This is why I believe we would do well to heed the example of the Amish, who (contrary to popular belief) are not opposed to modern technologies but, in their communities, debate and discuss the merits and disadvantages of any technology that is available, considering the ways in which it might affect or conflict with their values, only adopting those agreed to be, on balance, good, and only doing so in ways that accord with those values. Different communities make different choices according to their contexts and needs. In order to do that, we have to have values in the first place. But what are the values that matter in education?

With a few exceptions (laws and regulations being the main ones) technologies do not determine how we will act but, through the ways they integrate with our shared cognition, existing technologies, and practices, they have a lot of momentum and, unchecked, generative AIs will inherit the values associated with what currently exists. In educational systems that are increasingly regulated by government mandates that focus on nothing but their economic contributions to industry, where success or failure is measured solely by proxy criteria like predetermined outcomes of learning and enrolments, where a millennium of path dependencies still embodies patterns of teacher control and indoctrination that worked for mediaeval monks and skillsets that suited the demands of factory owners during the industrial revolution, this will not end well. Now seems the time we most need to reassert and double down on the human, the social, the cultural, the societal, the personal, and the tacit value of our institutions. This is the time to talk about those values, locally and globally. This is the time to examine what matters, what we care about, what we must not lose, and why we must not lose it. Tomorrow it will be too late. I think this is a time of great risk but it is also a time of great opportunity, a chance to reflect on and examine the value and nature of education itself. Some of us have been wanting to have these conversations for decades.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20146256/10-minute-chats-on-generative-ai-a-great-series-now-including-an-interview-with-me

Research, Writing, and Creative Process in Open and Distance Education: Tales from the Field | Open Book Publishers

Research, Writing, and Creative Process in Open and Distance Education: Tales from the Field is a great new book about how researchers in the field of open, online, and distance education go about writing, and about their advice to newcomers to the field. More than that, it is about the process of writing in general, containing stories, recommendations, methods, tricks, and principles that pretty much anyone who writes, from students to experienced authors, would find useful and interesting. It is published as an open book (with a very open CC-BY-NC licence) that is free to read or download as well as to purchase in paper form.

OK, full disclosure, I am a bit biased. I have a chapter in it, and many of the rest are by friends and acquaintances. The editor and author of one of the chapters is Dianne Conrad, the foreword is by Terry Anderson, and the list of authors includes some of the most luminous, widely cited names in the field, with a wealth of experience and many thousands of publications between them. The full list includes David Starr-Glass, Pamela Ryan, Junhong Xiao, Jennifer Roberts, Aras Bozkurt, Catherine Cronin, Randy Garrison, Tony Bates, Mark Nichols, Marguerite Koole (with Michael Cottrell, Janet Okoko & Kristine Dreaver-Charles), and Paul Prinsloo.

Apart from being a really good idea that fills a really important gap in the market, what I love most about the book is the diversity of the chapters. There’s everything from practical advice on how to structure an effective paper, to meandering reflective streams of consciousness that read like poetry, to academic discussions of identity and culture. It contains a lot of great stories that present a rich variety of approaches and processes, offering far from uniform suggestions about how best to write or why it is worth doing in the first place. Though the contributors are all researchers in the field of open and distance learning, nearly all of us started out on very different career paths, so we come at it with a wide range of disciplinary, epistemological and stylistic frameworks. Dianne has done a great job of weaving all of these different perspectives together into a coherent tapestry, not just a simple collection of essays.

The diversity is also a direct result of the instructions Dianne sent with the original proposal, which provide a pretty good description of the general approach and content that you will find in the book:

I am asking colleagues, as researchers, scholars, teachers, and writers in our field (ODL), to reflect on and write about your research/writing process, including topics such as:

  *   Your background and training as a scholar

  *   Your scholarly interests

  *   Why you research/write

  *   How you research/write

  *   What philosophies guide your work?

  *   Conflicts?  Barriers?

  *   Mentors, opportunities

  *   Reflections, insights, sorrows

  *   Advice, takeaways

  *   Anything else you feel is relevant

The “personal stuff,” as listed above, should serve as jump-off points to scholarly issues; that is, this isn’t intended to be a memoir or even a full-on reflective. Use the opportunity to reflect on your own work as a lead-in/up to the scholarly issues you want to address/promote/explore.

The aim of the book is to inform hesitant scholars, new scholars, and fledgling/nervous writers of our time-tested processes; and to spread awareness of the behind-the-curtain work involved in publishing and “being heard.”

My own chapter (Chapter 3, On being written) starts with rather a lot of sailing metaphors that tack around the ways that writing participates in my cognition and connects us, moves back to the land with a slight clunk and some geeky practical advice about my approach to notetaking and the roles of the tools that I use for the purpose, thence saunters on to the value of academic blogging and how I feel about it, and finally to a conclusion that frames the rest in something akin to a broader theory of complexity and cognition. All of it draws heavily from themes and theories explored in my recently published (also open) book, How Education Works: Teaching, Technology, and Technique. For all the stretched metaphors, meandering sidetracks, and clunky continuity I’m quite pleased with how it came out.

Most of the other chapters are better structured and organized, and most have more direct advice on the process (from start to finish), but they all tell rich, personal, and enlightening stories that are fascinating to read, especially if you know the people writing them or are familiar with their work. However, while the context, framing, and some of the advice is specific to the field of open and distance learning, the vast majority of lessons and advice are about academic writing in general. Whatever field you identify with, if you ever have to write anything then there’s probably something in it for you.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/19868519/research-writing-and-creative-process-in-open-and-distance-education-tales-from-the-field-open-book-publishers

View of Speculative Futures on ChatGPT and Generative Artificial Intelligence (AI): A Collective Reflection from the Educational Landscape

This is a remarkable paper, published in the Asian Journal of Distance Education, written by 35 remarkable people from all over the world and me. It was led by the remarkable Aras Bozkurt, who pulled all 36 of us together and wrote much of it in the midst of personal tragedy and the aftermath of a devastating earthquake. The research methodology was fantastic: Aras got each of us to write two 500-word pieces of speculative fiction, presenting positive and negative futures for generative AI in education. The themes that emerged from them were then condensed in the conventional part of the paper, which we worked on together using Google Docs. It took less than 50 days from the initial invitation on January 22 to the publication of the paper. As Eamon Costello put it, “It felt like being in a flash mob of top scholars.” At 130 pages it is more of a book than a paper, but most of it consists of those stories/poems/plays, many of which are great stories in their own right. They make good bedtime reading.

Abstract

While ChatGPT has recently become very popular, AI has a long history and philosophy. This paper intends to explore the promises and pitfalls of the Generative Pre-trained Transformer (GPT) AI and potentially future technologies by adopting a speculative methodology. Speculative future narratives with a specific focus on educational contexts are provided in an attempt to identify emerging themes and discuss their implications for education in the 21st century. Affordances of (using) AI in Education (AIEd) and possible adverse effects are identified and discussed which emerge from the narratives. It is argued that now is the best of times to define human vs AI contribution to education because AI can accomplish more and more educational activities that used to be the prerogative of human educators. Therefore, it is imperative to rethink the respective roles of technology and human educators in education with a future-oriented mindset.

Citation

Bozkurt, A., Xiao, J., Lambert, S., Pazurek, A., Crompton, H., Koseoglu, S., Farrow, R., Bond, M., Nerantzi, C., Honeychurch, S., Bali, M., Dron, J., Mir, K., Stewart, B., Costello, E., Mason, J., Stracke, C. M., Romero-Hall, E., Koutropoulos, A., Toquero, C. M., Singh, L., Tlili, A., Lee, K., Nichols, M., Ossiannilsson, E., Brown, M., Irvine, V., Raffaghelli, J. E., Santos-Hermosa, G., Farrell, O., Adam, T., Thong, Y. L., Sani-Bozkurt, S., Sharma, R. C., Hrastinski, S., & Jandrić, P. (2023). Speculative futures on ChatGPT and generative artificial intelligence (AI): A collective reflection from the educational landscape. Asian Journal of Distance Education, 18(1), 53-130. https://doi.org/10.5281/zenodo.7636568

Originally posted at: https://landing.athabascau.ca/bookmarks/view/17699638/view-of-speculative-futures-on-chatgpt-and-generative-artificial-intelligence-ai-a-collective-reflection-from-the-educational-landscape

Technology, Teaching, and the Many Distances of Distance Learning | Journal of Open, Flexible and Distance Learning

I am pleased to announce my latest paper, published openly in the Journal of Open, Flexible and Distance Learning, which has long been one of my favourite distance and ed tech journals.

The paper starts with an abbreviated argument about the technological nature of education drawn from my forthcoming book, How Education Works, zooming in on the distributed teaching aspect of that, leading to a conclusion that the notion of “distance” as a measure of the relationship between a learner and their teacher/institution is not very useful when there might be countless teachers at countless distances involved.

I go on to explore a number of alternative ways we might conceptualize distance, some familiar, some less so, not so much because I think they are any better than (say) transactional distance, but to draw attention to the complexity, fuzziness, and fragility of the concept. However, I find some of them quite appealing: I am particularly pleased with the idea of inverting the various presences in the Community of Inquiry model (and extensions of it). Teaching, cognitive, and social (and emotional and agency) distances and presences essentially measure the same things in the same way, but the shift in perspective subtly changes the narratives we might build around them. I could probably write a paper on each kind of distance I provide, but each gets a paragraph or two because what it is all leading towards is an idea that I think has some more useful legs: technological distance.

I’m still developing this idea, and have just submitted another paper that tries to unpack it a bit more, so don’t expect something fully-formed just yet – I welcome discussion and debate on its value, meaning, and usefulness. Basically, technological distance is a measure of the gaps left between the technologies (including cognitive tools in learners’ own minds, what teachers orchestrate, textbooks, digital tools, etc, etc) that the learner has to fill in order to learn something. This is not just about the subject matter – it’s about the mill (how we learn) as well as the grist (what we learn). There are lots of ways to reduce that distance, many of which are good for learning, but some of which undermine it by effectively providing what Dave Cormier delightfully describes as autotune for knowledge. The technologies provide the knowledge so learners don’t have to engage with or connect it themselves. This is not always a bad thing – architects may not need drafting skills, for instance, if they are only ever going to use CAD, memorization of facts easily discovered might not always be essential, and we will most likely see ubiquitous generative AI as part of our toolset now and in the future – but choosing what to learn is one reason teachers (who/whatever they are) can be useful. Effective teaching is about making the right things soft so the process itself teaches. However, as what needs to be soft is different for every person on the planet, we need to make learning (of ourselves or others) visible in order to know that. It’s not science – it’s technology. That means that invention, surprise, creativity, passion, and many other situated things matter.

My paper is nicely juxtaposed in the journal with one from Simon Paul Atkinson, which addresses definitions of “open”, “distance” and “flexible” that, funnily enough, was my first idea for a topic when I was invited to submit my paper. If you read both, I think you’ll see that Simon and I might see the issue quite differently, but his is a fine paper making some excellent points.

Abstract

The “distance” in “distance learning”, however it is defined, normally refers to a gap between a learner and their teacher(s), typically in a formal context. In this paper I take a slightly different view. The paper begins with an argument that teaching is fundamentally a technological process. It is, though, a vastly complex, massively distributed technology in which the most important parts are enacted idiosyncratically by vast numbers of people, both present and distant in time and space, who not only use technologies but also participate creatively in their enactment. Through the techniques we use we are co-participants in not just technologies but the learning of ourselves and others, and hence in the collective intelligence of those around us and, ultimately, that of our species. We are all teachers. There is therefore not one distance between learner and teacher in any act of deliberate learning, but many. I go on to speculate on alternative ways of understanding distance in terms of the physical, temporal, structural, agency, social, emotional, cognitive, cultural, pedagogical, and technological gaps that may exist between learners and their many teachers. And I conclude with some broad suggestions about ways to reduce these many distances.


Originally posted at: https://landing.athabascau.ca/bookmarks/view/17293757/my-latest-paper-technology-teaching-and-the-many-distances-of-distance-learning-journal-of-open-flexible-and-distance-learning

Petition · Athabasca University – Oppose direct political interference in universities · Change.org

https://www.change.org/p/athabasca-university-oppose-direct-political-interference-in-universities

I, like many staff and students, have been deeply shaken and outraged by recent events at Athabasca University. This is a petition by me and Simon Buckingham Shum, of the University of Technology Sydney, Australia, to protest the blatant interference by the Albertan government in the affairs of AU over the past year, which culminated in the firing of its president, Professor Peter Scott, without reason or notice. Even prior to this, the actions of the Albertan government had been described by Glen Jones (Professor of Higher Education, University of Toronto) as “the most egregious political interference in a public university in Canada in more than 100 years”. This was an assault on our university, an assault on the very notion of a public university, and it sets a disturbing precedent that cannot stand unopposed.

We invite you to view this brief summary, and consider signing this petition to signal your concern. Please feel more than free to pass this on to anyone and everyone – it is an international petition that has already been signed by many, both within and beyond the AU community.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/17102318/petition-%C2%B7-athabasca-university-oppose-direct-political-interference-in-universities-%C2%B7-changeorg

Proceedings of The Open/Technology in Education, Society, and Scholarship Association Conference, 2022 (and call for proposals for this year’s conference, due January 31)

https://conference.otessa.org/index.php/conference/issue/view/3

These are the proceedings of OTESSA ’22. There’s a good mix of research/theory and practice papers, including one from me, Rory McGreal, Vive Kumar, and Jennifer Davies arising from our work on trying to use digital landmarks to make e-texts more memorable.

It was a great conference, held entirely online but at least as engaging and with as many opportunities for networking, personal interaction, and community building (including musical and dance sessions) as many that I’ve attended held in person. Kudos to the organizers.

This year’s conference will be held both in Toronto and online, from May 27-June 2. The in-person/blended part of the conference is from May 29-31, the rest is online. The deadline for proposals is January 31st, which is dauntingly close. However, only 250-500 words are needed for a research-oriented or practice-oriented proposal. If you wish to publish as well, you can submit a proceeding file (1000-2000 words – or media) now or at any later date. Here’s the link for submissions.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/16754483/proceedings-of-the-opentechnology-in-education-society-and-scholarship-association-conference-2022-and-call-for-proposals-for-this-years-conference-due-january-31

Hot off the press: Handbook of Open, Distance and Digital Education (open access)

https://link.springer.com/referencework/10.1007/978-981-19-2080-6

This might be the most important book in the field of open, distance, and digital education to be published this decade. Congratulations to Olaf Zawacki-Richter and Insung Jung, the editors, as well as to all the section editors, for assembling a truly remarkable compendium of pretty much everything anyone would need to know on the subject. It includes chapters written by a very high proportion of the most well-known and influential researchers and practitioners on the planet as well as a few lesser known folk along for the ride like me (I have a couple of chapters, both cowritten with Terry Anderson, who is one of those top researchers). Athabasca University makes a pretty good showing in the list of authors and in works referenced. In keeping with the subject matter, it is published by Springer as an open access volume, but even the hardcover version is remarkably good value (US$60) for something of this size.

The book is divided into six broad sections (plus an introduction), each of which is a decent book in itself, covering the following topics:

  • History, Theory and Research,
  • Global Perspectives and Internationalization,
  • Organization, Leadership and Change,
  • Infrastructure, Quality Assurance and Support Systems,
  • Learners, Teachers, Media and Technology, and
  • Design, Delivery, and Assessment

There’s no way I’m likely to read all of its 1400+ pages in the near future, but there is so much in it from so many remarkable people that it is going to be a point of reference for me for years to come. I’m really going to enjoy dipping into this.

If you’re interested, the chapters that Terry and I wrote are on Pedagogical Paradigms in Open and Distance Education and Informal Learning in Digital Contexts. A special shoutout to Junhong Xiao for all his help with these.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/16584686/hot-off-the-press-handbook-of-open-distance-and-digital-education-open-access