Slides from my SITE keynote, 2024: The Intertwingled Teacher

Photo of Jon holding a photo of Jon

These are the slides from my opening keynote at SITE ’24 today, at Planet Hollywood in Las Vegas. The talk was based closely on some of the main ideas in How Education Works. I’d written an over-ambitious abstract promising answers to many questions and concerns, which I did just about cover, but far too broadly. As a counterbalance, therefore, I tried to keep the focus on a single message – ’tain’t what you do, it’s the way that you do it (which is the epigraph for the book) – and, because it was Vegas, I felt that I had to do a show, so I ended the session with a short ukulele version of the song of that name. I had fun, and a few people tried to sing along. The keynote conversation that followed was most enjoyable – wonderful people with wonderful ideas, and the hour allotted to it gave us time to explore all of them.

Here is that bloated abstract:

Abstract: All of us are learning technologists, teaching others through the use of technologies, be they language, whiteboards, and pencils or computers, apps, and networks. We are all part of a vast, technology-mediated cognitive web in which a cast of millions – in formal education including not just teachers but textbook authors, media producers, architects, software designers, system administrators, and, above all, learners themselves – co-participates in creating an endless, richly entwined tapestry of learning. This tapestry spreads far beyond formal acts of teaching, far back in time, and far into the future, weaving in and helping to form not just the learning of individuals but the collective intelligence of the whole human race. Everyone’s learning journey both differs from and is intertwingled with that of everyone else. Education is an overwhelmingly complex and unpredictable technological system in which coarse patterns and average effects can be found but in which, except in the most rigid, invariant, minor details, accurate individual predictions cannot be made. No learner is average, and outcomes are always greater than what is intended. The beat of a butterfly’s wing in Timbuktu can radically affect the experience of a learner in Toronto. A slight variation in tone of voice can make all the difference between a life-transforming learning experience and a lifelong aversion to a subject. Beautifully crafted, research-informed teaching methods can be completely ineffective, while poor teaching, or even the absence of it, can result in profoundly effective learning. For all our efforts to understand and control it, education as a technological process is far closer to art than to engineering. What we do is usually far less significant than the idiosyncratic way that we do it, and how much we care for the subject, our students, and our craft is often far more important than the pedagogical methods we use.
In this talk I will discuss what all of this implies for how we should teach, for how we understand teaching, and for how we research the massively intertwingled processes and tools of teaching. Along the way I will explain why there is no significant difference between measured outcomes of online and in-person learning, the futility of teaching to learning styles, the reason for the 2-sigma advantage of personal tuition, the surprising commonalities between behaviourist, cognitivist, and constructivist models of learning and teaching, the nature of literacies, and the failure of reductive research methods in education. It will be fun.

▶ I got air: interview with Terry Greene

Since 2018, Terry Greene has been producing a wonderful series of podcast interviews with open and online learning researchers and practitioners called Getting Air. Prompted by the publication of How Education Works (Terry is also responsible for the musical version of the book, so I think he likes it), this week’s episode features an interview with me.

I probably should have been better prepared. Terry asked some probing, well-informed, and sometimes disarming questions, most of which led to me rambling more than I might have done if I’d thought about them in advance. It was fun, though, drifting through a broad range of topics from the nature of technology to music to the perils of generative AI (of course).

I hope that Terry does call his PhD dissertation “Getting rid of instructional designers”.

Presentation – Generative AIs in Learning & Teaching: the Case Against

Here are the slides from my presentation at AU’s Lunch ’n’ Learn session today. The presentation itself took 20 minutes and was followed by a wonderfully lively and thoughtful conversation for another 40 minutes, though the session was only scheduled for half an hour. Thanks to all who attended for a very enjoyable discussion!

Self-portrait of ChatGPT, showing an androgynous human face overlaid with circuits

The arguments made in this were mostly derived from my recent paper on the subject (Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020) but, despite the title, my point was not to reject the use of generative AIs at all. The central message I was hoping to get across was a simpler and more important one: to encourage attendees to think about what education is for, and what we would like it to be. As the slides suggest, I believe that is only partially to do with the objectives and outcomes we set out to achieve, that it is nothing much at all to do with the products of the system such as grades and credentials, and that a focus on those mechanical aspects of the system often creates obstacles to achieving it. Beyond those easily measured things, education is about the values, beliefs, attitudes, relationships, and development of humans and their societies. It’s about ways of being, not just the capacity to do stuff. It’s about developing humans, not (just) developing skills. My hope is that the disruptions caused by generative AIs are encouraging us to think like the Amish, and to place greater value on the things we cannot measure. These are good conversations to have.

▶ How Education Works, the audio book: now with beats

My book has been set to music!

Many thanks to Terry Greene for converting How Education Works into the second in his inspired series of podcasts, EZ Learning – Audio Books with Beats. There’s a total of 15 episodes that can be listened to online, subscribed to with your preferred podcast app, or downloaded for later listening, read by a computer-generated voice and accompanied by some cool, soothing beats.

Terry chose a deep North American voice for the reader and Eaters In Coffeeshops Mix 1 by Eaters to accompany my book. I reckon it works really well. It’s bizarre, at first – the soothing robotic voice introduces weird pauses, mispronunciations, and curious emphases, and there are occasional voice parts in the music that can be slightly distracting – but you soon get used to it if you relax into the rhythm, and it leads to the odd serendipitous emphasis that enhances rather than detracts from the text. Oddly, in some ways it almost feels more human as a result. Though it can be a bit disconcerting at times and there’s a fair chance of being lulled to sleep by the gentle rhythm, I have a hunch that the addition of music might make it easier to remember passages from it, for reasons discussed in a paper I wrote with Rory McGreal, Vive Kumar, and Jennifer Davies a year or so ago.

I have been slowly and painfully working on a manually performed audiobook of How Education Works but it is taking much longer than expected thanks to living on the flight path of a surprising number of float planes, being in a city built on a rain forest with a noisy gutter outside my window, having two very vocal cats, and so on, not to mention not having a lot of free time to work on it, so I am very pleased that Terry has done this. I won’t stop working on the human-read version – I think this fills a different and very complementary niche – but it’s great to have something to point people towards when they ask for an audio version.

The first season of Audio Books with Beats, appearing in the feed after the podcasts for my book chapters, was another AU Press book, Terry Anderson’s Theory and Practice of Online Learning, which is also well worth a listen – those chapters follow directly from mine in the list of episodes. I hope and expect there will be more seasons to come so, if you are reading this some time after it was posted, you may need to scroll down through other podcasts until you reach the How Education Works episodes. In case they’re hard to find, here’s a list of direct links to the episodes.

Acknowledgements, Prologue, Introduction

Chapter 1: A Handful of Anecdotes About Elephants

Chapter 2: A Handful of Observations About Elephants

Part I: All About Technology

Chapter 3: Organizing Stuff to Do Stuff

Chapter 4: How Technologies Work

Chapter 5: Participation and Technique

Part II: Education as a Technological System

Chapter 6: A Co-Participation Model of Teaching

Chapter 7: Theories of Teaching

Chapter 8: Technique, Expertise, and Literacy

Part III: Applying the Co-Participation Model

Chapter 9: Revealing Elephants

Chapter 10: How Education Works

Epilogue

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20936998/%E2%96%B6-how-education-works-the-audio-book-now-with-beats

Recording and slides from my ESET 2023 keynote: Artificial humanity and human artificiality

Here are the slides from my keynote at ESET23 in Taiwan (I was online, alas, not in Taipei!).

Here’s a recording of the actual keynote.

The themes of my talk will be familiar to anyone who follows my blog or who has read my recent paper on the subject. This is about applying the coparticipation theory from How Education Works to generative AI, raising concerns about the ways it mimics the soft technique of humans, and discussing how problematic that will be if the skills it replaces atrophy or are never learned in the first place, amongst other issues.

This is the abstract:

We are participants in, not just users of technologies. Sometimes we participate as orchestrators (for instance, when choosing words that we write) and sometimes as part of the orchestration (for instance, when spelling those words correctly). Usually, we play both roles.  When we automate aspects of technologies in which we are just parts of the orchestration, it frees us up to be able to orchestrate more, to do creative and problem-solving tasks, while our tools perform the hard, mechanical tasks better, more consistently, and faster than we could ourselves. Collectively and individually, we therefore become smarter. Generative AIs are the first of our technologies to successfully automate those soft, open-ended, creative cognitive tasks. If we lack sufficient time and/or knowledge to do what they do ourselves, they are like tireless, endlessly flexible personal assistants, expanding what we can do alone. If we cannot draw, or draw up a rental agreement, say, an AI will do it for us, so we may get on with other things. Teachers are therefore scrambling to use AIs to assist in their teaching as fast as students use AIs to assist with their assessments.

For achieving measurable learning outcomes, AIs are or will be effective teachers, opening up greater learning opportunities that are more personalized, at lower cost, in ways that are superior to average human teachers.  But human teachers, be they professionals, other students, or authors of websites, do more than help learners to achieve measurable outcomes. They model ways of thinking, ways of being, tacit knowledge, and values: things that make us human. Education is a preparation to participate in human cultures, not just a means of imparting economically valuable skills. What will happen as we increasingly learn those ways of being from a machine? If machines can replicate skills like drawing, reasoning, writing, and planning, will humans need to learn them at all? Are there aspects of those skills that must not atrophy, and what will happen to us at a global scale if we lose them? What parts of our cognition should we allow AIs to replace? What kinds of credentials, if any, will be needed? In this talk I will use the theory presented in my latest book, How Education Works: Teaching, Technology, and Technique to provide a framework for exploring why, how, and for what purpose our educational institutions exist, and what the future may hold for them.

Pre-conference background reading, including the book, articles, and blog posts on generative AI and education may be found linked from https://howeducationworks.ca

Preprint – The human nature of generative AIs and the technological nature of humanity: implications for education

Here is a preprint of a paper I just submitted to MDPI’s Digital journal that applies the co-participation model that underpins How Education Works (and a number of my papers over the last few years) to generative AIs (GAIs). I don’t know whether it will be accepted and, even if it is, it is very likely that some changes will be required. This is a warts-and-all raw first submission. It’s fairly long (around 10,000 words).

The central observation around which the paper revolves is that, for the first time in the history of technology, recent generations of GAIs automate (or at least appear to automate) the soft technique that has, till now, been the sole domain of humans. Up until now, every technology we have ever created, be it physically instantiated, cognitive, organizational, structural, or conceptual, has left all of the soft part of the orchestration to human beings.

The fact that GAIs replicate the soft stuff is a matter for some concern when they start to play a role in education, mainly because:

  • the skills they replace may atrophy or never be learned in the first place. This is not even slightly like replacing hard skills such as handwriting or arithmetic: we are talking about skills like creativity, problem-solving, critical inquiry, design, and so on. We’re talking about the very stuff that GAIs are trained on.
  • the AIs themselves are an amalgam, an embodiment of our collective intelligence, not actual people. You can spin up any kind of persona you like and discard it just as easily. Much of the crucially important hidden/tacit curriculum of education is concerned with relationships, identity, ways of thinking, ways of being, ways of working and playing with others. It’s about learning to be human in a human society. It is therefore quite problematic to delegate how we learn to be human to a machine with (literally and figuratively) no skin in the game, trained on a bunch of signals signifying nothing but more signals.

On the other hand, to not use them in educational systems would be as stupid as to not use writing. These technologies are now parts of our extended cognition, intertwingled with our collective intelligence as much as any other technology, so of course they must be integrated in our educational systems. The big questions are not about whether we should embrace them but how, and what soft skills they might replace that we wish to preserve or develop. I hope that we will value real humans and their inventions more, rather than less, though I fear that, as long as we retain the main structural features of our education systems without significant adjustments to how they work, we will no longer care, and we may lose some of our capacity for caring.

I suggest a few ways we might avert some of the greatest risks by, for instance, treating them as partners/contractors/team members rather than tools, by avoiding methods of “personalization” that simply reinforce existing power imbalances and pedagogies designed for better indoctrination, by using them to help connect us and support human relationships, by doing what we can to reduce extrinsic drivers, by decoupling learning and credentials, and by doubling down on the social aspects of learning. There is also an undeniable explosion in adjacent possibles, leading to new skills to learn, new ways to be creative, and new possibilities for opening up education to more people. The potential paths we might take from now on are unprestatable and multifarious but, once we start down them, resulting path dependencies may lead us into great calamity at least as easily as they may expand our potential. We need to make wise decisions now, while we still have the wisdom to make them.

MDPI invited me to submit this article free of their normal article processing charge (APC). The fact that I accepted is therefore very much not an endorsement of APCs, though I respect MDPI’s willingness to accommodate those who find payment difficult, the good editorial services they provide, and the fact that all they publish is open. I was not previously familiar with the Digital journal itself. It has been publishing four issues a year since 2021, mostly offering a mix of reports on application designs and literature reviews. The quality seems good.

Abstract

This paper applies a theoretical model to analyze the ways that widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of human participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing the soft and hard techniques of humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain: the very things that technologies enabled us to do can now be done by the technologies themselves. The consequences for what, how, and even whether we learn are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/20512771/preprint-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

The artificial curriculum

evolving into a robot

“Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings” by Simone Grassini is a well-researched, concise but comprehensive overview of the state of play for generative AI (GAI) in education. It gives a very good overview of current uses, by faculty and students, and provides a thoughtful discussion of issues and concerns arising. It addresses technical, ethical, and pragmatic concerns across a broad spectrum. If you want a great summary of where we are now, with tons of research-informed suggestions as to what to do about it, this is a very worthwhile read.

However, underpinning much of the discussion is an implied (and I suspect unintentional) assumption that education is primarily concerned with achieving and measuring explicit specified outcomes. This is particularly obvious in the discussions of ways GAIs can “assist” with instruction. I have a problem with that.

There has been an increasing trend in recent decades towards the mechanization of education: modularizing rather than integrating, measuring what can be easily measured, creating efficiencies, focusing on an end goal of feeding industry, and so on. It has resulted in a classic case of the McNamara Fallacy, which starts with the laudable goal of measuring success, as much as we are able, and ends with that measure defining success, to the exclusion of anything we do not or cannot measure. Learning becomes the achievement of measured outcomes.

It is true that consistent, measurable, hard techniques must be learned to achieve almost anything in life, and that it takes sustained effort and study, with which educators can and should help, to achieve most of them. Measurable learning outcomes and what we do with them matter. However, the more profound and, I believe, more important ends of education, regardless of the subject, are concerned with ways of being in the world, with other humans. It is the tacit curriculum that ultimately matters more: how education affects our attitudes and values, the ways we can adapt, how we can create, how we make connections, pursue our dreams, live fulfilling lives, and engage with our fellow humans as parts of cultures and societies.

By definition, the tacit curriculum cannot be meaningfully expressed in learning outcomes or measured on a uniform scale. It can be expressed only obliquely, if it can be expressed at all, in words. It is largely emergent and relational, expressed in how we are, interacting with one another, not as measurable functions that describe what we can do. It is complex, situated, and idiosyncratic. It is about learning to be human, not achieving credentials.

Returning to the topic of AI, to learn to be human from a blurry JPEG of the web, or autotune for knowledge, especially given the fact that training sets will increasingly be trained on the output of earlier training sets, seems to me to be a very bad idea indeed.

The real difficulty that teachers face is not that students solve the problems set to them using large language models, but that in so doing they bypass the process, thus avoiding the tacit learning outcomes we cannot or choose not to measure. And the real difficulty that those students face is that, in delegating the teaching process to an AI, their teachers are bypassing the teaching process, thus failing to support the learning of those tacit outcomes or, at best, providing an averaged-out caricature of them. If we heedlessly continue along this path, it will wind up with machines teaching machines, with humans largely playing the roles of cogs and switches in them.

Some might argue that, if the machines do a good enough job of mimicry, then it really doesn’t matter that they happen to be statistical models with no feelings, no intentions, no connection, and no agency. I disagree. Just as it makes a difference whether a painting ascribed to Picasso is a fake or not, or whether a letter is faxed or delivered through the post, or whether this particular guitar was played by John Lennon, it matters that real humans are on each side of a learning transaction. It means something different for an artifact to have been created by another human, even if the form of the exchange, in words or whatever, is the same. Current large language models have flaws, confidently spout falsehoods, fail to remember previous exchanges, and so on, so they are easy targets for criticism. However, I think it will be even worse when AIs are “better” teachers. When they seem to be endlessly tireless, patient, respectful, and responsive; when the help they give is unerringly accurate, personal, and targeted; when they draw on knowledge no one human could ever possess, they will not be modelling human behaviour. The best-case scenario is that they will not be teaching students how to be, just how to do, and that human teachers will provide the necessary tacit curriculum to support the human side of learning. However, the two are inseparable, so that is not particularly likely. The worst scenarios are that they will be teaching students how to be machines, or how to be an average human (with significant biases introduced by their training), or both.

And, frankly, if AIs are doing such a good job of it, then they are the ones who should be doing whatever it is that they are training students to do, not the students. This will most certainly happen: it already is (witness the current actors’ and screenwriters’ strikes). For all the disruption that results, it’s not necessarily a bad thing, because it increases the adjacent possible for everyone in so many ways. That’s why the illustration for this post was made to my instructions by Midjourney, not drawn by me. It does a much better job of it than I could.

In a rational world we would not simply incorporate AI into teaching as we have always taught. It makes no more sense to let it replace teachers than it does to let it replace students. We really need to rethink what and why we are teaching in the first place. Unfortunately, such reinvention is rarely if ever how technology works. Technology evolves by assembly with and in the context of other technology, which is how come we have inherited mediaeval solutions to indoctrination as a fundamental mainstay of all modern education (there’s a lot more about such things in my book, How Education Works if you want to know more about that). The upshot will be that, as we integrate rather than reinvent, we will keep on doing what we have always done, with a few changes to topics, a few adjustments in how we assess, and a few “efficiencies”, but we will barely notice that everything has changed because students will still be achieving the same kinds of measured outcomes.

I am not much persuaded by most apocalyptic visions of the potential threat of AI. I don’t think that AI is particularly likely to lead to the world ending with a bang, though it is true that more powerful tools do make it more likely that evil people will wield them. Artificial General Intelligence, though, especially anything resembling consciousness, is very little closer today than it was 50 years ago and most attempts to achieve it are barking in the wrong forest, let alone up the wrong tree. The more likely and more troubling scenario is that, as it embraces GAIs but fails to change how everything is done, the world will end with a whimper, a blandification, a leisurely death like that of lobsters in water coming slowly to a boil. The sad thing is that, by then, with our continued focus on just those things we measure, we may not even notice it is happening. The sadder thing still is that, perhaps, it already is happening.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/19390937/the-artificial-curriculum

Look what just arrived on my doorstep! #howeducationworks from @au_press is now available in print and e-book formats

Photo of hard copies of How Education Works

Hard copies and e-book versions of How Education Works are now available, and they are starting to turn up in bookstores. The recommended retail price is CAD$40 but Amazon is selling the Kindle version for a bit less.

Here are a few outlets that are selling it (or order it from your local independent bookstore!):

AU Press (CA)

Barnes & Noble (US)

Blackwells (UK)

Amazon (CA)

Amazon (JP)

University of Chicago Press (US)

Indigo (CA)

Booktopia (AU)

For those wanting to try before they buy or who cannot afford/do not want the paper or e-book versions, you can read it for free online, or download a PDF of the whole book.

The publishers see this as mainly targeted at professional teachers and educational researchers, but those are far from the only audiences I had in mind as I was writing it. Apart from anything else, one of the central claims of the book is that literally everyone is a teacher.  But it’s as much a book about the nature of technology as it is about education, and as much about the nature of knowledge as it is about how that knowledge is acquired. If you’re interested in how we come to know stuff, how technologies work, or how to think about what makes us (individually and collectively) smart, there’s something in the book for you. It’s a work of philosophy as much as it is a book of practical advice, and it’s about a way of thinking and being at least as much as it is about the formal practice of education. That said, it certainly does contain some ideas and recommendations that do have practical value for educators and educational researchers. There’s just more to it than that.

I cannot begin to express how pleased I am that, after more than 10 years of intermittent work, I finally have the finished article in my hands. I hope you get a chance to read it, in whatever format works for you! I’ll end this post with a quote, that happens to be the final paragraph of the book…

“If this book has helped you, however slightly, to think about what you know and how you have come to know it a little differently, then it has been a successful learning technology. In fact, even if you hold to all of your previous beliefs and this book has challenged you to defend them, then it has worked just fine too. Even if you disagreed with or misunderstood everything that I said, and even if you disliked the way that I presented it, it might still have been an effective learning technology, even though the learning that I hoped for did not come about. But I am not the one who matters the most here. This is layer upon layer of technology, and in some sense, for some technology, it has done what that technology should do. The book has conveyed words that, even if not understood as I intended them to be, even if not accepted, even if rabidly disagreed with, have done something for your learning. You are a different person now from the person you were when you started reading this book because everything that we do changes us. I do not know how it has changed you, but your mind is not the same as it was before, and ultimately the collectives in which you participate will not be the same either. The technology of print production, a spoken word, a pattern of pixels on a screen, or dots on a braille reader has, I hope, enabled you, at least on occasion, to think, criticize, acknowledge, recognize, synthesize, and react in ways that might have some value in consolidating or extending or even changing what you already know. As a result of bits and bytes flowing over an ether from my fingertips to whatever this page might be to you, knowledge (however obscure or counter to my intentions) has been created in the world, and learning has happened. For all the complexities and issues that emerge from that simple fact, one thing is absolutely certain: this is good.”

 

 

A decade of unwriting: the life history of "How Education Works"

How Education Works book cover

About 10 years ago I submitted the first draft of a book called “How Learning Technologies Work” to AU Press. The title was a nod to David Byrne’s wonderful book, “How Music Works”, which is about much more than just music, just as mine was about much more than learning technologies.

The book pulled together ideas I had been thinking about for a few years and took me only a few months to write, mostly at the tail end of my sabbatical. I was quite pleased with it. The internal reviewers were positive too, though they suggested a number of sensible revisions, including clarifying some confusing arguments and a bit of restructuring. In the interests of marketing, they also recommended a change to the title because, though it accurately described the book’s contents, I was not using “learning technologies” in its mainstream sense at all (for me, poetry, pedagogies, and prayer are as much technologies as pots, potentiometers, and practices), so it would appeal to only a small subset of its intended audience. They were also a bit concerned that it would be hard to find an audience even with a better title because it was at least as much a book about the nature of technology as it was a book about learning, so it would fall between two possible markets, potentially appealing to neither.

A few months later, I had written a new revision that addressed most of the reviewers’ recommendations and concerns, though it still lacked a good title. I could have submitted it then. However, in the process of disentangling those confusing arguments, I had realized that the soft/hard technology distinction on which much of the book rested was far less well-defined than I had imagined, and that some of the conclusions that I had drawn from it were just plain wrong. The more I thought about it, the less happy I felt.

And so began the first of a series of substantial rewrites. However, my teaching load was very high, and I had lots of other stuff to do, so progress was slow. I was still rewriting it when I unwisely became Chair of my department in 2016, which almost brought the whole project to a halt for another 3 years. Despite that, by the time my tenure as Chair ended, the book had grown to around double its original (not insubstantial) length, and the theory was starting to look coherent, though I had yet to make the final leap that made sense of it all.

By 2019, as I started another sabbatical, I had decided to split the book into two. I put the stuff that seemed useful for practitioners into a new book,  “Education: an owner’s manual”, leaving the explanatory and predictive theory in its own book, now grandiosely titled “How Education Works”, and worked on both simultaneously. Each grew to a few hundred pages.

Neither worked particularly well. It was really difficult to keep the theory out of the practical book, and the theoretical work was horribly dry without the stories and examples to make sense of it. The theory, though, at last made sense, albeit that I struggled (and failed) to give it a catchy name. The solution was infuriatingly obvious. In all my talks on the subject my catchphrase from the start had been “’tain’t what you do, it’s the way that you do it, that’s what gets results” (it’s the epigraph for the book), so it was always implicit that softness and hardness are not characteristics of all technologies, as such, nor even of their assemblies, but of the ways that we participate in their orchestration. Essentially, what matters is technique: the roles we play as parts of the orchestration or orchestrators of it. That’s where the magic happens.

But now I had two mediocre books that were going nowhere. Fearing I was about to wind up with two unfinished and/or unsellable books, about halfway through my sabbatical I brutally slashed over half the chapters from both, pasted the remains together, and spent much of the time I had left filling in the cracks in the resulting bricolage.

I finally submitted “How Education Works: Teaching, Technology, and Technique” in the closing hours of 2020, accompanied by a new proposal because, though it shared a theme and a few words with the original, it was a very different book.

Along the way I had written over a million words, only around a tenth of which made it into what I sent to AU Press. I had spent the vast majority of my authoring time unwriting rather than writing the book and, with each word I wrote or unwrote, the book had written me, as much as I had written it. The book is as much a part of my cognition as a product of it.

And now, at last, it can be part of yours.

30 months after it was submitted – I won’t go into the reasons except to say that the process has been very frustrating – the book is finally available as a free PDF download or to read on the Web. If all goes to plan, the paper and e-book versions should arrive June 27th, 2023, and can be pre-ordered now.

It is still a book about technology at least as much as it is about education (very broadly defined), albeit that it is now firmly situated in the latter. It has to be both because among the central points I’m making are that we are part-technology and technology is part-us, that cognition is (in part) technology and technology is (in part) cognition, and that education is a fundamentally technological and thus fundamentally human activity. It’s all one complex, hugely distributed, recursive intertwingularity in which we and our technological creations are all co-participants in the cognition and learning of ourselves and one another.

During the 30 months AU Press has had the book I have noticed a thousand different ways it could be improved, and I don’t love all of the edits made to it along the way (by me and others), but I reckon it does what I want it to do, and 10 years is long enough.

It’s time to start another.

A few places you can buy the book

AU Press (CA)

Barnes & Noble (US)

Blackwells (UK)

Amazon (CA)

Amazon (JP)

University of Chicago Press (US)

Indigo (CA)

Booktopia (AU)

Technological distance – my slides from OTESSA ’23

Technological Distance

Here are the slides from my talk today at OTESSA ’23. Technological distance is a way of understanding distance that fits with modern complexivist models of learning such as Connectivism, Heutagogy, Networks/Communities of Practice/Rhizomatic Learning, and so on. In such a model, there are potentially thousands of distances – whether understood as psychological, transactional, social, cognitive, physical, temporal, or whatever – so conventional views of distance as a gap between learner and teacher (or institution or other students) are woefully inadequate.

I frame technological distance as a gap between the technologies learners have (including cognitive gadgets, skills, techniques, etc., as well as physical, organizational, or procedural technologies) and those they need in order to learn. It is a little bit like Vygotsky’s Zone of Proximal Development but re-imagined and extended to incorporate all the many technologies, structures, and people that may be involved in the teaching gestalt.

The model of technology that I use to explain the idea is based on the coparticipation perspective presented in my book, which, with luck, should be out within the next week or two. The talk ends with a brief discussion of the main implications for those whose job it is to teach.

Thanks to MidJourney for collaborating with me to produce the images used in the slides.
