Democratech: reflections on the human nature of blockchain

At short notice I was invited to be guest of honour and keynote speaker at Bennett University’s International Conference on Blockchain for Inclusive and Representative Democracy yesterday. I was not able to attend the entire conference – my opening keynote was at 9:30pm here in Vancouver and I eventually needed to sleep – but I made it for a few hours. I was impressed with the diversity and breadth of the work going on, mainly in India, and the passionate, smart people in attendance. It was a particular pleasure to hear from Ramesh Sharma, whom I have known for many years in an online learning context, here speaking of very different things, and I really loved the ceremonial lighting of the lantern – the sharing of the light – with which the conference began. It is a powerful and connecting metaphor.

Like most geeks I do have the occasional thought about blockchain and democracy but I can’t describe myself as an expert or even an enthusiastic amateur in either field. So, rather than speaking about things the delegates knew far more about than I, and given the compressed time-frame for preparing the keynote, I chose to ground the talk in familiar territory, taking a broad-brush view of how to think of the technological ecosystem into which the technologies must fit. It led to some new thoughts here and there: in particular, I rather like the idea of technologies in general acting as a kind of distributed ledger of human cognition. The result was these slides – Democratech: reflections on the human nature of blockchain.

In rough note form (not a polished academic work and not particularly coherent!), the text below is approximately what I spoke about for each of the slides:

1 In this talk I will be using ideas from my most recent book: here it is. You can download it for free or buy it in paper or electronic form if you wish. See http://teachingcrowds.ca. It is at least as  much about the nature of technology as it is about the nature of education, and that’s what I want to talk about today: what kind of a technology is blockchain, and why does it matter?

2 “Technology” is a fuzzy term that can mean many things to different people. I spend a whole chapter in the book exploring many definitions of what “technology” means. To save time, I am going to use what I conclude to be the best definition, from Brian Arthur: “orchestrating phenomena to our use”.

3 I prefer to think of this as “organizing stuff to do stuff”, because it makes it clearer that the stuff that it organizes nearly always includes stuff already organized to do stuff: as Arthur observes, almost all if not all technologies are assemblies of other technologies, at least when they are put to use.

Technologies are made of technologies, at every scale, and they are parts of webs of technologies that stretch far into time and space. Kevin Kelly calls this massively interconnected network the technium. And, as he puts it, technology can be thought of as both a thing and a verb or, as Ursula Franklin puts it, fish and water – a slippery thing to pin down. It is something we do and something we have done. In fact it is typically both.

4 By this definition, democracies are technologies too – in fact, hugely complex assemblies of technologies. They orchestrate phenomena using systems, physical objects, and assemblies of them, to approximate a fair voice for all in the governance of where we dwell. So are words, and language, and, as Franklin notes, there are technologies of prayer.

5 If you take nothing else from this speech, take this: only the whole assembly matters. The parts are very important to the designer and make a big difference to how a technology works and is experienced, but it is how the parts are assembled and act together that makes the technology as it is experienced, as it is instantiated. That includes what we do with them – more on that in a moment.

If you are not convinced, think about some of the parts of the computer you are looking at now: some are sharp, some contain harmful chemicals, and there’s a good chance that there is a deadly amount of  electricity flowing through them, and yet we gain benefit from them, not loss of life, because we assemble them in ways that (at least normally) eliminate the harm by adding technologies to prevent it: counter technologies. Often, a large part of what we recognize as a technology is in fact a counter technology to other parts of it – think of cars, for example, where many of the components are simply there to stop other components blowing up, seizing, or killing people.

6 Technologies create what Stuart Kauffman calls “adjacent possibles” – empty niches that further technologies can fill, individually or in conjunction with others, including others that already exist. Every new technology makes further technologies possible, adding new parts to new assemblies. This accounts for the exponential growth in technologies over the past 10000 years or so: technologies evolve from and with other technologies, almost never out of nothing.

Those adjacent possible empty niches are fundamentally unprestatable, as Kauffman puts it: no one can imagine all the possible assemblies into which we might put something as simple as a screwdriver. A stirrer of paint, a back scratcher, a scribe, a pointer, a stabbing weapon, a weight, a missile, a crow bar… And this is true of every technology. All can be assembled differently, in indefinitely many assemblies, to make indefinitely many wholes. This is true at the finest of scales. Though there may be some very close resemblances between instances, you have never written your own signature, nor washed your clothes, nor eaten your food the same way twice. Only machines can do that, but they are part of our technologies as much as we are part of them: the machine may behave consistently but the technology through which we use it – the instantiation in which we participate – most likely does not.

Technologies also come with path dependencies that can harden and distort assemblies, because the soft must shape itself around the hard. What exists shapes what can exist.

7, 8 When instantiated, we are participants in, not just users of, the technology. Using a technology is also a technology: whether we are organizing it or being part of the organization.

9, 10 We are coparticipants in a largely self-organizing web of technology that is part organic, part process, part physical object, part conceptual, part structural. Technologies democratize cognition though they also embed and harden values of the powerful, and the uses to which they are put are too often to subdue, constrain, or abuse our fellow humans. It is always important to remember that the technology that matters is seldom its most obvious components: it is the assembly they are in. As they are used, they are different technologies to everyone who uses them, because they are parts of different assemblies: the production line is a very different technology for its boss, its workers, its shareholders, the consumers of what it produces, orchestrating different phenomena to different users. This means that technologies – as instantiated – are never neutral. They have histories, contexts, and propensities.

11 And our input matters: it is not just the method but the way things are done that matters. Every assembly can be a creative assembly, and it is possible to do it well or badly. And so we all create new adjacent possibles for one another.  Through technologies we participate in the collective cognition of the human race: in effect, technologies form the distributed ledger of our shared cognition. But all of us assemble and interpret in the ways we use technology, whether we form part of it (hard technique) or are the organizers (soft technique).

12 Blockchain is a technology capable of achieving great good: potentially accountable but equally interesting in ways it can support anonymity, free from central control but also interesting in the context of an existing system of trust, good for both privacy and transparency, etc. It has indefinitely many adjacent possibles, from the exchange of property to the assertion of identity, from enabling reliable voting to making supply chains accountable.
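
(For readers who have never peered inside the component itself, a minimal, purely illustrative sketch may help ground this. The toy Python below is my own simplification, not anything from the talk: it shows how hash-chaining makes a ledger tamper-evident, since each block commits to its predecessor and altering any earlier record breaks every subsequent link.)

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block whose hash depends on its contents and on its predecessor's hash."""
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def is_valid(chain):
    """The chain is only valid if every block still points to an unaltered predecessor."""
    for prev, current in zip(chain, chain[1:]):
        body = {k: prev[k] for k in ("timestamp", "data", "previous_hash")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if prev["hash"] != recomputed or current["previous_hash"] != recomputed:
            return False
    return True

# A toy ledger of votes: the chain is valid until any earlier record is altered.
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"voter": "anon-1", "choice": "A"}, chain[-1]["hash"]))
chain.append(make_block({"voter": "anon-2", "choice": "B"}, chain[-1]["hash"]))
print(is_valid(chain))             # True
chain[1]["data"]["choice"] = "C"   # tamper with an already-recorded vote
print(is_valid(chain))             # False
```

The component is that simple; everything that makes blockchain interesting – consensus, distribution, incentives, smart contracts – is further technology assembled around this small core, which is rather the point of what follows.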

13 But all technologies are what Neil Postman called Faustian bargains. When you invent the ship you invent the shipwreck, as Paul Virilio put it. The story of The Monkey’s Paw, by W.W. Jacobs, is a tale of horror in which a monkey’s paw grants three wishes to a modest couple, who ask only to pay off their mortgage with their first wish. Moments later, they learn their son has died in a horrible accident at the factory where he works and that the company will pay compensation: the exact cost of the outstanding mortgage. And so the story goes on. Technologies are like that.

Blockchain can be subverted by organized crowds (botnets and human), malware, cracking, etc, and quantum computing means all bets are off about reliability and security. It is possible to lose votes as easily as it is to lose millions in bitcoin. Blockchain can conceal criminal activity, and, conversely, enable a level of surveillance never seen before. Remember, this is all about the assembly, and blockchain is a very versatile component. It’s a super-soft technology that connects many others. Blockchain makes new forms of democracy possible, but it also enables new forms of tyranny.

To understand blockchain we must understand the other technologies in the assemblies of which it forms only a part. Never forget that it is only ever the assembly that matters, not the parts. This is and has always been true of all the technologies of democracy. Paper voting, say, in its raw form is incredibly and fundamentally unreliable, prone to loss, error, abuse, corruption, coercion, loss of privacy, etc, and it is terribly, terribly inefficient and insecure. However, we throw in a lot of counter technologies – systems to assure reliability, safes, multiple counts, policing procedures, surveillance, electronic counts, observers, etc – and so the process is now so well evolved that it often enough works. Paper is not the technology of interest: it is the whole system that surrounds it. Same for blockchain.

14 Understanding technologies means we must know the adjacent possibles but, remember, we can only ever see the most brightly lit of these from where we currently stand. The creative potential, for both good and evil, is barely visible at all. Someone, somehow, somewhere, will find new assemblies that achieve their ends, whether it benefits all of us or not. Sadly, those most able are typically those least trustworthy, thanks to the fundamental inequalities of our societies that reward greed and that give most to those who already have most. Anything is weaponizable, including democracy, as (here in Canada) our neighbours south of the border are discovering to their cost. And it means understanding what happens at scale: the environmental impacts and the counter technologies to address them. But, as René Dubos put it, fixing problems with counter technologies is a philosophy of despair, because every counter technology we create is another Faustian bargain that creates new problems to solve, and new adjacent possibles we never foresaw.

15 We must understand where blockchain fits in the massive web of the collective technium – the Ricardian contracts, the oracles, the legal frameworks that surround them, the ZKP techniques, the privacy laws, the voting practices, the laws of ownership, and so on. It is unwise to simply drop it in as a replacement for what we already do because it will harden what should not be hardened – when we automate we tend to simplify – and create new relationships that may be incompatible with, or positively dangerous to, existing technologies of democracy. But, as we reinvent it, we must always remember the unprestatable adjacent possibles we create, the things we reinforce, the things we lose. And we must remember that someone, somewhere is seeing adjacent possibles we did not imagine, assemblies we have yet to conceive, and they may not be friendly to democratic ideals.

16 To understand this means we must look far beyond the bits and bytes and flashing lights; we must make empathetic leaps into the hearts and minds of our coparticipants in the technium. We are technologies, as much a part of blockchain as it is part of the broader web of the technium.

What kind of technologies do we want to be?

Just a metatool? Some thoughts on why generative AIs are not tools

Many people brush generative AI aside as being just a tool. ChatGPT describes itself as such (I asked). I think it’s more complicated than that, and this post is going to be an attempt to explain why. I’m not sure about much of what follows and welcome any thoughts you may have on whether this resonates with you and, if not, why not.

What makes something a tool

I think that to call something a tool is shorthand for it having all of the following 5 attributes:

  1. It is an object (physical, digital, cognitive, procedural, organizational, structural, conceptual, spiritual, etc. – i.e. the thing we normally identify as the tool),
  2. used with/designed for a purpose, that
  3. can extend the capabilities of an actor (an intelligent agent, typically human), who
  4. may perform an organized action or series of actions with it, that
  5. cause changes to a subject other than the tool itself (such as a foodstuff, a piece of paper, a mental state, or a configuration of bits).

More informally, less precisely, but perhaps more memorably:

A tool is something that an intelligent agent does something with in order to do something to something else

Let me unpack that a bit.

A pebble used as a knife sharpener is a tool, but one used to reinforce concrete is not. A pen used to write on paper is a tool, but the paper is not. The toolness in each case emerges from what the agent does and the fact that it is done to something, in order to achieve something (a sharp knife, some writing).

Any object we label as a tool can become part of another with different organization. A screwdriver can become an indefinitely large number of other tools  apart from one intended for driving screws. In fact, almost anything can become a tool with the right organization. The paper can be a tool if it is, say, used to scoop up dirt. And, when I say “paper”, remember that this is the label for the object I am calling a tool, but it is the purpose, what it does, how it is organized, and the subject it acts upon that makes it so.

It is not always easy to identify the “something else” that a tool affects. A saw used to cut wood is an archetypal tool, but a saw played with a bow to make music is, I think, not. Perhaps the bow is a tool, and maybe we could think of the saw as a tool acting on air molecules, but I think we tend to perceive it as the thing that is acted upon rather than the thing we do something with.

Toolness is intransitive: a computer may be a tool for running programs, and a program running on it may be a tool that fixes a corrupt disk, but a computer is not a tool for fixing a corrupt disk.

A great many tools are also technologies in their own right. The intention and technique of the tool maker combine with those of the tool user, so the tool user may achieve more (or more reliably, faster, more consistently, etc) than would be possible without both. A fountain pen adds more to the writing assembly than a quill, for instance, so demanding less of the writer. Many tools are partnerships of this nature, allowing the cognition of more than one person to be shared. This is the ratchet that makes humans smart.

Often, the organization performed by the maker of a technology entirely replaces that of the tool user. A dish sponge is a tool, but a dishwasher is not: it is an appliance. Some skill is needed to load it but the dishwashing itself – the purpose for which it is designed – is entirely managed by the machine.

The case is less clear for an appliance like, say, a vacuum cleaner. I think this is because there are two aspects to the device: the mechanism that autonomously sucks dirt is what makes it an appliance, but the hose (or whatever) used to select the dirt to be removed is a tool. This is reflected in common usage, inasmuch as a vacuum cleaner is normally sold with what are universally described as tools (i.e. the things that a person actively manipulates). The same distinction is still there in a handheld machine, too – in fact, many come with additional tools – though I would be much more comfortable describing the whole device as a tool, because that’s what is manipulated to suck up the dirt. Many power tools fit in this category: they do some of the work autonomously but they are still things people do something with in order to do something to something else.

Humans can occasionally be accurately described as tools: the movie Swiss Army Man, for instance, features Daniel Radcliffe as a corpse that turns out to have many highly inventive uses. For real live humans, though, the case is less clear.  Employees in scripted call centres, or teachers following scripted lesson plans are more like appliances than tools: having been “programmed”, they run autonomously, so the scripts may be tools but the people are not. Most other ways of using other people are even less tool-like. If I ask you to pick up some shopping for me, say, then my techniques of persuasion may be tools, but you are the one organizing phenomena to shop, which is the purpose in question.

The case is similar for sheepdogs (though they are not themselves tool users), which I would be reluctant to label as tools, though skills are clearly needed to make them do our bidding and they do serve tool-like purposes as part of the technology of shepherding. The tools, though, are the commands, methods of training, treats, and so on, not the animals themselves.

Why generative AIs are not tools

For the same reasons of transitivity that dishwashers, people, and sheepdogs are not normally tools, neither are generative AIs. Prompts and other means of getting AIs to do our bidding are tools but generative AIs themselves work autonomously.  This comes with the proviso that almost anything can be repurposed so there is nothing that is not at least latently a tool but, at least in their most familiar guises, generative AIs tend not to be.

Unlike conventional appliances, but more like sheepdogs, the work generative AIs perform is neither designed by humans nor scrutable to us. Unlike sheepdogs, but more like humans, generative AIs are tool users, too: not just (or not so much) words, but libraries, programming languages, web crawlers, filters, and so on. Unlike humans, though, generative AIs act with their users’ intentions, not their own, expressed through the tools with which we interact with them.  They are a bit like partial brains, perhaps, remarkably capable but not aware of nor able to use that capability autonomously.

It’s not just chatbots. Many recommender systems and search engines (increasingly incorporating deep learning) also sit uncomfortably in the category of tools, though they are often presented as such. Amazon’s search, say, is not (primarily) designed to help you find what you are looking for but to push things at you that Amazon would like you to buy, which is why you must trawl through countless not-quite-right things despite it being perfectly capable of exactly matching your needs. If it is anyone’s tool, it is Amazon’s, not ours. The same goes for a Google search: the tools are your search terms, not Google Search, and it is acting quite independently in performing the search and returning results that are likely more beneficial to Google than to you. This is not true of all search systems. If I search for a file on my own computer then, if it fails to provide what I am looking for, it is a sign that the tool (and I think it is a tool because the results should be entirely determinate) is malfunctioning. Back in those far-off days when Amazon wanted you to find what you wanted, or Google tried to provide the closest match to your search term, we could at least think of them as appliances designed to be controlled by us, if not as tools.

I think we need a different term for these things. I like “metatool” because it is catchy and fairly accurate. A metatool is something that uses tools to do our bidding, not a tool in its own right.  It is something that we use tools to act upon that is itself a tool user. I think this is better than a lot of other metaphors we might use: slave, assistant (Claude describes itself, incidentally, not as ‘merely’ a tool, but as an intelligent assistant), partner, co-worker, contractor, etc all suggest more agency and intention than generative AIs actually possess, but appliance, machine, device, etc fail to capture the creativity, tailoring, and unpredictability of the results.

Why it matters

The big problem with treating generative AIs as tools is that it overplays our own agency and underplays the creative agency of the AI. It encourages us to think of them, like actual tools, as cognitive prostheses, ways of augmenting and amplifying but still using and preserving human cognitive capabilities, when what we are actually doing is using theirs. It also encourages us to think the results will be more deterministic than they actually are. This is not to negate the skill needed to use prompts effectively, nor to underplay the need to understand what the prompt is acting upon. Just as the shepherd needs to know the sheepdog, the genAI user has to know how their tools will affect the medium.

Like all technologies, these strange partial brains effectively enlarge our own. All other technologies, though, embed or embody other humans’ thinking and/or our own. Though largely consisting of the compressed expressed thoughts of millions of people, AI’s thoughts are not human thoughts: even using the most transparent of them, we have very little access to the mechanisms behind their probabilistic deliberations. And yet, nor are they independent thinking agents. Like any technology we might think of them as cognitive extensions but, if they are, then it is as though we have undergone an extreme form of corpus callosotomy, or we are experiencing something like Jaynes’s bicameral mind. Generative AIs are their own thing: an embodiment of collective intelligence as well as contributors to our own, wrapped up in a whole bunch of intentional programming and training that imbues them, in part, with (and I find this very troubling) the values of their creators and in part with the sum output of a great many humans who created the data on which they are trained.

I don’t know whether this is, ultimately, a bad thing. Perhaps it is another stage in our evolution that will make us more fit to deal with the complex world and new problems in it that we collectively continue to create. Perhaps it will make us less smart, or more the same, or less creative. Perhaps it will have the opposite effects. Most likely it will involve a bit of all of that. I think it is important that we recognize it as something new in the world, though, and not just another tool.

We are (in part) our tools and they are (in part) us

Here’s a characteristically well-expressed and succinct summary of the complex nature of technologies, our relationships with them, and what that means for education by the ever-wonderful Tim Fawns. I like it a lot, and it expresses much of what I have tried to express about the nature and value of technologies, far better than I could do it and in far fewer words. Some of it, though, feels like it wants to be unpacked a little further, especially the notions that there are no tools, that tools are passive, and that tools are technologies. None of what follows contradicts or negates Tim’s points, but I think it helps to reveal some of the complexities.

There are tools

Tim starts provocatively with the claim that:

There are no tools. Tools are passive, neutral. They can be picked up and put down, used to achieve human goals without changing the user (the user might change, but the change is not attributed to the tool).

I get the point about the connection between tools and technology (in fact it is very similar to one I make in the “Not just tools” section of Chapter 3 of How Education Works) and I understand where Tim is going with it (which is almost immediately to consciously sort-of contradict himself), but I think it is a bit misleading to claim there are no tools, even in the deliberately partial and over-literal sense that Tim uses the term. This is because to call something a tool is to describe a latent or actual relationship between it and an agent (be it a person, a crow, or a generative AI), not just to describe the object itself. At the point at which that relationship is instantiated it very much changes the agent: at the very least, they now have a capability that they did not have before, assuming the tool works and is used for a purpose. Figuring out how to use the tool is not just a change to the agent but a change to what the agent may become that expands the adjacent possible. And, of course, many tools are intracranial so, by definition, having them and using them changes the user. This is particularly obvious when the tool in question is a word, a concept, a model, or a theory, but it is just as true of a hammer, a whiteboard, an iPhone, or a stick picked up from the ground with some purpose in mind, because of the roles we play in them.

Tools are not (exactly) technologies

Tim goes on to claim:

Tools are really technologies. Each technology creates new possibilities for acting, seeing and organising the world.

Again, he is sort-of right and, again, not quite, because “tool” is (as he says) a relational term. When it is used a tool is always part of a technology because the technique needed to use it is a technology that is part of the assembly, and the assembly is the technology that matters. However, the thing that is used – the tool itself – is not necessarily a technology in its own right. A stick on the ground that might be picked up to hit something, point to something, or scratch something is simply a stick.

Tools are not neutral

Tim says:

So a hammer is not just sitting there waiting to be picked up, it is actively involved in possibility-shaping, which subtly and unsubtly entangles itself with social, cognitive, material and digital activity. A hammer brings possibilities of building and destroying, threatening and protecting, and so forth, but as part of a wider, complex activity.

I like this: by this point, Tim is telling us that there are tools and that they are not neutral, in an allusion to Culkin’s/McLuhan’s dictum that we shape our tools and thereafter our tools shape us.  Every new tool changes us, for sure, and it is an active participant in cognition, not a non-existent neutral object. But our enactment of the technology in which the tool participates is what defines it as a tool, so we don’t so much shape it as we are part of the shape of it, and it is that participation that changes us. We are our tools, and our tools are us.

There is interpretive flexibility in this – a natural result of the adjacent possibles that all technologies enable – which means that any technology can be combined with others to create a new technology. An iPhone, say, can be used by anyone, including monkeys, to crack open nuts (I wonder whether that is covered by AppleCare?), but this does not make the iPhone neutral to someone who is enmeshed in the web of technologies of which the iPhone is designed to be a part. As the kind of tool (actually many tools) it is designed to be, it plays quite an active role in the orchestration: as a thing, it is not just used but using. The greater the pre-orchestration of any tool, the more its designers are co-participants in the assembled technology, and it can often be a dominant role that is anything but neutral.

Most things that we call tools (Tim uses the hammer as an example) are also technologies in their own right, regardless of their tooliness: they are phenomena orchestrated with a purpose, stuff that is organized to do stuff and, though softer tools like hammers have a great many adjacent possibles that provide almost infinite interpretive flexibility, they also – as Tim suggests – have propensities that invite very particular kinds of use. A good hardware store sells at least a dozen different kinds of hammer with slightly different propensities, labelled for different uses. All demand a fair amount of skill to use them as intended. Such stores also sell nail guns, though, that reduce the amount of skill needed by automating elements of the process. While they do open up many further adjacent possibles (with chainsaws, making them mainstays of a certain kind of horror movie), and they demand their own sets of skills to use them safely, the pre-orchestration in nail guns greatly reduces many of the adjacent possibles of a manual hammer: they aren’t much good for, say, prying things open, or using as a makeshift anchor for a kayak, or propping up the lid of a tin of paint. Interestingly, nor are they much use for quite a wide range of nail hammering tasks where delicacy or precision are needed. All of this is true because, as a nail driver, there is a smaller gap between intention and execution that needs to be filled than for even the most specialized manual hammer, due to the creators of the nail gun having already filled a lot of it, thus taking quite a few choices away from the tool user. This is the essence of my distinction between hard and soft technologies, and it is exactly the point of making a device of this nature. By filling gaps, the hardness simplifies many of the complexities and makes for greater speed and consistency which in turn makes more things possible (because we no longer have to spend so much time being part of a hammer) but, in the process, it eliminates other adjacent possibles. The gaps can be filled further. The person using such a machine to, say, nail together boxes on a production line is not so much a tool user as a part of someone else’s tool. Their agency is so much reduced that they are just a component, albeit a relatively unreliable component.

Being tools

In an educational context, a great deal of hardening is commonplace, which simplifies the teaching process and allows things to be done at scale. This in turn allows us to do something approximating reductive science, which gives us the comforting feeling that there is some objective value in how we teach. We can, for example, look at the effects of changes to pre-specified lesson plans on SAT results, if both lesson plans and SATs are very rigid, and infer moderately consistent relationships between the two, and so we can improve the process and measure our success quite objectively. The big problem here, though, is what we do not (and cannot) examine by such approaches, such as the many other things that are learned as a result of being treated as cogs in a mechanical system, the value of learning vs the value of grades, or our places in social hierarchies in which we are forced to comply with a very particular kind of authority. SATs change us, in many less than savoury ways. SATs also fail to capture more than a miniscule fraction of the potentially useful learning that also (hopefully) occurred. As tools for sorting learners by levels of competence, SATs are as far from neutral as you can get, and as situated as they could possibly be. As tools for learning or for evaluating learning they are, to say the least, problematic, at least in part because they make the learner a part of the tool rather than a user of it. Either way, you cannot separate them from their context because, if you did, it would be a different technology. If I chose to take a SAT for fun (and I do like puzzles and quizzes, so this is not improbable) it would be a completely different technology than for a student, or a teacher, or an administrator in an educational system. They are all, in very different ways, parts of the tool that is in part made of SATs. I would be a user of it.

All of this reinforces Tim’s main and extremely sound points, that we are embroiled in deeply intertwingled relationships with all of our technologies, and that they cannot be de-situated. I prefer the term “intertwingled” to the term “entangled” that Tim uses because, to me, “entangled” implies chaos and randomness but, though there may (formally) be chaos involved, in the sense of sensitivity to initial conditions and emergence, this is anything but random. It is an extremely complex system but it is highly self-organizing, filled with metastabilities and pockets of order, each of which acts as a further entity in the complex system from which it emerges.

It is incredibly difficult to write about the complex wholes of technological systems of this nature. I think the hardest problem of all is the massive amount of recursion it entails. We are in the realms of what Kauffman calls Kantian Wholes, in which the whole exists for and by means of the parts, and the parts exist for and by means of the whole, but we are talking about many wholes that are parts of or that depend on many other wholes and their parts that are wholes, and so on ad infinitum, often crossing and weaving back and forth so that we sometimes wind up with weird situations in which it seems that a whole is part of another whole that is also part of the whole that is a part of it, thanks to the fact that this is a dynamic system, filled with emergence and in a constant state of becoming. Systems don’t stay still: their narratives are cyclic, recursive, and only rarely linear. Natural language cannot easily do this justice, so it is not surprising that, in his post, Tim is essentially telling us both that tools are neutral and that they are not, that tools exist and that they do not, and that tools are technologies and they are not. I think that I just did pretty much the same thing.

Source: There are no tools – Timbocopia

Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research | TechTrends

The latest paper I can proudly add to my list of publications,  Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research has been published in the (unfortunately) closed journal TechTrends. Here’s a direct link to the paper that should hopefully bypass the paywall, if it has not been used too often.

I’m 16th of 47 coauthors, led by the truly wonderful Junhong Xiao, who is the primary orchestrator and mastermind behind it. This is a companion piece to our Manifesto for Teaching and Learning in a Time of Generative AI and it starts where the other paper left off, delving further into what we don’t know (or at least do not agree that we know) and, taking up most of the paper, what we might do about that lack of knowledge. I think this presents a pretty useful and wide-ranging research agenda for anyone with an interest in AI and education.

Methodologically, it emerged through a collaborative writing process among a very international group of researchers in open, digital, and online learning. It’s not a random sample of people who happen to know one another: the huge group represents a rich mix of (extremely) well-established and (excellent) emerging researchers from a broad set of cultural backgrounds, covering a wide range of research interests in the field. Junhong did a great job of extracting the themes and organizing all of that into a coherent narrative.

In many ways I like this paper more than its companion piece. I think this is because, though its findings are – as the title implies – less well-defined than the first, I am more closely aligned with the underlying assumptions, attitudes and values that underpin the analysis. It grapples more firmly with the wicked problems and it goes deeper into the broader, situated, human nature of the systems in which generative AI is necessarily intertwingled, skimming over the more simplistic conversations about cheating, reliability, and so on to get at some meatier but more fundamental issues that, ultimately, relate to how and why we do this education thing in the first place.

Abstract

Advocates of AI in Education (AIEd) assert that the current generation of technologies, collectively dubbed artificial intelligence, including generative artificial intelligence (GenAI), promise results that can transform our conceptions of what education looks like. Therefore, it is imperative to investigate how educators perceive GenAI and its potential use and future impact on education. Adopting the methodology of collective writing as an inquiry, this study reports on the participating educators’ perceived grey areas (i.e. issues that are unclear and/or controversial) and recommendations on future research. The grey areas reported cover decision-making on the use of GenAI, AI ethics, appropriate levels of use of GenAI in education, impact on learning and teaching, policy, data, GenAI outputs, humans in the loop and public–private partnerships. Recommended directions for future research include learning and teaching, ethical and legal implications, ownership/authorship, funding, technology, research support, AI metaphor and types of research. Each theme or subtheme is presented in the form of a statement, followed by a justification. These findings serve as a call to action to encourage a continuing debate around GenAI and to engage more educators in research. The paper concludes that unless we can ask the right questions now, we may find that, in the pursuit of greater efficiency, we have lost the very essence of what it means to educate and learn.

Reference

Xiao, J., Bozkurt, A., Nichols, M., Pazurek, A., Stracke, C. M., Bai, J. Y. H., Farrow, R., Mulligan, D., Nerantzi, C., Sharma, R. C., Singh, L., Frumin, I., Swindell, A., Honeychurch, S., Bond, M., Dron, J., Moore, S., Leng, J., van Tryon, P. J. S., … Themeli, C. (2025). Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research. TechTrends. https://doi.org/10.1007/s11528-025-01060-6

Slides from my TRU TPC keynote: “It’s a technology problem: How education doesn’t work and why we shouldn’t fix it”

Here are the slides from my keynote at Thompson Rivers University’s Teaching Practices Colloquium this morning. I quite like the mediaeval theme (thanks ChatGPT), which I created to provide a constant reminder that the problems we have to solve are the direct result of decisions made 1000 years ago. There was a lot of stuff from my last book in the talk, framed in terms of Faustian Bargains, intrinsic motivation, counter technologies, and adjacent possibles. This was the abstract:

Why is it that educators feel it is necessary to motivate students to learn when love of learning is a defining characteristic of our species? Why do students disengage from education? Why do so many cheat? How can we be better teachers? What does “good teaching” even mean? And what role does technology play in all of this? Drawing on ideas, theories, and models from his book, How Education Works: Teaching, Technology, and Technique, Jon Dron will provide some answers to these and many more questions through a tale that straddles most of a millennium, during which you may encounter a mutilated monk, a man who lost a war, a robot named Claude, part of a monkey, and an unsuccessful Swiss farmer who made a Faustian bargain and changed education forever. Along the way you will learn why most educational science is pointless, why the best teaching methods fail, why the worst succeed, and why you should learn to love learning technologies. There may be singing.

I had a lot of fun – there was indeed singing, a silicone gorilla hand that turned out to be really useful, and some fun activities from which I learned stuff. I think it worked fine as a hybrid event. It was a sympathetic audience, online and in-person. TRU has a really interesting (and tension-filled, in good and bad ways) mix of online and in-person teaching practices, and I’ve met and listened to some really smart, thoughtful, reflective practitioners today. Almost all cross disciplinary boundaries – who knew you could combine culinary science and nursing? – so there’s a lot of invention going on. Unexpectedly, and far more than from a lot of bigger international conferences, I’m going to go home armed with a whole bunch of new ideas.

Understanding collective stupidity in social computing systems

Here are the slides from a talk I just gave to a group of grad students at AU in our ongoing seminar series, on the nature of collectives and ways we can use and abuse them. It’s a bit of a sprawl covering some 30-odd years of a particularly geeky, semi-philosophical branch of my research career (not much on learning and teaching in this one, but plenty of termites) and winding up with very much a work in progress. I rushed through it at the end of a very long day/week/month/year/life but I hope someone may find it useful!

This is the abstract:

“Collective intelligence” (CI)  is a widely-used but fuzzy term that can mean anything from the behaviour of termites, to the ability of an organization to adapt to a changing environment, to the entire human race’s capacity to think, to the ways that our individual neurons give rise to cognition. Common to all, though, is the notion that the combined behaviours of many independent agents can lead to positive emergent changes in the behaviour of the whole and, conversely, that the behaviour of the whole leads to beneficial changes in the behaviours of the agents of which it is formed. Many social computing systems, from Facebook to Amazon, are built to enable or to take advantage of CI. Here I define social computing systems as digital systems that have no value unless they are used by at least two participants, and in which those participants play significant roles in affecting one another’s behaviour. This is a broad definition that embraces Google Search as much as email, wikis, and blogs, and in which the behaviour of humans and the surrounding structures and systems they belong to are at least as important as the algorithms and interfaces that support them.  Unfortunately, the same processes that lead to the wisdom of crowds can at least as easily result in the stupidity of mobs, including phenomena like filter bubbles and echo chambers that may be harmful in themselves or that render systems open to abuse such as trolling, disinformation campaigns, vote brigading, and successful state manipulation of elections.  If we can build better models of social computing systems, taking into account their human and contextual elements, then we stand a better chance of being able to avoid their harmful effects and using them for good.  To this end I have coined the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning  “multitude” and τέκτων (tektōn) meaning “builder”. In this seminar I will identify some of the main ochlotectural elements that contribute to collective intelligence, describe some of the ways it can be undermined, and explore some of the ramifications as they relate to social software design and management.

 

Published in JODDE – Learning: A technological perspective

Dron, J. (2024). Learning: A technological perspective. Journal of Open, Distance, and Digital Education, 1(2), Article 2. https://doi.org/10.25619/dpvg4687

My latest paper, Learning: A technological perspective, was published today in the (open) Journal of Open, Distance, and Digital Education. Methodologically, it provides a connected series of (I think) reasonable and largely uncontroversial assertions about the nature of technology and, for each assertion, offers some examples of why that matters to educators. In the process it wends its way towards a view of learning that is firmly situated in the field of extended cognition (and related complexivist learning theories such as Connectivism, Rhizomatic Learning, Networks of Practice, etc), with a technological twist that is, I think, pragmatically useful and theoretically interesting. Much of it repeats ideas from How Education Works but it extends and generalizes them further into the realms of intelligence and cognition through what I describe as the technological connectome.

I wrote this paper to align with the themes of the journal so, as a result, it has a greater focus on education than on the technological connectome, but I intend to write more on the subject some time soon. The essence of the idea is that what we recognize as intelligent behaviour consists largely of intracranial technologies like words, symbols, theories, models, procedures, structures, skills, ways of doing things, and so on – our cognitive gadgets – that we largely share with others, and that exist in vastly interconnected, hugely recursive, massively layered assemblies in and beyond our heads. I invoke Reed’s Law to help explain how and why this makes our intracranial cognition so much greater than the neural networks that host it: it’s not just the neural connections but the groups and multi-scaled clusters of technological entities that emerge as a result that can then be a part of the network that embodies them, and of one another, and so on and so on. In passing, I have a vague and hard-to-express hunch that the “and so on” is at least part of the answer to the hard problem: networks that form other networks that themselves become parts of the networks that form them (rinse and repeat) seems like a potential path to self-consciousness to me. However, the ludicrous levels of intertwingularity implied by this, not to mention an almost total absence of any idea about the underlying mechanism, tie my little mind in knots that I cannot yet and probably will never unravel.
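
(For anyone unfamiliar with Reed’s Law, here is a rough back-of-envelope illustration – my own, not from the paper – of why group-forming matters so much: in a network of N nodes, the number of possible pairwise connections grows roughly as N², but the number of possible subgroups that could themselves become entities in the network grows as 2^N, which dwarfs it even for tiny networks.)

```python
# Illustrative only: pairwise links (Metcalfe-ish) versus possible subgroups (Reed-ish).
for n in (10, 20, 30):
    links = n * (n - 1) // 2      # possible pairwise connections
    groups = 2 ** n - n - 1       # possible subgroups of two or more members
    print(f"N={n:>2}: links={links:>4}, possible groups={groups:,}")
# N=10: links=  45, possible groups=1,013
# N=20: links= 190, possible groups=1,048,555
# N=30: links= 435, possible groups=1,073,741,793
```

The point, for the connectome argument, is that the things that can form among and between nodes vastly outnumber the nodes and the links themselves, and each of those formations can in turn become a node.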

At least as importantly, these private intracranial technologies are in turn parts of even greater assemblies that extend into our bodies, our environments, and above all into the technologies around us, and thence into the minds of others. To a large extent it is our ability to make use of and participate in this extended technological connectome, that is both within us and beyond us, that forms the object, the subject, and the purpose of education. Our technologies as much form a part of our cognition as they enable it. We continuously shape and are shaped by them, assembling and reassembling them as we move into the adjacent possibles that result, creating further adjacent possibles every time we do, for ourselves and others. There is something incredibly awesome about that.

Abstract

This paper frames technology as a phenomenon that is inextricable from individual and collective cognition. Technologies are not “the other”, separate from us: we are parts of them and they are parts of us. We learn to be technologies as much as we learn to use them, and each use is itself a technology through which we participate both as parts and as creators of nodes in a vast technological connectome of awesome complexity. The technological connectome in turn forms a major part of what makes us, individually and collectively, smart. With that framing in mind, the paper is presented as a series of sets of observations about the nature of technology followed by examples of consequences for educators that illustrate some of the potential value of understanding technology this way, ending with an application of the model to provide actionable insights into what large language models imply for how we should teach.

How AI works for education: an interview with me for AACE Review

Thanks to Stefanie Panke for some great questions and excellent editing in this interview with me for the AACE Review.

The content is in fact the product of two discussions, one coming from student questions at the end of a talk that I gave for the Asian University for Women just before Christmas, the other asynchronously with Stefanie herself.

Stefanie did a very good job of making sense of my rambling replies to the students that spanned quite a few issues, including some from my book, How Education Works, some to do with (mainly) generative AI, and a little about the intersection of collective and artificial intelligence. Stefanie’s own prompts were great: they encouraged me to think a little differently, and to take some enjoyable detours along the way around the evils of learning management systems, artificially-generated music, and social media, as well as a discussion of the impact of generative AI on learning designers, thoughts on legislation to control AI, and assessment.

Here are the slides from that talk at AUW – I’ve not posted this separately because hardly any are new: it mostly cobbles together two recent talks, one for Contact North and the other my keynote for ICEEL ’24. The conversation afterwards was great, though, thanks to a wonderfully thoughtful and enthusiastic bunch of very smart students.

The collective ochlotecture of large language models: slides from my talk at CI.edu, 2024

Here are my slides from the 1st International Symposium on Educating for Collective Intelligence, last week, here is my paper on which it was based, and here is the video of the talk itself:

You can find this and videos of the rest of the stunning line-up of speakers at https://www.youtube.com/playlist?list=PLcS9QDvS_uS6kGxefLFr3kFToVIvIpisn. It was an incredibly engaging and energizing event: the chat alone was a masterclass in collective intelligence that was difficult to follow at times but that was filled with rich insights and enlightening debates. The symposium site, which has all this and more, is at https://cic.uts.edu.au/events/collective-intelligence-edu-2024/

With just 10 minutes to make the case and 10 minutes for discussion, none of us were able to go into much depth in our talks. In mine I introduced the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning “multitude” and τέκτων (tektōn) meaning “builder” to describe the structures and processes that define the stuff that gives shape and form to collections of people and their interactions. I think we need such a term because there are virtually infinite ways that such things can be configured, and the configuration makes all the difference. We blithely talk of things like groups, teams, clubs, companies, squads, and, of course, collectives, assuming that others will share an understanding of what we mean when, of course, they don’t. There were at least half a dozen quite distinct uses of the term “collective intelligence” in this symposium alone. I’m still working on a big paper on this subject that goes into some depth on the various dimensions of interest as they pertain to a wide range of social organizations but, for this talk, I was only concerned with the ochlotecture of collectives (a term I much prefer to “collective intelligence” because intelligence is such a slippery word, and collective stupidity is at least as common). From an ochlotectural perspective, these consist of a means of collecting crowd-generated information, processing it, and presenting the processed results back to the crowd. Human collective ochlotectures often contain other elements – group norms, structural hierarchies, schedules, digital media, etc – but I think those are the defining features. If I am right then large language models (LLMs) are collectives, too, because that is exactly what they do. Unlike most other collectives, though (a collectively driven search engine like Google Search being one of a few partial exceptions) the processing is unique to each run of the cycle, generated via a prompt or similar input. This is what makes them so powerful, and it is what makes their mimicry of human soft technique so compelling.
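
(To caricature that defining collect-process-present loop in a few lines – a toy sketch of my own, not anything from the talk or the paper – the minimal ochlotecture looks something like this: agents contribute, something aggregates the contributions, and the aggregate is presented back in a way that shapes the next round of contributions.)

```python
import random
from collections import Counter

def make_agent(conformity):
    """An agent that mostly follows whatever was last presented, but sometimes does its own thing."""
    def contribute(presented):
        if presented is not None and random.random() < conformity:
            return presented
        return random.choice(["A", "B", "C"])
    return contribute

def collective_cycle(crowd, rounds=5):
    """A minimal collective: collect contributions, process them, present the result back."""
    presented = None
    for _ in range(rounds):
        contributions = [member(presented) for member in crowd]   # collect
        presented = Counter(contributions).most_common(1)[0][0]   # process (simple majority)
        # present: the processed result feeds into the next round's contributions
    return presented

crowd = [make_agent(conformity=0.7) for _ in range(100)]
print(collective_cycle(crowd))  # the crowd usually converges on a single option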

I did eventually get around to the theme of the conference. I spent a while discussing why LLMs are troubling – the fact that we learn values, attitudes, ways of being, etc from interacting with them; the risks to our collective intelligence caused by them being part of the crowd, not just aggregators and processors of its outputs; and the potential loss of the soft, creative skills they can replace – and ended with what that implies for how we should act as educators: essentially, to focus on the tacit curriculum that has, till now, always come for free; to focus on community, because learning to be human from and with other humans is what it is all about; and to decouple credentials so as to reduce the focus on measurable outcomes that AIs can both teach and achieve better than an average human. I also suggested a couple of principles for dealing with generative AIs: to treat them as partners rather than tools, and to use them to support and nurture human connections, as ochlotects as much as parts of the ochlotecture.

I had a point to make in a short time, so the way I presented it was a bit of a caricature of my more considered views on the matter. If you want a more balanced view, and to get a bit more of the theoretical backdrop to all this, Tim Fawns’s talk (that follows mine and that will probably play automatically after it if you play the video above) says it all, with far greater erudition and lucidity, and adds a few very valuable layers of its own. Though he uses different words and explains it far better than I, his notion of entanglement closely echoes my own ideas about the nature of technology and the roles it plays in our cognition. I like the word “intertwingled” more than “entangled” because of its more positive associations and the sense of emergent order it conveys, but we mean substantially the same thing: in fact, the example he gave of a car is one that I have frequently used myself, in exactly the same way.

New paper: The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

I’m proud to be the 7th of 47 authors on this excellent new paper, led by the indefatigable Aras Bozkurt and featuring some of the most distinguished contemporary researchers in online, open, mobile, distance, e- and [insert almost any cognate sub-discipline here] learning, as well as a few of us hanging on their coat tails like me.

As the title suggests, it is a manifesto: it makes a series of statements (divided into 15 positive and 20 negative themes) about what is or what should be, and it is underpinned by a firm set of humanist pedagogical and ethical attitudes that are anything but neutral. What makes it interesting to me, though, can mostly be found in the critical insights that accompany each theme, that capture a little of the complexity of the discussions that led to them, and that add a lot of nuance. The research methodology, a modified and super-iterative Delphi design in which all participants are also authors, is, I think, an incredibly powerful approach to research in the technology of education (broadly construed) that provides rigour and accountability without succumbing to science-envy.

 

Notwithstanding the lion’s share of the work of leading, assembling, editing, and submitting the paper being taken on by Aras and Junhong, it was a truly collective effort so I have very little idea about what percentage of it could be described as my work. We were thinking and writing together. Being a part of that was a fantastic learning experience for many of us, that stretched the limits of what can be done with tracked changes and comments in a Google Doc, with contributions coming in at all times of day and night and just about every timezone, over weeks. The depth and breadth of dialogue was remarkable, as much an organic process of evolution and emergence as intelligent design, and one in which the document itself played a significant participant role. I felt a strong sense of belonging, not so much as part of a community but as part of a connectome.

For me, this epitomizes what learning technologies are all about. It would be difficult if not impossible to do this in an in-person setting: even if the researchers worked together on an online document, the simple fact that they met in person would utterly change the social dynamics, the pacing, and the structure. Indeed, even online, replicating this in a formal institutional context would be very difficult because of the power relationships, assessment requirements, motivational complexities and artificial schedules that formal institutions add to the assembly. This was an online-native way of learning of a sort I aspire to but seldom achieve in my own teaching.

The paper offers a foundational model or framework on which to build or situate further work as well as providing a moderately succinct summary of  a very significant percentage of the issues relating to generative AI and education as they exist today. Even if it only ever gets referred to by each of its 47 authors this will get more citations than most of my papers, but the paper is highly cite-able in its own right, whether you agree with its statements or not. I know I am biased but, if you’re interested in the impacts of generative AI on education, I think it is a must-read.

The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future

Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y. H., Nerantzi, C., Moore, S., Dron, J., … Asino, T. I. (2024). The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Praxis, 16(4), 487–513. https://doi.org/10.55982/openpraxis.16.4.777

Full list of authors:

  • Aras Bozkurt
  • Junhong Xiao
  • Robert Farrow
  • John Y. H. Bai
  • Chrissi Nerantzi
  • Stephanie Moore
  • Jon Dron
  • Christian M. Stracke
  • Lenandlar Singh
  • Helen Crompton
  • Apostolos Koutropoulos
  • Evgenii Terentev
  • Angelica Pazurek
  • Mark Nichols
  • Alexander M. Sidorkin
  • Eamon Costello
  • Steven Watson
  • Dónal Mulligan
  • Sarah Honeychurch
  • Charles B. Hodges
  • Mike Sharples
  • Andrew Swindell
  • Isak Frumin
  • Ahmed Tlili
  • Patricia J. Slagter van Tryon
  • Melissa Bond
  • Maha Bali
  • Jing Leng
  • Kai Zhang
  • Mutlu Cukurova
  • Thomas K. F. Chiu
  • Kyungmee Lee
  • Stefan Hrastinski
  • Manuel B. Garcia
  • Ramesh Chander Sharma
  • Bryan Alexander
  • Olaf Zawacki-Richter
  • Henk Huijser
  • Petar Jandrić
  • Chanjin Zheng
  • Peter Shea
  • Josep M. Duart
  • Chryssa Themeli
  • Anton Vorochkov
  • Sunagül Sani-Bozkurt
  • Robert L. Moore
  • Tutaleni Iita Asino

Abstract

This manifesto critically examines the unfolding integration of Generative AI (GenAI), chatbots, and algorithms into higher education, using a collective and thoughtful approach to navigate the future of teaching and learning. GenAI, while celebrated for its potential to personalize learning, enhance efficiency, and expand educational accessibility, is far from a neutral tool. Algorithms now shape human interaction, communication, and content creation, raising profound questions about human agency and biases and values embedded in their designs. As GenAI continues to evolve, we face critical challenges in maintaining human oversight, safeguarding equity, and facilitating meaningful, authentic learning experiences. This manifesto emphasizes that GenAI is not ideologically and culturally neutral. Instead, it reflects worldviews that can reinforce existing biases and marginalize diverse voices. Furthermore, as the use of GenAI reshapes education, it risks eroding essential human elements—creativity, critical thinking, and empathy—and could displace meaningful human interactions with algorithmic solutions. This manifesto calls for robust, evidence-based research and conscious decision-making to ensure that GenAI enhances, rather than diminishes, human agency and ethical responsibility in education.