Ben Werdmuller is a serial innovator, edtech veteran, and deeply insightful commentator on the tech industry whose skills defy easy categorization. I like him a lot. In “One size fits none: let communities build for themselves,” Ben tells us how to build digital social systems that fit the needs of their communities, and it is well worth reading if you have any interest in social software.
The post starts with a description of the reaction of developers when, in the summer of 2007, at an Elgg-jam at my then-university in Brighton, Ben first introduced the newly refactored Elgg 1.0 framework. In its several pre-version-1 iterations, Elgg was not a development framework but a full-blown web application. It had blogs, wikis, file sharing, bookmarking, groups, and much more, all wrapped up in a robust social network system with smart discretionary access controls, and easily extensible through a simple plugin system. It was easy to use, rich in features, highly adaptable, and it might have been the most popular open source social networking system on the planet at that point. It was a bit hacked-together and not exactly an engineering masterpiece, but it worked really well.
What Ben announced that day stripped away virtually all of its existing functionality, leaving only a tiny core that could do almost nothing user-facing on its own apart from simple user management, the display of activities, and some basic admin tasks. I don’t think it was even possible to create a post, and I have a feeling there were floppy disks around at the time onto which the whole thing could fit. The idea was that it was up to developers to provide plugins that end-users could configure to create any kind of social system they wanted, with the core providing the API and data structures to support and greatly simplify their development. A few common tools like blogs, wikis, file sharing, and bookmarks were provided in a package of core plugins to help get things started, but all were (and are) optional. It was extremely elegant.
I believe that I was the person Ben refers to who, many years later (at another Elgg-jam, in San Francisco, as it happens), described his “big reveal” as a mind-blowing moment. Almost every hair on my body stood on end. I got it immediately because I had been thinking along very similar lines – there’s a chapter on such things in my first book, published earlier the same year – and had been, up until that point, intending to spend my newly-acquired national teaching fellowship money on building it. Instead I went with Elgg, which provided the framework on which the Landing and a few other sites (including the one at Brighton to which Ben refers) were built, and the money mostly went towards plugin development for it.
In fact, in the form in which it first launched, Elgg 1.0 wasn’t exactly what I wanted. My vision was more distributed, centred around small services loosely joined rather than a single monolithic plugin-based server. The roadmap that Ben described that day, though, made exactly that possible, with plans for a robust and extensible range of services and standards for information interchange that, had they gained any traction, would have made a federated social system of almost any kind simple to create and evolve.
They didn’t gain that traction.
I think a big part of the reason might be that, with no backwards compatibility with the older version and no good migration path for those already running Elgg, it lost almost all of the momentum and goodwill it had previously gained. Others had moved into the space in the interim, offering off-the-shelf experiences at least as good as the replacement without the need for further development: WordPress and BuddyPress, in particular, were already on the rise. Ben eventually moved on to do other things. Elgg gained a loyal and slowly growing following and became a foundation, but its focus shifted to being a development platform for building bespoke servers rather than a distributed social system, and the web services and neat ODD protocol never took off enough to be usable beyond some very limited use cases. However, the plugin-based architecture and tiny core were still a cool idea, and building almost everything from small pieces seemed to me to be a really good way to create a social system, so that is what I and my teams did. It turns out to be much less cool when you have to maintain it, though: a fact that I was quite well aware of but failed to grasp in its full magnitude until it was too late.
Red Queen development
As we built the Landing we soon ran into the painful flipsides of plugins: you cannot easily remove them once many people use them, they create a large number of dependencies, and they have to be maintained, at least every time the core gets updated. It does not help that, I think for reasons of efficiency, backwards compatibility is rarely much of a consideration when Elgg gets an upgrade: though they will generally survive (with deprecation notices) for a version or two, many old plugins will simply break if they are not updated, often in subtle, difficult-to-debug ways. Part of the elegance of the design is also one of its greatest flaws: though you can design things in a more robust way, any plugin can fully override almost anything provided by any other simply by including a file of the same name in the same position in the directory hierarchy. This plays havoc with new versions, and makes plugins far more co-dependent than the very self-contained, well-encapsulated services I had been imagining. To make things worse, it does not scale well at all: Elgg’s object-over-relational data model is very elegant, but it is not very efficient once your site grows large, and every data-storing plugin adds to the problem.
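To make the override mechanism concrete: Elgg plugins live under a mod/ directory, and a view file in one plugin silently replaces an identically placed file in another, depending on plugin priority (the plugin names here are hypothetical):

```
mod/plugin_a/views/default/object/blog.php   # the "original" view
mod/plugin_b/views/default/object/blog.php   # replaces plugin_a's view if
                                             # plugin_b has higher priority
```

When a core upgrade changes what the original view is expected to do, the overriding copy carries on serving the old behaviour, producing exactly the kind of subtle, difficult-to-debug breakage described here.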
At one point the Landing had 116 plugins (admittedly with a few turned off by default), about a third of which we built, a third of which were distributed with the core, and a third of which were community-developed. As well as maintaining our own plugins, we gradually had to take on more and more of the community-plugin development ourselves as the original developers abandoned them, or face the wrath of those who needed them. Of the 90 or so that are left today, about half are our/my responsibility. When things were going well and we had the funding for a full-time developer, I reckon most plugins averaged about a person-week of design, development, and testing to upgrade, though the various dependencies and bottlenecks meant that it was rarely less than a month from start to finish before they arrived on the site. Meanwhile, the core was getting updates, sometimes more than once a year. With very little spare cash, especially after losing our full-time developer, there was no way that we could ever hope to keep up with the core’s release cycles while maintaining so many plugins. We were stuck in a Red Queen regime, running harder and harder to stay in the same place. Some call this technological debt, but it is just the price of ownership, and we couldn’t pay enough.
It may be a blessing in disguise, then, that some 10 or 11 years ago the decision over whether to continue development was taken out of our hands by a CIO who refused us any resources even to test, let alone to install, anything, as a result of a grossly misguided “back to baseline” principle that ravaged many good systems during his tenure – even though we (then) had plenty of money to continue, and offered to put it all into his budget. The Landing limped along regardless: it was embedded in many courses, research groups, centres, and so on, so it couldn’t simply be switched off; no off-the-shelf alternative came close to doing anything similar; and we had built it to be robust (though never expecting it to still be around, almost unaltered, over a decade later), so it carried on working. With the help of less hostile but never exactly enthusiastic CIOs, we have limped along ever since, very slowly creeping up through the versions on a shoestring budget and odd moments of my own spare time, but we are very far behind the cutting edge.
And then came ChatGPT
LLMs – Claude in particular – can be great at coding, especially for small projects like plugins. I have been vibe coding for a few years now, and it has been incredibly useful in many aspects of my life. However, even the best of them tend to struggle with Elgg plugins. I think it is because there is not enough Elgg code out in the wild, and there have been too many versions and too many approaches to development, so there’s not enough good quality training data. Since the first week of the launch of ChatGPT, I have been trying to get genAIs to help me with Elgg plugin upgrades and bug fixing but, though I have picked up some very helpful ideas in the midst of some very bad attempts at solutions and they have spotted a few bugs for me, not a single line of actual AI-generated code has ever made it onto the Landing. This is going to change.
A few days before Ben wrote his post, after some frustrating attempts at getting Claude, ChatGPT, and Gemini to upgrade an existing plugin that was too difficult for me to take on alone, I acted on a hunch and simply asked Claude to make me a new one, working from specs I had extracted from the original (using ChatGPT and tweaking the output) but giving it no access to any of the original’s source code or program structure.
Apart from a couple of minor syntax problems that took hardly a minute to fix, it worked first time. It was considerably more polished than the original and, indeed, than almost all the plugins we had written ourselves or commissioned at costs of up to $10,000. It has no deprecated code at all – something that is not even true of plugins in the core for our current Elgg version – and it has all sorts of useful little configuration options that Claude extrapolated from the specs and that I would have been too lazy to bother with, but that make it way more adaptable than its predecessor. It even has a complete set of language files for both French and English – extremely rare in human-made plugins – and it would be trivial to ask it for other languages if we needed them.
I think this works because of the different way Claude approaches the problem compared with how it handles an existing plugin. When trying to fix a broken or obsolete plugin, the plugin itself exerts a strong influence, and Claude draws on a ragtag bunch of existing plugins as examples; the paucity and mixed quality of the training data, though, mean that they are less than wonderful role models. Almost all of its prior attempts included code from a future version of Elgg, or an older one, or one that has never existed, and it quite often did things in a very non-Elgg way. In contrast, when building a new plugin from scratch, its strategy appears to be to read the entire core codebase and all of the official documentation, then to build the plugin to fit, with little or no reference to any existing plugins beyond those that come with the core distribution. When things go wrong, it goes straight to the definitive source of a function in the core, not to a muddle of existing solutions, and its context window (at least in the paid versions) is now large enough to contain much if not all of the core, or at least for retrieval-augmented generation to find the correct pieces. The small core that was so useful to human developers turns out to be ideal for LLMs.
The key lesson to be drawn from this is that, if the architecture is sufficiently and cleanly modular (as Elgg’s is), it may now be more effective to recreate components from scratch than to maintain the ones you have already written. If it continues to pan out as it has so far, I’d say this is a potential game changer. As well as making development extremely agile, it even improves the security of the system: though any one plugin may yet have flaws despite the apparently high quality of the coding, it is not going to stick around long enough for them to be exploited, and no two people following this approach will have the same plugins, so it is not worth anyone’s while to develop a specific hack. The next upgrade is almost ready, so I am only going to use this approach sparingly for now, but, when the time comes for the next major upgrade, this is how I intend to do most of it. I won’t let it near core plugins or still-maintained community plugins but, for all those we inherited or created, ChatGPT or Gemini will provide me with the spec. I’ll then run each spec through Claude, getting it to produce the complete plugin, including unit tests. It will still take time, and I don’t expect it to work this well every time, but much of that time will be spent by Claude, not me. In one fell swoop, this almost eliminates the technological debt.
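As a minimal sketch of the workflow I have in mind (the function names, directory layout, and file contents are all my own invention, and the two LLM steps are stubbed out, since the real calls would go to ChatGPT/Gemini and Claude respectively):

```python
from pathlib import Path

def extract_spec(plugin_dir: Path) -> str:
    """Stub for the spec-extraction step (in practice, ChatGPT or Gemini
    produces this from the old plugin, and a human tweaks the output)."""
    return f"Spec for {plugin_dir.name}: features, hooks, views, settings."

def generate_plugin(spec: str) -> dict[str, str]:
    """Stub for the generation step (in practice, Claude, given the spec
    but *not* the original source). Maps file paths to file contents."""
    return {"manifest.xml": "<plugin_manifest/>", "start.php": "<?php // ..."}

def regenerate_all(plugins_root: Path, out_root: Path) -> list[Path]:
    """Regenerate every inherited plugin from a spec, writing each fresh
    plugin to its own directory for human review before deployment."""
    rebuilt = []
    for plugin_dir in sorted(p for p in plugins_root.iterdir() if p.is_dir()):
        spec = extract_spec(plugin_dir)
        files = generate_plugin(spec)
        target = out_root / plugin_dir.name
        for rel_path, content in files.items():
            dest = target / rel_path
            dest.parent.mkdir(parents=True, exist_ok=True)
            dest.write_text(content)
        rebuilt.append(target)
    return rebuilt
```

The point is the shape of the loop: each inherited plugin is reduced to a spec, regenerated from scratch into its own directory, and reviewed by a human before it goes anywhere near the site.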
This principle is not necessarily limited to elegantly engineered systems like Elgg. A night or two ago I went through my regular quandary about how to schedule ad hoc meetings for one of my courses. In the past I’ve used wikis, discussion forums, various free (but not quite right) poll-based schedulers like Doodle, and more. None were great, and the ones that worked best raised potential privacy concerns that I was not willing to grapple with. The length of time it takes to get a plugin to production made a Landing plugin a non-starter. Then it struck me that my own personal website would be more private and controllable than any of those, and hosted on Canadian soil (unlike any of the rest), so I went in search of a plugin. WordPress is very inelegant, sprawling software, and plugin development is positively painful compared with Elgg, but the vast number of WP developers has always meant that, among the many tens of thousands of plugins, no matter what the task, at least one will do the job I want, or come close enough for me to tweak so that it does. At least that had always been the case until now. To my great surprise, this time there were none. Something like the functionality does exist in a few polling and scheduling plugins, but with very complex configurations and a lot of unwanted fluff around them, not to mention the need to buy premium non-open versions to do what I want. I just wanted a small subset of Doodle’s functionality that would not store any private data, nor cater for needs I don’t have. So I asked Claude to make it, knowing that it would already be quite skilled in WP development because of the vast number of examples to learn from. It took about four attempts to get exactly what I wanted. Overall, the whole process took about an hour, including writing the spec, Claude’s thinking time, and the time it took to upload, configure, and test it. It works really nicely.
I actually spent more time earlier looking for the right software than it took to make it from scratch. I have some experience writing specs, but even a beginner could do this with a bit of help from the AI.
Ochlotecture management
I might ask an LLM to build the Spec Manager – essentially a means of managing the application architecture, not unlike a traditional source code management system – that Ben writes about, to simplify and automate some of the workflow, not that it is particularly onerous. However, the time it would save would allow me more time to work on another idea sparked by Ben’s post.
Doing what we already do, better, cheaper, and faster, is quite cool, but the most significant benefits of any new technology come from being able to do things that were previously impossible: it is the adjacent possibles they create, and that we exploit, that drive progress. As Ben says, some of the biggest things that matter in a social system are the what, why, and for whom, and that’s very true, but there’s more. I’ve written previously of the ochlotecture of a social system, by which I mean all the human as well as non-human elements that make it do what it does, including the whats, whys, and for-whoms: the written and unwritten rules, the structural topography (networks, group hierarchies, set clusters, etc.), the norms around posting, the pace, the interests of the community, the cross-cutting networks, the ethical principles, the aesthetic preferences, the physical spaces they inhabit, and so on, that combine to give shape to a community. In essence it is much like a user model, only for crowds.
It strikes me that it should be possible to build an Ochlotecture Manager in much the same way as we might build the Spec Manager. Exactly how this would work is to be determined, but I envisage it including an assortment of personas and scenarios as well as rules, demographics, contextual information, and network/group/set structures. The idea is to get away from traditional functional definitions and instead describe relationships, policies, norms, and so on in a way that, with a bit of work, LLMs will be able to interpret, and thus better fit the site to its community. This would be particularly useful in a learning context, where a lot of software is built or chosen to perform a function with far too little regard to how it achieves it. It almost never fits exactly what a teacher would like to do because, as the song goes, it ain’t what you do, it’s the way that you do it, that’s what gets results, and you can’t do the same thing the same way for everyone and expect it to be a perfect fit for all of them. The app will most likely generate some YAML or JSON and instructions about how to deal with it. But this doesn’t end with the design.
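To make that a little less abstract, here is a sketch of the kind of YAML fragment I imagine such a model generating (every field name here is my own invention for illustration, not a proposed standard):

```yaml
community: "intro-stats-course"
pace: "bursty; peaks before assignment deadlines"
norms:
  - "first names only; no grades discussed in public threads"
  - "questions are welcome at any level"
structure:
  groups: ["tutorial-group-a", "tutorial-group-b"]
  networks: "sparse; most ties run through the tutor"
personas:
  - name: "anxious newcomer"
    needs: ["low-stakes ways to ask questions", "reassurance"]
policies:
  moderation: "human-in-the-loop; LLM may draft, never post"
```

The value of something like this is that it describes relationships, norms, and policies rather than functions, which is exactly what an LLM needs in order to fit the site to its community.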
A much under-utilized adjacent possible of LLMs lies in their potential to connect people and sustain communities. From summarizing conversations or connecting individuals with complementary needs, to nudging discussions or analyzing sentiment, there are many ways LLMs can catalyze interaction, not as a participant but as an enabler. Having a clearly specified ochlotecture would make this much easier to achieve. It might not be a bad ochlotectural analyst, either, suggesting and implementing improvements in the design based not on user models but on crowd models.
Having done that, it opens up the potential to make this a truly adaptive system, changing not just data and parameters but also the underlying code itself as a community evolves. Imagine, to give a simple example, a discussion forum in which the system observes people regularly responding with “this is great” or similar replies. The system could identify a need for some kind of rating system and, rather than simply implementing a “like” button (which is far from ideal in all situations), it could consult its ochlotectural model to identify what would work best. This could range from a simple change of wording – “recommend”, perhaps, or “rate”, depending on the community – to a multi-dimensional ranking system that might work better where more precise feedback is needed (e.g. in peer review). More complex changes are possible: it might build a system to (say) manage events, or create photo albums, or implement breakout spaces, or shift between threaded and non-threaded discussions. Perhaps it could shuffle menus to better fit community needs, or fix accessibility issues, or identify more relevant posts. I’d be extremely nervous of taking humans out of that loop – that way disaster lies – but perhaps the humans would not need to be developers, as long as a developer had crafted the spec and the ochlotecture carefully enough in the first place. Community members themselves could suggest things, the LLM could present them to the group (perhaps creating a poll system for voting, or some other dispute-settling mechanism to do so), and it could use the ochlotectural and architectural models to guide the actual development. It might even do a bit of proactive A/B testing, making an evolutionary (survival of the fittest) approach possible. Ultimately, it might even evolve how it evolves, developing its own strategies for engaging the community and responding to changing needs.
The constant change would be no more annoying than it is with existing cloud services, with the added benefit that, if the community doesn’t like a change, it can fix it.
In my perfect world all of this would rely on a local, open LLM but, though some are now extremely good for coding assistance, none currently have the large context windows and sophisticated tuning of the bigger commercial models. This will probably change. A hybrid approach might work in the interim, where the local model deals with everything apart from the coding itself, and the commercial model does the rest, but I’ve not thought through the economics of that.
Bricoleering: a new paradigm?
We are at the bottom of a learning curve with genAI right now. Most of us are simply replacing things we already do with LLMs, and that is highly problematic for reasons I and many others have written about extensively (see at least half my posts at https://jondron.ca/ai). In a world with machines that can creatively replicate almost any human cognitive skill, often at an expert level, there are high risks that our descendants will lose at least a portion of their own capacity to do so unaided. That’s not necessarily a bad thing in itself. Few of us can still recite every word of a novel from memory, or create a bow and arrow, or perform complex mental arithmetic, because we don’t need to. Coarse-grained cognition – thinking in bigger chunks, using the products of our own and other humans’ thought – is what has let us build pyramids, spaceships, welfare systems, and virtually every invention ever, including this sentence. It’s our collective, extended cognition that makes it possible to constantly create more. That’s more of a problem when creativity itself is at stake, however, because we risk delegating too much of it to the machine and allowing our own capabilities to atrophy. Already, I quite often tell the machine what I’m trying to do, then ask it for a list of ideas and select one, rather than trying to think of one myself: that’s how the picture at the top of this post was conceived. At scale, this is not a great idea. If the world is going to be a better and not a worse place, we need to learn to be creative with the creative outputs of the cognitive Santa Claus machines, not simply to specify and use them. I think the idea I suggest above is one of the ways this can happen. A plugin-based (or other component-oriented) approach enables us to do bricolage with the pieces, assembling, disassembling, and reassembling them in new and creative ways that neither we nor genAIs could do alone.
It is not Lévi-Strauss’s bricolage of the “savage mind”, however, nor is it engineering. I think it is a new paradigm in which we do not simply assemble pieces we happen to have lying around but actively help to shape them so that they will fit. Our role is closer to that of architects like Frank Gehry, who famously couldn’t use the machines that were essential to creating his iconic machine-made designs, instead relying on hand-drawn sketches to communicate his ideas to those who could. I don’t know what to call this: “bricoleering”, perhaps, or “adaptafacture”?


