Here are the slides from a talk I just gave to a group of grad students at AU in our ongoing seminar series, on the nature of collectives and ways we can use and abuse them. It’s a bit of a sprawl covering some 30-odd years of a particularly geeky, semi-philosophical branch of my research career (not much on learning and teaching in this one, but plenty of termites), winding up with something that is very much a work in progress. I rushed through it at the end of a very long day/week/month/year/life, but I hope someone may find it useful!
This is the abstract:
“Collective intelligence” (CI) is a widely used but fuzzy term that can mean anything from the behaviour of termites, to the ability of an organization to adapt to a changing environment, to the entire human race’s capacity to think, to the ways that our individual neurons give rise to cognition. Common to all, though, is the notion that the combined behaviours of many independent agents can lead to positive emergent changes in the behaviour of the whole and, conversely, that the behaviour of the whole leads to beneficial changes in the behaviours of the agents of which it is formed. Many social computing systems, from Facebook to Amazon, are built to enable or to take advantage of CI. Here I define social computing systems as digital systems that have no value unless they are used by at least two participants, and in which those participants play significant roles in affecting one another’s behaviour. This is a broad definition that embraces Google Search as much as email, wikis, and blogs, and in which the behaviour of humans and the surrounding structures and systems they belong to are at least as important as the algorithms and interfaces that support them. Unfortunately, the same processes that lead to the wisdom of crowds can at least as easily result in the stupidity of mobs, including phenomena like filter bubbles and echo chambers that may be harmful in themselves or that render systems open to abuse such as trolling, disinformation campaigns, vote brigading, and successful state manipulation of elections. If we can build better models of social computing systems, taking into account their human and contextual elements, then we stand a better chance of avoiding their harmful effects and using them for good. To this end I have coined the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning “multitude”, and τέκτων (tektōn), meaning “builder”. In this seminar I will identify some of the main ochlotectural elements that contribute to collective intelligence, describe some of the ways it can be undermined, and explore some of the ramifications as they relate to social software design and management.
Yes, I am reminded of when I worked at Australian Defence HQ and was sent on management training. We were divided into teams and given a task. The task was easy to assess, as we had to produce a number, and the closer to the correct number the better. We were asked first to recommend a number individually, then to form a team consensus. The lesson we were meant to learn was that the team came up with a better answer. But I took the average of all the individual answers, and that produced an even better result. I concluded that the team discussion detracted from good decision making. So, back in the workplace, I would routinely ask team members for their opinions before a meeting. If there was general agreement, which there usually was, I would circulate that and cancel the meeting. I have been using that approach for decades since, for everything from government decision making to awarding academic prizes.
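The averaging effect described here is straightforward to demonstrate in simulation. Below is a minimal Python sketch of the wisdom-of-crowds mechanism, assuming each person's estimate is the true value plus independent noise; the team size, noise level, and distribution are illustrative assumptions, and the sketch does not attempt to model the consensus discussion itself:

```python
import random
import statistics

# Toy simulation of the commenter's observation: the simple average of
# independent individual estimates tends to beat a typical individual
# estimate. Assumes each estimate is the true value plus independent
# Gaussian noise -- an illustrative assumption, not a claim from the post.

TRUE_VALUE = 100.0   # hypothetical quantity the teams were asked to estimate
TEAM_SIZE = 8        # assumed team size
NOISE_SD = 15.0      # assumed spread of individual estimates
TRIALS = 10_000

random.seed(42)

individual_errors = []
average_errors = []

for _ in range(TRIALS):
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(TEAM_SIZE)]
    # Error of a typical individual: mean absolute error across the team.
    individual_errors.append(
        statistics.mean(abs(e - TRUE_VALUE) for e in estimates)
    )
    # Error of the simple average of all individual estimates.
    average_errors.append(abs(statistics.mean(estimates) - TRUE_VALUE))

print(f"mean individual error: {statistics.mean(individual_errors):.2f}")
print(f"mean error of average: {statistics.mean(average_errors):.2f}")
```

With independent errors, the average's error shrinks roughly with the square root of the group size, which is why polling people separately before the meeting can outperform the meeting itself: discussion introduces correlation between the estimates, and the averaging benefit depends on their independence.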