Announcing the First International Symposium on Educating for Collective Intelligence (and some thoughts on collective intelligence)

First International Symposium on Educating for Collective Intelligence | UTS:CIC

Free-to-register International online symposium, December 5th, 2024, 12-3pm PST

This is going to be an important symposium, I think.

I will be taking 3 very precious hours out of my wedding anniversary to attend, albeit unintentionally: I did not do the timezone conversion when I submitted my paper, so I thought it was the next day. However, I have not cancelled, despite the potentially dire consequences, partly because the line-up of speakers is wonderful; partly because, though we all use the words “collective intelligence” (CI), we come from diverse disciplinary areas and sometimes mean very different things by them (so there should be some potentially inspiring conversations); and partly for a bigger reason that I will get to at the end of this post. You can read abstracts and most of the position papers on the symposium website.

In my own position paper I have invented the term ochlotecture (from the Classical Greek ὄχλος (ochlos), meaning something like “multitude”, and τέκτων (tektōn), meaning “builder”) to describe the structures and processes of a collection of people, whether it be a small seminar group, a network of researchers, or a set of adherents to a world religion. An ochlotecture includes elements like names, physical/virtual spaces, structural hierarchies, rules, norms, mythologies, vocabularies, and purposes, as well as emergent phenomena occurring through individual and subgroup interactions, most notably the recursive cycle of information capture, processing, and (re)presentation that I think characterizes any CI. Through this lens, I can see both what is common and what distinguishes the different kinds of CI described in these position papers a bit more clearly. In fact, my own use of the term has changed a few times over the years, so it helps me make sense of my own thoughts on the matter too.

Where I’ve come from that leads me here

I have been researching CI and education for a long time. Initially, I used the term very literally, to describe something quite distinct from individual intelligence and largely independent of it. My PhD, started in 1997, was inspired by the observation that (even then) there were at least tens of thousands of very good resources (people, discussions, tutorials, references, videos, courseware, etc.) openly available on the Web to support learners in most subject areas, enough to meet almost any conceivable learning need. The problem was, and remains, how to find the right ones. These were pre-Google times, but even the good-Google of olden days (a classic application of collective intelligence as I was using the term) only showed the most implicitly popular results, not those that would best meet a particular learner’s needs. As a novice teacher, I also observed that, in a typical classroom, the students’ combined knowledge and ability to seek more of it far exceeded my own. I therefore hit upon the idea of using a nature-inspired evolutionary approach to collectively discover and recommend resources, which led me very quickly into the realm of evolutionary theory and thence to the dynamics of self-organizing systems, complex adaptive systems, stigmergy, flocking, city planning, markets, and collective intelligence.

And so I became an ochlotect. I built a series of self-organizing social software systems that used the likes of social navigation (stigmergy), evolutionary algorithms, and flocking algorithms to create environments that both shaped and were shaped by the crowd. Acknowledging that “intelligence” is a problematic word, I simply called these collectives, a name inspired by Star Trek TNG’s Borg (the pre-Borg-Queen Borg, before the writers got bored or lazy). The intelligence of a “pure” collective, as I conceived it back then, was largely to be found in the algorithm, not the individual agents: by this way of thinking, human stock markets are no smarter than termite mounds (and they are not). I was trying to amplify the intelligence of crowds while avoiding the stupidity of mobs by creating interfaces and algorithms that made value to learners a survival characteristic. I was building systems that played some of the roles of a teacher but that were powered by collectives consisting of learners. Some years later, Mark Zuckerberg hit on the idea of doing the exact opposite, with considerably greater success, making a virtue of systems that amplified collective stupidity, but the general principles behind both EdgeRank and my algorithms were similar.
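
To give a flavour of the approach, here is a minimal sketch in Python (invented names, grossly simplified dynamics; an illustration of the general principle, not a reconstruction of any of my actual systems) of a stigmergic, evolution-flavoured recommender in which value to learners becomes a survival characteristic:

```python
import random

# A minimal, hypothetical stigmergic recommender: resources accumulate
# "pheromone" (fitness) from learner feedback, recommendations favour
# the fit, and unreinforced trails slowly evaporate, so value to
# learners becomes a survival characteristic.

DECAY = 0.95     # evaporation rate: unreinforced trails fade away
EXPLORE = 0.1    # chance of recommending something unproven

class Resource:
    def __init__(self, name):
        self.name = name
        self.fitness = 1.0   # initial pheromone level

def recommend(resources):
    """Mostly exploit the collective trace, occasionally explore."""
    if random.random() < EXPLORE:
        return random.choice(resources)
    return random.choices(resources, weights=[r.fitness for r in resources], k=1)[0]

def reinforce(resource, helped):
    """A learner's experience of a resource feeds back into the environment."""
    resource.fitness = max(0.01, resource.fitness + (1.0 if helped else -0.5))

def evaporate(resources):
    """Without ongoing reinforcement, trails decay towards oblivion."""
    for r in resources:
        r.fitness *= DECAY

# Many passes of the recursive cycle: capture, process, (re)present.
pool = [Resource(f"resource-{i}") for i in range(10)]
for _ in range(100):
    choice = recommend(pool)
    reinforce(choice, helped=random.random() < 0.5)  # stand-in for real feedback
    evaporate(pool)
print(max(pool, key=lambda r: r.fitness).name)  # the currently fittest resource
```

The essential loop is the recursive cycle described above: learners’ choices leave traces, traces shape recommendations, and traces that nobody reinforces evaporate.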

When I say that I “built” systems, though, I mean that I built the software part. I came increasingly to realize that the largest part of all of them was always the human part: what the individuals did, and the surrounding context in which they did it, including the norms, the processes, the rules, the structures, the hierarchies, and everything else that formed the ochlotecture, was intrinsic to their success or failure. Some of those human-enacted parts were as algorithmic as the software environments I provided, and no smarter than those used by termites (e.g. “click on the results at the top of the list or in bigger fonts”), but many others were designed, and played critical roles. This slightly more complex concept of CI played a major supporting role in my first book, providing a grounded basis for the design of social software systems that could support maximal learner control. In it, I wound up offering a set of 10 design principles, addressing human, organizational, pedagogical, and technological factors as well as emergent collective characteristics, that were prerequisites if social software systems were to evolve to become educationally useful.

Collectives also formed a cornerstone of my work with Terry Anderson over the next decade or so, and our use of the term evolved further. In our first few papers, starting in 2007, we conflated the dynamic process with the individual agents who made it happen: for us back then, a collective was the people and the processes (a sort of cross between my original definition and a social configuration the Soviets were once fond of), and so we treated a collective as somewhat akin to a group or a network. Before too long we realized that was dumb, and separated these elements out, categorizing three primary social forms (the set, the net, and the group) that could blend, and from which collectives could emerge and interact as a different kind of ochlotectural entity altogether. This led us to a formal abstract definition of collectives that continues to get the odd citation to this day. We wrote a book about social media and learning in which this abstract definition of collectives figured largely, and we designed The Landing to take advantage of it (not well: it was a learning experience). It appears in my position paper, too.

Collectives have come back with a vengeance, wearing different clothes, in my work of the last decade, including my most recent book. I am a little less inclined to use the word “collective” now, because I have come to understand all intelligence as collective, almost all of it mediated and often enacted through technologies. Technologies are the assemblies we construct from stuff to do stuff, and the stuff that they do then forms some of the stuff from which we construct more stuff to do stuff. A single PC alone, for instance, might contain hundreds of billions of instances of technologies in its assembly. A shelf of books might contain almost as many, not just in words and letters but in the concepts, theories, and models they make. As for the processes of making them, editing them, manufacturing the paper and the ink, printing them, distributing them, reading them, and so on: it is a massive, constantly evolving, ever-adapting, partly biological system, not far off natural ecosystems in its complexity, and equally diverse.

Every use of a technology is also a technology, from words in your head to flying a spaceship, and it becomes part of the stuff that can be organized by yourself or others. Through technique (technologies enacted intracranially), technologies are parts of us and we are parts of them, and that is what makes us smart. Collective behaviour in humans can occur without technologies, but what makes it collective intelligence is a technological connectome that grows, adapts, evolves, replicates, and connects every one of us to every other one of us: most of what we think is the direct result of assembling what we and others, stretching back in time and outward in space, have created. The technological connectome continuously evolves as we connect and orchestrate the vast web of technologies in which we participate, creating assemblies that have never occurred the same way twice, maybe thousands of times every day: have you ever even brushed your teeth or eaten a mouthful of cereal exactly the same way twice in your whole life? Every single one of us is doing this, and quite a few of those technologies magnify the effects, from words to drawing to numbers to writing to wheels to screws to ships to postal services to pedagogical methods to printing to newspapers to libraries to broadcast networks to the Internet to the World Wide Web to generative AI.

This is not just how we are able to be individually smart: it is an indivisible part of that smartness. Or stupidity. Whatever. The jury is out. Global warming, widening inequality, war, epidemics of obesity, lies, religious bigotry, famine, and many other dire phenomena are as much a direct result of this collective “intelligence” as Vancouver, the Mona Lisa, and space telescopes. Let’s just stick with “collective”.

The obligatory LLM connection and the big reason I’m attending the symposium

My position paper for this symposium wanders a bit circuitously towards a discussion of the collective nature of large language models (LLMs) and their consequent global impact on our education systems. LLMs are collectives in their own right, with algorithms that are not only orders of magnitude more complex than any of their predecessors but unique to every instantiation, operating from and on vast datasets, and presenting results to users who also feed those datasets. This is what makes them capable of very convincingly simulating both the hard (inflexible, correct) and the soft (flexible, creative) technique of humans, which is both their super-power and the cause of the biggest threat they pose. The danger is a) that they replace the need to learn the soft technique ourselves (not necessarily a disaster if we use them creatively in further assemblies) and, more worryingly, b) that we learn ways of being human from collectives that, though made of human stuff, are not human. They will in turn become parts of all the rest of the collectives in which we participate. This can and will change us. It is happening now, frighteningly fast, at an even greater speed and scale than the similar changes that the Zuckerbergian style of social media has already brought about.

As educators, we should pay attention to this. Unfortunately, with its emphasis on explicit, measurable outcomes, combined with the extrinsic lure of credentials, the ochlotecture of our chronically underfunded educational systems is not geared towards compensating for these tendencies. In fact, exactly the reverse. LLMs can already both teach and meet those explicit outcomes far more effectively than most humans, at a very compelling price, so, more and more, they will. Both students and teachers are replaceable components in such a system. The saving grace and/or problem is that those explicit outcomes, though they matter and though they are how we measure educational success, are not in fact the most important ends of education: they are means to those ends.

The things that matter more are the human ways of thinking, of learning, and of seeing, that we learn while achieving such outcomes; the attitudes, values, connections, and relationships; our identities and the ways we learn to exist in our societies and cultures. It’s not just about doing and knowing: it’s about being, it’s about love, fear, wonder, and hunger. We don’t have to (and can’t) measure those because they all come for free when humans and the stuff they create are the means through which explicit outcomes are achieved. It’s an unavoidable tacit curriculum that underpins every kind of intentional and most unintentional learning we undertake, for better or (too often) for worse. It’s the (largely) non-technological consequence of the technologies in which we participate, and how we participate in them. Technologies don’t make us less human, on the whole: they are exactly what make us human.

We will learn such things from generative AIs, too, thanks to the soft technique they mimic so well, but what we will learn to be as a result will not be quite human. Worse, the outputs of the machines will begin to dominate their own inputs, and the rest will come from humans who have been changed by their interactions with them, like photocopies of photocopies, constantly and recursively degrading. In my position paper I therefore argue for the need to cherish the human parts of these new collectives in our education systems far more than we have before, and I suggest some ways of doing that. It matters not just to avoid model collapse in LLMs, but to prevent model collapse in the collective intelligence of the whole human race. I think that is quite important, and that’s the real reason I will spend some of my wedding anniversary talking with some very intelligent and influential people about it.


Sets, nets and groups revisited

Here are the slides from a talk I gave earlier today, hosted by George Siemens and his fine team of people at Human Systems. Terry Anderson helped me to put the slides together and offered some great insights and commentary after the presentation, but I am largely to blame for the presentation itself. Our brief was to talk about sets, nets, and groups, the theme of our last book, Teaching Crowds: Learning and Social Media, and of much of our work together since 2007 but, as I was the one presenting, I bent it a little towards generative AI and my own intertwingled perspective on technologies and collective cognition, which is most fully developed (so far) in my most recent book, How Education Works: Teaching, Technology, and Technique. If you’re not familiar with our model of sets, nets, groups, and collectives, there’s a brief overview on the Teaching Crowds website. It’s a little long in the tooth, but I think it is still useful and will help to frame what follows.

A recreation of the famous New Yorker cartoon, “On the Internet no one knows you are a dog” – but it is a robot dog

The key new insight, appearing for the first time in this presentation, is that, rather than being a fundamental social form in their own right, groups consist of technological processes that make use of, and help to engender and give shape to, the more fundamental forms of nets and sets. At least, I think they do: I need to think and talk some more about this, at least with Terry, and work it up into a paper, because I haven’t yet thought through all the repercussions. Even back when we wrote the book I thought of groups as technologically mediated entities, but it was only when writing these slides, in the light of my more recent thinking on technology, that I paid much attention to the phenomena that they actually orchestrate in order to achieve their ends. Although there are non-technological prototypes – notably in the form of families – these are emergent rather than designed. The phenomena that intentional groups primarily orchestrate are those of networks and sets, which are simply configurations of humans and their relationships with one another. Modern groups – in a learning context, classes, cohorts, tutorial groups, seminar groups, and so on – are designed to fulfill more specific purposes than their natural prototypes, and they are made possible by technological inventions such as rules, roles, decision-making processes, and structural hierarchies. Essentially, the group is a purpose-driven technological overlay on top of more basic social forms, as the sketch below tries to illustrate. It seems natural, much as language seems natural, because it is so basic and fundamental to our existence and to how everything else works in human societies, but it is an invention (or many inventions, in fact) as much as wheels and silicon chips are.
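
To make that overlay idea concrete, here is a deliberately crude sketch in Python (my illustration only, with invented names; nothing like this appears in the book or the slides) of a group as a designed layer of roles and rules on top of an underlying set and net:

```python
from dataclasses import dataclass, field

# A deliberately crude model: sets and nets as fundamental social forms,
# and a group as a purpose-driven technological overlay that orchestrates
# both through invented rules, roles, and hierarchies.

@dataclass
class SocialSet:
    """People clustered by a shared attribute (a topic, a tag, an enrolment)."""
    attribute: str
    members: set = field(default_factory=set)

@dataclass
class SocialNet:
    """People and their pairwise relationships."""
    edges: set = field(default_factory=set)

    def connect(self, a, b):
        self.edges.add(frozenset((a, b)))

@dataclass
class Group:
    """The overlay: designed structure imposed on set and net phenomena."""
    purpose: str
    roles: dict = field(default_factory=dict)   # person -> role
    rules: list = field(default_factory=list)   # explicit norms and processes
    substrate_set: SocialSet = None
    substrate_net: SocialNet = None

# Example: a class-as-group orchestrating an underlying set and net.
cohort = SocialSet("enrolled-in-EDUC501", {"ada", "ben", "cam"})
friendships = SocialNet()
friendships.connect("ada", "ben")
seminar = Group(
    purpose="learn research methods together",
    roles={"ada": "student", "ben": "student", "cam": "tutor"},
    rules=["weekly discussion posts", "peer review due Fridays"],
    substrate_set=cohort,
    substrate_net=friendships,
)
```

The point of the toy is only that the group adds nothing new at the bottom: everything it works with is already there in the set and the net; the group is the designed structure laid over them.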

Groups are among the oldest and most highly evolved of human technologies, and they are incredibly important for learning, but they have a number of inherent flaws and trade-offs/Faustian bargains, notably in their effects on individual freedoms, in scalability (mainly achieved through hierarchies), in sometimes unhealthy power dynamics, and in the limitations they place on the roles individuals play in learning. Modern digital technologies can help to scale them a little further and refine or reify some of the rules and roles, but the basic flaws remain.

However, modern digital technologies also offer other ways of enabling sets and networks of people to support one another’s learning, from blogs and mailing lists to purpose-built social networking systems, from Wikipedia and Academia.edu to Quora, in ways that can (optionally) integrate with and utilize groups but that differ in significant ways, such as in removing hierarchies, structuring through behaviour (collectives), and filtering or otherwise mediating messages. With some exceptions, however, the purposes of large-scale systems of this nature (which would provide an ideal set of phenomena to exploit) are not usually driven by a need for learning, but by a need to gain attention and profit. Facebook, Instagram, LinkedIn, X, and others of their ilk have vast networks to draw on but few mechanisms that support learning, and limited checks and balances for reliability or quality when it does occur (which of course it does). Most of their algorithmic power is devoted to driving engagement, and the content and purpose of that engagement only matter insofar as they drive further engagement. Up to a point, trolls are good for them, which is seldom if ever true for learning systems. Some – Wikipedia, the Khan Academy, Slashdot, Stack Exchange, Quora, some subreddits, and so on – achieve both engagement and intentional support for learning. However, they remain works in progress in the latter regard, being prone to a host of ills from filter bubbles and echo chambers to context collapse and the Matthew Effect, not to mention intentional harm by bad actors.

I’ve been exploring this space for approaching 30 years now, but there remains almost as much scope for further research and development in this area as there was when I began. Though progress has been made, we have yet to figure out the right rules and structures to deal with a great many problems, and it is increasingly difficult to slot the products of our research into a bland, corporate online space dominated by a shrinking number of centralized learning management systems that continue to refine their automation of group processes and structures and, increasingly, to ignore the sets and networks on which they rely.

With that in mind, I see big potential benefits in generative AIs – the ultimate collectives – as supporters and enablers for crowds of people learning together. Generative AI provides us with the means to play with structures and adapt in hitherto impossible ways, because the algorithms that drive their adaptations are indefinitely flexible, the reified activities that form them are vast, and the people who participate in them play an active role in adjusting and forming their algorithms (not the underpinning neural nets but the emergent configurations they take). These are significant differences from traditional collectives, which tend to have one purpose and one algorithm (typically complex but deterministic), such as returning search results or engaging network interactions. I also see a great many potential risks, about which I have written fairly extensively of late, most notably the risk that they play the soft, orchestral roles in the assembly that humans would otherwise need to learn to play themselves. We tread a fine line between learning utopia and learning dystopia, especially if we try to overlay them on top of educational systems that are driven by credentials.

Credentials used to signify a vast range of tacit knowledge and skills that were never measured and, notwithstanding a long tradition of cheating, that was fine as long as nothing else could create those signals, because they were serviceable proxies: if you could pass the test or assignment, it meant that you had gone through the process and learned a lot more than what was tested. This has been eroded for some time, abetted by social media like Course Hero or Chegg that remain quite effective ways of bypassing the process for those willing to pay a nominal sum and accept the risk. Now that generative AI can produce the same results at considerably lower cost, with greater reliability and lower risk, and without the student having gone through the process, such assessments no longer make good signifiers and, anyway (playing Devil’s advocate), it remains unclear to what extent those soft, tacit skills are needed now that generative AIs can achieve them so well. I am much encouraged by the existence of Paul LeBlanc’s lab initiative, the fact that George is the chief scientist for it, its intent to enable human-centred learning in an age of AI, and its aspiration to reinvent education to fit. We need such endeavours. I hope they will do some great things.

Informal Learning in Digital Contexts | Handbook of Open, Distance, and Digital Education

This is the second of two chapters by Terry Anderson and me (the other, on the topic of pedagogical paradigms, I shared a week or two ago) from Springer’s Handbook of Open, Distance, and Digital Education.

The ‘paradigms’ chapter more or less wrote itself – we’ve churned those ideas around for long enough now that we both know the topic rather well – but this one caused us a lot more trouble. Our difficulties were largely due to the fact that we started out with roughly as much idea about what the term ‘informal learning’ means as anyone else. In other words, we kind of recognized it when we saw it, but could come up with no plausible definition that was not either simply wrong, incomplete, or vaguely defined as ‘not formal’ (sometimes adding the utterly circular cop-out notion of ‘non-formal’). As we later figured out, ‘formal’ is no better defined than ‘informal’, so that didn’t help. Faced with the need to cover a fairly representative sample of work in the area, we therefore made a mess of it. Our initial draft consisted mainly of a set of examples, culled largely from Terry’s encyclopaedic knowledge of the literature in the field, bound together in loosely connected themes. Because the literature we were citing was based on a large, vague, and often mutually contradictory variety of understandings of ‘informal learning’, the chapter reflected this too: the parts were fine, but the whole was quite incoherent. We needed a better framework.

So we started to brainstorm a few different ways of thinking about the problem, looking at as many ways the term was used as we could find, identifying common patterns and frequently associated concepts, trying to distinguish necessary from sufficient conditions, and consequently finding a much bigger mess than the one we had started with. The amount of fuzzy thinking and loose, almost arbitrary terminology found in the field of informal learning turns out to be quite staggering. It’s not a field: it’s a jungle.

Not for the first time, though, I found Michael Eraut’s work in the area to be an inspiration and source of clarity. Eraut doesn’t try to come up with a single defining characteristic, instead recognizing that there is a richly variegated continuum of informal-to-formal ways that people learn from and with one another (at least in the workplace settings he has studied). Although (as far as I know) he didn’t explicitly use the term, the sets of characteristics that Eraut uses to identify relative degrees of informality seemed to me to imply that he was thinking in terms of what Wittgenstein described as Familienähnlichkeit (family resemblances). No single cluster of characteristics defines learning as informal (or formal, for that matter) but, if enough are present, we can usually recognize it as one or the other, or somewhere in between.

This gave us a useful starting point, but it still left a lot of vagueness, and Eraut’s focus on informal workplace learning did not fully address all of the meanings and instantiations of informal learning that are particularly significant when examining digital contexts – all the stuff that happens in exchanges through social media, for instance, from Quora to YouTube tutorials and back through email, Reddit, and Twitter. It also seemed to gloss over the formal stuff which (as we noted) is as poorly defined as ‘informal’, and which almost never occurs in anything resembling a ‘pure’ form: there is hardly ever any formal learning without informal learning lurking close by. It would be a lot easier if we just talked about formal teaching, because that does refer to a much clearer set of better-defined activities, but teaching is not at all the same thing as learning. Indeed, sometimes the relationship is very oblique indeed, notwithstanding Freire’s claim that you cannot call it teaching unless learning occurs. And then there’s the complex role of credentials of various kinds in both assessing and influencing learning. We wanted to find a way to capture the richness of all that, but could find no existing work that worked well enough for us.

We went through a lot of different concepts and representations (yes, there were Venn diagrams!) before finally hitting on the notion that it is not so much a one-dimensional continuum between formal and informal as a multi-dimensional spectrum, defined in terms of relative degrees of dependence/independence and intentionality/non-intentionality.


Informal learning as a 3D continuum, with dimensions of dependence/self-direction and incidental/intentional

We (tentatively) reckon that we can situate at least most existing work in the field within this framework, and that it provides a helpful way of thinking about whatever is happening at a particular moment of a learning trajectory (another concept from Eraut that I’ve found very useful in the past, especially when talking about transactional control in my first book). An individual’s learning trajectory will constantly wind around this space and, when other individuals are involved (not just formal teachers), their paths will affect one another in interesting ways. After we’d worked this out, the rest of the chapter fell more or less into place. You can read the result here.
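
As a purely illustrative toy (my own, not from the chapter, and assuming just the two families of dimensions named above), a moment of learning might be located in that space like this:

```python
from dataclasses import dataclass

# A toy rendering of the framework: a moment of learning located by its
# degree of intentionality and of dependence on others. The corner labels
# and examples are mine, for illustration only.

@dataclass
class LearningMoment:
    intentionality: float   # 0.0 = incidental, 1.0 = deliberate
    dependence: float       # 0.0 = self-directed, 1.0 = other-directed

    def describe(self):
        corners = {
            (False, False): "incidental and self-directed (e.g. stumbling on a blog post)",
            (False, True): "incidental and other-directed (e.g. the tacit curriculum)",
            (True, False): "intentional and self-directed (e.g. choosing a YouTube tutorial)",
            (True, True): "intentional and other-directed (e.g. taking a formal course)",
        }
        return corners[(self.intentionality >= 0.5, self.dependence >= 0.5)]

print(LearningMoment(intentionality=0.9, dependence=0.1).describe())
# intentional and self-directed (e.g. choosing a YouTube tutorial)
```

A learning trajectory, in these terms, is simply a path traced through that space over time.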

Here’s the chapter abstract:

Governments, business leaders, educators, students, and parents realize the need to inculcate a culture of lifelong learning – learning that spans geography, time, and lifespan. This learning has both formal and informal components. In this chapter, we examine the conceptual basis upon which informal learning is defined and some of the tools and techniques used to support informal learning. We overview the rapid development in information and communications technologies that not only creates opportunities for learners, teachers, and researchers but also challenges us to create equitable and culturally appropriate tools and contexts in which high-quality, continuous learning is available to all.

Reference

Dron J., Anderson T. (2022) Informal Learning in Digital Contexts. In: Zawacki-Richter O., Jung I. (eds) Handbook of Open, Distance and Digital Education. Springer, Singapore. https://doi.org/10.1007/978-981-19-0351-9_84-1

The physics of social spaces are not like the physics of physical spaces

Over the last week I peripherally participated in an interesting exchange of views on Twitter between Jesse Stommel and Stephen Downes that raises some fascinating issues about the nature of online social spaces. It started with a plea from Jesse:

“Dear [insert company name], searching every mention of your company and jumping into conversations where you haven’t been tagged or invited is invasive. Stop doing that.”

Stephen took exception to this, pointing out that:

“If I use a company name in a public forum, I expect they will take interest and maybe even reply. It’s a *public* forum. That’s how they work.”

What followed explored some fascinating territory, but the essence of the main arguments is (I skim the nuances), on Jesse’s side, that we have a reasonable expectation of being left alone during a private conversation in any public space and, on Stephen’s side, that there should be no expectation of privacy in a public digital space like Twitter, and that any claims to it tread on extremely dangerous ground. The central question is thus whether there are such things as private conversations on Twitter.

Stephen’s big concern is that, taken to its logical conclusion, laying claim to privacy on Twitter opens the door for outrages like the Proctorio vs Linkletter case, in which Proctorio claimed that “Mr. Linkletter infringed its copyright, circumvented technological protection measures, and breached confidence” by sharing one of its fully public (though not publicized) YouTube videos with students. YouTube quite closely resembles Twitter in its social structure (though little else), so it is a good analogy. Stephen is, I think rightly, concerned at ‘calling out’ individuals or organizations for invading ‘private’ conversations in public spaces because it implies the unilateral imposition of norms, rules of behaviour, and expectations by one individual or group on another, in a space that neither owns.

Jesse’s counter-arguments are interesting, and subtle. He strongly rejects Stephen’s analogy with the Proctorio case because all he is doing is asserting his right to privacy, not abusing his market position or trying to cause harm. It’s just a request to be let alone, calling on what he sees as norms of politeness, not a demand that this should be enshrined in rules or legislation. He observes that, though Twitter is a public space, it has variegation that emerges because of (often tacit, seldom explicit) ways that many (not all) people use it, which in turn is supported by the ways that Twitter’s algorithms push some kinds of tweet more than others. For this particular case in point, he notes that the algorithm tends to broadcast initial tweets more than it does replies, so what follows in a set of replies could be assumed by its participants to be a less public conversation. In fact, as I understand his argument, Jesse thinks of it as a private conversation in a public space, analogous to having a private conversation in a public park where one might be inadvertently overheard, but it would be rude to deliberately listen in or contribute unless invited. If this were a true analogy then I might support it. But, if it is true, then so are quite a few other things, and that’s where it starts to get interesting.

I’ve been a Twitter user for approaching 15 years and it had never occurred to me until now that any of my conversations might in any way be construed as private. They are sometimes personal, for sure, but definitely not private. Conversations are soft technologies – flexible, mutable, and situated – and (without further clues, like people quietly conversing in a corner) you need to read them in order to know whether you would be intruding on them, which means that they are simply not private. Without further reasons to assume privacy, it is just a conversation in public between two people to which other people are not invited.

So the crux of Jesse’s argument seems to be the notion that a happenstance of Twitter’s current implementation, which makes some tweets less likely to be seen than others, combined with a set of norms relating to it that may or may not be shared by others, allows one to claim that a conversation is not just personal but private.

The physics of online social spaces

Twitter is, as Stephen says and Jesse agrees, for the most part a completely public space (not counting direct messaging or constraints on tweets to only those you follow/are following) but, as the example of the relative prominence given to initial tweets compared with replies amply demonstrates, it does have a structure. It is just one that does not obey anything like the same physics as a physical space. You can achieve a measure of privacy in a public physical space because there has to be proximity in space and time in order to communicate at all, and there are limits to human voice projection, to the ability to hear, and to the ability to attend to multiple conversations at once. There are also visual clues that people are talking privately. Though there is variegation in Twitter’s structure, none of those limits applies in Twitter or, for that matter, in most online social spaces.

Early in the conversation I chipped in to observe that one of the many differences between private conversations in physical space and Twitter exchanges is that tweets are persistent. They are a little like graffiti left in public spaces, which continues to communicate long after the initial intent has passed, and may be happened upon at any time in the future in quite different contexts from those imagined by the graffiti artist. Jesse’s response was that there’s a difference between graffiti in five-foot-high letters on a public building and graffiti on a shady tree or in a tunnel. Again, his point is that there are parts of Twitter where there might be a reasonable expectation of relative privacy, where it would be rude to join the conversation. Though I agree that it is often possible to tell from reading a conversation whether you might be welcome or not (and yes, social norms apply to that), my big problem with Jesse’s argument is that proximity in Twitter-space is not just defined by relative position in a dialogue or likelihood of appearance in a Twitter feed, as he seems to imply.

Beyond its support for conversations between individuals, Twitter embodies two distinct but overlapping social forms: the network and the set. @mentions, combined with Twitter’s ‘following’ functionality, are the main drivers of the network form. If you follow someone, or they mention you, then your message becomes proximal to them. That’s a big part of Twitter’s physics, and it has no analogue in physical space. Thus, your conversation is very likely to be overheard by others because you are (metaphorically) standing right next to them, chipping your words into stone in five-foot letters where they can and will be found, now and in the future. If you wanted to have a private conversation in a park, you wouldn’t stand less than a metre away from someone you didn’t want listening in and shout in their face. But that’s not all.

Hashtags and search terms are the main drivers of the set social form, which at least closely competes with, if not exceeds, the value of social networks in Twitter. When you use a hashtag or even a distinctive word (say, the name of a company or person), your message becomes proximal to those who follow that hashtag or who have saved a search for that keyword. So you are not just standing right next to everyone in your social network, but also next to the potentially much larger social set of people who are interested in the keywords you use in your conversation. Again, you might not intend it, you might not even be able to see them, but you are shouting in their faces.
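
A toy model in Python may make the contrast with park-bench physics clearer. It is entirely hypothetical (real Twitter’s mechanics are far more complex and not public), but it captures the proximity rules just described: a tweet lands in front of the author’s network and of every set defined by a hashtag or keyword it contains:

```python
# A toy illustration of the "physics" described above: a tweet is
# proximal not just to participants in the conversation, but to the
# network (followers, mentions) and the set (hashtag- and
# keyword-followers) of its author.

def audience(tweet_text, author, followers, mentions, hashtag_watchers, keyword_watchers):
    """Return everyone to whom this tweet is 'proximal'."""
    proximal = set(followers.get(author, []))          # the network: followers
    proximal.update(mentions)                          # the network: @mentions
    words = {w.strip("#.,!?").lower() for w in tweet_text.split()}
    for tag, watchers in hashtag_watchers.items():     # the set: hashtags
        if tag.lower() in words:
            proximal |= set(watchers)
    for term, watchers in keyword_watchers.items():    # the set: saved searches
        if term.lower() in words:
            proximal |= set(watchers)
    return proximal

# Example: a "private" reply naming a company lands in front of everyone
# who follows the author, is mentioned, or watches that keyword.
listeners = audience(
    "Totally agree about acmecorp support #edtech",
    author="jesse",
    followers={"jesse": ["alice", "bob"]},
    mentions=["stephen"],
    hashtag_watchers={"edtech": ["carol"]},
    keyword_watchers={"acmecorp": ["acme_pr_team"]},
)
print(sorted(listeners))  # ['acme_pr_team', 'alice', 'bob', 'carol', 'stephen']
```

Nothing in that function cares whether the conversation felt private to its participants: proximity is computed from the words themselves.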

Maybe you do have a right to privacy in any public space, but that right does not overrule simple physics. You have to know the physics of a space in order to know what ‘private’ means within it. And the simple physics of Twitter means that ‘next to’ and ‘within hearing distance’ extend to anyone with an interest in you or in what you are saying. If you want a different social physics that supports privacy, then you need to take your conversation to a different space, because Twitter doesn’t work that way. You can ask for non-interference in a personal conversation, but not for privacy.

Designing better social physics

As it happens, we grappled a lot with issues of context and privacy exactly like this when we designed the social physics of the Landing. Its social physics are deliberately designed to provide precisely those nooks and niches that Jesse wants to find in Twitter. The Landing starts with discretionary access control for every post and every profile field (we chose to build it using the Elgg framework because of its support for this). Like the much-missed (and never hit) Google+, it also allows you to create circles, which are useful not just for following but, more significantly, for limiting access to particular individuals. Again, that came for free with Elgg, though we added some enhancements to foreground it, and to make it usable.

It’s not just about the content, though; it’s about presentation of self (we were influenced in this by Goffman’s dramaturgical analysis). We therefore also built a range of context-switching tools – notably tabbed profiles and pinboards (known internally as ‘sets’) – that allow you to present a completely different facade to different circles, groups, and sets of people. This is not just a matter of showing or hiding different fields and content, but of looking completely different and showing completely different stuff to different people. The public facade of my profile is not the same as the one displayed to my friends and, if I wished, I could present different facades to all the different circles or groups of people I follow or belong to.

We’ve still not solved the temporal issue: like most social sites, the fundamental unit of communication is still persistent graffiti. In fact, to a large extent we wanted it that way, because it’s a site for collective learning, and so it has to have a collective memory, though, like memories in brains, it would be useful to have short-term memories too. However, simply letting posts expire is not the solution, in part because of the many ways that digital content can be copied and archived but, more importantly, because forgetting is and must be an active process that cannot and should not be automated. My earlier CoFIND system did have a way to deal with that (memories had to be actively maintained by the interest and use of members or, though they would never be fully lost, they would become far less likely to be recalled), but we didn’t make much use of that idea on the Landing, save in isolated pockets, because it would have really irritated the many people and groups that engage intermittently (e.g. in iterations of paced courses).
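
For the curious, here is a minimal sketch of that kind of use-it-or-lose-it memory (invented names and numbers; the real CoFIND mechanism differed in its details):

```python
# A simplified, speculative sketch of activity-maintained memory of the
# kind described above: recall likelihood grows with use and decays with
# neglect, but never quite reaches zero, so nothing is fully lost.

HALF_LIFE_DAYS = 30.0   # salience halves after a month of neglect
FLOOR = 0.01            # memories are never entirely forgotten

class Memory:
    def __init__(self, content):
        self.content = content
        self.salience = 1.0
        self.last_used_day = 0

    def decay(self, today):
        """Neglect makes recall less likely, but never impossible."""
        idle = today - self.last_used_day
        self.salience = max(FLOOR, self.salience * 0.5 ** (idle / HALF_LIFE_DAYS))
        self.last_used_day = today

    def use(self, today):
        """Active interest reinforces the memory."""
        self.decay(today)
        self.salience += 1.0

def recall_ranking(memories, today):
    """Order memories by current salience, most recallable first."""
    for m in memories:
        m.decay(today)
    return sorted(memories, key=lambda m: m.salience, reverse=True)
```

The design point is the one made above: forgetting here is an emergent consequence of the community’s waning interest, not an automated expiry date.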

Unfortunately, most of the Landing’s context-switching features are not even slightly intuitive (especially to those already familiar with the cruder social physics of popular social media), so most are very rarely used. Google+, with its massively simplified version of the same idea, probably failed at least in part for this reason. Such complexity can work, with the right membership. Slashdot, for instance, has an extraordinarily rich and ever-evolving social physics, and it has thrived for about 25 years, but the reasons for its success probably lie at least in part in its tagline, ‘News for Nerds’: its members are not fazed by complex interfaces, and it is well-enough designed to work reasonably well even if you don’t engage with all the features.

Perhaps a bigger issue, though, is that the richer social physics of both Slashdot and the Landing only work if you happen to be a member. For public posts, like this one, the physics are very much like those of Twitter or Facebook.

For now, the best bet is to use different social spaces for different aspects of your life but, thanks largely to Facebook’s single-minded and highly effective undermining of OpenSocial, there are not many ways to move seamlessly between them right now while retaining a rich and faceted identity. At least there’s still RSS, which is why you might be reading this on the Landing (where it is originally posted) or at https://jondron.ca/ (which will automagically then push it to Twitter), but it’s not ideal.

It’s very challenging to design a digital space that is both richly supportive of human social needs and easy to use. The Landing is definitely not the solution, but the underlying idea – that people are richly faceted social beings who interact and present themselves differently to different people at different times –  still makes sense to me. As the conversation between Jesse and Stephen shows, there is a need for support for that more than ever.

Tim Berners-Lee: we must regulate tech firms to prevent ‘weaponised’ web

TBL is rightfully indignant and concerned about the fact that “what was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms.” The Web, according to Berners-Lee, is at great risk of degenerating into a few big versions of CompuServe or AOL, sucking up most of the bandwidth of the Internet and most of the attention of its inhabitants. In an open letter, he outlines the dangers of putting so much power into hands that either see it as a burden or actively exploit it for evil.

I really really hate Facebook more than most, because it aggressively seeks to destroy all that is good about the Web, and it is ruthlessly efficient at doing so, regardless of the human costs. Yes, let’s kill that in any way that we can, because it is actually and actively evil, and shows no sign of getting any nicer. I am somewhat less concerned that Google gets 87% of all online searches (notwithstanding the very real dangers of a single set of algorithms shaping what we find), because most of Google’s goals are well aligned with those of the Web. The more openly people share and link, the better it gets, and the more money Google makes. It is very much in Google’s interest to support an open, highly distributed, highly connected Web, and the company is as keen as everyone else to avoid the dangers of falsehoods, bias, and the spread of hatred (which are among the very things that Facebook feeds upon), and, thanks to its strong market position and careful hiring practices, it is more capable of doing so than pretty much anyone else. Google rightly hates Facebook (and others of its ilk) not just because it is a competitor, but because it removes things from the open Web, probably spreads lies more easily than truths, and so reduces Google’s value.

I am somewhat bothered that the top 100 sites (according to Wikipedia, based on Alexa and SimilarWeb results) probably get far more traffic than the next few thousand put together, and that the long tail pretty much flattens to approximately zero after that. However, that’s an inevitable consequence of the design of the Web (it’s a scale-free network subject to power laws), and ‘approximately zero’ may actually translate to hundreds of thousands or even millions of people, so it’s not quite the skewed mess that it seems. It is, as TBL observes, very disturbing that big companies with big pockets purchase potential competitors and stifle innovation, and I agree that (like all monopolies) they should be regulated, but there’s no way they are ever going to get everything or everyone, at least without the help of politicians and evil legislation, because it’s a really long tail.

It is also very interesting that even the top 10 – according to just about all the systems that measure such things – includes the unequivocally admirable and open Wikipedia itself, and also Reddit which, though now straying from its fully open model, remains excellently social and open. In different ways, both give more than they take.

It is also worth noting that there are many different ways to calculate rank. Moz.com (based on the Mozscape web index of 31 billion domains and 165 billion pages) has a very different view of things, for instance, in which Facebook doesn’t even make it into the domains listing, and sits way below WordPress and several others in the popular pages list, a direct result of it being a closed and greedy system. Quantcast’s perspective is somewhat different again, albeit focused only on US sites, which are a small but significant portion of the whole.

Most significantly, and to reiterate the point because it is worth making, the long tail is very long indeed. Regardless of the dangers of a handful of gigantic platforms casting their ugly shadows over the landscape, I am extremely heartened by the fact that, now, over 30% of all websites run on WordPress, which is both open source and very close to the distributed ideal that TBL espouses, allowing individuals and small communities to stake their claims, make a space, and link (profusely) with one another, without lock-in, central control, or inhibition of any kind. That 30% puts any one of the big monoliths, including Facebook, very far into the shade. And, though WordPress’s nearest competitor (Joomla, also open source) accounts for a ‘mere’ 3% of all websites, there are hundreds if not thousands of similar systems, not to mention a huge number of pages (50% of the total, according to W3Techs) that people still roll for themselves.

Yes, the greedy monoliths are extremely dangerous and should, where possible, be avoided, and it is certainly worth looking into ways of regulating their activities, nationally and internationally, as many governments are already doing and should continue to do so. We must ever be vigilant. But the Web continues to grow, and to diversify regardless of their pernicious influence because it is far bigger than all of them put together.

Address of the bookmark: https://www.theguardian.com/technology/2018/mar/11/tim-berners-lee-tech-companies-regulations

Originally posted at: https://landing.athabascau.ca/bookmarks/view/3105535/tim-berners-lee-we-must-regulate-tech-firms-to-prevent-weaponised-web

Facebook has a Big Tobacco Problem

A perceptive article listing some of Facebook’s evils and suggesting an analogy between the tactics used by Big Tobacco and those used by the company. I think there are a few significant differences. Big Tobacco is not one company bent on profit no matter what the cost. Big Tobacco largely stopped claiming it was doing good quite a long time ago. And Big Tobacco only kills and maims people’s bodies. Facebook is aiming for the soul. The rest is just collateral damage.

Address of the bookmark: https://mondaynote.com/facebook-has-a-big-tobacco-problem-f801085109a

Originally posted at: https://landing.athabascau.ca/bookmarks/view/3046034/facebook-has-a-big-tobacco-problem

Facebook’s days may be numbered as UK youth abandon the platform

The end of Facebook couldn’t come soon enough, but we’ve been reading headlines not unlike this for around a decade, yet still its malignant tumour in the lungs of the Web grows, sucking the air out of all good things.

Despite losses in the youth market (not only in the UK), as the article notes, Facebook has deep pockets and is metastasizing at a frightening rate. Instagram and WhatsApp are only the most prominent recent growths, and no doubt far from the last. Also, the main tumour itself is still evolving, backed by development funding that staggers belief. It would take a lot to cure us of this awful thing. On the optimistic side, however, Metcalfe’s Law works just as well in reverse as going forward. Networks can grow exponentially, but they can shrink just as fast. Perhaps these small losses will be the start of a cascade. Let’s hope so.
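
For the intuition, take the usual back-of-envelope form of Metcalfe’s Law, in which a network’s value grows with the number of possible connections among its n users, v(n) = n(n-1)/2. The arithmetic below is my own illustration, not from the article, and Metcalfe’s formulation is itself contested, but it shows why shrinkage is disproportionately destructive:

```python
# A back-of-envelope illustration of Metcalfe's Law in reverse: if a
# network's value scales with the number of possible pairwise
# connections, losing users destroys value disproportionately fast.

def metcalfe_value(n):
    """Number of possible pairwise connections among n users."""
    return n * (n - 1) // 2

full = metcalfe_value(1_000_000)
halved = metcalfe_value(500_000)
print(f"Half the users leaves {halved / full:.0%} of the value")  # ~25%
```

By the same crude logic that made each new user disproportionately valuable on the way up, each departing user takes a disproportionate share of the value on the way down.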


Address of the bookmark: http://www.alphr.com/facebook/1008480/facebook-youth-numbers-drop-over-55-rise

Originally posted at: https://landing.athabascau.ca/bookmarks/view/3037494/facebook%E2%80%99s-days-may-be-numbered-as-uk-youth-abandon-the-platform

Addicted to learning or addicted to grades?


Figure 1: Skinner’s teaching machine

It is not much of a surprise that many apps are designed to be addictive, nor that there is a whole discipline behind making them so, but I was particularly interested in the delightfully named Dopamine Labs’ use of behaviourist techniques (operant conditioning with variable ratio scheduling, I think), and in the reasoning behind it. As the article puts it:

One of the most popular techniques … is called variable reinforcement or variable rewards. 
It involves three steps: a trigger, an action and a reward.
A push notification, such as a message that someone has commented on your Facebook photo, is a trigger; opening the app is the action; and the reward could be a “like” or a “share” of a message you posted.
These rewards trigger the release of dopamine in the brain, making the user feel happy, possibly even euphoric, Brown says.
“Just by controlling when and how you give people that little burst of dopamine, you can get them to go from using [the app] a couple times a week to using it dozens of times a week.”

For well-designed social media and games, the reward is intrinsic to the activity, and perfectly aligned with its function. If the intent is to create addicts – which, in both kinds of system, it probably is – the trick is to design an environment that builds rewards into the algorithms (the rules) of the system, and to keep them coming: ideally making it possible for the rewards to increase in intensity as the user gains greater expertise or experience, while varying the ratios or intervals between rewards to keep things interesting. Though this particular example falls out of behaviourist theory, it is also well supported by cognitivist and brain-based understandings of how we think. Drug dealers know this too, as it happens. If you want to keep people using your product, this is how to make it particularly addictive.
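
To make the mechanism concrete, here is a toy variable-ratio schedule in Python (my own sketch, not Dopamine Labs’ actual method): the reward arrives after an unpredictable number of actions, which is what makes the next check of the app so hard to resist:

```python
import random

# A toy variable-ratio reinforcement schedule: the reward arrives after
# an unpredictable number of actions (averaging REWARD_RATIO), which in
# operant conditioning produces high, extinction-resistant response
# rates -- the engine of the trigger/action/reward loop quoted above.

REWARD_RATIO = 5  # on average, one reward per five actions

def next_threshold():
    """Draw the number of actions required before the next reward."""
    return random.randint(1, 2 * REWARD_RATIO - 1)  # mean = REWARD_RATIO

class VariableRatioSchedule:
    def __init__(self):
        self.actions_since_reward = 0
        self.threshold = next_threshold()

    def act(self):
        """Record one action (e.g. opening the app); True means 'reward now'."""
        self.actions_since_reward += 1
        if self.actions_since_reward >= self.threshold:
            self.actions_since_reward = 0
            self.threshold = next_threshold()
            return True   # deliver the like/notification/dopamine hit
        return False

schedule = VariableRatioSchedule()
rewards = sum(schedule.act() for _ in range(1000))
print(f"{rewards} rewards over 1000 actions")  # roughly 1000 / REWARD_RATIO
```

The unpredictability is the point: a fixed ratio would let users pace themselves, whereas a variable one keeps the very next action always potentially the rewarded one.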

Learning addicts

Lovers of learning experience addiction too. The more we learn, the more there is to learn, the greater the depth and pleasure there is to be found in doing so, and the sporadic ups and downs, especially when faced with challenges we eventually solve, are part of the joy of it. Increasing mastery of anything is a reward in itself that seems quite intrinsic to our make-up, and to that of many other animals. Doing it in a social context is even better, as we share in the learning of others and gain value (social capital, different perspectives, help overcoming problems, etc) in the process. We gain greater control, greater autonomy, greater capability to live our lives as we want to live them, which is very motivating. As long as the reward comes from the activity itself, and the activity is not harmful, this is good news. It makes sense from an evolutionary perspective. We are innately motivated to learn, because learning is an extremely valuable survival characteristic. Learning generally makes dopamine positively drip from our eyeballs.

So what’s the problem with applying the principle in education?

None at all, until you hit something that you do not wish to learn, that is too difficult to master right now, that is too boring, that has no obvious rewards in and of itself. The correct response to this problem is, ideally, to find what there is to love in it. Good teachers can help with that a lot, inspiring, revealing, supporting, demonstrating, and discussing. Other learners can make a huge difference too, supporting, modelling behaviours, filling gaps, and so on. We very often learn things for other people, with other people, or because of other people. Educational systems offer a good substrate for that.

If intrinsic motivation fails to move us, then at least the motivation should be self-determined. Figure 2 shows a very successful and well-validated model of motivation (from Ryan and Deci) that, amongst other things, usefully describes differing degrees of extrinsic motivation (external, introjected, identified, and integrated) which, towards the right of the diagram, come increasingly close to intrinsic motivation in value (‘external regulation’ is rather different, of which more soon). When intrinsic motivation fails, what we need is some kind of internal regulation to push us onwards. It is not a bad idea to find some internally regulated reason that aligns with your beliefs about yourself and your goals, or that at least fits with some purpose or goal that you find valuable. It’s sometimes useful to develop a bit of ‘grit’: to be able to do something that you don’t love doing in order to be able to do things that you do love doing, and to find reasons for learning stuff that are meaningful and fit with your personal values, even if the immediately presenting activity is not fun in itself. Again, teachers and other people can help a lot with that, by showing ways that they are doing so themselves, by providing support, by engaging, or by being the reason that we do something in the first place. It’s all very social, at its heart.


Figure 2: Forms of motivation

That social element is important, and it is not clearly represented in the diagram, despite being a critical aspect of intrinsic motivation and mattering a lot for the ‘higher’ identified forms of extrinsic motivation. From an evolutionary perspective, I suspect this ability to learn because of the presence of others accounts for our species’ apparent dominance of its ecosystems. We are not particularly clever as independent individuals but, collectively, we are mighty smart. This could not be the case without an innate inclination to value, and to gain value from, other people, one consequence of which is that others very materially contribute to our motivation to do things. I guess I should mention that ‘innate’ does not mean ‘pre-programmed’: this is almost certainly an emergent phenomenon. But it is a big part of who we are.

Grade addicts

So far so good. Educational systems are, at least in principle, very effective ways of bringing people together. It all goes horribly wrong, however, when the educators’ response to amotivation (or, worse, to motivation to avoid) is to change the rules by throwing in extrinsic rewards and punishments, like grades, or by applying other controls to the process, like forced attendance. Externally regulated extrinsic motivation is extremely dangerous.

Extrinsic rewards and punishments do work, in the sense that they coerce people and other animals into behaving as the giver of the rewards or punishments wishes them to behave. And yes, dopamine is implicated. This immediate effectiveness is what makes them so alluring. But it’s like giving an athlete performance-enhancing but ultimately harmful drugs. Rewards and punishments are also highly addictive and, like other addictions, you need more and more to sustain your addiction because you become inured to the effects, and withdrawal gets more painful the longer you are addicted. This works two ways. Those that get the rewards (the good grades, gold stars, praise, whatever) go on to want more of them, and will do what they need to get them, whether or not there are any further benefits (like, say, learning). Cheating is one popular way to do this. Tactical study, where the student tries to do what will get good grades rather than learn for the love of it, is another. But grading, though extrinsically motivating for the most part, is not always effective: bad grades can achieve the opposite effect, like drugs spiked with something horrible. Those that get grades as punishments often try to avoid them by whatever means they can: dropping out and cheating (a way to bypass the system to get hold of the good stuff) are popular solutions.

The biggest problems, however, come when you take the rewards and punishments away. As a vast body of research has shown and continues to show, this diminishes intrinsic motivation and often eliminates it altogether. If people are not much inclined to do something, you can temporarily boost their interest by adding extrinsic rewards or punishments but, when you take those away, people are considerably less inclined to do the thing than they were before you started, even when they originally liked doing it. At a high level this can be explained by the fact that, in giving a reward or punishment, you are drawing attention away from (crowding out) the thing itself and, at the same time, sending a strong signal that the activity is not rewarding enough in itself to be worth doing. But I am not sure that this fully explains the very strong negative effects on motivation that we actually see when rewards or punishments are withdrawn. I idly speculate that part of the reason might be the dopamine crash. We come to associate an activity with a dopamine boost and, when that boost is no longer forthcoming, it can be very disappointing, like smoking a nicotine-free cigarette (trust me – that’s awful). Cold turkey is not the best state to be in, especially when you associate it with an activity like learning something: it could really put you off a subject. This is just a thought: I know of no evidence that it is true, but it seems a plausible hypothesis that would be worth testing.

Whatever the cause, the effects are terrible. By extrinsically driving our students, we kill the love of the activity itself for those that might have loved it, and permanently prevent those that might have later found it valuable from ever wanting to do it again. Remarkably few survive unscathed, and a disproportionate number of those that do go on to become teachers, and so the cycle continues. I don’t think this is how education should be, and I don’t think it is what most of us in the system intend from it.

Getting out of the loop

The only really effective way to ensure lifelong interest and an ongoing love of learning is to find the reward in the activity itself, not in an extrinsic reward. The games and social applications described in this article do that very well, but it is important to remember that their designers' intent is to increase addiction to the applications themselves in order to sell or promote the product, and that there is perfect alignment between the reward and the activity: it is built into the rule system. In an education system driven by marks, we are making grades (not learning) the product, and making those the source of the addiction. This is very different. It has nothing to do with the activity of learning itself: it is extrinsic to the process. It might be even more effective to give our students addictive drugs (higher concentrations equating to higher grades) to increase the incentive. I'm surprised no one has tried this.

But, seriously, what we really need to be doing is to make learning the addiction.

We can reduce the harm to an extent by removing grades from the teaching process and focusing on useful feedback and encouragement instead. If forced to judge, we can use pass/fail grades, which are still harmful but not quite as controlling. If we are inexplicably drawn to grading, then we can build systems similar to the 'likes' and badges of social media where, instead of rewards, we give awards – in other words, we remove the expectation of a grade but, where merit is found, sometimes show our approval – and we can make that a social process, so that it is not dominated by a teacher and therefore does not involve the exercise of arbitrary power. We can use pedagogies that give teachers and students the chance to model and demonstrate their passion and interest. We can encourage students to reflect on why they are doing it, ideally in a shared way so they can gain inspiration from others. We can help students to integrate their work with other things that matter to them. We can help them personalize their own learning so that it is appropriately challenging, neither too dull nor too hard, and so that it matches the goals they set for themselves. We can help them to set those goals, and help them to figure out how to attain them. We can make them participants in the grading process, picking outcomes and assessments that match their interests and needs. We can build communities that support and nourish learning through sharing and mutual support. This is just a small sample: there are really quite a few things that we can do, even within a broken system, to make learning addictive – to find ways to make it rewarding in and of itself, even when there is little initial interest to build upon. But we are still stuck in a system that treats grades as rewards, so we are still faced with a furious current pushing against all of our efforts.

Really, we need to change the system, though just a bit: our current educational systems have evolved for pragmatic reasons, mainly because alternatives are too expensive or inconvenient for teachers to manage, not because they are any good for learners. One consequence is that it is almost impossible to run an institutional course or program without at least some form of grading, even if only at pass/fail level, even if only at the end.

An obvious big part of the solution is to decouple learning and grading. Some more advanced competency-based approaches already do that, as do challenge assessments and the assessment of prior experience and learning, to some extent project/essay/thesis paths, outcomes-based programs, and even some kinds of professional exams (the latter not in a good way, for the most part, because they tend to drive the process). However, there is a risk that universities might turn into an up-market version of driving schools, teaching how to pass the tests much as they are doing now, rather than enabling the more expansive learning that they should. To avoid that, it is critical that learners are involved in determining their own outcomes, not having those outcomes 'personalized' for them – personal, not personalized, as Alfie Kohn puts it and as Stephen Downes agrees. Grades that learners control, for activities that they choose to undertake, are many times better than grades that someone else imposes.

It would also be a good idea to split teaching activities either into assemblable chunks or into open narratives, without alignment to specific awards or qualifications. Students might build competences from smaller pieces – often from different sources – in order to seek a specific award, or might gain more than one award from a single learning narrative (or perhaps from a couple that overlap). It would be a very good idea to provide ways to mentor learners and help them seek appropriate paths, perhaps through personal tuition, and/or automated help, and/or membership of supportive communities (I am a fan of action learning sets for this kind of thing). Such mechanisms might also assist in the preparation of portfolios of evidence, an obvious way to manage the formal assessment process. I am not in any way suggesting that we educators (especially of adult learners) should give up our accreditation role, merely that we should stop using it to drive our teaching and to enforce compliance in our students.

I think that such relatively small tweaks to how we teach and assess could have massive benefits further upstream. In one fell swoop, they would change the focus of educational systems from grades to learning, and change the reward structure from extrinsic to intrinsic. Instead of building fixed-length courses with measurable outcomes that we the teachers control, we could create ecosystems for learning, where cooperation and collaboration have greater value than competition, where learners are really part of a club, not a cohort, and where teachers are perceived as enablers of learning, not as causes, and certainly not as judges. The words 'learner-centred' have been much over-used, often as shorthand for 'a friendlier way of making students comply with our demands' or 'helping students to get better grades', but I think they fairly accurately denote what this sort of system would entail when taken seriously. Some of my friends and colleagues prefer 'learning-centred', and that works for me too. But really this is about being more human and more humane. It is about breaking the machines that determine what we do and how we do it, and focusing instead on what we – collectively and individually – want to be. We can do this by thinking carefully about what motivates people, as opposed to attempting to motivate them. As soon as our attitude is one of 'how can we make our students do this?' rather than 'how can we help our students to do this?' we have failed. It is easy to create addicts of extrinsic motivation. It is hard to make addicts of learning. But, sometimes, the hard way is the right way.

 

Address of the bookmark: http://www.cbc.ca/news/technology/marketplace-phones-1.4384876

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2816324/addicted-to-learning-or-addicted-to-grades

Signal: now with proper desktop apps

Signal is arguably the most open, and certainly the most secure, privacy-preserving instant messaging/video and voice-calling system available today. It is open source, ad-free, standards-based, simple, and very well designed. Though not filled with bells and whistles, for most purposes it is a far better alternative to Facebook-owned WhatsApp or other near-competitors like Viber, FaceTime, and Skype, especially if you have any concerns about your privacy. Like all such systems, it is subject to Metcalfe's Law: its value increases with every new user added to the network. It is still at the low end of the uptake curve, but you can help to change that – get it now and tell your friends!
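A back-of-the-envelope illustration of why that network effect matters (a minimal sketch of the usual crude model behind Metcalfe's Law, nothing to do with Signal's own code): if every user can in principle talk to every other, a network of n users has n(n−1)/2 potential connections, so its value grows roughly with the square of its size.

```python
# Metcalfe's Law in miniature: a network of n users has n(n-1)/2
# potential pairwise connections, so its (crudely modelled) value
# grows roughly with the square of the network's size.

def potential_connections(n: int) -> int:
    """Distinct user-to-user links possible among n users."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>4} users -> {potential_connections(n):>6} possible connections")
# Prints 45, 4950, and 499500 respectively: each tenfold increase in
# users yields roughly a hundredfold increase in possible connections.
```

Which is why every friend you persuade to install it makes the network disproportionately more useful.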

Like most others of its ilk, it hooks into your cellphone number rather than a user name but, once you have installed it on your smartphone, you can associate that number (via a simple 2D barcode) with a desktop client. Until recently it only supported desktop machines via a Chrome browser (or equivalent – I used Vivaldi), but the new desktop clients are standalone, so you don't have to grind your system to a halt or share data with Google to install them. The desktop client is still a bit limited when it comes to audio (simple messaging only) and there still appears to be no video support (which is available in the smartphone clients), but this is good progress.
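For the curious, that barcode-based pairing is conceptually simple. The sketch below is purely illustrative – it is not Signal's actual protocol (which wraps this step in an end-to-end encrypted key exchange), and the URI scheme and token handling are invented for the example – but it shows the general shape of linking a device by showing it a one-time secret to scan:

```python
# A toy sketch of 2D-barcode device linking. NOT Signal's protocol:
# the real thing adds an encrypted key exchange on top. One device
# generates a one-time pairing token and displays it as a QR code;
# the other device scans it to claim the association.
import secrets

import qrcode  # third-party: pip install "qrcode[pil]"

# Hypothetical one-time token that the scanning device would present
# back to the service to prove it was shown this barcode.
pairing_token = secrets.token_urlsafe(32)

# The URI scheme here is made up for illustration.
img = qrcode.make(f"example-app://link-device?token={pairing_token}")
img.save("pairing.png")  # display this for the other device to scan
```

Because the secret only ever travels via the camera, pairing proves the two devices were physically together, which is what makes the scheme both simple and reasonably safe.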

Address of the bookmark: https://signal.org/download/

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2813683/signal-now-with-proper-desktop-apps

The Ghost in the Machines of Loving Grace | Library Babel Fish

An article from Barbara Fister about the role and biases of large providers like Google and Facebook in curating, sorting, and filtering content, usefully contrasted with academic librarians' closely related but importantly different roles. Unlike a library, such systems (and especially Facebook) have no incentive to provide things that are in the interest of the public good. As Fister writes:

“The thing is, Facebook literally can’t afford to be an arbiter. It profits from falsehoods and hype. Social media feeds on clicks, and scandalous, controversial, emotionally-charged, and polarizing information is good for clicks. Things that are short are more valuable than things that are long. Things that reinforce a person’s world view are worth more than those that don’t fit so neatly and might be passed over. Too much cruft will hurt the brand, but too little isn’t good, either. The more we segment ourselves into distinct groups through our clicks, the easier it is to sell advertising. And that’s what it’s about.”

These are not new points but they are well stated and well situated. I particularly like the point that lies and falsehoods are not a reason to censor a resource in and of themselves. We need the ugliness in order to better understand and value the beauty, and we need the whole story, not filtered parts of it that suit the criteria of some arbitrary arbiter. As Fister writes:

“There’s a level of trust there, that our students can and will approach a debate with genuine curiosity and integrity. There’s also a level of healthy distrust. We don’t believe it’s wise to leave decisions about truth and falsehood up to librarians.”

Indeed. She also has good things to say about personalization:

“If libraries were as personalized, you would wave your library card at the door and enter a different library than the next person who arrives. We’d quickly tidy away the books you haven’t shown interest in before; we’d do everything we could to provide material that confirms what you already believe. That doesn’t seem a good way to learn or grow. It seems dishonest.”

Exactly so. She does, though, tell us how librarians do influence things, and there is only a fine and fuzzy (but significant) line between this and the personalization she rejects:

“Newer works on the topic will be shelved nearby that will problematize the questionable work and put it in context.”

I'm not sure that there is much difference in kind between this approach to influencing students and the targeted ads of Google or Facebook. However, there is a world of difference in the intent. What the librarian does is about sense-making, and it accords well with one of the key principles I described in my first book: providing signposts, not fenceposts. To give people control, they must first actually have choices, but they also need to know why those choices are worth making. Organizing relevant works together on the shelf helps students to make informed choices, scaffolding the research process by showing alternative perspectives. Offering relevant ads, though it might be dishonestly couched in terms of helping people to find the products they want, is not about helping them with what they want to do, but about exploiting them to encourage them to do what you want them to do, for your benefit, not theirs. That's all the difference in the world.

That difference in intent is one of the biggest differentiators between a system like the Landing and a general-purpose public social media site, and it is one big reason why it could never make sense for us to replace the Landing with, say, a Facebook group (a suggestion that still gets aired from time to time, on the utterly mistaken assumption that the two duplicate each other's functionality). The Landing is a learning commons: a network of people who, whatever they might be doing here, share an intent to learn, and where people are valued for what they bring to one another, not for what they bring to the owners and shareholders of the company that runs the site. Quite apart from other issues around ownership, privacy, and functionality, that's a pretty good reason to keep it.

 

Address of the bookmark: https://www.insidehighered.com/blogs/library-babel-fish/ghost-machines-loving-grace