History flow in Wikipedia edits

This is fascinating, not only in providing a bit of information about the resilience of Wikipedia pages to vandalism but also in graphically illustrating the flow to the adjacent possible, ever-growing and branching, that makes the system not just a repository of knowledge but a means through which it grows.

Address of the bookmark: http://www.research.ibm.com/visual/projects/history_flow/results.htm

Group dynamics key to avoiding tragedy of the commons

Interesting report on an article suggesting that the tragedy of the commons may be avoidable more easily at smaller scales. Obvious but interesting, this suggests to me that the principle of parcellation that helps drive differentiation in evolution has some ubiquitous underpinnings – it works in much the same way whether you consider competition or cooperation.

Address of the bookmark: http://arstechnica.com/science/news/2011/06/group-dynamics-key-to-avoiding-tragedy-of-the-commons.ars?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+arstechnica%2Findex+%28Ars+Technica+-+Featured+Content%29

Privacy on Social Networks: American, Chinese, and Indian Perspectives – IEEE Spectrum

Shared with me by a student (self-referentially anonymised for public broadcast but thanks, you know who you are!), this is a brief article about a research paper comparing attitudes to privacy and trust on social network sites in different countries – specifically, India, China and the US.

Some of the differences are to be expected – telemarketers and poor privacy laws probably account for a little of the greater reluctance to share private information in the US, for example, and the article mentions use of fake identities in China due to fear of restrictive governments. But there are some big disparities that may point to more profound cultural differences. It would be interesting to explore the extent to which this is determined by surrounding culture and context, and how much by the path dependencies of the popular social network sites in these countries. For instance, Facebook itself, with its famously callous and exploitative regard for user privacy, might have a lot to do with the apparently greater concerns for privacy in the US. Yang Wang’s site has more interesting research in this area at http://www.cs.cmu.edu/~yangwan1/projects.html#sns-privacy

Address of the bookmark: http://spectrum.ieee.org/tech-talk/telecom/internet/privacy-on-social-networks-american-chinese-and-indian-perspectives/?utm_source=techalert&utm_medium=email&utm_campaign=052611

Driving with chickens: the pedagogy/technology thing again

Chickens and eggs

I’ve been involved in a conversation recently in which a participant described a problem as one of chickens and eggs – should we look at new technologies and adapt teaching methods to use them, or should we look at how people learn and design technologies to help that process? The latter is the conventional perspective on learning technology design, which I think is fundamentally wrong. I gave a pat answer about the technology/pedagogy dance, adjacent possibles and the need to consider it from a systemic perspective, not as a question of priorities. Thinking about it some more, I have a metaphor that may help to explain my position better.

Main premise: most intentional learning has a destination in mind – learning outcomes, competences, objectives and so on. At least, it has a direction or trajectory.

Getting around without teachers

Some destinations are easy to reach – if it is right in front of you then you can usually walk and don’t really need much assistance.

Further destinations can be reached if there are signposts, more easily if there are roads. Wikipedia and a good old Google search are not bad for this, when the destinations are nearby. 

Some are too far to walk – then we need further assistance. 

Learning technologies

We can more easily reach further destinations if we have a vehicle. Courses, programs of instruction, textbooks and computer-aided instruction are not bad for this. Now we are seriously entering the world of learning technologies.

For efficiency, that vehicle may have many seats, though it will not necessarily take the most direct route (if it needs to pick up or drop off passengers along the way) nor may it drop us precisely where we want to go. That’s the nature of most courses, textbooks and programs.

A vehicle needs a driver. Let’s, for the sake of argument, think of that driver as being the teacher: it’s an ugly metaphor but it quite accurately describes certain pedagogical approaches – especially when travelling in a bus. Of course, the learner can be the driver, or the learner’s friend, or they can take it in turns…teachers don’t have to be formally recognised and certified. Taxis are great, but expensive. One day we might have effective automated versions of taxis, but for now I would not risk my life in one of them.

There’s no point in having a driver if there is no vehicle to drive. The vehicle must have requisite parts like wheels, brakes, an engine, spark plugs (for a petrol-powered vehicle) and so on, must fit on the road, and that needs an infrastructure of garages and so on to support it. Generally speaking, there must be a road, it should be well-maintained and lead in roughly the right direction – at least, in combination, the roads should lead to the destination, with as little discomfort and delay as possible.  We have technologies like classrooms, whiteboards, learning management systems and so on that fulfil that role.

(As an aside, some destinations have poorly made-up roads leading to them and some have no roads at all – then we might need good guides with a fair amount of experience of navigating similar territory. Most PhDs follow that kind of pattern. A parallel set of learning technologies and a pretty interesting one is needed here.)

The driver needs to follow some rules – it’s generally bad to make arbitrary decisions about which side of the road to drive on and how one treats traffic lights, for instance. That doesn’t negate the need for creativity in driving, but it has to follow some constraints. Again, these are part of the technology for learning – softer than the mechanical constraints, but still technologies – and often provided by institutions, regulations, patterns, templates and so on.

Subject matter and pedagogies

The driver needs to navigate effectively – to reach the destination as pleasantly or as fast as possible. Personally, I’m usually a fan of making journeys interesting, but sometimes speed matters more. Knowing the way is not dissimilar to knowing the subject matter, tempered with some other considerations not least of which is knowing what landmarks to point out along the way, which roads to avoid, what the passengers might like to see. That leads us to pedagogy.

The driver needs skills and principles to apply, from simple stuff like not pressing the brake and accelerator at the same time and knowing how to steer, to more complex heuristics about what distance to keep from others in wet weather, how to read a map, or even how to double-declutch. This is actually still part of the technology – repeatable patterns and processes that we can communicate with others and that help in the orchestration of phenomena to some purpose. This, combined with a bit of the navigation stuff, is close to the notion of pedagogy.

 

Transports of delight

And there are good drivers and bad drivers – we can all learn to follow the rules, more or less, but great drivers go beyond that and all drivers, to a greater or lesser extent, adapt the rules, change behaviours, respond differently in novel situations. And that’s what being a great teacher is really about. A great driver can break a lot of rules and still get us there faster, more comfortably, more safely, even in a rickety vehicle, than a bad one who follows all the rules.

Necessary, not sufficient conditions

It makes no sense to talk about some things coming before others on this journey, or being more important. To an extent that varies with context, all are necessary; none are sufficient. Wheels are as important as engines are as important as drivers, in the sense that the absence of any would stop us getting to our destination. Sure, we can work around the loss of some things: we might manage without seats, or roads, or signposts, or some traffic regulations, or a good driver, but the journey would be a lot riskier, probably longer (or would feel longer) and less comfortable without them. What matters is that everything should work together and play its part in getting where we want to go, how we want to get there.

Chickens? Eggs? Whatever

So, there’s no chicken and egg problem here. The issue is one of designing a system with parts that play nicely with each other, including pedagogies, subject matter knowledge, digital systems, social systems, organisational systems and a whole lot more.

I suspect there’s fun to be had with this metaphor – what about trains, planes, horses and carts, roller skates, cycles, etc? Who are the traffic police? What about car co-ops? Who decides whether a vehicle is roadworthy? But that’s maybe for another time.

Disgruntlement against the machine

I am feeling rather grumpy and sleep-deprived today thanks to a classic example of hard technology.

I have an unfortunate tendency to travel between continents, and I have credit cards on each continent, so I have grown used to being disturbed from time to time at odd hours of the night by people checking for fraud and card theft. It’s irritating and usually stupid but I’m quite glad, on balance, that they are paying attention for those odd occasions when it really matters.

It has always been a pretty hard system, with card company employees following hard procedures when alerted by (typically very dumbly-) automated systems that suggest unusual card use patterns. The questions to ascertain your identity can be taxing. Trying to remember the names of nearby streets to your home or the birthdates of relatives when you are jet lagged and have been awoken at 3 in the morning is never fun and I’m guessing the employees might have received a fair amount of abuse, not to mention odd answers in the past. Well, now they don’t. Now, it is fully automated, involving a lot of pressing of buttons in response to irritating and slow questions. No human being is involved in the process, thereby eliminating the last bit of softness in what was already a very hard system. Computers will tirelessly call you every few minutes in the middle of the night, leaving messages that start in the middle because they cannot figure out that they are talking to voice mail, until you respond.

The central principle for making this process hard is not just automation, but replacement. If this were an additional process to extend the current labour-intensive system then it would actually, in some ways, make the whole system softer. But it’s not: what used to be partly human is now wholly machine. It also employs other classic hard technology features of filtering and limiting: choices are reduced to digital answers, traversing a decision tree that (in this case) appears to have been designed by a three-year-old and which allows no grey answers.

Soft system design is very different. Soft systems have built-in flexibility to adapt. When they do automate they extend, aggregating automation with what is already there, not replacing it. They suggest and recommend but do not enforce actions. They allow shades of grey. In a soft system version of the fraud detection system, you could break out from the machine at any moment to talk to a person: in fact, it would be the first option offered. Maybe you could even ask for a call that did not disturb you in the middle of the night, especially if (as is usually the case) you probably know why they are calling you so could reduce the alert level straight away by saying ‘yes, I am abroad’ or ‘yes, I did buy a plane ticket today because yes, I am abroad’ or ‘yes, like many times in the past, I bought a plane ticket from a place where I have very often bought plane tickets to travel from a location I usually travel from to a location I usually travel to and, if your stupid fraud detection algorithms had paid attention to the easily discernible fact that I had checked my online account for sufficient funds a few minutes previously and had then entered the correct code in your commendable online fraud protection system at the time of purchase, and that you probably noticed that it happened in a different timezone to your own so it might be a bit inconsiderate of you to call me at 3:30am, 3:40am, 3:50am, 4:00am and 4:10am, then we would not be having this stupid conversation right now, you buffle-headed buffoon. I spit on your tiny head and curse you and all your family.’ Or words to that effect. Yes, soft systems can be hard.
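The contrast between the two designs can be sketched in code. This is a purely hypothetical illustration – all the function names and options here are invented, not drawn from any real call-centre system – but it captures the structural difference: the hard flow is a closed decision tree, while the soft flow adds options around the automation without removing it.

```python
# Hypothetical sketch of a hard vs a soft fraud-alert call flow.
# Every name here is invented for illustration.

def hard_flow(answers):
    """Hard system: a fixed decision tree; only the scripted options exist."""
    # The caller must traverse the tree; no escape hatch, no shades of grey.
    if answers.get('recognise_transaction') is True:
        return 'alert cleared'
    if answers.get('recognise_transaction') is False:
        return 'card blocked'
    return 'repeat question'  # anything unscripted just loops

def soft_flow(answers, human_available=True):
    """Soft system: automation augments the human process, not replaces it."""
    if answers.get('wants_human') and human_available:
        return 'transfer to person'      # the first option offered, not the last
    if answers.get('defer_call'):
        return 'schedule callback'       # the caller can reshape the process
    return hard_flow(answers)            # the automation is still there if wanted
```

Note that `soft_flow` contains `hard_flow` whole: the automation is aggregated, not substituted, so nothing the hard system could do is lost.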

The neediness of soft technologies

This site, The Landing, is a bit like a building. The more people that enter that building, the more valuable it becomes. The real value and substance of the site is not the building itself but what goes on and what can go on inside it.

If it doesn’t provide useful rooms and other spaces that fit the needs of the people within, or if the people inside cannot find the rooms they are looking for, then it needs to be improved – better signposts, easier halls, stairways and elevators, bigger doors, different room layouts. This matters and it’s certainly a big part of what influences behaviour: we shape our buildings and afterwards our buildings shape us, as Churchill put it. However, like nearly all social technologies, the Landing is a soft technology, where many of the structures are not created by architects and designers but by the inhabitants of the space. Far more than in almost any physical building, it is the people, the stuff they share and the ways they share it that make it what it is. They are the ones that decide conventions, rules, methods, procedures, interlinked tools and so on that overlay the basic edifice to turn it into whatever they need it or want it to be.

Soft technologies are functionally incomplete. They are needy, by definition lacking parts of the technological assembly needed to make them useful. They can become many different technologies by aggregation or integration with other technologies, including not only physical/software tools but also and more significantly methods, norms, processes and patterns that are entirely embodied in human minds.

Hard technologies are those that are more complete, less needy. The more they do what they do without the need to aggregate them with different technologies, the harder they become. All technologies, soft or hard, will play some part in bigger systems and almost all if not all will rely on those systems for not only meaning but also their existence and continued functionality – for example, power, maintenance or, in the case of non-corporeal technologies like laws, pedagogies and management processes, embodiment. However, harder technologies play far more limited, fixed roles in those systems than softer ones. A factory tooled to produce milk bottles probably does that really well, consistently, and fast but, without significant retooling and reorganisation, is not going to produce glass ornaments or thermometers. A metal tube and furnace need the methods and processes employed by the glass blower to turn raw materials into anything at all but, because those methods and processes have few limits and can be adjusted and adapted almost continuously, they can be used in many different ways to create many different things. The needier a technology, the more ways there are to fulfil those needs and consequently the more creative and rich the potential outcomes may be.

A microchip is a very needy technology. Assembled with others, it can become still needier: a computer, for example, is the very personification of neediness, doing nothing and being nothing until we add software to make it be almost anything we want it to be – the universal machine. Conversely, in a watch or a cash register or an automated call answering system it becomes part of something more complete, that does what it does and nothing more – it needs nothing and does what it does: the personification of hardness.

Although automation is a typical feature of harder technologies, it depends entirely on what is being automated and how it is done. Henry Ford’s classic production line turned out a lot of similar things, all of them black: it was archetypally hard, a system needing little else intrinsic to the system to make it complete. Automation largely replaced the need for technologies needing skill and decision making to make them complete. Email, on the other hand, an archetypal soft technology, actually gained softness from automation of (for instance) MIME handling of rich-media enclosures. What was the preserve of technically savvy nerds with a firm grasp of uuencoding tools became open to all with a standard for rich-media handling that automated a formerly manual (and very soft) process. This was possible because automation was aggregated with the existing technology rather than replacing it. The original technology lost absolutely none of its initial softness in the process but instead gained new potential for different ways of being used – photo journals, audio broadcasts, rich scheduling tools and so on. Neediness and automation are not mutually exclusive when that automation augments but does not replace softer processes. Such automation adds new affordances without taking any existing affordances away.

Twitter is a nice example of an incredibly soft social technology that has become yet softer through automation. Twitter is soft because it can be many different things: it is very malleable, very assemblable with other technologies, very evolvable and very connectable (both in and out). A big part of what makes it brilliant is that it does one small trick, like a stick or a screwdriver or a wheel and, like those technologies, it needs other technologies, soft or hard, to make it complete. Twitter’s evolution demonstrates well how soft technologies are functionally needy. For instance, hashtags to classify subject matter into sets, and the use of @ symbols to refer to people in networks, were not part of its original design. They started as soft technologies – conventions used by tweeters to turn it into a more useful technology for their particular needs, adding new functionality by inventing processes and methods that were aggregated by them with the tool itself. To begin with they were very prone to error and using them was a manual and not altogether trivial process. What happened next is really interesting – the makers of Twitter hardened these technologies and made them function within the Twitter system, and to function well, with efficiency and freedom from error – classic hallmarks of a hard technology. But, far from making Twitter more brittle or harder, this automation of soft technologies actually softened it further. It became softer because Twitter was adding to the assembly, not replacing any part of it, and these additions opened up their own new and interesting adjacent possibilities (mining social nets, recommending and exploring tags, for example). Crucially, the parts that were hardened took absolutely nothing away from what it could do previously: users of Twitter could completely ignore the new functionality if they wished, without suffering at all.
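The hardening of those conventions amounts to something like the following sketch: a pattern matcher that recognises the hashtag and @mention conventions automatically, so users no longer have to apply them by hand. This is an illustrative simplification, not Twitter’s actual parsing rules (which are considerably more elaborate), and the sample tweet and names in it are invented.

```python
import re

# Simplified, illustrative patterns for the two user-invented conventions.
# Real Twitter's tokenisation handles many more edge cases.
HASHTAG = re.compile(r'(?<!\w)#(\w+)')   # '#' not preceded by a word character
MENTION = re.compile(r'(?<!\w)@(\w+)')   # '@' not preceded by a word character

def extract_conventions(tweet):
    """Pull hashtags and mentions out of a tweet, leaving the text untouched."""
    return {
        'hashtags': HASHTAG.findall(tweet),
        'mentions': MENTION.findall(tweet),
    }

print(extract_conventions("Loving the #edtech chat with @alice about #softtech"))
# → {'hashtags': ['edtech', 'softtech'], 'mentions': ['alice']}
```

The point of the sketch is the aggregation: a tweet with no hashtags or mentions passes through entirely unaffected, just as users who ignored the hardened features lost nothing.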

So, back to the Landing. The Landing is a simple toolset with a set of affordances, a needy technology that by itself does almost nothing apart from letting people share, network and communicate. By itself, it is hopeless for almost anything more complex than that, but those capacities make it capable of being a part of a literally infinite possible variety of harder and softer technologies. Only in assembly with social, managerial, pedagogical and other processes does it become closer to or, if that’s what people want, further from completeness. And we, its architects, can help soften the system further by adding new tools that augment but do not replace the things it already does, thereby making it needier still, increasing its functional incompleteness by adding new incomplete functions.

It’s a funny goal: to intentionally build systems that, as they grow in size and complexity, lack more and more. Systems that actually become less complete the more complete we try to make them. It reminds me a little of fractal figures which, as we zoom in to look at them in greater detail, turn out to be infinitely empty as well as infinitely full.