The neediness of soft technologies

This site, The Landing, is a bit like a building. The more people who enter that building, the more valuable it becomes. The real value and substance of the site is not the building itself but what goes on, and what can go on, inside it.

If it doesn’t provide useful rooms and other spaces that fit the needs of the people within, or if the people inside cannot find the rooms they are looking for, then it needs to be improved – better signposts, easier hallways, stairways and elevators, bigger doors, different room layouts. This matters, and it’s certainly a big part of what influences behaviour: we shape our buildings and afterwards our buildings shape us, as Churchill put it. However, like nearly all social technologies, the Landing is a soft technology, where many of the structures are created not by architects and designers but by the inhabitants of the space. Far more than in almost any physical building, it is the people, the stuff they share and the ways they share it that make it what it is. They are the ones who decide the conventions, rules, methods, procedures, interlinked tools and so on that overlay the basic edifice and turn it into whatever they need or want it to be.

Soft technologies are functionally incomplete. They are needy, by definition lacking some of the necessary parts of the technological assembly that makes them useful. They can become many different technologies by aggregation or integration with other technologies, including not only physical and software tools but also, more significantly, methods, norms, processes and patterns that are entirely embodied in human minds.

Hard technologies are those that are more complete, less needy. The more they do what they do without needing to be aggregated with other technologies, the harder they become. All technologies, soft or hard, play some part in bigger systems and almost all, if not all, rely on those systems not only for meaning but also for their existence and continued functionality – for power and maintenance, for example, or, in the case of non-corporeal technologies like laws, pedagogies and management processes, for embodiment. However, harder technologies play far more limited, fixed roles in those systems than softer ones. A factory tooled to produce milk bottles probably does that really well, consistently and fast but, without significant retooling and reorganisation, is not going to produce glass ornaments or thermometers. A metal tube and a furnace need the methods and processes employed by the glass blower to turn raw materials into anything at all but, because there are few limits to those methods and processes, which can be adjusted and adapted almost continuously, they can be used in many different ways to create many different things. The needier a technology, the more ways there are to fulfil those needs and, consequently, the more creative and rich the potential outcomes may be.

A microchip is a very needy technology. Assembled with others, it can become needier still: a computer, for example, is the very personification of neediness, doing nothing and being nothing until we add software to make it into almost anything we want it to be – the universal machine. Conversely, in a watch or a cash register or an automated call-answering system, the same chip becomes part of something more complete, something that does what it does and nothing more – it needs nothing further: the personification of hardness.

Although automation is a typical feature of harder technologies, its effect depends entirely on what is being automated and how it is done. Henry Ford’s classic production line turned out a lot of similar things, all of them black: it was archetypally hard, a system needing little else to make it complete. Its automation largely replaced softer technologies that had needed human skill and decision-making to complete them. Email, on the other hand, an archetypal soft technology, actually gained softness from the automation of (for instance) MIME handling of rich-media enclosures. What was once the preserve of technically savvy nerds with a firm grasp of uuencoding tools became open to all when a standard for rich-media handling automated a formerly manual (and very soft) process. This was possible because the automation was aggregated with the existing technology rather than replacing it. The original technology lost absolutely none of its initial softness in the process but instead gained new potential for different ways of being used – photo journals, audio broadcasts, rich scheduling tools and so on. Neediness and automation are not mutually exclusive when that automation augments but does not replace softer processes. Such automation adds new affordances without taking any existing affordances away.
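To make the MIME point concrete, here is a minimal sketch (mine, not from the original post) using Python’s standard email library. Everything that early emailers once did by hand with uuencode – encoding the binary data, setting headers, drawing boundaries – is automated away, yet a plain-text message still works exactly as it always did:

```python
# A minimal sketch of automated rich-media handling, using only the
# standard library. The addresses and "image" bytes are placeholders.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"      # hypothetical addresses
msg["To"] = "recipient@example.com"
msg["Subject"] = "Photo journal entry"
msg.set_content("Plain-text body for clients that cannot show the image.")

photo = b"<jpeg bytes would go here>"   # stand-in for real image data
msg.add_attachment(photo, maintype="image", subtype="jpeg",
                   filename="photo.jpg")

# Base64 transfer encoding and multipart boundaries were handled for us:
print(msg["Content-Type"])              # multipart/mixed; boundary="..."
```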

Twitter is a nice example of an incredibly soft social technology that has become yet softer through automation. Twitter is soft because it can be many different things: it is very malleable, very assemblable with other technologies, very evolvable and very connectable (both in and out). A big part of what makes it brilliant is that it does one small trick, like a stick or a screwdriver or a wheel, and, like those technologies, it needs other technologies, soft or hard, to make it complete. Twitter’s evolution demonstrates well how soft technologies are functionally needy. For instance, hashtags to classify subject matter into sets, and the use of @ symbols to refer to people in nets, were not part of its original design. They started as soft technologies – conventions used by tweeters to turn it into a more useful technology for their particular needs, adding new functionality by inventing processes and methods that they themselves aggregated with the tool. To begin with, these were very prone to error, and using them was a manual and not altogether trivial process. What happened next is really interesting: the makers of Twitter hardened these technologies, making them function within the Twitter system itself, and function well, with efficiency and freedom from error – classic hallmarks of a hard technology. But, far from making Twitter more brittle or harder, this automation of soft technologies actually softened it further. It became softer because Twitter was adding to the assembly, not replacing any part of it, and these additions opened up their own new and interesting adjacent possibilities (mining social nets and recommending and exploring tags, for example). Crucially, the parts that were hardened took absolutely nothing away from what it could do previously: users of Twitter could completely ignore the new functionality if they wished, without suffering at all.
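The hardening itself is simple to picture. The following toy sketch (my own illustration, nothing to do with Twitter’s actual code) shows roughly what it means to turn a typed convention into something the system can rely on – and note that a tweet containing neither symbol still passes through untouched, which is the point about nothing being taken away:

```python
# A toy sketch of hardening the hashtag and @mention conventions:
# what users once typed and parsed by eye becomes something the
# system can extract reliably and build new features upon.
import re

HASHTAG = re.compile(r"#(\w+)")
MENTION = re.compile(r"@(\w+)")

def parse_tweet(text: str) -> dict:
    """Extract the once-soft conventions so they can be linked and mined."""
    return {"hashtags": HASHTAG.findall(text),
            "mentions": MENTION.findall(text)}

print(parse_tweet("Discussing #softtech with @terry today"))
# {'hashtags': ['softtech'], 'mentions': ['terry']}
print(parse_tweet("No tags here at all"))
# {'hashtags': [], 'mentions': []}
```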

So, back to the Landing. The Landing is a simple toolset with a set of affordances, a needy technology that by itself does almost nothing apart from letting people share, network and communicate. By itself, it is hopeless for almost anything more complex than that, but those capacities make it capable of being part of an effectively infinite variety of harder and softer technologies. Only in assembly with social, managerial, pedagogical and other processes does it move closer to – or, if that’s what people want, further from – completeness. And we, its architects, can help soften the system further by adding new tools that augment but do not replace the things it already does, thereby making it needier still, increasing its functional incompleteness by adding new incomplete functions.

It’s a funny goal: to intentionally build systems that, as they grow in size and complexity, lack more and more. Systems that actually become less complete the more complete we try to make them. It reminds me a little of fractal figures which, as we zoom in to look at them in greater detail, turn out to be infinitely empty as well as infinitely full. 

 

Innovations in learning and teaching

Recently I received an email asking me to identify, with almost no constraints, some examples of innovative teaching and learning practices in universities. Gosh, that’s a tricky one. I don’t think I can provide a sensible answer, for several reasons:

 

  • I’m aware of no teachers (including learning designers, mentors, tutors, coordinators, professors, etc.) who have *not* innovated in teaching, and of few who do not do so as a matter of course. There are differing degrees of innovation, naturally, but to teach is to learn, and it is necessarily a creative process. I don’t see how it would be possible to teach without innovating. The innovations might not be very astounding or very good, of course.
  • Maybe it depends on the scale at which you look. If I made no significant innovations in every course that I write, and maybe in every lesson or activity I design, then I think I would give up now. It could be as small a thing as finding a new way to express an old problem or using a trick from elsewhere in a new setting, or as big a thing as a whole new way of conducting the process. It’s all innovation.
  • Innovation in learning is trickier still to pin down, which reflects an important issue: there are many teaching activities that fail to lead to effective learning, and even more learning that involves nothing much like teaching. The use of paper mills for contract cheating and hint sites for exam cheating is pretty innovative sometimes.
  • And then there’s the issue of innovation vs invention – in many universities it is undeniably innovative to use an LMS or to get rid of exams, while many have dissed such things as prehistoric dinosaurs that are not fit for purpose for over a decade (for the LMS) and over 200 years (for the exam) – in each case, since about the time it was invented, in fact.
  • Similarly, the kinds of innovation that would matter somewhere like Athabasca would not be the same as for a conventional campus-based university – approaches to self-paced learning, for instance, would have little applicability elsewhere. 
  • Much of this relates to the fact that innovation is very context-sensitive. For some contexts, simply using a different tone of voice might be a major innovation. In others, one might have to try harder.
  • This also relates back to the re-invention problem: much of what we still identify as innovative was suggested by Dewey a hundred years ago. 
  • Is an innovation in making more reliable summative assessments an example of an innovation in learning and teaching? Or a means to improve the efficiency of student script processing using OCR or LSA tools? Or a citation management tool? I’m not sure. It depends on context.
  • What about MOOCs? The teaching is often from within a university but the learning is not.

 

An innovation, by and large, is a novel application of an existing idea in a different setting. It’s not about inventing something never seen before but about doing something in a context where it has not been tried previously. This comes back to the adjacent possible and some stronger variants of technological determinism. Once some technologies and systems are in place, it is inevitable that other things will follow. In some cases, this is obvious and indisputable: for instance, a combination of LMS availability and an institutional mandate to use it means that simply using it is not an innovation – you may innovate in the ways you use it, but not in merely using it. In other cases, the effect is subtler but no less compelling. For example, we have long known that dialogue can be a very powerful tool for learning but, for those involved in distance education, the opportunities to use it used to be, for the most part, expensive and impractical. When large-scale, ubiquitous, cheap and simple communication became available, it was not innovative to use it – it would be totally bizarre not to use it, in fact, a sign of idiocy or extreme complacency. There may be some details about the implementation, and about adapting cost-effectively to specific technologies, that could be described as innovative, but the imperative to use the tools for learning in the first place is as compelling as the institutional edict: it’s too obvious to be described as an innovation, unless we describe everything we do as an innovation. Which, of course, in some ways it probably is.

So – does anyone have any ideas for answers to the question? At a large scale, I’m thinking that some of the more interesting innovations of the last couple of decades might include the following (bearing in mind that these are not new inventions and there are lots of uninnovative ways to go about them):

  • Google search and Wikipedia: the two most successful online learning tools ever created, I think. Everyone who has ever used them to learn has probably found innovative ways to learn as a result. In terms of impact, these two tools (and their ilk) are having a greater transformative effect on learning in universities and elsewhere than anything since the invention of the printing press. They are the thin end of the wedge that will, eventually, completely transform formal education.
  • e-portfolios: nothing new in concept, but the associated pedagogies, benefits of electronic aggregation, supporting tools and processes mean they seem to be gaining a lot of traction the world over and are a darn good learner-centred idea whose time has come.
  • action learning: an old-ish idea (at least early 90s, probably before) but one of the few truly andragogic pedagogies that has achieved some transformative effects where it has been used.
  • MOOCs: connectivist approaches, openness, large scaling, lack of coercion to learn, and a genuinely different approach to semi-formal learning make these and their cousins still pretty innovative. Not all of it is good innovation. They probably only benefit a small proportion of the participants – or, more accurately, those who really do participate probably gain a lot more than those who participate less – but the use of emergence, crowds, distributed networks, reified connections and so on shows what I believe to be the right direction to be heading, even if the pedagogies, supporting infrastructures, formal processes for recognition and tools are not quite there yet.

I could probably think of hundreds of smaller innovations, ways of using pedagogies and other tools differently, new tools, new processes, new combinations. But that’s just the problem – it’s really hard for me to see the wood for the trees.

 

 

Google to Launch Major New Social Network Called Circles, Possibly Today (Updated)

It sounds like Google is heading in the same direction that we are heading on the Landing, offering different ways of interacting with different people. This is a necessary step in the evolution of social software. It will be interesting to discover whether they are also thinking of personal as well as social contexts: not only do we present different facets of ourselves to different people at different times (and to the same people at different times – a much trickier problem) but we also adopt very different roles at different times in our personal lives. I think differently, need different things, talk to different people and read different things depending on what I am doing and what I mean to do.

That’s the idea behind the poorly named ‘context switcher’ that is being developed at Athabasca – to let us adopt different personas at different times and in different contexts, both for other people and for our own personal purposes. I just wish we had a better name that made the meaning more obvious. ‘Circles’ is pretty good in a social context but less meaningful in a personal one, so I would reject that. Lately I’ve been thinking that ‘facets’ captures the meaning better (it is about different facets of ourselves, whether for our own benefit or the benefit of others), but ‘facets’ is (like context switching) maybe a little technical. It works well for me and anyone else who has ever read Ranganathan, but maybe lacks popular appeal.
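For what it’s worth, the underlying idea can be sketched in a few lines. This is purely hypothetical – the names and structure are mine, not those of the actual context switcher under development – but it shows the kind of data model the metaphor implies: one person, several facets, each scoping audience and activity:

```python
# A purely hypothetical sketch of the facets idea; not the actual
# Athabasca implementation, just the shape of data the metaphor implies.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Facet:
    name: str                                         # e.g. "teacher", "family"
    audience: set[str] = field(default_factory=set)   # who may see this facet
    feeds: list[str] = field(default_factory=list)    # what I read in this role

@dataclass
class User:
    username: str
    facets: dict[str, Facet] = field(default_factory=dict)
    active: str | None = None                         # the facet switched in

    def switch(self, name: str) -> Facet:
        """Change persona: what I see and what I show change with it."""
        self.active = name
        return self.facets[name]

me = User("jon")
me.facets["teacher"] = Facet("teacher", audience={"students"},
                             feeds=["course news"])
me.facets["researcher"] = Facet("researcher", audience={"colleagues"})
print(me.switch("teacher").audience)   # {'students'}
```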

Any and all ideas appreciated!

Address of the bookmark: http://www.readwriteweb.com/archives/google_to_launch_major_new_social_network_called_c.php?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+readwriteweb+%28ReadWriteWeb%29

How The New York Times Is Incorporating Social & Algorithmic Recommendations

An interesting report on the use of various forms of recommendation at the NYT. The article suggests a likely division into human-edited, friend-recommended and algorithmically recommended stories that neatly captures what Terry Anderson and I have been discussing in terms of groups, networks and collectives. The transition from hierarchical group (the editor decides) to network (your friends suggest) to collective (sets are mined for crowd opinions) mirrors the traditional classroom, the network and the collective intelligence of some Web systems in online learning.
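To make the three-way split concrete, here is a toy illustration (mine, not drawn from the article): the same pool of stories can be recommended by editorial fiat (group), by friends (network), or by mining co-reading patterns in the crowd (collective):

```python
# A toy illustration of the three recommendation sources; the stories,
# readers and histories are invented for the example.
from collections import Counter
from itertools import combinations

editor_picks = ["budget analysis", "election guide"]   # group: editor decides
friend_shares = {"ana": ["quake report"]}              # network: friends suggest

# Collective: mine anonymous reading histories for "read together" pairs.
histories = [
    ["quake report", "aid appeal"],
    ["quake report", "aid appeal", "budget analysis"],
    ["election guide", "budget analysis"],
]
pairs = Counter()
for h in histories:
    pairs.update(combinations(sorted(set(h)), 2))

def collective_recs(story: str, n: int = 2) -> list[str]:
    """Stories most often read alongside the given one."""
    scored = [(count, a if b == story else b)
              for (a, b), count in pairs.items() if story in (a, b)]
    return [s for _, s in sorted(scored, reverse=True)[:n]]

print(collective_recs("quake report"))   # ['aid appeal', 'budget analysis']
```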

Address of the bookmark: http://mashable.com/2011/03/10/new-york-times-recommendations-2/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Mashable+%28Mashable%29

Technologies and learning

I’ve spent far more time than is healthy over the past few years thinking about technology, learning and education, and how they fit with each other. I was interested to read this recent meta-meta-study on the effects of computers in education, but it really tells us nothing we did not already know (though it has some good insights into why it is tricky and what may be needed).

The trouble with a focus on a tool, especially something like a computer that is a potentially infinite number of tools, is that it tells us practically nothing of value about the learning technology. All education, bar none, is technology-enhanced learning and all, bar none, involves tools – minimally, cognitive/social tools like pedagogies that are assumed to lead to learning (and, usually, tools to assess that it has happened), organisational tools to support bringing people together, clocks to assist that process, spaces constructed not to hinder it too much, not to mention the ultimate toolset, language itself. That’s just a small part of the list, of course. Most education, especially in a formal context, involves dozens, even hundreds or thousands, of tools, assembled into technologies which may themselves be part of technological assemblies, in order for it to happen.

The issue is not whether a technology like language (say) is used, but how it is used. And that is what no meta-study that focuses on a single set of tools will ever tell us in any useful way. For that matter, it seldom comes out properly in the original studies themselves. You might just as stupidly ask what effect chalk has on learning. Used well, in conjunction with other tools like blackboards, classroom seating arrangements, intelligent pedagogies and a caring teacher, it can have a hugely beneficial effect and, without it (assuming other co-occurring variables like the presence of a blackboard and a pedagogy that requires it), things can go terribly wrong. Defining a technology must include thinking about what it uses and what it is being used for – otherwise we are just talking about objects that are of no interest or value. So we should be looking at the technological assemblies that we use and how they work together, of which specific tools are a necessary but not even close to sufficient component.

We are not going to show anything valuable about computers per se because they are universal tools, media and environments: because of that flexibility, they can be used to improve learning. They let us do pretty much anything we want if we can program and use them effectively. If tools can improve learning, and computers can be pretty much any tool, then of course they can be of phenomenal value. That’s just basic logic. It would be stupid to suggest otherwise. It’s not even worth asking the question. We might ask reasonable questions about the economics of using them, about access or health issues and so on, yet it is as certain as night follows day that computers can help people to learn. But how? Now, that is a really good (and less well-answered) research question, one which strikes at the heart of what all education is about.

Bearing that in mind, I have been wondering of late about the differences between social interactions online and face to face. Some differences appear to be obvious, even in the most immersive of online communication systems: the lack of important cues like scent, touch and peripheral vision, limitations on hearing background noises, limitations in the rendition of video (even in 3D at high resolution), the fact that no commitment to meet in one place has been made (and therefore there is no continuation beyond the communication event itself), the fact that each participant exists in an environment where they are differently distracted, and so on. But, of course, such things may occur in face-to-face environments too. People have disabilities that limit shared sensations; if I sit opposite you at a table, my distractions are different from yours (I once failed an interview at least partly because I alone was facing a window over the sea and thought I could see whales playing in the waves, but that’s another story); and my commitment to go to a class down the hall may be very different from yours to come from a poorly connected village 50 miles away. In most respects, the most mundane of face-to-face meetings contain situations analogous to those we experience routinely online and, though the scale of the effects, the ubiquity of the problems and the ease of dealing with them may vary, we still have to face them.

I’d be really interested to hear of any research that has looked into such constraints in a face-to-face setting without the intervention of computers – differences caused by seating arrangements, differences caused by being at the front of the class or the back, the effects of a teacher with body odour issues, the effects of distance travelled to class on commitment, and so on. Does anyone know of such studies? I’ve read a few here and there but have not looked too carefully at the literature. I’m guessing some work must have been done on this, especially with regard to the effects of disabilities. My suspicion is that these familiar and commonplace problems might tell us some useful things about how to bridge the transactional distance gap in online systems.

 

How Facebook Is Killing Your Authenticity

Yet another article bemoaning the uni-dimensionality of Facebook identity – something we have been banging on about for a really long time. I guess there are two potential outcomes for this groundswell of revolt:

  1. A mass (and probably slow) move away from Facebook to a federated and more or less loosely joined set of identities and/or the kind of context-switching functionality we are working on (still) for the Landing.
  2. Facebook waking up to the problem and doing more about it than adding some group functionality.

I think both are clearly happening, but I fear the second option might be more likely to succeed in the short term than the first. Facebook’s developers are very smart and I’m certain they have been working hard on the problem for some while. But the last thing the Web needs is centralised control. We need to own our multiple identities and to be free to adopt innovative solutions. Unfortunately, reliance on a central provider reduces our capacity to manage multiple identities (it’s not a technical limitation, but Metcalfe’s and Reed’s laws ensure that alternatives have a geometrically dwindling chance of success) and constrains innovation in exactly the place it is needed most right now.
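The arithmetic behind that parenthesis is worth spelling out. Under Metcalfe’s law a network’s value grows roughly with the square of its users (the number of possible pairwise connections); under Reed’s law, with the number of possible subgroups, which grows exponentially. Either way, a challenger with a fraction of the users has a far smaller fraction of the value, as this little sketch (my illustration, with invented numbers) shows:

```python
# Quick arithmetic behind the dwindling-chances claim; the user counts
# are invented for illustration.
def metcalfe(n: int) -> int:
    """Metcalfe's law: value scales with possible pairwise connections."""
    return n * (n - 1) // 2

incumbent, challenger = 1_000_000, 100_000   # hypothetical user bases
value_ratio = metcalfe(challenger) / metcalfe(incumbent)
print(f"user ratio: {challenger / incumbent:.2f}")   # 0.10
print(f"value ratio: {value_ratio:.4f}")             # ~0.0100
# Under Reed's law (value ~ 2**n possible subgroups), the gap between
# incumbent and challenger is astronomically larger still.
```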

Address of the bookmark: http://www.businessinsider.com/how-facebook-is-killing-your-authenticity-2011-3

Woz to educators: “be brave, use the new technology”

Steve Wozniak in great inspirational form discussing a very straightforward, pragmatic and obvious approach to education which is hard to argue with. It is, as he observes, very very far from the norm.

The interview ends with a few comments on the much-maligned Apple Newton, which make me wonder a bit: the idea was to make the computer do the work for you, but one of the more memorable things about the Newton was its high failure rate in interpreting what was written on it. This is a big risk in hardening technologies – the more the computer does for you and the fewer decisions you need to make, the more control the programmer has over your life. This is particularly bad when the programmer fails but, even when the program works as it should, we need to be acutely aware of how our work is being shaped by the design of the system. I think a big difference between the Newton and the iPad (which he also mentions) is that the iPad gives much greater control to the end user – not at the level of an individual app but in the wide range of apps that may be selected. The problem becomes one of finding the right app rather than battling with the machine, which is, of course, still quite a big problem. But it is a problem that is soluble by ordinary mortals, not just programmers. And that is a big difference.

Address of the bookmark: http://arstechnica.com/apple/news/2011/03/woz-to-educators-be-brave-use-the-new-technology.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss

learner-teaching-learning analytics

I’ve been having some interesting discussions in Banff this week with folks interested in ‘learning analytics’. I put it in quotes because I’m not convinced that it is a) a distinct field or b) one thing.

Ignoring issues of massive overlaps and shared values with other fields (such as data mining, collaborative filtering, adaptive hypermedia, natural language processing, learning design and evaluation, and so on), which make it hard to distinguish at times, it seems to me that there are at least three subfields:

learner analytics: used by admins, policy makers, governments and so on to see what learners are doing with a view to taking some action at a pragmatic or policy level as a result. May also be used by teachers to monitor and understand learners and their needs. Rarely, but potentially, of use to learners.

teaching analytics: looking at the success or otherwise of teaching interventions – courses, assessments, teaching acts, content construction, learning design, etc, with a view to changing the teaching process to make it better. Pretty much exclusively the domain of those involved in the teaching process like teachers and instructional designers.

learning analytics: looking at how people are learning, including construction of artefacts, interactions with others, progression, etc, with a view to taking direct action to improve it, usually (but by no means necessarily) by and for the learner.
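One way to see that these really are different things is to notice that they could all be asked of the same data. Here is a toy sketch (entirely my own invention, with a made-up clickstream) of how the three subfields interrogate one event log differently:

```python
# A toy illustration of the three subfields asking different questions
# of the same event log. Learners, courses and events are invented.
from collections import Counter

# Hypothetical clickstream: (learner, course, event)
log = [
    ("ana", "COMP266", "submitted"), ("ana", "COMP266", "viewed"),
    ("ben", "COMP266", "viewed"),    ("ben", "COMP650", "dropped"),
    ("cai", "COMP650", "submitted"),
]

# Learner analytics: who looks at risk? (admin/policy view)
at_risk = {learner for learner, _, event in log if event == "dropped"}

# Teaching analytics: how is each course intervention faring? (teacher view)
per_course = Counter((course, event) for _, course, event in log)

# Learning analytics: what have I actually done? (learner view)
def my_progress(learner: str) -> list[str]:
    return [f"{course}: {event}" for who, course, event in log if who == learner]

print(at_risk)                            # {'ben'}
print(per_course[("COMP266", "viewed")])  # 2
print(my_progress("ana"))                 # ['COMP266: submitted', 'COMP266: viewed']
```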

I care about learning analytics and see great practical value in teaching analytics. Analysing learning and teaching is almost entirely about helping people to learn and, while it may be done poorly, the intentions are almost all aimed at making learners’ lives better. Analysing learners involves some murkier areas: it may have many motivations, including potentially risky ones like implementing efficiencies, targeting marketing and allocating resources, as well as clearly good things like identifying under-represented groups or at-risk learners. I suspect that it may become the most popular analytics domain in education but, because of the dangers, it demands more serious cross-disciplinary and ethically well-considered research than the others.