Oh hell. I really liked Skype. Damn. Time to move to Google Talk.
Address of the bookmark: http://www.cbc.ca/news/business/story/2011/05/10/microsoft-skype-sale.html?ref=rss
Quite interesting lecture notes on an alternative future for online learning. Nothing new, a bit gung-ho about the potential and not a lot on the risks, but nice to see people are thinking about this and consolidating ideas in this area.
Address of the bookmark: http://halfanhour.blogspot.com/2011/05/crowdsourcing-future-of-elearning.html
I am feeling rather grumpy and sleep-deprived today thanks to a classic example of hard technology.
I have an unfortunate tendency to travel between continents and have credit cards on each continent so have grown used to being disturbed from time to time at odd hours of the night by people checking for fraud and card-theft. It’s irritating and usually stupid but I’m quite glad, on balance, that they are paying attention for those odd occasions when it really matters.
It has always been a pretty hard system, with card company employees following rigid procedures when alerted by (typically very dumb) automated systems that flag unusual card-use patterns. The questions to ascertain your identity can be taxing. Trying to remember the names of streets near your home or the birthdates of relatives when you are jet-lagged and have been woken at 3 in the morning is never fun, and I’m guessing the employees have received a fair amount of abuse, not to mention odd answers, in the past. Well, now they don’t. Now it is fully automated, involving a lot of pressing of buttons in response to irritating and slow questions. No human being is involved in the process, thereby eliminating the last bit of softness in what was already a very hard system. Computers will tirelessly call you every few minutes in the middle of the night, leaving messages that start in the middle because they cannot figure out that they are talking to voicemail, until you respond.
The central principle for making this process hard is not just automation, but replacement. If this were an additional process to extend the current labour-intensive system then it would actually, in some ways, make the whole system softer. But it’s not: what used to be partly human is now wholly machine. It also employs other classic hard technology features of filtering and limiting: choices are reduced to digital answers, traversing a decision tree that (in this case) appears to have been designed by a three-year-old and which allows no grey answers.
Soft system design is very different. Soft systems have built-in flexibility to adapt. When they do automate they extend, aggregating automation with what is already there, not replacing it. They suggest and recommend but do not enforce actions. They allow shades of grey. In a soft system version of the fraud detection system, you could break out from the machine at any moment to talk to a person: in fact, it would be the first option offered. Maybe you could even ask for a call that did not disturb you in the middle of the night, especially if (as is usually the case) you probably know why they are calling you so could reduce the alert level straight away by saying ‘yes, I am abroad’ or ‘yes, I did buy a plane ticket today because yes, I am abroad’ or ‘yes, like many times in the past, I bought a plane ticket from a place where I have very often bought plane tickets to travel from a location I usually travel from to a location I usually travel to and, if your stupid fraud detection algorithms had paid attention to the easily discernible fact that I had checked my online account for sufficient funds a few minutes previously and had then entered the correct code in your commendable online fraud protection system at the time of purchase, and that you probably noticed that it happened in a different timezone to your own so it might be a bit inconsiderate of you to call me at 3:30am, 3:40am, 3:50am, 4:00am and 4:10am, then we would not be having this stupid conversation right now, you buffle-headed buffoon. I spit on your tiny head and curse you and all your family.’ Or words to that effect. Yes, soft systems can be hard.
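The contrast between the two designs can be sketched in a few lines of code. This is a hypothetical toy, not any real card company’s system: the hard flow forces every answer through a fixed decision tree and rejects anything else, while the soft flow keeps exactly the same automation but accepts ‘agent’ (or any grey answer) at every step, routing to a human instead of failing – augmentation rather than replacement.

```python
# Hypothetical fraud-check decision tree: node -> (prompt, allowed answers).
# Any name not in the tree is a terminal outcome.
HARD_TREE = {
    "start": ("Press 1 if you made this purchase, 2 if you did not.",
              {"1": "close_alert", "2": "verify"}),
    "verify": ("Press 1 if your card is in your possession, 2 if not.",
               {"1": "block_card", "2": "block_card_and_reissue"}),
}

def hard_flow(answers):
    """Traverse the fixed tree; anything outside the allowed answers fails."""
    node = "start"
    for answer in answers:
        _, branches = HARD_TREE[node]
        if answer not in branches:
            raise ValueError("Answer not recognised. Goodbye.")  # no grey allowed
        node = branches[answer]
        if node not in HARD_TREE:  # reached a leaf: a final outcome
            return node
    return node

def soft_flow(answers):
    """Same automation, plus an escape hatch to a person at every step."""
    node = "start"
    for answer in answers:
        if answer == "agent":          # break out to a human at any moment
            return "transfer_to_human"
        _, branches = HARD_TREE[node]
        if answer not in branches:     # grey answers also go to a person
            return "transfer_to_human"
        node = branches[answer]
        if node not in HARD_TREE:
            return node
    return node

print(hard_flow(["1"]))           # close_alert
print(soft_flow(["2", "agent"]))  # transfer_to_human
```

The two functions share the same tree: softening here costs nothing in automation, it only adds exits.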
This site, The Landing, is a bit like a building. The more people that enter that building, the more valuable it becomes. The real value and substance of the site is not the building itself but what goes on and what can go on inside it.
If it doesn’t provide useful rooms and other spaces that fit the needs of the people within, or if the people inside cannot find the rooms they are looking for, then it needs to be improved – better signposts, easier halls, stairways and elevators, bigger doors, different room layouts. This matters and it’s certainly a big part of what influences behaviour: we shape our dwellings and afterwards our dwellings shape our lives, as Churchill put it. However, like nearly all social technologies, the Landing is a soft technology, where many of the structures are not created by architects and designers but by the inhabitants of the space. Far more than in almost any physical building, it is the people, the stuff they share and the ways they share it that make it what it is. They are the ones that decide conventions, rules, methods, procedures, interlinked tools and so on that overlay on the basic edifice to turn it into whatever they need it or want it to be.
Soft technologies are functionally incomplete. They are needy: by definition they lack some necessary part of the technological assembly that would make them useful. They can become many different technologies by aggregation or integration with other technologies, including not only physical/software tools but also, more significantly, methods, norms, processes and patterns that are entirely embodied in human minds.
Hard technologies are those that are more complete, less needy. The more they do what they do without the need to aggregate them with different technologies, the harder they become. All technologies, soft or hard, play some part in bigger systems and almost all, if not all, rely on those systems not only for meaning but also for their existence and continued functionality – for example, power, maintenance or, in the case of non-corporeal technologies like laws, pedagogies and management processes, embodiment. However, harder technologies play far more limited, fixed roles in those systems than softer ones. A factory tooled to produce milk bottles probably does that really well, consistently and fast but, without significant retooling and reorganisation, is not going to produce glass ornaments or thermometers. A metal tube and furnace need the methods and processes employed by the glass blower to turn raw materials into anything at all but, because there are few limits to those methods and processes, which can be adjusted and adapted almost continuously, they can be used in many different ways to create many different things. The needier a technology, the more ways there are to fulfil those needs and, consequently, the more creative and rich the potential outcomes may be.
A microchip is a very needy technology. Assembled with others, it can become still needier: a computer, for example, is the very personification of neediness, doing nothing and being nothing until we add software to make it be almost anything we want it to be – the universal machine. Conversely, in a watch or a cash register or an automated call-answering system it becomes part of something more complete, that does what it does and nothing more: the personification of hardness.
Although automation is a typical feature of harder technologies, it depends entirely on what is being automated and how it is done. Henry Ford’s classic production line turned out a lot of similar things, all of them black: it was archetypally hard, a system needing little else to make it complete. Its automation largely replaced technologies that needed skill and decision-making to complete them. Email, on the other hand, an archetypal soft technology, actually gained softness from automation of (for instance) MIME handling of rich-media enclosures. What was the preserve of technically savvy nerds with a firm grasp of uuencoding tools became open to all, with standards for rich-media handling that automated a formerly manual (and very soft) process. This was possible because the automation was aggregated with the existing technology rather than replacing it. The original technology lost absolutely none of its initial softness in the process but instead gained new potential for different ways of being used – photo journals, audio broadcasts, rich scheduling tools and so on. Neediness and automation are not mutually exclusive when that automation augments but does not replace softer processes. Such automation adds new affordances without taking any existing affordances away.
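The MIME example is easy to see in working code today. Where a sender once ran uuencode by hand and pasted the result into the message body, a standard library now chooses the content type, the base64 transfer encoding and the multipart structure automatically. A minimal sketch using Python’s standard email library (the addresses and the fake image bytes are, of course, made up for illustration):

```python
from email.message import EmailMessage

# Build an ordinary message with a rich-media attachment.
msg = EmailMessage()
msg["From"] = "alice@example.com"     # hypothetical addresses
msg["To"] = "bob@example.com"
msg["Subject"] = "Holiday photo"
msg.set_content("Photo attached.")

# The library handles the MIME structure and base64 encoding for us --
# the formerly manual, very soft, uuencode-by-hand process, now automated.
msg.add_attachment(b"\x89PNG fake image bytes", maintype="image",
                   subtype="png", filename="photo.png")

print(msg.get_content_type())  # multipart/mixed
```

Nothing about the underlying email technology was removed to make this possible; plain-text mail still works exactly as before, which is the point.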
Twitter is a nice example of an incredibly soft social technology that has become yet softer through automation. Twitter is soft because it can be many different things: it is very malleable, very assemblable with other technologies, very evolvable and very connectable (both in and out). A big part of what makes it brilliant is that it does one small trick, like a stick or a screwdriver or a wheel and, like those technologies, it needs other technologies, soft or hard, to make it complete. Twitter’s evolution demonstrates well how soft technologies are functionally needy. For instance, hashtags to classify subject matter into sets, and the use of @ symbols to refer to people in nets, were not part of its original design. They started as soft technologies – conventions used by tweeters to turn it into a more useful technology for their particular needs, adding new functionality by inventing processes and methods that were aggregated by them with the tool itself. To begin with they were very prone to error and using them was a manual and not altogether trivial process. What happened next is really interesting – the makers of Twitter hardened these technologies and made them function within the Twitter system, and to function well, with efficiency and freedom from error – classic hallmarks of a hard technology. But, far from making Twitter more brittle or harder, this automation of soft technologies actually softened it further. It became softer because Twitter was adding to the assembly, not replacing any part of it, and these additions opened up their own new and interesting adjacent possibilities (mining social nets, recommending and exploring tags, for example). Crucially, the parts that were hardened took absolutely nothing away from what it could do previously: users of Twitter could completely ignore the new functionality if they wished, without suffering at all.
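The hardening of those conventions is, at heart, a tiny parsing job. A sketch of the idea (the tweet text and the user handle are invented for illustration, and a real parser is stricter about what counts as a tag or a mention):

```python
import re

def extract_tags_and_mentions(tweet):
    """Harden the #hashtag and @mention conventions: parse them reliably
    instead of leaving readers to spot them by eye."""
    hashtags = re.findall(r"#(\w+)", tweet)
    mentions = re.findall(r"@(\w+)", tweet)
    return hashtags, mentions

tags, users = extract_tags_and_mentions(
    "Reading @example_user on soft technology #edtech #elearning")
print(tags)   # ['edtech', 'elearning']
print(users)  # ['example_user']
```

A tweeter who never types # or @ is entirely unaffected by this machinery, which is exactly the property described above: the hardened part takes nothing away.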
So, back to the Landing. The Landing is a simple toolset with a set of affordances, a needy technology that by itself does almost nothing apart from letting people share, network and communicate. By itself, it is hopeless for almost anything more complex than that, but those capacities make it capable of being a part of a literally infinite possible variety of harder and softer technologies. Only in assembly with social, managerial, pedagogical and other processes does it become closer to or, if that’s what people want, further from completeness. And we, its architects, can help soften the system further by adding new tools that augment but do not replace the things it already does, thereby making it needier still, increasing its functional incompleteness by adding new incomplete functions.
It’s a funny goal: to intentionally build systems that, as they grow in size and complexity, lack more and more. Systems that actually become less complete the more complete we try to make them. It reminds me a little of fractal figures which, as we zoom in to look at them in greater detail, turn out to be infinitely empty as well as infinitely full.
This is brilliant. Please can we redesign our educational system now? Pretty please?
Address of the bookmark: http://chronicle.com/article/The-Shadow-Scholar/125329/
Interesting article about large scale deployment of iPads to all faculty and students. Not many conclusions but some good justifications and anecdotal comments.
Address of the bookmark: http://www.educause.edu/EDUCAUSE+Review/EDUCAUSEReviewMagazineVolume46/iMobilePerspectivesOniPadsibrW/226163
Recently I received an email asking me to identify, with almost no constraints, some examples of innovative teaching and learning practices in universities. Gosh, that’s a tricky one. I don’t think I can provide a sensible answer, for several reasons:
An innovation, by and large, is a novel application of an existing idea in a different setting. It’s not about inventing something never seen before, but about doing something in a context where it has not been tried previously. This comes back to the adjacent possible and some stronger variants on technological determinism. Once some technologies and systems are in place it is inevitable that other things will follow. In some cases, this is obvious and indisputable: for instance, a combination of LMS availability and a mandate to use it by an institution means that simply using it is not an innovation – you may innovate in the ways you use it, but not simply in using it. In other cases, the effect is subtler but no less compelling. For example, we have long known that dialogue can be a very powerful tool for learning but, for those involved in distance education, the opportunities to use it used to be expensive and impractical, for the most part. When large-scale, ubiquitous, cheap and simple communication became available it was not innovative to use it – it would be totally bizarre not to use it, in fact, a sign of idiocy or extreme complacency. There may be some details about the implementation and adapting cost-effectively to specific technologies that could be described as innovative, but the imperative to use the tools in the first place for learning is as compelling as the institutional edict: it’s too obvious to be described as an innovation, unless we describe everything we do as an innovation. Which, of course, in some ways it probably is.
So – does anyone have any ideas for answers to the question? At a large scale I’m thinking that some of the more interesting innovations of the last couple of decades might include (bearing in mind these are not new inventions and there are lots of uninnovative ways to go about them):
I could probably think of hundreds of smaller innovations, ways of using pedagogies and other tools differently, new tools, new processes, new combinations. But that’s just the problem – it’s really hard for me to see the wood for the trees.
It sounds like Google is heading in the same direction that we are heading on the Landing, offering different ways of interacting with different people. This is necessary in the evolution of social software. It will be interesting to discover whether they are also thinking of personal as well as social contexts – not only do we present different facets of ourselves to different people at different times (and the same people at different times – a much trickier problem) but we also adopt very different roles at different times in our personal lives. I think differently, need different things, talk to different people and read different things depending on what I am doing and what I mean to do.
That’s the idea behind the poorly named ‘context switcher’ that is being developed at Athabasca – to adopt different personas at different times and in different contexts, both for other people and for our own personal purposes. I just wish we had a better name that made the meaning more obvious. ‘Circles’ is pretty good in a social context but less meaningful in a personal context, so I would reject that. Lately I’ve been thinking that ‘facets’ captures the meaning better (it is about different facets of ourselves, whether for our own benefit or the benefit of others) but ‘facets’ is (like context switching) maybe a little technical. It works well for me and anyone else who has ever read Ranganathan, but maybe lacks popular appeal.
Any and all ideas appreciated!
Address of the bookmark: http://www.readwriteweb.com/archives/google_to_launch_major_new_social_network_called_c.php?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+readwriteweb+%28ReadWriteWeb%29
Interesting report about use of various forms of recommendations in the NYT. The article suggests a likely division into human-edited, friend-recommended and algorithmically recommended stories that neatly captures what Terry Anderson and I have been discussing in terms of groups, networks and collectives. The transition from hierarchical group (the editor decides) to network (your friends suggest) to collective (sets are mined for crowd opinions) mirrors the traditional classroom, the network and the collective intelligence of some Web systems in online learning.
Address of the bookmark: http://mashable.com/2011/03/10/new-york-times-recommendations-2/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Mashable+%28Mashable%29
I’ve spent far more time than is healthy over the past few years thinking about technology, learning and education, and how they fit with each other. I was interested to read this recent meta-meta-study on the effects of computers in education, but it really tells us nothing we did not already know (though it has some good insights into why it is tricky and what may be needed).
The trouble with a focus on a tool, especially something like a computer that is a potentially infinite number of tools, is that it tells us practically nothing at all of value about the learning technology. All education, bar none, is technology-enhanced learning and all, bar none, involves tools – minimally, cognitive/social tools like pedagogies that are assumed to lead to learning (and, usually, tools to assess that it has happened), organisational tools to support bringing people together, clocks to assist that process, spaces constructed to not hinder it too much, not to mention the ultimate toolset, language itself. That’s just a small part of the list, of course. Most education, especially in a formal context, involves dozens or even hundreds or thousands of tools, assembled into technologies which may themselves be part of technological assemblies, in order for it to happen. The issue is not whether a technology like language (say) is used, but how it is used. And that’s what no metastudy that focuses on a single set of tools will ever tell us in any useful way. For that matter, it seldom comes out properly in the original studies themselves. You might just as stupidly ask what effect chalk has on learning. Used well, in conjunction with other tools like blackboards, classroom seating arrangements, intelligent pedagogies and a caring teacher, it can have a hugely beneficial effect and, without it (assuming other co-occurring variables like the presence of a blackboard and a pedagogy that requires it), things can go terribly wrong. Defining a technology must include thinking about what it uses and what it is being used for – otherwise it is just talking about objects that are of no interest or value. So, we should be looking at the technological assemblies that we use and how they work together, of which specific tools are a necessary but not even close to sufficient component.
We are not going to show anything valuable about computers per se because they are universal tools, media and environments: because of that flexibility, they can be used to improve learning. They let us do pretty much anything that we want if we can program and use them effectively. If tools can improve learning, and computers can be pretty much any tool, then of course they can be of phenomenal value. That’s just basic logic. It would be stupid to suggest otherwise. It’s not even worth asking the question. We might ask reasonable questions about the economics of using them, access or health issues and so on, yet it is as certain as night follows day that computers can help people to learn. But how? Now, that is a really good (and less well-answered) research question which actually strikes at the heart of what all education is about.
Bearing that in mind, I have been wondering of late about the differences between social interactions online and face to face. Some differences appear obvious, even in the most immersive of online communication systems – the lack of important cues like scent, touch and peripheral vision, limitations on hearing background noises, limitations in the rendition of video (even in 3D at high resolution), the fact that no commitment to meet in one place has been made (and therefore no continuation beyond the communication event itself), the fact that each participant exists in an environment where they are differently distracted, and so on. But, of course, such things may occur in face to face environments too. People have disabilities that limit shared sensations; if I sit opposite you at a table, my distractions are different from yours (I once failed an interview at least partly because I alone was facing a window over the sea and thought I could see whales playing in the waves, but that’s another story); and my commitment to go to a class down the hall may be very different from yours to come from a poorly connected village 50 miles away. In most respects, the most mundane of face to face meetings present situations analogous to those we experience routinely in online scenarios and, though the scale of the effects, the ease of dealing with the problems and their ubiquity may all vary, we still have to face them.
I’d be really interested to hear of any research that has looked into such constraints in a face to face setting without the intervention of computers – differences caused by seating arrangements, differences caused by being at the front of the class or the back, the effects of a teacher with body odour issues, the effects of distance traveled to class on commitment, and so on. Does anyone know of such studies? I’ve read a few here and there but not looked too carefully at the literature. I’m guessing some work must have been done on this, especially with regard to the effects of disabilities. My suspicion is that such easy and commonplace problems might tell us some useful things about how to fill the transactional distance gap in online systems.