The Web Index

An interesting set of statistics about access to the Web, along with many other metrics relating to use, availability and freedom on the Web, ranking nearly every country on various scales. Canada makes a mediocre showing at 15th overall, with a disappointing nearly-80% having access, and it falls well short of perfect on most other metrics too. The recent CBC report that ranks Canada 53rd in the world on upload speeds is also sobering. Like all such statistics, these need to be looked at critically and considered in context, but they are nonetheless a good starting point for discussion. See http://www.webfoundation.org/projects/the-web-index/ for more on the project and how the figures are calculated.

Address of the bookmark: http://thewebindex.org/data/all/scores/

Media Multitasking Behavior: Concurrent Television and Computer Usage

This study looks at multitasking behaviour, measured by the amount and frequency of attention paid to a computer screen and a TV. It is interesting, if flawed, at least partly because of the differences it claims to show between multitasking behaviour in older and younger people. The researchers claim to show that there is not much age-related difference in overall time spent looking at things when multitasking, but that younger people’s gaze tends to flit much more frequently – the differences between age groups on this measure are actually quite large. The researchers don’t make any notable claims about whether this is a good or a bad thing, but it is a result that helps to explain other findings that older people are better at multitasking, inasmuch as they retain more of what they have been paying attention to and are typically less easily distracted (I think I may be an outlier here!). However, the big flaw that I see in this study is that it used staff and students at a university as subjects. University staff are trained to concentrate in quite peculiar ways, because that is what scholarly study is all about, and have typically spent a great many years acquiring the habit, so they are not at all representative of older people in general. It would at least be useful to compare this demographic with other older people who do not habitually concentrate very hard and very persistently on one thing for a living.

Address of the bookmark: http://online.liebertpub.com/doi/full/10.1089/cyber.2010.0350

Five myths about Moocs | Opinion | Times Higher Education

Diana Laurillard chips in with a perceptive set of observations, most interestingly describing education as a personal client industry in which tutor/student ratios are remarkably consistent at around 1:25, so it is no great surprise that it doesn’t scale up. It seems to me that she is quite rightly attacking a particular breed of xMOOC (EdX, Coursera, etc.), but it doesn’t have to be this way, and she carefully avoids discussing *why* that ratio is really needed – her own writings and her variant on conversation theory suggest there might be alternative ways of looking at this.

Her critique that xMOOCs appear to succeed only for those who already know how to be self-guided learners is an old chestnut, but it hits home. She is right in saying that MOOCs (xMOOCs) are pretty poor educational vehicles if the only people who benefit are those who can already drive, and it supports her point about the need for actual teachers for most people *if* we continue to teach in a skeuomorphic manner, copying the form of traditional courses without thinking about why we do what we do and how courses actually work.

For me this explains clearly once again that the way MOOCs are being implemented is wrong and that we have to get away from the ‘course’ part of the acronym, and start thinking about what learners really need, rather than what universities want to give them.

Address of the bookmark: http://www.timeshighereducation.co.uk/comment/opinion/five-myths-about-moocs/2010480.article

Christ, I hate Blackboard

From Dave Noon, what Boing Boing describes as a ‘Lovecraftian rant’ about how much the author hates Blackboard. Brilliant.

If you don’t know Blackboard, it is a learning management system from the Blackboard corporation. The company produces very weak products for the educational market but has captured quite a lot of the territory using a business model built on deliberate lock-in (it was able to gain a foothold early on and keeps its position by making it very hard to migrate to a different platform) combined with an ‘acquire and eliminate’ approach to superior but less-entrenched competitors and, where that fails, aggressive patent trolling.

Address of the bookmark: http://www.lawyersgunsmoneyblog.com/2014/01/christ-i-hate-blackboard

Thirteen Ways of Looking at a MOOC | The Seven Futures

Charming variant on a Wallace Stevens poem, replacing the blackbird with the MOOC. A little heavy on metaphor and simile here and there, but it makes a lot more sense than most scholarly articles I’ve read on the subject of MOOCs, and I’ve read rather too many of them.

Address of the bookmark: http://www.thesevenfutures.com/blog/thirteen-ways-looking-mooc-0

George Siemens Gets Connected – Technology – The Chronicle of Higher Education

My friend and inspirational thought leader George gets well-deserved recognition in this in-depth Chronicle article that gives George’s background as well as an overview of some of his ideas, particularly as they relate to MOOCs. The article has one minor error: it’s Dr Siemens, not Mr Siemens – he has at least two doctorates, one earned, the other awarded.

Address of the bookmark: http://chronicle.com/article/George-Siemens-Gets-Connected/143959/?cid=wc&utm_source=wc&utm_medium=en

Peer Learning Handbook | Peeragogy.org

An interesting, free and evolving handbook about learning with and from others, without formal structures and courses.

It’s a little overblown in singing its own praises and a little lacking in substance as yet, not to mention having a cringeworthy (albeit memorable and descriptive) name. But it is still evolving, and it gathers some very sound ideas from the connectivist family, from a number of excellent thinkers, in a very digestible, non-scholarly, practical form, and it is nice to see that it practises what it preaches. A worthwhile resource that should help to move things forward in useful ways.

Address of the bookmark: http://peeragogy.org/

Teens In The UK Say Facebook Is Dead – Business Insider

This story has been carried in numerous news outlets over the past few days, most with more hype than this one. 

The hype is a little premature: Facebook is not dead yet, though it is very interesting that it is no longer the network of choice amongst younger people, not only in the UK, and has not been for most of the past year. Though a billion or more users will take a while to leave, the ugliest company in social media will need to do something amazing really soon if it is to survive. If it does go under, it might happen surprisingly rapidly, thanks to the inverse of Metcalfe’s Law, especially as Facebook is already suffocating under its own flab. It is the biggest network we have ever seen, but it is certainly not too big to die and, once the exodus gains momentum, the collapse could happen in months rather than years. Like MySpace, Hi5 and others that have fallen out of favour, it will likely collapse in a big way but won’t totally vanish, especially given some sensible investments in things like Instagram. Is this a bad thing? While mostly evil in its business practices, Facebook has made some significant contributions to open source projects, but not enough to compensate for the harm it has done to the Internet in general: I won’t be sorry to see it go. It doesn’t need to be replaced. That’s not how things work any more.
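To put the Metcalfe’s Law point in concrete terms, here is a minimal, purely illustrative sketch (my own toy numbers, nothing measured from Facebook) of how quickly a network’s notional value falls as users leave, if value scales with the square of the number of connected users:

```python
# Illustrative only: Metcalfe's Law values a network at roughly n**2,
# since every pair of users is a potential connection. The same maths
# works in reverse: a modest loss of users wipes out a disproportionate
# share of the network's notional value.

def metcalfe_value(users: int) -> int:
    """Notional network value under Metcalfe's Law (proportional to n^2)."""
    return users * users

original = 1_000_000_000  # a made-up, round "billion users"
for fraction_left in (1.0, 0.9, 0.75, 0.5):
    remaining = int(original * fraction_left)
    share = metcalfe_value(remaining) / metcalfe_value(original)
    print(f"{fraction_left:.0%} of users remain -> {share:.0%} of the value")

# Losing 10% of users costs ~19% of the value; losing half costs 75%.
```

The numbers are crude, but they show why an exodus, once it starts, can feed on itself.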

Address of the bookmark: http://www.businessinsider.com/teens-in-the-uk-say-facebook-is-dead-2013-12

Classrooms may one day learn us – but not yet

Thanks to Jim and several others who have recently brought my attention to IBM’s rather grandiose claim that, in a few years, classrooms will learn us. The kinds of technology described in this article are not really very new. They have been just around the corner since the 60s and have been around in quantity since the early 90s when adaptive hypermedia (AH) and intelligent tutoring systems (ITS) rose to prominence, spawning a great many systems, and copious research reported on in hundreds of conferences, books and journal articles. A fair bit of my early work in the late 90s was on applying such things to an open corpus, which is the kind of thing that has blossomed (albeit indirectly) into the recently popular learning analytics movement. Learning analytics systems are essentially very similar to AH systems but mostly leave the adaptation stage of the process up to the learner and/or teacher and tend to focus more on presenting information about the learning process in a useful way than on acting on the results. I’ve maintained more than a passing interest in this area but I remain a little on the edge of the field because my ambitions for such tools have never been to direct the learning process. For me, this has always been about helping people to help one another to learn, not to tell them or advise them on how to learn, because people are, at least till now, the best teachers and an often-wasted resource. This seemed intuitively obvious to me from the start and, as a design pattern, it has served me well. Of late, I have begun to understand better why it works, hence this post.

The general principle behind any adaptive system for learning is that there are learners, some kind of content, and some means of adapting the content to the learners. This implies some kind of learner model and a means of mapping that to the content, although I believe (some disagree) that the learner model can be disembodied in constituent pieces and can even happily exist outside the systems we build, in the heads of learners. Learning analytics systems are generally all about the learner model and not much else, while adaptive systems also need a content model and a means of bringing the two together.  
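As a rough illustration of that general principle (and emphatically not of any particular product’s design), the parts can be reduced to a learner model, a content model, and an adaptation rule that maps one onto the other. A minimal sketch, with entirely hypothetical names and attributes:

```python
from dataclasses import dataclass, field

# A deliberately naive sketch of the three parts of an adaptive system:
# a learner model, a content model, and an adaptation rule mapping one
# onto the other. All names, attributes and numbers are hypothetical.

@dataclass
class LearnerModel:
    learner_id: str
    mastery: dict = field(default_factory=dict)  # topic -> 0.0 (novice) .. 1.0 (mastered)

@dataclass
class ContentItem:
    item_id: str
    topic: str
    difficulty: float  # 0.0 (easy) .. 1.0 (hard)

def adapt(learner: LearnerModel, items: list) -> ContentItem:
    """One crude adaptation rule among many: pick the item whose
    difficulty is closest to the learner's current mastery of its topic."""
    return min(items,
               key=lambda i: abs(i.difficulty - learner.mastery.get(i.topic, 0.0)))

learner = LearnerModel("l1", mastery={"fractions": 0.4})
items = [ContentItem("a", "fractions", 0.2),
         ContentItem("b", "fractions", 0.5),
         ContentItem("c", "fractions", 0.9)]
print(adapt(learner, items).item_id)  # -> "b"
```

Everything interesting, of course, is hidden in how the models are populated and how the rule is chosen; a learning analytics system, by contrast, mostly stops at the learner model and hands the adaptation back to people.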

Beyond some dedicated closed-corpus systems, there are some big obstacles to building effective adaptive systems for learning, or systems that support the learning process by tracking what we are doing. It’s not that these are bad ideas in principle – far from it. The problem is more to do with how they are automated and what they automate. Automation is a great idea when it works. If the tasks are very well defined and can be converted into algorithms that won’t need to change too much over time, then it can save a lot of effort and let us do things we could not do before, with greater efficiency. If we automate the wrong things, use the wrong data, or get the automation a little wrong, we create at least as many problems as we solve. Learning management systems are a simple case in point: they automated abstracted versions of existing teaching practice, thus making it more likely that existing practices would be continued in an online setting, even though they had in many cases emerged for pragmatic rather than pedagogic reasons that made little sense in an online environment. In fact, the very process of abstraction made this more likely to happen. Worse, we make it very much harder to back out when we automate, because we tend to harden a system, making it less flexible and less resilient. We set in stone what used to be flexible and open. It is worse still if we centralize, because then whole systems depend on what we have set in stone and we cannot implement big changes in any area without scrapping the whole thing. If the way we teach is wrong, then it is crazy to try to automate it. Again, learning management systems show this in spades, as do many of the more popular xMOOC systems: they automate at least some of the wrong things (e.g. courses, grading, etc.). So we had better be mighty sure about what we are automating and why we are doing it.

And this is where things begin to look a bit worrying for IBM’s ‘vision’. At the heart of it is the assumption that classrooms, courses, grades and other paraphernalia of educational systems are all good ideas that are worth preserving. The problem is that these evolved in an ecosystem that made them a sensible set of technologies at the time, but they have very little to do with best practice or research into learning. This is not about learning – it is about propping up a poorly adapted system.

If we ignore the surrounding systems and start with a clean slate, then this should be a set of problems about learning. The first problem for learning analytics is to identify what we should be analyzing, the second is to understand what the data mean and how to process them, and the third is to decide what to do about the results. Our knowledge at all three stages is intermediate at best. There are issues concerning what to capture, what we can discover about learners through the information we capture, and how we should use that knowledge to help them learn better. Central to all of this is what we actually know about education and what we have discovered works best – not just statistically or anecdotally, but for any and all individuals. Unfortunately, in education, the empirical knowledge we have to base this on is very weak indeed.
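Those three problems amount to a capture–interpret–act pipeline, and every stage embeds assumptions about what matters. The sketch below is only meant to show that shape; the event types, the ‘engagement’ proxy and the threshold are all invented for illustration:

```python
# A skeletal learning-analytics pipeline: capture -> interpret -> act.
# The event types, the 'engagement' proxy and the threshold are invented
# purely for illustration; each stage embeds assumptions about what matters.

def capture(raw_logs):
    """Stage 1: decide what gets recorded at all (here, only logins and posts)."""
    return [e for e in raw_logs if e["type"] in ("login", "forum_post")]

def interpret(events):
    """Stage 2: turn events into a proxy measure ('engagement' per learner)."""
    scores = {}
    for e in events:
        scores[e["learner"]] = scores.get(e["learner"], 0) + 1
    return scores

def act(scores, threshold=3):
    """Stage 3: decide what to do about it (here, flag low-scoring learners)."""
    return [learner for learner, score in scores.items() if score < threshold]

logs = [
    {"learner": "ada", "type": "login"},
    {"learner": "ada", "type": "forum_post"},
    {"learner": "ada", "type": "forum_post"},
    {"learner": "bob", "type": "login"},
    {"learner": "bob", "type": "page_view"},  # silently discarded at capture
]
print(act(interpret(capture(logs))))  # -> ['bob']
```

Note that ‘bob’ is flagged partly because the capture stage threw half of his activity away: a small example of how the choices made at each stage quietly shape what the analytics can ever tell us.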

So far, the best we can come up with that is fairly generalizable (my favourite example being spaced learning) is typically only relevant to small and trivial learning tasks like memorization or simple skill acquisition. We’re pretty good at figuring out how to teach simple things well, and ITS and AH systems have done a pretty fair job under such circumstances, where goals (seldom learning goals – more often proxies like marks on tests or retention rates) are very clear and/or learning outcomes very simple. As soon as we aim for more complex learning tasks, the vast majority of studies of education are either specific, qualitative and anecdotal, or broad and statistical, or (more often than should be the case) both. Neither is of much value when trying to create an algorithmic teacher, which is the explicit goal of AH and ITS, and is implied in the teaching/learning support systems provided by learning analytics.  
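Spaced learning is generalizable partly because it can be reduced to a simple mechanical rule, which is also why it suits these systems so well. A toy expanding-interval schedule, where the starting interval and multiplier are arbitrary choices of mine rather than values from any particular study:

```python
from datetime import date, timedelta

# A toy expanding-interval review schedule: the kind of simple, mechanical
# rule that makes spaced learning easy to automate. The starting interval
# and multiplier are arbitrary, not taken from any particular study.

def review_schedule(start: date, reviews: int,
                    first_interval_days: float = 1.0,
                    multiplier: float = 2.5) -> list:
    """Return review dates whose gaps grow after each successive review."""
    dates, interval, current = [], first_interval_days, start
    for _ in range(reviews):
        current = current + timedelta(days=round(interval))
        dates.append(current)
        interval *= multiplier
    return dates

for d in review_schedule(date(2014, 1, 1), reviews=5):
    print(d)  # gaps of roughly 1, 2, 6, 16 and 39 days between reviews
```

Nothing in a rule like this knows anything about the learner beyond when they last reviewed an item, which is precisely why it only takes us as far as memorization and simple skill acquisition.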

There are many patterns that we do know a lot about, though they don’t help much here. We know, for example, that one-to-one mastery teaching on average works really brilliantly – Bloom’s 2-sigma challenge still stands, about 30 years after it was first made. One-to-one teaching is not a process that can be replicated algorithmically: it is simply a configuration of people that allows the participants to adapt, interact and exchange or co-develop knowledge with each other more effectively than configurations where there is less direct contact between people. It lets learners express confusion or enthusiasm as directly as possible, and lets the teacher provide tailored responses, giving full and undistracted attention. It allows teachers to care directly both for the subject and for the student, and to express that caring effectively. It allows targeted teaching to occur, however that teaching might be enacted. It is great for motivation because it ticks all the boxes on what makes us self-motivated. But it is not a process, and it tells us nothing at all about how best to teach or how best to learn in any way that can be automated, save that people can, on the whole, be pretty good at both, at least on average.

We also know that social constructivist models can, on average, be effective, for probably related reasons. They can also be a complete disaster. But fans of such approaches wilfully ignore the rather obvious fact that lots of people often learn very well indeed without them – the throwaway ‘on average’ covers a massive range of differences between real people, teachers and learners, and between the same people at different times in different contexts. This shouldn’t come as a surprise, because a lot of teaching leads to some learning and most teaching is neither one-to-one nor inspired by social constructivist thinking.

Personally, I have learned phenomenal amounts, been inspired and discovered many things through pretty dreadful teaching technologies and processes, including books and lectures and even examined quizzes. Why does it work? Partly because how we are taught is not the same thing at all as how we learn: how you and I learn from the same book is probably completely different in myriad ways. Partly it is because it ain’t what you do to teach but how you do it that makes the biggest difference. We do not yet have an effective algorithmic way of making, or even identifying, creative and meaningful decisions about what will help people to learn best – it is something that people, and only people, do well. Teachers can follow an identical course design with identical subject matter and turn it into a pile of junk or a work of art, depending on how they do it, how enthusiastic they are about it, how much eye contact they make, how they phrase it, how they pace it, their intonation, whether they turn to the wall, whether they remembered to shave, whether they stammer, and so on, and the same differentiators may work sometimes and not at other times, may work for some people and not for others. Sometimes even awful teaching can lead to great learning, if the learners are interested and learn despite rather than because of the teacher, taking things into their own hands because the teaching is so awful. Teaching and learning, beyond simple memory and training tasks, are arts and not sciences.

True, some techniques appear to work more often than not (but not always), but there is always a lot of mysterious stuff that is not replicable from one context to the next, save in general patterns and paradigms that are mostly not easily reduced to algorithms. It is over-ambitious to think that we can automate in software something we do not understand well enough to turn into an algorithm. Sure, we learn tricks and techniques, just like any artist, and it is possible to learn to be a good teacher just as it is possible to learn to be a good sculptor, painter or designer. We can learn much of what doesn’t work, and methods for dealing with tricky situations, and even a few rules of thumb and processes for learning from our mistakes. But, when it comes down to basics, it is a creative process that can be done well, badly or with inspiration, whether we follow rules of thumb or not, and it takes very little training to become proficient. Some of the best teachers I’ve ever known have used the worst techniques. I quite like the emphasis that Alexandra Cristea and others have put on designing good authoring environments for adaptive systems, because they then become creative tools rather than ends in themselves, but a good authoring tool has, to date, proved elusive and far too few people are working on this problem.

The proponents of learning analytics reckon they have an answer to this problem: simply provide more information, better aggregated and more easily analyzed. It is still a creative and responsive teacher doing the teaching and/or a learner doing the learning, so none of the craft or art is lost, but now they have more information – more complete, more timely, better presented – to help them with the task so that they can do it better. The trouble is that, if the information is about the wrong things, it will be worse than useless. We have very little idea what works in education from a process point of view, so we do not know what to collect or how to represent it, unless all we are doing is relying on proxies that are based on an underlying model that we know with absolute certainty is at least partly incorrect or, at best, massively incomplete. Unless we can get a clearer idea of how education works, we are inevitably going to be making a system that we know to be flawed more efficient than it was. Unfortunately, it is not entirely clear where the flaws lie, especially as what may be a flaw for one person may not be for another, and a flaw in one context may be a positive benefit in another. When performing analytics or building adaptive systems of any kind, we focus on proxies like grades, attention, time-on-task, and so on – things that we unthinkingly value in the broken system and that mean different things to different people in different contexts. Peter Drucker made an important observation about this kind of thing:

‘Nothing is less productive than to make more efficient what should not be done at all.’

A lot of systems of this nature improve the efficiency of bad ideas. Maybe they valorize behaviourist learning models and/or mediaeval or industrial forms of teaching. Maybe they increase the focus on grading. Maybe they rely on task-focused criteria that ignore deeper connective discoveries. Maybe they contain an implied knowledge model that is based on experts’ views of a subject area, which is not normally the best way to come by that knowledge. Maybe they assume that time on task matters or, just as bad, that less time spent learning means the system is working better (both and neither are true). Maybe they track progress through a system that, at its most basic level, is anti-educational. I have seen all these flaws and then some. The vast majority of tools are doing education-process analytics, not learning analytics. Even those systems that use a more open form of analytics, making fewer assumptions about what should be measured and using data mining techniques to uncover hidden patterns, typically have risky systemic effects: they afford plentiful opportunities for filter bubbles, path dependencies, Matthew Effects and harmful feedback loops, for example.

But there is a more fundamental difficulty for these systems. Whenever you make a model it is, of necessity, a simplification, and the rules for simplification make a difference. Models are innately biased, but we need them, so the models have to be good. If we don’t know what works in the first place, then we cannot have any idea whether the patterns we pick out and use to help guide people’s learning journeys are a cause, an effect or a by-product of something else entirely. If we lack an explicit, accurate or useful model in the first place, we could once again just be making something more efficient that should not be done at all. This is not to suggest that we should abandon the effort, because it might be a step towards finding a better model, but it does suggest that we should treat all findings gathered this way with extreme scepticism and care, as steps towards a model rather than ends in themselves.
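The feedback-loop worry is easy to demonstrate in miniature. The toy simulation below (all numbers invented) recommends whatever has been chosen most often before; no resource is intrinsically better than any other, yet small early differences compound into a runaway Matthew Effect:

```python
import random

# A toy Matthew-Effect feedback loop: a recommender that favours whatever
# has been chosen most often before. No resource is intrinsically better
# than another, yet small early differences compound. Numbers are invented.

random.seed(1)
counts = {"resource_a": 1, "resource_b": 1, "resource_c": 1}

for _ in range(1000):
    # recommend in proportion to past popularity ("rich get richer")
    pick = random.choices(list(counts), weights=list(counts.values()))[0]
    counts[pick] += 1

print(counts)  # one resource typically ends up with the lion's share
```

A data-mined pattern that emerges from a loop like this is an artefact of the loop itself, which is one way of restating the cause-versus-by-product problem above.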

In conclusion, from a computing perspective, we don’t really know much about what to measure, we don’t have great grounds for deciding how to process what we have measured, and we don’t know much at all about how to respond to what we have processed. Real teachers and learners know this kind of thing and can make sense of the complexity because we don’t just rely on algorithms to think. Well, OK, that’s not necessarily entirely true, but the algorithms likely operate at a neural-network level as well as an abstract level, and are probably combinatorially complex in ways we are not going to understand for quite a while yet. It is thus a little early to be predicting a new generation of education. But it is a fascinating area to research that is full of opportunities to improve things, albeit with one important proviso: we should not be entrusting a significant amount of our learning to such systems just yet, at least not on a massive scale. If we do use them, it should be piecemeal, and we should try diverse systems rather than centralizing or standardizing in the ways that the likes of Knewton are trying to do. It’s a bit like putting a computer in charge of decisions about whether or not to launch nuclear missiles. If the computer were amazingly smart, reliable and bug-free, in a way that no existing computer even approaches, it might make sense. If not – if we do not understand all the processes and ramifications of decisions that have to be made along the way, including ways to avoid mistakes, accidents and errors – it might be better to wait. If we cannot wait, then using a lot of different systems and judging their different outputs carefully might be a decent compromise. Either way, adaptive teaching and learning systems are undoubtedly a great idea, but they are, have long been, and should remain on the fringes until we have a much clearer idea of what they are supposed to be doing.

Facebook Is A Fundamentally Broken Product That Is Collapsing Under Its Own Weight

An article from Business Insider reporting on Benedict Evans’s compelling analysis of Facebook’s big challenge. Essentially, there is too much data, and Facebook’s algorithms cannot cope. In fact, algorithms are part of the problem…

“today, you could post that you’re getting married, but only half of your friends might see that posting because of the News Feed’s algorithms.”

And algorithms are not the solution…

 “If you have 1,500 emails coming in every day, you wouldn’t say, ‘I need better algorithms.'”

So what next?

“By this time next year we could have 3,000 posts, links, videos, status updates, etc., all flowing through the News Feed. It’s a struggle to sort through 1,500; how will Facebook deal with sorting through 3,000?”

Basically Facebook is broken and, unless its henchpeople and minions can come up with something radically new, it is not going to be fixed and it will just get worse. Sure, Facebook as a central service is not going away any time soon (probably – Metcalfe’s Law works in reverse too, so I’d not want to place any bets on that) but it doesn’t work as a social network any more, precisely because of the avaricious, amoral, single-minded network-building design that made it what it is today. I think it did a very sensible thing in buying, but not fully integrating, Instagram, because it can only grow now by moving into other ecosystems and dissociating the core from the satellites. It probably needs to go on quite a big spending spree now.

Seeing Facebook begin to fail, at least in its core, pleases me because it rose to success by cynical exploitation. It went places other social networking systems that predated it, as well as most that have come since, feared or had no inclination to go. You can’t have too many predators or parasites of one kind in an ecosystem otherwise the whole system falls apart. Or, to look at it another way, Facebook got too fat eating its own users, and now it can’t digest them any more. Either way, we’re much better off without it.

Address of the bookmark: http://www.businessinsider.com/facebook-news-feed-benedict-evans-2013-12#ixzz2nqI8Zbzw