Teens In The UK Say Facebook Is Dead – Business Insider

This story has been carried in numerous news outlets over the past few days, most with more hype than this one. 

The hype is a little premature: Facebook is not dead yet, though it is telling that it is no longer the network of choice amongst younger people – not only in the UK – and has not been for most of the past year. A billion or more users will take a while to leave, but the ugliest company in social media will need to do something amazing very soon if it is to survive. If it does go under, the collapse might be surprisingly rapid, thanks to the inverse of Metcalfe’s Law, especially as Facebook is already suffocating under its own flab. It is the biggest network we have ever seen, but it is certainly not too big to die and, once the exodus gains momentum, the end could come in months rather than years. Like MySpace, Hi5 and others that have fallen out of favour, it will likely collapse in a big way but won’t totally vanish, especially given some sensible investments in things like Instagram. Is this a bad thing? Facebook has made some significant contributions to open source projects, but not enough to compensate for its mostly evil business practices and the harm it has done to the Internet in general: I won’t be sorry to see it go. It doesn’t need to be replaced. That’s not how things work any more.
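As an aside, the arithmetic behind that ‘inverse of Metcalfe’s Law’ point is worth spelling out: if a network’s value grows roughly with the square of its users, it shrinks the same way, so an exodus destroys value far faster than it sheds users. A throwaway sketch (Metcalfe’s n-squared valuation is a rough heuristic, not a law of nature):

```python
# Metcalfe's heuristic: network value is roughly proportional to the
# number of possible connections between users, i.e. n(n-1)/2 ~ n^2.
def metcalfe_value(users: int) -> int:
    return users * (users - 1) // 2

original = 1_000_000_000  # roughly Facebook-sized

for fraction_left in (0.9, 0.7, 0.5):
    remaining = int(original * fraction_left)
    share = metcalfe_value(remaining) / metcalfe_value(original)
    print(f"{fraction_left:.0%} of the users -> {share:.0%} of the value")

# 90% of the users -> 81% of the value
# 70% of the users -> 49% of the value
# 50% of the users -> 25% of the value
```

Losing a third of its users would already cost it over half its value, which is why the decline of a social network, once begun, tends to accelerate.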

Address of the bookmark: http://www.businessinsider.com/teens-in-the-uk-say-facebook-is-dead-2013-12

Classrooms may one day learn us – but not yet

Thanks to Jim and several others who have recently brought my attention to IBM’s rather grandiose claim that, in a few years, classrooms will learn us. The kinds of technology described in this article are not really very new. They have been just around the corner since the 60s and have been around in quantity since the early 90s, when adaptive hypermedia (AH) and intelligent tutoring systems (ITS) rose to prominence, spawning a great many systems and copious research reported in hundreds of conferences, books and journal articles. A fair bit of my early work in the late 90s was on applying such things to an open corpus, the kind of thing that has blossomed (albeit indirectly) into the recently popular learning analytics movement. Learning analytics systems are essentially very similar to AH systems but mostly leave the adaptation stage of the process up to the learner and/or teacher, and tend to focus more on presenting information about the learning process in a useful way than on acting on the results. I have maintained more than a passing interest in this area, but I remain a little on the edge of the field because my ambitions for such tools have never been to direct the learning process. For me, this has always been about helping people to help one another to learn, not telling or advising them how to learn, because people are, at least until now, the best teachers and an often-wasted resource. This seemed intuitively obvious to me from the start and, as a design pattern, it has served me well. Of late, I have begun to understand better why it works, hence this post.

The general principle behind any adaptive system for learning is that there are learners, some kind of content, and some means of adapting the content to the learners. This implies some kind of learner model and a means of mapping it to the content, although I believe (some disagree) that the learner model can be disembodied into constituent pieces and can even happily exist outside the systems we build, in the heads of learners. Learning analytics systems are generally all about the learner model and not much else, while adaptive systems also need a content model and a means of bringing the two together.
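To make that anatomy concrete, here is a deliberately minimal sketch of the three parts; the class names and the matching heuristic are my own illustrative assumptions, not a description of any particular system:

```python
# A minimal sketch of an adaptive learning system's anatomy: a learner
# model, a content model, and an adaptation step mapping one to the other.
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    mastery: dict = field(default_factory=dict)  # topic -> estimated mastery, 0..1

@dataclass
class ContentItem:
    title: str
    topics: dict  # topic -> difficulty, 0..1

def adapt(learner: LearnerModel, corpus: list) -> ContentItem:
    """The adaptation step: pick the item whose difficulty best matches the
    learner's estimated mastery. A learning analytics system would stop
    before this step and present the learner model to humans instead."""
    def mismatch(item: ContentItem) -> float:
        return sum(abs(difficulty - learner.mastery.get(topic, 0.0))
                   for topic, difficulty in item.topics.items())
    return min(corpus, key=mismatch)

# A learner part-way into algebra gets the mid-difficulty item:
learner = LearnerModel(mastery={"algebra": 0.4})
corpus = [ContentItem("Intro", {"algebra": 0.1}),
          ContentItem("Practice", {"algebra": 0.5}),
          ContentItem("Proofs", {"algebra": 0.9})]
print(adapt(learner, corpus).title)  # -> Practice
```

The only genuinely contentious part is the mismatch function: everything we do not know about education is hiding inside it.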

Beyond some dedicated closed-corpus systems, there are some big obstacles to building effective adaptive systems for learning, or systems that support the learning process by tracking what we are doing. It’s not that these are bad ideas in principle – far from it. The problem is more to do with how they are automated and what they automate. Automation is a great idea when it works. If the tasks are very well defined and can be converted into algorithms that won’t need to change too much over time, then automation can save a lot of effort and let us do things we could not do before, with greater efficiency. If we automate the wrong things, use the wrong data, or get the automation a little wrong, we create at least as many problems as we solve. Learning management systems are a simple case in point: they automated abstracted versions of existing teaching practice, thus making it more likely that existing practices would be continued in an online setting, even though those practices had in many cases emerged for pragmatic rather than pedagogic reasons that made little sense in an online environment. In fact, the very process of abstraction made this more likely to happen. Worse, automating makes it very much harder to back out, because we tend to harden a system, making it less flexible and less resilient. We set in stone what used to be flexible and open. It is worse still if we centralize, because then whole systems depend on what we have set in stone and big changes cannot be made in any one area without scrapping the whole thing. If the way we teach is wrong, then it is crazy to try to automate it. Again, learning management systems show this in spades, as do many of the more popular xMOOC systems: they automate at least some of the wrong things (e.g. courses, grading, etc.). So we had better be mighty sure about what we are automating and why we are doing it.

And this is where things begin to look a bit worrying for IBM’s ‘vision’. At the heart of it is the assumption that classrooms, courses, grades and the other paraphernalia of educational systems are all good ideas worth preserving. The problem is that these evolved in an ecosystem that made them a sensible set of technologies at the time, but they have very little to do with best practice or research into learning. This is not about learning – it is about propping up a poorly adapted system.

If we ignore the surrounding systems and start with a clean slate, then this should be a set of problems about learning. The first problem for learning analytics is to identify what we should be analyzing, the second is to understand what the data mean and how to process them, and the third is to decide what to do about the results. Our knowledge of all three stages is patchy at best. There are issues concerning what to capture, what we can discover about learners through the information we capture, and how we should use that knowledge to help them learn better. Central to all of this is what we actually know about education and what we have discovered works best – not just statistically or anecdotally, but for any and all individuals. Unfortunately, in education, the empirical knowledge we have to base this on is very weak indeed.

So far, the best we can come up with that is fairly generalizable (my favourite example being spaced learning) is typically only relevant to small and trivial learning tasks like memorization or simple skill acquisition. We are good at figuring out how to teach simple things well, and ITS and AH systems have done a pretty fair job under such circumstances, where goals (seldom learning goals – more often proxies like marks on tests or retention rates) are very clear and/or learning outcomes very simple. As soon as we aim for more complex learning tasks, the vast majority of studies of education are either specific, qualitative and anecdotal, or broad and statistical, or (more often than should be the case) both. Neither kind is of much value when trying to create an algorithmic teacher, which is the explicit goal of AH and ITS, and is implied in the teaching/learning support systems provided by learning analytics.
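Spaced learning makes the point nicely: it is automatable precisely because the whole ‘teaching decision’ collapses into a date calculation. A minimal sketch, using a simple doubling rule that I am assuming purely for illustration (real schedulers such as SuperMemo’s SM-2 are only modestly more elaborate):

```python
# Toy spaced-repetition scheduler: the entire 'pedagogy' is an interval rule.
from datetime import date, timedelta

def next_interval(previous_days: int, recalled: bool) -> timedelta:
    """Double the gap after a successful recall; start over after a lapse."""
    if not recalled:
        return timedelta(days=1)
    return timedelta(days=max(1, previous_days) * 2)

# A fact recalled correctly after a 4-day gap comes back in 8 days:
print(date.today() + next_interval(4, recalled=True))
```

Nothing remotely like this exists for, say, learning to construct a compelling argument, which is rather the point.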

There are many patterns that we do know a lot about, though they don’t help much here. We know, for example, that one-to-one mastery teaching on average works really brilliantly – Bloom’s 2-sigma challenge still stands, about 30 years after it was first made. But one-to-one teaching is not a process that can be replicated algorithmically: it is simply a configuration of people that allows the participants to adapt, interact and exchange or co-develop knowledge with each other more effectively than configurations where there is less direct contact between people. It lets learners express confusion or enthusiasm as directly as possible, and lets the teacher provide tailored responses, giving full and undistracted attention. It allows teachers to care directly both for the subject and for the student, and to express that caring effectively. It allows targeted teaching to occur, however that teaching might be enacted. It is great for motivation because it ticks all the boxes on what makes us self-motivated. But it is not a process and tells us nothing at all about how best to teach or how best to learn in any way that can be automated, save that people can, on the whole, be pretty good at both.

We also know that social constructivist models can, on average, be effective, for probably related reasons. They can also be a complete disaster. But fans of such approaches wilfully ignore the rather obvious fact that lots of people often learn very well indeed without them – the throwaway ‘on average’ covers a massive range of differences between real people, teachers and learners, and between the same people at different times and in different contexts. This shouldn’t come as a surprise, because a lot of teaching leads to some learning and most teaching is neither one-to-one nor inspired by social constructivist thinking. Personally, I have learned phenomenal amounts, been inspired and discovered many things through pretty dreadful teaching technologies and processes, including books and lectures and even examined quizzes. Why does it work? Partly because how we are taught is not at all the same thing as how we learn: how you and I learn from the same book probably differs in myriad ways. Partly it is because it ain’t what you do to teach but how you do it that makes the biggest difference. We do not yet have an effective algorithmic way of making, or even identifying, the creative and meaningful decisions about what will help people to learn best – it is something that people, and only people, do well.

Teachers can follow an identical course design with identical subject matter and turn it into a pile of junk or a work of art, depending on how they do it: how enthusiastic they are about it, how much eye contact they make, how they phrase it, how they pace it, their intonation, whether they turn to the wall, whether they remembered to shave, whether they stammer, etc., etc. – and the same differentiators may work sometimes and not at others, may work for some people and not for others. Sometimes, even awful teaching can lead to great learning, if the learners are interested and learn despite rather than because of the teacher, taking things into their own hands because the teaching is so awful. Teaching and learning, beyond simple memory and training tasks, are arts and not sciences. True, some techniques appear to work more often than not (but not always), yet there is always a lot of mysterious stuff that is not replicable from one context to the next, save in general patterns and paradigms that are mostly not easily reduced to algorithms. It is over-ambitious to think that we can automate in software something we do not understand well enough to turn into an algorithm.

Sure, we learn tricks and techniques, just like any artist, and it is possible to learn to be a good teacher just as it is possible to learn to be a good sculptor, painter or designer. We can learn much of what doesn’t work, methods for dealing with tricky situations, a few rules of thumb to help us do it better, and processes for learning from our mistakes. But, when it comes down to basics, teaching is a creative process that can be done well, badly or with inspiration, whether we follow the rules of thumb or not, and it takes very little training to become proficient: some of the best teachers I’ve ever known have used the worst techniques. I quite like the emphasis that Alexandra Cristea and others have put on designing good authoring environments for adaptive systems, because the systems then become creative tools rather than ends in themselves, but a good authoring tool has, to date, proved elusive, and far too few people are working on the problem.


The proponents of learning analytics reckon they have an answer to this problem: simply provide more information, better aggregated and more easily analyzed. It is still a creative and responsive teacher doing the teaching and/or a learner doing the learning, so none of the craft or art is lost, but now they have more information – more complete, more timely, better presented – to help them with the task so that they can do it better. The trouble is that, if the information is about the wrong things, it will be worse than useless. We have very little idea what works in education from a process point of view, so we do not know what to collect or how to represent it, unless all we are doing is relying on proxies based on an underlying model that we know with absolute certainty is at least partly incorrect or, at best, massively incomplete. Unless we can get a clearer idea of how education works, we are inevitably going to be making a system that we know to be flawed more efficient than it was. Unfortunately, it is not entirely clear where the flaws lie, especially as what is a flaw for one person may not be for another, and a flaw in one context may be a positive benefit in another. When performing analytics or building adaptive systems of any kind, we focus on proxies like grades, attention, time-on-task, and so on – things that we unthinkingly value in the broken system and that mean different things to different people in different contexts. Peter Drucker made an important observation about this kind of thing:

‘Nothing is less productive than to make more efficient what should not be done at all.’

A lot of systems of this nature improve the efficiency of bad ideas. Maybe they valorize behaviourist learning models and/or mediaeval or industrial forms of teaching. Maybe they increase the focus on grading. Maybe they rely on task-focused criteria that ignore deeper connective discoveries. Maybe they contain an implied knowledge model based on experts’ views of a subject area, which is not normally the best way to come by that knowledge. Maybe they assume that time on task matters or, just as bad, that less time spent learning means the system is working better (both and neither are true). Maybe they track progress through a system that, at its most basic level, is anti-educational. I have seen all these flaws and then some. The vast majority of tools are doing education-process analytics, not learning analytics. Even those systems that use a more open form of analytics that makes fewer assumptions about what should be measured, using data mining techniques to uncover hidden patterns, typically have risky systemic effects: they afford plentiful opportunities for filter bubbles, path dependencies, Matthew Effects and harmful feedback loops, for example.

But there is a more fundamental difficulty for these systems. Whenever you make a model it is, of necessity, a simplification, and the rules for simplification make a difference. Models are innately biased, but we need them, so the models have to be good. If we don’t know what it is that works in the first place, then we cannot have any idea whether the patterns we pick out and use to help people guide their learning journeys are a cause, an effect or a by-product of something else entirely. If we lack an explicit and accurate or useful model in the first place, we could again be making something more efficient that should never be done at all. This is not to suggest that we should abandon the effort, because it might be a step towards finding a better model, but it does suggest we should treat all findings gathered this way with extreme scepticism and care, as steps towards a model rather than an end in themselves.
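To see the feedback-loop risk in miniature, consider a toy model (my own, not a description of any real system) in which an analytics engine recommends resources in proportion to their existing popularity. Early random differences then snowball rather than wash out – the Pólya-urn dynamic behind the Matthew Effect:

```python
# Toy Polya-urn simulation of a popularity-driven recommender.
import random

random.seed(1)
uses = {"resource_a": 1, "resource_b": 1, "resource_c": 1}

for _ in range(1000):
    # Recommend in proportion to past use: this is the feedback loop.
    pick = random.choices(list(uses), weights=list(uses.values()))[0]
    uses[pick] += 1

print(uses)
# Whichever resource got lucky early tends to finish far ahead, on noise
# alone - no quality signal was ever involved.
```

The pattern the analytics would then report (‘this resource works best’) is an artefact of the loop itself, not evidence about learning.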

In conclusion, from a computing perspective, we don’t really know much about what to measure, we don’t have great grounds for deciding how to process what we have measured, and we don’t know much at all about how to respond to what we have processed. Real teachers and learners know this kind of thing and can make sense of the complexity because we don’t just rely on algorithms to think. Well, OK, that’s not necessarily entirely true, but the algorithms likely operate at a neural-network level as well as an abstract level and are probably combinatorially complex in ways we are not likely to understand for quite a while yet. It is thus a little early to be predicting a new generation of education. But it is a fascinating area to research, full of opportunities to improve things, albeit with one important proviso: we should not be entrusting a significant amount of our learning to such systems just yet, at least not on a massive scale. If we do use them, it should be piecemeal, and we should try diverse systems rather than centralizing or standardizing in the ways that the likes of Knewton are attempting. It’s a bit like putting a computer in charge of deciding whether or not to launch nuclear missiles. If the computer were amazingly smart, reliable and bug-free, in a way that no existing computer even approaches, it might make sense. If not – if we do not understand all the processes and ramifications of the decisions that have to be made along the way, including ways to avoid mistakes, accidents and errors – it might be better to wait. If we cannot wait, then using a lot of different systems and judging their different outputs carefully might be a decent compromise. Either way, adaptive teaching and learning systems are undoubtedly a great idea, but they are, have long been, and should remain on the fringes until we have a much clearer idea of what they are supposed to be doing.

Facebook Is A Fundamentally Broken Product That Is Collapsing Under Its Own Weight

An article from Business Insider reporting on Benedict Evans’s compelling analysis of Facebook’s big challenge. Essentially, there is too much data, and Facebook’s algorithms cannot cope. In fact, algorithms are part of the problem…

“…today, you could post that you’re getting married, but only half of your friends might see that posting because of the News Feed’s algorithms.”

And algorithms are not the solution…

“If you have 1,500 emails coming in every day, you wouldn’t say, ‘I need better algorithms.'”

So what next?

“By this time next year we could have 3,000 posts, links, videos, status updates, etc., all flowing through the News Feed. It’s a struggle to sort through 1,500; how will Facebook deal with sorting through 3,000?”

Basically, Facebook is broken and, unless its henchpeople and minions can come up with something radically new, it is not going to be fixed: it will just get worse. Sure, Facebook as a central service is not going away any time soon (probably – Metcalfe’s Law works in reverse too, so I’d not want to place any bets on that), but it doesn’t work as a social network any more, precisely because of the avaricious, amoral, single-minded network-building design that made it what it is today. I think it did a very sensible thing in buying, but not fully integrating, Instagram, because it can only grow now by moving into other ecosystems and dissociating the core from the satellites. It probably needs to go on quite a big spending spree now.

Seeing Facebook begin to fail, at least at its core, pleases me, because it rose to success by cynical exploitation. It went places that other social networking systems – those that predated it, as well as most that have come since – feared or had no inclination to go. You can’t have too many predators or parasites of one kind in an ecosystem, otherwise the whole system falls apart. Or, to look at it another way, Facebook got too fat eating its own users, and now it can’t digest them any more. Either way, we’re much better off without it.

Address of the bookmark: http://www.businessinsider.com/facebook-news-feed-benedict-evans-2013-12#ixzz2nqI8Zbzw

Boston Study: What Higher Standardized Test Scores Don’t Mean

Interesting report and interview on the relationship between test scores of ‘crystallized skills’ (what schools teach) and ‘fluid intelligence’ (basically, the ability to think). Of course, there is none. Furthermore, teaching makes almost no contribution to logical thinking and problem solving in novel situations, at least for the 1,400 eighth-graders being studied.

“…where a school accounted for approximately 1/3 of the variation in state test scores, they accounted for very near zero of the variation on these fluid cognitive skill measures.”

This is hardly surprising in a world where the success of teaching is measured by standardized tests and teaching is focused on achieving good results in those tests. The researchers are right to observe that crystallized skills are important, so this is not necessarily all bad news: schools appear to have some effect. However, I strongly suspect this effect is short-term (lasting about as long as is needed to pass the test) and much smaller than it could be, thanks to the extrinsic motivation designed into the system, which actively degrades students’ intrinsic motivation to learn. Whether or not that’s true, it is a terrible indictment of an educational system that it affords no opportunities to develop the thinking skills that matter more. These skills are not measured in the standardized tests, nor could they be measured that way without destroying what they seek to observe. This doesn’t mean that we need better tests. We need better education.

Address of the bookmark: http://commonhealth.wbur.org/2013/12/standard-test-fluid-skills

Top UK headteacher: Michael Gove is 'pressing the rewind button'

An article from the Guardian that makes me glad my kids have already gone through the UK school system. The pigeon-brained fool in charge of UK education right now, Michael Gove, is doing his level best to set school education in that country back a hundred years, ignorantly or wilfully ignoring every shred of educational research over the past century. He is living proof that an expensive education doesn’t automatically lead to an educated person and might even lead to the reverse: allegedly, he was a somewhat intelligent child, at least before he went to an independent school. Surprising. Thank heavens for people like Tricia Kelleher, the main subject of this article, whose common-sense critique rings true. I particularly like her complementary observations:

“If Michael Gove is saying we should just value what is in Pisa, then we might as well just collapse the curriculum and teach what will come top.”

and

“My worry is we are now going to be driven towards Pisa because Pisa becomes the next altar we worship at. But it is really a cul-de-sac in learning terms.”

Well said.

It makes me wonder why we allow elected representatives with much less than no knowledge of education to run/ruin our educational systems. The lunatic measures of success that they latch onto – PISA, standardized testing, the deliberate teaching of things that alienate children, and counter-productive initiatives that seek efficiency but liquidize the baby with the bathwater – must hold some appeal for a significant number of people, even though they actually guarantee failure. I’m guessing that these ideas might resonate with, and spring from, some of those who were brought up under the long-discredited behaviourist regime that blighted the mid-twentieth century and that still refuses to die in some places, even among educators. Few of us are very rational beings, and we suffer, amongst many other things, from irrational primacy biases, choice-supportive biases, confirmation biases, irrational escalation and endowment effects that together lead us to believe that what was done to us was the right way to do things, no matter how much the available evidence proves that it was not. Unfortunately, those who were damaged by behaviourist teaching approaches have been taught one of the best ways not to learn so, notwithstanding a good many who rise above it and/or who learned to learn in other ways, this may be a vicious cycle doomed to repeat itself for a while longer.

Address of the bookmark: http://www.theguardian.com/politics/2013/dec/19/headteacher-michael-gove-tricia-kelleher-education-reforms

Who's Cheating Whom?

I love Alfie Kohn – his writing is consistently clear, constructive and filled with sound arguments based on bulletproof research, whose conclusions continue to surprise even though they are completely obvious to anyone who spends a moment thinking about them. In this essay he shows how we, the teachers and our institutions, are the principal cause of cheating, creating elaborate and demotivating gotchas and systems designed to make cheating rewarding and, perhaps, inevitable. As a result, we are cheating students out of the joy of learning. We are teaching them not to learn. Full of useful insights and simple but not simplistic solutions.

Address of the bookmark: http://www.alfiekohn.org/teaching/cheating.htm

Donald Clark Plan B: When Big Data goes bad: 6 epic fails

Donald, once again in brilliant form, cracking open a bunch of academic memes that still pervade the education system and have way too much influence on those who fund it. Especially good on challenging the awful data underlying standardization and comparisons like the university league tables and PISA scores on which governments and journalists thrive.


Address of the bookmark: http://donaldclarkplanb.blogspot.co.uk/2013/11/when-big-data-goes-bad-6-epic-fails.html

Being-taught habits vs learning styles

In case the news has not got through to anyone yet: research into learning styles is pointless. The research proving this is legion; for just a tiny sample of the copious and damning evidence, see:

Riener, C., & Willingham, D. (2010). The myth of learning styles. Change: The Magazine of Higher Learning, 42(5), 32–35. doi:10.1080/00091383.2010.503139

Dembo, M. H., & Howard, K. (2007). Advice about the use of learning styles: A major myth in education. Journal of College Reading and Learning, 37(2), 101–109.

Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. London: Learning and Skills Research Centre.

No one denies that it is possible to classify people in all sorts of ways with regard to things that might affect how they learn, nor that everyone is different, nor that there are some similarities and commonalities in how people prefer to, or habitually, go about learning. When these elaborately constructed theories claim no more than that people are different in interesting and sometimes identifiably consistent ways, I have little difficulty accepting them in principle, though it is always worth observing that there are well over 100 of these theories and they cannot all be right. There is typically almost nothing in any of them that could prove them wrong, either: this is a hallmark of pseudo-science and should set our critical sensors on full alert. The problem comes when the acolytes of whatever nonsense model is their preferred flavour take the next step and tell us that we should teach people in particular ways to match their particular learning styles. There is absolutely no plausible evidence that knowing someone’s learning style, however it is measured, should have any influence whatsoever on how we should teach them, apart from the obvious requirement that we should cater for diversity and provide multiple paths to success. None. This is despite many decades spent trying to prove that it makes a difference. It doesn’t.

It is consequently a continual source of amazement to me when people pipe up in conversations to say that we should consider student learning styles when designing courses and learning activities. Balderdash. There is a weak case to be made that, like astrology (exactly like astrology), such theories serve the useful purpose of encouraging people to reflect on what they do and how they behave. They remind teachers to consider the possibility that there might be more than one way to learn something, so that they are more likely to produce learning experiences that cater for diverse needs, to try different things, and to build flexibility into their teaching. Great – I have no objection to that at all; it is what we should be aiming for. But it would be a lot more efficient simply to remind people of that obvious fact than to sink vast sums of money and human resources into perpetuating these foolish myths. And there is a darker side to this. If we tell people that they are (to pick some at random) ‘visual’, or ‘sensing’, or ‘intuitive’, or ‘sequential’ learners, then they will inevitably be discouraged from taking different approaches. If we teach them in a way that we think fits a mythical need, we do not teach them in other ways. This is harmful: it is designed to put learners in a filter bubble. The worst of it is that learners then start to believe it themselves, and ignore or undervalue other ways of learning.

Being-taught habits

The occasion for this rant came up in a meeting yesterday, where it was revealed that a surprising number of our students describe their learning style (by which they actually mean their learning preference) as listening to a video lecture. I’m not sure where to begin with that, and would have been flabbergasted had I not heard similar things before. Even learning-style believers would have trouble with that one. The main thing worth noting, however, is that this is a description not of a learning preference but of a ‘being-taught habit’. Not as catchy, but that’s what it is.

I have spent much of my teaching career not so much teaching as unteaching: trying to break the appalling habits that our institutional education systems beat into us until we come to believe that the way we are being taught is actually a good way to learn. This is seldom the case – on the whole, educational systems have to achieve a compromise between cost-efficiency and effective teaching –  but, luckily, people are often smart enough to learn despite poor teaching systems. Indeed, sometimes, people learn because of poor teaching systems, inasmuch as (if they are interested and have not had the passion sucked out of them) they have to find alternative ways to learn, and so become more motivated and more experienced in the process of learning itself. Indeed, problem-based and enquiry-based techniques (which are in principle a good idea) sometimes intentionally make use of that kind of dynamic, albeit usually with a design that supports it and offers help and guidance where needed.

If nothing else, one of the primary functions of an educational system should be to enable people to become self-directed, capable lifelong learners. Learning the stuff itself and gaining competence in a subject area or skill is part of that – we need foundations on which to build. But it is at least as much about learning ways of learning. There are many, many ways to learn, and different ways work better for different people learning different things. We need to be able to choose from a good toolkit and use approaches that work for the job in hand, not ones that match the demands of some pseudo-scientific claptrap.

Rant over.


Endnote (die die die)

I’m generally liking both the price and the performance of Mavericks on my newish Mac, despite some compatibility issues here and there, including with some of my most-used software such as MailTags, and despite the fact that it won’t run on my old but still serviceable and well-loved first-generation Intel Mac.

But one incompatibility is really upsetting me, especially as I have deadlines to meet: EndNote X5. Thomson Reuters have no intention of fixing it, and suggest upgrading to X7, which will get an update ‘in the next few weeks’. I have been irritated by EndNote too many times over the past few years, with perfectly serviceable versions failing each time a new version of Word (another hateful piece of software) comes out and requiring costly updates, despite adding absolutely no new functionality of any value for over 10 years. Not to mention Thomson’s evil and cynical attack on the open-source Zotero. But this is ridiculous. X5 came out in late 2011, I bought it in 2012, and there have been two pointless and expensive updates since then, neither of which is anything more than a minor point-release. I reluctantly paid for a copy of X5 because, despite not wanting to use it and having perfectly decent free and open alternatives like Zotero and the pre-acquisition version of Mendeley, I work with people who do use it, and it makes life easier to share the same reference manager. Now I give up. It has long been the case that EndNote is bloated, buggy and overpriced. Thomson Reuters get away with it because of lock-in and path dependencies: when it was one of only a handful of options it was about as good as it got, so lots of people used it and, for compatibility reasons, it spread like a disease. I don’t care how difficult it makes working with collaborators around the world, or the effort involved in learning the quirks of a new reference manager: I will no longer support Thomson’s greed. Their lack of interest in their locked-in customers as anything other than cash cows is more appalling than their ugly software. On the bright side, it will hopefully reduce my dependency on MS Word (same collaboration issues) too.

I’m defaulting to Zotero but, if anyone has any alternative suggestions (I don’t mind paying if it is worth the money), do pass them on!

Pedagogy – Scrap exams to create schools of the future – news – TES

A report on the findings of this year’s Equinox Summit. Amongst the more interesting:

“…the summit’s conclusion was that, in less than 20 years, ‘knowing facts will have little value’, meaning that schools will have to scrap conventional examinations and grades and replace them with more ‘qualitative assessment’. This would measure a student’s all-round ability, rather than testing their knowledge in a particular subject.”

A lot of other sound and common-sense ideas are reported on here. All good stuff.

Address of the bookmark: http://www.tes.co.uk/article.aspx?storyCode=6365265