Scraping Google Scholar to write your PhD literature chapter

Gephi diagram of data created by Bibnet

This looks really excellent – it scrapes Google Scholar, starting with a search for work you already know about and think is significant. From those search results it generates an exportable Gephi map of authors, subject/disciplinary areas and the links between them. Basically, it automatically (well – with a little effort and a bit of Google Scholar/Gephi competence) maps out connected research areas and authors, mined from Google Scholar, including their relative significance and centrality, shaped to fit your research interests. Doing this manually, as most researchers do, takes a really long time, and it is incredibly easy to miss significant authors and connections. This looks like a fantastic way to help build a literature review, and great scaffolding for exploring a research area. I see endless possibilities and uses. Of course, it is only as good as the original query, and only as good as Google Scholar’s citation trail, but that’s an extremely good start, and it could be iterated many times to refine the results further. The code for the tool, Bibnet, is available through GitHub.
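I have not dug into Bibnet’s code, but the general recipe is easy to sketch. The fragment below is my own illustration (not Bibnet’s implementation, and the records and field names are invented): it builds a small co-authorship network from bibliographic search results and writes a GEXF file that Gephi can open.

```python
# A minimal sketch of the general idea (not Bibnet's actual code):
# take bibliographic records from a Google Scholar search, build a
# co-authorship graph tagged with subject areas, and export it as
# GEXF for Gephi. The sample records below are invented.

import itertools
import networkx as nx

# Pretend these came from a Scholar search for work you already know about.
records = [
    {"title": "Paper A", "authors": ["Smith", "Jones"], "topic": "social navigation"},
    {"title": "Paper B", "authors": ["Jones", "Lee"], "topic": "collective behaviour"},
    {"title": "Paper C", "authors": ["Smith", "Lee", "Patel"], "topic": "social navigation"},
]

G = nx.Graph()

for rec in records:
    # Author nodes, tagged with a subject/disciplinary area.
    for author in rec["authors"]:
        G.add_node(author, kind="author", topic=rec["topic"])
    # Link every pair of co-authors; the edge weight counts shared papers.
    for a, b in itertools.combinations(rec["authors"], 2):
        weight = G.get_edge_data(a, b, default={}).get("weight", 0) + 1
        G.add_edge(a, b, weight=weight)

# Gephi opens GEXF directly; centrality, clustering and layout (the
# 'relative significance' part) can then be explored inside Gephi.
nx.write_gexf(G, "scholar_network.gexf")
```

In practice the records would come from the Scholar scrape itself, and the same approach can just as easily link authors to subject areas, or papers to the papers they cite.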

Address of the bookmark: https://mystudentvoices.com/scraping-google-scholar-to-write-your-phd-literature-chapter-2ea35f8f4fa1#.y43s1qg4l

Multiclick

This is great fun and quite fascinating – do try it out. You get to click on a rectangle, then see where other people have clicked – many thousands of them.

This system is incredibly similar to part of an experiment on collective social navigation behaviour that I performed over ten years ago, albeit mine was at a much smaller scale and graphically a little coarser, and I deliberately asked people to click where they thought most other people would click. What’s interesting is that, though I only had a couple of hundred participants overall, and only just over a hundred got this view, the heat map of this new system is almost exactly the same shape as mine, though the nuances are more defined here thanks to the large numbers involved.

In my experiment (the paper was called ‘On the stupidity of mobs’) this was the control case: the other subjects got to see where others in their group had previously clicked. They did not see the clicks of the control group and did not know how later subjects might behave, so finding the most popular point was not as trivial as it sounds. I was expecting stupid behaviour in those that could see where others had clicked but it was not quite so simple. It appeared that people reacted in three distinctly different ways to seeing the clicks of others. About a third followed the herd (as anticipated) and about a third deliberately avoided the herd (not expected). About a third continued to make reasoned decisions, apparently uninfluenced by others, much as those without such cues. Again, I had not expected this. I should have expected it, of course. Similar issues were well known in the context of weighted lists such as Google Search results or reviews on Amazon, where some users deliberately seek less highly rated items or ignore list order in an attempt to counter perceived bias, and I had seen –  but not well understood – similar effects in earlier case studies with other more practically oriented social navigation systems. People are pretty diverse! I wonder whether the researchers here are aiming for something similar? It does offer the opportunity to try again later (not immediately) so they could in theory analyze the results of the influence of others in a similar way. I’d love to see those results.
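As an aside, the mechanics behind these click heat maps are conceptually very simple: a heat map is essentially a two-dimensional histogram of click coordinates. A minimal sketch (my own illustration, nothing to do with Multiclick’s actual code, using invented click data):

```python
# Illustrative only: bin click coordinates on a rectangle into a grid,
# which is essentially what a click heat map visualizes.
import numpy as np

WIDTH, HEIGHT = 400, 300   # size of the clickable rectangle, in pixels
BINS_X, BINS_Y = 40, 30    # heat map resolution

# Pretend these are the (x, y) positions of thousands of recorded clicks
# (drawn here from a normal distribution purely for illustration).
rng = np.random.default_rng(0)
clicks_x = rng.normal(WIDTH / 2, WIDTH / 6, size=10_000).clip(0, WIDTH - 1)
clicks_y = rng.normal(HEIGHT / 2, HEIGHT / 6, size=10_000).clip(0, HEIGHT - 1)

# 2D histogram: each cell counts the clicks that landed in it.
heatmap, _, _ = np.histogram2d(
    clicks_x, clicks_y,
    bins=[BINS_X, BINS_Y],
    range=[[0, WIDTH], [0, HEIGHT]],
)

# The cell with the most clicks is the 'most popular point'.
peak = np.unravel_index(heatmap.argmax(), heatmap.shape)
print("busiest cell:", peak, "with", int(heatmap.max()), "clicks")
```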

Address of the bookmark: http://boltkey.cz/multiclick/

The Bonus Effect – Alfie Kohn

Alfie Kohn in brilliant form once again, reaffirming his place as the most eloquent writer on motivation this century, this time taking on the ‘bonus effect’ – the idea that giving rewards makes those rewards themselves more desirable while simultaneously devaluing the activity leading to them. It seems that, though early research was equivocal, more recent studies show that this is real:

“When people are promised a monetary reward for doing a task well, the primary outcome is that they get more excited about money. This happens even when they don’t meet the standard for getting paid. And when a reward other than money is used — raffle tickets for a gift box, in this case — the effect is the same: more enthusiasm about what was used as an incentive.”

Also:

“The more closely a reward is conditioned on how well one has done something, the more that people come to desire the reward and, as earlier research has shown, the more they tend to lose interest in whatever they had to do to get the reward.”

As Kohn summarizes:

‘If the question is “Do rewards motivate people?” the answer is “Sure — they motivate people to get rewards.”’

We have long known that performance-related pay is a terrible idea, and that performance-related bonuses achieve the precise opposite of their intended effects. This is a great explanation of more of the reasons behind that empirical finding.

As it happens, Athabasca University operates just such a system, flying in the face of five decades of research that shows unequivocally that it is positively self-defeating. It’s bad enough when used to drive workers on a production line. For creative and problem-solving work, it is beyond ridiculous. Of course, as Kohn notes, exactly the same dynamic underlies most of our teaching too:

“If we try to justify certain instructional approaches by saying they’ll raise test scores, we’re devaluing those approaches while simultaneously elevating the importance of test scores. The same is true of education research that uses test results as the dependent variable.”

The revolution cannot come soon enough.

Address of the bookmark: http://www.alfiekohn.org/blogs/bonus/

ACM TSC

A new ACM journal, Transactions on Social Computing. The scope seems good, with strong interest expressed not just in the computational side but also in the human and social side of the field. It will be interesting to see how this develops. Too much of the field is dominated either by variations on a theme of hard computing, especially social network analysis and filtering algorithms, or by soft, expansive, but technologically under-informed studies. Both extremes have their place, but the important stuff is found at their intersection, and neither is of much interest on its own. This looks like it might hit the sweet spot.

Address of the bookmark: http://tsc.acm.org/about.cfm

Journal paper: p-Learning’s unwelcome legacy

My latest paper is now available in the open-access journal TD Tecnologie Didattiche.

The paper summarizes and expands on much of what I have been talking and writing about over the past year or two, looking into the ways that pedagogies heavily determined by the boundaries of p-learning (collocated learning and teaching) have been imported wholesale into the e-learning environment, despite the fact that most of them were only needed in the first place because of those boundaries. I pay particular attention to the systemically harmful effect this has on intrinsic motivation: a vast proportion of what we do is designed to attempt to overcome this central, inherent weakness in institutional p-learning.

Bizarrely, when we teach online, we tend to intentionally add those same boundaries back in (e.g. through logins, closed walled gardens, differentiated roles, scheduled courses, grades, etc.) and actually make things worse. We replicate the stupid stuff but only rarely incorporate the saving graces of p-learning such as the value of a diverse learning community, easy ways to show caring, responsiveness, tacit knowledge transfer, modelling of ways of being, etc. Meanwhile we largely ignore what makes e-learning valuable, such as diversity, freedom, connectivity, and near-instant access to the knowledge of billions. We dumbly try to make better courses (a notable by-product of physical constraints, seldom a great way to learn) when what we actually need is better learning, and force people to stay on those courses using threats/rewards in the form of grades and accreditation. I explain how, historically, this came about: from a technology evolution perspective it is quite understandable, as were horseless carriages, but it is still very foolish. The paper begins to explore how we might flip our institutions and adopt practices that make more sense, given the algorithmic, fuzzy, metaphorical, emergent, illusory, temporally indistinct, space-crossing boundaries of e-learning, and it includes descriptions of some of the ways this is already happening, as well as some ways it might.

Abstract:

Formal teaching of adults has evolved in a context defined, initially, by the constraints of physical boundaries. Classroom walls directly entail timetables, norms and rules of behaviour, social segregation into organized groups and, notably, the course as a fundamental unit of instruction. Our adult education systems are well adapted to provide efficient and cost-effective teaching within those boundaries. Digitally embodied boundaries are far more fluid, open, permeable, scalable, metaphorical and fuzzy. This has helped to drive the increasing dominance of e-learning in intentional informal learning, and yet methods that emerge from physical boundaries dominate institutional e-learning, though they are a poor fit with the media. This paper is an exploration of the implications of the removal of physical boundaries to online pedagogies, many of which challenge our most cherished educational foundations and assumptions.

Address of the bookmark: http://www.tdjournal.itd.cnr.it/article/view/891

Research reveals the dark side of wearable fitness trackers

From CNN, a report suggesting that fitness trackers are not always wonderful things. 

The only thing that surprises me about this is that the reported demotivating effects are not much stronger. I suspect this is an artefact of the way the research was conducted. In this kind of study, which relies on self-reported feelings, especially where subjects are invested in wanting the tool to be a good thing, unwanted effects are likely to be under-reported.

“When we asked the women how they felt without their Fitbit, many reported feeling “naked” (45%) and that the activities they completed were wasted (43%). Some even felt less motivated to exercise (22%).”

The fact that 43% of subjects thought activities without the aid of a Fitbit were wasted implies that the few who reported feeling less motivated were just the extremes. At least another 21% (the 43% who felt their activity was wasted, minus the 22% who admitted to feeling less motivated) were clearly demotivated too, even by these skewed results, and I am almost certain that a deeper delve into the feelings of the remainder would have revealed far more who felt this way. It may be that they either had other forms of extrinsic motivation to compensate or that, in a small percentage of cases, their intrinsic motivation was so high that they could overcome the effects. A clearer view appears when the researchers asked about how the tool worked:

“Perhaps more alarming, many felt under pressure to reach their daily targets (79%) and that their daily routines were controlled by Fitbit (59%). Add to this that almost 30% felt that Fitbit was an enemy and made them feel guilty, and suddenly this technology doesn’t seem so perfect.”

This result is more in line with what most other research would predict, and it strongly suggests that the demotivating effects are much stronger than those that were self-reported. As with all extrinsic motivation, it kind-of works as long as the extrinsic motivation continues to be applied. It’s addictive. In fact, like most addictions, there are usually diminishing returns, so the rewards/punishments have to be increased over time to achieve similar effects. It would be interesting to return to these subjects at a later date to see how their feelings have changed.

I suppose that it is not too bad a thing that there are people doing more exercise than they otherwise might because a device is rewarding, punishing, and goading them to do so. However, it creates a dependency that is great for Fitbit, but bad for the soul, and bad for long-term fitness because, without these devices, people will feel even less motivated than before. Moreover, it rewards only certain kinds of exercise, mainly walking and jogging.

I have felt these effects myself, having been the recipient of a gift of a similar Polar Flow device and having worn it for over a year. It goaded me with commands to get up and jog, and set targets for me that I could not control and did not want. As a result, I found that I cycled less, sailed less, exercised less and did fewer of the things that the device did not record. Perhaps I walked a bit more (often in preference to cycling) but, overall, my fitness suffered. This happened despite knowing full well in advance what it was going to do to my motivation.  I thought I could overcome that, but it’s a powerful drug. It has taken months to recover from this. I do now wear a Pebble watch that does record similar information and that has similar blind spots, but it does not (yet) try to be proactive in goading me to walk or jog. I feel more in control of it, seeing it now as a bit of partial information rather than a dumb coach nagging me to behave as its programmers want me to behave. I choose when and whether to view the information, and I choose what actions to take on it. This reveals a general truth about technologies of this nature: they should informate, not automate.

Address of the bookmark: http://www.cnn.com/2016/09/01/health/dark-side-of-fitness-trackers/

Climate Change Is Making This Portable Air Conditioner a Must-Have Summer Accessory

And so the world ends. Sadly, I don’t think the title was intended ironically.

Zero Breeze portable air conditioner

This kind of destructive local thinking creeps in all over the place. For example, Athabasca University is in financial trouble, so individual departments are being charged with reducing their own costs. Our IT Services department’s approach is to remove customizations and custom-built applications that everyone uses, buying in baseline systems to replace them, thus (in theory, not reality) eliminating a large chunk of its support burden. Unfortunately, exactly the same tasks that used to be performed by fast, reliable, error-free machines are now performed instead by slow, unreliable, mistake-prone human beings – all of us – at vastly increased cost (millions of dollars) and vastly decreased efficiency. It’s killing us, increasing workload while decreasing agency, productivity, creativity, and organizational intelligence. Though only destroying a university rather than the whole world, it’s just as dumb as building air conditioners to combat the effects of global warming.

Address of the bookmark: http://gizmodo.com/climate-change-is-making-this-portable-air-conditioner-1785687572

Frequent password changes are the enemy of security

tl;dr – forcing people to regularly change their passwords is counter-productive and actually leads to less security (not to mention more errors, more support calls, more rage against the machine). Of course, in the event of a security breach, it is essential to do so. But to enforce regular changes not only doesn’t help, it actually hinders security. The more frequently changes are required, the worse it gets.

This article draws, a bit indirectly, from a large-scale study of forced password changing, available at https://www.cs.unc.edu/~reiter/papers/2010/CCS.pdf. It is far from the only such study: another, at http://people.scs.carleton.ca/~paulv/papers/expiration-authorcopy.pdf, provides a mathematical proof that frequent password changing is not worth the hassles and complications it causes. NIST in the US and CESG in the UK have both advised against it in recent years because it is ineffective and counterproductive.

Athabasca University has recently implemented a new ‘frequent change’ policy that is patchily enforced across different systems. We need to rethink this. It is 1970s thinking based on a technician’s hunch, and the empirical evidence shows clearly that it is wrong.

In a perfect world we would find ways to do away with this outmoded and flaky approach to authentication, but the mainstream alternatives and even some more exotic methods are not that great. Most rely on something you have – typically a cellphone or fob device – as well as something you know, the same general principle as chip-and-pin (still one of the most effective authentication methods). I don’t mind having to do that for things that demand high security, and I use two-factor authentication where I can for accounts that I care about, but it’s a big pain. If we’re going to use passwords, though, they need to be good ones, and we should not be forced to change them unless they might have been compromised.
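For what it’s worth, the ‘something you have’ factor is often nothing more exotic than a time-based one-time password (TOTP) derived from a secret shared with your phone. A minimal sketch of the mechanism, using the third-party pyotp library (an illustration of the general idea, not a description of any particular system):

```python
# Minimal TOTP illustration using the pyotp library (pip install pyotp).
# The 'something you have' is the device holding the shared secret;
# the server verifies the short-lived code the device derives from it.
import pyotp

# Enrolment: the server generates a secret and shares it with the
# user's device once, usually via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the device shows a code that changes every 30 seconds...
code = totp.now()
print("current one-time code:", code)

# ...and the server, holding the same secret, checks it.
print("verified:", totp.verify(code))
```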

Address of the bookmark: http://arstechnica.com/security/2016/08/frequent-password-changes-are-the-enemy-of-security-ftc-technologist-says/

True costs of information technologies

Switchboard (public domain)

Microsoft unilaterally and quietly changed the spam filtering rules for Athabasca University’s O365 email system on Thursday afternoon last week. On Friday morning, among the usual 450 or so spams in my spam folder (up from around 70 per day in the old Zimbra system) were over 50 legitimate emails, including one to warn me that this was happening, claiming that our IT Services department could do nothing about it because it’s a vendor problem. Amongst junked emails were all those sent to the allstaff alias (including announcements about our new president), student work submissions, and many personal messages from students, colleagues, and research collaborators.

The misclassified emails continue to arrive, 5 days on.  I have now switched off Microsoft’s spam filter and switched to my own, and I have risked opening emails I would never normally glance at, but I have probably missed a few legitimate emails. This is perhaps the worst so far in a long line of ‘quirks’ in our new O365 system, including persistently recurring issues of messages being bounced for a large number of accounts, and it is not the first caused by filtering systems: many were affected by what seems to be a similar failure in the Clutter filter in May.

I assume that, on average, most other staff at AU have, like me, lost about half an hour per day so far to this one problem. We have around 1350 employees, so that’s around 675 hours – nearly 100 seven-hour working days – being lost every day it continues. This is not counting the inevitable security breaches, support calls, proactive attempts at problem solving, and so on, nor the time for recovery should it ever be fixed, nor the lost trust, lost motivation, the anger, the conversations about it, the people who will give up on it and redirect emails to other places (in breach of regulations and at great risk to privacy and security, but when it’s a question of being able to work vs not being able to work, no one could be blamed for that). The hours I have spent writing this might be added to that list, but this happens to relate very closely indeed to my research interests (a great case study and catalyst for refining my thoughts on this), so might be seen as a positive side-effect and, anyway, the vast majority of that time was ‘my own’: faculty very rarely work normal 7-hour days.

Every single lost minute per person every day equates to the time of around 3 FTEs when you have 1350 employees. When O365 is running normally it costs me around five extra minutes per day compared with its predecessor, an ancient Zimbra system. I am a geek who has gone out of his way to eliminate many of the ill effects: others may suffer more. It’s mostly little stuff: an extra 10-20 seconds to load the email list, an extra 2-3 seconds to send each email, a second or two longer to load them, an extra minute or two to check the unreliable and over-spammed spam folder, etc. But we do such things many times a day. That’s not including the time to recover from interruptions to our work, the time to learn to use it, the support requests, the support infrastructure, etc, etc.
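The back-of-envelope arithmetic behind that 3-FTE claim, assuming (my assumption, purely for illustration) a 7.5-hour working day:

```python
# Back-of-envelope: how many full-time equivalents does one lost minute
# per person per day represent? The 7.5-hour day is an assumption.
EMPLOYEES = 1350
HOURS_PER_DAY = 7.5

lost_hours_per_day = EMPLOYEES * (1 / 60)   # one minute each
ftes = lost_hours_per_day / HOURS_PER_DAY
print(f"{lost_hours_per_day:.1f} hours/day, or about {ftes:.1f} FTEs")
# -> 22.5 hours/day, or about 3.0 FTEs
```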

To be fair, whether such time is truly ‘lost’ depends on the task. Those ‘lost’ seconds may be time to reflect or think of other things. The time is truly lost if we have to put effort into it (e.g. checking spam mail) or if it is filled with annoyance at the slow speed of the machine, but may sometimes simply be used in ways we would not otherwise use it.  I suspect that flittering attention while we wait for software to do its thing creates habits of mind that are both good and bad. We are likely more distracted, find it harder to concentrate for long periods, but we probably also develop different ways of connecting things and different ways of pacing our thinking. It certainly changes us, and more research is needed on how it affects us. Either way, time spent sorting legitimate emails from spam is, at least by most measures of productivity, truly time lost, and we have lost a lot of it.

Feeding the vampires

It goes without saying that, had we been in control of our own email system, none of this would have happened. I have repeatedly warned that putting one of the most central systems of our university into the hands of an external supplier, especially one with a decades-long history of poor software, broken or proprietary standards, weak security, inadequate privacy policies, vicious antagonism to competitors, and a predatory attitude to its users, is a really stupid idea. Microsoft’s goal is profit, not user satisfaction: sometimes the two needs coincide, often they do not. Breakages like this are just a small part of the problem. The worst effects are going to be on our capacity to innovate and adapt, though our productivity, engagement and workload will all suffer before the real systemic failures emerge. Microsoft had to try hard to sell it to us, but does not have to try hard to keep us using it, because we are now well and truly locked in on all sides by proprietary, standards-free tools that we cannot control, cannot replace, cannot properly understand, that change under our feet without warning, that will inevitably insinuate themselves into our working lives. And it’s not just email and calendars (which can at least use only slightly broken standards) but completely opaque, standards-free proprietary tools like OneDrive, OneNote and Yammer. Now that we have lost standards-compliance and locked ourselves in, we have made it unbelievably difficult to ever change our minds, no matter how awful things get. And they will get more awful, and the costs will escalate. This makes me angry. I love my university and am furious when I see it being destroyed by avoidable idiocy.

O365 is only one system among many similar tools that have been foisted upon us in the last couple of years, most of which are even more awful, if marginally less critical to our survival. They have replaced old, well-tailored, mostly open tools that used to just work: not brilliantly, seldom prettily, but they did the job fast and efficiently so that we didn’t have to. Our new systems make us do the work for them. This is the polar opposite of why we use IT systems in the first place, and it all equates to truly lost time, lost motivation, lost creativity, lost opportunity.

From leave reporting to reclaiming expenses to handling research contracts to managing emails, let’s be very conservative indeed and say that these new baseline systems just cost us an average of an extra 30 minutes per working day per person on top of what we had before (for me, it is more like an hour; for others, more). If the average salary of an AU employee is $70,000/year, that’s $5,400,000 per year in lost productivity. It’s much worse than that, though, because the work that we are forced to do as a result is soul-destroying, prescriptive labour, fitting into a dominative system like a cog in a machine. I feel deeply demotivated by this, and that infects all the rest of my work. I sense similar growing disempowerment and frustration amongst most of my colleagues.
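One back-of-envelope route to a figure of that order; the working-year assumptions here (roughly 200 working days and 1,750 paid hours per year, hence about $40/hour) are mine, for illustration only:

```python
# Rough calculation of the lost-productivity figure, under assumed
# working-year numbers (200 working days, 1,750 paid hours per year).
EMPLOYEES = 1350
SALARY = 70_000             # average salary, $/year
PAID_HOURS_PER_YEAR = 1750  # assumption
WORKING_DAYS = 200          # assumption
EXTRA_HOURS_PER_DAY = 0.5   # an extra 30 minutes per working day

hourly_rate = SALARY / PAID_HOURS_PER_YEAR                 # about $40/hour
lost_per_person = EXTRA_HOURS_PER_DAY * WORKING_DAYS * hourly_rate
print(f"about ${lost_per_person * EMPLOYEES:,.0f} per year")
# -> about $5,400,000 per year
```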

And it’s not just about the lost time of individuals. Almost always, other people in the system have to play a role that they did not play before (this is about management information systems, not just the digital tools), and there are often many iterations of double-checking and returned forms,  because people tend to be very poor cogs indeed.  For instance, the average time it takes for me to get recompense for expenses is now over 6 months, up from 2-4 weeks before. The time it takes to simply enter a claim alone is up from a few minutes to a few hours, often spread over months, and several other people’s time is also taken up by this process. Likewise, leave reporting is up from 2 minutes to at least 20 minutes, usually more, involving a combination of manual emails, tortuous per-hour entry, the ability to ask for and report leave on public holidays and weekends, and a host of other evils. As a supervisor, it is another world of pain: I have lost many hours to this, compounding the ‘mistakes’ of others with my own (when teaching computing, one of the things I often emphasize is that there is no such thing as user error: while they can make mistakes and do weird stuff we never envisaged, it is our failure to design things right that is the problem). This is not to mention the hours spent learning the new systems, or the effects on productivity, not just in time and motivation, but in preventing us from doing what we are supposed to do at all. I am doing less research, not just because my time is taken with soul-destroying cog-work, but because it is seldom worth the hassle of claiming, or trying to manage projects using badly designed tools that fit better – though not well – in a factory. Worse, it becomes part of the culture, infecting other processes like ethics reviews, student-tutor interactions, and research & development. In an age when most of the world has shaken off the appalling, inhuman, and empirically wrong ideas of Taylorism, we are becoming more and more Taylorist. As McLuhan said, we shape our tools and our tools shape us.

To add injury to insult, these awful things actually cost money to buy and to run – often a lot more money than they were planned to cost, delivering far smaller savings than planned, or even losses, even in the IT Services department where they are justified on the grounds that they are supposed to be cutting costs. For instance, O365 cost nearly three times the initial estimates on which decisions were based, and it appears that it has not reduced the workload for those having to support it, nor the network traffic going in and out of the university (in fact it may be much worse), all the while costing us far more per year to access than the reliable and fully-featured, if elderly, open source product it replaced. It also breaks a lot more. It is hard to see what we have gained here, though it is easy to see many losses.

Technological debt

The one justification for this suicidal stupidity is that our technological debt – the time taken to maintain, extend, and manage old systems – is unsustainable. So, the argument goes, if we just buy baseline tools without customization, and especially if we outsource the entire management role to someone else, we save money because we don’t have to do that any more.

This is – with more than due respect – utter bullshit.

Yes, there is a huge investment involved over years whenever we build tools to do our jobs and, yes, if we do not put enough resources into maintaining them then we will crawl to a halt because we end up doing nothing but maintenance. Yes, combinatorial complexity and path dependencies mean that the maintenance burden will always continue to rise over time, at a greater-than-linear rate. The more you create, the more you have to maintain, and the connections between the things we create add to the complexity. That’s the price of having tools that work. That’s how systems work. Get over it. That’s how all technology evolves, including bureaucratic systems. Increasing complexity is inevitable and relentless in all technological systems, notwithstanding the occasional paradigm shift that kind-of starts the ball rolling again. Anyone who has stuck around in an organization long enough to see the long-term effects of their interventions would know this.
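A crude way to see why the burden grows faster than linearly (a standard back-of-envelope observation, not a figure from the post or from anywhere else): with n interacting components there are up to

\[
\binom{n}{2} = \frac{n(n-1)}{2}
\]

potential pairwise interactions to keep working, so doubling the number of components roughly quadruples the number of interactions that can break.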

These new baseline systems are no different, save in one respect: rather than putting the work into making the machines work for us, we instead have to evolve, maintain and manage processes in which we do the work of machines. The complexity therefore impacts on every single human being who has to enact the machine, not just the developers. This is crazy. Exactly the same work has to be done, with exactly the same degree of precision as that of the machines (actually more, because we have to add procedures to deal with the errors that software is less likely to make). It’s just that now it is done by slow, unreliable, fallible, amotivated human beings. For creative or problem-solving work, it would be a good thing to take away from machines the tasks that humans should be doing. For mechanistic, process-driven work, where human error means it breaks, it is either great madness, great stupidity, or great evil. There are no other options. At a time when our very survival is under threat, I cannot adequately express my deep horror that this is happening.

I suspect that the problem is in large part due to short-sighted local thinking, which is a commonplace failure in hierarchical systems, one that gets worse the deeper and more divisive the hierarchies go. We only see our own problems, without understanding or caring about where we sit in the broader system. Our IT directors believe that their job is to save money in ITS (the department dealing with IT), rather than to save money for the university. But not only are they outsourcing our complex IT functions to cloud-based companies (a terrible idea, for the aforementioned reasons), they are outsourcing the work of information technologies to the rest of the university. The hierarchies mean a) that directors seldom get to see or hear of the trouble it causes, b) that they mix mainly with others at or near their own hierarchical level, who do not see it either, and c) that they tend to see problems in caricature, not as detailed pictures of actual practices. As the hierarchies deepen and separate, those within a branch communicate less with others in parallel branches or with those more than a layer above or below. Messages between layers are, by design, distorted and filtered. The more layers, the greater the distortion. People take further actions based on local knowledge, and their actions affect the whole tree. Hierarchies are particularly awful when coupled with creative work of the sort we do at Athabasca, or with fields where change is frequent and necessary. They used to work OK for factories that did not vary their output much and where everything was measurable, though in modern factories that is rarely true any more. For a university, especially one that is online and thus lacks many of the short circuits found in physical institutions, deepening hierarchies are a recipe for disaster. I suppose it goes without saying that Athabasca University has, over the past few years, seen a huge deepening of those hierarchies.

True costs

Our university is in serious financial trouble that it would not be in were it not for these systems. Even if we had kept what we had, without upgrading, we would already be many millions of dollars better off, countless thousands of hours would not have been wasted, we would be far more motivated, we would be far more creative, and we would still have some brilliant people whom we have lost as a direct result of this process. All of this would be of great benefit to our students and we would be moving forwards, not backwards. We have lost vital capacity to innovate, lost vital time to care about what we are supposed to be doing rather than working out how the machine works. The concept of a university as a machine is not a great one, though there are many technological elements and processes that are needed to make it run. I prefer to think of it as an ecosystem or an organism. As an online university, our ecosystem/body is composed of people and machines (tools, processes, methods, structures, rules, etc). The machinery is just there to support and sustain the people, so they can operate as a learning community and perform their roles in education, research and community engagement. The more that we have to be the machines, the less efficiently the machinery will run, and the less human we can all be. It’s brutal, ugly, and self-destructive.

When will we learn that the biggest costs of IT are to its end users, not to IT Services? We customized and created the tools that we have replaced for extremely good reasons: to make our university and its systems run better, faster, more efficiently, more effectively. Our ever-growing number of new off-the-shelf and outsourced systems, which take more of our time and intellectual and emotional effort, have wasted and continue to waste countless millions of dollars, not to mention the huge costs in lost motivation and ill will, and in lost creativity and caring. In the process we have lost control of our tools, lost the expertise to run them, and lost the capability to innovate in the one field in which we, as an online institution, must and should have the most expertise. This is killing us. Technological debt is not voided by replacing custom parts with generic pieces. It is transferred, at a usurious rate of interest, to those who must replace the lost functionality with human labour.

It won’t be easy to reverse this suicidal course, and I would not enjoy being the one tasked with doing so. Those who were involved in implementing these changes might find it hard to believe, because implementing them has taken years and a great deal of pain (and it is far from over yet – the madness continues), but breaking the system was hundreds of times easier than it will be to fix it. The first problem is that the proprietary junk that has been foisted upon us, especially when hosted in the cloud, is a one-way valve for our data, so it will be fiendishly hard to get it back again. Some of it will be in formats that cannot be recovered without some data loss. New ways of working that rely on new tools will have insinuated themselves, and will have to be reversed. There will be plenty of downtime, with all the associated costs. But it’s not just about data. From a systems perspective this is a Humpty Dumpty problem. When you break a complex system, from a body to an ecosystem, it is almost impossible to ever restore it to the way it was. There are countless system dependencies and path dependencies, which mean that you cannot simply start replacing pieces and assume that it will all work. The order matters. Lost knowledge cannot be regained – we will need new knowledge. If we do manage to survive this vandalism to our environment, we will have to build afresh, to create a new system, not restore the old. This is going to cost a lot. Which is, of course, exactly what Microsoft and all the other proprietary vendors of our broken tools count on. They carefully balance the cost of leaving them against what they charge. That’s how it works. But we must break free of them because this is deeply, profoundly, and inevitably unsustainable.

‘Our Technology Is Our Ideology’: George Siemens on the Future of Digital Learning | EdSurge News

Great article reporting on George Siemens’s and Rory McGreal’s (both Athabasca University profs) take on the promise and threats of adaptive technologies, learning analytics, data-driven approaches to education, and personalization. George and Rory have quite different perspectives on these issues but both are absolutely right. George emphasizes that this is about human beings working together to learn, and that our institutional systems are often quite antagonistic to that, imposing counter-productive power relationships and focusing on task completion rather than learning.  As he puts it,

“If we do things right, we could fix many of the things that are really very wrong with the university system, in that it treats people like objects, not human beings. It pushes us through like an assembly-line model rather than encouraging us to be self-motivated, self-regulated, self-monitoring human beings.”

Absolutely. This is a battle we should have won many years ago, but still it persists.

Rory emphasizes that we have to take a whole-system view of this, rather than attempt to personalize things for the learner. As he puts it,

“This focus on the learner is a big mistake. We should look at the whole learning system and how it works—the learner, teacher, technologist, administration, community.”

I totally agree, albeit that the word ‘this’ matters here: there are many ways to focus on the learner that are an extremely good idea, though personalization is not one of them. Personal, not personalized, as Alfie Kohn puts it. I’m not certain Rory would entirely agree with that – focusing on personal needs can be expensive, and Rory normally argues that it is better to teach a lot of people sub-optimally than to help a few to learn optimally. It’s hard to disagree, but it’s a wicked and situated problem, and it depends a great deal on the kind of learning involved and the kinds of people doing it. We should at least be aiming for both: personal, and cost-effective.


Address of the bookmark: https://www.edsurge.com/news/2016-08-11-our-technology-is-our-ideology-george-siemens-on-the-future-of-digital-learning