This is a slide deck for a talk I’m giving today, at a faculty workshop, on the subject of learning outcomes.
I think that well-considered learning outcomes can be really helpful when planning and designing learning activities, especially where there is a need to assess learning. They can help keep a learning designer focused, and remind them to ensure that assessment activities actually make a positive contribution to learning. They can also be helpful to teachers while teaching, as a framework to keep them on track (if they wish to remain on track). However, that’s about it. Learning outcomes are not useful when applied to bureaucratic ends, they are, as a rule, very poor descriptors of the learning that actually happens, and they are of very little (if any) use to students under most circumstances (there are exceptions – it’s a design issue, not a logical flaw).
The big point of my talk, though, is that we should be measuring what students have actually learned, not whether they have learned what we think we have taught, and that the purpose of everything we do should be to support learning, not to support bureaucracy.
I frame this in terms of the relationships between:
what we teach (what we actually teach, not just what we think we are teaching, including stuff like attitudes, beliefs, methods of teaching, etc),
what a student learns in the process (an individual student, not students as a whole), and
what we assess (formally and summatively, not necessarily as part of the learning process).
There are many things that we teach that any given student will not learn, albeit that (arguably) we wouldn’t be teaching at all if learning were not happening for someone. Most students get a small subset of that. There are also many things that we teach without intentionally teaching, not all of them good or useful.
There are also very many things that students learn that we do not teach, intentionally or otherwise. In fact, it is normal for us to mandate this as part of a learning design: any mildly creative or problem-solving/inquiry-oriented activity will lead to different learning outcomes for every learner. Even in the most horribly regimented teaching contexts, students are the ones that connect everything together, and that’s always going to include a lot more than what their teachers teach.
Similarly, there are lots of things that we assess that we do not teach, even with great constructive alignment. For example, the students’ ability to string a sentence together tends to be not just a prerequisite but something that is actively graded in typical assessments.
My main points are that, though it is good to have a teaching plan (albeit one that should be flexible, responsive to student needs, and should accommodate serendipity):
students should be participants in planning outcomes and
we should assess what students actually learn, not what we think we are teaching.
From a learning perspective, there’s less than no point in summatively judging what learners have not learned. However, that’s exactly what most institutions actually do. Assessment should be about how learners have positively changed, not whether they have met our demands.
This also implies that students should be participants in the planning and use of learning outcomes: they should be able to personalize their learning, and we should recognize their needs and interests. I use andragogy to frame this, because it is relatively uncontroversial, is easily understood, and doesn’t require people to change everything in their world view to become better teachers, but I could have equally used quite a large number of other models. Connectivism, Communities of Practice, and most constructivist theories, for instance, force us to similar conclusions.
I suggest that appreciative inquiry may be useful as an approach to assessment, inasmuch as the research methodology is purpose-built to bring about positive change, and its focus on success rather than failure makes sense in a learning context.
I also suggest the use of outcome mapping (and its close cousin, outcome harvesting) as a means of capturing unplanned as well as planned outcomes. I like these methods because they only look at changes, and then try to find out what led to those changes. Again, it’s about evaluation rather than judgment.
This is my Spotlight Session from the 34th Distance Teaching & Learning Conference, at the University of Wisconsin-Madison, August 8th, 2018. Appropriately enough, I did this online and at a distance thanks to my ineptitude at dealing with the bureaucracy of immigration. Unfortunately my audio died as we moved to the Q&A session so, if anyone who was there (or anyone else) has any questions or observations, do please post them here! Comments are moderated.
The talk was concerned with how online learning is fundamentally different from in-person learning, and what that means for how (or even whether) we teach, in the traditional formal sense of the word.
Teaching is always a gestalt process, an emergent consequence of the actions of many teachers, including most notably the learners themselves, which is always greater than (and notably different from) the sum of its parts. This deeply distributed process is often masked by the inevitable (thanks to physics, in traditional classrooms) dominance of an individual teacher in the process. Online, the mask falls off. Learners invariably have both far greater control and far more connection with the distributed gestalt. This is great, unless institutional teachers fight against it with rewards and punishments, in a pointless and counter-productive effort to sustain the level of control that is almost effortlessly attained by traditional in-person teachers, and that is purely a consequence of solving problems caused by physical classroom needs, not of the needs of learners. I describe some of the ways that we deal with the inherent weaknesses of in-person teaching, especially relating to autonomy and competence support, and observe how such pedagogical methods are a solution to problems caused by the contingent side effects of in-person teaching, not to learning in general.
The talk concludes with some broad characterization of what is different when teachers choose to let go of that control. I observe that what might have been Leonardo da Vinci’s greatest creation was his effective learning process, without which none of the rest of his creations could have happened. I am hopeful that now, thanks to the connected world that we live in, we can all learn like Leonardo, if and only if teachers can learn to let go.
At least in Ontario, it seems that there are about as many women as men taking STEM programs at undergraduate level. This represents a smaller percentage of women taking STEM subjects overall because there are way more women entering university in the first place. A more interesting reading of this, therefore, is not that we have a problem attracting women to science, technology, engineering, and mathematics, but that we have a problem attracting men to the humanities, social sciences, and the liberal arts. As the article puts it:
“it’s not that women aren’t interested in STEM; it’s that men aren’t interested in poetry—or languages or philosophy or art or all the other non-STEM subjects.”
That’s a serious problem.
As someone with qualifications in both (incredibly broad) areas, and interests in many sub-areas of each, I find the arbitrary separation between them to be ludicrous, leading to no end of idiocy at both extremes, and little opportunity for cross-fertilization in the middle. It bothers me greatly that technology subjects like computing or architecture should be bundled with sciences like biology or physics, but not with social sciences or arts, which are way more relevant and appropriate to the activities of most computer professionals. In fact, it bothers me that we feel the need to separate out large fields like this at all. Everyone pays lip service to cross-disciplinary work but, when we try to take that seriously and cross the big boundaries, there is so much polarization between the science and arts communities that they usually don’t even understand one another, let alone work in harmony. We don’t just need more men in the liberal arts – we need more scientists, engineers, and technologists to cross those boundaries, whatever their gender. And, vice versa, we need more liberal artists (that sounds odd, but I have no better term) and social scientists in the sciences and, especially, in technology.
But it’s also a problem of category errors in the other direction. This clumping together of the whole of STEM conceals the fact that in some subjects – computing, say – there actually is a massive gender imbalance (including in Ontario), no matter how you mess with the statistics. This is what happens when you try to use averages to talk about specifics: it conceals far more than it reveals.
I wish I knew how to change that imbalance in my own designated field of computing, an area that I deliberately chose precisely because it cuts across almost every other field and did not limit me to doing one kind of thing. I do arts, science, social science, humanities, and more, thanks to working with machines that cross virtually every boundary.
I suspect that fixing the problem has little to do with marketing our programs better, nor with any such surface efforts that focus on the symptoms rather than the cause. A better solution is to accept and to celebrate the fact that the field of computing is much broader and vastly more interesting than the tiny subset of it that can be described as computer science, and to build up from there. It’s especially annoying that the problem exists at Athabasca where a wise decision was made long ago not to offer a computer science program. We have computing and information systems programs, but not any programs in computer science. Unfortunately, thanks to a combination of lazy media and computing profs (suffering from science envy) that promulgate the nonsense, even good friends of mine that should know better sometimes describe me as a computer scientist (I am emphatically not), and even some of our own staff think of what we do as computer science. To change that perception means not just a change in nomenclature, but a change in how and what we, at least in Athabasca, teach. For example, we might mindfully adopt an approach that contextualizes computing around projects and applications, rather than its theory and mechanics. We might design a program that doesn’t just lump together a bunch of disconnected courses and call it a minor but that, in each course (if courses are even needed), actively crosses boundaries – to see how code relates to poetry, how art can inform and be informed by software, how understanding how people behave can be used in designing better systems, how learning is changed by the tools we create, and so on.
We don’t need disciplines any more, especially not in a technology field. We need connections. We don’t need to change our image. We need to change our reality. I’m finding that to be quite a difficult challenge right now.
I love this art project – a forest that owns itself and that makes money on its own behalf, eventually with no human control or ownership. From the blurb…
“The Project emerged from research in the fields of crypto governance, smart contracts, economics and questions regarding representations of natural systems in the techno-sphere. It creates a framework whereby a forest is able to sell licences to log its own trees through automated processes, smart contracts and blockchain technology. “
But it gets better…
“The terra0 project creates a scenario whereby the forest, augmented through automated processes, utilitizes itself and thereby accumulates capital. A shift from valorisation through third parties to a self-utilization makes it possible for the forest to procure its real counter-value and eventually buy itself. The augmented forest is not only owner of itself, but is thus in the position to buy more ground and therefore to expand.”
A really nice project from the Editions at Play team at Google, in which blockchain is used both to limit the supply of a digital book (only 100 copies made) and, as the book is passed on, to make it ‘age,’ in the sense that each reader must remove two words from each page and add one of their own before passing it on (which they are obliged to do). Eventually, it decays to the point of being useless, though I think the transitional phases might be very interesting in their own right.
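The remove-two-add-one rule amounts to a very simple decay algorithm: each transfer is a net loss of one word per page, so the book erodes at a predictable rate. A minimal sketch of the idea in Python (the function name and word-list representation are my own illustration, not the book’s actual implementation, which presumably lets readers choose the words rather than picking them at random):

```python
import random

def age_page(words, new_word):
    """One ownership transfer of a single page: the new reader
    removes two words and adds one of their own, as the book
    obliges them to do before passing it on."""
    words = list(words)  # work on a copy; the previous state is gone anyway
    for _ in range(2):
        if words:  # stop removing once the page is empty
            words.pop(random.randrange(len(words)))
    # insert the reader's single contribution at a random position
    words.insert(random.randrange(len(words) + 1), new_word)
    return words

page = "the quick brown fox jumps over the lazy dog".split()
aged = age_page(page, "slowly")
print(len(page), "->", len(aged))  # each transfer shrinks the page by one word
```

At a net loss of one word per transfer, a 300-word page would be down to a single (much-mutated) word after roughly 300 owners, which fits the observation that the transitional phases are the interesting part.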
I was thinking something very vaguely along these lines would be an interesting idea and had started making notes about how it would work, but it seemed so blindingly obvious that somebody must have already done it. Blockchain technologies for publishing are certainly being considered by many people, and some are being implemented. The Alliance of Independent Authors seems to have the most practical plans for using Blockchain for that purpose. Another similar idea comes with the means to partially compensate publishers for such things (as though they needed even more undeserved profits). Another interesting idea is to use Blockchain Counterparty tokens to replace ISBNs. However, A Universe Explodes is the only example I have so far found of building in intentional decay. It’s one of a range of wonderfully inventive and inspiring books that could only possibly exist in digital media at the brilliant Editions at Play site.
Though use of Blockchain for publishing is a no-brainer, it’s the decay part that I like most, and that I was thinking about before finding this. Removing and adding words is not an accurate representation of the typical decay of a physical book, and it is not super-practical at a large scale, delightful though it is. My first thoughts were, in a pedestrian way, to build in a more authentic kind of decay. It might, for instance, be possible to simply overlay a few more pixels with each reading, or to incrementally grey-out or otherwise visually degrade the text (which might have some cognitive benefits too, as it happens). That relies, however, on a closed application system, or a representation that would be a bit inflexible (e.g. a vector format like SVG to represent the text, or even a bitmap); otherwise it would be too easy to remove such additions simply by using a different application. And, of course, it would be bad for people with a range of disabilities, although I guess you could perform similar mutilations of other representations of the text just as easily. That said, it could be made to work. There’s no way it is even close to being as good as making something free of DRM, of course, but it’s a refinement that might be acceptable to greedy publishers that would at least allow us to lend, give, or sell books that we have purchased to others.
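The greying-out version could be as simple as a monotonic function from read count to text colour, applied by the rendering application. A sketch of what I have in mind, with entirely hypothetical parameter names and values (the point is only that decay is gradual and never quite total):

```python
def decayed_grey(readings, step=8, ceiling=230):
    """Grey level of the text after a given number of readings:
    black (0) when pristine, lightened by `step` grey levels per
    reading, capped just short of a white (255) background so a
    ghost of the text always remains."""
    return min(ceiling, readings * step)

for n in (0, 5, 20, 40):
    print(n, "readings ->", decayed_grey(n))
# 0 readings -> 0 (crisp black text)
# 40 readings -> 230 (barely legible against white)
```

The rendering app would simply use this value as the text colour, which is why it only works in a closed system or an inflexible format: any other reader application could ignore it.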
My next thought was that you could, perhaps more easily and certainly more interestingly, make marginalia (graphics and text) a permanent feature of the text once ownership was transferred, which would be both annoying and enlightening, as it is in physical books. One advantage would be that it reifies the concept of ownership – the intentional marks made on the book are a truer indication of the chain of owners than anything more abstract or computer-generated. It could also be a really interesting and useful way to tread a slightly more open path than most ugly DRM implementations, inasmuch as it could allow the creation of deliberately annotated editions (with practical or artistic intent) without the need for publisher permission. That would be good for textbooks, and might open up big untapped markets: for instance, I’d quite often rather buy an ebook annotated by one of my favourite authors or artists than the original, even if it cost more. It could be interestingly subversive, too. I might even purchase one of Trump’s books if it were annotated (and re-sold) by journalists from the Washington Post or Michael Moore, for example. And it could make a nice gift to someone to provide a personally embellished version of a text. Combined with the more prosaic visual decay approach, this could become a conversation between annotators and, eventually, become a digital palimpsest in which the original text all but disappears under generations of annotation. I expect someone has already thought of that but, if not, maybe this post can be used to stop someone profiting from it with a patent claim.
In passing, while searching, I also came across http://www.eruditiondigital.co.uk/what-we-do/custos-for-ebooks.php which is both cunning and evil: it lets publishers embed Bitcoin bounties in ebooks that ‘pirates’ can claim and, in the process, alert the publisher to the identity of the person responsible. Ugly, but very ingenious. As the creators claim, it turns pirates on other pirates by offering incentives, yet keeping the whole process completely anonymous. Eeugh.
The Verge reports on a variety of studies that show taking notes with laptops during lectures results in decreased learning when compared with notes taken using pen and paper. This tells me three things, none of which is what the article is aiming to tell me:
That the institutions are teaching very badly. Countless decades of far better evidence than that provided in these studies shows that giving lectures with the intent of imparting information like this is close to being the worst way to teach. Don’t blame the students for poor note taking, blame the institutions for poor teaching. Students should not be put in such an awful situation (nor should teachers, for that matter). If students have to take notes in your lectures then you are doing it wrong.
That the students are not skillful laptop notetakers. These studies do not imply that laptops are bad for notetaking, any more than giving students violins that they cannot play implies that violins are bad for making music. It ain’t what you do, it’s the way that you do it. If their classes depend on effective notetaking then teachers should be teaching students how to do it. But, of course, most of them probably never learned to do it well themselves (at least using laptops). It becomes a vicious circle.
That laptop and, especially, software designers have a long way to go before their machines disappear into the background like a pencil and paper. This may be inherent in the medium, inasmuch as a) they are vastly more complex toolsets with much more to learn about, and b) interfaces and apps constantly evolve so, as soon as people have figured out one of them, everything changes under their feet. It becomes a vicious cycle.
The extra cognitive load involved in manipulating a laptop app (and stopping the distractions that manufacturers seem intent on providing even if you have the self-discipline to avoid proactively seeking them yourself) can be a hindrance unless you are proficient to the point that it becomes an unconscious behaviour. Few of us are. Tablets are a better bet, for now, though they too are becoming overburdened with unsought complexity and unwanted distractions. I have for a couple of years now been taking most of my notes at conferences etc with an Apple Pencil and an iPad Pro, because I like the notetaking flexibility, the simplicity, the lack of distraction (albeit that I have to actively manage that), and the tactile sensation of drawing and doodling. All of that likely contributes to making it easier to remember stuff that I want to remember. The main downside is that, though I still gain laptop-like benefits of everything being in one place, of digital permanence, and of it being distributed to all my devices, I have, in the process, lost a bit in terms of searchability and reusability. I may regret it in future, too, because graphic formats tend to be less persistent over decades than text. On the bright side, using a tablet, I am not stuck in one app. If I want to remember a paper or URL (which is most of what I normally want to remember other than my own ideas and connections that are sparked by the speaker) I tend to look it up immediately and save it to Pocket so that I can return to it later, and I do still make use of a simple notepad for things I know I will need later. Horses for courses, and you get a lot more of both with a tablet than you do with a pencil and paper. And, of course, I can still use pen and paper if I want a throwaway single-use record – conference programs can be useful for that.
Signal is arguably the most open, and certainly the most secure, privacy-preserving instant messaging/video or voice-calling system available today. It is open source, ad-free, standards-based, simple, and very well designed. Though not filled with bells and whistles, for most purposes it is a far better alternative to Facebook-owned WhatsApp or other near-competitors like Viber, FaceTime, Skype, etc, especially if you have any concerns about your privacy. Like all such things, Metcalfe’s Law means its value increases with every new user added to the network. It’s still at the low end of the uptake curve, but you can help to change that – get it now and tell your friends!
Like most others of its ilk it hooks into your cellphone number rather than a user name but, once you have installed it on your smartphone, you can associate that number (via a simple 2D barcode) with a desktop client. Until recently it only supported desktop machines via a Chrome browser (or equivalent – I used Vivaldi) but the new desktop clients are standalone, so you don’t have to grind your system to a halt or share data with Google to install it. It is still a bit limited when it comes to audio (simple messaging only) and there still appears to be no video support (which is available on smartphone clients) but this is good progress.
Oh drat. So Doppler Labs is no more. This is very sad.
I love my Here One bluetooth earbuds, have recommended them to many people, and would do so again. For simple noise cancelling they run rings around every other set of headphones and earbuds I have ever tried, including top-of-the-line Bose devices costing a lot more (not that these were cheap). The moment that you turn the external sound down and enter a state of blissful silence is miraculous. But they are so much more than that: having entered that world of silence you can bring up sounds that you want to hear, notably the voices of people around you or (more specifically, thanks to six built-in microphones) in front of you (or, for secret agents, behind you). It is quite eerie to sit on a bus and hear, with fair clarity, the conversations of people around you but to barely hear the rumble and clatter of the bus itself. It’s not always perfect, but it is still pretty remarkable. I’ve even been able to talk with people on a float plane, with massively reduced rumble and noticeably enhanced speech, almost normally. And it is marvellous to be cycling while listening to music while being able to hear approaching traffic and other significant things around me well enough to be safe. Or to wander through a park in the heart of a noisy city and hear nothing but birdsong. I particularly love being able to sit in a crowded bar or restaurant and to hear the conversation of people on the other side of the table but not those of the rest of the room (though it still has difficulty dealing with over-loud music). As a former professional musician with consequent hearing loss, this is transformative: I don’t need a hearing aid (yet) most of the time but, for those odd occasions when my hearing fails me, Here One provides a great solution. To cap it off, the sound quality for music etc is top notch – vastly superior to any other earbuds I have ever owned (mind you, they cost more than twice as much as any I have hitherto owned, so I would hope so).
I suspect that at least some of the reason for this is that they store a hearing profile for me that knows which frequencies cause me difficulty and therefore shapes the sound to suit me better. They are basically computers for the ears.
There are weaknesses, some of which have till now been improving through software upgrades since I got the things. It’s a big pain having to control the buds from a cellphone for even pretty simple stuff like volume control. Though there are a few things that can be done by tapping them/double-tapping them (like switching off the noise cancelling or answering a phone) the process is unreliable and there’s a limited range of things you can do that way. The battery life, though improved since the first release and now quicker to recharge, is not that great, notwithstanding the fact that you can charge them two or three times from the case itself. I would prefer to be able to plug in a cable and/or battery booster to use on long flights without interruption. Despite multiple options for earpieces, they don’t always feel firmly set in my ears and, because the seal is pretty solid when they are inserted right, it can get uncomfortable on take-off and landing in planes, especially if you have a cold. And they don’t have a flight mode so, technically, I shouldn’t be doing that anyway. It is really annoying when bluetooth fails as, inevitably, it sometimes does (even though it may not be the fault of the earphones). It is hard to pair them with multiple devices, and the set-up for non-supported devices (anything that is not an iPhone or Android phone, basically) is gruelling and unreliable. It would be nice if they were waterproof. They stick out a bit, albeit not as much as most bluetooth buds. Sometimes they fail to turn off and cause feedback when returned to the case. But these are things I can live with, in return for wearing a completely new category of smart device that enhances the quality of my life.
I was really looking forward to some of the promised new features, especially real-time language translations, but I guess that will have to wait until it is a standard cellphone/smartwatch feature because it is no longer going to come from Doppler Labs. I am much more worried about the loss of support, and the fact that what I have now is what I will have for as long as the buds themselves last: it was one of the appealing things about them that they got better with each software/firmware update. If security flaws are discovered, they won’t get fixed. More worryingly, next time I change my phone (a common event) I may not be able to install the software that is essential to making them work at all. Even if I can, my experience with older iOS devices is that upgrades to phone operating systems often render older software unusable, so they could become a very expensive bit of junk very quickly. It would be nice to think that Doppler Labs might open source their software so that this is not a problem but, from the article, it sounds like they will be selling off the patents to the highest bidder and the chances of opening things up are therefore pretty slim. I fear there are not enough of the things out there in the wild to spark a community-based alternative. On the bright side, no doubt the brilliant innovations will be snapped up by a bigger, more sustainable firm and will find their way into more mainstream devices (Apple would be foolish to miss this one), but I will miss this company and I will miss this product.
This is the second high profile and apparently highly successful Kickstarter device that I have owned to suffer this fate, and I fear the outcomes will be similar. My Pebble watch continues to do basic service but I don’t know for how much longer. There has been nothing new arriving for it since the company folded earlier this year, and the apps it used to run are diminishing every week, as services that they rely upon fold. In olden days, we used to be able to continue to use our devices no matter what happened to their manufacturers. Nowadays, not so much.
I doubt that I will learn my lessons well from this as I am a great optimist when faced with a revolutionary new technology, but it’s something we all have to remember: software embedded in our hardware is an ongoing commitment, and we are surrounded by the stuff at work and at home, from TVs to cars to watches to lightbulbs to routers to phones, and so on. Increasingly, we’re no longer buying a product, we are buying into a service, so the quality and potential longevity of the company is even more important than the quality of the machinery. The only truly effective way to keep it safe, reliable, and sustainable would be for it to be open source and/or to use open standards, and for it not to rely on a single cloud-based service to operate. Sadly, far too little of the Internet of Things comes close to that. And far too much of it is hidden behind DRM, closed APIs, and other sinful mechanisms.
Though Microsoft has been unusually prone to the kind of chicanery described in this article for most of its existence, the problem of price hiking combined with shifting, decaying, or dying cloud services is inherent in the cloud model they are using.
Cloud services can make good sense when they are directly replaceable with competitive alternatives: there are compelling reasons to, say, run your virtual servers in the cloud (whether in virtual machines or containers), or to handle network services like DDoS protection, DNS management, or spam filtering, or even (under some circumstances) to run relatively high level application layer services like databases, SMTP mail, or web servers. As long as you can treat a service exactly like a utility – including, crucially, the ability to simply, cheaply, and fairly painlessly switch service providers (including back in-house) whenever you want or need to do so – then it can provide resilience, scalability, predictable costs, and agility. Sometimes, it can even save money. There are still lots of potential pitfalls: complex management concerns like privacy, security, performance, faults, configuration, and accounting need to be treated with utmost caution, service contract negotiation is a complex and trap-strewn art, training and integration can be fiendishly difficult to manage when you no longer control the service and it changes under your feet, and there are potential unpredictable problems ahead when companies go bust, change hands, or become subject to dangerous legislative changes. But, on the whole, a true utility service can often be a sensible use of limited funds.
The soon-to-be-defunct Outlook.com Premium looks deceptively like a utility service on the surface, ostensibly offering what look a lot like simple, straightforward SMTP/IMAP/POP email services, with a cutesy (i.e. from Hell) web front end, and the (optional) capacity to choose a domain that could be migrated elsewhere. To a savvy user, it could be treated as little more than a utility service. However, there’s a lot of integrated frippery, from tricks to embed large images, to proprietary metadata, to out-of-office settings, to integrations with other Microsoft tools, that makes it less portable the more you use it, especially for the less technically adept target audience it is aimed at, and particularly if you are using Microsoft Outlook or the Web interface to manage it. Along with some subtle bending of protocols that makes even the simplest of migrations fraught with difficulty and subject to lost metadata at best, by far the most likely exit strategy for most users will be to shift to the (more expensive) O365 which, though not identical, has features that are close enough and easily-migrated enough to suit the average Joe. And that’s what Microsoft wants.
O365 is not a utility service at all, despite using the lure of almost generic email and calendaring (potentially replaceable services) to hook you in. It’s a cloud-based application suite filled to the brim with proprietary applications, systems and protocols, almost all of which are purpose-built to lock your data, processes, and skill set into a non-transferable cloud that is owned and controlled by an entity that does not have your interests as its main concern. In fact, exactly the opposite: its main concern is to get as much money from you as possible over as long a period as it can. If it were a utility like, say, electricity to your home, it would be one that required you to only plug in its own devices, using sockets that could not be duplicated, running at voltages and frequencies no one else uses. Its employees would walk into your house and replace your appliances and devices with different ones whenever they wanted (often replacing your stove while you were cooking on it), dropping and adding features as they felt like it. The utility company would be selling information about what devices you use, and when, to which channels you tuned your TV, what you were eating, and so on, to anyone willing to pay. You would have to have a microwave and toaster whether you wanted one or not, and you couldn’t switch any of them off. It would install cameras and microphones in your home that it or its government could use to watch everything you do. Every now and then it would increase its prices to just a bit less than it would cost to rip everything out and replace it with standards-based equipment you could use anywhere. 
Though it would offer a lot of different devices, all with different and unintuitive switches and remote controls (because it had bought most of them from other companies), none of them would work properly and, as they were slowly replaced with technologies made by the company itself, they would get steadily worse over a period of years, and steadily harder to replace with anything else. You would have to accept what you were given, no matter how poorly it fitted your needs, and you would be unable to make any changes to any of them, no matter how great the need or how useless they were to you. Perish the thought that you or your home might have any unique requirements, or that you might want to be a bit creative yourself. Welcome to Microsoft’s business model! And welcome to the world of (non-utility) cloud services.
Bad clouds closer to home
Given the tone of this article, it is perhaps mildly ironic that Engadget, its source, reported on the product less than a year ago with the advice that "the Premium service might strike a good balance between that urge for customization and the safety net you get through tech giants like Microsoft." You'd think a tech-focused site like Engadget would know better. I suspect that many of its reporters have not been alive as long as some of us have been in the business, and so they are still learning how this works.
It’s a short-sighted stupidity that infects way too many purchasing decisions, even by seasoned IT professionals, whether for groupware like O365, LMSs like Moodle, HR hiring systems, leave reporting systems, e-book renting, online exam systems, timesheet applications, CRM systems, or whatever. My own university has fallen prey to the greedy, malfunctioning, locked-in clutches of all but one of the aforementioned cloud services, and more, and the one it thankfully avoided was a mighty close call. All are baseline systems with limited customizations that require people to play the role of machines, or that replace roles that should be done by humans with rigid rules and automation. Usually they do both. It is unsurprising that they are weak, because they are not built for how we work: they are built for average organizations with average needs. If such a mythical beast actually exists, I have never seen it, and we are a very long way from average in almost every way. Quite apart from the inherent flaws in the business model of outsourced cloud-hosted applications, they cannot hope to match the functionality of systems that we host and control ourselves, or that rely on utility cloud services. They inevitably leave some things soft that should be hard (for example, I spend too much time dealing with mistaken leave requests because the system we rent allows people to include weekends and public holidays in their requests, without any signal that this is a bad idea) and some things hard that should be soft (for example, I cannot modify a leave request once it has been made). A utility cloud service or self-hosted system could be modified and assembled with other utility services or self-hosted systems at will, allowing it to be exactly as soft or hard as needed. Things that are hard to do in-house can be outsourced, but many things do not need to be.
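The weekend-and-holiday problem above is a good illustration of how little it would take to make the soft things hard in a system we controlled. A sketch of the kind of trivial check a self-hosted or utility-based system would allow; the holiday set here is hypothetical, and a real system would load the correct list for its jurisdiction:

```python
from datetime import date, timedelta

# Hypothetical public holidays for illustration only; a real deployment
# would load the proper list for its jurisdiction and year.
PUBLIC_HOLIDAYS = {date(2018, 7, 2), date(2018, 9, 3)}

def working_leave_days(start, end, holidays=PUBLIC_HOLIDAYS):
    """Return only the working days between start and end inclusive,
    silently excluding weekends and listed public holidays from a
    leave request (the check our rented system fails to make)."""
    if end < start:
        raise ValueError("leave request ends before it starts")
    day, days = start, []
    while day <= end:
        if day.weekday() < 5 and day not in holidays:  # Mon=0 .. Fri=4
            days.append(day)
        day += timedelta(days=1)
    return days
```

A rented, one-size-fits-all system cannot be changed to add even a check this trivial; with a system we hosted or assembled from utility services ourselves, it would be an afternoon's work.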
Managing your own IT systems does cost a lot of money, but nothing like as much as the overall cost to an organization of cloud-based alternatives. Between them, our bad cloud systems cost the equivalent of the time of (at least) scores of FTEs, including that of highly paid professors and directors, compared with the custom-built, self-hosted systems they replaced. You could get a lot of IT staff and equipment for that kind of money. Worse, all are deeply demoralizing, all are inefficient, and all stymie creativity, greatly reducing both the knowledge within the organization itself and the value of that knowledge.
It’s a huge amount harder getting out of bad cloud services than it is getting into them (that’s the business model that makes them so bad) but, if we are to survive, we have to escape from such foolishness. The longer we leave it, the harder it gets.
A nice overview of where the NGDLE concept stood earlier this year. We really need to be thinking about this at AU, because the LMS alone will not take us where we need to be. One of the nice things about this article is that it talks quite clearly about the current and future roles of existing LMSs, placing them quite neatly within the general ecosystem implied by the NGDLE.
The article calls me out on my prediction that the acronym would not catch on though, in my defence, I think it would have been way more popular with a better acronym! The diagram is particularly useful as a means to understand the general concept at, if not a glance, then at least pretty quickly…