A Universe Explodes: A Blockchain Book, from Editions At Play

A really nice project from the Editions at Play team at Google, in which blockchain is used both to limit the supply of a digital book (only 100 copies made) and, as the book is passed on, to make it ‘age’: each reader must remove two words from each page and add one of their own before passing it on (something they are obliged to do). Eventually, it decays to the point of being useless, though I think the transitional phases might be very interesting in their own right.
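For concreteness, the aging rule as described (remove two words, add one, on each transfer of ownership) could be sketched like this. This is purely illustrative, not the project’s actual implementation; the function name and random word choice are my own assumptions:

```python
import random

def age_page(words, new_word, rng=random):
    """One ownership transfer: remove two randomly chosen words
    from the page, then append the new owner's word."""
    words = list(words)  # leave the caller's page untouched
    for _ in range(2):
        if words:
            words.pop(rng.randrange(len(words)))
    words.append(new_word)
    return words
```

Net of one word lost per transfer, a 100-word page reaches zero original words well before its word count does, which is presumably where the interesting transitional phases live.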

I was thinking something very vaguely along these lines would be an interesting idea and had started making notes about how it would work, but it seemed so blindingly obvious that somebody must have already done it. Blockchain technologies for publishing are certainly being considered by many people, and some are being implemented. The Alliance of Independent Authors seems to have the most practical plans for using blockchain for that purpose. Another similar idea comes with the means to partially compensate publishers for such things (as though they needed even more undeserved profits). Another interesting idea is to use blockchain-based Counterparty tokens to replace ISBNs. However, A Universe Explodes is the only example I have so far found of building in intentional decay. It’s one of a range of wonderfully inventive and inspiring books that could only possibly exist in digital media at the brilliant Editions at Play site.

Though the use of blockchain for publishing is a no-brainer, it’s the decay part that I like most, and that I was thinking about before finding this. Removing and adding words is not an accurate representation of the typical decay of a physical book, and it is not super-practical at a large scale, delightful though it is. My first thoughts were, in a pedestrian way, to build in a more authentic kind of decay. It might, for instance, be possible to simply overlay a few more pixels with each reading, or to incrementally grey out or otherwise visually degrade the text (which might have some cognitive benefits too, as it happens). That relies, however, on a closed application system, or on a representation that would be a bit inflexible (e.g. a vector format like SVG to represent the text, or even a bitmap); otherwise it would be too easy to remove such additions simply by using a different application. And, of course, it would be bad for people with a range of disabilities, although I guess you could perform similar mutilations of other representations of the text just as easily. That said, it could be made to work. It is nowhere near as good as making something free of DRM, of course, but it’s a refinement that might be acceptable to greedy publishers and that would at least allow us to lend, give, or sell books that we have purchased to others.
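The grey-out idea is easy to picture: each recorded reading blends the ink colour a little further toward the paper colour. A minimal sketch, where the linear fade and the function name are my own assumptions for illustration:

```python
def faded_ink(reads, max_reads=100, ink="#000000", paper="#ffffff"):
    """Blend the ink colour linearly toward the paper colour as the
    number of recorded readings grows; at max_reads the text vanishes."""
    t = min(reads, max_reads) / max_reads
    channels = []
    for i in (1, 3, 5):  # parse the r, g, b hex pairs of each colour
        a = int(ink[i:i + 2], 16)
        b = int(paper[i:i + 2], 16)
        channels.append(round(a + (b - a) * t))
    return "#" + "".join(f"{c:02x}" for c in channels)
```

In an SVG or canvas rendering, the result would simply be written into the text’s fill colour on each transfer, which is also where the weakness lies: any reader that ignores that attribute restores the pristine text.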

My next thought was that you could, perhaps more easily and certainly more interestingly, make marginalia (graphics and text) a permanent feature of the text once ownership was transferred, which would be both annoying and enlightening, as it is in physical books. One advantage would be that it reifies the concept of ownership – the intentional marks made on the book are a truer indication of the chain of owners than anything more abstract or computer-generated. It could also be a really interesting and useful way to tread a slightly more open path than most ugly DRM implementations, inasmuch as it could allow the creation of deliberately annotated editions (with practical or artistic intent) without the need for publisher permission. That would be good for textbooks, and might open up big untapped markets: for instance, I’d quite often rather buy an ebook annotated by one of my favourite authors or artists than the original, even if it cost more. It could be interestingly subversive, too. I might even purchase one of Trump’s books if it were annotated (and re-sold) by journalists from the Washington Post or Michael Moore, for example. And it could make a nice gift to someone to provide a personally embellished version of a text. Combined with the more prosaic visual decay approach, this could become a conversation between annotators and, eventually, become a digital palimpsest in which the original text all but disappears under generations of annotation. I expect someone has already thought of that but, if not, maybe this post can be used to stop someone profiting from it with a patent claim.

In passing, while searching, I also came across http://www.eruditiondigital.co.uk/what-we-do/custos-for-ebooks.php which is both cunning and evil: it lets publishers embed Bitcoin bounties in ebooks that ‘pirates’ can claim and, in the process, alert the publisher to the identity of the person responsible. Ugly, but very ingenious. As the creators claim, it turns pirates against other pirates by offering incentives, while keeping the whole process completely anonymous. Eeugh.

Address of the bookmark: https://medium.com/@teau/a-universe-explodes-a-blockchain-book-ab75be83f28

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2874113/a-universe-explodes-a-blockchain-book-from-editions-at-play

Evidence mounts that laptops are terrible for students at lectures. So what?

The Verge reports on a variety of studies that show taking notes with laptops during lectures results in decreased learning when compared with notes taken using pen and paper. This tells me three things, none of which is what the article is aiming to tell me:

  1. That the institutions are teaching very badly. Countless decades of far better evidence than that provided in these studies shows that giving lectures with the intent of imparting information like this is close to being the worst way to teach. Don’t blame the students for poor note-taking; blame the institutions for poor teaching. Students should not be put in such an awful situation (nor should teachers, for that matter). If students have to take notes in your lectures then you are doing it wrong.
  2. That the students are not skillful laptop note-takers. These studies do not imply that laptops are bad for note-taking, any more than giving students violins that they cannot play implies that violins are bad for making music. It ain’t what you do, it’s the way that you do it. If their classes depend on effective note-taking then teachers should be teaching students how to do it. But, of course, most of them probably never learned to do it well themselves (at least using laptops). It becomes a vicious circle.
  3. That laptop and, especially, software designers have a long way to go before their machines disappear into the background like pencil and paper. This may be inherent in the medium, inasmuch as a) they are vastly more complex toolsets with much more to learn about, and b) interfaces and apps constantly evolve so, as soon as people have figured one out, everything changes under their feet.

The extra cognitive load involved in manipulating a laptop app (and stopping the distractions that manufacturers seem intent on providing even if you have the self-discipline to avoid proactively seeking them yourself) can be a hindrance unless you are proficient to the point that it becomes an unconscious behaviour. Few of us are. Tablets are a better bet, for now, though they too are becoming overburdened with unsought complexity and unwanted distractions. I have for a couple of years now been taking most of my notes at conferences etc with an Apple Pencil and an iPad Pro, because I like the notetaking flexibility, the simplicity, the lack of distraction (albeit that I have to actively manage that), and the tactile sensation of drawing and doodling. All of that likely contributes to making it easier to remember stuff that I want to remember. The main downside is that, though I still gain laptop-like benefits of everything being in one place, of digital permanence, and of it being distributed to all my devices, I have, in the process, lost a bit in terms of searchability and reusability. I may regret it in future, too, because graphic formats tend to be less persistent over decades than text. On the bright side, using a tablet, I am not stuck in one app. If I want to remember a paper or URL (which is most of what I normally want to remember other than my own ideas and connections that are sparked by the speaker) I tend to look it up immediately and save it to Pocket so that I can return to it later, and I do still make use of a simple notepad for things I know I will need later. Horses for courses, and you get a lot more of both with a tablet than you do with a pencil and paper. And, of course, I can still use pen and paper if I want a throwaway single-use record – conference programs can be useful for that.

Address of the bookmark: https://www.theverge.com/2017/11/27/16703904/laptop-learning-lecture

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2871283/evidence-mounts-that-laptops-are-terrible-for-students-at-lectures-so-what

Signal: now with proper desktop apps

Signal is arguably the most open, and certainly the most secure, privacy-preserving instant messaging and voice/video calling system available today. It is open source, ad-free, standards-based, simple, and very well designed. Though not filled with bells and whistles, for most purposes it is a far better alternative to Facebook-owned WhatsApp or other near-competitors like Viber, FaceTime, Skype, etc, especially if you have any concerns about your privacy. As with all such things, Metcalfe’s Law means its value increases with every new user added to the network. It’s still at the low end of the uptake curve, but you can help to change that – get it now and tell your friends!

Like most others of its ilk, it hooks into your cellphone number rather than a user name but, once you have installed it on your smartphone, you can associate that number (via a simple 2D barcode) with a desktop client. Until recently it supported desktop machines only via a Chrome browser (or equivalent – I used Vivaldi), but the new desktop clients are standalone, so you don’t have to grind your system to a halt or share data with Google to install it. It is still a bit limited when it comes to audio (simple messaging only) and there still appears to be no video support (which is available in the smartphone clients), but this is good progress.

Address of the bookmark: https://signal.org/download/

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2813683/signal-now-with-proper-desktop-apps

The Downfall of Doppler Labs: Inside the Last Days of a Hardware Startup | WIRED

Oh drat. So Doppler Labs is no more. This is very sad.

I love my Here One bluetooth earbuds, have recommended them to many people, and would do so again. For simple noise cancelling they run countless rings around every other pair of headphones or earbuds I have ever tried, including top-of-the-line Bose devices costing a lot more (not that these were cheap). The moment that you turn the external sound down and enter a state of blissful silence is miraculous. But they are so much more than that: having entered that world of silence you can bring up sounds that you want to hear, notably the voices of people around you or (more specifically, thanks to six built-in microphones) in front of you (or, for secret agents, behind you). It is quite eerie to sit on a bus and hear, with fair clarity, the conversations of people around you but to barely hear the rumble and clatter of the bus itself. It’s not always perfect, but it is still pretty remarkable. I’ve even been able to talk with people on a float plane, with massively reduced rumble and noticeably enhanced speech, almost normally. And it is marvellous to be cycling while listening to music while still able to hear approaching traffic and other significant things around me well enough to be safe. Or to wander through a park in the heart of a noisy city and hear nothing but birdsong. I particularly love being able to sit in a crowded bar or restaurant and to hear the conversation of people on the other side of the table but not those of the rest of the room (though it still has difficulty dealing with over-loud music). As a former professional musician with consequent hearing loss, this is transformative: I don’t need a hearing aid (yet) most of the time but, for those odd occasions when my hearing fails me, Here One provides a great solution. To cap it off, the sound quality for music etc. is top-notch – vastly superior to any other earbuds I have ever owned (mind you, they cost more than twice as much as any I have hitherto owned, so I would hope so).
I suspect that at least some of the reason for this is that they store a hearing profile for me that knows which frequencies cause me difficulty and therefore shapes the sound to suit me better. They are basically computers for the ears.

There are weaknesses, some of which have been improving through software upgrades since I got the things. It’s a big pain having to control the buds from a cellphone for even pretty simple stuff like volume control. Though there are a few things that can be done by tapping or double-tapping them (like switching off the noise cancelling or answering a call), the process is unreliable and there’s a limited range of things you can do that way. The battery life, though improved since the first release and now quicker to recharge, is not that great, notwithstanding the fact that you can charge them two or three times from the case itself. I would prefer to be able to plug in a cable and/or battery booster to use on long flights without interruption. Despite the multiple earpiece options, they don’t always feel firmly set in my ears and, because the seal is pretty solid when they are inserted right, they can get uncomfortable on take-off and landing in planes, especially if you have a cold. And they don’t have a flight mode so, technically, I shouldn’t be using them then anyway. It is really annoying when bluetooth fails as, inevitably, it sometimes does (even though it may not be the fault of the earphones). It is hard to pair them with multiple devices, and the set-up for non-supported devices (anything that is not an iPhone or Android phone, basically) is gruelling and unreliable. It would be nice if they were waterproof. They stick out a bit, albeit not as much as most bluetooth buds. Sometimes they fail to turn off and cause feedback when returned to the case. But these are things I can live with, in return for wearing a completely new category of smart device that enhances the quality of my life.

I was really looking forward to some of the promised new features, especially real-time language translations, but I guess that will have to wait until it is a standard cellphone/smartwatch feature because it is no longer going to come from Doppler Labs. I am much more worried about the loss of support, and the fact that what I have now is what I will have for as long as the buds themselves last: it was one of the appealing things about them that they got better with each software/firmware update. If security flaws are discovered, they won’t get fixed. More worryingly, next time I change my phone (a common event) I may not be able to install the software that is essential to making them work at all. Even if I can, my experience with older iOS devices is that upgrades to phone operating systems often render older software unusable, so they could become a very expensive bit of junk very quickly. It would be nice to think that Doppler Labs might open source their software so that this is not a problem but, from the article, it sounds like they will be selling off the patents to the highest bidder and the chances of opening things up are therefore pretty slim. I fear there are not enough of the things out there in the wild to spark a community-based alternative. On the bright side, no doubt the brilliant innovations will be snapped up by a bigger, more sustainable firm and will find their way into more mainstream devices (Apple would be foolish to miss this one), but I will miss this company and I will miss this product.

This is the second high profile and apparently highly successful Kickstarter device that I have owned to suffer this fate, and I fear the outcomes will be similar. My Pebble watch continues to do basic service but I don’t know for how much longer. There has been nothing new arriving for it since the company folded earlier this year, and the apps it used to run are diminishing every week, as services that they rely upon fold. In olden days, we used to be able to continue to use our devices no matter what happened to their manufacturers. Nowadays, not so much.

I doubt that I will learn my lessons well from this as I am a great optimist when faced with a revolutionary new technology, but it’s something we all have to remember: software embedded in our hardware is an ongoing commitment, and we are surrounded by the stuff at work and at home, from TVs to cars to watches to lightbulbs to routers to phones, and so on. Increasingly, we’re no longer buying a product, we are buying into a service, so the quality and potential longevity of the company is even more important than the quality of the machinery. The only truly effective way to keep it safe, reliable, and sustainable would be for it to be open source and/or to use open standards, and for it not to rely on a single cloud-based service to operate. Sadly, far too little of the Internet of Things comes close to that. And far too much of it is hidden behind DRM, closed APIs, and other sinful mechanisms.

Address of the bookmark: https://www.wired.com/story/inside-the-downfall-of-doppler-labs/

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2812321/the-downfall-of-doppler-labs-inside-the-last-days-of-a-hardware-startup-wired

Ominous clouds

Clouds over the West Pier, Brighton

Though Microsoft has been unusually prone to the kind of chicanery described in this article for most of its existence, the problem of price hiking combined with shifting, decaying, or dying cloud services is inherent in the cloud model Microsoft is using.

Good clouds

Cloud services can make good sense when they are directly replaceable with competitive alternatives: there are compelling reasons to, say, run your virtual servers in the cloud (whether in virtual machines or containers), or to handle network services like DDoS protection, DNS management, or spam filtering, or even (under some circumstances) to run relatively high level application layer services like databases, SMTP mail, or web servers. As long as you can treat a service exactly like a utility – including, crucially, the ability to simply, cheaply, and fairly painlessly switch service providers (including back in-house) whenever you want or need to do so – then it can provide resilience, scalability, predictable costs, and agility. Sometimes, it can even save money. There are still lots of potential pitfalls: complex management concerns like privacy, security, performance, faults, configuration, and accounting need to be treated with utmost caution, service contract negotiation is a complex and trap-strewn art, training and integration can be fiendishly difficult to manage when you no longer control the service and it changes under your feet, and there are potential unpredictable problems ahead when companies go bust, change hands, or become subject to dangerous legislative changes. But, on the whole, a true utility service can often be a sensible use of limited funds.

The soon-to-be-defunct Outlook.com Premium looks deceptively like a utility service on the surface, ostensibly offering what look a lot like simple, straightforward SMTP/IMAP/POP email services, with a cutesy (i.e. from Hell) web front end and the (optional) capacity to choose a domain that could be migrated elsewhere. To a savvy user, it could be treated as little more than a utility service. However, there’s a lot of integrated frippery, from tricks to embed large images, to proprietary metadata, to out-of-office settings, to integrations with other Microsoft tools, that makes it less portable the more you use it, particularly for the less technically adept audience it is aimed at, and especially if you are using Microsoft Outlook or the web interface to manage it. Thanks to some subtle bending of protocols that makes even the simplest of migrations fraught with difficulty and, at best, subject to lost metadata, by far the most likely exit strategy for most users will be to shift to the (more expensive) O365 which, though not identical, has features that are close enough and easily-migrated enough to suit the average Joe. And that’s what Microsoft wants.

Bad clouds

O365 is not a utility service at all, despite using the lure of almost generic email and calendaring (potentially replaceable services) to hook you in. It’s a cloud-based application suite filled to the brim with proprietary applications, systems and protocols, almost all of which are purpose-built to lock your data, processes, and skill set into a non-transferable cloud that is owned and controlled by an entity that does not have your interests as its main concern. In fact, exactly the opposite: its main concern is to get as much money from you as possible over as long a period as it can. If it were a utility like, say, electricity to your home, it would be one that required you to only plug in its own devices, using sockets that could not be duplicated, running at voltages and frequencies no one else uses. Its employees would walk into your house and replace your appliances and devices with different ones whenever they wanted (often replacing your stove while you were cooking on it), dropping and adding features as they felt like it. The utility company would be selling information about what devices you use, and when, to which channels you tuned your TV, what you were eating, and so on, to anyone willing to pay. You would have to have a microwave and toaster whether you wanted one or not, and you couldn’t switch any of them off. It would install cameras and microphones in your home that it or its government could use to watch everything you do. Every now and then it would increase its prices to just a bit less than it would cost to rip everything out and replace it with standards-based equipment you could use anywhere. 
Though it would offer a lot of different devices, all with different and unintuitive switches and remote controls (because it had bought most of them from other companies), none of them would work properly and, as they were slowly replaced with technologies made by the company itself, they would get steadily worse over a period of years, and steadily harder to replace with anything else. You would have to accept what you were given, no matter how poorly it fitted your needs, and you would be unable to make any changes to any of them, no matter how great the need or how useless they were to you. Perish the thought that you or your home might have any unique requirements, or that you might want to be a bit creative yourself. Welcome to Microsoft’s business model! And welcome to the world of (non-utility) cloud services.

Bad clouds closer to home

Given the tone of this article it is perhaps mildly ironic that Engadget, the source of it, reporting on the product less than a year ago, gave advice that “the Premium service might strike a good balance between that urge for customization and the safety net you get through tech giants like Microsoft.” You’d think a tech-focused site like Engadget would know better. I suspect that many of their reporters have not been alive as long as some of us have been in the business, and so they are still learning how this works.

It’s a short-sighted stupidity that infects way too many purchasing decisions, even by seasoned IT professionals, whether it be for groupware like O365, or LMSs like Moodle, or HR hiring systems, or leave reporting systems, or e-book renting, or online exam systems, or timesheet applications, or CRM systems, or whatever. My own university has fallen prey to the greedy, malfunctioning, locked-in clutches of all but one of the aforementioned cloud services, and more, and the one it thankfully avoided was a mighty close call. All are baseline systems with limited customizations that require people to play the role of machines, or that replace roles that should be done by humans with rigid rules and automation. Usually they do both. It is unsurprising that they are weak, because they are not built for how we work: they are built for average organizations with average needs. If such a mythical beast actually exists I have never seen it, but we are a very long way from average in almost every way. Quite apart from the inherent business model flaws in outsourced cloud-hosted applications, they cannot hope to match the functionality of systems we host and control ourselves, or that rely on utility cloud services. They inevitably leave some things soft that should be hard (for example, I spend too much time dealing with mistakes in leave requests because the system we rent allows people to include – without any signal that it is a bad idea – weekends and public holidays in their requests) and some things hard that should be soft (for example, I cannot modify a leave request once it has been made). A utility cloud service or self-hosted system could be modified and assembled with other utility services or self-hosted systems at will, allowing it to be exactly as soft or hard as needed. Things that are hard to do in-house can be outsourced, but many things do not need to be.
Managing your own IT systems does cost a lot of money, but nothing like as much as the overall cost to an organization of cloud-based alternatives. Between them, our bad cloud systems cost the equivalent of the time of (at least) scores of FTEs, including that of highly paid professors and directors, when compared with the custom-built self-hosted systems they replace. You could get a lot of IT staff and equipment for that kind of money. Worse, all are deeply demoralizing, all are inefficient, and all stymie creativity, greatly reducing both the knowledge within the organization itself and its value.

It’s a huge amount harder getting out of bad cloud services than it is getting into them (that’s the business model that makes them so bad) but, if we are to survive, we have to escape from such foolishness. The longer we leave it, the harder it gets.

Address of the bookmark: https://www.engadget.com/2017/10/30/microsoft-axes-outlook-com-premium-features/

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2810115/ominous-clouds

The NGDLE: We Are the Architects | EDUCAUSE

A nice overview of where the NGDLE concept was earlier this year. We really need to be thinking about this at AU because the LMS alone will not take us where we need to be. One of the nice things about this article is that it talks quite clearly about the current and future roles of existing LMSs, placing them quite neatly within the general ecosystem implied by the NGDLE.

The article calls me out on my prediction that the acronym would not catch on though, in my defence, I think it would have been way more popular with a better acronym! The diagram is particularly useful as a means to understand the general concept at, if not a glance, then at least pretty quickly…

ngdle overview

Address of the bookmark: https://er.educause.edu/articles/2017/7/the-ngdle-we-are-the-architects

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2752680/the-ngdle-we-are-the-architects-educause

Instagram uses 'I will rape you' post as Facebook ad in latest algorithm mishap

Another in a long line of algorithm fails from the Facebook stable, this time from Instagram…

"I will rape you" post from Instagram used for advertising the service

This is a postcard from our future when AI and robots rule the planet. Intelligence without wisdom is a very dangerous thing. See my recent post on Amazon’s unnerving bomb-construction recommendations for some thoughts on this kind of problem, and how it relates to attempts by some researchers and developers to use learning analytics beyond its proper boundaries.

Address of the bookmark: https://www.theguardian.com/technology/2017/sep/21/instagram-death-threat-facebook-olivia-solon

Original page

Athabasca’s bright future

The always excellent Tony Bates provides a very clear summary of Ken Coates’s Independent Third-Party Review of Athabasca University, released a week or two ago, and, as usual, provides a great critical commentary as well as some useful advice on next steps.

Tony rightly points out that our problems are more internal than external, and that the solutions have to come from us, not from outside. To a large extent he hits the nail right on the head when he notes:

Major changes in course design, educational technology, student support and administration, marketing and PR are urgently needed to bring AU into advanced 21st century practice in online and distance learning. I fear that while there are visionary faculty and staff at AU who understand this, there is still too much resistance from traditionalists and those who see change as undermining academic excellence or threatening their comfort zone.

It is hard to disagree. But, though there are too many ostriches among our staff and we do have some major cultural impediments to overcome, it is not so much people that impede our progress as our design itself, and the technologies – especially the management technologies – of which it consists. That must change, as a corequisite to changing the culture that goes along with it. With some very important exceptions (more on that below) our culture is almost entirely mediated through our organizational and digital technologies, most notably in the form of very rigid processes, procedures, and rules, but also through our IT. Our IT should, but increasingly does not, embody those processes. The processes still exist, of course – it’s just that people have to perform them instead of machines. Increasingly often, to make matters worse, we shape our processes to our ill-fitting IT rather than vice versa, because the ‘technological debt’ of adapting it to our needs and therefore having to maintain it ourselves is considered too great (a rookie systems error caused by splitting IT into a semi-autonomous unit that has to slash its own costs without considering the far greater price paid by the university at large). Communication, when it occurs, is almost all explicit and instrumental. We do not yet have enough of the tacit flows of knowledge and easy communication that patch over or fix the (almost always far greater) flaws that exist in such processes in traditional bricks-and-mortar institutions. The continuous partial attention and focused channels of communication that result from working online mean that we struggle with tacit knowledge and the flexibility of embedded dialogue in ways old-fashioned universities never have to even think about.
One of the big problems with being so process-driven is that, especially in the absence of richer tacit communication, it is really hard to change those processes, especially because they have evolved to be deeply entangled with one another – changing one process almost always means changing many, often in structurally separate parts of the institutional machine, and involves processes of its own that are often entangled with those we set out to change. As a result, for much of its operation, our university does what it does despite us, not because of us. Unlike traditional universities, we have nothing else to fall back on when it fails, or when things fall between cracks. And, though we likely have far fewer than most traditional universities, there are still very many cracks to fall through.

This, not coincidentally, is exactly true of our teaching too. We are pretty darn good at doing what we explicitly intend to do: our students achieve learning outcomes very well, according to the measures we use. AU is a machine that teaches, which is fine until we want the machine to do more than what it is built to do or when other, faster, lighter, cheaper machines begin to compete with it. As well as making it really hard to make even small changes to teaching, what gets lost – and what matters about as much as what we intentionally teach – is the stuff we do not intend to teach, the stuff that makes up the bulk of the learning experience in traditional universities, the stuff where students learn to be, not just to do. It’s whole-person learning. In distance and online learning, we tend to concentrate on just the parts we can measure, and we are seldom even aware of the rest. There is a hard and rigid boundary between the directed, instrumental processes and the soft, invisible patterns of culture and belonging, one that we rarely cross. This absence is largely what gives distance learning a bad reputation, though it can be a strength if focused teaching of something well-defined is exactly what is needed, or if students are able to make the bigger connections in other ways (true of many of our successful students) – when the control that the teaching method provides is worth the losses and a more immersive experience might actually get in the way. But it’s a boundary that alienates a majority of current and prospective students. A large percentage of even those we manage to enrol and keep with us would like to feel more connected, more a part of a community, more engaged, more belonging. A great many more don’t even join us in the first place because of that perceived lack, and a very large number drop out before submitting a single piece of work as a direct result.

This is precisely the boundary that the Landing is intended to be a step towards breaking down.


If we cannot figure out how to recover that tacit dimension, there is little chance that we can figure out how to teach at a distance in a way that differentiates us from the crowd and that draws people to us for the experience, rather than for the qualification. That’s not quite fair: some of us will. If you get the right (deeply engaged) tutor, or join the right (social and/or open) course, or join the Landing, or participate in local meet-ups, or join other social media groups, you may get a fair bit of the tacit, serendipitous, incidental learning and knowledge construction that typifies a traditional education. Plenty of students do have wonderful experiences learning with others at AU, be it with their tutors or with other students. We often see those ones at convocation – ones for whom the experience has been deep, meaningful, and connected. But, for many of our students, and especially the ones that don’t make it to graduation (or even to the first assignment), the chances of feeling that you belong to something bigger, of learning from others around you, of being part of a richer university experience, are fairly low. Every one of our students needs to be very self-directed, compared with those in traditional institutions – that’s a sine qua non of working online – but too many get insufficient support and too little inspiration from those around them to rise beyond that or to get through the difficult parts. This is not too surprising, given that we cannot do it for ourselves either. When faced with complicated things demanding close engagement, too many of our staff fall back on the comfortable, easy solution of meeting face to face in one of our various centres rather than taking the hard way, and so the system remains broken. This can and will change.

Moving on

I am much heartened by the Coates report which, amongst other things but most prominently and as our central value proposition, puts our leadership in online and distance education at the centre of everything. This is what I have unceasingly believed we should do since the moment I arrived. The call to action of Coates’s report is fundamentally to change our rigid dynamic, to be bold, to innovate without barriers, to evolve, to make use of our astonishingly good resources – primarily our people – to (again) lead the online learning world. As a virtual institution this should be easier for us than it would be for others but, perversely, it is exactly the opposite. This is for the aforesaid reasons, and also because the boundaries of our IT systems create the boundaries of our thinking, and embed processes more deeply and more inflexibly than almost any bricks-and-mortar establishment could hope to do. We need soft systems, fuzzy systems, adaptable systems, agile systems for our teaching, research, and learning community development, and we need hard systems, automated systems, custom-tailored, rock-solid systems for our business processes, including the administrative and assessment-recording outputs of the teaching process. This is precisely the antithesis of what we have now. As Coates puts it:

“AU should rebrand itself as the leading Canadian centre for online learning and twenty-first century educational technology. AU has a distinct and potentially insurmountable advantage. The university has the education technology professionals needed to provide leadership, the global reputation needed to attract and hold attention, and the faculty and staff ready to experiment with and test new ideas in an area of emerging national priority. There is a critical challenge, however. AU currently lacks the ICT model and facilities to rise to this opportunity.”

We live in our IT…

We have long been challenged with our IT systems, but things were not always so bad. Our ICT model has turned 180 degrees in the past few years, in exactly the opposite direction to one that would support continuing evolution and innovation, driven by people that know little about our core mission and that have failed to understand what makes us special as a university. The best defence offered for these poor decisions is usually that ‘most other universities are doing it,’ but we are not most other universities. ICTs are not just support tools or performance enhancers for us. We are our IT. It is our one and only face to our students and the world. Without IT, we are literally nothing. We have massively underinvested in developing our IT, and what we have done in recent years has destroyed our lead, our agility, and our morale. Increasingly, we have rented generic, closed, off-the-shelf cloud-based applications that would be pretty awful even in a factory, that force us into behaviours that make no sense, that sap our time and will, and that are so deeply inappropriate for our uniquely distributed community that they stifle all progress, and cut off almost all avenues of innovation in the one area in which we are best placed to innovate and lead. We have automated things that should not be automated and let fall into disrepair the things that actually give us an edge. For instance, we rent an absurdly poor CRM system to manage student interactions, building a call centre for customers when we should be building relationships with students, embedding our least savoury practices of content delivery still further, making tweaks to a method of teaching that should have died when we stopped using the postal service for course packs.
Yes, when it works, it incrementally improves a broken system, so it looks OK (not great) on reports, but the system it enhances is still irrevocably broken and, by further tying it into a hard embodiment in an ill-fitting application, the chances of fixing it properly diminish further. And, of course, it doesn’t work, because we have rented an ill-fitting system designed for other things with little or no consideration of whether it meets more than coarse functional needs. This can and must change.

Meanwhile, we have methodically starved the environments that are designed for us and through which we have innovated in the past, and that could allow us to evolve. Astonishingly, we have had no (as in zero) central IT support for research for years now, getting by on a wing and a prayer, grabbing for bits of overtime where we can, or using scarce, poorly integrated departmental resources. Even very well-funded and well-staffed projects are stifled by this lack, because almost all of our learning technology innovations are completely reliant on access, not only to central services (class lists, user logins, LMS integration, etc) but also to the staff that are able to perform integrations, manage servers, install software, configure firewalls, and so on. We have had a 95%-complete upgrade for the Landing sitting in the wings for nearly 2 years, unable to progress for lack of central IT personnel to implement it, even though we have sufficient funds to pay for them and then some, and the Landing is actively used by thousands of people. Even our mainstream teaching tools have been woefully underfunded and undermined: we run a version of Moodle that is past even its security update period, for instance, and that creaks along only thanks to a very small but excellent team supporting it. Tools supporting more innovative teaching with more tenuous uptake, such as Mahara and OpenSIM servers, are virtual orphans, riskily trundling along with considerably less support than even the Landing.

This can and will change.

… but we are based in Athabasca

There are other things in Coates’s report that are given a very large emphasis, notably advice to increase our open access, particularly through forming more partnerships with Northern Albertan colleges serving indigenous populations (good – and we will need smarter, more human, more flexible, more inclusive systems for that, too), but mainly a lot of detailed recommendations about staying in Athabasca itself. This latter recommendation seems to have been forced upon Coates, and it comes with many provisos. Coates is very cognizant of the fact that being based in the remote, run-down town of Athabasca is, has been, and will remain a huge and expensive hobble. He mostly skims over sensitive issues like the difficulty of recruiting good people to the town (a major problem that is only slightly offset by the fact that, once we have got them there, they are quite unlikely to leave), but makes it clear that it costs us very dearly in myriad other ways.

“… the university significantly underestimates the total cost of maintaining the Athabasca location. References to the costs of the distributed operation, including commitments in the Town of Athabasca, typically focus on direct transportation and facility costs and do not incorporate staff and faculty time. The university does not have a full accounting of the costs associated with their chosen administrative and structural arrangements.”

His suggestions, though making much of the value of staying in Athabasca and heavily emphasizing the importance of its continuing role in the institution, involve moving a lot of people and infrastructure out of it and doing a lot of stuff through web conferencing. He walks a tricky political tightrope, trying to avoid the hot potato of moving away while suggesting ways that we should leave. He is right on both counts.

Short circuits in our communications infrastructure

Though cost, lack of decent ICT infrastructure, and difficulties recruiting good people are factors in making Athabasca a hobble for us, the biggest problem is, again, structural. Unlike those working online, those living and working in the town of Athabasca itself enjoy all the traditional knowledge flows without impediment, almost always to the detriment of more inclusive ways of communicating online. Face-to-face dialogue inevitably short-circuits online engagement – always has, always will. People in Athabasca, as any humans would and should, tend to talk among themselves, and tend to only communicate with others online, as the rest of us do, in directed, intentional ways. This might not be so bad were it not for the fact that Athabasca is very unrepresentative of the university population as a whole, containing the bulk of our administrators, managers, and technical staff, with fewer than 10 actual faculty in the region. This is a separate subculture, it is not the university, but it has enormous sway over how we evolve. It is not too surprising that our most critical learning systems account for only about 5% of our IT budget, because that side of things is barely heard of among the decision-makers and implementers that live there, and they only indirectly have to face the consequences of its failings (a matter made much worse by the way we disempower the tutors that have to deal with them most of all, and filter their channels of communication through just a handful of obligated committee members). It is no surprise that channels of communication are weak when those that design and maintain them can easily bypass the problems they cause. In fact, if there were more faculty there, it would be even worse, because then we would never face any of the problems encountered by our students.
Further concentrations of staff in Edmonton (where most faculty reside), St Albert (mainly our business faculty) and Calgary do not help one bit, simply building further enclaves, which again lead to short circuits in communication and isolated self-reinforcing clusters that distort our perspectives and reduce online communication. Ideas, innovations, and concerns do not spread because of hierarchies that isolate them, filter them as they move up through the hierarchy, and dissipate them in Athabasca. Such clustering could be a good part of the engine that drives adaptation: natural ecosystems diversify thanks to parcellation. However, that’s not how it works here, thanks to the aforementioned excess in structure and process and the fact that those clusters are far from independently evolving. They are subject to the same rules and the same selection pressures as one another, unable to independently evolve because they are rigidly, structurally, and technologically bound to the centre. This is not evolution – it is barely even design, though every part of it has been designed and top-down structures overlay the whole thing. It’s a side effect of many small decisions that, taken as a whole, result in a very flawed system.

This can and must change.

The town of Athabasca and what it means to us

Athabasca high street

Though I have made quite a few day trips to Athabasca over the years, I had never stayed overnight until around convocation time this year. Though it was a busy few days so I only had a little chance to explore, I found it to be a fascinating place that parallels AU in many ways. The impression it gives is of a raw, rather broken-down and depressed little frontier town of around 4,000 souls (a village by some reckonings) and almost as many churches. It was once a thriving staging post on the way to the Klondike gold rush, when it was filled with the rollicking clamour of around 20,000 prospectors dreaming of fortunes. Many just passed through, but quite a few stayed, helping to define some of its current character; when the gold rush died down, though, there was little left to sustain a population. Much of the town still feels a bit temporary, still a bit of a campground waiting to turn into a real town. Like much of Northern Alberta, its fortunes in more recent years have been significantly bound to the oil business, feeding an industry with no viable future and the morals of an errant crow, and tied to its roller-coaster fortunes. There are signs that money has been around, from time to time: a few nice buildings, a bit of landscaping here and there, a memorial podium at Athabasca Landing. But there are bigger signs that it has left.

Athabasca Landing

Today, Athabasca’s bleak main street is filled with condemned buildings, closed businesses, discount stores, and shops with ‘sale’ signs in their windows. There are two somewhat empty town centre pubs, where a karaoke night in one will denude the other of almost all its customers.

There are virtually no transit links to the outside world: one Greyhound bus from Edmonton (2 hours away) comes through it, in the dead of night, and passenger trains stopped running decades ago. The roads leading in and out are dangerous: people die way too often getting there, including one of our most valued colleagues in my own school. It is never too far from being reclaimed by the forces of nature that surround it. Moose, bear, deer, and coyotes wander fairly freely. Minus forty temperatures don’t help, nor does a river that is pushed too hard by meltwaters from the rapidly receding Athabasca Glacier and that is increasingly polluted by the side-effects of oil production.


So far so bleak. But there are some notable upsides too. The town is full of delightfully kind, helpful, down-to-earth people infused with that wonderful Canadian spirit of caring for their neighbours, grittily facing the elements with good cheer, getting up early, eating dinner in the late afternoon, gathering for potlucks in one another’s houses, and organizing community get-togethers. The bulk of housing is well cared-for, set in well-tended gardens, in quiet, neat little streets. I bet most people there know their neighbours and their kids play together. Though tainted by its ties with the oil industry, the town comes across as, fundamentally, a wholesome centre for homesteaders in the region, self-reliant and obstinately surviving against great odds by helping one another and helping themselves. The businesses that thrive are those selling tools, materials, and services to build and maintain your farm and house, along with stores for loading your provisions into your truck to get you through the grim winters. It certainly helps that a large number of residents are employees of the university, providing greater diversity than is typically found in such settlements, but they are frontier folk like the rest. They have to be.

It would be unthinkable to pull the university out at this point – it would utterly destroy an already threatened town and, I think, it would cause great damage to the university. This was clearly at the forefront of Coates’s mind, too. The solution is not to withdraw from this strange place, but to dilute and divert the damage it causes and perhaps, even, to find ways to use its strengths. Greater engagement with Northern communities might be one way to save it – we have some big, largely empty buildings up there that will be getting emptier, and they might not be a bad place for some face-to-face branching out, perhaps semi-autonomously, perhaps in partnership with colleges in the region. It also has potential as a place for a research retreat, though it is not exactly a Mecca that would draw people to it, especially without transit links to sustain it. The research centre there cost a fortune to build, though, so it would be nice to get some use out of it.

Perhaps more importantly, we should not pull out because Athabasca is a part of the soul of the institution. It is somehow fitting that Athabasca University has – not without resistance – had its fortunes tied to this town. Athabasca is kind of who we are and, to a large extent, defines who we should aspire to be. As an institution we are, right now, a decaying frontier town on the edge of civilization that was once a thriving metropolis, forced to help ourselves and one another battle with the elements, a caring bunch of individuals bound by a common purpose but stuck in a wilderness that cares little for us and whose ties with the outside world are fickle, costly, and tenuous. Athabasca is certainly a hobble but it is our hobble and, if we want to move on, we need to find ways to make the best of it – to find value in it, to move away from it the people and things that it impedes the most, at least where we can, but to build upon it as a mythic hub that helps to define our identity, a symbolic centre for our thinking. We can and will help ourselves and one another to make it great again. And we have a big advantage that our home town lacks: a renewable and sustainable resource and product. Very much unlike Athabasca the town, the source of our wealth is entirely in our people, and the means we have for connecting them. We have the people already: we just need to refocus on the connection.

Computer science students should learn to cheat, not be punished for it

This is a well thought-through response to a recent alarmist NYT article about cheating among programming students.

The original NYT article is full of holy pronouncements about the evils of plagiarism, horrified statistics about its extent, and discussions of the arms race: sleuthing by markers, and ever more ornate technological fixes that are always one step behind the most effective cheats (and one step ahead of the dumber ones). This is a lose-lose system. No one benefits. But that’s not the biggest issue with the article. Nowhere does the NYT article mention that the problem is largely caused by the fact that we in academia typically tell programming students to behave in ways that no programmer in their right mind would ever behave (disclaimer: the one programming course that I currently teach, very deliberately, does not do that, so I am speaking here as an atypical outlier).

As this article rightly notes, the essence of programming is re-use of code. Although there are certainly egregiously immoral and illegal ways to do that (even open source coders normally need to religiously cite their sources for significant uses of code written by others), applications are built on layer upon layer upon layer of re-used code, common subroutines and algorithms, snippets, chunks, libraries, classes, components, and a thousand different ways to assemble (in some cases literally) the code of others. We could not do programming at all if 99% of the code that does what we want it to do were not written by others. Programmers knit such things together, often sharing their discoveries and improvements so that the whole profession benefits and the cycle continues. The solution to most problems is, more often than not, to be found in StackExchange forums, Reddit, or similar sites, or in open source repositories like Github, and it would be an idiotic programmer that chose not to (very critically and very carefully) use snippets provided there. That’s pretty much how programmers learn, a large part of how they solve problems, and certainly how they build stuff. The art of it is in choosing the right snippet, understanding it, fitting it into one’s own code, selecting between alternative solutions and knowing why one is better (in a given context) than another. In many cases, we have memorized ways of doing things so that, even if we don’t literally copy and paste, we repeat patterns (whole lines and blocks) that are often identical to those that we learned from others. It would likely be impossible to even remember where we learned such things, let alone to cite them. We should not penalize that – we should celebrate it. Sure, if the chunks we use are particularly ingenious, or particularly original, or particularly long, or protected by a licence, we should definitely credit their authors.
That’s just common sense and decency, as well as (typically) a legal requirement. But a program made using the code of others is no more plagiarism than Kurt Schwitters was a plagiarist of the myriad found objects that made up his collages, or a house builder is a plagiarist of bricks.
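Crediting re-used code, where credit is due, need not be onerous. Here is a minimal sketch of the convention as I understand it to be commonly practised – the function itself and the recipe it claims descent from are my own illustrative invention, not taken from either article – in which an attribution note in the docstring records where a snippet came from and how it was adapted:

```python
# A common convention for crediting re-used code: a comment or docstring
# that names the source of a borrowed snippet and describes any changes.

def chunked(iterable, size):
    """Yield successive lists of up to `size` items from `iterable`.

    Adapted from a widely shared itertools 'grouper'-style recipe
    (hypothetical attribution for illustration); modified to return
    a short final chunk rather than padding it with fill values.
    """
    items = list(iterable)
    for i in range(0, len(items), size):
        yield items[i:i + size]

# The final chunk is shorter when the items don't divide evenly.
print(list(chunked(range(7), 3)))  # → [[0, 1, 2], [3, 4, 5], [6]]
```

A one-line note like this costs the programmer almost nothing, satisfies both decency and most licences' attribution terms, and leaves a trail for the next reader – which is exactly the habit we should be teaching, rather than punishing the re-use itself.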

And, as an aside, please stop calling it ‘Computer Science’. Programming is no more computer science than carpentry is woodworking science. It bugs me that ‘computer science’ is used so often as a drop-in synonym for programming in the popular press, reinforced by an increasing number of academics with science-envy, especially in North America. There are sciences used in computing, and a tiny percentage of those are quite unique to the discipline, but that’s a minuscule percentage of what is taught in universities and colleges, and a vanishingly small percentage of what nearly all programmers actually do. It’s also worth noting that computer science programs are not just about programming: there’s a whole bunch of stuff we teach (and that computing professionals do) about things like databases, networks, hardware, ethics, etc that has nothing whatsoever to do with programming (and little to do with science). Programming, though, especially in its design aspects, is a fundamentally human activity that is creative, situated, and inextricably entangled with its social and organizational context. Outside some research labs and esoteric applications, it is normally closer to fine art than it is to science, though it is an incredibly flexible activity that spans a gamut of creative pursuits analogous to a broad range of arts and crafts, from poetry to music to interior design to engineering. Perhaps it is most akin to architecture in the ways it can (depending on context) blend art, craft, engineering, and (some) science, but it can be analogous to pretty much any creative pursuit (universal machines and all that).

Address of the bookmark: https://thenextweb.com/dd/2017/05/30/lets-teach-computer-science-students-to-cheat/#.tnw_FTOVyGc4

Original page


Learnium is yet another attempt to overlay a cloud-based social medium on institutional learning, in the same family as systems like Edmodo, Wikispaces Classroom, Lore, GoingOn, etc. I deliberately exclude from this list the far superior, theoretically grounded, and innovative Curatr, as well as dumb bandwagoners like – of all things – Blackboard (not deserving of a link, but you could look up their atrocious social media management tools if you want to see how not to do this).

Learnium has a UK focus and it includes mobile apps as well as institutional integration tools. It looks slick, has a good range of tools, and seems to be gaining a little traction. This is trying to do something a little like what we tried to do with the Landing, but it should not be confused with the Landing in intent or design philosophy, notwithstanding some superficial similarities. Although the Landing is often used for teaching purposes, it deliberately avoids things like institutional roles, and deliberately blurs such distinctions when its users make use of them (eg. when they create course groups). It can be quite confusing for students expecting a guided space and top-down structure, and annoying if you are a teacher trying to control the learning space to behave that way, but that’s simply not how it is designed to work. The Landing is a learning space, where everyone is a teacher, not an institutional teaching space where the role is reserved for a few.

Learnium has a far more institutionally managed, teacher/course-oriented perspective. From what I can tell, it’s basically an LMS, cut down in some places, enhanced in its social aspects. It’s closer to Canvas than Moodle in that regard. It might have some value for teachers that like social media tools but dislike the lack of teacher control, lack of privacy, deeply problematic ethics, and ugly intrusions of things like Facebook, and who do not want the cost or hassle of managing their own environments. It is probably a more congenial environment for social pedagogies than most institutional LMSs, allowing learning to spread beyond class groups and supporting some kinds of social networking. There is a lot of scope and potential for vertical social networks like this that serve a particular kind of community in a tailored fashion. This is very much not Facebook, and that’s a very good thing.

But Learnium is an answer to the question ‘how can I use social media in my courses?’ rather than ‘how can social media help to change how people learn?’ It is also an answer to the question of ‘how can Learnium make money?’ rather than ‘how can Learnium help its users?’ And, like any cloud-based service of this nature (sadly including Curatr), it is not a safe place to entrust your learning community: things like changes to terms of service, changes to tools, bankruptcy, and takeovers are an ever-present threat. With the exception of open systems that allow you to move everything, lock, stock, and barrel, to somewhere else with no significant loss of data or functionality, an institution (and its students) can never own a cloud-based system like this. It might be a small difference from an end user perspective, at least until it blows up, but it’s all the difference in the world.

Address of the bookmark: https://www.learnium.com/about/institutions/

Original page