Ominous clouds

[Image: Clouds over the West Pier, Brighton]

Though Microsoft has been unusually prone to the kind of chicanery described in this article for most of its existence, the problem of price hiking combined with shifting, decaying, or dying cloud services is inherent in the cloud model itself.

Good clouds

Cloud services can make good sense when they are directly replaceable with competitive alternatives: there are compelling reasons to, say, run your virtual servers in the cloud (whether in virtual machines or containers), or to handle network services like DDoS protection, DNS management, or spam filtering, or even (under some circumstances) to run relatively high level application layer services like databases, SMTP mail, or web servers. As long as you can treat a service exactly like a utility – including, crucially, the ability to simply, cheaply, and fairly painlessly switch service providers (including back in-house) whenever you want or need to do so – then it can provide resilience, scalability, predictable costs, and agility. Sometimes, it can even save money. There are still lots of potential pitfalls: complex management concerns like privacy, security, performance, faults, configuration, and accounting need to be treated with utmost caution, service contract negotiation is a complex and trap-strewn art, training and integration can be fiendishly difficult to manage when you no longer control the service and it changes under your feet, and there are potential unpredictable problems ahead when companies go bust, change hands, or become subject to dangerous legislative changes. But, on the whole, a true utility service can often be a sensible use of limited funds.

The soon-to-be-defunct Outlook.com Premium looks deceptively like a utility service on the surface, ostensibly offering what look a lot like simple, straightforward SMTP/IMAP/POP email services, with a cutesy (i.e. from Hell) web front end and the (optional) capacity to choose a domain that could be migrated elsewhere. To a savvy user, it could be treated as little more than a utility service. However, there’s a lot of integrated frippery, from tricks to embed large images, to proprietary metadata, to out-of-office settings, to integrations with other Microsoft tools, that makes it less portable the more you use it, particularly for the less technically adept audience it is aimed at, and particularly if you use Microsoft Outlook or the web interface to manage it. Add in some subtle bending of protocols that makes even the simplest of migrations fraught with difficulty and subject to lost metadata at best, and by far the most likely exit strategy for most users will be to shift to the (more expensive) O365 which, though not identical, has features that are close enough and easily-migrated enough to suit the average Joe. And that’s what Microsoft wants.
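By way of contrast, a true utility email service is portable almost by definition: any standards-compliant IMAP mailbox can be copied to any other with a few lines of code. Here is a minimal sketch (the hostnames, account names, and passwords are placeholders, and a real migration would also need error handling, flags, and folder traversal). Tellingly, it is exactly the proprietary extras and bent protocols that a loop like this cannot carry across:

```python
# Minimal sketch of utility-grade email portability: copy every message
# in an inbox from one standards-compliant IMAP server to another.
# Hostnames and credentials below are placeholders, not real services.
import imaplib

SRC_HOST, SRC_USER, SRC_PASS = "imap.old-provider.example", "me@example.com", "secret1"
DST_HOST, DST_USER, DST_PASS = "imap.new-provider.example", "me@example.com", "secret2"

with imaplib.IMAP4_SSL(SRC_HOST) as src, imaplib.IMAP4_SSL(DST_HOST) as dst:
    src.login(SRC_USER, SRC_PASS)
    dst.login(DST_USER, DST_PASS)
    src.select("INBOX", readonly=True)            # never modify the source
    _, data = src.search(None, "ALL")             # every message ID in the folder
    for num in data[0].split():
        _, msg_data = src.fetch(num, "(RFC822)")  # the raw, standard message
        raw_message = msg_data[0][1]
        dst.append("INBOX", None, None, raw_message)  # replay it unchanged
```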

Bad clouds

O365 is not a utility service at all, despite using the lure of almost generic email and calendaring (potentially replaceable services) to hook you in. It’s a cloud-based application suite filled to the brim with proprietary applications, systems and protocols, almost all of which are purpose-built to lock your data, processes, and skill set into a non-transferable cloud that is owned and controlled by an entity that does not have your interests as its main concern. In fact, exactly the opposite: its main concern is to get as much money from you as possible over as long a period as it can. If it were a utility like, say, electricity to your home, it would be one that required you to only plug in its own devices, using sockets that could not be duplicated, running at voltages and frequencies no one else uses. Its employees would walk into your house and replace your appliances and devices with different ones whenever they wanted (often replacing your stove while you were cooking on it), dropping and adding features as they felt like it. The utility company would be selling information about what devices you use, and when, to which channels you tuned your TV, what you were eating, and so on, to anyone willing to pay. You would have to have a microwave and toaster whether you wanted one or not, and you couldn’t switch any of them off. It would install cameras and microphones in your home that it or its government could use to watch everything you do. Every now and then it would increase its prices to just a bit less than it would cost to rip everything out and replace it with standards-based equipment you could use anywhere. Though it would offer a lot of different devices, all with different and unintuitive switches and remote controls (because it had bought most of them from other companies), none of them would work properly and, as they were slowly replaced with technologies made by the company itself, they would get steadily worse over a period of years, and steadily harder to replace with anything else. You would have to accept what you were given, no matter how poorly it fitted your needs, and you would be unable to make any changes to any of them, no matter how great the need or how useless they were to you. Perish the thought that you or your home might have any unique requirements, or that you might want to be a bit creative yourself. Welcome to Microsoft’s business model! And welcome to the world of (non-utility) cloud services.

Bad clouds closer to home

Given the tone of this article, it is perhaps mildly ironic that Engadget, its source, reported on the product less than a year ago with the advice that “the Premium service might strike a good balance between that urge for customization and the safety net you get through tech giants like Microsoft.” You’d think a tech-focused site like Engadget would know better. I suspect that many of their reporters have not been alive as long as some of us have been in the business, and so they are still learning how this works.

It’s a short-sighted stupidity that infects way too many purchasing decisions, even by seasoned IT professionals, whether it be for groupware like O365, or LMSs like Moodle, or HR hiring systems, or leave reporting systems, or e-book renting, or online exam systems, or timesheet applications, or CRM systems, or whatever. My own university has fallen prey to the greedy, malfunctioning, locked-in clutches of all but one of the aforementioned cloud services, and more, and the one it thankfully avoided was a mighty close call.

All are baseline systems with limited customizations that require people to play the role of machines, or that replace roles that should be done by humans with rigid rules and automation. Usually they do both. It is unsurprising that they are weak, because they are not built for how we work: they are built for average organizations with average needs. If such a mythical beast actually exists I have never seen it, but we are a very long way from average in almost every way. Quite apart from the inherent business model flaws in outsourced cloud-hosted applications, they cannot hope to match the functionality of systems we host and control ourselves or that rely on utility cloud services. They inevitably leave some things soft that should be hard (for example, I spend too much time dealing with mistakes in leave requests because the system we rent allows people to include – without any signal that it is a bad idea – weekends and public holidays in their requests) and some things hard that should be soft (for example, I cannot modify a leave request once it has been made). A utility cloud service or self-hosted system could be modified and assembled with other utility services or self-hosted systems at will, allowing it to be exactly as soft or hard as needed (a point the sketch below makes concrete).

Things that are hard to do in-house can be outsourced, but many things do not need to be. Managing your own IT systems does cost a lot of money, but nothing like as much as the overall cost to an organization of cloud-based alternatives. Between them, our bad cloud systems cost the equivalent of the time of (at least) scores of FTEs, including that of highly paid professors and directors, when compared with the custom-built self-hosted systems they replace. You could get a lot of IT staff and equipment for that kind of money. Worse, all are deeply demoralizing, all are inefficient, and all stymie creativity, greatly reducing, and reducing the value of, the knowledge within the organization itself.
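To make the soft-and-hard point concrete, here is a hypothetical sketch (the holiday list, dates, and function names are all invented for illustration; this is not our actual system) of a leave validator that hardens what should be hard – weekends and public holidays simply cannot be booked – while a comment marks what should stay soft:

```python
# Hypothetical leave-request logic: hard where it should be hard,
# soft where it should be soft. All dates and names are invented.
from datetime import date, timedelta

PUBLIC_HOLIDAYS = {date(2017, 12, 25), date(2017, 12, 26)}  # example values

def working_days(start, end):
    """Yield only the days that should count against a leave balance."""
    day = start
    while day <= end:
        if day.weekday() < 5 and day not in PUBLIC_HOLIDAYS:
            yield day  # weekday() < 5 excludes Saturday (5) and Sunday (6)
        day += timedelta(days=1)

def validate_leave_request(start, end):
    """Hard rule: weekends and public holidays simply cannot be booked."""
    days = list(working_days(start, end))
    if not days:
        raise ValueError("request contains no working days")
    return days

# Soft rule: a submitted request should remain editable until approved,
# rather than being frozen the moment it is made.
print(len(validate_leave_request(date(2017, 12, 22), date(2017, 12, 27))))  # -> 2
```

Trivial as it is, that kind of change is exactly what we cannot make to a rented system.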

It’s a huge amount harder getting out of bad cloud services than it is getting into them (that’s the business model that makes them so bad) but, if we are to survive, we have to escape from such foolishness. The longer we leave it, the harder it gets.

Address of the bookmark: https://www.engadget.com/2017/10/30/microsoft-axes-outlook-com-premium-features/

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2810115/ominous-clouds

The NGDLE: We Are the Architects | EDUCAUSE

A nice overview of where the NGDLE concept was earlier this year. We really need to be thinking about this at AU because the LMS alone will not take us where we need to be. One of the nice things about this article is that it talks quite clearly about the current and future roles of existing LMSs, placing them quite neatly within the general ecosystem implied by the NGDLE.

The article calls me out on my prediction that the acronym would not catch on though, in my defence, I think it would have been way more popular with a better acronym! The diagram is particularly useful as a means to understand the general concept at, if not a glance, then at least pretty quickly…

[Diagram: NGDLE overview]

Address of the bookmark: https://er.educause.edu/articles/2017/7/the-ngdle-we-are-the-architects

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2752680/the-ngdle-we-are-the-architects-educause

Instagram uses 'I will rape you' post as Facebook ad in latest algorithm mishap

Another in a long line of algorithm fails from the Facebook stable, this time from Instagram…

"I will rape you" post from Instagram used for advertising the service

This is a postcard from our future when AI and robots rule the planet. Intelligence without wisdom is a very dangerous thing. See my recent post on Amazon’s unnerving bomb-construction recommendations for some thoughts on this kind of problem, and how it relates to attempts by some researchers and developers to use learning analytics beyond its proper boundaries.


Address of the bookmark: https://www.theguardian.com/technology/2017/sep/21/instagram-death-threat-facebook-olivia-solon

Original page

Athabasca’s bright future

[Image: Tony Bates]

The always excellent Tony Bates provides a very clear summary of Ken Coates’s Independent Third-Party Review of Athabasca University, released a week or two ago, and, as usual, provides a great critical commentary as well as some useful advice on next steps.

Tony rightly points out that our problems are more internal than external, and that the solutions have to come from us, not from outside. To a large extent he hits the nail right on the head when he notes:

Major changes in course design, educational technology, student support and administration, marketing and PR are urgently needed to bring AU into advanced 21st century practice in online and distance learning. I fear that while there are visionary faculty and staff at AU who understand this, there is still too much resistance from traditionalists and those who see change as undermining academic excellence or threatening their comfort zone.

It is hard to disagree. But, though there are too many ostriches among our staff and we do have some major cultural impediments to overcome, it is not so much people that impede our progress as our design itself, and the technologies – especially the management technologies – of which it consists. That must change, as a corequisite to changing the culture that goes along with it. With some very important exceptions (more on that below), our culture is almost entirely mediated through our organizational and digital technologies, most notably in the form of very rigid processes, procedures, and rules, but also through our IT. Our IT should, but increasingly does not, embody those processes. The processes still exist, of course – it’s just that people have to perform them instead of machines. Increasingly often, to make matters worse, we shape our processes to our ill-fitting IT rather than vice versa, because the ‘technological debt’ of adapting the systems to our needs, and therefore having to maintain them ourselves, is considered too great (a rookie systems error caused by splitting IT into a semi-autonomous unit that has to slash its own costs without considering the far greater price paid by the university at large).

Communication, when it occurs, is almost all explicit and instrumental. We do not yet have enough of the tacit flows of knowledge and easy communication that patch over or fix the (almost always far greater) flaws that exist in such processes in traditional bricks-and-mortar institutions. The continual partial attention and focused channels of communication that result from working online mean that we struggle with tacit knowledge and the flexibility of embedded dialogue in ways old-fashioned universities never have to even think about.

One of the big problems with being so process-driven is that, especially in the absence of richer tacit communication, it is really hard to change those processes, especially because they have evolved to be deeply entangled with one another – changing one process almost always means changing many, often in structurally separate parts of the institutional machine, and involves processes of its own that are often entangled with those we set out to change. As a result, for much of its operation, our university does what it does despite us, not because of us. Unlike traditional universities, we have nothing else to fall back on when it fails, or when things fall between cracks. And, though we likely have far fewer than most traditional universities, there are still very many cracks to fall through.

This, not coincidentally, is exactly true of our teaching too. We are pretty darn good at doing what we explicitly intend to do: our students achieve learning outcomes very well, according to the measures we use. AU is a machine that teaches, which is fine until we want the machine to do more than it is built to do, or until other, faster, lighter, cheaper machines begin to compete with it. As well as making it really hard to make even small changes to teaching, what gets lost – and what matters about as much as what we intentionally teach – is the stuff we do not intend to teach, the stuff that makes up the bulk of the learning experience in traditional universities, the stuff where students learn to be, not just to do. It’s whole-person learning. In distance and online learning, we tend to concentrate only on the parts we can measure, and we are seldom even aware of the rest. There is a hard and rigid boundary, which we rarely cross, between the directed, instrumental processes and the soft, invisible patterns of culture and belonging.

This absence is largely what gives distance learning a bad reputation, though it can be a strength when focused teaching of something well-defined is exactly what is needed, when students are able to make the bigger connections in other ways (true of many of our successful students), when the control that the teaching method provides is worth all the losses, and when a more immersive experience might actually get in the way. But it’s a boundary that alienates a majority of current and prospective students. A large percentage of even those we manage to enrol and keep with us would like to feel more connected, more engaged, more a part of a community, with a greater sense of belonging. A great many more don’t even join us in the first place because of that perceived lack, and a very large number drop out before submitting a single piece of work as a direct result.

This is precisely the boundary that the Landing is intended to be a step towards breaking down.

https://landing.athabascau.ca/file/view/410777/video-decreasing-the-distance

If we cannot figure out how to recover that tacit dimension, there is little chance that we can figure out how to teach at a distance in a way that differentiates us from the crowd and that draws people to us for the experience, rather than for the qualification. That is not quite fair: some of us will. If you get the right (deeply engaged) tutor, or join the right (social and/or open) course, or join the Landing, or participate in local meet-ups, or join other social media groups, you may get a fair bit of the tacit, serendipitous, incidental learning and knowledge construction that typifies a traditional education. Plenty of students do have wonderful experiences learning with others at AU, be it with their tutors or with other students. We often see those ones at convocation – ones for whom the experience has been deep, meaningful, and connected. But, for many of our students, and especially the ones that don’t make it to graduation (or even to the first assignment), the chances of feeling that you belong to something bigger, of learning from others around you, of being part of a richer university experience, are fairly low. Every one of our students needs to be very self-directed, compared with those in traditional institutions – that’s a sine qua non of working online – but too many get insufficient support and too little inspiration from those around them to rise beyond that or to get through the difficult parts.

This is not too surprising, given that we cannot do it for ourselves either. When faced with complicated things demanding close engagement, too many of our staff fall back on the comfortable, easy solution of meeting face to face in one of our various centres rather than taking the hard way, and so the system remains broken. This can and will change.

Moving on

I am much heartened by the Coates report which, amongst other things, most prominently puts our leadership in online and distance education at the centre of everything, as our central value proposition. This is what I have unceasingly believed we should do since the moment I arrived. The call to action of Coates’s report is fundamentally to change our rigid dynamic, to be bold, to innovate without barriers, to evolve, to make use of our astonishingly good resources – primarily our people – to (again) lead the online learning world. As a virtual institution this should be easier for us than it would be for others but, perversely, it is exactly the opposite. This is for the aforesaid reasons, and also because the boundaries of our IT systems create the boundaries of our thinking, and embed processes more deeply and more inflexibly than almost any bricks-and-mortar establishment could hope to do. We need soft systems, fuzzy systems, adaptable systems, agile systems for our teaching, research, and learning community development, and we need hard systems, automated systems, custom-tailored, rock-solid systems for our business processes, including the administrative and assessment-recording outputs of the teaching process. This is precisely the antithesis of what we have now. As Coates puts it:

“AU should rebrand itself as the leading Canadian centre for online learning and twenty-first century educational technology. AU has a distinct and potentially insurmountable advantage. The university has the education technology professionals needed to provide leadership, the global reputation needed to attract and hold attention, and the faculty and staff ready to experiment with and test new ideas in an area of emerging national priority. There is a critical challenge, however. AU currently lacks the ICT model and facilities to rise to this opportunity.”

We live in our IT…

We have long been challenged by our IT systems, but things were not always so bad. In the past few years, our ICT model has turned 180 degrees, in the exact opposite direction to one that would support continuing evolution and innovation, driven by people who know little about our core mission and who have failed to understand what makes us special as a university. The best defence offered for these poor decisions is usually that ‘most other universities are doing it,’ but we are not most other universities. ICTs are not just support tools or performance enhancers for us. We are our IT. It is our one and only face to our students and the world. Without IT, we are literally nothing. We have massively underinvested in developing our IT, and what we have done in recent years has destroyed our lead, our agility, and our morale. Increasingly, we have rented generic, closed, off-the-shelf cloud-based applications that would be pretty awful even in a factory, that force us into behaviours that make no sense, that sap our time and will, and that are so deeply inappropriate for our unique distributed community that they stifle all progress and cut off almost all avenues of innovation in the one area in which we are best placed to innovate and lead. We have automated things that should not be automated, and let the things that actually give us an edge fall into disrepair.

For instance, we rent an absurdly poor CRM system to manage student interactions, building a call centre for customers when we should be building relationships with students, embedding our least savoury practices of content delivery still further, making tweaks to a method of teaching that should have died when we stopped using the postal service for course packs. Yes, when it works, it incrementally improves a broken system, so it looks OK (not great) on reports, but the system it enhances is still irrevocably broken and, by tying it further into a hard embodiment in an ill-fitting application, the chances of fixing it properly diminish further. And, of course, it doesn’t work, because we have rented an ill-fitting system designed for other things, with little or no consideration of whether it meets more than coarse functional needs. This can and must change.

Meanwhile, we have methodically starved the environments that are designed for us and through which we have innovated in the past, and that could allow us to evolve. Astonishingly, we have had no (as in zero) central IT support for research for years now, getting by on a wing and a prayer, grabbing for bits of overtime where we can, or using scarce, poorly integrated departmental resources. Even very well-funded and well-staffed projects are stifled by it because almost all of our learning technology innovations are completely reliant on access, not only to central services (class lists, user logins, LMS integration, etc), but also to the staff that are able to perform integrations, manage servers, install software, configure firewalls, etc, etc.  We have had a 95% complete upgrade for the Landing sitting in the wings for nearly 2 years, unable to progress due to lack of central IT personnel to implement it, even though we have sufficient funds to pay for them and then some, and the Landing is actively used by thousands of people. Even our mainstream teaching tools have been woefully underfunded and undermined: we run a version of Moodle that is past even its security update period, for instance, and that creaks along only thanks to a very small but excellent team supporting it. Tools supporting more innovative teaching with more tenuous uptake, such as Mahara and OpenSIM servers, are virtual orphans, riskily trundling along with considerably less support than even the Landing.

This can and will change.

… but we are based in Athabasca

There are other things in Coates’s report that are given a very large emphasis, notably advice to increase our open access, particularly through forming more partnerships with Northern Albertan colleges serving indigenous populations (good – and we will need smarter, more human, more flexible, more inclusive systems for that, too), but mainly a lot of detailed recommendations about staying in Athabasca itself. This latter recommendation seems to have been forced upon Coates, and it comes with many provisos. Coates is very cognizant of the fact that being based in the remote, run-down town of Athabasca is, has been, and will remain a huge and expensive hobble. He mostly skims over sensitive issues like the difficulty of recruiting good people to the town (a major problem that is only slightly offset by the fact that, once we have got them there, they are quite unlikely to leave), but makes it clear that it costs us very dearly in myriad other ways.

“… the university significantly underestimates the total cost of maintaining the Athabasca location. References to the costs of the distributed operation, including commitments in the Town of Athabasca, typically focus on direct transportation and facility costs and do not incorporate staff and faculty time. The university does not have a full accounting of the costs associated with their chosen administrative and structural arrangements.”

His suggestions, though making much of the value of staying in Athabasca and heavily emphasizing the importance of its continuing role in the institution, involve moving a lot of people and infrastructure out of it and doing a lot of stuff through web conferencing. He walks a tricky political tightrope, trying to avoid the hot potato of moving away while suggesting ways that we should leave. He is right on both counts.

Short circuits in our communications infrastructure

Though cost, lack of decent ICT infrastructure, and difficulties recruiting good people are all factors in making Athabasca a hobble for us, the biggest problem is, again, structural. Unlike those working online, those living and working in the town of Athabasca itself enjoy all the traditional knowledge flows without impediment, almost always to the detriment of more inclusive ways of communicating online. Face-to-face dialogue inevitably short-circuits online engagement – always has, always will. People in Athabasca, as any humans would and should, tend to talk among themselves, and tend to communicate with others online only in directed, intentional ways, as the rest of us do. This might not be so bad were it not for the fact that Athabasca is very unrepresentative of the university population as a whole, containing the bulk of our administrators, managers, and technical staff, with fewer than 10 actual faculty in the region. This is a separate subculture, it is not the university, but it has enormous sway over how we evolve. It is not too surprising that our most critical learning systems account for only about 5% of our IT budget when that side of things is barely heard of among the decision-makers and implementers who live there, who only indirectly have to face the consequences of its failings (a matter made much worse by the way we disempower the tutors who have to deal with them most of all, and filter their channels of communication through just a handful of obligated committee members). It is no surprise that channels of communication are weak when those who design and maintain them can easily bypass the problems they cause. In fact, if there were more faculty there, it would be even worse, because then we would never face any of the problems encountered by our students.

Further concentrations of staff in Edmonton (where most faculty reside), St Albert (mainly our business faculty), and Calgary do not help one bit, simply building further enclaves, which again lead to short circuits in communication and isolated self-reinforcing clusters that distort our perspectives and reduce online communication. Ideas, innovations, and concerns do not spread, because of hierarchies that isolate them, filter them as they move upward, and dissipate them in Athabasca. Such clustering could be a good part of the engine that drives adaptation: natural ecosystems diversify thanks to parcellation. However, that’s not how it works here, thanks to the aforementioned excess of structure and process and the fact that those clusters are far from independently evolving. They are subject to the same rules and the same selection pressures as one another, unable to evolve independently because they are rigidly, structurally, and technologically bound to the centre. This is not evolution – it is barely even design, though every part of it has been designed and top-down structures overlay the whole thing. It’s a side effect of many small decisions that, taken as a whole, result in a very flawed system.

This can and must change.

The town of Athabasca and what it means to us

[Photo: Athabasca high street]

Though I have made quite a few day trips to Athabasca over the years, I had never stayed overnight until around convocation time this year. It was a busy few days, so I had only a little chance to explore, but I found it to be a fascinating place that parallels AU in many ways. The impression it gives is of a raw, rather broken-down and depressed little frontier town of around 4,000 souls (a village by some reckonings) and almost as many churches. It was once a thriving staging post on the way to the Klondike gold rush, when it was filled with the rollicking clamour of around 20,000 prospectors dreaming of fortunes. Many just passed through, but quite a few stayed, helping to define some of its current character; when the gold rush died down, though, there was little left to sustain a population. Much of the town still feels a bit temporary, still a bit of a campground waiting to turn into a real town. Like much of Northern Alberta, in more recent years its fortunes have been bound to the roller-coaster fortunes of the oil business, feeding an industry that has no viable future and the morals of an errant crow. There are signs that money has been around, from time to time: a few nice buildings, a bit of landscaping here and there, a memorial podium at Athabasca Landing. But there are bigger signs that it has left.

[Photo: Athabasca Landing]

Today, Athabasca’s bleak main street is filled with condemned buildings, closed businesses, discount stores, and shops with ‘sale’ signs in their windows. There are two somewhat empty town centre pubs, where a karaoke night in one will denude the other of almost all its customers.

There are virtually no transit links to the outside world: one Greyhound bus from Edmonton (2 hours away) comes through it, in the dead of night, and passenger trains stopped running decades ago. The roads leading in and out are dangerous: people die way too often getting there, including one of our most valued colleagues in my own school. It is never too far from being reclaimed by the forces of nature that surround it. Moose, bear, deer, and coyotes wander fairly freely. Minus forty temperatures don’t help, nor does a river that is pushed too hard by meltwaters from the rapidly receding Athabasca Glacier and that is increasingly polluted by the side-effects of oil production.

[Photo: Athabasca]

So far so bleak. But there are some notable upsides too. The town is full of delightfully kind, helpful, down-to-earth people infused with that wonderful Canadian spirit of caring for their neighbours, grittily facing the elements with good cheer, getting up early, eating dinner in the late afternoon, gathering for potlucks in one another’s houses, and organizing community get-togethers. The bulk of housing is well cared-for, set in well-tended gardens, in quiet, neat little streets. I bet most people there know their neighbours and their kids play together. Though tainted by its ties with the oil industry, the town comes across as, fundamentally, a wholesome centre for homesteaders in the region, self-reliant and obstinately surviving against great odds by helping one another and helping themselves. The businesses that thrive are those selling tools, materials, and services to build and maintain your farm and house, along with stores for loading your provisions into your truck to get you through the grim winters. It certainly helps that a large number of residents are employees of the university, providing greater diversity than is typically found in such settlements, but they are frontier folk like the rest. They have to be.

It would be unthinkable to pull the university out at this point – it would utterly destroy an already threatened town and, I think, it would cause great damage to the university. This was clearly at the forefront of Coates’s mind, too. The solution is not to withdraw from this strange place, but to dilute and divert the damage it causes and perhaps, even, to find ways to use its strengths. Greater engagement with Northern communities might be one way to save it – we have some big, largely empty buildings up there that will be getting emptier, and that might not be a bad place for some face-to-face branching out, perhaps semi-autonomously, perhaps in partnership with colleges in the region. The town also has potential as a place for a research retreat, though it is not exactly a Mecca that would draw people to it, especially without transit links to sustain it. Our well-designed research centre cost a fortune to build, though, so it would be nice to get some use out of it.

Perhaps more importantly, we should not pull out because Athabasca is a part of the soul of the institution. It is more than a little fitting that Athabasca University has – not without resistance – had its fortunes tied to this town. Athabasca is kind of who we are and, to a large extent, defines who we should aspire to be. As an institution we are, right now, a decaying frontier town on the edge of civilization that was once a thriving metropolis, forced to help ourselves and one another battle the elements, a caring bunch of individuals bound by a common purpose but stuck in a wilderness that cares little for us and whose ties with the outside world are fickle, costly, and tenuous. Athabasca is certainly a hobble but it is our hobble and, if we want to move on, we need to find ways to make the best of it – to find value in it, to move away the people and things that it impedes most, at least where we can, but to build upon it as a mythic hub that helps to define our identity, a symbolic centre for our thinking. We can and will help ourselves and one another to make it great again. And we have a big advantage that our home town lacks: a renewable and sustainable resource and product. Very much unlike Athabasca the town, the source of our wealth is entirely in our people, and the means we have for connecting them. We have the people already: we just need to refocus on the connection.

Computer science students should learn to cheat, not be punished for it

This is a well thought-through response to a recent alarmist NYT article about cheating among programming students.

The original NYT article is full of holy pronouncements about the evils of plagiarism, horrified statistics about its extent, and discussions of the arms races it fuels, typically involving sleuthing by markers and ever more ornate technological fixes that are always one step behind the most effective cheats (and one step ahead of the dumber ones). This is a lose-lose system. No one benefits. But that’s not the biggest issue with the article. Nowhere does the NYT article mention that the problem is largely caused by the fact that we in academia typically tell programming students to behave in ways that no programmer in their right mind would ever behave (disclaimer: the one programming course that I currently teach, very deliberately, does not do that, so I am speaking here as an atypical outlier).

As this article rightly notes, the essence of programming is re-use of code. Although there are certainly egregiously immoral and illegal ways to do that (even open source coders normally need to religiously cite their sources for significant uses of code written by others), applications are built on layer upon layer upon layer of re-used code: common subroutines and algorithms, snippets, chunks, libraries, classes, components, and a thousand different ways to assemble (in some cases literally) the code of others. We could not do programming at all without 99% of the code that does what we want being written by others. Programmers knit such things together, often sharing their discoveries and improvements so that the whole profession benefits and the cycle continues. The solution to most problems is, more often than not, to be found in StackExchange forums, Reddit, or similar sites, or in open source repositories like GitHub, and it would be an idiotic programmer that chose not to (very critically and very carefully) use snippets provided there. That’s pretty much how programmers learn, a large part of how they solve problems, and certainly how they build stuff. The art of it is in choosing the right snippet, understanding it, fitting it into one’s own code, selecting between alternative solutions, and knowing why one is better (in a given context) than another. In many cases, we have memorized ways of doing things so that, even if we don’t literally copy and paste, we repeat patterns (whole lines and blocks) that are often identical to those that we learned from others. It would likely be impossible even to remember where we learned such things, let alone to cite them. We should not penalize that – we should celebrate it. Sure, if the chunks we use are particularly ingenious, or particularly original, or particularly long, or protected by a licence, we should definitely credit their authors. That’s just common sense and decency, as well as (typically) a legal requirement. But a program made using the code of others is no more plagiarism than Kurt Schwitters’s collages were plagiarisms of the myriad found objects that made them up, or a house is a plagiarism of its bricks.
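For what it’s worth, giving that credit costs almost nothing. Here is a sketch of the kind of attribution that satisfies both decency and the typical licence (the function is trivial and the URL is hypothetical, for illustration only):

```python
def chunked(seq, size):
    # Adapted from a Stack Overflow answer on list chunking:
    # https://stackoverflow.com/q/0000000 (hypothetical URL, for illustration)
    # Stack Overflow contributions are CC BY-SA, which requires exactly
    # this sort of attribution for significant re-use.
    return [seq[i:i + size] for i in range(0, len(seq), size)]

print(chunked([1, 2, 3, 4, 5], 2))  # -> [[1, 2], [3, 4], [5]]
```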

And, as an aside, please stop calling it ‘Computer Science’. Programming is no more computer science than carpentry is woodworking science. It bugs me that ‘computer science’ is used so often as a drop-in synonym for programming in the popular press, reinforced by an increasing number of academics with science-envy, especially in North America. There are sciences used in computing, and a tiny percentage of those are quite unique to the discipline, but that’s a minuscule percentage of what is taught in universities and colleges, and a vanishingly small percentage of what nearly all programmers actually do. It’s also worth noting that computer science programs are not just about programming: there’s a whole bunch of stuff we teach (and that computing professionals do) about things like databases, networks, hardware, ethics, etc that has nothing whatsoever to do with programming (and little to do with science). Programming, though, especially in its design aspects, is a fundamentally human activity that is creative, situated, and inextricably entangled with its social and organizational context. Apart from in some research labs and esoteric applications, it is normally closer to fine art than it is to science, though it is an incredibly flexible activity that spans a gamut of creative pursuits analogous to a broad range of arts and crafts, from poetry to music to interior design to engineering. Perhaps it is most akin to architecture in the ways it can (depending on context) blend art, craft, engineering, and (some) science, but it can be analogous to pretty much any creative pursuit (universal machines and all that).

Address of the bookmark: https://thenextweb.com/dd/2017/05/30/lets-teach-computer-science-students-to-cheat/#.tnw_FTOVyGc4

Original page

Learnium

Learnium is yet another attempt to overlay a cloud-based social medium on institutional learning, in the same family as systems like Edmodo, Wikispaces Classroom, Lore, GoingOn, etc, etc. I deliberately exclude from this list the far more excellent, theoretically grounded, and innovative Curatr, as well as dumb bandwagoners like – of all things – Blackboard (not deserving of a link but you could look up their atrocious social media management tools if you want to see how not to do this).

Learnium has a UK focus and it includes mobile apps as well as institutional integration tools. It looks slick, has a good range of tools, and seems to be gaining a little traction. It is trying to do something a little like what we tried to do with the Landing, but it should not be confused with the Landing in intent or design philosophy, notwithstanding some superficial similarities. Although the Landing is often used for teaching purposes, it deliberately avoids things like institutional roles, and deliberately blurs such distinctions when its users make use of them (e.g. when they create course groups). It can be quite confusing for students expecting a guided space and top-down structure, and annoying if you are a teacher trying to make the learning space behave that way, but that’s simply not how it is designed to work. The Landing is a learning space, where everyone is a teacher, not an institutional teaching space where the role is reserved for a few.

Learnium has a far more institutionally managed, teacher/course-oriented perspective. From what I can tell, it’s basically an LMS, cut down in some places, enhanced in its social aspects. It’s closer to Canvas than Moodle in that regard. It might have some value for teachers that like the social media tools but that dislike the lack of teacher-control, lack of privacy, deeply problematic ethics, and ugly intrusions of things like Facebook, and who do not want the cost or hassle of managing their own environments.  It is probably a more congenial environment for social pedagogies than most institutional LMSs, allowing learning to spread beyond class groups and supporting some kinds of social networking. There is a lot of scope and potential for vertical social networks like this that serve a particular kind of community in a tailored fashion. This is very much not Facebook, and that’s a very good thing.

But Learnium is an answer to the question ‘how can I use social media in my courses?’ rather than ‘how can social media help to change how people learn?’ It is also an answer to the question of ‘how can Learnium make money?’ rather than ‘how can Learnium help its users?’ And, like any cloud-based service of this nature (sadly including Curatr), it is not a safe place to entrust your learning community: things like changes to terms of service, changes to tools, bankruptcy, and takeovers are an ever-present threat. With the exception of open systems that allow you to move everything, lock, stock, and barrel, to somewhere else with no significant loss of data or functionality, an institution (and its students) can never own a cloud-based system like this. It might be a small difference from an end-user perspective, at least until it blows up, but it’s all the difference in the world.

Address of the bookmark: https://www.learnium.com/about/institutions/

Original page

Teens unlikely to be harmed by moderate digital screen use

The results of quite a large study (120,000 participants) appear to show that ‘digital’ screen time, on average, correlates with increased well-being in teenagers up to a certain point, after which the correlation is, on average, mildly negative (but not remotely as bad as, say, skipping breakfast). There is a mostly implicit assumption, or at least speculation, that the effects are in some way caused by use of digital screens, though I don’t see strong signs of any significant attempts to show that in this study.

While this accords with common sense – if not with the beliefs of a surprising number of otherwise quite smart people – I am always highly sceptical of studies that average out behaviour, especially for something as remarkably vague as engaging with technologies that are related only insofar as they involve a screen. This is especially the case given that screens themselves are incredibly diverse – there’s a world of difference between the screens of an e-ink e-reader, a laptop, and a plasma TV, for instance, quite apart from the infinite range of possible different ways of using them, devices to which they can be attached, and activities that they can support. It’s a bit like doing a study to identify whether wheels or transistors affect well-being. It ain’t what you do, it’s the way that you do it. The researchers seem aware of this. As they rightly say:

“In future work, researchers should look more closely at how specific affordances intrinsic to digital technologies relate to benefits at various levels of engagement, while systematically analyzing what is being displaced or amplified,” Przybylski and Weinstein conclude. 

Note, though, the implied belief that there are effects to analyze. This remains to be shown. 
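To illustrate why averaging across such a vague category worries me, here is a toy simulation (all numbers are invented) of 120,000 teens split into two equal subgroups, one genuinely helped and one genuinely harmed by screen time. Pooled together, both real effects all but vanish:

```python
# Toy simulation (invented numbers): two equal subgroups respond to
# screen time in opposite ways, yet the pooled 'average effect' is ~0.
import random

random.seed(1)
population = []
for _ in range(60_000):  # screen time helps these teens' well-being
    hours = random.uniform(0, 8)
    population.append((hours, 5 + 0.4 * hours + random.gauss(0, 1)))
for _ in range(60_000):  # ...and harms these ones just as strongly
    hours = random.uniform(0, 8)
    population.append((hours, 5 - 0.4 * hours + random.gauss(0, 1)))

n = len(population)
mean_h = sum(h for h, _ in population) / n
mean_w = sum(w for _, w in population) / n
cov = sum((h - mean_h) * (w - mean_w) for h, w in population) / n
var = sum((h - mean_h) ** 2 for h, _ in population) / n
print(f"pooled regression slope: {cov / var:+.3f}")  # ~+0.000: both effects masked
```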

Address of the bookmark: https://www.eurekalert.org/pub_releases/2017-01/afps-tut011217.php

Moral panic: Japanese girls risk fingerprint theft by making peace-signs in photographs / Boing Boing

As Cory Doctorow notes, the fact that this headline singles out Japanese girls as being particularly at risk – and that this framing is the appeal of the story – is much more disturbing than the fact that someone figured out how to lift fingerprints that can be used to access biometric authentication systems from photos taken using an ‘ordinary camera’ at a considerable distance (3 metres). He explains the popularity of the news story thus:

I give credit to the news-hook: this is being reported as a risk that young women put themselves to when they flash the peace sign in photos. Everything young women do — taking selfies, uptalking, vocal fry, using social media — even reading novels! — is presented as a) unique to young women (even when there’s plenty of evidence that the trait or activity is spread among people of all genders and ages) and b) an existential risk to the human species (as in, “Why do these stupid girls insist upon showing the whole world their naked fingertips? Slatterns!”)

The technical feat intrigued me, so I found a few high-res scans of pictures of Churchill making the V sign, taken on very good medium or large format film cameras (from that era, 5″x4″ press cameras were most common, though some might have been taken on smaller formats and/or cropped) with excellent lenses, by professional photographers, under various lighting conditions, from roughly that distance. While, on the very best, with cross-lighting, a few finger wrinkles and creases were partly visible, there was no sign of a single whorl, and nothing like enough detail for even a very smart algorithm to figure out the rest. So, with a tiny fraction of the resolution, I don’t think you could just lift an image from the web, a phone, or even a good compact camera to steal someone’s fingerprints unless the range were much closer and you were incredibly lucky with the lighting conditions and focus. That said, a close-up selfie using an iPhone 7+, with focus on the fingers, might well work, especially if you used burst mode to get slightly different images (I’m guessing you could mess with bas-relief effects to bring out the details). You could also do it if you set out to do it: with something like a good 400mm-equivalent lens on a large-sensor camera (APS-C or bigger), in bright light, at low ISO, cross-lit, at high resolution, with good focus and a small aperture, there would probably be enough detail.
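A back-of-envelope calculation suggests why that last rig would probably work where Churchill’s press photographers could not. Every number below is an assumption of mine (typical sensor and ridge dimensions), not a measurement:

```python
# Rough estimate (all values assumed): pixels landing on each fingerprint
# ridge at 3 metres with a 400mm-equivalent lens on a 24MP APS-C camera.
sensor_width_mm = 23.5    # typical APS-C sensor width
sensor_width_px = 6000    # ~24 megapixels across
focal_length_mm = 267     # ~400mm full-frame equivalent (1.5x crop factor)
distance_mm = 3000        # the 3 metres quoted in the story
ridge_period_mm = 0.5     # typical spacing between fingerprint ridges

field_width_mm = sensor_width_mm * distance_mm / focal_length_mm  # ~264 mm
px_per_mm = sensor_width_px / field_width_mm                      # ~23 px/mm
print(f"{px_per_mm * ridge_period_mm:.1f} px per ridge period")   # ~11: plenty
```

Around eleven pixels per ridge period is comfortably above the two-pixel sampling minimum, so, given good light and focus, the ridges would indeed be resolvable; a typical web-resolution snapshot offers only a small fraction of that.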

Address of the bookmark: https://boingboing.net/2017/01/12/moral-panic-japanese-girls-ri.html

Setapp – Netflix-style rental model for apps for Mac

Interesting. For $10USD/month, you get unlimited access to the latest versions of what is promised to be around 300 commercial Mac apps. Looking at the selection so far (about 50 apps), these appear to be of the sort that usually appear in popular app bundles (e.g. StackSocial etc), in which you can buy apps outright for a tiny fraction of the list price (quite often at a 99% reduction). I have a few of these already, for which I paid an average of 1 or 2 dollars apiece, albeit that they came with a bunch of useless junk that I did not need or already owned, so perhaps it’s more realistic to say they average more like $10 apiece. Either way, they can already be purchased for very little money, if you have the patience to wait for the right bundle to arrive. So why bother with this?

The main advantage of SetApp’s model is that, unlike those in bundles, which often nag you to upgrade to the next version at a far higher price than you paid almost as soon as you get them, you always get the latest version. It is also nice to have on-demand access to a whole library at any time: if you can wait for a few months they will probably turn up in a cheap pay-what-you-want app bundle anyway, but they are only rarely available when you actually need them.  I guess there is a small advantage in the curation service, but there are plenty of much better and less inherently biased ways to discover tools that are worth having. 

The very notable disadvantage is that you never actually own the apps – once you stop subscribing or the company changes conditions/goes bust, you lose access to them. For ephemerally useful things like disk utilities, conversion tools, etc this is no great hassle but, for things that save files in proprietary formats or supply a cloud service (many of them) this would be a massive pain. As there is (presumably) some mechanism for updating and checking licences, this might also be an even more massive pain if you happen to be on a plane or out of network range when either the app checks in or the licence is renewed. I don’t know which method SetApp uses to ensure that you have a subscription but, one way or another, lack of network access at some point in the proceedings could really screw things up. When (with high probability) SetApp goes bust, you will be left high and dry. Also, I’m guessing that it is unlikely that I would want more than a dozen or thereabouts of these in any given year, so each would cost me about $10 every year at the best of times. Though that might be acceptable for a major bit of software on which one’s livelihood depends, for the kind of software that is currently on show, that’s quite a lot of money, notwithstanding the convenience of being able to pick up a specialist tool when you need it at no extra cost. 
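The obvious defensive design – and I should stress that I have no idea whether SetApp actually does anything like this – would be to cache the result of the last successful licence check and allow a generous offline grace period. A hypothetical sketch:

```python
# Hypothetical licence check with an offline grace period. This is a
# design sketch, not SetApp's actual (and unknown) mechanism.
import time

GRACE_SECONDS = 14 * 24 * 3600  # two weeks offline before lock-out

def licence_is_valid(last_ok_check, server_reachable, server_says_ok):
    if server_reachable:
        return server_says_ok  # online: trust the server's answer
    # Offline (on a plane, out of range): fall back on the cached check.
    return time.time() - last_ok_check < GRACE_SECONDS
```

Even with a grace period, of course, the fundamental problem remains: when the company folds, the clock runs out.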

This is a fairly extreme assault on software ownership, but closed-source software of all varieties suffers from the same basic problem: you don’t own the software that you buy. Unlike use-once objects like movies or books, software tends to be of continuing value. The obvious solution is to avoid closed source altogether and go for open source right the way down the stack: that’s always my preference. Unfortunately, there are still commercial apps that I find useful enough to pay for and, unfortunately, software decays. Even if you buy something outright that does the job perfectly, at some point the surrounding ecosystems (the operating system, network, net services, etc) will most likely render it useless or positively dangerous. There are also some doubly annoying cases where companies stop supporting versions, lose databases, or get taken over by other companies, so software that you once owned and paid for is suddenly no longer yours (Cyberduck, I’m looking at you). Worst of all are those that depend on a cloud service over which you have no control at all and that will almost certainly go bust, or get taken over, or be subject to cyberattack, or government privacy breaches, or be unavailable when you need it, or that will change its terms and conditions at some point to your extreme disadvantage. Though there may be a small niche for such things, and the immediate costs are often low enough to be tempting, as a mainstream approach to software provision it is totally unsustainable.


Address of the bookmark: https://setapp.com/

Pebble dashed

Hell.

Pebble made my favourite smart watches. They were somewhat open, and the company understood the nature of the technology better than any of the mainstream alternatives. Well, at least they used to get it, until they started moving towards turning them into glorified fitness trackers, which is probably why the company is now being purchased by Fitbit.

So, no more Pebble and, worse, no more support for those that own (or, technically, paid for the right to use) a Pebble. If it were an old-fashioned watch I’d grumble a bit about reneged warranties, but that would not prevent me from being able to use it. Thanks to the cloud service model, though, the watch will eventually stop working altogether:

“Active Pebble watches will work normally for now. Functionality or service quality may be reduced down the road. We don’t expect to release regular software updates or new Pebble features.”

Great. The most expensive watch I have ever owned has a shelf life of months, after which it will likely not even tell the time any more (this has already occurred on several occasions when it has crashed while I have not been on a viable network). On the bright side (though note the lack of promises):

“We’re also working to reduce Pebble’s reliance on cloud services, letting all Pebble models stay active long into the future.”

Given that nearly all the core Pebble software is already open source, I hope that this means they will open source the whole thing. This could make it better than it has ever been. Interesting – the value of the watch would be far greater without the cloud service on which it currently relies. 


Address of the bookmark: https://www.kickstarter.com/projects/597507018/pebble-2-time-2-and-core-an-entirely-new-3g-ultra/posts/1752929