A blast from my past: Google reimplements CoFIND

While searching for a movie using Google Search last night I got (for the first time that I can recall) the option to tag the result, as described in this article. I was pleased to discover that the tool they provide for this is virtually identical (albeit with a much slicker and more refined modern interface overhaul) to the CoFIND system that underpinned my PhD, which I built over 20 years ago now. You are presented with a list of tags, and can select one or more that describe the movie, and/or suggest your own, effectively creating a multi-dimensional rating system that other users can use to judge what the movie is like. When I rated the movie last night, for instance, popular tags presented to me included ‘terrible acting’, ‘bad writing’, ‘clichéd’, ‘boring’ and so on. Having seen the movie, I agree about the bad writing and clichés – it was at the terrible end of the scale – but actually think most of the acting was fairly good, and it was not very boring. What is interestingly different about this, compared with other tagging systems currently available, is that this kind of tag is fuzzy – it represents a value statement about the movie that exists on a continuum, not a simple categorization. The sorting algorithm for the list of tags presented to you appears (like my original CoFIND) to be based mainly on simple popularity, though it is possible that (like CoFIND) it uses other metrics like tag age and perhaps even a user model as well. It’s vastly more useful and powerful than the typical thumbs-up/thumbs-down that Google normally provides. The feature has sadly not reappeared on subsequent movie searches, so I am guessing that Google is either still testing it or trying to build up a sufficient base of recommendations by occasionally showing it to people, before opening it up to everyone.

Just in case Google or anyone else has tried to patent this, and to assert my prior art, you can find a description and screenshots (p183 and p184) of my original CoFIND system in chapter 6 of my PhD thesis as well as in many papers before and since, not to mention in a fair few blog posts. It’s out there in the public domain for anyone to use. The interface of my system was, even by the standards of the day, pretty awful and not even a fraction as good as the one provided by Google, but those were different times: it did work in exactly the same way, though. As I developed it further, the interface actually became much worse. Over the course of a few years I experimented with quite a range of methods to get and display ratings/tags, including an ill-conceived Likert scale as well as a much more successful early use of tag clouds, all of which added complexity and reduced usability. Some of these later systems are described and discussed in my PhD too. In its final, refactored, and heavily evolved form that postdates my PhD by several years, a version of CoFIND (last modified 2007) is actually still available, which almost reverts to the Google-style tag selection approach of the original, with the slight tweak that, in CoFIND, you can disagree about any particular tag use (for instance, if you don’t believe it to be inane then you can cast a vote against that tag). The interface remains at least as awful as the original, though, and not a patch on Google’s. The other main differences, apart from interface variations, are that the nomenclature differs (I used ‘qualities’ rather than ‘tags’), and that CoFIND could be used for anything with a URL, not just movies. If you’re interested, click on any resource link in the system and you’ll see my primitive, ugly, frame-based attempt to do very much the same as Google is doing for movies (nb. unless you are logged in you cannot add new qualities but, for authorized users, a field appears at the end that is just like Google’s). Though primarily intended to share and recommend educational resources, CoFIND was very flexible and was, over the years, used for a range of other purposes from comparing interface designs to discovering images and videos. It was always flaky, ugly, and unscalable, but it worked well enough for my research and teaching purposes, and (because it provides RSS feeds) it was my go-to tool for sharing interesting links right up until 2007, after which I reverted to more conventional but better-maintained tools like the Landing or WordPress.

A little bit of CoFIND background

I’ve written a fair bit about CoFIND, formally and informally, but not for a few years now, so here’s a little background for anyone that might be interested, and to remind myself of a little of what I learned all those years ago in the light of what I know now.

An evolving, self-organizing, social bookmarking tool

I started my PhD research in 1997 with the observation that, even then, there was a vast amount of stuff to learn from that could be easily found on the Web, but that it was really difficult to find good stuff, let alone stuff that was actually useful to a particular learner at a particular stage in their development. Remember that this was before Google even started, so things were significantly worse then than they are now. Infoseek was as good as it got.

I had also observed that, in any group of learners, people would find different things and, between them, discover a much larger range of useful resources than any one learner (or teacher) could do alone, a fact that I use in my teaching to this day. These would likely be (and, as it turned out, in reality were) better than what a teacher could find alone because, though individual learners might be less able to distinguish low from high quality, they would know what worked for them and sufficient numbers of eyes would weed out the bad stuff as long as there was a mechanism for it. This was where I came in.

The only such mechanisms widely available at the time were simple rating systems. However, learners have very different learning needs, so I immediately realized that ‘thumbs-up’ or simple Likert scales would not work. This was not about finding the one ‘best’ solution for everyone, but was instead concerned with finding a range of alternatives to fill different ecological niches, and somehow discovering the most useful solution in that niche for a given learner at a given time.  My initial idea was to make use of a crowd, not an individual curator, and to employ a process closely akin to natural evolution to kill bad suggestions and promote good ones, in order to create an ecosystem of learning resources rather than a simple database. CoFIND was a series of software solutions that explored and extended this initial idea.

CoFIND was, on the face of it, what would eventually come to be called a social bookmarking system – a means for learners to find and to share Web resources (and, later, other things) with one another, along with a mechanism for other learners to recommend or critique them. It was by no means the first social bookmarking system, but it was certainly not a common genre at the time, and I don’t think such a dedicated system had ever been used in education before (for all such assertions, I stand to be corrected), though other means of sharing links, from simple web pages or wikis or discussion forums to purpose-built teacher-curated tools, were not that uncommon. A lot of my early research involved learning about self-organization and complex systems, in particular focusing on evolution and stigmergy (self-organization through signs left in the environment). As well as the survival-of-the-fittest dynamic, evolution furnished me with many useful concepts that I made good use of, such as the importance of parcellation, the necessity of death, ways to avoid skyhooks, benefits of spandrels, ways to leverage chance (including extinction events), and various approaches to supporting speciation. As a result of learning about stigmergy I independently developed what later came to be known as tag clouds. I don’t believe that mine were the first ever tag clouds – weighted lists of one sort or another had been around for a few years – but, though mine didn’t then use the name, they were likely the first uses of such things in educational software, and almost certainly the first with this particular theoretical model to support them (again, I am happy to be corrected).

A collaborative filter

The name CoFIND is an acronym for ‘collaborative filter in n-dimensions’. The n dimensions were substantiated through what we (my supervisors and I) called qualities. We went through a long list of possible names for these, and I was drawn for a while to calling them ‘values’, but (unfortunately) we never thought of ‘tags’ because the term was not in common use for this kind of purpose at the time. After a phase of calling them q-tags, I now call qualities by the much more accessible name of ‘fuzzy tags’. Fuzzy tags are not just binary classifications of a topic but tags that describe what we value, or don’t value, in a resource, and how much we value it. While people may sometimes disagree about binary classifications (conventional tags) it is always possible to have different opinions about the application of fuzzy tags: some may find something interesting, for instance, while others may not, and others may feel it to be quite interesting, or incredibly so. Fuzzy tags relate to fuzzy sets, which have a continuum of grades of membership, and that is where the name comes from. Different versions of CoFIND used different ways to establish the fuzziness of a tag – the Likert scale used in a few mid-period versions was my failed attempt to make it explicit, but this was a nightmare for people to actually use. The first versions used the same kind of frequency-based weighting as Google’s movie tags, but that was a bit coarse – I was uncomfortable with the averaging effect and the unbridled Matthew Effect that threatened to keep early tags at the top of the list for all time, which I rather coarsely kept in check with a simple age-related weighting that was boosted only when tags were used (the unfortunate side effect of which was that, if a system was not used for a few weeks, all the tags vanished in a huge extinction event, albeit that they could be revived if anyone ever used one of the dead ones again). The final version was a bit in-between, allowing an indefinitely large scale via simple up-down ratings, balanced with an algorithm that included a decaying but renewable novelty weighting that adjusted to the frequency of use of the system as a whole. This still had the peculiar effect of evening out/initializing all of the tags over time if no one used the system, but at least it caused fewer catastrophes.
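
To make the mechanics a little more concrete, here is a minimal, purely illustrative sketch (in Python, with invented names and constants – it is not the original CoFIND code) of the kind of ranking described above: tags scored by net up/down votes plus a decaying-but-renewable novelty bonus tied to overall system activity rather than to wall-clock time.

    import math
    from dataclasses import dataclass

    # Purely illustrative: names, constants, and structure are invented, not CoFIND's actual code.

    @dataclass
    class FuzzyTag:
        name: str
        ups: int = 0            # votes agreeing that the tag applies
        downs: int = 0          # votes disagreeing (the final CoFIND allowed this)
        last_use_tick: int = 0  # system-wide activity count when the tag was last used

    def tag_score(tag: FuzzyTag, current_tick: int,
                  novelty_weight: float = 5.0, decay: float = 0.1) -> float:
        """Net popularity plus a decaying-but-renewable novelty bonus.

        Novelty decays with overall system activity since the tag was last used,
        not with wall-clock time, so an idle system no longer triggers the mass
        'extinction events' of the earliest versions."""
        popularity = tag.ups - tag.downs
        staleness = current_tick - tag.last_use_tick
        novelty = novelty_weight * math.exp(-decay * staleness)
        return popularity + novelty

    def rank_tags(tags, current_tick: int):
        # Most valuable (and freshest) tags first, as in the tag list shown to users.
        return sorted(tags, key=lambda t: tag_score(t, current_tick), reverse=True)

In a scheme like this, reviving a ‘dead’ tag is simply a matter of updating its last-use counter (and its votes) the next time anyone uses it, which restores its novelty bonus.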

‘Traditional’ collaborative filters simply discover whether things are likely to be more valued or less valued on a usually implicit single dimension (good-bad, liked-disliked, useful-useless, etc). CoFIND’s qualities/fuzzy tags allowed people to express in what ways they were better or worse – more interesting, less helpful, more complex, less funny, etc, just as Google’s movie tagging allows you to express what you like or dislike about a movie, not just whether you liked it or not. In many tag-based systems, people tend to use quite a few simple tags that are inherently fuzzy (e.g. Flickr photos tagged as ‘beautiful’) but they are seldom differentiated in the software from those that simply classify a resource as fitting a particular category, so they are rarely particularly helpful in finding stuff to help with, say, learning.

I was building CoFIND just as the field of collaborative filtering was coming out of its infancy, so the precise definition of the term had yet to be settled. At the time, a collaborative filter (then usually called an ‘automated collaborative filter’) was simply any system that used prior explicit and/or implicit preferences of a number of previous users (a usually anonymous crowd) to help make better recommendations and/or filter out weaker recommendations for the current users. The PageRank algorithm that still underpins Google Search would perhaps have then been described as a collaborative filter, as was one of its likely inspirations, PHOAKS (People Helping One Another Know Stuff), that mined Usenet newsgroups for links, taking them as an implicit recommendation within the newsgroup topic area. By this definition, CoFIND was in fact a semi-automated collaborative filter that combined explicit preferences with automated matching. Nowadays the term ‘collaborative filter’ tends to only apply to a specific subset of recommender systems that automatically predict future interests by matching individual patterns of behaviour with those of multiple others, whether by item (people who bought this also bought…) or user (people whose past or expressed preferences seem to be like yours also liked…). I think that, if I built CoFIND today, I would simply refer to it more generically as a recommender system, to avoid confusion.
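
For contrast, here is a minimal sketch (again in Python, with invented data) of a collaborative filter in the modern, narrower sense described above: a user-based recommender that scores unseen items by the ratings of people whose rating patterns most resemble yours. CoFIND did not work this way; this is only to illustrate the distinction.

    import math

    # Toy ratings matrix: users -> {item: rating}. Entirely invented for illustration.
    ratings = {
        "alice": {"item1": 5, "item2": 3, "item3": 4},
        "bob":   {"item1": 4, "item2": 2, "item4": 5},
        "carol": {"item2": 5, "item3": 2, "item4": 1},
    }

    def cosine_similarity(a, b):
        # Similarity of two users' rating vectors, based on the items they have both rated.
        common = set(a) & set(b)
        if not common:
            return 0.0
        dot = sum(a[i] * b[i] for i in common)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b)

    def recommend(user, k=2):
        # Score items the user has not rated by the similarity-weighted ratings of other users.
        scores = {}
        for other, other_ratings in ratings.items():
            if other == user:
                continue
            sim = cosine_similarity(ratings[user], other_ratings)
            for item, rating in other_ratings.items():
                if item not in ratings[user]:
                    scores[item] = scores.get(item, 0.0) + sim * rating
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend("alice"))  # -> ['item4']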

Disembodied user models

Rather than as a collaborative filter, back in the late 90s Peter Brusilovsky saw CoFIND as a new species of educational adaptive hypermedia, as it was perhaps the first (or at least one of the first) that worked on an open corpus rather than through a closed corpus of linked resources. However, he and I were both puzzled about where to find the user model, which was part of Peter’s definition of adaptive hypermedia. I didn’t feel that it needed one, because users chose the things that mattered to them at runtime. In retrospect, I think that the trick behind CoFIND, and what still distinguishes it from almost all other systems apart from this fairly new Google tool, is that it disembodied and exposed the user model. Qualities were, in essence, the things that would normally be invisibly stored in a user model, but I made them visible, in an extreme variant of what Judy Kay later described as scrutable adaptation. In effect, a learner chose their own learner model at the time they needed it. The reasoning behind doing so was that, for learners, past behaviour is usually a poor predictor of future needs, mainly because 1) learning changes people (so past preferences may have little bearing on future preferences), and 2) learning is driven by a vast number of things other than taste or past actions: we often have a need for it thrust upon us by an extrinsic agency, like a teacher, or a legislative demand for a driving licence, for instance. Qualities (fuzzy tags) allow us to express the current value of something to us, in a form that we can leave behind without a lot of sticky residue, and that future users can use. In fact, later versions did tend to slightly emphasize similar things to those people had added, categorized, or rated (fuzzily tagged) earlier, but this was just a pragmatic attempt to make the system more valuable as a personal bookmark store, and therefore to encourage more use of it, rather than an attempt to build a full-blown collaborative filter in the modern sense of the term.

Moving on

I still believe that, in principle, this is an excellent approach and I have been a little disappointed that more people have not taken up the idea and improved on it. The big and, at the time, insurmountable obstacles that I hit were 1) that it demands a lot of its users to provide both tags and resources, with little obvious personal benefit, so it is unlikely to get a lot of use, 2) that the cold-start problem that affects most collaborative filters (it relies on many users to be useful but no one will use it until it is useful) is magnified exponentially by every one of those n dimensions, so it really demands a very large number of users, and 3) that it is fiendishly hard to represent the complex ecological niches effectively in an interface, making the cognitive load unusably high. Google seems to have made good progress on the last point (an evolution enabled by improved web standards and browsers combined with a simplification of the process, which together are enough to reduce the cognitive load by a sizeable amount), and has more than sufficient users to cope with the first and second points, at least with regard to movie recommendations. It remains hard to see how this would work in an educational setting in anything less than the largest of MOOCs or the most passionately focused of user bases. However, I would love to see Google extend this mechanism to OERs, courses, and other educational resources, from Quora answers to Khan Academy tutorials, because they do have the numbers, and it would work well. For the same reasons, it would also be great to see it applied to something like StackExchange or similar large-scale systems (Reddit perhaps) where people go to seek solutions to learning problems. I doubt that I will build a new version of CoFIND as such, but the ideas behind it should live on, I think, and it’s great to see them back on a system as big as Google Search, even if it is so far only experimental and just used to recommend movies.

Power, responsibility, maps and plans: some lessons from being a Chair

Empty chair

I’ve reached the end of my first week of not Chairing the School of Computing & Information Systems here at Athabasca University, which is now in the capable hands of the very wonderful Ali Dewan.

Along with quite a few people that I know, I am amazed that I stuck it out for over 3 years. I was a most reluctant Chair in the first place, because I’d been in middle management roles before and knew much of what to expect. It’s really not my kind of thing at all. Ideologically and temperamentally I loathe hierarchies, but I’d rather be at the top or at the bottom if I have to be in one at all. However, with the help of some cajoling, I eventually convinced myself that being a Chair is essentially much the same as being a teacher, which is an activity that I both enjoy and can mostly do reasonably well. Like a teacher (at least one that does the job well), the job of a Chair is to help nurture a learning community, and to make it possible for those in that community to achieve what they most want to achieve with as few obstacles as possible. Like teaching, it is not at all about telling, but about listening, supporting, and helping others to orchestrate the process for themselves: not so much about leadership as followership, about being a supportive friend. It’s a bit about nudging and inspiring, too, and about sharing the excitement of discovery and growth with other people. It’s a bit about challenging people to be who they want to be, collectively and individually. It’s a bit about solving problems, a bit about being a shoulder to cry on, a bit about being a punchbag for those needing to let off steam, and a bit about being an arbiter in disputes. It could be fun. And I could always give it up after a few months if it didn’t work out. That was what I convinced myself.

On the bright side, I don’t think that I broke anything vital. I did help a couple of good things to happen, and I think that most of my staff were reasonably happy and empowered, a few of them more than before. One or two were probably less happy. But, in the grand scheme of it all, I left things much the same as or a little better than I found them, despite often strenuous efforts to bring about far more exciting changes. My tenure as Chair was, on the whole, not great, but not terrible. I have been wondering a bit about why that happened, and what I could or should have done differently, which is what the next part of this post is about.

Authority vs influence, responsibility vs power

One of my most notable discoveries (more accurately, rediscoveries) is that authority and responsibility barely, if at all, correlate with power and influence. In fact, for a middle management role like this, the precise inverse is true. One of the strange paradoxes of being in a position of more responsibility and authority has been that, in many ways, I feel that I’ve actually had considerably less capacity to bring about change, or to control my own life, than I had as a plain old professor.  It’s just possible that I may have overused the joke about a Chair being the one everyone gets to sit on, but it resonated with me. And this is not to contradict Uncle Ben’s sage advice to Spiderman – it may be true that with great power comes great responsibility, but that doesn’t mean that with great responsibility comes great power.

Partly the problem was just the myriad small but draining demands that had to be met throughout the course of a typical day (most of which were insufferably tedious and mostly mindless bureaucratic tasks that anyone else could do at least as well), as well as having to attend many more meetings, and to engage in a few much lengthier tasks like workload planning. It wore me down. I put a lot of things that were important to me, but that didn’t contribute to my role, to one side because there were too few chunks of uninterrupted time to do them. Blogging and sharing on social media, for instance.

Partly it was because I felt that my role was primarily to support those that reported to me – I had to do their bidding much more than they had to do mine. Instead of doing what I would intrinsically wish to do, much of the time I was trying to do what those that I supervised required of me. This was not just a result of my own views on leadership. I think a lot of it would have affected most people in the same position.

Partly it was because I often felt (with a little external reinforcement) that I must shut up and/or toe the line because I represented the School or the Dean or the University. Being the ‘face’ of the school meant that I often felt obliged to try to represent the opinions and demands of others, even when I disagreed with them. Often, I had to present a collective agenda, or that of an individual higher up the foodchain, rather than my own, whether or not I found it dull, mistaken, or pointless. Also, being a Chair puts you in some sensitive situations where a wrong step can easily lead to litigation, grievance proceedings, or (worse) very unhappy people. I’m not naturally tactful or taciturn, to say the least, so this was tricky at times. I sometimes stayed quiet when I might otherwise have spoken out.

The upshot of it is that, as a Chair, I was directly responsible both to my Dean and to the people I supervised (not to mention more or less directly to students, visitors, admins, tech staff, VPAs, etc, etc), and I consequently felt that I had very little control over my own life at all. Admittedly it was at least partly due to my very intentional approach to the role, but I think similar issues would emerge no matter what leadership style I had adopted. There’s a surprising amount of liberty in being at the bottom of a hierarchy, at least when (like all academics) you are expected – nay, actually required – to be creative, self-starting, and largely autonomous in your work. Academic freedom is a wonderful thing, and some of it is subdued when you move a little way up the scale.

Some compensations 

There have been plentiful compensations, of course. I wouldn’t have stayed this long if it had been uniformly awful. Being a Chair made some connections easier to make, within and beyond the university, and has helped me get to know my colleagues a lot better. And I have some great colleagues: it would have been much harder to manage had I not had such a friendly, supportive, smart, creative, willing, and capable team to work with. I solved or at least made fair progress on a few problems, none huge but all annoying, and helped to lay the groundwork for some ongoing improvements. There were opportunities for creativity here and there. I will miss some of the ways I could help shape our values and systems simply thanks to being a Chair, rather than having to actually work at it. I’ll miss being the default person people came to with interesting ideas. I’ll miss the very small but not trivial stipend. I’ll miss being involved by default in most decisions that affect the school. I’ll miss the kudos. I’ll miss being a formal hub in a network, albeit a small one.

Not quite like teaching

In most ways I was right about the job being much like teaching. Most of the skills, techniques, goals, and patterns are very similar, but there’s one big difference that I had not thought enough about. On the whole, most actual teachers engage with learners over a fairly fixed period, or at least for a fixed project, and there is a clear beginning, middle, and end, with well-defined rituals, rules, and processes to mark their passage. This is even true to an extent of more open forms of teaching like apprenticeship and mentorship. Although this in some ways relates to any kind of project, the fact that people, working together in a social group, are both the focus and the object of change makes it fairly distinctive. I can’t think of many other human activities that are particularly similar to teaching in this regard, apart from perhaps some team sports or, especially, performing arts.

To be a teacher without a specific purpose in mind is a surprisingly different kind of activity, like producing an improvised play that has no script, no plot, no beginning, and no end. Although a teacher is responsible to their students, much as I was responsible to my staff, the responsibility is tightly delimited in time and in scope, so it remains quite manageable, for the most part. In retrospect, I think I should have planned it better. I probably should have set more distinct goals, milestones, tasks, sub-projects, etc. I should have planned for a very clear and intentional end, and set much firmer boundaries. It would not have been easy, though, as many goals emerged over the years, a lot changed when we got our new (and much upgraded) administration, and a lot depended on serendipity and opportunism. I had, at first, no idea how long I would stick with the role. Until quite some time into it, I had only a limited idea about what changes I might even be allowed to accomplish (not much, as it happens, with no budget, a freeze on course development, diminishing staff numbers, need to fit faculty plans, etc). It might have been difficult to plan too far ahead, though it would have been really useful to have had a map showing the directions we might have gone and the limits of the territory. I think there may be useful lessons to be learned from this about support for self-directed lifelong learning.

Lessons for learning and teaching

A curse of institutional learning can be the many scales of rigid structure it provides, that too often take agency away from learners and limit support for diversity. However, it also supports an individual learner’s agency to have a good map of the journey ahead, even if all that they are given is the equivalent of a bus route, showing only the fixed paths their learning will take. I have long grappled with the tensions and trade-offs between surfing the adjacent possible and following a planned learning path. I spent a lot of time in the late 1990s and early 2000s designing online systems that leveraged the crowd to allow learners to help one another to learn, but most of them only helped with finding what to do next, or to solve a current problem, not to chart a whole journey. Figuring out an effective way to plan ahead without sacrificing learner control was one of the big outstanding research problems left to be solved when I finished my PhD (in self-organized learning in networks) very many moons ago, and it still is. There are lots of ineffective ways that I and others have tried, of course. Obvious approaches like matching paths through collaborative filtering or similar techniques are a dead-end: there are way too many extraneous variables to confound it, way too much variation in start and end points to effectively cater for, even if you start with a huge dataset. This is not to mention the blind-leading-the-blind issues, the fact that learning changes people so past activity poorly predicts future behaviour, and the fact that there is often a narrative context that assumes specific prior activities have occurred and known future activities will follow. Using ontologies is even worse, because the knowledge map of a subject developed by subject experts is seldom if ever the best map for learning and may be among the worst. The most promising approaches I have seen, and that I had a doctoral student working on myself until he had to give up in the mid 2000s, mine the plans of many experts (e.g. by looking at syllabuses) to identify common paths and branches for a particular subject, combining them with whatever other information can be gleaned to come up with a good direction for a specific learner and learning need. However, there are plenty of issues with that, too, not least of which being the fact that institutional teaching assumes a very distinctive context, and suffers from a great many constraints (from having to be squashed into a standardized length to fitting preferred teaching patterns and schedules), that learners unhindered by such arbitrary concerns would neither want nor need. Many syllabuses are actually thoughtlessly copied from the same templates (e.g. from a professional association model syllabus), or textbooks, and may be awful in the same ways. And, again, narrative matters. If you took a chunk out of one of my courses and inserted it somewhere else it would often change its meaning and value utterly.
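
As a rough illustration of the syllabus-mining idea (not my student's actual method, and with invented topics), one very crude approach is to count how often one topic precedes another across a set of expert-designed syllabuses and derive a consensus ordering from that:

    from collections import Counter

    # Invented example syllabuses, each an ordered list of topics.
    syllabuses = [
        ["variables", "conditionals", "loops", "functions", "objects"],
        ["variables", "loops", "conditionals", "functions", "recursion"],
        ["variables", "conditionals", "functions", "loops", "objects"],
    ]

    precedes = Counter()
    topics = set()
    for path in syllabuses:
        topics.update(path)
        for i, a in enumerate(path):
            for b in path[i + 1:]:
                precedes[(a, b)] += 1  # topic a was taught before topic b

    def earliness(topic):
        # How often this topic comes before others, minus how often it comes after.
        before = sum(c for (a, b), c in precedes.items() if a == topic)
        after = sum(c for (a, b), c in precedes.items() if b == topic)
        return before - after

    consensus_path = sorted(topics, key=earliness, reverse=True)
    print(consensus_path)  # e.g. ['variables', 'conditionals', 'loops', 'functions', ...]

Even in this toy form the limitations discussed above are obvious: it flattens branches into a single path, ignores narrative context, and says nothing at all about which path would suit a particular learner and learning need.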

This is a problem I would dearly love to solve. Though I stand by my teaching approaches, one of the biggest perennial complaints about the tools and methods I tend to use is that it is easy to feel lost, especially if the helping hands of others are not around when needed. There are always at least a few students who would, as a matter of principle, rather be told what to do, how to do it, and where to go next. The majority would prefer to work in an environment that avoids the need for unnecessary decisions, such as where to upload a file, that have little to do with what they are trying to learn. My role (and that of my tutors, and the design of my courses) is to help them through all that, to relieve them of their dependency on being told what to do, and to help them at least understand why things are done the way they are done. However, that can result in quite inconsistent experiences if I or tutors let the ball slip for a moment. It can be hard for people who have been taught, often over decades, that teaching is telling, and that learning can reliably be accomplished by following a set of teacher-determined steps, to be set adrift to figure it out in their own ways.

It is made far worse by the looming threat of grades that, though eliminated in my teaching itself, still lie in wait at the end of the path as extrinsic targets. Students often find it hard to know in advance how they will meet the criteria, or even whether they have met them when they reach the end. I can and do tell them all of this, of course, usually repeatedly and in many ways and using many media, but the fact that at least some remain puzzled just proves the point: teaching is not telling. Again, a lot of manual social intervention is necessary. But that leads to the issue that following one of my courses demands a big leap of faith (mainly in me) that it will turn out OK in the end. It usually takes effort and time to build such trust, which is costly for all concerned, and is easily lost with a careless word or a missed message.  It would be really useful for my students to have a better map that allows them to plan detours and take more alternative transit options for themselves, especially with overlays to show recommended routes, warnings of steep hills and traffic, and real-time information about the whereabouts of people on their network and points of interest along the way. It would, of course, also be really handy to have a big ‘you are here’ label.  I would have really liked such a map when I started out as Chair.

Moving on

Leaving the Chair role behind still feels a little like stepping off a boat after a rough voyage, and either the land or my legs feel weird, I’m not sure which. As my balance returns, I am much looking forward to catching up with things I put to one side over the past 3 years. I’m happy to be getting back to doing more of what I do best, and I hope to be once more sharing more of my discoveries and cogitations in posts like this. It’s easier to move around with your feet on the ground than when you are sitting on a chair.


The return of the weblog – Ethical Tech

Blogs have evolved a bit over the past 20 years or so, and diversified. The always terrific Ben Werdmuller here makes the distinction between thinkpieces (what I tend to think of as vaguely equivalent to keynote presentations at a conference, less than a journal article, but carefully composed and intended as a ‘publication’) and weblogging (kind of what I am doing here when I bookmark interesting things I have been reading, or simply a diary of thoughts and observations). Among the surprisingly large number of good points that he makes in such a short post is that a weblog is best seen as a single evolving entity, not as a bunch of individual posts:

Blogging is distinct from journalism or formal writing: you jot down your thoughts and hit “publish”. And then you move on. There isn’t an editorial process, and mistakes are an accepted part of the game. It’s raw.

A consequence of this frequent, short posting is that the product isn’t a single post: it’s the weblog itself. Your website becomes a single stream of consciousness, where one post can build on another. The body of knowledge that develops is a reflection of your identity; a database of thoughts that you’ve put out into the world.

This is in contrast to a series of thinkpieces, which are individual articles that live by themselves. With a thinkpiece, you’re writing an editorial; with a blog, you’re writing the book of you, and how you think.

This is a good distinction. I also think that, especially in the posts of popular bloggers like Ben, the blog also comprises the comments, trackbacks, and pings that develop around it, as well as tweets, pins, curations, and connections made in other social media. Ideas evolve in the web of commentary and become part of the thing itself. The post is a catalyst and attractor, but it is only part of the whole, at least when it is popular enough to attract commentary.

This distributed and cooperative literary style can also be seen in other forms of interactive publication and dialogue – a Slashdot or Reddit thread, for instance, can sometimes be an incredibly rich source of knowledge, as can dialogue around a thinkpiece, or (less commonly) the comments section of online newspaper articles. What makes the latter less commonly edifying is that their social form tends to be that of the untarnished set, perhaps with a little human editorial work to weed out the more evil or stupid comments: basically, what matters is the topic, not the person. Untarnished sets are a magnet for trolls, and their impersonal nature that obscures the individual can lead to flaming, stupidity, and extremes of ill-informed opinion that crowd out the good stuff. Sites like Slashdot, StackExchange, and Reddit are also mostly set-based, but they use the crowd and an algorithm (a collective) to modulate the results, usually far more effectively than human editors, as well as to provide shape and structure to dialogues, so that dialogues become useful and informative. At least, they do when they work: none are close to perfect (though Slashdot, when used well, is closer than the rest because its algorithms and processes are far more evolved and far more complex, and individuals have far more control over the modulation) but the results can often be amazingly rich.

Blogs, though, tend to develop the social form of a network, with the blogger(s) at the centre. It’s a more intimate dialogue, more personal, yet also more public as they are almost always out in the open web, demanding no rituals of joining in order to participate, no membership, no commitment other than to the person writing the blog. Unlike dedicated social networks there is no exclusion, no pressure to engage, no ulterior motives of platforms trying to drive engagement, less trite phatic dialogue, more purpose, far greater ownership and control. There are plenty of exceptions that prove the rule and plenty of ways this egalitarian structure can be subverted (I have to clean out a lot of spam from my own blogs, for instance) but, as a tendency, it makes blogs still very relevant and valuable, and may go some way to explaining why around a quarter of all websites now run on WordPress, the archetypal blogging platform.

Address of the bookmark: https://words.werd.io/the-return-of-the-weblog-f6b702a7cf99

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2740999/the-return-of-the-weblog-%E2%80%93-ethical-tech

Strategies for successful learning at AU

Earlier today I responded to a prospective student who was, amongst other things, seeking advice on strategies for success on a couple of our self-paced programming courses. My response was just a stream of consciousness off the top of my head but I think it might be useful to others. Here, then, with some very light editing to remove references to specific courses, are a few fairly random thoughts on how to succeed on a self-paced online programming course (and, for the most part, other courses) at Athabasca University. In no particular order:

  • Try to make sure that people close to you know what you are doing and, ideally, are supportive. Other people can really help, not just for the mechanical stuff but for the emotional support. Online learning, especially the self-paced form we use, can feel a bit isolating at times, but there are lots of ways to close the gap and they aren’t all found in the course materials and processes. Find support wherever you can.
  • Make a schedule and try to keep to it, but don’t blame yourself if your deadlines slip a bit here and there – just adjust the plan. The really important thing is that you should feel in control of the process. Having such control is one of the huge benefits of our way of teaching, but you need to take ownership of the process yourself in order to experience the benefits.
  • If the course provides forums or other social engagement try to proactively engage in them. Again, other people really help.
  • You will have way more freedom than those in traditional classrooms, who have to follow a teacher simply because of the nature of physics. However, that freedom is a two-edged sword as you can sometimes be swamped with choices and not know which way to go. If you are unsure, don’t be afraid to ask for help. But do take advantage of the freedom. Set your own goals. Look for the things that excite you and explore further. Take breaks if you are getting tired. Play. Take control of the learning process and enjoy the ride.
  • Enjoy the challenges. Sometimes it will be hard, and you should expect that, especially in programming courses like these. Programming can be very frustrating at times – after 35 years of programming I can still spend days on a problem that turns out to involve a misplaced semi-colon! Accept that, and accept that even the most intractable problems will eventually be solved (and it is a wonderful feeling when you do finally get it to work). Make time to sleep on it. If you’re stuck, ask for help.
  • Get your work/life/learning balance right. Be realistic in your aspirations and expect to spend many hours a week on this, but make sure you make time to get away from it.
  • Keep a learning journal, a reflective diary of what you have done and how you have addressed the struggles, even if the course itself doesn’t ask for one. There are few more effective ways to consolidate and connect your learning than to reflect on it, and it can help to mark your progress: good to read when your motivation is flagging.
  • Get used to waiting for responses and find other things to learn in the meantime. Don’t stop learning because you are waiting – move on to something else, practice something you have already done, or reflect on what you have been doing so far.
  • Programming is a performance skill that demands constant and repeated practice. You just need to do it, get it wrong, do it again, and again, and again, until it feels like second nature. In many ways it is like learning a musical instrument or maybe even driving. It’s not something you can learn simply by reading or by being told, you really have to immerse yourself in doing it. Make up your own challenges if you run out of things to do.
  • Don’t just limit yourself to what we provide. Find forums and communities with appropriate interests. I am a big fan of StackOverflow.com for help and inspiration from others, though relevant subreddits can be useful and there are many other sites and systems dedicated to programming. Find one or two that make sense to you. Again, other people can really help.

Online learning can be great fun as long as you are aware of the big differences, primarily relating to control and personal agency. Our role is to provide a bit of structure and a supportive environment to enable you to learn, rather than to tell you stuff and make you do things, which can be disconcerting at first if you are used to traditional classroom learning. This puts more pressure on you, and more onus on you to organize and manage your own learning, but don’t ever forget that you are not ever really alone – we are here to help.

In summary, I think it really comes down to three big things, all of which are really about motivation, and all of which are quite different when learning online compared to face-to-face:

  1. Autonomy – you are in control, but you must take responsibility for your own learning. You can always delegate control to us (or others) when the going gets hard or choices are hard to make, but you are always free to take it back again, and there will be no one standing over you making you do stuff apart from yourself.
  2. Competence – there are few things more satisfying than being able to do more today than you could do yesterday. We provide some challenges and we try to keep them difficult-but-achievable at every stage along the way, but it is a great idea for you to also seek your own challenges, to play, to explore, to discover, especially if the challenges we offer are too difficult or too boring. Reflection can help a lot with this, as a means to recognize what, how, and why you have learned.
  3. Relatedness – never forget the importance of other people. You don’t have to interact with them if you don’t want to do so (that’s another freedom we offer), but it is at the very least helpful to think about how you belong in our community, your own community, and the broader community of learners and programmers, and how what and how you are learning can affect others (directly or indirectly).

This advice is by no means comprehensive! If you have other ideas or advice, or things that have worked for you, or things that you disagree with, do feel free to share them in the comments.

SCIS makes a great showing at HCI 2017, Vancouver


Ali Dewan presenting at HCI 2017

I had the pleasure to gatecrash the HCI 2017 conference in Vancouver today, which gave me the chance to see Dr Ali Dewan present three excellent papers in a row (two with his name on them) on a variety of themes, as well as a great paper written and presented by one of our students, Miao-Han Chang.

Miao-Han Chang presenting

Both did superb jobs of presenting to a receptive crowd. Ali got particular acclaim from the audience for the first work he presented (Combinatorial Auction based Mechanism Design for Course Offering Determination, by Anton Vassiliev, Fuhua Lin & M. Ali Akber Dewan) for its broad applicability in many areas beyond scheduling courses.

Athabasca, and especially the School of Computing and Information Systems, has made a great showing at this prestigious conference, with contributions not just from Ali and Miao-Han, but also from Oscar (Fuhua) Lin, Dunwei Wen, Maiga Chang and Vive Kumar. Kurt Reifferscheid and Xiaokun Zhang also had a paper in the proceedings but were sadly not able to attend to present it.


Jon Dron and Ali Dewan at HCI 2017

Jon and Ali at the Vancouver Conference Centre after Ali’s marathon presentation stint. I detect a look of relief on Ali’s face!


Ali Dewan presenting

Papers

  • Combinatorial Auction based Mechanism Design for Course Offering Determination
    Anton Vassiliev, Fuhua Lin, M. Ali Akber Dewan, Athabasca University, Canada
  • Enhance the Use of Medical Wearables through Meaningful Data Analytics
    Kurt Reifferscheid, Xiaokun Zhang, Athabasca University, Canada
  • Classification of Artery and Vein in Retinal Fundus Images Based on the Context-Dependent Features
    Yang Yan, Changchun Normal University, P.R. China; Dunwei Wen, M. Ali Akber Dewan, Athabasca University, Canada; Wen-Bo Huang, Changchun Normal University, P.R. China
  • ECG Identification Based on PCA-RPROP
    Jinrun Yu, Yujuan Si, Xin Liu, Jilin University, P.R. China; Dunwei Wen, Athabasca University, Canada; Tengfei Luo, Jilin University, P.R. China; Liuqi Lang, Zhuhai College of Jilin University, P.R. China
  • Usability Evaluation Plan for Online Annotation and Student Clustering System – A Tunisian University Case
    Miao-Han Chang, Athabasca University, Canada; Rita Kuo, New Mexico Institute of Mining and Technology, United States; Fathi Essalmi, University of Kairouan, Tunisia; Maiga Chang, Vive Kumar, Athabasca University, Canada; Hsu-Yang Kung, National Pingtung University of Science and Technology, Taiwan

Athabasca’s bright future

Tony Bates

The always excellent Tony Bates provides a very clear summary of Ken Coates's Independent Third-Party Review of Athabasca University, released a week or two ago, and, as usual, offers a great critical commentary as well as some useful advice on next steps.

Tony rightly points out that our problems are more internal than external, and that the solutions have to come from us, not from outside. To a large extent he hits the nail right on the head when he notes:

Major changes in course design, educational technology, student support and administration, marketing and PR are urgently needed to bring AU into advanced 21st century practice in online and distance learning. I fear that while there are visionary faculty and staff at AU who understand this, there is still too much resistance from traditionalists and those who see change as undermining academic excellence or threatening their comfort zone.

It is hard to disagree. But, though there are too many ostriches among our staff and we do have some major cultural impediments to overcome, it is far less our people that impede our progress than our design itself, and the technologies – especially the management technologies – of which it consists. That must change, as a corequisite to changing the culture that goes along with it. With some very important exceptions (more on that below) our culture is almost entirely mediated through our organizational and digital technologies, most notably in the form of very rigid processes, procedures and rules, but also through our IT. Our IT should, but increasingly does not, embody those processes. The processes still exist, of course – it’s just that people have to perform them instead of machines. Increasingly often, to make matters worse, we shape our processes to our ill-fitting IT rather than vice versa, because the ‘technological debt’ of adapting them to our needs and therefore having to maintain them ourselves is considered too great (a rookie systems error caused by splitting IT into a semi-autonomous unit that has to slash its own costs without considering the far greater price paid by the university at large). Communication, when it occurs, is almost all explicit and instrumental. We do not yet have enough of the tacit flows of knowledge and easy communication that patch over or fix the (almost always far greater) flaws that exist in such processes in traditional bricks and mortar institutions. The continual partial attention and focused channels of communication resulting from working online mean that we struggle with tacit knowledge and the flexibility of embedded dialogue in ways old-fashioned universities never have to even think about. One of the big problems with being so process-driven is that, especially in the absence of richer tacit communication, it is really hard to change those processes, especially because they have evolved to be deeply entangled with one another – changing one process almost always means changing many, often in structurally separate parts of the institutional machine, and involves processes of its own that are often entangled with those we set out to change. As a result, for much of its operation, our university does what it does despite us, not because of us. Unlike traditional universities, we have nothing else to fall back on when it fails, or when things fall between the cracks. And, though we likely have far fewer than most traditional universities, there are still very many cracks to fall through.

This, not coincidentally, is exactly true of our teaching too. We are pretty darn good at doing what we explicitly intend to do: our students achieve learning outcomes very well, according to the measures we use. AU is a machine that teaches, which is fine until we want the machine to do more than what it is built to do or when other, faster, lighter, cheaper machines begin to compete with it.  As well as making it really hard to make even small changes to teaching, what gets lost – and what matters about as much as what we intentionally teach – is the stuff we do not intend to teach, the stuff that makes up the bulk of the learning experience in traditional universities, the stuff where students learn to be, not just to do. It’s whole-person learning. In distance and online learning, we tend to just concentrate on parts we can measure and we are seldom even aware of the rest. There is a hard and rigid boundary between the directed, instrumental processes and the soft, invisible patterns of culture and belonging, beyond which we rarely cross. This absence is largely what gives distance learning a bad reputation, though it can be a strength if focused teaching of something well-defined is exactly what is needed, or if students are able to make the bigger connections in other ways (true of many of our successful students), when the control that the teaching method provides is worth all the losses and where a more immersive experience might actually get in the way. But it’s a boundary that alienates a majority of current and prospective students. A large percentage of even those we manage to enrol and keep with us would like to feel more connected, more a part of a community, more engaged, more belonging. A great many more don’t even join us in the first place because of that perceived lack, and a very large number drop out before submitting a single piece of work as a direct result.

This is precisely the boundary that the Landing is intended to be a step towards breaking down.

https://landing.athabascau.ca/file/view/410777/video-decreasing-the-distance

If we cannot figure out how to recover that tacit dimension, there is little chance that we can figure out how to teach at a distance in a way that differentiates us from the crowd and that draws people to us for the experience, rather than for the qualification. That is not quite fair: some of us will. If you get the right (deeply engaged) tutor, or join the right (social and/or open) course, or join the Landing, or participate in local meet-ups, or join other social media groups, you may get a fair bit of the tacit, serendipitous, incidental learning and knowledge construction that typifies a traditional education. Plenty of students do have wonderful experiences learning with others at AU, be it with their tutors or with other students. We often see those ones at convocation – ones for whom the experience has been deep, meaningful, and connected. But, for many of our students and especially the ones that don’t make it to graduation (or even to the first assignment), the chances of feeling that you belong to something bigger, to learn from others around you, to be part of a richer university experience, are fairly low. Every one of our students needs to be very self-directed, compared with those in traditional institutions – that’s a sine qua non of working online – but too many get insufficient support and too little inspiration from those around them to rise beyond that or to get through the difficult parts. This is not too surprising, given that we cannot do it for ourselves either. When faced with complicated things demanding close engagement, too many of our staff fall back on the comfortable, easy solution of meeting face to face in one of our various centres rather than taking the hard way, and so the system remains broken. This can and will change.

Moving on

I am much heartened by the Coates report which, amongst other things but most prominently and as our central value proposition, puts our leadership in online and distance education at the centre of everything. This is what I have unceasingly believed we should do since the moment I arrived. The call to action of Coates’s report is fundamentally to change our rigid dynamic, to be bold, to innovate without barriers, to evolve, to make use of the astonishingly good resources – primarily our people – to (again) lead the online learning world. As a virtual institution this should be easier than it would be for others but, perversely, it is exactly the opposite. This is for the aforesaid reasons, and also because the boundaries of our IT systems create the boundaries of our thinking, and embed processes more deeply and more inflexibly than almost any bricks and mortar establishment could hope to do. We need soft systems, fuzzy systems, adaptable systems, agile systems for our teaching, research, and learning community development, and we need hard systems, automated systems, custom-tailored, rock-solid systems for our business processes, including the administrative and assessment-recording outputs of the teaching process. This is precisely the antithesis of what we have now. As Coates puts it:

“AU should rebrand itself as the leading Canadian centre for online learning and twenty-first century educational technology. AU has a distinct and potentially insurmountable advantage. The university has the education technology professionals needed to provide leadership, the global reputation needed to attract and hold attention, and the faculty and staff ready to experiment with and test new ideas in an area of emerging national priority. There is a critical challenge, however. AU currently lacks the ICT model and facilities to rise to this opportunity.”

We live in our IT…

We have long been challenged with our IT systems, but things were not always so bad. Over the past few years our ICT model has turned 180 degrees away from one that would support continuing evolution and innovation, driven by people that know little about our core mission and that have failed to understand what makes us special as a university. The best defence offered for these poor decisions is usually that ‘most other universities are doing it,’ but we are not most other universities. ICTs are not just support tools or performance enhancers for us. We are our IT. It is our one and only face to our students and the world. Without IT, we are literally nothing. We have massively underinvested in developing our IT, and what we have done in recent years has destroyed our lead, our agility, and our morale. Increasingly, we have rented generic, closed, off-the-shelf cloud-based applications that would be pretty awful in a factory, that force us into behaviours that make no sense, that sap our time and will, and that are so deeply inappropriate for our unique distributed community that they stifle all progress, and cut off almost all avenues of innovation in the one area that we are best placed to innovate and lead. We have automated things that should not be automated and let fall into disrepair the things that actually give us an edge. For instance, we rent an absurdly poor CRM system to manage student interactions, building a call centre for customers when we should be building relationships with students, embedding our least savoury practices of content delivery still further, making tweaks to a method of teaching that should have died when we stopped using the postal service for course packs. Yes, when it works, it incrementally improves a broken system, so it looks OK (not great) on reports, but the system it enhances is still irrevocably broken and, by further tying it into a hard embodiment in an ill-fitting application, the chances of fixing it properly diminish further. And, of course, it doesn’t work, because we have rented an ill-fitting system designed for other things with little or no consideration of whether it meets more than coarse functional needs. This can and must change.

Meanwhile, we have methodically starved the environments that are designed for us, through which we have innovated in the past, and that could allow us to evolve. Astonishingly, we have had no (as in zero) central IT support for research for years now, getting by on a wing and a prayer, grabbing bits of overtime where we can, or using scarce, poorly integrated departmental resources. Even very well-funded and well-staffed projects are stifled by this, because almost all of our learning technology innovations are completely reliant on access not only to central services (class lists, user logins, LMS integration, etc) but also to the staff who are able to perform integrations, manage servers, install software, configure firewalls, and so on. We have had a 95%-complete upgrade for the Landing sitting in the wings for nearly 2 years, unable to progress due to lack of central IT personnel to implement it, even though we have sufficient funds to pay for them and then some, and the Landing is actively used by thousands of people. Even our mainstream teaching tools have been woefully underfunded and undermined: we run a version of Moodle that is past even its security update period, for instance, and that creaks along only thanks to a very small but excellent team supporting it. Tools supporting more innovative teaching with more tenuous uptake, such as Mahara and OpenSIM servers, are virtual orphans, riskily trundling along with considerably less support than even the Landing.

This can and will change.

… but we are based in Athabasca

There are other things in Coates’s report that are given a very large emphasis, notably advice to increase our open access, particularly through forming more partnerships with Northern Albertan colleges serving indigenous populations (good – and we will need smarter, more human, more flexible, more inclusive systems for that, too), but mainly a lot of detailed recommendations about staying in Athabasca itself. This latter recommendation seems to have been forced upon Coates, and it comes with many provisos. Coates is very cognizant of the fact that being based in the remote, run-down town of Athabasca is, has been, and will remain a huge and expensive hobble. He mostly skims over sensitive issues like the difficulty of recruiting good people to the town (a major problem that is only slightly offset by the fact that, once we have got them there, they are quite unlikely to leave), but makes it clear that it costs us very dearly in myriad other ways.

“… the university significantly underestimates the total cost of maintaining the Athabasca location. References to the costs of the distributed operation, including commitments in the Town of Athabasca, typically focus on direct transportation and facility costs and do not incorporate staff and faculty time. The university does not have a full accounting of the costs associated with their chosen administrative and structural arrangements.”

His suggestions, though making much of the value of staying in Athabasca and heavily emphasizing the importance of its continuing role in the institution, involve moving a lot of people and infrastructure out of it and doing a lot of stuff through web conferencing. He walks a tricky political tightrope, trying to avoid the hot potato of moving away while suggesting ways that we should leave. He is right on both counts.

Short circuits in our communications infrastructure

Though cost, lack of decent ICT infrastructure, and difficulties recruiting good people are factors in making Athabasca a hobble for us, the biggest problem is, again, structural. Unlike those of us working online, those living and working in the town of Athabasca itself enjoy all the traditional knowledge flows without impediment, almost always to the detriment of more inclusive online ways of communicating. Face-to-face dialogue inevitably short-circuits online engagement – always has, always will. People in Athabasca, as any humans would and should, tend to talk among themselves, and tend to communicate with others online, as the rest of us do, only in directed, intentional ways. This might not be so bad were it not for the fact that Athabasca is very unrepresentative of the university population as a whole, containing the bulk of our administrators, managers, and technical staff, with fewer than 10 actual faculty in the region. This is a separate subculture – it is not the university – but it has enormous sway over how we evolve. It is not too surprising that our most critical learning systems account for only about 5% of our IT budget, because that side of things is barely heard of among the decision-makers and implementers who live there, and they only indirectly have to face the consequences of its failings (a matter made much worse by the way we disempower the tutors who have to deal with them most of all, and filter their channels of communication through just a handful of obligated committee members). It is no surprise that channels of communication are weak when those who design and maintain them can easily bypass the problems they cause. In fact, if there were more faculty there, it would be even worse, because then we would never face any of the problems encountered by our students. Further concentrations of staff in Edmonton (where most faculty reside), St Albert (mainly our business faculty) and Calgary do not help one bit, simply building further enclaves, which again lead to short circuits in communication and isolated self-reinforcing clusters that distort our perspectives and reduce online communication. Ideas, innovations, and concerns do not spread, because of hierarchies that isolate them, filter them as they move up through the hierarchy, and dissipate them in Athabasca. Such clustering could be a good part of the engine that drives adaptation: natural ecosystems diversify thanks to parcellation. However, that’s not how it works here, thanks to the aforementioned excess of structure and process and the fact that those clusters are far from independently evolving. They are subject to the same rules and the same selection pressures as one another, unable to evolve independently because they are rigidly, structurally, and technologically bound to the centre. This is not evolution – it is barely even design, though every part of it has been designed and top-down structures overlay the whole thing. It’s a side effect of many small decisions that, taken as a whole, result in a very flawed system.

This can and must change.

The town of Athabasca and what it means to us

Athabasca high street

Though I have made quite a few day trips to Athabasca over the years, I had never stayed overnight until around convocation time this year. Though it was a busy few days so I only had a little chance to explore, I found it to be a fascinating place that parallels AU in many ways. The impression it gives is of a raw, rather broken-down and depressed little frontier town of around 4,000 souls (a village by some reckonings) and almost as many churches. It was once a thriving staging post on the way to the Klondike gold rush, when it was filled with the rollicking clamour of around 20,000 prospectors dreaming of fortunes. Many just passed through, but quite a few stayed, helping to define some of its current character but, when the gold rush died down, there was little left to sustain a population. Much of the town still feels a bit temporary, still a bit of a campground waiting to turn into a real town. Like much of Northern Alberta, its fortunes in more recent years have been significantly bound to the oil business – an industry with no viable future and the morals of an errant crow – and tied to its roller-coaster ups and downs. There are signs that money has been around, from time to time: a few nice buildings, a bit of landscaping here and there, a memorial podium at Athabasca Landing. But there are bigger signs that it has left.

Athabasca Landing

Today, Athabasca’s bleak main street is filled with condemned buildings, closed businesses, discount stores, and shops with ‘sale’ signs in their windows. There are two somewhat empty town centre pubs, where a karaoke night in one will denude the other of almost all its customers.

There are virtually no transit links to the outside world: one Greyhound bus from Edmonton (2 hours away) comes through it, in the dead of night, and passenger trains stopped running decades ago. The roads leading in and out are dangerous: people die way too often getting there, including one of our most valued colleagues in my own school. It is never too far from being reclaimed by the forces of nature that surround it. Moose, bear, deer, and coyotes wander fairly freely. Minus forty temperatures don’t help, nor does a river that is pushed too hard by meltwaters from the rapidly receding Athabasca Glacier and that is increasingly polluted by the side-effects of oil production.

Athabasca

So far so bleak. But there are some notable upsides too. The town is full of delightfully kind, helpful, down-to-earth people infused with that wonderful Canadian spirit of caring for their neighbours, grittily facing the elements with good cheer, getting up early, eating dinner in the late afternoon, gathering for potlucks in one another’s houses, and organizing community get-togethers. The bulk of housing is well cared-for, set in well-tended gardens, in quiet, neat little streets. I bet most people there know their neighbours and their kids play together. Though tainted by its ties with the oil industry, the town comes across as, fundamentally, a wholesome centre for homesteaders in the region, self-reliant and obstinately surviving against great odds by helping one another and helping themselves. The businesses that thrive are those selling tools, materials, and services to build and maintain your farm and house, along with stores for loading your provisions into your truck to get you through the grim winters. It certainly helps that a large number of residents are employees of the university, providing greater diversity than is typically found in such settlements, but they are frontier folk like the rest. They have to be.

It would be unthinkable to pull the university out at this point – it would utterly destroy an already threatened town and, I think, it would cause great damage to the university. This was clearly at the forefront of Coates’s mind, too. The solution is not to withdraw from this strange place, but to dilute and divert the damage it causes and perhaps, even, to find ways to use its strengths. Greater engagement with Northern communities might be one way to save it – we have some big, largely empty buildings up there that will be getting emptier, and they might not be a bad place for some face-to-face branching out, perhaps semi-autonomously, perhaps in partnership with colleges in the region. It also has potential as a place for a research retreat, though it is not exactly a Mecca that would draw people to it, especially without transit links to sustain it. The well-designed research centre there cost a fortune to build, though, so it would be nice to get some use out of it.

Perhaps more importantly, we should not pull out because Athabasca is a part of the soul of the institution. It is somehow fitting that Athabasca University has – not without resistance – had its fortunes tied to this town. Athabasca is kind-of who we are and, to a large extent, defines who we should aspire to be. As an institution we are, right now, a decaying frontier town on the edge of civilization that was once a thriving metropolis, forced to help ourselves and one another battle with the elements, a caring bunch of individuals bound by a common purpose but stuck in a wilderness that cares little for us and whose ties with the outside world are fickle, costly, and tenuous. Athabasca is certainly a hobble but it is our hobble and, if we want to move on, we need to find ways to make the best of it – to find value in it, to move away from it the people and things that it impedes the most, at least where we can, but to build upon it as a mythic hub that helps to define our identity, a symbolic centre for our thinking. We can and will help ourselves and one another to make it great again. And we have a big advantage that our home town lacks: a renewable and sustainable resource and product. Very much unlike Athabasca the town, the source of our wealth is entirely in our people, and the means we have for connecting them. We have the people already: we just need to refocus on the connection.

The cost of admission to the unlearning zone

Picture of dull classroom (public domain)

I describe some of what I do as ‘unteaching’, so I find this highly critical article by Miss Smith – The Unlearning Zone – interesting. Miss Smith dislikes the terms ‘unteaching’ and ‘unlearning’ for some well-expressed aesthetic and practical reasons: as she puts it, they are terms “that would not be out of place in a particularly self-satisfied piece of poststructuralist literary analysis circa 1994.” I partially agree. However, she also seems equally unenamoured with what she thinks they stand for. I disagree with her profoundly on this so, as she claims to be new to these terms, here is my attempt to explain a little about what I mean by them, why I think they are a useful part of the educators’ lexicon, and why they are crucially important for learners’ development in general.

First the terms…

Yes, ‘unteaching’ is an ugly neologism and it doesn’t really make sense: that’s part of the appeal of using it – a bit of cognitive dissonance can be useful for drawing attention to something. However, it is totally true that someone who is untaught is just someone who has not (yet) been taught, so ‘unteaching’, seen in that light, is at best pointless, at worst self-contradictory. On the other hand, it does seem to follow pretty naturally from ‘unlearning’ which, contrary to Miss Smith’s assertion, has been in common use for centuries and makes perfect sense. Have you ever had to unlearn bad habits? Me too.

As I understand it, ‘unteach’ is to ‘teach’ as ‘undo’ is to ‘do’. Unteaching is still teaching, just as undoing is still doing, and unlearning is still learning. Perhaps ‘deteaching’ would be a better term. Whatever we choose to call it, unteaching is concerned with intentionally dismantling the taught belief that teaching is about exerting power over learners, and replacing it with the attitude that teachers are there to empower learners to learn. This is not a particularly radical idea. It is what all teachers should do anyway, I reckon. But it is worth drawing attention to it as a distinct activity because it runs counter to the tide, and the problem it addresses is virtually ubiquitous in education up to, and sometimes at, doctoral level.

Traditional teaching of the sort Miss Smith seems to defend in her critique does a lot more than teach a subject, skill, or way of thinking. It teaches that learning is a chore that is not valuable in and of itself, that learners must be forced to do it for some other purpose, often someone else’s purpose. It teaches that teaching is something done to students by a teacher: at its worst, it teaches that teaching is telling; at best, that teaching involves telling someone to do something. It’s not that (many) teachers deliberately seek these outcomes, but that they are the most likely lessons to be learned, because they are the ones that are repeated most often. The need for unteaching arises because traditional teaching – in addition to whatever it intends to teach, which it imparts only with luck – teaches some terrible lessons about learning, and about the role of teaching in that process, that must be unlearned.

What is unteaching?

Miss Smith claims that unteaching means “open plan classes, unstructured lessons and bean bags.” That’s not the way I see it at all. Unlike traditional teaching, with its timetables, lesson plans, learning objectives, and uniform tests, unteaching does not have its own technologies and methods, though it does, for sure, tend to be a precursor to connectivist, social constructivist, constructionist, and other more learner-centred ways of thinking about the learning process, which may sometimes be used as part of the process of unteaching itself. Such methods, models, and attitudes emerge fairly naturally when you stop forcing people to do your bidding. However, they are just as capable of being used in a controlling way as the worst of instructivist methods: reports on such interventions that include words like ‘students must…’, ‘I make my students…’ or (less blatantly) ‘students (do X)’ far outnumber all others, and that is the very opposite of unteaching. The specific technologies (including pedagogies as much as open-plan classrooms and beanbags) are not the point. Lectures, drill-and-practice and other instructivist methods are absolutely fine, as long as:

  1. they at least attempt to do the job that students want or need,
  2. they are willingly and deliberately chosen by students,
  3. students are well-informed enough to make those choices, and
  4. students can choose to learn otherwise at any time.

No matter how cool and groovy your problem-based, inquiry-based, active methods might be, if they are imposed on students (especially with the use of threats for non-compliance and rewards for compliance – e.g. qualifications, grades, etc) then it is not unteaching at all: it’s just another way of doing the same kind of teaching that caused the problem in the first place. But if students have control – and ‘control’ includes being able to delegate control to someone else who can scaffold, advise, assist, instruct, direct, and help them when needed, as well as being able to take it back whenever they wish – then such methods can be very useful. So can lectures. To all those educational researchers who object to lectures, I ask whether they have ever found them valuable at a conference (and, if not, why did they go to a conference in the first place?). It’s not the pedagogy of lectures that is at fault. It’s the requirement to attend them and the accompanying expectation that people are going to learn what you are teaching as a result. That is, simply put, empirically wrong. It doesn’t mean that lecturees learn nothing. Far from it. But what you teach and what they learn are different kinds of animal.

Problems with unteaching

It’s really easy to be a bad unteacher – I think that is what Miss Smith is railing against, and it’s a fair criticism. I’m often pretty bad at it myself, though I have had a few successes along the way too. Unteaching and, especially, the pedagogies that result from having done unteaching, are far more likely to go wrong, and they take a lot more emotional, intellectual, and social effort than traditional teaching because they don’t come pre-assembled. They have no convenient structures and processes in place to do the teaching for you.  Traditional teaching ‘works’ even when it doesn’t. If you throw someone into a school system, with all its attendant rewards, punishments, timetables, rules and curricula, and if you give them the odd textbook and assessment along the way, then most students will wind up learning something like what is intended to be taught by the system, no matter how awful the teachers might be. In such a system, students will rarely learn well, rarely persistently, rarely passionately, seldom kindly, and the love of learning will have been squashed out of many of them along the way (survivors often become academics and teachers themselves). But they will mostly pass tests at the end of it. With a bit of luck many might even have gained a bit of useful knowledge or skill, albeit that much will be not just wasted and forgotten as easily as a hotel room number when your stay is over, but actively disliked by the end of it. And, of course, they will have learned dependent ways of learning that will serve them poorly outside institutional systems.

To make things far worse, those very structures that assist the traditional teacher (grades, compulsory attendance, fixed outcomes, concept of failure, etc) are deeply antagonistic to unteaching and are exactly why it is needed in the first place. Unteachers face a huge upstream struggle against an overwhelming tide that threatens to drown passionate learning every inch of the way. The results of unteaching can be hard to defend within a traditional educational system because, by conventional measures, it is often inefficient and time-consuming. But conventional measures only make sense when you are trying to make everyone do the same things, through the same means, with the same ends, measured by and in order to meet the same criteria. That’s precisely the problem.

The final nail in unteaching’s coffin is that it is applied very unevenly across the educational system, so every freedom it brings is counterbalanced by a mass of reiterated antagonistic lessons from other courses and programs. Every time we unteach someone, two others reteach them.  Ideally, we should design educational systems that are friendlier to and more supportive of learner autonomy, and that are (above all else) respectful of learners as human beings. In K-12 teaching there are plenty of models to draw from, including Summerhill, Steiner (AKA Waldorf) schools, Montessori schools, Experiential Learning Schools etc. Few are even close to perfect, but most are at least no worse than their conventional counterparts, and they start with an attitude of respect for the children rather than a desire to make them conform. That alone makes them worthwhile. There are even some regional systems, such as those found in Finland or (recently) British Columbia, that are heading broadly in the right direction. In universities and colleges there are plenty of working models, from Oxford tutorials to Cambridge supervisions, to traditional theses and projects, to independent study courses and programs, to competency-based programs, to PLAR/APEL portfolios, and much more. It is not a new idea at all. There is copious literature and many theoretical models that have stood the test of time, from andragogy to communities of practice, through to teachings from Freire, Illich, Dewey and even (a bit quirkily) Vygotsky. Furthermore, generically and innately, most distance and e-learning unteaches better than its p-learning counterparts because teachers cannot exert the same level of control and students must learn to learn independently. Sadly, much of it is spoiled by coercing students with grades, thereby providing the worst of both worlds: students are forced to behave as the teacher demands in their terminal behaviours but, without physical copresence, are less empowered by guidance and emotional/social support with the process. Much of my own research and teaching is concerned with inverting that dynamic – increasing empowerment and social support through online learning, while decreasing coercion. I’d like to believe that my institution, Athabasca University, is largely dedicated to the same goal, though we do mostly have a way to go before we get it right.

Why it matters

Unteaching is to a large extent concerned with helping learners – including adult learners – to get back to the point at which most children start their school careers: driven by curiosity, personal interest, social value, joy, and delight – a drive that is schooled out of them over years of being taught dependency. Once misconceptions about what education is for, what teachers do, and how we learn have been removed, teaching can happen much more effectively: supporting, nurturing, inspiring, challenging, responding, etc, but not controlling, not making students do things they are not ready to do for reasons that mean little to them and have even less to do with what they are learning.

However, though it is an immensely valuable terminal outcome, improved learning is perhaps not the biggest reason for unteaching. The real issue is moral: it’s simply the right thing to do. The greatest value is that students are far more likely to have been treated with the respect, care, and honour that all human beings deserve along the way. Not ‘care’ of the sort you would give to a dog when you train it to be obedient and well behaved. Care of the sort that recognizes and valorizes autonomy and diversity, that respects individuals, that cherishes their creativity and passion, that sees learners as ends in themselves, not products or (perish the thought) customers. That’s a lesson worth teaching, a way of being that is worth modelling. If that demands more effort, if it is more fallible, and if it means that fewer students pass your tests, then I’m OK with that. That’s the price of admission to the unlearning zone.

 

True costs of information technologies

Switchboard (public domain)

Microsoft unilaterally and quietly changed the spam filtering rules for Athabasca University’s O365 email system on Thursday afternoon last week. On Friday morning, among the usual 450 or so spams in my spam folder (up from around 70 per day in the old Zimbra system) were over 50 legitimate emails, including one to warn me that this was happening, claiming that our IT Services department could do nothing about it because it’s a vendor problem. Amongst junked emails were all those sent to the allstaff alias (including announcements about our new president), student work submissions, and many personal messages from students, colleagues, and research collaborators.

The misclassified emails continue to arrive, 5 days on.  I have now switched off Microsoft’s spam filter and switched to my own, and I have risked opening emails I would never normally glance at, but I have probably missed a few legitimate emails. This is perhaps the worst so far in a long line of ‘quirks’ in our new O365 system, including persistently recurring issues of messages being bounced for a large number of accounts, and it is not the first caused by filtering systems: many were affected by what seems to be a similar failure in the Clutter filter in May.

I assume that, on average, most other staff at AU have, like me, lost about half an hour per day so far to this one problem. We have around 1350 employees, so that’s around 675 hours – getting on for 100 working days – being lost every day it continues. This is not counting the inevitable security breaches, support calls, proactive attempts at problem solving, and so on, nor the time for recovery should it ever be fixed, nor the lost trust, lost motivation, the anger, the conversations about it, the people who will give up on it and redirect emails to other places (in breach of regulations and at great risk to privacy and security, but when it’s a question of being able to work vs not being able to work, no one could be blamed for that). The hours I have spent writing this might be added to that list, but this happens to relate very closely indeed to my research interests (a great case study and catalyst for refining my thoughts on this), so might be seen as a positive side-effect and, anyway, the vast majority of that time was ‘my own’: faculty very rarely work normal 7-hour days.

Every single lost minute per person every day equates to the time of around 3 FTEs when you have 1350 employees. When O365 is running normally it costs me around five extra minutes per day, when compared with its predecessor, an ancient Zimbra system.  I am a geek that has gone out of his way to eliminate many of the ill effects: others may suffer more.  It’s mostly little stuff: an extra 10-20 seconds to load the email list, an extra 2-3 seconds to send each email, a second or two longer to load them, an extra minute or two to check the unreliable and over-spammed spam folder, etc. But we do such things many times a day. That’s not including the time to recover from interruptions to our work, the time to learn to use it, the support requests, the support infrastructure, etc, etc.
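For transparency, here is the back-of-the-envelope arithmetic behind those figures, as a minimal sketch. The employee count and the minutes lost are the estimates above; the seven-hour working day is my own assumption, and a shorter effective day pushes the day-count higher still.

```python
# Back-of-envelope arithmetic for the estimates above (assumptions, not measured data).

EMPLOYEES = 1350
HOURS_PER_WORKING_DAY = 7  # assumed; a shorter effective day inflates the day-count

def staff_hours_lost_per_day(minutes_per_person: float) -> float:
    """Total staff hours lost per calendar day for a given per-person daily loss."""
    return EMPLOYEES * minutes_per_person / 60

def fte_equivalent(minutes_per_person: float) -> float:
    """How many full-time staff that daily loss amounts to."""
    return staff_hours_lost_per_day(minutes_per_person) / HOURS_PER_WORKING_DAY

# The spam-filter incident: roughly 30 minutes per person per day.
print(staff_hours_lost_per_day(30))   # 675.0 staff hours per calendar day
print(675 / HOURS_PER_WORKING_DAY)    # ~96 seven-hour working days, every day it continues

# The general rule of thumb: one lost minute per person per day is about 3 FTEs.
print(fte_equivalent(1))              # ~3.2
```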

To be fair, whether such time is truly ‘lost’ depends on the task. Those ‘lost’ seconds may be time to reflect or think of other things. The time is truly lost if we have to put effort into it (e.g. checking spam mail) or if it is filled with annoyance at the slow speed of the machine, but may sometimes simply be used in ways we would not otherwise use it.  I suspect that flittering attention while we wait for software to do its thing creates habits of mind that are both good and bad. We are likely more distracted, find it harder to concentrate for long periods, but we probably also develop different ways of connecting things and different ways of pacing our thinking. It certainly changes us, and more research is needed on how it affects us. Either way, time spent sorting legitimate emails from spam is, at least by most measures of productivity, truly time lost, and we have lost a lot of it.

Feeding the vampires

It goes without saying that, had we been in control of our own email system, none of this would have happened. I have repeatedly warned that putting one of the most central systems of our university into the hands of an external supplier, especially one with a decades-long history of poor software, broken or proprietary standards, weak security, inadequate privacy policies, vicious antagonism to competitors, and a predatory attitude to its users, is a really stupid idea. Microsoft’s goal is profit, not user satisfaction: sometimes the two needs coincide, often they do not. Breakages like this are just a small part of the problem. The worst effects are going to be on our capacity to innovate and adapt, though our productivity, engagement and workload will all suffer before the real systemic failures emerge. Microsoft had to try hard to sell it to us, but does not have to try hard to keep us using it, because we are now well and truly locked in on all sides by proprietary, standards-free tools that we cannot control, cannot replace, cannot properly understand, that change under our feet without warning, and that will inevitably insinuate themselves into our working lives. And it’s not just email and calendars (which at least use only slightly broken standards) but completely opaque, standards-free, proprietary tools like OneDrive, OneNote and Yammer. Now that we have lost standards compliance and locked ourselves in, we have made it unbelievably difficult to ever change our minds, no matter how awful things get. And they will get more awful, and the costs will escalate. This makes me angry. I love my university and am furious when I see it being destroyed by avoidable idiocy.

O365 is only one system among many similar tools that have been foisted upon us in the last couple of years, most of which are even more awful, if marginally less critical to our survival. They have replaced old, well-tailored, mostly open tools that used to just work: not brilliantly, seldom prettily, but they did the job fast and efficiently so that we didn’t have to. Our new systems make us do the work for them. This is the polar opposite of why we use IT systems in the first place, and it all equates to truly lost time, lost motivation, lost creativity, lost opportunity.

From leave reporting to reclaiming expenses to handling research contracts to managing emails, let’s be very conservative indeed and say that these new baseline systems just cost us an average of an extra 30 minutes per working day per person on top of what we had before (for me it is more like an hour; for others, more). If the average salary of an AU employee is $70,000/year, that’s $5,400,000 per year in lost productivity. It’s much worse than that, though, because the work that we are forced to do as a result is soul-destroying, prescriptive labour, fitting into a dominative system as a cog into a machine. I feel deeply demotivated by this, and that infects all the rest of my work. I sense similar growing disempowerment and frustration amongst most of my colleagues.
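Here is that estimate spelled out, as a rough sketch rather than an audited figure; the 7.5-hour working day is my own assumption, and varying it moves the total between roughly $5M and $7M, so the conservative number above is comfortably in range.

```python
# Rough sanity check on the lost-productivity estimate above (assumed figures, not audited ones).

EMPLOYEES = 1350
AVERAGE_SALARY = 70_000      # dollars per year, as assumed above
HOURS_PER_DAY = 7.5          # assumed length of a working day

def annual_salary_cost(minutes_lost_per_person_per_day: float) -> float:
    """Salary cost of the lost time, treated as a fraction of each working day."""
    fraction_of_day = minutes_lost_per_person_per_day / (HOURS_PER_DAY * 60)
    return EMPLOYEES * AVERAGE_SALARY * fraction_of_day

print(round(annual_salary_cost(30)))   # ~6,300,000 with these assumptions
print(round(annual_salary_cost(60)))   # roughly double that, for the hour a day some of us lose
```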

And it’s not just about the lost time of individuals. Almost always, other people in the system have to play a role that they did not play before (this is about management information systems, not just the digital tools), and there are often many iterations of double-checking and returned forms, because people tend to be very poor cogs indeed. For instance, the average time it takes for me to get recompense for expenses is now over 6 months, up from 2-4 weeks before. The time it takes simply to enter a claim is up from a few minutes to a few hours, often spread over months, and several other people’s time is also taken up by this process. Likewise, leave reporting is up from 2 minutes to at least 20 minutes, usually more, involving a combination of manual emails, tortuous per-hour entry, the ability to ask for and report leave on public holidays and weekends, and a host of other evils. As a supervisor, it is another world of pain: I have lost many hours to this, compounding the ‘mistakes’ of others with my own (when teaching computing, one of the things I often emphasize is that there is no such thing as user error: while users can make mistakes and do weird stuff we never envisaged, it is our failure to design things right that is the problem). This is not to mention the hours spent learning the new systems, or the effects on productivity, not just in time and motivation, but in preventing us from doing what we are supposed to do at all. I am doing less research, not just because my time is taken up with soul-destroying cog-work, but because it is seldom worth the hassle of claiming, or of trying to manage projects using badly designed tools that would fit better – though not well – in a factory. Worse, it becomes part of the culture, infecting other processes like ethics reviews, student-tutor interactions, and research and development. In an age when most of the world has shaken off the appalling, inhuman, and empirically wrong ideas of Taylorism, we are becoming more and more Taylorist. As McLuhan said, we shape our tools and our tools shape us.

To add injury to insult, these awful things actually cost money to buy and to run – often a lot more money than they were planned to cost, delivering far smaller savings than promised, or even losses, even in the IT Services department where they are justified on the grounds that they are supposed to be cutting costs. For instance, O365 cost nearly three times the initial estimates on which decisions were based, and it appears that it has not reduced the workload for those having to support it, nor the network traffic going in and out of the university (in fact it may be much worse), all the while costing us far more per year to access than the reliable and fully featured, albeit elderly, open source product it replaced. It also breaks a lot more. It is hard to see what we have gained here, though it is easy to see many losses.

Technological debt

The one justification for this suicidal stupidity is that our technological debt – the time taken to maintain, extend, and manage old systems – is unsustainable. So, the argument goes, if we just buy baseline tools without customization, and especially if we outsource the entire management role to someone else, we save money because we don’t have to do that any more.

This is – with more than due respect – utter bullshit.

Yes, there is a huge investment involved over years whenever we build tools to do our jobs and, yes, if we do not put enough resources into maintaining them then we will crawl to a halt because we are doing nothing but maintenance. Yes, combinatorial complexity and path dependencies mean that the maintenance burden will always continue to rise over time, at a greater-than-linear rate. The more you create, the more you have to maintain, and the connections between the things we create add to the complexity. That’s the price of having tools that work. That’s how systems work. Get over it. That’s how all technology evolves, including bureaucratic systems. Increasing complexity is inevitable and relentless in all technological systems, notwithstanding the occasional paradigm shift that kind-of starts the ball rolling again. Anyone who has stuck around in an organization long enough to see the long-term effects of their interventions would know this.
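To make the greater-than-linear claim concrete, here is a toy illustration (mine, not a formal model of maintenance cost): if every pair of components can in principle interact, the number of potential interactions grows roughly with the square of the number of components, so doubling what you build roughly quadruples what you may need to check whenever something changes.

```python
# Toy illustration of why maintenance burden grows faster than the number of things built:
# with n interdependent components there are up to n*(n-1)/2 pairwise interactions
# that may need attention whenever any one of them changes.

def potential_interactions(n_components: int) -> int:
    return n_components * (n_components - 1) // 2

for n in (10, 20, 40, 80):
    print(n, potential_interactions(n))
# 10 -> 45, 20 -> 190, 40 -> 780, 80 -> 3160
```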

These new baseline systems are no different, save in one way: rather than putting the work into making the machines work for us, we instead have to evolve, maintain and manage processes in which we do the work of machines. The complexity therefore falls on every single human being who has to enact the machine, not just on developers. This is crazy. Exactly the same work has to be done, with exactly the same degree of precision as that of the machines (actually more, because we have to add procedures to deal with the errors that software is less likely to make). It’s just that now it is done by slow, unreliable, fallible, amotivated human beings. For creative or problem-solving work, it would be a good thing to take away from machines the tasks that humans should be doing. For mechanistic, process-driven work, where human error means it breaks, it is either great madness, great stupidity, or great evil. There are no other options. At a time when our very survival is under threat, I cannot adequately express my deep horror that this is happening.

I suspect that the problem is in large part due to short-sighted local thinking, which is a commonplace failure in hierarchical systems, and one that gets worse the deeper and more divisive the hierarchies go. We only see our own problems, without understanding or caring about where we sit in the broader system. Our IT directors believe that their job is to save money in ITS (the department dealing with IT), rather than to save money for the university. But, not only are they outsourcing our complex IT functions to cloud-based companies (a terrible idea for the aforementioned reasons), they are outsourcing the work of information technologies to the rest of the university. The hierarchies mean a) that directors seldom get to see or hear of the trouble it causes, b) that they mix mainly with others at or near their hierarchical level, who do not see it either, and c) that they tend to see problems in caricature, not as detailed pictures of actual practices. As the hierarchies deepen and separate, those within a branch communicate less with others in parallel branches or those more than a layer above or below. Messages between layers are, by design, distorted and filtered. The more layers, the greater the distortion. People take further actions based on local knowledge, and their actions affect the whole tree. Hierarchies are particularly awful when coupled with creative work of the sort we do at Athabasca, or with fields where change is frequent and necessary. They used to work OK for factories that did not vary their output much and where everything was measurable though, in modern factories, that is rarely true any more. For a university, especially one that is online and that thus lacks many of the short circuits found in physical institutions, deepening hierarchies are a recipe for disaster. I suppose that it goes without saying that Athabasca University has, over the past few years, seen a huge deepening in those hierarchies.

True costs

Our university is in serious financial trouble that it would not be in were it not for these systems. Even if we had kept what we had, without upgrading, we would already be many millions of dollars better off, countless thousands of hours would not have been wasted, we would be far more motivated, we would be far more creative, and we would still have some brilliant people that we have lost as a direct result of this process. All of this would be of great benefit to our students and we would be moving forwards, not backwards. We have lost vital capacity to innovate, lost vital time to care about what we are supposed to be doing rather than working out how the machine works. The concept of a university as a machine is not a great one, though there are many technological elements and processes that are needed to make it run. I prefer to think of it like an ecosystem or an organism. As an online university, our ecosystem/body is composed of people and machines (tools, processes, methods, structures, rules, etc). The machinery is just there to support and sustain the people, so they can operate as a learning community and perform their roles in educating, researching and community engagement. The more that we have to be the machines, the less efficiently the machinery will run, and the less human we can all be. It’s brutal, ugly, and self-destructive.

When will we learn that the biggest costs of IT are to its end users, not to IT Services? We customized and created the tools that we have now replaced for extremely good reasons: to make our university and its systems run better, faster, more efficiently, more effectively. Our ever-growing number of new off-the-shelf and outsourced systems, which take ever more of our time and intellectual and emotional effort, have wasted and continue to waste countless millions of dollars, not to mention exacting huge costs in lost motivation and ill will, and in lost creativity and caring. In the process we have lost control of our tools, lost the expertise to run them, and lost the capability to innovate in the one field in which we, as an online institution, must and should have most expertise. This is killing us. Technological debt is not voided by replacing custom parts with generic pieces. It is transferred, at a usurious rate of interest, to those who must replace the lost functionality with human labour.

It won’t be easy to reverse this suicidal course, and I would not enjoy being the one tasked with doing so. Those who were involved in implementing these changes might find it hard to believe, because it has taken years and a great deal of pain to get here (and it is far from over yet – the madness continues), but breaking the system was hundreds of times easier than fixing it will be. The first problem is that the proprietary junk that has been foisted upon us, especially when hosted in the cloud, is a one-way valve for our data, so it will be fiendishly hard to get it back again. Some of it will be in formats that cannot be recovered without data loss. New ways of working that rely on new tools will have insinuated themselves, and will have to be reversed. There will be plenty of down-time, with all the associated costs. But it’s not just about data. From a systems perspective this is a Humpty Dumpty problem. When you break a complex system, from a body to an ecosystem, it is almost impossible to ever restore it to the way it was. There are countless system dependencies and path dependencies, which mean that you cannot simply start replacing pieces and assume that it will all work. The order matters. Lost knowledge cannot be regained – we will need new knowledge. If we do manage to survive this vandalism to our environment, we will have to build afresh, to create a new system, not restore the old. This is going to cost a lot. Which is, of course, exactly what Microsoft and all the other proprietary vendors of our broken tools count upon. They carefully balance the cost of leaving them against what they charge. That’s how it works. But we must break free of them, because this is deeply, profoundly, and inevitably unsustainable.

White elephants and other e-readers

When I get new devices I tend to make notes about them: it’s part of my tinkering approach to research, a way to explore the edges of the adjacent possible. Most of the notes don’t get read by anyone else. This often seems like a bit of a waste so, having had a couple of days of vacation (and thus mostly doing the work I felt like doing rather than the work I had to do), this post is an assemblage of notes about a few of the devices I have acquired over the past year or so, at least partially to support my thinking on e-readers (though I cover more features of the devices in my notes).

I am very interested in e-reading because I do a great deal of it, and it is the primary means by which most online learners learn. There’s a fair bit of existing research into e-reading, but the vast majority of it fails to distinguish between desktop PCs, laptop PCs, dedicated e-readers, tablets and cellphones, let alone between different software tools and configurations. This is silly. It’s equivalent to generically comparing e-learning and p-learning which, as we all should now know, is a completely spurious thing to do. ‘Tain’t what you do, it’s the way that you do it. It is particularly interesting that, though there are a few variations in paper books – size, font, hard/paperback, etc – the variation is not even close to that found in e-reading hardware and software, and we have barely begun to innovate in this area yet. To do so, it is useful to understand the benefits and weaknesses of existing tools. These notes are part of that process.

The devices I will discuss here are:

  • Kindle Voyage (high-end e-reader)
  • Sony DPT-S1 (A4-size e-paper e-reader)
  • Lenovo Yoga Tab 3 (Android tablet with built-in projector)
  • Google Cardboard (generic VR viewer)
  • Pebble Time Steel (smartwatch)
  • iPad Pro and Apple Pencil (needs no introduction)

Amazon Kindle Voyage

Kindle Voyage

I got this device because I wanted to know what makes something a top-of-the-line e-reader. The Kindle Voyage, though heavily criticized for its price, had (at the time I got it) pretty much swept the board in comparative reviews, coming top in almost all of them. This is therefore my reference point.

The Kindle Voyage is very small: the (6 inch) page is smaller than that of the average paperback book, let alone the slightly larger formats used mainly for academic books. Whether this is a good or bad thing depends a lot on the book. For text, I find it good enough but, for diagrams, tables and images, it can be too small.

The monochrome e-ink screen is bright and very clear, with better resolution than many laser printers. It has a non-reflective etching that I have tried in bright sunlight and found to be extremely easy on the eye, with virtually no reflections unless you deliberately angle it at the sun. It is not quite paper, but extremely close to it and, in many ways, is superior to read from: flatness and consistency are mostly a positive thing, albeit that the curve of a paper page provides cues about location in a book and helps one to remember a page’s unique shape. It has very even backlighting that gently glows, and dims according to the level of background lighting – this is great, though I’d like it more if it had the option to tint it with red light – the blue-ish glow is not great last thing at night, when I tend to read the most. Battery life, even when backlit, is very good: the claimed 6 weeks of life assumes only half an hour of reading a day, which is way less than I’d normally do, but that still equates to a good 20 hours between charges in real life which, for something so tiny, is good. It appears to take a couple of hours to fully charge on a typical USB connection.

The device is very thin and very light – it feels much lighter than the average smartphone and far lighter than a small paperback – with a nice rubbery grippable back, and intelligently positioned ‘buttons’ on both sides of the screen, so it works well in either hand. The ‘buttons’ are actually pressure-sensitive areas: squeezing them gives a reassuring and very gentle haptic buzz. After only 10-15 minutes of pressing them this can lead to finger cramp, however, so it is good that it is also possible to swipe across or up and down a page, in a manner that is quite familiar to phone and tablet users. There are two smaller page-back ‘buttons’ above the main page-turn areas that are quite hard to reach with one hand. There’s an on/off button on the rear of the device, just out of reach of even my long-ish fingers. This is good – it is hard to turn it off accidentally. The bezel is not huge, but is about the right size to make it easy to hold without touching the screen, about the size of a normal book margin.

Performance is notably better than that of any other e-ink device I have used, with screen refreshing that is fast and that seldom, and barely perceptibly, flashes (a generic issue with e-ink, which starts to burn in if not zapped occasionally with a reverse image). For reading, I find page turns fast enough not to interrupt my flow of reading at all. They are much faster than flipping pages in a p-book although, as my weak eyes mean that I like to have a larger font, I tend to turn pages more often.

It has a web browser, but it’s awful. Soft buttons for the keyboard and tools are often quite unresponsive. Especially annoying is the lag, and the difficulty of finding the right place to press for characters such as the @ symbol and period. Once you move on to pages that need scrolling it is very jerky, with multiple refreshes, and extremely slow responses to things like pinching to zoom, which is distracting to the point of making it virtually unusable for many pages: few are optimized for e-readers. Lack of colour also becomes a serious issue on such pages. That is also a noticeable problem when scrolling through my catalogue of books or the Kindle store (also available directly from the device), because many book covers blur into a grey mass: this is a surprising failure on the part of Amazon which, you might think, ought to be doing its best to sell books to you. If you cannot differentiate between them or even see their titles, there is not a lot of point. I still mostly need to get my books via a tablet, phone or PC. It is at least nice to be able to browse books on archive.org and download them (in the correct format) to the device.

On the subject of the book catalogue, the interface to it is tedious. I have hundreds of books that I like to browse, not simply search for, and it can take several minutes to scroll painstakingly through them. There are options for tagging and cataloguing books but, with a large existing catalogue, this is no simple task. This is many times worse than even a disorganized pile of books, let alone proper bookshelves. The fact that you can search (and search for text within books) is a notable benefit, but the loss of random browsing is a serious disadvantage.

Whispersync works very well: it’s very easy to pick up on one device where I left off on another. I very much like the ‘free’ 3G connection that works in most countries and that allows books to be downloaded (and purchased) from almost anywhere in the world, without the need for wifi, but I deeply hate the fact that a fair number of my books are limited by DRM to a few devices. As a researcher into such technologies, I have a great many versions of the Kindle app on many devices, so I often hit these limits, then have to work out on which machines to disable reading (is it mac 1, or mac 4 that I am actually using? Very hard to tell). In fact, I deeply hate DRM, period. It is not fiendishly hard to convert and transfer non-DRM’d books from other devices, but I find the fact that Amazon insists everything should be in its proprietary format or PDF (not a good thing on a 6” screen) to be intensely annoying. Given that DRM is perfectly possible in the otherwise ubiquitous epub format, this is a needless constraint.

I was encouraged after getting this device to try a subscription to Kindle Unlimited, which gives (as the name implies) Netflix-like access to over a million titles – an all-you-can-eat rental smorgasbord covering a vast array of subjects and genres, all for $10/month, with up to 10 books at any one time. This has been a disappointing investment so far. The overwhelming majority of the books are ones that no one in their right mind would bother paying the typical asking price of between 2 and 6 dollars for, and would certainly not bother borrowing from a library. The majority are self-published, and some are scams that are not even meant to be read – they are just a means to leech a bit of money from Amazon, filled with nonsense. Within the area of science I found a great many books that are anything but scientific, with a preponderance of rubbish folk psychology, ’10 things’ books, and right-of-Hitler religious nutcases trying to disprove evolution and climate change. In fiction, there are a lot of genre novellas and novels of the fan-fiction variety, most of which seem to be of extremely low quality and imagination. Very disappointing, though I have found a copious catalogue of Kurt Vonnegut books, many of which I have not read, so am happy enough for now. There are certainly some gems to be found, but the effort of doing so is great, and none of those that I actually sought out have been there so far. The device does allow you to set up a library account to borrow books from your local library. I have not tried this yet, but find the idea appealing. You can, of course, do this on any device, but the convenience is worth having, especially given the complete lack of network charges.

Is it worth the money? I’d say not. Amazon’s own much cheaper alternative with a very similar screen, the Paperwhite, is a little thicker, lacks the buzzing buttons and adaptive backlight, and is slightly slower, but these are not big enough differences to be worth $100. My only other notable e-ink device until this point was a tiny and now slightly elderly Kobo with a 4” screen. Apart from size and backlight, there is not too much to choose between them. Yes, the Voyage has a notably better screen, but not so much that it is worth nearly $200 more (the Kobo cost me less than $40), and bigger is certainly better, but not $200 better. The software on the Kobo is, I would say, mostly a bit nicer, but essentially extremely similar. Its native epub format is way friendlier, with far more books available without the need for conversion, albeit with less wonderful sync between devices. The main differentiator is the bookstores behind them – Amazon’s catalogue is vastly bigger and better. Vastly. Though both can be used with books from elsewhere, as both are tightly integrated with their respective bookstores, this matters.

For all its weaknesses, the Voyage is a device that I have found myself using for at least an hour every day. It’s a great way to read books, especially fiction. It very rarely needs charging, sits unobtrusively by my bed, and just works. The interface virtually disappears, and there are none of the interruptions to your reading that you get from a device that thinks it needs a place in every part of your life. It is so light that you barely notice it in your hand – so much easier than a paper book. And I love the adaptive backlighting. Though it would be easy to dim and brighten the screen manually (as on the Paperwhite), the unobtrusive automatic dimming is surprisingly pleasant.

Amazon now has an even higher-end device, the Oasis, which is a little lighter, has an extra battery boost in the cover, an ergonomic grip, and more LEDs for even more even backlighting. Apart from that, it is hard to see why it would be worth getting: everything else is much the same. The Voyage is already too expensive, especially given how much Amazon will leech from you after purchase, so I cannot imagine why one would spend another $100 for a leather cover with a battery in it.

https://www.amazon.ca/High-Resolution-Display-Adaptive-PagePress-Sensors/dp/B00IOY524S

Sony DPT-S1 e-reader

Sony DPT-S1

The DPT-S1 is an e-paper device that does pretty much only one thing – it lets you read and annotate PDF documents. True, it does have a note taking app that is quite serviceable and a web browser that is not at all serviceable but, basically, this is a very expensive one-trick pony that cannot even read standard ebook formats. How expensive? Over $1000 expensive. You could get a good iPad Pro for that money, or 4 or 5 Kindles. Or a pretty good PC laptop or tablet, or even a top of the line Chromebook. Or a nice bicycle. All in all, this is one incredibly expensive device that does very little.

So why did I want one? Well, obviously enough, it’s for that one thing. I get to read a great many documents, many of which are already in PDF format and most of which can easily be made so. The reading area of the DPT-S1 is effectively the same as a standard sheet of office paper and, in theory at least, provides a very similar experience, with similar resolution and contrast to a slightly greyish printed sheet, and similar ability to mark up the text. As far as I know, this is the only commercially available e-paper device with a screen this size.

One of the notable ways in which p-reading is normally better than e-reading is that it provides a consistent, fixed visual layout. This matters because the shape of text on a page is important in helping us to remember where we read something and in what context, and human typesetters generally pay closer attention than machines to making pages readable and appealing. Most e-book formats re-flow the text according to device, font, etc, so there are few cues of this nature. It is true that this is an advantage in many ways, especially when making text larger for those with aged eyes, but the loss of visual memory of the shape of the page is a cognitive trade-off. PDFs are much more like print in this regard, as the format is fixed, albeit that it remains difficult to get a sense of the context of the page in the broader text. On a small device, though, PDFs are usually unreadable, or require absurd amounts of scrolling, so a device that lets you see the whole page at its native size is a very interesting idea. Could this be a step towards what we need to replace paper for reading? Well…

The size of the DPT-S1 is great. The contrast is not quite as good as black ink on a white sheet of paper: blacks are not very black, and ‘whites’ are definitely grey. It’s not even close to the Kindle Voyage, but letters appear quite sharp and clear, and A4/Letter sized documents are very easy to read. Without the glow of most modern screens, it is relatively restful on the eyes. It is extremely light: it feels like a thick sheet of cardboard in the hand, lighter than even 20-30 pages of good printed paper, let alone the thousands of books it can carry. It is really easy to hold in one hand for prolonged periods. The screen is very readable in bright sunlight, and it is acceptable in a reasonably well-lit room. It has no backlight, though, so it is not much use in darker rooms. The battery life is fine – around 15 to 20 hours. You can certainly use it for a whole day without the need to recharge it. Recharging, through a standard micro USB plug, takes a while but you can use it at the same time or recharge it overnight.  So far, so good. After that, though, it goes rapidly downhill.

The software is truly awful. My most important intended use for the device was to make marking of student work and reviewing of papers and books easier. Alas, it does not. The first big problem is actually getting documents onto it. The built-in (and atrociously unusable) web browser does not recognize PDFs from Moodle or Office 365 email as known file types. It does support WebDAV, but only allows a single WebDAV server to be configured. The wifi is as primitive as it gets, and far from reliable. Worse, the device unaccountably wipes out any files you have saved should you choose to change the WebDAV server, an incredibly slow and tortuous process involving its highly unresponsive and annoying keyboard (there are always characters that are at least 3 keyboards away from the default). In any case, I have found the WebDAV support so unreliable and slow – it often fails to connect because it takes so long to set up a simple, single wifi connection, and it is very fussy about which WebDAV variants it will accept – that there is very little point, even for a single server. It can use CloudApp via the web browser, which is fine for the odd one-off file, albeit that it can easily take 5 minutes to enter the URL and get the file. I could set up my PC as a WebDAV server, but part of the point of this device is to untether me from the PC and, if I am going to be around it anyway, I might as well plug it in. The only sensible way to add files is thus to download the work onto my PC and transfer it from there via a USB cable. This is extremely clunky: it can easily take 5 minutes simply to get a file, once you factor in saving from wherever it is in the first place (e.g. Moodle, email, review sites). Though it does have a micro-SD card, the hassle of unmounting and remounting it is not worth the bother, especially as it demands removal of a small back panel to get at the thing. Without even a means to upload files via the web browser, it is even worse trying to get work back again after annotation: USB or SD card are the only plausible ways, notwithstanding the awful WebDAV implementation. This is far too clunky: the whole point is to streamline the process, not to make it more difficult. I can see that it might be OK if I had bulk documents to download and upload, but that is not how I normally work, nor how I wish to work.
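For the record, the WebDAV protocol itself is simple enough – an upload is just an authenticated HTTP PUT – so the clunkiness is entirely down to the device's implementation. Here is a minimal sketch of pushing a PDF to a WebDAV share from a PC (the URL, credentials and filename are placeholders, not anything specific to the DPT-S1 or my own setup):

```python
# Minimal sketch: a WebDAV upload is just an authenticated HTTP PUT.
# The server URL, credentials, and filename below are hypothetical.
import requests

WEBDAV_URL = "https://webdav.example.com/dav/marking/essay-to-mark.pdf"

with open("essay-to-mark.pdf", "rb") as f:
    resp = requests.put(WEBDAV_URL, data=f, auth=("user", "password"))

resp.raise_for_status()          # expect 201 Created (or 204 if overwriting)
print("Uploaded, status", resp.status_code)
```

A device with a halfway decent client could pull files from a share like that in seconds; the DPT-S1, mostly, cannot.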

The next problem is that navigation through texts is tortuous. I thought it would be great for reading and annotating a book that I am reviewing, but that turns out not to be the case. Unlike most e-readers, you cannot simply jump to references and back again. In fact, even skipping to the back of the book to look them up is incredibly tedious. As far as I can tell, you cannot even flip to the index, or the back, or the front. Switching to thumbnail view sounds promising, but actually means you lose your place as the current page is somewhere in the middle and not highlighted. Compared with the Kindle’s quite neat x-ray and other browsing tools, this feels like something from the Middle Ages. Even reading is less than perfect: it blanks the screen way too often (reversing black and white to clear the memory effect of e-paper screens) and takes too long to return. Book-length texts take an age to load.

After some time using it, a few other issues have arisen that make it even less useful. I had been using it as a simple way to record notes such as my daily to-do list. However, every now and then – every couple of days or so – it needs a reboot because it loses track of what is in any file: everything appears blank until it restarts. Sony are showing no signs of wanting to maintain this buggy firmware, and (though some have found complex ways to replace the customized Android operating system with their own) it is well locked down to prevent customization. Not that there is much to customize: it lacks even sound input or output, so it cannot do text-to-speech, let alone anything more useful. It even mangles some PDFs.

It feels very cheaply made: the buttons (3 standard Android buttons) are clicky, imprecise and toy-like. The body is made of flimsy plastic that bends a bit. Nothing quite fits. Its lightness means that it slips easily – indeed, it slid off my desk, making a soft landing on the floor about a metre below. Not a great thing to do, for sure, but, given that it is such a light device made of resilient plastic, I would not have expected much damage. However, the ugly fabric pen holder snapped off – apparently it is only lightly glued in place. This is shoddy. The screen itself gets messy very quickly which, given that the point is to write on it, seems a design flaw. It also picks up scratches easily. It comes with a cheap and ugly cover, but that virtually doubles the apparent weight and makes it far less comfortable to use. The dedicated cheap plastic pen is easy to lose, and there is a very small but perceptible lag between writing and the appearance of your writing on the screen. It doesn’t feel quite like using a pen – the screen is too slippery and, oddly, also scratchy at the same time. I like the instant erase button on the pen, but it is too easy to press it by mistake. The fine plastic point looks flimsy: I doubt it will last long. Replacements designed for graphics tablets should work, but it doesn’t inspire confidence.

Overall, this should be a very promising device, but it fails to do well even the one thing it is supposed to do. I would love a better-thought-through device with this screen format, or the same thing at a tenth of the price but, right now, this is one to avoid like the plague.

https://pro.sony.com/bbsc/ssr/product-DPTS1/

Lenovo Yoga Tab 3 Pro 10

The Lenovo Yoga Tab 3 Pro 10 is mostly a fairly conventional 10″ Android tablet with one significant twist: it has an integrated pico projector. It is remarkably hard to find the specs for the projector, but I would guess it must be about 50 lumens and perhaps 800 x 600 resolution, or maybe a little higher.

The device runs Android 5.1 with only a few slightly annoying differences from the stock version. I fail to understand why almost all manufacturers insist on doing this: while the projector does mean it needs a few small tweaks, there’s no good reason to mess with the rest. It has the usual range of sensors, expandable memory (by microSD card), and a stonkingly big 10,200 mAh fast-charging battery from which it is claimed one can get 18 hours of use. I think that’s an exaggeration: you’d need some gentle apps, low screen brightness, and no wifi to get anything like that but, in normal use, with web browsing, email, Kindle, a bit of streaming video and light use of the projector, I have easily got well over 12 hours, which is not bad at all. Unfortunately, being Android, it keeps eating power when you are doing nothing with it so, unlike an iPad (which could be left for weeks and still have power), it will die when left alone for a few days unless you turn it off completely (in which case it will last well over a month). That being said, you could certainly watch three or four movies before needing to recharge. The price paid for this is, unsurprisingly, extra weight and bulk, but that is mostly taken up by a side handle that is fairly comfortable to hold and that also contains the projector, speakers, rear camera and a really well-designed built-in prop that doubles as a means to hang it on a hook on the wall. All in all, though you are aware of the weight, it is well balanced and sits comfortably in the hand. It feels solid and well engineered. The other stand-out features are a full Quad HD screen that is at least as nice as that on the iPad Pro (though much smaller), and Dolby sound from four JBL speakers that are remarkably good at spatial stereo – the sound seems to spread from a far wider area than the device itself. It would be better with 3G/4G, but wifi is available most places so I can live without that, though I do miss fingerprint authentication: passcodes and gestures are nothing like as convenient. One quite nice feature is that you can use anything conductive as a stylus, including a steel pen or even a pencil. Also unusual in a tablet is the inclusion of a buzzer. It is also splashproof to IP21 (ie. it can cope with condensation and light showers), which brings it a little closer to a p-book in resilience.
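As a rough sanity check on those figures (assuming a nominal 3.8 V lithium cell, which is my assumption rather than a published spec), the claimed and observed battery lives imply quite different average power draws:

```python
# Back-of-envelope check of the battery claims.
# The 3.8 V nominal cell voltage is an assumption, not a quoted spec.
capacity_mah = 10_200
nominal_volts = 3.8
energy_wh = capacity_mah / 1000 * nominal_volts   # ~38.8 Wh

for hours in (18, 12):
    watts = energy_wh / hours
    print(f"{hours} h of use implies an average draw of about {watts:.1f} W")

# ~2.2 W for the claimed 18 h: plausible only with a dim screen and gentle use.
# ~3.2 W for the 12+ hours I actually get with mixed use.
```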

Unfortunately, even more than most Android devices, it is flaky. Apps crash very regularly, have more bugs than their iOS counterparts, pause for no obvious reason, and the whole thing feels very unresponsive most of the time. Given a quad-core Atom CPU running at 1.4GHz and 2GB of RAM, this is quite surprising. It is partly a generic Android thing, but I think Lenovo have made it worse: I get nothing like these problems on other Android devices, even on those with lower specs. I have tried very hard to love Android over many years, because I approve of its (general) relative openness and its flexibility, but I always feel a sense of profound relief coming back to my far better and smoother iOS devices. It’s the same trade-off as that between Windows or Linux and Macs: you can pick flexibility at the cost of flakiness, or something that works really well but limits your choices. Even ignoring Apple’s superior hardware and operating system, such inequality is inevitable: developers can test on pretty much all Apple devices, but even the biggest developers cannot hope to do so on the tens of thousands of Android machines. iOS simply works better, but good luck to anyone wanting a built-in projector or waterproofing on their iPad (though it can be done).

One of the main reasons I got this was to try e-reading at gigantic size. Using a projector is an interesting alternative to book-like e-readers that has not been researched much, if at all. It does work for this, up to a point. With normal room lighting the projected image is pretty bright up to about 30 inches, but decays rapidly after that. In darkness, I reckon it is pretty good at 90 inches or more. Colours are clear, the image is sharp. It does a very good job of showing anything on the screen, has intuitive controls, and is pretty smart at automatically adjusting the keystone and focus as you move the device around. The focal length is not as wide or adjustable as I’d like – you have to be some way from the wall or ceiling to get a decent picture, far further than for my dedicated pico projector, but I guess that is so that you can sit on a sofa in a normal-sized room and control it. Unfortunately the focus is not wonderfully even, so the corners and edges are a bit blurred. However, with a reasonably large font, it is perfectly possible to read it. I have yet to figure out whether it is possible to display vertically, though, for easier page reading: it seems no one on the design team considered the possibility that someone might want to do that, and the projected image does not flip like the standard screen, so pages are always horizontal, whatever their original orientation. The dimness and relatively poor resolution mean that it is not great for looking at details, which is one thing I hoped might be a strength, especially for books with pictures and diagrams. If the projector had the same resolution as the screen itself, you could easily display multiple pages and read them, but that’s not going to happen. Another thing that I had not really thought too deeply about until I tried it is that many of the same problems with reading on a laptop or desktop machine remain: the fixed distance between reader and book is really bad for reading, and not at all comfortable after a little while, though I have enjoyed lying in bed reading a book from the ceiling (albeit that the ceiling texture can make a big difference to legibility). However, another thing that should have been obvious to me is that, to control it, you need to be holding the device. This means two very bad things happen. First, and most annoyingly, it wobbles. This is incredibly bad for reading. Secondly, it divorces the page-turning and highlighting action from the reading surface. It is actually surprisingly difficult to coordinate hand movements on a tablet when you are not getting direct visual feedback, far worse than, say, using a graphics tablet on a conventional PC. So, though it is kind of nice to be able to read a book with a partner in bed (not something you’d want to do all the time), the projector is by no means a great e-reading device. The standard 10″ screen itself is perfectly usable for reading, if a bit too shiny, and it has the widescreen aspect ratio favoured by most Android tablets, which is good for movies but not so much for reading books. The resolution of the built-in screen is very good indeed, but that is not unusual nowadays.
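To put rough numbers on that fall-off (taking my 50-lumen guess at face value and assuming a 16:9 image, neither of which is a published spec): the illuminance of the projected image is just the lumen output divided by the projected area, so it drops with the square of the image diagonal.

```python
# Rough illuminance estimate for a pico projector: lux = lumens / area (m^2).
# The 50-lumen output is my guess, not a quoted spec; 16:9 aspect assumed.
import math

LUMENS = 50
ASPECT = 16 / 9

def illuminance(diagonal_inches: float) -> float:
    d = diagonal_inches * 0.0254                 # diagonal in metres
    h = d / math.hypot(ASPECT, 1)                # height from the diagonal
    w = h * ASPECT                               # width from the aspect ratio
    return LUMENS / (w * h)

for size in (30, 60, 90):
    print(f'{size}" image: roughly {illuminance(size):.0f} lux')

# Roughly 200 lux at 30" (competes with normal room lighting),
# 50 lux at 60", and 20 lux at 90" (needs a dark room) –
# which matches what I see in practice.
```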

My second use case was to explore its potential as a more social device than most personal machines. Tablets tend to be more social objects than cellphones or PCs anyway – it’s one of the key things that makes them a different product category. Tablets are things that get passed around, peered over together, talked about and used in a (physical) social context. People do use phones that way but only because of the convenience and availability of the devices: they are too small and too personal (texts etc keep popping up) to be really useful in that context. A device with a projector ought to be more interesting. While the projector’s limitations mean that it can’t be used outside or in a brightly lit room, and it’s not much use unless you can find a blank bit of wall to display onto (surprisingly absent in most public social venues like pubs and cafes), in the right physical context it becomes a shared object, a catalyst for conversation and conviviality, and a means to engage with one another. It seems especially useful for things like short YouTube videos, photos, and so on. Having the device on the table when there is a gathering of friends and family means that when (as someone usually does) people refer to something they have seen online, everyone can share the experience as a group, not as separate individuals one or two at a time. This changes the meaning of the activity quite considerably. It notably blurs the online/physical space. For a full TV show or movie, I would almost always prefer to make an event of it and gather round a TV or proper projector than use this inevitably makeshift device. However, it has already proved useful, even in that context. We had a family movie night the other day, with large projector screen, and the PC driving it dropped its Netflix connection, refusing to return. As we only had a few minutes left of the movie, it took all of 2 minutes to switch on the device, aim it at the screen, and pick up where we left off. Another thing that is very appealing about it is that it needs no wires at all – just set it down and play. I’ve not yet had a chance to use it in another similar setting, with two or more collocated groups working at a distance. I suspect that it might be quite effective when using webmeeting or Skype-like software, though the inability to pan the selfie camera might negate the benefits.

Overall, I am quite pleased with this: if it were my only tablet, it would do most, if not all, of what I need a tablet for, and it is not bad for the price – even without the projector, it is closely comparable to a similar iPad. If the software and hardware combination were more reliable, more consistent and responsive in performance, and less rough at the edges, it might be a very good competitor to my iPad Air 2, but it just isn’t. The irritations are mostly small but there are many of them, from buttons that take a second to respond, to random crashes, to simple flakiness and inconsistency in software design. Taken together, they make the whole experience profoundly unsatisfying. The device does not disappear as it should. I like the prop, I like the battery life, I like the projector, even though its e-reading uses are limited. It strikes me that there’s room in the market for an iPad accessory that includes an integrated projector, battery, maybe speakers and a prop. It would be easy enough to implement, and much handier to add such things when needed than to carry them all the time when, mostly, they are not needed.

http://shop.lenovo.com/ca/en/tablets/lenovo/yoga-tablet-series/yoga-tab-3-pro-10/

Google Cardboard

The Google Cardboard viewer I purchased is one of the hundreds of cheap generic plastic VR headsets into which one slots a smartphone. It comes with a small, generic bluetooth controller that is supposed to allow you to control the phone; it works very poorly and intermittently on an Android device, and is virtually useless for an iPhone, though technically supported.

It is quite scary, at first, to place one’s big, expensive phone into this flimsy plastic container and dangle it a few feet above the ground. However, the phone is gripped well and seems in no danger of falling. The device is not very comfortable to wear over a prolonged period, especially if you have a prominent nose. I find the rubbery eye mask hot and awkward after a little while, and the elasticated bands begin to be noticeable after a short time. With a big phone, it pulls forwards on your face. Virtual reality is still an uncomfortable place. You look really stupid wearing it.

The software needs a lot of work. The best I have managed so far with it is to look around in a few virtual worlds. With its dreadful controller, it is really hard even to click a hovering button, and the disconnect between the heads-up display and the crappy controls is huge. It might be fun to try this with a circa-1992 data glove, or the super-smart HTC Vive controllers, but that would rather negate the point of a wire-free VR device. This is no Vive or Oculus – not by a long chalk. It’s about the same kind of experience as early-2000s VR, without the wires. The field of view is quite small, the resolution is not great, and the movement is jerky and obvious, even on a fast iPhone. It was not notably worse on an old Nexus 4 or a Moto G, so I think this is more down to software than hardware.

Once you have exhausted the possibilities of the demo apps that Google provides, it is actually quite tricky to find decent apps for it. It’s not that no apps are available. They are just not very good. It is hard to set them up, many just don’t do anything, and virtually none are properly supported by the bluetooth controller. I would have expected the potential for augmented reality to be a selling point, as the camera is deliberately uncovered. Not so much. Most apps don’t use it at all.

As an e-reader, it is hopeless. Though an iPhone 6+ has plenty of definition and is about as big a phone as the case will hold, all of that is lost when viewed through cheap plastic lenses, and the slight differences in viewing angle from each eye make it quite dizzying to read even large text (even without the stereo effect). The Oculus or HTC Vive do this sort of thing quite well but, at such a high price (in every way), it is an absurd idea to even try it. For all such things, the fact that you have to exclude the entire outside world in order to use them makes these deeply anti-social devices. Perhaps the Magic Leap will provide a better answer, as it has both high resolution and integration with the real world. It would be cool to mimic a bookshelf in AR, and it would not be a terrible way to read text. Some of the videos – https://www.youtube.com/channel/UC2E1x3l45YUO2eOhRv-A7lw – are amazing. However, it appears not to be too portable. Something that gave both the portability of the Google Cardboard box and the power of a Magic Leap might be well worth having. There are plenty of suitable desktop variants for such devices.

Overall, this particular device is a badly conceived toy: it is difficult to use, limited, uncomfortable and flaky. Fun to play with for a few minutes, but not good for anything.

https://www.amazon.ca/Virtual-Reality-Headset-Controller-Smartphones/dp/B019NBVJII/

Pebble Time Steel

Most mainstream smart watches (the Apples and Androids) have glowing screens with battery lives of a day or two at best, and almost all the rest seem to be focused on golf players, runners, or people that want really basic email and phone alerts. The Pebble, however, hits a sweet spot: a claimed battery life of a week or more, a big app ecosystem, an always-on e-paper (not e-ink) LCD screen, and a few sensors to make it useful. I started with an original monochrome Pebble (a fabulous bargain at $79 – less than many fitness trackers alone, and a really good watch) but, after a couple of weeks, passed it on, after realizing that I am too rough on my watches for a plastic screen. So now I have what was, at the time of purchase, the top-of-the-line, colour, voice-recognizing, Gorilla-Glassed Pebble Time Steel ($200). This is on the verge of being superseded by the Pebble Time 2, which has a larger viewing screen in the same size of watch – a good thing, because the usable screen (there is a big bezel) is too small. I can at least read the time without glasses, though some of the apps use text and images that are too small to read unassisted. I have never come close to the claimed battery life of 10 days: mostly I manage 6-7 days, though I have not run it into the ground so may be misjudging its staying power. However, that’s fine: I just have to take it off for a couple of hours once a week to charge it with its magnetic charger. It warns you about a day ahead of when it is going to die, so there’s usually time to charge it before it goes completely.

Instead of the touch screen favoured by most smartwatches, the Pebble has just four buttons, which you can use to control almost anything. They are hopeless for data entry (the calculator apps are all but useless) but they are fine and intuitive for getting around the various menus. The Time Steel has voice recognition, which is used in a few apps, but I don’t find it at all accurate, and it is weird to talk to a watch, repeating things that it fails to understand over and over. I’m guessing it uses a cloud service to perform the recognition itself: a bit bothersome for privacy. I really don’t get the Star Trek notion of talking to your computer. Sure, it’s fine for quick look-ups, but I don’t think the evangelists for such things ever looked at how real people actually behave. It’s bad enough having to add things to my shopping list in a supermarket, make appointments on a bus, or take notes in a cafe. Can you imagine dictating a report or a paper in a crowded office or Starbucks? Especially when more than a few people are doing it? Even in the family home it is plain weird to hear someone talking to their computer in the next room and, for many things, confidentiality and privacy are serious issues. So, for a vast number of use cases, the only way to use voice recognition is in a soundproof room. Surprisingly, perhaps, computers that you talk to are considerably more anti-social than those that you write to.

Like most such devices, the Pebble relies on a smartphone for much of its functionality, pulling apps and data (such as GPS coordinates, weather, news, etc) from the phone as needed, keeping only the more recently used ones in its cache. Some apps require separate companion apps on the phone (as opposed to just the Pebble app itself) but, as most eat more battery, I have tried to avoid those where I can. I started by installing a lot of apps but soon realized that most were entirely pointless. On the whole, it is easier to reach into your pocket and use the phone app than to navigate through menus on the watch to find the reduced-functionality one that you need. There are a few that I use all the time: the watch itself, notifications, weather, a shopping list, the alarms, a sailing tracker.

I installed O365 and Evernote note-reading apps, but never used them after testing that they worked: a watch is basically a terrible way to read notes. I quite liked being able to control presentations from my watch until the first time I tried to use it for a keynote, when – despite having tested fine beforehand – it didn’t work at all. A high-stress public talk is a bad place to find that out.

As an e-reader, not unexpectedly, the Pebble leaves a lot to be desired. There are various apps that will read text, RSS feeds, etc, by scrolling at a fixed rate, as well as those that let you painfully scroll through text files, but one in particular intrigued me: AFR (a faster reader). This displays words one at a time at a configurable speed, laid out in a way that keeps your focus on a central coloured letter (an implementation of RSVP – rapid serial visual presentation). It’s a strange and disconcerting experience. A standard e-book is bad enough for reducing the contextual information needed to remember what you have read, but AFR decontextualizes every single word, flowing like a video through the text, one word at a time. It cannot read PDFs or DRM’d books (though you can copy your iPhone’s clipboard into it), and it can be a bit complicated to get text into it, despite useful sharing options in iOS. It also requires a companion app in iOS. It chokes on even mildly complex formatting, and the backlight turns off as usual while it is running so, unless you are in brightly lit conditions, it is not easy to read. There is no control over the text size which, on my watch, is difficult to read even with normal reading glasses. It is also pretty buggy, prone to freezing and, worst of all, the speed of text on the Pebble does not match that on the iPhone (in fact, it often reads as nonsense, skipping words rather than just slowing down, and showing them at a rate of about one per second, which is hopeless). However, as a concept, I think it is quite neat and well worth exploring further.
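For anyone curious about the underlying technique, the core of an RSVP reader is very small. A minimal sketch follows (an illustration of the general idea only – I have not seen AFR’s code, and a real Pebble app would be written against the Pebble SDK rather than in Python):

```python
# Minimal RSVP sketch: stream one word at a time at a fixed rate,
# highlighting a fixation letter near the start of each word so the
# eye never has to move. Illustrative only; not AFR's implementation.
import sys
import time

def fixation_index(word: str) -> int:
    # A common heuristic: the optimal recognition point sits a little
    # left of centre, roughly a third of the way into the word.
    return max(0, (len(word) - 1) // 3)

def rsvp(text: str, wpm: int = 300) -> None:
    delay = 60.0 / wpm
    for word in text.split():
        i = fixation_index(word)
        marked = word[:i] + "[" + word[i] + "]" + word[i + 1:]
        sys.stdout.write("\r" + marked.center(30))
        sys.stdout.flush()
        time.sleep(delay)
    print()

rsvp("This displays words one at a time at a configurable speed")
```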

As a watch, the Pebble Time Steel is great. I love the backlight. I love that it is waterproof. I can live with charging it once a week. I love the alarms. However, like all computers, it crashes occasionally. Not every week, but maybe every month. Relying on the alarm can, therefore, be a bit risky if it really matters. I’ve experienced one major crash in the past 3 months, which required a device reset. This was annoying, not just for the hassle of having to figure out the weird button combinations needed for the reset, but also because it lost all my settings (including the alarms, which I only realized quite late the next morning). It is also annoying that the app has to be running in the background on the iPhone, and the iPhone doesn’t let you force an app to remain running. The first couple of times it stopped were quite confusing, because the watch told me it couldn’t communicate with the phone, even though I could see it was connected via bluetooth. I just needed to start the app again. I suspect the Android app might be better in that regard but, unfortunately, you cannot pair the watch with more than one device at a time, so I’ve not tried it. Though it uses a very lightweight bluetooth connection for most of its activities, it does eat more of my phone battery than I’d like: perhaps 10-20% of its capacity per day. This is a nuisance, but my phone still lasts more than a day on the whole (well, at least it did before Pokémon GO), so it is not too bad.

Overall, I like the Pebble Time Steel. It’s not going to be my e-reader of choice, but it’s a darn good watch. It’s neither ugly nor attractive – the strap is actually very tasteful – it’s comfortable, it tells the time, it withstands bangs and dunkings, it wakes me up, it provides useful information, it doesn’t get in the way, it doesn’t need constant care, it’s always there. But I doubt that I will keep it for very long. In the past, watches used to last for 10 years or more (my Swiss watch is about 20 years old), but I would be quite surprised if this lasts me even a couple of years. Maybe less – its reliance on a proprietary app and a phone is quite worrisome, since either could fail at any time. Such is the way of modern tech. We do not own the things we buy any more.

https://www.pebble.com/buy-pebble-time-steel-smartwatch

iPad Pro

At first sight the iPad Pro seems like an odd idea. It’s too uncomfortable to hold in one hand, too big to fit in an iPad pocket, it needs a (non-included) pen to operate its smartest features, and it is really expensive, even compared with a quite high-spec laptop. But the ‘Pro’ nomenclature reveals some of what Apple is aiming for: this is not meant as a device for the masses but is instead for those seeking serious productivity from their device, a different way of engaging with a tablet, beyond media consumption, game playing and simple interaction. Indeed, Apple has gone so far as to claim it can be a laptop replacement, if you add a keyboard.

The device feels huge at first, albeit that it is slim and beautiful to hold. After a few hours, though, using a standard iPad feels cramped and small, and the Pro feels quite normal. You’d not want to hold this in one hand for any length of time, of course. It doesn’t actually weigh noticeably more than the first generation iPad, but the force it exerts on your hand can be considerably greater, unless you get the balance exactly right. That’s actually not too hard, though it does call for a change in approach. I tend to use the device on my lap, or propped up in bed, or in its keyboard case resting on a chair or table. I can walk around with it when I need to, and that’s a world away from walking around with a laptop, and it is vastly much easier to share with other people around you.  I have found that its extraordinarily good video display makes it a far more interesting social device than smaller tablets when sitting around with other people. I am very used to passing a tablet around for my wife, family and friends to look at but, with the iPad Pro, we can all look at the same thing together, sitting on a sofa or at a table. It’s surprisingly superior to the same experience using a laptop too. Perhaps it is the lack of other intrusions, or the cleanness of just a screen and nothing else to interfere with the experience. The iPad largely disappears, leaving only the content it displays. Those that hold it tend to be reluctant to let go again. The battery life is good: 9 or 10 hours seems about normal.

The Pro is great for reading news sites, letting you see a large number of individual articles and links to other articles on a page. It is much more convenient than a newspaper, but with a similar capability to show not just what you are reading but other stories around it. Oddly, though, it is not as satisfying as I had thought it might be for normal e-books. In some ways it exacerbates the problem of there being a lot of undifferentiated, non-typeset text, emphasizing the fact that there has been no human involvement in laying it out on screen. However, for books with many diagrams and images, it is a lot better than smaller devices and, for those with particularly bad eyesight, it might be wonderful. For simply reading a long-form linear text, however, the Kindle Voyage wins hands down.

One big hope that I had for this was, as with the DPT-S1, to be able to comfortably read – and annotate – PDF files originally designed for print. This has actually worked out pretty well – far, far better than the rotten DPT-S1. Reading is easy on the eye, immune to most light levels apart from really bright sun or spotlights, and the experience is mostly smooth and slick. The size of the screen means that there is even enough space for apps like Goodreader to show previews of surrounding pages, which helps a lot in getting a sense of where you are in a text (a perennial problem with most existing e-readers), and largely eliminates one of the major cognitive hurdles in reading e-texts, that there is no consistent visual pattern to help you remember what you have read. When I have a lot of work to mark, this is great. It is much lighter and easier to carry than (say) a paper thesis or dissertation, and almost as easy to mark up, though nothing like as light as the DPT-S1. One notable difference between paper and all the software I have used, however, is that it is much harder to flick between multiple pages, to hold two or three open at once from different parts of the manuscript in order to compare and connect them: it would be so useful to find a good way to replicate this, especially for writing my own books and papers. Though bookmarks help, it is nothing like as fluid or easy as holding a manuscript with fingers on each passage that interests me. I suspect that a desk-sized tablet, with the same retina resolution and an Apple Pencil, might solve this problem, given the right software. Though quite a few ‘tablets’ with such dimensions do exist, none of those I have seen comes close to this resolution, including the ludicrously priced Microsoft Surface Hub. We might have a generation or two to go before that becomes a reality and, by then, heads-up displays will offer a much more cost-effective and flexible alternative. I think perhaps that the main problem is the metaphor of a screen as a window into virtual space. Windows frame things that should not be framed.

There are numerous annotation-friendly apps for PDFs available, with different strengths and weaknesses. It bugs me, though, that every app maintains its own storage so you cannot seamlessly flit between different apps to take advantage of their different features. This is one of the things that makes iOS secure, but it makes it very annoying to manage documents, even though cloud storage services can reduce that pain a little. Essentially, though, you have to copy documents between one app and the next, rather than simply working on them with whatever you want to use. There is no sense of connection and continuity.  I guess, if I were wise, I would simply use a single moderately good app like Goodreader, with its own storage and copious links to cloud storage for shifting documents around, and leave it at that, but that’s not the kind of guy I am, and it doesn’t handle all document workflows well, especially with regard to conversion between formats. I want to keep chasing the adjacent possible.

I am far from being a visual artist, but there are many occasions when it would be useful to draw things, create diagrams, design 3D objects for printing or VR, sketch ideas, mock up interfaces, sketch over images, and so on. In dedicated apps, with the Apple Pencil, the iPad mostly feels much like working on paper, with all the additional tools, views, perspectives, layers and wonderful extras that the computer-based environment provides. It is very different, too, from working with a graphics tablet which, because it is separate from the created object, has always felt alienating and awkward to me. There are also plenty of tools that let you annotate PDFs and images. But I would like to be able to do this kind of thing and seamlessly incorporate it into anything that I am doing – to sketch in a slideshow, or a word processor, a book, a paper, or whatever, wherever I am. The notion that documents are of a particular type – not just text, image, diagram, etc, but specific formats of such things (Word, Kindle, PNG, etc) – is deep in the genes of even the smartest tablets. We seem trapped in a 1980s timewarp on this. It is even true of tools that ought to support such flexibility, like word processors, perhaps because of Microsoft’s stranglehold on word processing paradigms, which has kept us in a typewriter mindset for decades. Even Apple’s own otherwise great Pages falls victim to this. At best, you can embed an image or use a separate app within the main application.

After Apple’s hype, I wondered whether it might also work as a laptop replacement, and so I got a backlit Logitech keyboard to test the theory. I have tried this experiment with various tablets (Apple, Android, and Windows) over the past 6 years or so, but all my efforts so far have been less than wonderful: fine for a day or two on the road, but not at all close to the laptop experience. The iPad Pro is better, but still not ideal. I’m typing this on the Logitech keyboard and finding it to be at least as comfortable as typing on my MacBook Pro. The screen is incredibly bright and clear. With the enormous Logitech case, though, it is very heavy indeed, perhaps heavier than my MacBook Pro and far less well balanced. The case has only one position – comfortable-ish, but not great. It has a nice set of iPad-optimized function keys, though, and works with most Mac keyboard shortcuts, e.g. app switching. I also like that it just works, sipping power from the iPad itself. It is a vast improvement on smaller machines – especially with the dual-window view – and might be OK for a few days at a pinch if I were not doing anything apart from using the Internet, doing a bit of writing, and maybe a bit of presenting. There probably are some people that could use it as a laptop all the time but, as a technologist and computing professional, I find it nothing like close enough: the software is simply not sufficiently capable. It’s OK at a pinch for remote-controlling the MacBook Pro, but not a lot of fun. I’m enjoying the new iOS Scrivener app, which is very close to the desktop version in power and usability, but it lacks the tight integration with reference libraries of the desktop version, and I use such things a great deal.

The retina resolution makes a really great second screen that I can use while travelling or on my boat, using the terrific Duet app for a virtually lagless and seamless experience. I have done this with the smaller iPad Air 2 for some time, but it has always been just a little bit of extra real estate, not a seriously useful extra monitor: helpful for, say, viewing incoming email or writing brief notes, but not in the same league as a real second monitor. The iPad Pro can be used for real work – programming, marking, research, etc are a breeze with a big screen attached. Not quite as amazing as my 29” Apple monitor, but good enough for real gains in productivity. Though it has to be tethered to the MacBook Pro for this, it allows you to read papers etc from the much more flexible computer with at least some of the benefits of a tablet.

Another good surprise is the onscreen keyboard, which is not far off complete, with a row of numbers and a good range of punctuation available at all times, and a size that fits my hands well. This only applies to apps that have been optimized for the iPad Pro – there are still quite a few that use the more basic and less functional keyboard of the older iPad. In fact, there are even a few iPhone apps that run on the Pro, albeit not completely full-screen (even with double scaling), which looks entirely weird, like a clunky toy. But, when you get the full iPad experience, typing on the screen is far easier and friendlier than on previous models.

One irritation is that quite a few websites decide that I am using an iPad and therefore give me a mobile-optimized – ie. less functional – view. With all that screen real estate it is silly to have a set of buttons etc that are made to work on a cellphone.  Conversely, sites that are designed for desktop use can be fiddly, especially when they disable pinch-to-zoom. Google Books (which could be amazingly useful in this format) is a real pain – its tiny zoom buttons are hard to press and it overrides all the usual controls – especially zoom – that one would normally use to deal with that.

Using it in the sunshine is fine, from a screen perspective, but risky: it gets very very hot very very quickly. The first time I realized this was happening I had to rush inside and train a fan on it because the battery was clearly in peril. A case is essential for outdoor summer use. It is not great in the rain either.

The Apple Pencil is a real surprise. I was in two minds whether to get it at all. Seriously, $115 for a pencil that doesn’t even write, and that only works with one device (now two)? I could get two cheap tablets for the price of this thin white stick. To make things worse, though it is quite pretty in its simplicity, this is not a well designed tool. The magnetic lid for the lightning connector is guaranteed to be lost, as is the connector that allows it to be charged using a standard lightning cable. I have no idea where I put the spare rubber tip for when the current one wears down. Apple used to be better than this – I am quite sure Steve Jobs would not have allowed this one out of the door. There’s no way to keep it with your iPad, unless you want to 3D print something to hold it or use sticky tape, and neither the Apple nor the Logitech keyboard cases have any means to attach it, which seems bizarre. It is incredibly easy to lose. When just putting it down, the fact that it is magnetic means it will stick to the side of the iPad, but not strongly enough to hold on when it is tilted. It’s not terrible on a flat surface because it is counterweighted a little so that it doesn’t roll too much – a nice design touch. On a white tablecloth, though, it is easily missed. It is just difficult enough to find that it becomes an active decision to use it. It would certainly cure the problems of, say, non-zooming screens in web browsers but, unless it is directly to hand, it is too much hassle to go find it when such needs occur. The fast charge option, though, which gives about 30 minutes use on a 15 second charge, is quite smart, and I love the way it disables the touch sensitive screen when it gets close to it, so you can very comfortably rest your hand and trust that you will not suddenly start drawing with knuckles or other parts of your anatomy.

In most apps (not all), lines appear instantly, with no perceptible lag, a great deal of accuracy (Apple make the point that the level of control is tens of times more accurate than even the best passive pens) and a reassuring amount of tactile feedback. I have used other styluses that are best of breed – Pencil (the non-Apple one), for instance – but they don’t come close enough to replicating the experience of drawing on paper. The Apple Pencil does. Drawing with hard rubber on glass is certainly not at all the same thing as writing on paper with pen, pencil, charcoal or brush, but it’s a lot better than a hard-tipped or blobby rubber-tipped stylus, and it is easy to get used to it, especially thanks to the near-instant feedback. I can create extremely small details, with a similar amount of precision to what I would get with a ballpoint pen, though maybe a little less than with a proper drawing pen like a Rapidograph or even a fine rollerball. However, that tends to be no big problem because, in most apps, you can zoom in to any level of detail you like. I was quite surprised to find it really easy and natural to, for instance, draw lines using a real ruler, which (on older tablets) is possible but totally weird, and prone to accidental artefacts. You can even trace or draw around things, which is especially neat for 3D design. There is, though, still a very slight perceptible distance between the drawing surface and the stylus. The glass is extremely thin indeed – a hair’s breadth – but it is still there, and it separates you from the page. It’s like the difference between playing a guitar and playing a piano: the feedback is between hand and brain rather than hand and medium. It’s not quite direct.

I am enjoying the Apple Pencil more and more. Its precision turns out to be extremely useful at times, allowing manipulations and selection of small objects with ease, and I enjoy writing and sketching with it. It just disappears (sadly, literally sometimes) and it changes the nature of the interaction with the tablet in some very good ways.

Overall, I was not expecting wonders from the iPad Pro – at least compared with the Air 2 – and was totally in sympathy with Steve Jobs’s edict to avoid styluses on such things, but I have been very pleasantly surprised. The size of the iPad Pro makes a huge difference for reading (though seldom of e-books) and working in general, the Pencil is really effective for all its design flaws, and this, after my Macbook Pro and iPhone, is one of my favourite and most-used devices.

http://www.apple.com/ca/shop/buy-ipad/ipad-pro