Challenges of the Physical: slides from my keynote at XII Conferência Internacional de Tecnologias de Informação e Comunicação na Educação, September 2021

Here are the slides from my opening keynote today for the XII Conferência Internacional de Tecnologias de Informação e Comunicação na Educação in Portugal.

The conference theme was ‘challenges of the digital’ so I thought it might be fun to reverse the problem, and to think instead about the challenges of in-person education. In this presentation I imagined a world in which in-person teaching had never been invented, and presented a case for inventing it. In fairness, it was not a very good case! But I did have fun using some of the more exotic voice-changing features of my Voicelive Play vocal processor (which I normally use for performing music), presenting some of the arguments against my suggestions in different voices, using a much better mic than my usual (pretty good) Blue Yeti. I might not use the special effects that often again, but I was quite impressed with the difference the better microphone made.

My central points (mostly implicit until the end) were:

  • That the biggest challenge of the digital is all the baggage that we have inherited from in-person teaching, and our continuing need to interoperate with in-person institutions.
  • That pedagogies are neither universal nor neutral. They are solutions to problems of learning in a particular context, in assembly with countless constraints and possibilities provided by that context: people, tools, structures, methods, systems, and so on.
  • That solutions to learning in a physical context – at least in the one-to-many model of traditional education systems – inevitably lead to a very strong power imbalance between teacher and learner, where the teacher is in control for every moment of the teaching event. This has many repercussions, not least of which is that needs for autonomy and competence support are very poorly addressed (though relatedness comes for free), so it is really bad for intrinsic motivation.
  • Thus, the pedagogies of physical spaces have to compensate for the loss of control and achievable challenge that they naturally entail.
  • That the most common approach – and, again, an almost inevitable (i.e. the shortest path) follow-on from teaching a lot of people at once – involves rewards and punishments that massively impair or destroy intrinsic motivation to learn and, in most cases, actively militate against effective learning.
  • That the affordances of teaching everyone the same thing at once lead fairly naturally to credentials for having learned it, often achieved in ‘efficient’ ways like proctored exams that are incredibly bad for learning, and that greatly reinforce the extrinsic motivation that is already highly problematic in the in-person modality. The credentials, not the learning, become the primary focus.
  • That support for autonomy and competence is naturally high in online learning, though support for relatedness is a mix of good and bad. There is no need for teachers to be in control and, because online teachers lack most of the means of control available in person, the only reliable way to regain it is through rewards and punishments which, as previously mentioned, are fatal to intrinsic motivation.
  • That the almost ubiquitous ways that distance educators inherit and use the pedagogies, methods, and structures of in-person learning – especially in the use of coercion through rewards and punishments (grades, credentials, etc) but also in schedules, fixed-length courses, inflexible learning outcomes, etc – are almost exactly the opposite of what online technologies can best support.

Towards the end, acknowledging that it is difficult to change such complex and deeply entangled systems (much though that is to be desired), I presented some ways of reducing the challenges of the physical in online teaching, and of regaining that lost intrinsic motivation, which I summarized thus:

  • Let go (you cannot and should not control learning unless asked to do so), but stay close;
  • Make learning (not just its products) visible (and, in the process, better understand your teaching);
  • Make learning shared (cooperation and, where possible, collaboration built in from the ground up);
  • Don’t ever coerce (especially not through grades);
  • Care (for learners, for learning, for the subject).

It’s a theme that I have spoken and written of many, many times, but (apart from the last few slides) the way I presented it this time was new for me. I had fun pretending to be different people, and the audience seemed to like it, in a challenging kind of a way. There were some great questions at the end, not all of which I had time to answer, though I’m happy to continue the conversation here, or via Twitter.

At last, a serious use for AI: Brickit

https://brickit.app/

Brickit is what AI was made for. You take a picture of your pile of LEGO with your phone or tablet, then the app figures out what pieces you have, and suggests models you could build with it, including assembly plans. The coolest detail, perhaps, is that, having done so, it highlights the bricks you will need in the photo you took of your pile, so you can find them more easily. I’ve not downloaded it yet, so I’m not sure how well it works, but I love the concept.

The fan-made app is iOS only for now, but an Android version is coming in the fall. It’s free, but I’m guessing it may make money in future from in-app purchases giving access to more designs, options to purchase missing bricks, or something along those lines.

It would be cooler if it connected Lego enthusiasts so that they could share their MOCs (My Own Creations) with others. I’m guessing it might use the LXFML format, which LEGO® itself uses to export designs from its (unsupported, discontinued, but still available) LEGO Digital Designer app, so this ought to be easy enough. It would be even cooler if it supported a swap and share feature, so users could connect via the app to get hold of or share missing bricks. The fact that it should in principle be able to catalogue all your pieces would make this fairly straightforward to do. There are lots of existing sites and databases that share MOCs, such as https://moc.bricklink.com/pages/moc/index.page, or the commercial marketplace https://rebrickable.com/mocs/#hottest; there are brick databases like https://rebrickable.com/downloads/ that allow you to identify and order the bricks you need; there are even swap sites like http://swapfig.com/ (minifigures only); and, of course, there are many apps for designing MOCs or downloading others. However, this app seems to be the…er…missing piece that could make them much more useful.

Reviews suggest that it doesn’t always succeed in finding a model and might not always identify all the pieces. Also, I don’t think there’s a phone camera in the world with fine enough resolution to capture my son’s remarkably large LEGO collection. Even spreading the bricks out to take pictures would require more floor-space than any of us have in our homes. But what a great idea!

Originally posted at: https://landing.athabascau.ca/bookmarks/view/9558928/at-last-a-serious-use-for-ai-brickit

A few thoughts on learning management systems, and on integrated learning environments and their implementation

Why do we build digital learning systems to mimic classrooms?

It is understandable that, when we teach in person, we have to occupy and make different uses of the same or similar environments like classrooms, labs, workshops, lecture theatres, and offices. There are huge financial, physical, and organizational constraints on making the environment fit the task, so it would be madness to build a whole new classroom every time we wished to run a different class.

Online, we could build anything we like

But why do we do the same when we teach online? There are countless tools available and, if none are suitable, it is not too hard to build them or modify them to suit our needs. Once they are built, moving between them just takes a tap of a screen or the click of a mouse. Heck, you can even occupy several of them at once if you have a decent monitor or more than one device.

So why don’t we do this?

Here are a few of the more obvious reasons that using the perfect app for the context of study rarely happens:

  • Teachers’ lack of knowledge of the options (it takes time and effort to discover what’s available).
  • Teachers’ lack of skill in using them (most interesting tools have a learning curve, and that gets steeper in inverse proportion to the softness and diversity of the toolset, so most teachers don’t even know how to make the most of what they already have).
  • Lack of time and/or money for development (a real-life application is what it contains, not just the shell that contains it, and it is not always as easy to take existing stuff and put it in a new tool as it might be in a physical space).
  • Costs and difficulties in management (each tool adds costs in managing faults, configuration, accounting for use, performance, and security).
  • Cognitive load involved for learners in adapting to the metaphors, signposts, and methods needed to use the tool itself.

All of these are a direct consequence of the very diversity that would make us want to use different apps in the first place. This is a classic Faustian bargain in which the technology does what we want, and in the process creates new problems to solve.  Every virtual system invents at least some of the dynamics of how people and things interact with it and within it. In effect, every app has its own physics. That makes them harder to find out about, harder to learn, harder to develop, costlier to manage, and more difficult to navigate than the static, fixed facilities found in particular physical locations. They are all different, there are few if any universals, and any universal today may become a conditional tomorrow. Gravity doesn’t necessarily work the same way in virtual systems.

And so we get learning management systems

The learning management system (LMS) kind of deals with all of these problems: poorly, harmfully, boringly, and painfully, but it does deal with them. Currently, most of the teaching at Athabasca University is through the open source Moodle LMS, lightly modified by us because our needs are not quite like others (self-pacing and all that). But Moodle is not special: in terms of what it does and how it does it, it is not significantly different from any other mainstream LMS – Blackboard, Brightspace, Canvas, Sakai, whatever.

Almost every LMS essentially automates the functions, though not exactly the form, of traditional classrooms. In other parts of the world people prefer to use the term ‘managed learning environment’ (MLE) for such things, and it is the most dominant representative of a larger category of systems usually described as virtual learning environments (VLEs) that also includes things like MOOs (multi-user dungeons, object oriented), immersive learning environments, and simpler web-based teaching systems that replicate aspects of classrooms such as Google Classroom or Microsoft’s gnarly bundle of hastily repurposed rubbish for teaching that I’m not sure even has a name yet. Notice the spatial metaphors in many of these names.

Little boxes made of ticky tacky

The people who originally designed LMSs back in the 90s (I did so myself) based their designs on the functions and entities found in a traditional university because that was their context, and that was where they had to fit. Metaphorically, an LMS or MLE is a big university building with rather uniform classrooms, with perhaps a yard where you can camp out with a few other systems (plugins, LTI hooks, etc) that conform to its requirements and that are allowed into classrooms when invited, and a few doors and gateways (mainly hyperlinks) linking it circuitously or in jury-rigged fashion to other similarly weakly connected buildings (e.g. places to register, places to seek support, places to talk to an advisor, places to complain, places to find books, and so on). It doesn’t have metaphorical corridors, halls, common rooms, canteens, yards, libraries or any of the other things that normally make up a physical university. You rarely get to even be aware of other classrooms beyond those you are in. Some people (me in a past life) might give classrooms cute names like ‘the learning cafe’ but it’s still just another classroom. You teleport from one classroom to the next because what happens in corridors (really a great deal of incredibly important, pedagogically useful stuff, as it happens) is not perceived by the designers as a useful classroom function to be automated – or perhaps, more charitably, because they just couldn’t figure out how to automate it.

Reified roles

It’s a very controlled environment where everyone has a programmatically enforced role (mostly reflecting traditional educational roles) that may vary according to the room, but that is far less fluid than roles in physical spaces. There are strong hierarchies, and limited opportunities for moving between them. Some of those hierarchies are new: the system administrator, for instance, has way more power than anyone in a physical university to determine how learning happens, like an architect with the power to move walls, change the decor, add extensions, and so on, at will. The programmers of the system are almost god-like in their command of its physics. But the ways that these systems give teachers (or learning designers, or administrators) control, as designers, directors, and regulators of the classroom, are perhaps the most pernicious. In a classroom a teacher may lead (and, by default, usually does). In an LMS, a teacher (or someone playing that role) must lead. The teacher sees things that students cannot, and controls things that the students may not. A teacher configures the space, and determines with some precision how it will be used. With a lot of effort and risk, it can be made to behave differently, but it almost never is.

Functions are everything

An LMS is typically built along functional lines, and those functions are mostly based on loose, superficial observations of what teachers and students seem to do in physical classrooms. The metaphorical classrooms are weird, because they are structured by teaching (seldom learning) function rather than along pedagogical lines: for instance, if you want to talk with someone, you normally need to go to a separate enclosed area inside the classroom or leave a note on the teacher’s desk. Same if you want to take a test, or share your work with others. Another function, another space. Some have many little rooms for different things. Lectures are either literally that (video recordings) or (more usefully, from a learning perspective) text and images to be read on screen, based on the assumption that the only function of lectures is information transmission (it is so very, very much not – that’s its least useful and least effective role). There’s seldom a chance even to put up your hand to question something. Notices can usually only be pinned on the wall by teachers. Classroom timetables are embodied in software because of course you need a rigid and unforgiving timetable in a medium that sells itself on enabling learning anywhere, any time. Some, including Moodle, will allow you to break up the content differently, but it’s still another timetable; just a timetable without dates. It’s still the teacher who sets the order, pacing and content.

Robot overlords

It’s a high-tech classroom. There are often robots there that are programmed to make you behave in ways determined by those higher in the hierarchy (sometimes teachers, sometimes administrators, sometimes the programmers of the software). For instance, they might act as gatekeepers that prevent you from moving on to the next section before completing the current one, or they might prevent you submitting work before or after a specified date. They might mark your work. There are surveillance cameras everywhere, recording your every move, often only accessible to those with more powerful roles (though sometimes a robot or two might give you a filtered view of it).

Beginnings and ends

You can’t usually go back and visit when your course is over because someone decided it would be a good idea to set opening and closing enrolment dates and assumed that, when they were done, the learning was done (which of course it never is – it keeps on evolving long after explicit teaching and testing have occurred). Again, this is because physical classes are scheduled and terms must come to an end, not because it makes pedagogical sense. And, like almost everything, you can override this default, but hardly anyone ever does, because it brings back those Faustian bargains, especially in manageability.

Dull caricatures of physical spaces

Basically, the LMS is an automated set of metaphorical classrooms that hardens many of the undesirable by-products of educational systems in software in brain-dead ways that have little to do with how best to teach, and that stretch the spatial metaphors that inform it beyond breaking point. Each bit of automation and each navigational decision hardens pedagogical choices. For all the cozy metaphors, programmers invent rather than replicate physics, in the process warping reality in ways that do no good and much harm. Classrooms solved problems of physics for in-person teaching and form part of a much larger structure that has evolved to teach reasonably well (including corridors, common rooms, canteens, and libraries, as it happens). Their more visible functions are only a part of that and, arguably, not the main part. There is much pedagogy embedded in the ways that physical universities, whether by accident or design, have evolved over centuries to support learning in every quadrangle and nook of a coffee shop. LMSs just focus on a limited subset of teaching roles, and empower the teacher in ways that caricature their already excessive dominance in the classroom (which only occurred because it had to, thanks to physics and the constraints it imposed).

LMSs are crap, but they contain recognizable semblances of their physical counterparts and just enough configurability and flexibility to more or less work as teaching tools, a bit, for everyone, almost no matter what their level of digital proficiency might be. They more or less solve the Faustian bargains listed earlier, but they do so by stifling what we wanted and should have been able to do in the first place with online tools, in the process creating new and quite horrific problems, as well as demolishing most of what makes physical universities work in the first place. It never has been true that virtual learning environments are learning environments – they are only ever parts of them – and there are places to escape from them, such as the Landing, other virtual systems, or even just plain old email, but then all those Faustian bargains come back to haunt us again. There has to be a better way.

Beyond the LMS

Cognisant of the issues, Athabasca University is now some way down the path to developing its own distinctive solutions to these problems, in a multi-year multi-million-dollar initiative known as (following the spatial metaphor) the Integrated Learning Environment (ILE). The ILE is not an application. It is an umbrella term for a lot of different, usually independent systems working together as one. Though some of the most interesting opportunities are still only loosely imagined, perhaps because they cause problems that are fiendishly hard to solve (e.g. how can we integrate systems that we build ourselves without creating risks for the rest of the ILE, and what happens when they need to be maintained?), a lot of progress is being made on the non-teaching foundations on which the rest depends (student admin systems, support tools, procedures, etc), as well as on the most visible and perhaps the biggest of its parts, Brightspace, a proprietary commercial LMS that is meant to replace Moodle, for no obvious pedagogical or technical reasons (it’s no better). It might make economic sense. I don’t know, but I do know that open source software typically costs a fair bit to own, albeit largely because of the things that make it a much better idea (freedom, flexibility, ownership, etc). There is probably a fair bit of time and money being spent with Desire2Learn (makers of Brightspace) on the things that we spent a fair bit of time and money on many years ago to make Moodle a bit less classroom-like. The choice no doubt has something to do with how reliably and easily it can be made to work with some of the other proprietary commercial systems that someone has decided will make up the ILE. It bothers me greatly that we are not trying hard to choose open source solutions, for reasons that will become clearer in the rest of this post. However, pedagogically speaking, all the mainstream LMSs are much of a muchness, making the same mistakes as one another in very similar ways, so it probably won’t wreck too much of what we already do within Moodle. But, on its own, it won’t move us much further forward, and we could do it better. That’s what the ILE is supposed to do – to make the LMS just a part of a much larger teaching environment, intimately connected with the rest of what the university does for or with students, and extensible with new and better ways of learning, teaching, and assessing learning.

Lego bricks make poor metaphors

When we were first imagining the ILE, though the approach was admirably participative, engaging much of the university community, I was very worried by the things we were encouraged to focus on. It was all about the functionality, the usability, the design, the tools, the pedagogies, the business systems that supported them. Those things matter, for sure, and should not be ignored, but they should and will change and grow all the time: in fact, part of the point of building this thing is to do just that. Using the city metaphor, pretty much all that we (collectively) considered were the spaces (the rooms, mainly), and the stuff that goes on inside them, much like LMS designers thought of universities as just collections of classrooms in which teaching functions were performed. Space and stuff are, not coincidentally, exactly what Stewart Brand identified long ago as inevitably being the fastest-changing, most volatile parts of any town or city (after site, structure, skin, and services). I’ve written a fair bit on the universality of this principle across all systems. It’s a solid structural principle that applies as much to ecosystems and educational systems as to cities. As Brand observes himself, drawing from O’Neill et al. (1986), the larger, slower-changing elements of any system affect the smaller, faster-changing ones more than vice versa. This is for much the same reasons that path dependencies set in. It’s about the prior providing the context for what follows. Flexible things have to fit into the gaps left by less flexible, older, pre-existing things. In physical spaces, of course these tend to be bigger and/or slower, but the same is true in virtual spaces, where size seldom matters that much, but hardness (inflexibility, brittleness) really does. Though lip service was paid to the word ‘integrated’ in our discussions, I had the strong feeling that the kind of integration we had in mind was that of a Lego set. In fact, I think we were aiming to find a ‘Lego Athabasca University’ set, with assembly instructions and a picture on the box. The vendors who came to talk with us made much of how effectively they could do that, rather than how effectively they could make it possible for others to do that.

Metaphors matter. Lego bricks have to fit together tightly, in pre-specified ways, especially if you are following a plan. If you want to move them around, you have to dismantle a bit of the structure to fit them in. It’s difficult to integrate things that are not bricks, or that are made by different toy companies to work in different ways. At best you get what Brand calls ‘magazine architecture’, or ‘no road’ architecture, beautiful, fit for purpose, intricate and solid, but slow to learn. Lego is not a terrible way to build, compared with buying everything pre-assembled, but it could be improved.

Signals and boundaries

Drawing inspiration from John Holland’s brilliant last work, Signals & Boundaries, I tried to make the case that, instead, we should be focusing on the boundaries (the interfaces between the buildings and the rest of the city), and the signals that pass between them (the people, the messages, etc, the forms they take and how they move around). In Brand’s terms, I wanted us to be thinking about skin and services, and perhaps even structure, though site – Athabasca University – was a given. Though a few people nodded in agreement, I think it mainly fell on deaf ears. We wanted oven-ready solutions, not the infrastructure to enable those solutions. Though the city metaphor works well, because we are talking about human constructions, others would result in similar ways of thinking: cells in bodies, organisms in ecosystems, brains, termite mounds, and so on. All are organized by boundaries (at many levels of hierarchy) and the signals that pass between them.

The Lego set metaphor – whether deliberately or not – seems to have prevailed for now. A lot of old buildings are being slated for demolition and a lot of new virtual buildings are now being erected as part of this development, many of them chosen not because of problems with existing buildings but so that they can more easily connect together and live in the same cloud. This will very likely work, for now, but it is not cheap and it is not flexible, especially given the fact that most of it is not open so, like a rental property, we are not allowed to fix things, add utilities, change the walls, etc, and we are wholly dependent on the landlords being nice to us and each other (knowing that some – ahem, Microsoft – have a long history of abusing their tenants). Those buildings will age. We will find them cramped. Some will age faster than others, and will have to be modified to keep up, perhaps at high cost. Companies renting them might go out of business or change their terms so we might have to demolish the buildings and rent/make new ones. We will be annoyed at how they do things, usually without asking us. We will hate the landlords who dictate what we can do and how we can do it, and who will keep upping the rent while not doing what we ask. We will want more, and the only way to get it will be to build extensions, buy new brick sets, if it is not enough to pay someone to remodel the interiors (and it won’t be). Of course, because most of the big structural elements will not be open source, we will not be able to do that ourselves.

What the ILE really should be

The ILE is, I think, poorly named, because it should not be an environment at all. Following the building metaphor, the ILE is (or should be) more like the system that connects a lot of buildings, bringing them together into a coherent, safe, livable community. It’s infrastructure and services; it is the roads, the traffic signals, the doors, the sidewalks, the water pipes, the waste pipes, the electricity, the network cables; it is the services – fire, police, schools, traffic control, etc; it is all the many rules, standards, norms and regulations that make them work together to help make an environment in which people can live, work, play, and grow. It’s part of the environment – the part that makes it work – but it is not the environment itself. The environment itself is Athabasca University, not just the tools, processes, and systems that support its functions. That includes, most importantly, the people who are part of the university, or who are visitors to it, who are not just users of the environment or dwellers in its walls, but who are or should be the most significant and visible parts of it, just as trees are part of the environment of forests, not users of the forest. Those people live in physical as well as other virtual environments (social media, Word documents, websites, etc) that the ILE can connect together too, to make them a part of it, so the spatial metaphor gets weird at this point. The ILE makes environmental boundaries fuzzy, permeable, and shifting. It’s not an ILE, it’s an ILI – an integrated learning infrastructure.

If we focused on the connections and interfaces, and on how information and processes need to pass across them, and if we thought hard about the nature of those signals, then we could build a system that is resilient, that adapts, that lasts, that grows, that evolves, with parts that we can seamlessly replace or improve because the interfaces – the building facades, the mains pipes, the junction boxes, etc – will mostly stay the same, evolving slowly as they should. This is about strategy, not planning: a way of thinking about systems rather than a sequence of things to do.
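To make that distinction a little more concrete, here is a deliberately simplistic and entirely hypothetical sketch in JavaScript – the names and the shape of the message are invented for illustration, and it is not any part of the actual ILE – of what putting the boundary and the signal first might look like. The institution owns the message format and the bus that carries it, so individual tools can be plugged in, swapped, or retired without rebuilding everything around them.

```javascript
// Hypothetical sketch: the institution owns the signal format, not the tools.
// Any LMS, portfolio, or support system that can emit or consume this shape of
// message can be replaced without rebuilding everything around it.

// The 'signal': a minimal, tool-agnostic learning event (invented for illustration).
function makeLearningEvent({ learnerId, activity, source, occurredAt = new Date() }) {
  return { learnerId, activity, source, occurredAt: occurredAt.toISOString() };
}

// The 'boundary': a tiny bus that routes signals to whatever systems are plugged in.
// Consumers register and unregister freely; nothing else needs to know they exist.
function createSignalBus() {
  const consumers = new Set();
  return {
    subscribe(handler) { consumers.add(handler); return () => consumers.delete(handler); },
    publish(event) { consumers.forEach((handler) => handler(event)); },
  };
}

// Two interchangeable consumers: swapping the LMS for another means changing only
// the adapter that speaks this signal, not the whole environment.
const bus = createSignalBus();
bus.subscribe((e) => console.log(`[analytics] ${e.learnerId} did ${e.activity}`));
const detachLms = bus.subscribe((e) => console.log(`[LMS adapter] recording ${e.activity}`));

bus.publish(makeLearningEvent({ learnerId: 'l-123', activity: 'submitted-portfolio', source: 'eportfolio' }));
detachLms(); // the old LMS leaves; the rest of the system is untouched
```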

Some of the key people involved in the process realize this. They are talking about standards, protocols, and projects to build interfaces between systems, and imagining future needs, though they are inevitably distracted by the process of renting Lego bricks, so I am not sure how much they will be able to stay focused on that. I hope they prevail over those who think they are building a set of classrooms and tightly connected admin offices out of self-contained interlocking bricks because our future depends on getting it right. We are aiming to grow. It just takes one critical piece in the Lego building to fail to support that, and the rest falls apart like a… well, like a pile of bricks.

References

Brand, S. (1997). How Buildings Learn. Phoenix Illustrated. https://www.penguinrandomhouse.ca/books/320919/how-buildings-learn-by-stewart-brand/9780140139969

Holland, J. H. (2012). Signals and Boundaries: Building Blocks for Complex Adaptive Systems. MIT Press.  https://mitpress.mit.edu/books/signals-and-boundaries

O’Neill, R. V., DeAngelis, D. L., Waide, J. B., & Allen, T. F. H. (1986). A Hierarchical Concept of Ecosystems. Princeton University Press. http://www.gbv.de/dms/bs/toc/025157787.pdf

Postman, N. (1998). Five Things We Need to Know About Technological Change. Talk given in Denver, Colorado, 28 March 1998. https://student.cs.uwaterloo.ca/~cs492/papers/neil-postman–five-things.html

Words will never be a substitute for grunts

https://www.aare.edu.au/blog/?p=8996

Andrew Norton claims that online learning will never be a substitute for face-to-face learning.

Indeed.

Here are some other equally useful and true claims:

  • electric vehicles will never be a substitute for gasoline-fueled vehicles;
  • cellphones will never be a substitute for desktop computers;
  • MP3s will never be a substitute for vinyl records;
  • email will never be a substitute for letters;
  • word processing will never be a substitute for handwriting;
  • TV will never be a substitute for radio;
  • aircraft will never be a substitute for ships;
  • cars will never be a substitute for horses;
  • photography will never be a substitute for painting;
  • pianos will never be a substitute for harps;
  • folios will never be a substitute for scrolls;
  • cities will never be a substitute for villages;
  • writing will never be a substitute for speaking;
  • agriculture will never be a substitute for foraging;
  • cooked food will never be a substitute for raw food;
  • words will never be a substitute for grunts;
  • walking on two legs will never be a substitute for walking on four.

Do you see any patterns here? Indeed.

Perhaps it would be better to think about what is enabled and what is enhanced, rather than mainly focusing on what is lost. Perhaps it is a chance to think about what is the same, and maybe to think about how those similarities suggest weaknesses and missed opportunities in what we used to do, and thus to improve both the older and the newer. Perhaps we could try to see the whole assembly rather than a few of its obvious parts. Perhaps we could wonder about how to fill the gaps we perceive, or look for ways that they might already be filled even though we didn’t design it that way. Perhaps we could appreciate all the opportunities and the failings of everything that is available to us. Perhaps we could notice that everything new brings new problems to solve, as well as new opportunities to discover. Perhaps we could remember that we invented new things because they did stuff the old things could not do, or because they do some things better. Perhaps we should observe that new technologies hardly ever fully replace their ancestors, because there are almost always reasons to prefer the old even when the new seem (for some or most purposes, some or most of the time) better.

As it happens, I recently wrote a paper about that kind of thing.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/8775146/words-will-never-be-a-substitute-for-grunts

What really impacts the use of active learning in undergraduate STEM education? Results from a national survey of chemistry, mathematics, and physics instructors

This is a report on an interesting study by Naneh Apkarian et al. that asked a large-ish number (3796) of in-person American STEM profs (college and university levels) about the effects of various known factors on their use of active learning approaches. To a large extent it seems that ‘active learning’ is mainly taken to mean ‘not lectures’ (which is both unfair to a minority of lectures and over-kind to a majority of alternative teaching methods). It’s a good paper but the study itself has some gaping flaws (there are many chicken-and-egg issues here, lots of confounding factors, massive fuzziness, loads of systemic biases, and great complexity hidden in the details), which are, in fairness, very well recognized by the authors. Wisely, they largely avoid making causal connections and, when they do, they use other evidence beyond that of their findings to support them. Flaws aside, it’s a good contribution to our collective story, and a thoroughly interesting read. This is what they found:

1) Though active and inactive(TM) learning approaches are used across the board, lectures are far more likely to be used when class sizes are large (notably so at 60+ class sizes, predominantly so at 100+ class sizes). Depressing, but not surprising: big class sizes massively exaggerate the dominant role of the teacher, and controlling teachers faced with the scary prospect consequently tend to focus on what they want to indoctrinate rather than what students need to do. It doesn’t have to be that way, but it’s how lecturing began in the first place, so it has a bit of a history.

2) If you schedule classes in lecture theatres, most people use them for lecturing. This could  be seen as useful supporting evidence for my own coparticipation model, which predicts this on theoretical grounds (large and slow technologies influence smaller and faster ones more than vice versa, defaults harden). However, it actually shows no causal relationship at all. In fact, the reasons are likely much more mundane. From my dim recollections of in-person teaching, if the course design involves lectures then you get classes scheduled into lecture theatres. If you are stuck with a lecture theatre because of dimwitted/thoughtless timetablers but want to do something different then you have a (fun and challenging) problem, but that’s not what the results here tell us.

3) There’s a small correlation between how teachers are evaluated/the perceived importance of teaching in those evaluations, and how they teach. Those who perceive teaching to be less valued tend to lecture more. This doesn’t seem very useful information to me, without a lot more information about the culture and norms of the institutions, relative weightings for research or service, and so on. Even then, it would be hard to find any causal relationships. It might show that teachers who don’t like or have time for teaching tend to lecture because it is the easiest thing for them to do, but I’d need more evidence to prove that. It might show that extrinsic motivation drives compliance (a little), but, again, it’s not even close to proven. Much more context needed.

4) Perceived job security has no obvious effects on teaching practice. This might be seen as a little surprising as there is a fairly widespread perception that people give up on doing good things when they get tenure, but it doesn’t surprise me, given the multiple factors that affect it. Whether active or not, you can always teach badly or well. The implied assumption that active approaches are riskier and more experimental is not actually true much of the time, and there’s nothing in the survey that draws out whether people are taking risks or not anyway. Most teachers continue to teach in ways that seemed to work before, and tenure makes little or no difference to that.

5a) Very active researchers tend to lecture quite a bit more than quite inactive researchers. Indeed. See 3 – if you are a researcher but not engaged in the scholarship of learning and teaching then you probably have less interest and/or time to spend on teaching well, not to mention the fact that many universities compete to get the best researchers and couldn’t care less whether they can teach or not. There is a happier corollary…

5b) those who engage in educational research of any form lecture a lot less. This speaks to common sense, to what educational research has consistently shown for about 100 years, and to the dominant educational doctrine that lectures are bad. Personally, I kind-of agree with that doctrine, but I think the problem is much subtler than simply that lectures are bad per se – lectures can play a useful role as long as you don’t ever try to use them to impart information, as long as you always remember the rest of the learning assembly into which they fit (and in which most of the learning happens), and as long as you never, ever, ever, whether implicitly or explicitly, mandate attendance. The fact that most institutional lectures fail on all three counts, and virtually all falter on at least the most important two, does indeed make them very bad, but it’s not inherent in the technology. Tain’t what you do, it’s the way that you do it.

6) People who have experienced active learning as learners are far more likely to use such approaches. Well, yes. It would be quite a surprise if, having discovered there are better ways to learn that are more satisfying and effective for all concerned, people did not then use them.

None of this is novel; all of it reconfirms (but doesn’t prove) what we already know, especially in the hard disciplinary areas of STEM. However, it will still be a useful paper to lend support to other research, or when thinking about what needs to change if institutions are trying an intervention. I expect that I will cite it some time.

I’m more interested, though, in what lessons might be drawn for online teaching, especially in an institution like Athabasca University, where teaching is explicitly distributed, where roles in that distributed assembly are well defined and, too often, mutually exclusive, and where lecturing is almost unheard of. 

Inactive online learning

For AU courses, I think the nearest equivalent to a lecture is a heavily content-oriented course (typically greatly reliant on a textbook) with over-controlling, easily-marked assignments, and a proctored exam at the end. That’s the all-too-common ‘don’t think about it’ default. It’s not quite that simple, because the involvement of experienced and well-educated learning designers, editors, and media experts tends to make the content quite well written and at least somewhat informed by theory. Also, compassionate tutors can fill in a lot of gaps: good tutoring is often the saving grace of an otherwise yawn-inducing pedagogical model. It’s efficient and well-honed, like the lecture, and it works most of the time because our students are wonderful and do much of the teaching themselves (despite attempts to control them), but it’s not a great way to teach anyone. Better than lecturing, for sure, but it has to be because there’s not so much of the other stuff that teaches in in-person institutions. We do of course have a great many courses that do not follow this pattern, that involve far more active learning: the pattern is far from ubiquitous, even in STEM teaching.

I think that part of the reason for a preponderance of inactive approaches at AU can be found in the paper’s second finding. In our case, an LMS is the functional equivalent of a lecture theatre (with a similar emphasis on teacher control, structure, and content), especially as our self-paced model limits the options for using its already impoverished social features. There’s also a lot of rigidity in our course development processes, with a laser-sharp focus on measurable outcomes or, worse, clearly defined objectives, that tends to make things more content-driven. Perhaps a bigger part of the reason, though, relates more closely to finding 6. It’s not that our teachers aren’t engaged and interested in producing good stuff: they really are. It’s more that they don’t have a great many role models and examples to call on. This is compounded by:

  • again, the stupidity of LMS design (courses are enclosed and hidden, for the most part),
  • a lack of sharing of tacit knowledge between teachers (we tend to only meet and communicate with a defined purpose, leaving little time for incidental and passing exchanges), and
  • the similarly instrumental and formal nature of our contact with students, which means we don’t usually learn as much as in-person teachers do about how they feel about other courses.

All in all, though it does happen, and we are constantly getting better at it, good ideas still do not spread easily enough. In fairness, that’s also true of many in-person institutions, but at least they have serendipity, greater visibility of teaching, and simpler ways to connect socially for free, because physics. We have to actively design our own social physics, and the results of doing so are seldom particularly great. As we move towards becoming a near-virtual institution (or even nearer-virtual) we are really going to have to work much harder on that.

On the bright side, we are fortunate to have a large proportion of faculty (around 40%) who fall into the 5b category. If only we could do a better job of sharing their learning. That, of course, is a lot of the reason I am writing this, and it was a big impetus behind why we created Athabasca Landing in the first place.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/8727582/what-really-impacts-the-use-of-active-learning-in-undergraduate-stem-education-results-from-a-national-survey-of-chemistry-mathematics-and-physics-instructors

Incarceration in Real Numbers

This is stunning, both in terms of content and in terms of its presentation.

The content is depressingly familiar – the fact that the US incarcerates (in real numbers and as a percentage of population) vastly more people than any other country in the world, the fact that it really likes to do so to visible minorities in particular, and the fact that the system is shockingly corrupt at every level – but the detail is deeply disturbing. I was particularly amazed to learn that around 2% of those vast numbers of incarcerated Americans have actually had a trial. It provides lots of effective comparisons (with other countries, with different demographics, between different demographics, etc) that provide a good sense of the scale of the problem.

What makes this so powerful, though, is the brilliant, JavaScript-powered, interactive presentation. This is one extraordinarily long web page that shows individual images (in symbol form) of all 2.3 million incarcerated Americans, including a running count of where you are, to put this into context. To read it, you have to keep scrolling. Keep scrolling, even if you get tired: it’s worth it. It’s particularly effective on a tablet, and less likely to lead to RSI. Some ingenious (but not at all complicated) coding brings phrases, infographics, statistics, and the occasional interactive element into view along the way, hovering for a while whilst you scroll, or becoming part of what you see as you scroll. You control this – you can slow down, go back, pause, and interact with much of the content as it appears. Watch out for some brilliant ways of representing proportions of population, showing graphs at their true scale, and emphasizing agency by showing the likely effects of different interventions.
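For anyone (my students included) wondering how that kind of scroll-driven reveal works, the underlying mechanism is very simple. Here is a minimal sketch using the browser’s IntersectionObserver API – not the site’s actual code, and the class names are invented for illustration – that toggles a class as elements scroll into view; CSS transitions do the animating, and position: sticky covers the ‘hovering for a while’ effect.

```javascript
// Minimal sketch of a scroll-triggered reveal (not the site's actual code).
// Each element with the class "reveal" starts hidden (e.g. opacity: 0 in CSS,
// with a transition) and gains the class "visible" when it scrolls into view.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    // Toggle visibility as the element enters or leaves the viewport,
    // so content fades in and out as the reader scrolls.
    entry.target.classList.toggle('visible', entry.isIntersecting);
  }
}, { threshold: 0.25 }); // fire when a quarter of the element is showing

document.querySelectorAll('.reveal').forEach((el) => observer.observe(el));
// Elements that should 'hover' alongside the reader for a stretch of the page
// need no JavaScript at all: CSS position: sticky does that on its own.
```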

The experience is deeply visceral – it’s an engagement with the body, not just the eye and brain.  The physical act of scrolling repeatedly hammers home what the numbers actually mean, and the fact that you play such an active role in revealing the content makes it much more impactful than it would be were it simply presented as text and figures, or hyperlinks. I’ve not seen this narrative form used in such a polished, well-integrated way before. This is a true digitally native artwork. The general principle is not dissimilar to that of most conventional e-learning content of the simplest, most mundane next-previous-slide variety. In fact it’s simpler, in many ways. The experience, though, is startlingly different.

It’s quite inspiring. I want to explore this kind of approach in my own teaching, though I don’t know how often I could use it before the effect gets stale, there may be some accessibility issues, and, if it were used in a course context as a means of sharing knowledge, it could easily become as over-controlling as a lecture. That said, it’s a brilliant way to make a point, far more powerfully than a PowerPoint, and  more engagingly than text, images, or video alone. It could be very useful. At the very least, it might provide a little inspiration for my students seeking ideas for using JavaScript on their sites.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/8477597/incarceration-in-real-numbers

Affordable Internet for Canada – Virtual Day of Action today (March 16)

https://affordable-internet.ca/

This is a timely event, running today via Zoom, as Rogers has unexpectedly announced an attempted takeover of Shaw, which would give it not only Shaw’s extensive cable Internet and TV business, but also one of the last remaining serious competitors to the big mobile companies, Freedom Mobile (my cellular provider of choice). The earlier takeover of Freedom by Shaw was in itself a serious matter for concern, especially as it allowed Shaw to sneak in an anti-competitive way to undercut small Internet service providers like Teksavvy (my Internet provider of choice), who use its infrastructure to offer better-value options, and who have already been royally screwed by Shaw in every way legal loopholes allow. This would be a disaster for consumers.

Thanks to the power of the big three (Telus, Bell, and Rogers), Canada is already among the most expensive places in the world for mobile and Internet plans. This is bad news in countless ways, not least the extra tax it effectively adds for online learners and the fact that those most in need (outside large, mostly Southern urban areas) are the least well served. Destroying the competition does not seem like the best way to deal with this already serious problem. Successive governments have failed to curb the power of big telcos to do pretty much what they like, at best achieving small temporary victories, eventually being out-manoeuvred every time. More serious legislative action is needed, especially to support those in outlying areas.

Add your voice to the protest!

Originally posted at: https://landing.athabascau.ca/bookmarks/view/8477140/affordable-internet-for-canada-virtual-day-of-action-today-march-16

Course Content – London Interdisciplinary School

https://www.londoninterdisciplinaryschool.org/course-content/

For those in other parts of the world, some translation may be needed here in order to understand what is novel about the London Interdisciplinary School (LIS): a course in the UK is equivalent to a program in North America, and a module is equivalent to a North American course (or unit, if you are Australian, or paper, if you are from NZ). The UK does also sometimes have programmes, though these are mostly administrative umbrellas to make course management easier, rather than things you can enrol on as a student. I will use the UK terms in this post.

The LIS is the first new ratified degree-awarding institution in the UK since the 60s, though more are coming soon. It has one and only one course. The modules for this course are problem-based, centred around real world issues, and they focus on connecting rather than separating subjects and disciplines, so students can take a very diverse range of paths through them, hooking them into workplace practice. There are plenty more conventional (mostly optional) modules that provide specific training, such as for research methods, web design, and so on, but they seem to be treated as optional supports for the journey, rather than the journey’s destination, in a similar way to that used on many PhDs, where students choose what they need from module offerings for their particular research program.

Strongly interdisciplinary and flexible courses are not new, even in the UK – Keele, for instance, has encouraged pretty much any mix of modules for more than half a century, and many institutions provide a modular structure that gives a fair bit of flexibility (though too rarely between, say, arts and sciences).  What differentiates the LIS approach is that it explicitly gets rid of subjects and disciplines altogether, rightly recognizing no distinct boundaries between them. I like this. The tribes and territories of academia are ridiculous inventions that emerge from place-based constraints, bureaucratic management concerns, and long, long path dependencies, not from any plausible rationale related to learning or intellectual coherence.

The college is partially funded by government but operates as a private institution. I look forward to seeing where they go next.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/7019312/course-content-london-interdisciplinary-school

Echoes and polarization in Facebook, Reddit, and Twitter: it's not just about the algorithms

The authors of a recent paywalled article in MIS Quarterly here summarize their findings in another restrictive and normally paywalled site, the Washington Post. At least the latter gives some access – I was able to read it without forking out $15, and I hope you can too. Unfortunately I don’t have access to the original paper (yet) but I’d really like to read it.

The authors examined the web browsing history of nearly 200,000 US adults, and looked at differences in diversity and polarization related to use of Reddit, Twitter, and Facebook, correlating it with political leanings. What they found will surprise few who have been following such issues.  The headliner is that Facebook is over five times more polarizing for US conservatives than for liberals, driving them to far more partisan news sites, far more of the time. Interestingly, though, those using Reddit visited a far more diverse range of news sites than expected, and tended towards more moderate sites than usual: in fact, the sites were a claimed 50% more moderate than what they would typically read. Furthermore, and just as interesting to me, Twitter seemed to have little effect either way.

The authors blame this on the algorithms – that Facebook preferentially shows posts that drive engagement (so polarizing issues naturally bubble to the top), while Reddit relies on votes for its emphasis, so presenting a more balanced view. In the Washington Post article they have little to say about Twitter, apart from that it wants to be more transparent in its algorithms (though nothing like as transparent as Reddit). But it isn’t, and I think I know why that lack of effect was seen.

Algorithms vs structure

You could certainly look at it from an algorithmic perspective. There is no doubt that different algorithms do lead to different behaviours. Facebook and Twitter both make use of hidden algorithms to filter, sort, and alter the emphasis of posts. In Twitter’s case this is a relatively recent invention. It started using a simpler, time-based sort order, and it has become a much less worthwhile site since it began to emphasize posts it thinks individuals want to see. I don’t like it, and I am very glad to hear that it intends to revert to providing greater control to its users (what Judy Kay calls scrutable adaptation). Reddit’s algorithms, on the other hand, are entirely open and scrutable, as well as being intuitive and (relatively) simple. It is important to remember that none of these sites are entirely driven by computer algorithms, though: all have rules, conditions of use, and plentiful supplies of humans to enforce them. Reddit has human moderators but, unlike the armies of faceless paid moderators employed by Twitter and Facebook to implement their rules, you can see who the moderators are and, if you put in the effort and feel so inclined, you could become one yourself.
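To make the algorithmic contrast concrete, here is a toy sketch in JavaScript – entirely my own invention, not the actual code of any of these platforms, with made-up weights and field names – of the difference between an engagement-driven ranking and a vote-and-age ranking loosely in the spirit of Reddit’s openly documented ‘hot’ score.

```javascript
// Toy sketch of two ranking approaches (not real Facebook or Reddit code).
// An engagement-driven feed boosts whatever provokes reactions, outrage included;
// a vote-and-age ranking lets the crowd's net judgement decide, and lets posts fade.

// Engagement-style score: every reaction counts, however angry (weights invented).
function engagementScore(post) {
  return post.comments * 3 + post.shares * 2 + post.reactions;
}

// Vote-style score, loosely in the spirit of Reddit's published "hot" ranking:
// logarithmic weighting of net votes, plus a bonus for recency.
function hotScore(post) {
  const net = post.upvotes - post.downvotes;
  const order = Math.log10(Math.max(Math.abs(net), 1));
  const sign = net > 0 ? 1 : net < 0 ? -1 : 0;
  const recency = post.createdSeconds / 45000; // newer posts score higher
  return sign * order + recency;
}

// Sorting the same posts with each function produces very different front pages.
const byEngagement = (posts) => [...posts].sort((a, b) => engagementScore(b) - engagementScore(a));
const byHotness = (posts) => [...posts].sort((a, b) => hotScore(b) - hotScore(a));
```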

However, though algorithms do play a significant role, I think that the problem is far more structural, resulting from the social forms each system nurtures. These findings accord very neatly with the distinction that Terry Anderson and I have made between nets (social systems formed from connections between individuals) and sets (social systems that form around interests or shared attributes of their users). Facebook is the archetypal exemplar of the network social form; Reddit is classically set-oriented (as the authors put it ‘topic based’); Twitter is a balanced combination of the two, so the effects of one cancel out the effects of the other (on average). It’s all shades of grey, of course – none are fully one or the other (and all also support group social forms), and none exist in isolation, but these are the dominant forms in each system.

Networks – more specifically, scale-free networks – have a natural tendency towards the Matthew Effect: the rich get richer while the poor get poorer. You can see this in everything from academic paper citations to the spread of diseases, and it is the essence of any human social network. Their behaviours are enormously dependent on highly connected influencers, so they are naturally inclined to polarize, and this would happen without the algorithms. The algorithms might magnify or diminish the effects but they are not going to stop them from happening. To make things worse, when networks are taken online it is not just current influence that matters, because posts are persistent, and continue to have an influence (potentially) indefinitely, whether the effect is good or bad (though seldom if it is somewhere in between).
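To see how little this depends on any recommendation algorithm, here is a toy preferential-attachment simulation in JavaScript – a sketch of the general mechanism, not a model of Facebook or any particular site: each newcomer simply tends to follow whoever already has the most followers, and a handful of hubs quickly come to dominate.

```javascript
// Toy preferential-attachment simulation: each new member follows one existing
// member, chosen with probability proportional to how many followers that member
// already has. No content algorithm at all, yet a few hubs end up dominating.
function simulate(members = 10000) {
  const followers = [1, 1]; // start with two members following each other
  let total = 2;
  for (let i = 2; i < members; i++) {
    // Pick an existing member, weighted by their current follower count.
    let pick = Math.random() * total;
    let chosen = 0;
    while (pick >= followers[chosen]) {
      pick -= followers[chosen];
      chosen++;
    }
    followers[chosen]++; // the rich get richer
    followers.push(1);   // the newcomer arrives with one follow so they can be found too
    total += 2;
  }
  return followers;
}

const counts = simulate();
const sorted = [...counts].sort((a, b) => b - a);
const topShare = sorted.slice(0, Math.ceil(sorted.length / 100)).reduce((s, n) => s + n, 0) /
  counts.reduce((s, n) => s + n, 0);
console.log(`Top 1% of members hold ${(topShare * 100).toFixed(1)}% of all follows`);
```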

There are plenty of sets that are also highly partisan. However, they are quite self-contained and are thus containable, either because you simply don’t bother to join them or because they can easily be eliminated: Reddit, for instance, recently removed r/the_donald, an extreme right-wing subreddit for particularly rabid supporters of Trump, for its overwhelmingly violent and hateful content. Also, on a site such as Reddit, there are so many other interesting subreddits that even the hateful stuff can get a bit lost in the torrent of other news (have you seen the number of subreddits devoted to cats? Wow). And, to a large extent, a set-based system has a natural tendency to be more democratic, and to tend towards moderate views. Reddit’s collective tools – karma, votes, and various kinds of tagging – allow the majority (within a given subreddit) to have a say in shaping what bubbles to the top whereas, in a network, the clusters that form around influencers inevitably channel a more idiosyncratic, biased perspective. Sets are intentional, nets are emergent, regardless of algorithms, and there are patterns to that emergence that will occur whether or not they are further massaged by algorithms. Sets have their own intractable issues, of course: flaming, griefing, trolling, sock-puppeting and many more big concerns are far greater in set-based systems, where the relatively impersonal and often anonymous space tends to suck the worst of humanity out of the woodwork.

I would really like to see the researchers’ results for Twitter. I hypothesize that the reason for its apparent lack of effect is that the set-based features (that depolarize) counterbalance the net-based features (that polarize) so the overall effect is null, but that’s not to say that it has no effect: far from it. People are going to be seeing very different things than they would if they did not use Twitter – both more polarized and more moderate, but (presumably) a bit less in between the two. That’s potentially very interesting, especially as the nuances might be quite varied.

Are networks necessarily polarizing?

Are all online social networking systems evil? No. I think the problem emerges mainly when a system is an undifferentiated, large-scale, general-purpose social network, especially when it uses algorithmic means to massage what members see. There are not many of those (well, not any more). There are, however, very many vertical or niche social networks that, though often displaying the same kinds of polarization problem on a smaller scale, are far less problematic because they start with a set of people who share attributes or interests that draw them to the sites. People are on Facebook because other people are on Facebook (a simple example of Metcalfe’s Law). People on (say) ResearchGate are there because they are academics and researchers – they go elsewhere to support the many other facets of their social lives. This means that, for the most part, niche networks are only part of a much larger environment that consists of many such sets, rather than trying to be everything to everyone. Some are even deliberately focused on kindness and mutual support.
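Metcalfe’s Law, for what it’s worth, is just the observation that the number of possible connections in a network – and so, arguably, its value to its members – grows roughly with the square of the number of users. A quick back-of-the-envelope sketch (mine, purely for illustration):

```python
def possible_connections(n_users: int) -> int:
    """Number of distinct pairs of users: the quantity that Metcalfe's Law
    takes the value of a network to be (roughly) proportional to."""
    return n_users * (n_users - 1) // 2

for n in (10, 100, 1_000, 1_000_000):
    print(f"{n:>9,} users -> {possible_connections(n):>17,} possible connections")

# Doubling the users roughly quadruples the connections, and the same
# arithmetic works just as brutally in reverse as users leave.
```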

Could Facebook shift to a more set-oriented perspective, or at least develop more distinct and separate niches? I doubt it very much. The whole point of Facebook is and always has been to get more people spending more time on the site, and everything it does is focused on that one goal, regardless of consequences. It sucks, in every way, pulling people and content from other systems, giving nothing back, and it thrives on bias. In fact, it is not impossible that it deliberately nurtures the right-wing bias it naturally promotes, because it wishes to avoid being regulated. Without the polarization that drives engagement, it would lose money and users hand over fist, and there are bigger, more established incumbents than Facebook in the set space (YouTube, at least). Could it adjust its algorithms to reduce the bias? Yes, but it would be commercial suicide.  Facebook is evil and will remain so because its business model is evil. For more reasons than I can count, I hope it dies.

It could have been – and still could be – different. Facebook more or less single-handedly and very intentionally maimed the OpenSocial project, to which virtually all other interested organizations had signed up, and which would have allowed federation of such systems in many flexible ways. However, the dream is not dead. A combination of initiatives like Solid, perhaps a browser-based approach to payments, certainly connecting protocols like Webmention, and even the not-quite-dormant OpenSocial might yet enable this to happen. Open source social networking software, like Diaspora, Mastodon, Elgg, or Minds (with some provisos), supports something like a distributed model, or at least one that can be owned by individuals rather than corporations. WordPress dwarfs every other social system in terms of users and websites, and is inherently set-based and distributed: there are also plentiful plugins that support those other open protocols to provide deeper connections. This kind of distributed, open, standards-based initiative could radically alter the dynamic, giving far more control to end users to pick the sets and networks that matter to them, to wrest control of the algorithms from one big behemoth, and to help build a richer, more tolerant society. I am delighted to see that Facebook has lost a couple of million of its US and Canadian users as well as being boycotted by lots of advertisers, and I hope that the void isn’t being filled by Instagram (also Facebook). It could be the start of something big, because Metcalfe’s Law works the same in reverse: what goes up fast can come down just as quickly. Get in on the trend while it’s hot, and join the exodus!
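For a flavour of how lightweight those connecting protocols are, here is a rough sketch of sending a Webmention in Python. This is my own simplified illustration, not code from any of the projects mentioned: the URLs are made up, and a real client should follow the W3C spec’s full endpoint-discovery rules rather than the quick-and-dirty matching below.

```python
import re
import requests  # third-party HTTP library (pip install requests)

def send_webmention(source_url: str, target_url: str) -> int:
    """Tell target_url that source_url links to it, per the W3C Webmention protocol.
    Endpoint discovery is deliberately simplified here."""
    resp = requests.get(target_url, timeout=10)
    endpoint = None
    # 1. Look for a Link header of the form <https://example.com/webmention>; rel="webmention"
    match = re.search(r'<([^>]+)>;\s*rel="?webmention"?', resp.headers.get("Link", ""))
    if match:
        endpoint = match.group(1)
    # 2. Otherwise fall back to a <link rel="webmention" href="..."> element in the HTML.
    if endpoint is None:
        match = re.search(r'<link[^>]+rel="webmention"[^>]+href="([^"]+)"', resp.text)
        if match:
            endpoint = match.group(1)
    if endpoint is None:
        raise ValueError("Target does not advertise a Webmention endpoint")
    # 3. POST the source/target pair to the discovered endpoint.
    reply = requests.post(endpoint, data={"source": source_url, "target": target_url}, timeout=10)
    return reply.status_code  # 2xx responses indicate the mention was accepted or queued

# Hypothetical usage:
# send_webmention("https://myblog.example/reply", "https://yourblog.example/original-post")
```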

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6960862/echoes-and-polarization-in-facebook-reddit-and-twitter-its-not-just-about-the-algorithms

Asus Flip C234 Chromebook review

I’ve been thinking for some time that I need to investigate Chromebooks – at least, ever since Chrome OS added the means to run Android and Linux apps alongside Chrome web apps. I decided to get one recently because I was going on a camping trip during which I’d be required to do some work, and the (ridiculously many) machines I already had were all some combination of too limited, too unreliable, too fragile, too heavy, too power-hungry, too buggy, or too expensive to risk in a muddy campsite. A Chromebook seemed like a good compromise. I wanted one that was fairly cheap, had a good battery life, was tough, could be used as a tablet, and that was not too storage-limited, but otherwise I wasn’t too fussy. One of the nice things about Chromebooks is that, notwithstanding differences in hardware, they are all pretty much the same. 

After a bit of investigation, I noticed that an Asus C234 Flip with an 11.6″ screen was available at BestBuy for about $400, which seemed reasonable enough, based on the advertised specs, and even more reasonable when they lopped $60 off the price for Labour Day. Very unusually, though, the specs on the site were literally all that I had to go on. Though there are lots of Flip models, the C234 is completely unknown, apparently even to Asus (at least on its websites), let alone to reviewers on the web, which is why I am writing this! There’s no manual with it, not even on the machine itself, just a generic leaflet. Following the QR code on the base of the machine leads to a generic not-found page on the Asus site. Because it looked identical to the better-known Flip C214 I thought BestBuy must have made a labelling mistake, but the model number is clearly printed in two places on the base. Despite the label it is, in fact, as I had guessed and eventually confirmed by circuitous means, identified by Asus themselves as a Flip C214MA, albeit with 64GB of storage rather than the more common 32GB and a very slightly upgraded Intel Celeron N4020 CPU instead of an N4000. This model lacks the option of a stylus that is available for many C214 models (pity – that seemed very nice). It was not quite the cheapest Chromebook to fit the bill, but I trust Asus a lot and have never regretted buying one of their machines over 20 years or more of doing so quite frequently. They really know how to design and build a great computer, and they don’t make stupid compromises even on their cheapest machines, be they PCs, tablets, phones, or netbooks. Great company in a sea of dross.

Hardware overview

The C234 comes with only 4GB RAM, which means it can get decidedly sluggish when running more than a handful of small apps, especially when some of them are running under Linux, but it is adequate for simple requirements like word processing, light photo editing, audio recording, web browsing, email, webinars, etc: just the use cases I had in mind, in fact. The 64GB of storage is far less than I’d prefer but, I calculated, should be fine for storing apps. I assumed (wrongly) that any data I’d need locally could be kept on the 256GB SDXC card that I bought to go with it so I was – foolishly – not too concerned. It turns out that Android apps running under ChromeOS that can save data to the SD card are few and far between, and ChromeOS itself is barely aware of the possibility although, of course, most apps can read files from just about anywhere so it is not useless. Unfortunately, the apps that do not support it include most video streaming services and Scribd (which is my main source of ebooks, magazines, and audiobooks) – in other words, the ones that actually eat up most space. The physical SD slot is neat – very easy to insert and difficult (but not too difficult) to remove, so it is not going to pop out unexpectedly.

The computer has two full-spec USB-C ports that can be used for charging the device (45W PD, and not a drop less), as well as for video, external storage, and all the usual USB goodness. It has one USB-A 3.0 socket, and a 1/8″ combo mic/headphone socket that can take cellphone headsets or dedicated speakers/microphones. The wifi and bluetooth are both reasonably modern and mainstream, adequate for everything I currently have but maybe not everything I might buy next year. There is a plastic tab where a stylus is normally stored but, buyer beware, if the detailed model number doesn’t end in ‘S’ then it does not and cannot support a stylus: no upgrade path is available, as far as I can tell. Wifi reception is very good (better than my Macbook Pro), but there is no WiFi 6. There’s no cellular modem, which is a pity, but I have a separate device to handle that. It does have a Kensington lock slot, which I guess reflects how it might be used in some schools where students have to share machines. Going back to the days when I used to manage university computer labs, I would have really liked these machines: they are very manageable. A Kensington lock isn’t going to stop a skilled thief for more than a couple of seconds but, as part of a security management strategy, it fits well.

The battery life is very good. It can easily manage 11-12 hours between charges from its 50Wh battery, and could almost certainly do at least a couple more hours if you were not stretching its capabilities or using the screen on full brightness (I’m lazy and my eyesight is getting worse, so I tend to do both). It charges pretty quickly: I seldom run it down completely, so the longest I have needed to leave it plugged in after it dropped below 20% has been a couple of hours. It uncomplainingly charges from any sufficiently powerful USB-C charger.

As a laptop the Flip feels light in the hand (it weighs in at a little over a kilogram) but, as a tablet, it is pretty heavy and unwieldy and the keyboard cannot be detached. This is a fair compromise. Most of the time I use it as a laptop so I’d rather have a decent keyboard and a battery that lasts, but it is not something you’d want to hold for too long in the kind of orientations you might with an iPad or e-reader. Its 360 degree screen can fold to any intermediate angle so it doesn’t need a separate stand if you want to perch it on something, which is handy in a tent: while camping, I used it in both (appropriately) tented orientation and wrapped over a big tent pocket so that it was held in place by its own keyboard.

Video and audio

The touch screen is OK. At 1366×768 resolution and with a meagre 135 or so pixels per inch it is not even full HD, let alone a retina display. It is perfectly adequate for my poor eyesight, though: fairly bright, acceptable but not great viewing angles, very sharp, and not glossy (I hate glossy screens). I’d much rather have longer battery life than a stunning display so this is fine for me. Viewed straight-on, I can still read what’s on the screen in bright sunshine and, though it lacks a sensor to auto-adjust the brightness, it does have an automatic night-time mode (that reddens and dims the display) that can be configured to kick in at sunset, and there are keyboard keys to adjust brightness. The generic Intel integrated GPU works, but that’s all I can say of it. I’d certainly not recommend it for playing graphics-intensive games or using Photoshop, and don’t even think about VR or compiling big programs because it ain’t going to happen.

The speakers, though, are ridiculously quiet: even pumped up to full volume, a little rain on the tent made them inaudible, and they are quite tinny. I’m guessing that this may have a bit to do with its target audience of schoolkids – a lack of volume might be a good thing in a classroom. The speakers are down-facing, so the sound does benefit from sitting on a table or desk, but not a lot. The headphone volume is fine and it plays nicely with bluetooth speakers. It has a surprisingly large array of 5 microphones scattered quite widely that do a pretty good job of echo cancellation and noise reduction, providing surprisingly good sound quality (though not exactly a Blue Yeti).

It has two cameras, one 5MP device conventionally placed above the screen when used in laptop mode, the other on the same surface as the keyboard, in the bottom right corner when typing, which is weird until you remember it can be used in tablet mode, when it becomes a rear-facing camera. Both cameras are very poor and the rear-facing one is appalling (not even 1K resolution). They do the job for video conferencing, but not much else. That’s fine by me: I seldom need to take photos with my notebook/tablet and, if I want better quality, it handles a Logitech webcam very happily.

Input devices

The keyboard is a touch smaller than average, so it takes a bit of getting used to if you have been mostly using a full-sized keyboard, but it is quite usable, with plenty of travel in the keys and, though each keypress is quite tactile so you know you have pressed it, it is not clicky. It is even resistant to spilt drinks or a spot or two of rain. Having killed a couple of machines this way over the past thirty years or so (once by sneezing), I wish all keyboards had this feature. The only things I dislike about it are that it is not backlit (I really miss that) and that the Return key is far too small, bunched up with a load of symbol keys and easily missed. Apart from that, it is easy to touch type on and I’d say it is marginally better than the keyboard on my Macbook Pro (2019 model). The keys are marked for ChromeOS, so it can be a bit fussy and hard to identify which of the many quote marks is the one you want, because they are slightly differently mapped in ChromeOS, Android, and Linux. On the other hand I’m not at all fond of ChromeOS’s slightly unusual keyboard shortcuts, so it’s nice that the keys tell you what they can do, even though it can be misleading at times.

The multi-touch screen works well with fingers, though it could be far more responsive when using a capacitive stylus: the slow speed of the machine really shows here. Unless you draw or write really slowly, you are going to get broken lines, whether using native Chrome apps, Android, or Linux. I find it virtually unusable when used this way.

The touchpad is buttonless and fine – it just works as you would expect, and its conservative size makes it far less likely to be accidentally pressed than the gigantic glass monstrosity on my Macbook Pro. I really don’t get the point of large touchpads positioned exactly where you are going to touch them with your hand when typing.

There is no fingerprint reader or face recognition, though it mostly does unlock seamlessly when it recognizes my phone. It feels quite archaic to have to enter a password nowadays. You can get dongles that add fingerprint recognition and that work with Chromebooks, but that is not really very convenient.

Build

The machine is made to be used by schoolkids, so it is built to suffer. The shell of the Flip is mostly made of very sturdy plastic. And I do mean sturdy. The edges are rubberised, which feels nice and offers quite a bit of protection. Asus claim it can be dropped onto a hard floor from desk height, and that the pleasingly textured covering hides and prevents scratches and dents. It certainly feels very sturdy, and the texture feels reassuring in the hand, with a good grip so that you are not so likely to drop it. It doesn’t pick up fingerprints as badly as my metal-bodied or conventional plastic machines. Asus say that the 360 degree hinges should survive 50,000 openings and closings, and that the ports can suffer insertion of plugs at least 5,000 times. I believe them: everything about it feels well made and substantial. You can stack 30kg on top of it without it flinching. For the most part it doesn’t need its own case. I felt no serious worries throwing this into a rucksack, albeit that it is neither dust nor water resistant (except under the keyboard). Asus build it to the American military’s MIL-STD 810G spec, which sounds impressive though it should be noted that this is not a particular measure of toughness so much as a quality control standard to ensure that it will survive the normal uses it is designed for. It’s not made for battlefields, boating, or mountaineering, but it is made to survive 11-year-olds, and that’s not bad.

It’s not unattractive but nor is it going to be a design classic. It is just a typical, old-fashioned, fairly nondescript and innocuous small laptop that is unlikely to attract thieves to the same extent as, say, a Microsoft Surface or Macbook Pro. It has good old-fashioned wide bezels. I realize this is seldom considered a feature nowadays, but it is really good for holding it in tablet mode and helps to distinguish the screen from the background. It feels comfortable and familiar. In appearance, it is in fact highly reminiscent of my ancient Asus M5N laptop from 2004, which still runs Linux just fine, albeit without a working battery, with only 768MB of RAM, and, only recently, with a slightly unreliable DVD drive – Asus really does make machines that last.

The machine is fanless so it is quite silent: I love that. Anything that moves inside a computer will break, eventually, and fans can be incredibly annoying even when they do work, especially after a while when dust builds up and operating system updates put more stress on the processor. If things do break, the device has a removable panel on the base, which you can detach using a single standard Phillips screwdriver, and Asus even thoughtfully provide a little thumbnail slot to prise it up. Through this you can access important stuff like storage and RAM, and the whole machine has a modular design that makes every major component easily replaceable – so refreshing after the nightmares of trying to do any maintenance on an Apple device. Inside, it has a dual-core Celeron of some kind that can be pushed up to 2800 MHz – an old and well-tried CPU design that is not going to win any performance prizes but that does the job pretty well. Remembering my tech support days, I would be a bit bothered about leaving this with young and inquisitive kids – they really like to see how things work by doing things that would make them not work. I lost a couple of lab machines to a class of kids who discovered the 240/110v switch on the back of old PCs.

It does feel very sluggish at the best of times after using a Macbook Pro – apps can take ages to load, and there can be quite a long pause before it even registers a touch or a keypress when it is running an app or two already – but it is less than a tenth of the price, so I can’t complain too much about that. It happily runs a full-blown DBMS and web server, which addresses most of my development needs, though I’d not be keen on running a full VM on the device, or compiling a big program.
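For a sense of the sort of modest development workload I mean – this is just an illustrative sketch using Python’s standard library, not the actual stack I run – a small web service backed by a real, if embedded, database is comfortably within this machine’s reach:

```python
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny web app backed by an embedded database: the kind of lightweight
# development workload that a 4GB, Celeron-class machine copes with happily.
db = sqlite3.connect("notes.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
db.execute("INSERT INTO notes (body) VALUES ('hello from the Chromebook')")
db.commit()

class NotesHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read everything back from the database and return it as plain text.
        rows = db.execute("SELECT id, body FROM notes").fetchall()
        payload = "\n".join(f"{row_id}: {body}" for row_id, body in rows).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), NotesHandler).serve_forever()
```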

Included software

There are no Asus apps, docs, or customizations included. It is pure, bare-bones, unadulterated Chrome OS, without even a default Asus page to encourage registration. This is really surprising. Eventually I found the MyAsus (phone) app for Android on Google’s Play store, which is awful but at last – when I entered the serial number to register the machine – it told me what it actually was, so I could go and find a manual for it. The manual contains no surprises and little information I couldn’t figure out for myself, but it is reassuring to have one, and very peculiar that it was not included with the machine. This makes me suspect that BestBuy might have bought up a batch of machines that were originally intended for a (large) organization that had laid down requirements for a bare-bones machine. This might explain why it is not listed on the Asus site.

ChromeOS

I may write more about ChromeOS at some later date – the main reason I got this device was to find out more about it – but I’ll give a very brief overview of my first impressions now. ChromeOS is very clever, though typical of Google’s software in being a little clunky and making the computer itself a little bit too visible: Android suffered such issues in a big way until quite recently, and Android phones still feel more like old fashioned desktop computers than iPhones or even Tizen devices.

Given that it is primarily built to run Chrome apps, it is surprisingly good at running Android apps – even VPNs – though integration is not 100% perfect: you can occasionally run into trouble passing parameters from a Chrome app to Android, some Android apps are unhappy about running on a laptop screen, and not all of them understand the SD card very well. Chrome apps run happily without a network, so you are not tied to the network as much as with other thin-client alternatives like WebOS.

It also does a really good job of running and integrating Linux apps. They all run in a Debian Linux container, so a few aspects of the underlying machine are unavailable and it can be a little complex when you want to use files from other apps or peripherals, but it is otherwise fully featured and, because the container sits directly on ChromeOS’s own Linux underpinnings, it is close to native Linux in performance. The icons for Linux apps appear in the standard launcher like any other app and, though there is a little delay when launching the first Linux app while it starts the container, once you have launched one the rest load quickly. You do need a bit of Linux skill to use it well – command-line use of apt is non-negotiable, at the very least to install apps – and integrating the Android and ChromeOS file systems can be a little clunky. Linux is still a geek option, but it makes the machine many times more useful than it would otherwise be. There’s virtually nothing I’d want to do with the machine that is constrained by software, though the hardware creates a few brick walls.

Integration between the three operating systems is remarkably good altogether but the seams show now and then, such as in requiring at least two apps for basic settings (ChromeOS and Android), with a handful of settings only being available via the Chrome browser, or in not passing clipboard contents to the Linux terminal command line (though you can install an x-terminal that works fine). I’ve hit quite a few small problems with the occasional app, and a few Android apps don’t run properly at all (most likely due to screen size issues rather than integration issues) but overall it works really well. In fact, almost too well – I have a few of the same apps in both ChromeOS and Android versions so sometimes I fail to notice that I am using the glitchier one until it is too late.

Despite the solid Linux foundations, it is not super-stable and crashes in odd ways when you stretch it a little, especially when reconnecting on a different network, but it is stable enough for the standard uses most people would put it to, and it reboots really quickly. Even in the few weeks I’ve had it, it seems to have become more stable, so this is a moving target.

Updates come thick and fast, but it is a little worrying that Google’s long term commitment to ChromeOS seems (like most of their offerings) shaky: the Web app store is due to close at some point soon and there are some doubts about whether it will continue to offer long term support for web apps in general, though Android and Linux support makes that a lot less worrying than it might be. Worst case would be to wipe most traces of ChromeOS and simply partition the machine for Linux, which would not be a bad end-of-life option at all. 

The biggest caveat, though, is that you really need to sell your soul (or at least more of your data than is healthy) to Google to use this. I’m not sure it would work at all without a Google account; at the very least it would be crippled. I trust Google more than I trust most other big conglomerates – not because they are nice but because their business model doesn’t depend on directly selling my data to others – but I do not love their fondness for knowing everything about me, nor that they insist on keeping my data in a banana republic run by a reality TV show host. As much as possible the apps I use are Google-free, but it is virtually impossible to avoid using the Chrome browser that runs many apps, even though a friendlier alternative like Vivaldi would work just as well, if Google allowed it. In fairness, it is less privacy-abusive than Windows, and more open about it. MacOS is not great either, but Apple are fiercely aggressive in protecting your data and don’t use it for anything more than selling you more Apple goodies. Linux or BSD are really the only viable options if you really want to control your data or genuinely own your software nowadays.

Conclusions

This was a great little machine for camping. Though water and dust were a concern, I wasn’t too worried about treating it roughly, especially given the low price. It was small and light, and it performed well enough on every task that I threw at it. It’s neither a great laptop nor a great tablet, but the fact that it performs both roles sufficiently well, without the ugliness and hassles of Windows or the limitations of single-OS machines, is very impressive.

Since returning from camping I have found myself using the machine a lot more than I thought I might. My Macbook Pro is pretty portable and its battery life is not too bad, but it is normally plugged in to a big monitor and a whole bunch of disk drives, so I can’t just pick it up to move around the house or down to the boat without a lot of unplugging and, above all, disk ejection (which, thanks to Apple’s increasingly awful implementation of background indexing that has got significantly worse with every recent release of OSX, can often be an exercise in deep frustration). As a result, I rarely move it unless I know I will be away from the desk for a while. I love that I can just pick the Flip up and use it almost instantly, and I only need to charge it once every couple of days, even when I use it a lot. I still far prefer my Macbook Pro for anything serious or demanding, my iPad or phone for reading news, messaging, drawing, and so on, and a dedicated ebook reader for reading books, but the fact that the Flip can perform all of those tasks reasonably well means it is fast becoming my default mobile device for anything a cellphone doesn’t handle well, such as writing anything of any length – like this post, which was written entirely on it.

In summary, the whole thing is a bit of a weird hybrid that shows its seams a bit too often but that can do most things any tablet or PC can do, and then some. It does a much better job than Windows of combining a ‘real’ PC with a tablet-style device, mainly because (thanks to Android) it does the tablet far better than any Windows PC and, thanks to Linux, it is almost as flexible as a PC (though, bearing in mind that Windows now does Linux reasonably well, it is not quite in the same league). The low spec of the machine does create a few brick walls: I am not going to be running any VMs on it, nor running any graphics-intensive, memory-intensive, or CPU-intensive tasks but, for well over 90% of my day to day computing needs, it works just fine.

I’m now left wondering whether it might be worthwhile to invest in one of the top-of-the-line Google Chromebooks to cater for my more advanced requirements. They are beautiful devices that address nearly all the hardware limitations of the C234 very well, and that are at least a match for mid-to-high-end Windows and Mac machines in performance and flexibility, and they come at a price to match: really not cheap. But I don’t think either I or ChromeOS are quite ready for that yet. MacOS beats it hands down in terms of usability, speed, reliability, consistency, and flexibility, despite Apple’s deeply tedious efforts to lock MacOS down in recent years (trying to triple-boot to MacOS, Windows, and Linux is an exercise in frustration nowadays) and despite not offering a touch screen option. If Apple goes further down the path of assuming all users are idiots then I might change my mind but, for now, its own operating system is still the best available, and a Mac runs Windows and Linux better than almost any equivalently priced generic PC. I would very seriously consider a high-end Chromebook, though, as an alternative to a Windows PC. It is inherently more secure, far less hassle to maintain, and lets you get to doing what you want to do much faster than any Windows machine. Unless you really need a bit of hardware or software that only runs under Windows – and there are very few of those nowadays – then I can think of few reasons to prefer it.

Where to buy (current advertised price $CAD409): https://www.bestbuy.ca/en-ca/product/asus-flip-c234-11-6-touchscreen-2-in-1-chromebook-intel-celeron-n4020-64gb-emmc-4gb-ram-chrome-os/14690262