Bananas as educational technologies

[Image: Banana Water Slide banana statue, Virginia Beach, Virginia]

One of my most memorable learning experiences, one that has served me well for decades and that I recall most days of my life, occurred during a teacher training session early in my teaching career. We had been set the task of giving a two-minute lecture on something central to our discipline. Most of us did what we could with a slide or two and a narrative to match, in a predictably pedestrian way. I remember none of them, not even my own, apart from one. One teacher (his name was Philippe), who taught sports nutrition, just drew a picture of a banana. My memory is hazy on whether he also used an actual banana as a prop: I’d like to think he did. For the next two minutes, he repeated ‘have a banana’ many times, interspersed with some useful facts about its nutritional value and the contexts in which we might do so. I forget most of those useful facts, though I do recall that it has a lot of good nutrients and is easy to digest. My main takeaway was that, if we are in a hurry in the morning, we should not skip breakfast but eat a banana, because it will keep us going well enough to function for some time, and is superior to coffee as a means of making us alert. His delivery was wonderful: he was enthusiastic, he smiled, we laughed, and he repeated the motif ‘have a banana!’ in many different and entertaining ways, with many interesting and varied emphases. I have had (at least) a banana for breakfast most days of my life since then and, almost every time I reach for one, I remember Philippe’s presentation. How’s that for teaching effectiveness?

But what has this got to do with educational technologies? Well, just about everything.

As far as I know, up until now, no one has ever written an article about bananas as educational technologies. This is probably because, apart from instances like the one above where bananas are the topic, or a part of the topic being taught, bananas are not particularly useful educational technologies. You could, at a stretch, use one to point at something on a whiteboard, as a prop to encourage creative thinking, or as an anchor for a discussion. You could ask students to write a poem on it, or calculate its volume, or design a bag for it. There may in fact be hundreds of distinct ways to use bananas as an educational technology if you really set your mind to it. Try it – it’s fun! Notice what you are doing when you do this, though. The banana does provide some phenomena that you can make use of, so there are some affordances and constraints on what you can do, but what makes it an educational technology is what you add to it yourself. Notwithstanding its many possible uses in education, on balance, I think we can all agree that the banana is not a significant educational technology.

Parts and pieces

Here are some other things that are more obviously technological in themselves, but that are not normally seen as educational technologies either:

  • screws
  • nails
  • nuts and bolts
  • glue

As with bananas, there are probably many ways to use them in your teaching but, unless they are either the subject of the teaching or necessary components of a skill that is being learned (e.g. some crafts, engineering, arts, etc) I think we can all agree that none of these is a significant educational technology in itself. However, there is one important difference. Unlike bananas, these technologies can and do play very significant roles in almost all education, whether online or in-person. Without them and their ilk, all of our educational systems would, quite literally, fall apart. However, to call them educational technologies would make little sense because we are putting the boundaries around the wrong parts of the assembly. It is not the nuts and bolts but what we do with them, and all the other things with which they are assembled, that matters most. This is exactly like the case of the banana.

Bigger pieces

This is interesting, because exactly the same things could be said about other things that some people do consider to be sufficiently important educational technologies that they attract large amounts of funding for large-scale educational research: computers, say. There is a great deal of research about computers in classrooms. And yet meta-studies tend to conclude that, on average, computers have little effect on learning. This is not surprising. It is for exactly the same reason that nuts and glue, on average, have little effect on learning. The researchers are choosing the wrong boundaries for their investigations.

The purpose of a computer is to compute. Very few people find this of much value as an end in itself, and I think it would be less useful than a banana to most teachers. In fact, with the exception of some heavily math-oriented and/or computer science subjects, it is of virtually no interest to anyone.

The ends to which the computing they perform is put are another matter altogether. But those ends are no more the effect of the computer than the computer is the effect of the nuts and bolts that hold it together. Sure, these (or something like them) are necessary components, but they are not causes of whatever it is we do with them. What makes computers useful as educational technologies is, exactly like the case of the banana, what we add to them.

It is not the computer itself, but other things with which it is assembled such as interface hardware, software and (above all) other surrounding processes – notably the pedagogical methods – that can (but on average won’t) turn it into an educational technology. There are potentially infinite numbers of these, or there would be if we had infinite time and energy to enact them. Computers have the edge on bananas and, for that matter, nuts and bolts because they can and usually must embody processes, structures, and behaviours. They allow us to create and use far more diverse and far more complex phenomena than nuts, bolts, and bananas. Some – in fact, many – of those processes and structures may be pedagogically interesting in themselves. That’s what makes them interesting, but it does not make them educational technologies. What can make them educational technologies are the things we add, not the machines in themselves.

This is generalizable to all technologies used for educational purposes. There are hierarchies of importance, of course. Desks, classrooms, chairs, whiteboards and (yes) computers are more interesting than screws, nails, nuts, bolts, and glue because they orchestrate more phenomena to more specific uses: they create different constraints and affordances, some of which can significantly affect the ways that learning happens. A lecture theatre, say, tends to encourage the use of lectures. It is orchestrating quite a few phenomena that have a distinct pedagogical purpose, making it a quite significant participant in the learning and teaching process. But it and all these things, in turn, are utterly useless as educational technologies until they are assembled with a great many other technologies, such as (very non-exhaustively and rather arbitrarily):

  • pedagogical methods,
  • language,
  • drawing,
  • timetables,
  • curricula,
  • terms,
  • classes,
  • courses,
  • classroom rules,
  • pencils and paper,
  • software,
  • textbooks,
  • whiteboard markers,
  • and so on.

None of these parts has much educational value on its own. Even a pedagogical method, as unequivocal an example of an educational technology as there is, is useless without all the rest, and changes to any of the parts may have substantial impacts on the whole. Furthermore, without the participation of learners applying their own pedagogical methods, the assembly would be utterly useless, even with everything else in place. Every educational event – even those we apparently perform alone – involves the coparticipation of countless others, whether directly or not.

The point of all this is that, if you are an educational researcher or a teacher investigating your own teaching, it makes no sense at all to consider any generic technology in isolation from all the rest of the assembly. You can and usually should consider specific instances of most if not all those technologies when designing and performing an educational intervention, but they are interesting only insofar as they contribute, in relationship to one another, to the whole.

And this is not the end of it. Just as you must assemble many pieces in order to create an educational technology, what you have assembled must in turn be assembled by learners – along with plenty of other things like what they know already, other inputs from the environment, from one another, the effects of things they do, their own pedagogical methods, and so on – in order to achieve the goals they seek. Your own teaching is as much a component of that assembly as any other. You, the learners, the makers of tools, inventors of methods, and a cast of thousands are coparticipants in a gestalt process of education.

This is one of the main reasons that reductive approaches to educational research that attempt to isolate the effects of a single technology – be it a method of teaching, a device, a piece of software, an assessment technique, or whatever – with the intent of generalizing some statement about it cannot ever work. The only times they have any value at all are when all the technologies in question are so hard, inflexible, and replicable, and the uses to which they are put are so completely fixed, well defined, and measurable that you are, in effect, considering a single specific technology in a single specific context. But, if you can specify the processes and purposes with that level of exactitude then you are simply checking that a particular machine works as it is designed to work. That’s interesting if you want to use that precise machine in an almost identical context, or you want to develop the machine itself further. But it is not generalizable, and you should never claim that it is. It is just part of a particular story. If you want to tell a story then other methods, from narrative descriptions to rich case studies to grounded theory, are usually much more useful.

I hate change, especially when it is inflicted upon me

For at least the past 5 or 6 years I have been hosting the websites I care most about, including this one, with a good-value, mostly reliable provider (OVH) that has servers in Canada. I don’t dislike the company and I’m still paying them, though the value isn’t feeling so great right now, because they are soon to retire their old VPS solution on which my sites are hosted, forcing me to either leave them or ‘upgrade’ to one of their new plans. Of course, the cheapest plan that can fit what I already have is more expensive than the old one. If I had the time, I might look for an alternative, but Canada is not well served by companies that provide cheap, reliable virtual private servers. There’s no way I’m moving my sites to US hosting (guys, stop letting rich corporations decide your laws for you, or at least elect someone to your presidency who’s not a dead ringer for the antichrist). I do have servers elsewhere but I live here, and I like Canada more than any other country.

My new hosting plan might be a bit better than the old one in some ways but worse in others. I am now paying $15/month instead of $10 for something I didn’t need improved, and that is mostly not much better than it was. I have already lost a day or two of my own time to migration (with just one site mostly migrated), and expect to lose more as I migrate more sites, not to mention significant downtime when I (inevitably) mess things up, especially because, of course, I am ‘fixing’ a few things in the process. In fairness, OVH have given me 6 months of ‘free’ hosting by way of compensation but, given the amount of work I need to put into it and the increased cost over the long term, it’s not a good deal for me.

I do understand why things must change. You cannot run the same old servers forever because things decay, and complexity (in management, especially) inevitably increases. This is true of all technologies, from languages to bureaucracies, from vehicles to software. But this seems like a sneaky way to impose a price hike, rather than an inevitable need. More to the point, if I need to change the technologies my sites run on, I want to be the one who makes those choices, and I want to choose exactly when I make them. That’s precisely why I put up with the pain and hassle of managing my ‘own’ servers. Well, that and the fact that I figure a computing professor ought to have a rough idea about real world computing, and having my own server does mean I can help out friends and family from time to time.

Way back in time I used to run servers for a living so, though the pace of change (in me and in the technologies I use) makes it more difficult to keep up than it used to be, I am not too scared of doing the hard stuff. I really like the control that managing a whole server gives me over everything. If it breaks, it’s my fault, but it’s also my responsibility when it works. I’ve always told myself that, worst case, all I need to do is to zip up the sites and move them, lock, stock, and barrel, somewhere else, so I am not beholden to proprietary tools and APIs, nor do I have much complexity to worry about when things need to change. I’ve also always known that this belief is over-simplistic and overly optimistic, but I’ve tried to brush that under the carpet because it’s only a problem when it becomes a problem. Now it’s a problem.

On the bright side, I have steadfastly avoided cloud alternatives because they lock you in, in countless ways, eventually making you nothing but a reluctant cash cow for the cloud providers. This would have been many times worse if I had picked a cloud solution. I have one small server to worry about rather than dozens of proprietary services, and everything on it is open and standardized. But path dependencies can lock you in too. Though I rarely make substantial changes – that way madness lies – I have made a surprising number of small decisions about the system over the past few years that, on the whole, I have mostly documented but that, en masse, are more than a slight pain to deal with.

This site was down for hours today, for instance, while I struggled to figure out why it had suddenly decided that it knew nothing about SSL any more. It turned out to be a combination of things: the Let’s Encrypt certificates had to be regenerated for the new site; permissions didn’t quite work the same way on the new servers (my bad for choosing this time to upgrade the operating system, but it was a job that needed doing); and some automation wanted to change server configuration files that I expected to configure myself.

This kind of process can also reveal digital decay that you might not have noticed happening. Right now, for example, there appear to be about 50 empty files sitting in my media folder for reasons I am unsure of; they were almost certainly there on the old server too. I think they may be harmless, but I am bothered that I might have migrated over something that is not working, which might cause more problems in future. More hours of tedious effort ahead.
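
At least the hunt for empty files is easily scripted. Here is a minimal sketch in Python – the media path is a hypothetical stand-in for wherever your site keeps its uploads:

    # Minimal sketch: list zero-byte files under a media directory so they
    # can be inspected before deciding whether they are safe to delete.
    from pathlib import Path

    MEDIA_ROOT = Path("/var/www/mysite/media")  # hypothetical path; adjust to your setup

    def find_empty_files(root: Path) -> list[Path]:
        """Return all regular files under root whose size is zero bytes."""
        return [p for p in root.rglob("*") if p.is_file() and p.stat().st_size == 0]

    if __name__ == "__main__":
        for path in find_empty_files(MEDIA_ROOT):
            print(path)

I would review the list by hand before deleting anything: an empty file can still be a placeholder that some application expects to exist.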

The main thing that all this highlights to me, though, is something I too often try to ignore: that I do not own what I think I own any more. This is my site, but someone else has determined that it should change. All technologies tend towards entropy, be they poems, words, pyramids, or bicycles. They persist only through an active infusion of energy. I suppose I should therefore feel no worse about this than when a drain gets blocked or a lock needs replacing, but I do feel upset, because this is something I was paying someone else to deal with, and because there is absolutely nothing I could have done (or at least nothing that would not have been much more hassle) to prevent it. I have many similar ‘lifetime’ services that are equally tenuous, ‘lifetime’ referring only to the precarious lifespan of the company in its current state, before it chooses to change its policies or gets acquired by someone else, or simply goes out of business. A few of the main things I have learned through having too many such things are:

  • to keep it simple: small, easily replaceable services trump big, highly functional systems every single time.
  • to always maintain alternatives. Even if OVH had gone belly-up, I still have mirrors on lesser sites that would have kept me going in a worst-case scenario, though it would have been harder work and less efficient to go down that path.
  • don’t trust any company, ever. They are not people so, even if they are lovely now, there is no guarantee that they will be next year, or tomorrow. And their purpose is to survive, and probably to make money, not to please you. You can trust people, but you cannot trust machines.
  • this is even true of the companies you work for. Much as I love my university, its needs and purposes only partially coincide with mine. The days of the Landing, for instance, a system into which I have poured much energy for well over 10 years, are very likely numbered, though I have no idea whether that means it has months or years left to live. Not my call, and not the call of any one individual (though someone will eventually sign its death warrant). With luck and concerted effort, it will evolve into something more wonderful but that’s not the point. Companies are not human, and they don’t think like humans.
  • if possible, stick with whatever defaults the software comes with or, at least, make sure that all changes are made in as few places as possible. It’s an awful pain to have to find the tweaks you made when you move it to a new system unless they are all in one easy-to-find place.
  • open standards are critical. There’s no point in getting great functionality if it relies on the goodwill of a company to maintain it, except where the value is unequivocally transient. I don’t much mind a trustworthy agent handling my spam filtering or web conferencing, for instance, though I’d not trust one to handle my instant messaging or site hosting, unless they are using standards that others are using. Open source solutions do die, and do lose support, but they are always there when you need them, and it is always possible to migrate, even if the costs may be high.

This site is now running on the new system, with a slightly different operating system and a few upgrades here and there. It might even be a little faster than the last version, eventually. I (as it turns out) wisely chose Linux and open source web software, so it continues to work, more or less as it did before, notwithstanding the odd major problem. If this had been a Windows or even a Mac site, though, it would have been dead long ago.

I have a bit of work to do on the styling here and there – I’m not sure quite what became of the main menu and (for aforementioned reasons) am reluctant to mess around with the CSS. If you happen to know me, or even if you don’t but can figure out how to deal with the anti-spam stuff in the comments section of this page, do tell me if you spot anything unusual.

Finally, if I’ve screwed up the syndication then you will probably not be reading this anyway. I’ve already had to kill the (weak) Facebook integration in order to make it work at all, though that’s good riddance and I’m happy to see it go. Twitter might be another matter, though. Another set of proprietary APIs and, potentially, another fun problem to deal with tomorrow.

Addendum: so it turns out that I cannot save anything I write here. Darn. I thought it might be a simple problem with rewrite rules but that’s not it. When you read this, I will have found a solution (and it will probably be obvious, in retrospect) but it is making me tear my hair out right now.

Addendum to addendum: so I did screw up the syndication, and it was a simple problem with rewrite rules. After installing the good old-fashioned WordPress editor everything seemed fine, but I soon discovered that the permalinks were failing too, so (though it successfully auto-posted to Twitter) links I had shared to this post were failing. All the signs pointed to a problem with Apache redirection, but all my settings were quadruple-checked correct. After a couple of hours of fruitless hacking, I realized that the settings were quadruple-checked correct for the wrong domain name (jondron.org, which actually redirects here to jondron.ca, but that is still running on the old site so not working properly yet). Doh. I had even documented this, but failed to pay attention to my own notes. It’s a classic technology path-dependency leading to increased complexity, of exactly the kind that I refer to in my post.

The history of it is that I used to use jondron.org as my home page, and that’s how the site was originally set up, but I chose to switch to jondron.ca a few years ago because it seemed more appropriate and, rather than move the site itself to a new directory, I just changed everything in its database to use the jondron.ca domain name instead. Because I had shared the old site with many people, I set up a simple HTTP redirect from jondron.org to point to this one, and had retained the virtual host on the server for this purpose. All perfectly logical, and just a small choice along the way, but with repercussions that have just taken up a lot of my time. I hope that I have remembered to reset everything after all the hacks I tried, but I probably haven’t. This is how digital decay sets in.
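
Not that it would have fixed anything by itself, but a small script to walk the redirect chain would have pointed me at the right domain much sooner. A minimal sketch using only the Python standard library, with the domains discussed above:

    # Follow HTTP redirects manually, printing each hop, to confirm that an
    # old domain really lands where you think it does.
    import http.client
    from urllib.parse import urljoin, urlsplit

    def trace_redirects(url: str, max_hops: int = 10) -> list[str]:
        """Return the chain of URLs visited when requesting url."""
        chain = [url]
        for _ in range(max_hops):
            parts = urlsplit(chain[-1])
            conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                        else http.client.HTTPConnection)
            conn = conn_cls(parts.netloc, timeout=10)
            conn.request("HEAD", parts.path or "/")
            response = conn.getresponse()
            location = response.getheader("Location")
            conn.close()
            if response.status in (301, 302, 303, 307, 308) and location:
                chain.append(urljoin(chain[-1], location))  # Location may be relative
            else:
                break
        return chain

    if __name__ == "__main__":
        for hop in trace_redirects("http://jondron.org/"):
            print(hop)

Had I run something like this against jondron.org at the start, the stale virtual host would have shown up in the first hop or two.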

Turns out the STEM ‘gender gap’ isn’t a gap at all

[Image: Grace Hopper and Univac, from en.wikipedia.org/wiki/Grace_Hopper]

At least in Ontario, it seems that there are about as many women as men taking STEM programs at undergraduate level. This represents a smaller percentage of women taking STEM subjects overall because there are way more women entering university in the first place. A more interesting reading of this, therefore, is not that we have a problem attracting women to science, technology, engineering, and mathematics, but that we have a problem attracting men to the humanities, social sciences, and the liberal arts. As the article puts it:

“it’s not that women aren’t interested in STEM; it’s that men aren’t interested in poetry—or languages or philosophy or art or all the other non-STEM subjects.”

That’s a serious problem.

As someone with qualifications in both (incredibly broad) areas, and interests in many sub-areas of each, I find the arbitrary separation between them to be ludicrous, leading to no end of idiocy at both extremes, and little opportunity for cross-fertilization in the middle. It bothers me greatly that technology subjects like computing or architecture should be bundled with sciences like biology or physics, but not with social sciences or arts, which are way more relevant and appropriate to the activities of most computer professionals. In fact, it bothers me that we feel the need to separate out large fields like this at all. Everyone pays lip service to cross-disciplinary work but, when we try to take that seriously and cross the big boundaries, there is so much polarization between the science and arts communities that they usually don’t even understand one another, let alone work in harmony. We don’t just need more men in the liberal arts – we need more scientists, engineers, and technologists to cross those boundaries, whatever their gender. And, vice versa, we need more liberal artists (that sounds odd, but I have no better term) and social scientists in the sciences and, especially, in technology.

But it’s also a problem of category errors in the other direction. This clumping together of the whole of STEM conceals the fact that in some subjects – computing, say – there actually is a massive gender imbalance (including in Ontario), no matter how you mess with the statistics. This is what happens when you try to use averages to talk about specifics: it conceals far more than it reveals.
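
To make that concrete, here is a toy illustration in Python. Every number in it is invented for the sake of the example – emphatically not the Ontario data:

    # Toy illustration (all figures invented): near-parity in aggregate 'STEM'
    # can coexist with a stark imbalance in one subfield, because the average
    # is dominated by the larger, more balanced subjects.
    enrolment = {
        # subject: (women, men) -- fabricated numbers, for illustration only
        "biology":     (6000, 4000),
        "mathematics": (2500, 2500),
        "engineering": (2000, 4000),
        "computing":   ( 800, 3200),
    }

    women = sum(w for w, m in enrolment.values())
    men = sum(m for w, m in enrolment.values())
    print(f"STEM overall: {women / (women + men):.0%} women")  # ~45%: near parity

    w, m = enrolment["computing"]
    print(f"computing:    {w / (w + m):.0%} women")            # 20%: a massive gap

Averaged across the whole clump, the ‘gap’ all but disappears; draw the boundary around the right subfield and it is glaring.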

I wish I knew how to change that imbalance in my own designated field of computing, an area that I deliberately chose precisely because it cuts across almost every other field and did not limit me to doing one kind of thing. I do arts, science, social science, humanities, and more, thanks to working with machines that cross virtually every boundary.

I suspect that fixing the problem has little to do with marketing our programs better, nor with any such surface efforts that focus on the symptoms rather than the cause. A better solution is to accept and to celebrate the fact that the field of computing is much broader and vastly more interesting than the tiny subset of it that can be described as computer science, and to build up from there. It’s especially annoying that the problem exists at Athabasca, where a wise decision was made long ago not to offer a computer science program: we have computing and information systems programs, but no programs in computer science. Unfortunately, thanks to a combination of lazy media and computing profs (suffering from science envy) who promulgate the nonsense, even good friends of mine who should know better sometimes describe me as a computer scientist (I am emphatically not), and even some of our own staff think of what we do as computer science. To change that perception means not just a change in nomenclature, but a change in how and what we, at least in Athabasca, teach. For example, we might mindfully adopt an approach that contextualizes computing around projects and applications, rather than its theory and mechanics. We might design a program that doesn’t just lump together a bunch of disconnected courses and call it a minor but that, in each course (if courses are even needed), actively crosses boundaries – to see how code relates to poetry, how art can inform and be informed by software, how understanding how people behave can be used in designing better systems, how learning is changed by the tools we create, and so on.

We don’t need disciplines any more, especially not in a technology field. We need connections. We don’t need to change our image. We need to change our reality. I’m finding that to be quite a difficult challenge right now.


Address of the bookmark: http://windsorstar.com/opinion/william-watson-turns-out-the-stem-gender-gap-isnt-a-gap-at-all/wcm/ee4217ec-be76-4b72-b056-38a7981348f2

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2929581/turns-out-the-stem-%E2%80%98gender-gap%E2%80%99-isn%E2%80%99t-a-gap-at-all

Evidence mounts that laptops are terrible for students at lectures. So what?

The Verge reports on a variety of studies that show taking notes with laptops during lectures results in decreased learning when compared with notes taken using pen and paper. This tells me three things, none of which is what the article is aiming to tell me:

  1. That the institutions are teaching very badly. Countless decades of far better evidence than that provided in these studies shows that giving lectures with the intent of imparting information like this is close to being the worst way to teach. Don’t blame the students for poor note taking, blame the institutions for poor teaching. Students should not be put in such an awful situation (nor should teachers, for that matter). If students have to take notes in your lectures then you are doing it wrong.
  2. That the students are not skillful laptop notetakers. These studies do not imply that laptops are bad for notetaking, any more than giving students violins that they cannot play implies that violins are bad for making music. It ain’t what you do, it’s the way that you do it. If their classes depend on effective notetaking then teachers should be teaching students how to do it. But, of course, most of them probably never learned to do it well themselves (at least using laptops). It becomes a vicious circle.
  3. That laptop and, especially, software designers have a long way to go before their machines disappear into the background like a pencil and paper. This may be inherent in the medium, inasmuch as a) they are vastly more complex toolsets with much more to learn about, and b) interfaces and apps constantly evolve so, as soon as people have figured out one of them, everything changes under their feet. Another vicious circle.

The extra cognitive load involved in manipulating a laptop app (and stopping the distractions that manufacturers seem intent on providing even if you have the self-discipline to avoid proactively seeking them yourself) can be a hindrance unless you are proficient to the point that it becomes an unconscious behaviour. Few of us are. Tablets are a better bet, for now, though they too are becoming overburdened with unsought complexity and unwanted distractions.

I have for a couple of years now been taking most of my notes at conferences etc with an Apple Pencil and an iPad Pro, because I like the notetaking flexibility, the simplicity, the lack of distraction (albeit that I have to actively manage that), and the tactile sensation of drawing and doodling. All of that likely contributes to making it easier to remember stuff that I want to remember. The main downside is that, though I still gain laptop-like benefits of everything being in one place, of digital permanence, and of it being distributed to all my devices, I have, in the process, lost a bit in terms of searchability and reusability. I may regret it in future, too, because graphic formats tend to be less persistent over decades than text. On the bright side, using a tablet, I am not stuck in one app. If I want to remember a paper or URL (which is most of what I normally want to remember other than my own ideas and connections that are sparked by the speaker) I tend to look it up immediately and save it to Pocket so that I can return to it later, and I do still make use of a simple notepad for things I know I will need later. Horses for courses, and you get a lot more of both with a tablet than you do with a pencil and paper. And, of course, I can still use pen and paper if I want a throwaway single-use record – conference programs can be useful for that.


Address of the bookmark: https://www.theverge.com/2017/11/27/16703904/laptop-learning-lecture

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2871283/evidence-mounts-that-laptops-are-terrible-for-students-at-lectures-so-what

Teens unlikely to be harmed by moderate digital screen use

The results of quite a large study (120,000 participants) appear to show that ‘digital’ screen time, on average, correlates with increased well-being in teenagers up to a certain point, after which the correlation is, on average, mildly negative (but not remotely as bad as, say, skipping breakfast). There is a mostly implicit assumption, or at least speculation, that the effects are in some way caused by use of digital screens, though I don’t see strong signs of any significant attempts to show that in this study.

While this accords with common sense – if not with the beliefs of a surprising number of otherwise quite smart people – I am always highly sceptical of studies that average out behaviour, especially for something as remarkably vague as engaging with technologies that are related only insofar as they involve a screen. This is especially the case given that screens themselves are incredibly diverse – there’s a world of difference between the screens of an e-ink e-reader, a laptop, and a plasma TV, for instance, quite apart from the infinite range of possible different ways of using them, devices to which they can be attached, and activities that they can support. It’s a bit like doing a study to identify whether wheels or transistors affect well-being. It ain’t what you do, it’s the way that you do it. The researchers seem aware of this. As they rightly say:

“In future work, researchers should look more closely at how specific affordances intrinsic to digital technologies relate to benefits at various levels of engagement, while systematically analyzing what is being displaced or amplified,” Przybylski and Weinstein conclude. 

Note, though, the implied belief that there are effects to analyze. This remains to be shown. 
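
For what it’s worth, a tiny simulation shows why I am sceptical of the averages themselves. The effect sizes below are pure invention; the point is only that a population-wide average can sit near zero while concealing real, opposite effects in subgroups:

    # Toy simulation (invented effect sizes): if screen time helps one
    # subgroup and harms another, the pooled average effect looks negligible
    # even though almost nobody experiences a negligible effect.
    import random

    random.seed(1)

    def wellbeing(hours: float, helped: bool) -> float:
        """Fabricated model: +/-0.5 wellbeing units per hour, plus noise."""
        slope = 0.5 if helped else -0.5
        return slope * hours + random.gauss(0, 1)

    population = []
    for _ in range(10_000):
        hours = random.uniform(0, 8)
        helped = random.random() < 0.5          # half helped, half harmed
        population.append((hours, helped, wellbeing(hours, helped)))

    def mean(xs):
        return sum(xs) / len(xs)

    # Pooled comparison across everyone: heavy users vs light users.
    heavy = [w for h, g, w in population if h > 4]
    light = [w for h, g, w in population if h <= 4]
    print(f"pooled effect: {mean(heavy) - mean(light):+.2f}")       # close to zero

    # The same comparison within each subgroup tells the opposite story.
    for flag, label in [(True, "helped"), (False, "harmed")]:
        heavy = [w for h, g, w in population if g == flag and h > 4]
        light = [w for h, g, w in population if g == flag and h <= 4]
        print(f"{label} group: {mean(heavy) - mean(light):+.2f}")   # about +/-2

It ain’t what you do, it’s the way that you do it – and averages erase the way.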

Address of the bookmark: https://www.eurekalert.org/pub_releases/2017-01/afps-tut011217.php

Moral panic: Japanese girls risk fingerprint theft by making peace-signs in photographs / Boing Boing

As Cory Doctorow notes, why this headline should single out Japanese girls as being particularly at risk – and that this is the appeal of it – is much more disturbing than the fact that someone figured out how to lift fingerprints that can be used to access biometric authentication systems from photos taken using an ‘ordinary camera’ at a considerable distance (3 metres). He explains the popularity of the news story thus:

I give credit to the news-hook: this is being reported as a risk that young women put themselves to when they flash the peace sign in photos. Everything young women do — taking selfies, uptalking, vocal fry, using social media — even reading novels! — is presented as a) unique to young women (even when there’s plenty of evidence that the trait or activity is spread among people of all genders and ages) and b) an existential risk to the human species (as in, “Why do these stupid girls insist upon showing the whole world their naked fingertips? Slatterns!”)

The technical feat intrigued me, so I found a few high-res scans of pictures of Churchill making the V sign, taken on very good medium or large format film cameras (from that era, 5″x4″ press cameras were most common, though some might have been taken on smaller formats and/or cropped) with excellent lenses, by professional photographers, under various lighting conditions, from roughly that distance. While, on the very best, with cross-lighting, a few finger wrinkles and creases were partly visible, there was no sign of a single whorl, and nothing like enough detail for even a very smart algorithm to figure out the rest. So, with a tiny fraction of the resolution, I don’t think you could just lift an image from the web, a phone, or even from a good compact camera to steal someone’s fingerprints unless the range were much closer and you were incredibly lucky with the lighting conditions and focus. That said, a close-up selfie using an iPhone 7+, with focus on the fingers, might well work, especially if you used burst mode to get slightly different images (I’m guessing you could mess with bas-relief effects to bring out the details). You could also do it if you set out to do it. With something like a good 400mm-equivalent lens in bright light, with low ISO, cross-lighting, a large-sensor camera (APS-C or larger), high resolution, good focus, and a small aperture, there would probably be enough detail.
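
Out of curiosity, here is that back-of-envelope reasoning as a Python sketch. The assumptions are mine, not the article’s: ridges repeat roughly every 0.5 mm, a fingertip is about 15 mm wide, you need around two pixels per ridge period to resolve anything, and the two cameras are plausible stand-ins rather than measured examples:

    # Rough thin-lens estimate of how many pixels span a fingertip at 3 m.
    def pixels_across_fingertip(focal_mm: float, distance_m: float,
                                sensor_px: int, sensor_width_mm: float) -> float:
        """Pixels spanning a 15 mm-wide fingertip at the given distance."""
        magnification = focal_mm / (distance_m * 1000 - focal_mm)
        return 15 * magnification * (sensor_px / sensor_width_mm)

    RIDGE_PERIODS = 15 / 0.5   # ~30 ridge periods across a fingertip

    # Illustrative cameras at 3 metres: a phone's wide lens vs a
    # 400mm-equivalent telephoto on an APS-C body (both hypothetical).
    for name, focal, px, width in [("phone wide lens", 4.2, 4000, 4.8),
                                   ("APS-C + 400mm-e telephoto", 266.0, 6000, 23.6)]:
        span = pixels_across_fingertip(focal, 3.0, px, width)
        print(f"{name}: {span:.0f} px across fingertip, "
              f"{span / RIDGE_PERIODS:.1f} px per ridge period")

On those numbers, the phone resolves well under one pixel per ridge period at 3 metres (hopeless), while the telephoto rig gets a dozen or so (ample) – which matches the conclusion above.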

Address of the bookmark: https://boingboing.net/2017/01/12/moral-panic-japanese-girls-ri.html

Setapp – Netflix-style rental model for apps for Mac

Interesting. For $10 USD/month, you get unlimited access to the latest versions of what is promised to be around 300 commercial Mac apps. Looking at the selection so far (about 50 apps), these seem to be of the sort that usually appear in popular app bundles (e.g. StackSocial etc), in which you can buy apps outright for a tiny fraction of the list price (quite often at a 99% reduction). I have a few of these already, for which I paid an average of 1 or 2 dollars apiece, albeit that they came with a bunch of useless junk that I did not need or already owned, so perhaps it’s more realistic to say they average more like $10 apiece. Either way, they can already be purchased for very little money, if you have the patience to wait for the right bundle to arrive. So why bother with this?

The main advantage of Setapp’s model is that, unlike apps bought in bundles, which often nag you to upgrade to the next version at a far higher price than you paid almost as soon as you get them, you always get the latest version. It is also nice to have on-demand access to a whole library at any time: if you can wait a few months, such apps will probably turn up in a cheap pay-what-you-want bundle anyway, but they are only rarely available when you actually need them. I guess there is a small advantage in the curation service, but there are plenty of much better and less inherently biased ways to discover tools that are worth having.

The very notable disadvantage is that you never actually own the apps – once you stop subscribing or the company changes conditions/goes bust, you lose access to them. For ephemerally useful things like disk utilities, conversion tools, etc this is no great hassle but, for things that save files in proprietary formats or supply a cloud service (many of them) this would be a massive pain. As there is (presumably) some mechanism for updating and checking licences, this might also be an even more massive pain if you happen to be on a plane or out of network range when either the app checks in or the licence is renewed. I don’t know which method Setapp uses to ensure that you have a subscription but, one way or another, lack of network access at some point in the proceedings could really screw things up. When (with high probability) Setapp goes bust, you will be left high and dry. Also, I’m guessing that it is unlikely that I would want more than a dozen or thereabouts of these in any given year, so each would cost me about $10 every year at the best of times. Though that might be acceptable for a major bit of software on which one’s livelihood depends, for the kind of software that is currently on show, that’s quite a lot of money, notwithstanding the convenience of being able to pick up a specialist tool when you need it at no extra cost.
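
I don’t know how Setapp actually implements its checks, but the usual mitigation for the on-a-plane problem is an offline grace period. A purely hypothetical sketch – every name and number here is mine, not Setapp’s:

    # Hypothetical client-side subscription check with an offline grace
    # period: if the licence server is unreachable, fall back to the last
    # successful validation for a while rather than locking the user out.
    import json
    import time
    from pathlib import Path

    CACHE = Path.home() / ".myapp_licence.json"   # invented cache location
    GRACE_SECONDS = 14 * 24 * 3600                # e.g. two weeks offline

    def validate_with_server() -> bool:
        """Placeholder for a real network call to the vendor's licence server."""
        raise ConnectionError("offline")          # simulate being on a plane

    def licence_ok() -> bool:
        now = time.time()
        try:
            if validate_with_server():
                CACHE.write_text(json.dumps({"last_ok": now}))
                return True
        except ConnectionError:
            pass                                  # no network: fall back to cache
        try:
            last_ok = json.loads(CACHE.read_text())["last_ok"]
        except (FileNotFoundError, KeyError, ValueError):
            return False                          # never validated: locked out
        return (now - last_ok) < GRACE_SECONDS

Even with a generous grace period, the underlying objection stands: the vendor, not you, decides whether the software keeps working.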

This is a fairly extreme assault on software ownership, but closed-source software of all varieties suffers from the same basic problem: you don’t own the software that you buy. Unlike use-once objects like movies or books, software tends to be of continuing value. The obvious solution is to avoid closed source altogether and go for open source right the way down the stack: that’s always my preference. Unfortunately, there are still commercial apps that I find useful enough to pay for and, unfortunately, software decays. Even if you buy something outright that does the job perfectly, at some point the surrounding ecosystems (the operating system, network, net services, etc) will most likely render it useless or positively dangerous. There are also some doubly annoying cases where companies stop supporting versions, lose databases, or get taken over by other companies, so software that you once owned and paid for is suddenly no longer yours (Cyberduck, I’m looking at you). Worst of all are those that depend on a cloud service over which you have no control at all and that will almost definitely go bust, or get taken over, or be subject to cyberattack, or government privacy breaches, or be unavailable when you need it, or that will change terms and conditions at some point to your extreme disadvantage. Though there may be a small niche for such things and the immediate costs are often low enough to be tempting, as a mainstream approach to software provision it is totally unsustainable.


Address of the bookmark: https://setapp.com/

Pebble dashed

Hell.

Pebble made my favourite smart watches. They were somewhat open, and the company understood the nature of the technology better than any of the mainstream alternatives. Well, at least they used to get it, until they started moving towards turning them into glorified fitness trackers, which is probably why the company is now being purchased by Fitbit.

So, no more Pebble and, worse, no more support for those who own (or, technically, paid for the right to use) a Pebble. If it were an old-fashioned watch I’d grumble a bit about reneging on warranties but it would not prevent me from being able to use it. Thanks to the cloud service model, the watch will eventually stop working at all:

“Active Pebble watches will work normally for now. Functionality or service quality may be reduced down the road. We don’t expect to release regular software updates or new Pebble features.”

Great. The most expensive watch I have ever owned has a shelf life of months, after which it will likely not even tell the time any more (this has already occurred on several occasions when it has crashed while I have not been on a viable network). On the bright side (though note the lack of promises):

“We’re also working to reduce Pebble’s reliance on cloud services, letting all Pebble models stay active long into the future.”

Given that nearly all the core Pebble software is already open source, I hope that this means they will open source the whole thing. This could make it better than it has ever been. Interesting – the value of the watch would be far greater without the cloud service on which it currently relies. 


Address of the bookmark: https://www.kickstarter.com/projects/597507018/pebble-2-time-2-and-core-an-entirely-new-3g-ultra/posts/1752929

Open Whisper Systems

The Signal protocol is designed for secure, private, encrypted messaging and real-time calling. The protocol, designed by Open Whisper Systems, is used in an increasingly large range of tools (including by Facebook and Google), but their own app is the most interesting application of it. 

The (open, GPL) Signal app is a secure, private messaging and voice chat app for iOS and Android, offering strong, guaranteed end-to-end encryption without having to sign up for a service with dubious privacy standards or further agendas (e.g. Facebook, Apple, Google, Whatsapp, Viber etc). No ads, no account details kept by the company, and no means for them (or anyone) to store or intercept messages or calls; the organization is funded by donations and grants. The app uses your phonebook to discover other contacts using Signal – I don’t have many yet, but hopefully a few of my contacts will see this and install it. Call quality seems excellent – as good as Skype used to be before Microsoft maimed it – though I haven’t used it enough yet to assess its reliability. One disadvantage is that, if you have more than one phone and phone number, there seems to be no obvious way to link them together. That’s a particular nuisance on a dual-SIM phone.

It needs a real, verified phone number to get started but, once you have done that, you can link it to other devices too, including PCs (via Chrome or a Chrome-based browser like the excellent Vivaldi), using a simple QR code (no accounts!), so this is a potentially great replacement for things like Whatsapp, Skype, Allo, Viber, etc. No video calling yet, though you can send video messages (and most other things).


Address of the bookmark: https://whispersystems.org/#page-top

Get that “new Mac” smell all the time with a $24 scented candle

Some time ago, while comparing the virtues of paper and electronic books, I predicted that the current generation would one day wax lyrical about the smell of a new iPhone much as those from my generation get gooey over the scent of old books.

That day has arrived.

Address of the bookmark: http://www.alphr.com/apple/1004449/get-that-new-mac-smell-all-the-time-with-a-24-scented-candle