Datawind Aakash Android Tablets

The cheapest tablet in this range is CAD$43, for which you get a 7″ screen device with WiFi, Bluetooth and limited but extendible storage, capable of web browsing, email, Skype, word-processing and e-reading. It does none of these things well, for sure, it has no battery life to speak of, and its low-resolution screen offers a single viewing position rather than a range of angles.

But it’s $43 (Canadian)!

That’s less than plenty of internet-capable radios, MP3 players, electronic picture frames, or even sophisticated alarm clocks, all of which it can comfortably replace while actually doing a better job. In fact, it’s less than a meal for two (with drinks) at my local pub. The others in the range don’t add much: a front-facing camera and very slow mobile data ($55), up to 3G telephony and a slightly better screen in the top-of-the-range UbiSlate3G7 at $90. Not too bad a price for an unlocked, if totally enormous, smartphone, though not the cheapest around.

The UbiSlates are Canadian, though the primary market for them is India, where they can be purchased for even less, and can come with $2/month mobile Internet (some US versions come with unlimited mobile web browsing for about US$100 a year). I think I might get one of these for the hell of it. 


Address of the bookmark: http://ubislate.ca/compare.php

Transactional Distance among Open University Students: How Does it Affect the Learning Process? : European Journal of Open, Distance and E-Learning

An interesting study looking into transactional distance among online learners at a Greek open university, with some great qualitative findings.

The findings are very revealing about the role and nature of dialogue in online learning at the authors’ university. As we noted in our book, Teaching Crowds, transactional distance becomes very complex once there are multiple ‘teachers’ (or teaching presences) involved, where peer and content interactions are multi-dimensional and so transactional distance shifts and varies all the time. The study reveals some quite nuanced and differentiated communication patterns that demonstrate this quite nicely.  A bit of fuzziness shows through, however, where what is reported is mainly levels of communication rather than perceived transactional distance. The two are very closely related, inasmuch as communication is a necessary but not sufficient condition for reducing transactional distance, but they are not the same thing. 

I find it hard to imagine, as suggested for future study in the conclusion, how one might measure transactional distance in learner-content or learner-interface interactions in a way that would not make the distance extraordinarily high. It is almost true by definition, except in ‘creepy’ ways (e.g. if the learners felt psychological closeness and attachment to an AI) or, maybe stretching the definition a bit, through guided didactic conversation. I will be interested to see how the writers address this!

Address of the bookmark: http://www.degruyter.com/view/j/eurodl.2014.17.issue-1/eurodl-2014-0002/eurodl-2014-0002.xml

GRC’s SQRL: Secure Quick Reliable Login

Steve Gibson, a venerable computer guru who has innovated for decades and never produced anything but brilliantly elegant code, as well as being a compelling and thought-provoking writer, presents SQRL. It’s truly ingenious, I think. It provides secure, password-free logins, with unique but anonymous IDs, to any site that implements this standard, in a manner that seems to be far more secure than any conventional username/password design. True, some other form of authentication is needed to set up the app in the first place – you’d not want someone else to get hold of that! Also, it’s not quite as good as two-factor systems for security. But it is much better than username/password combinations, it is much easier for the end user even than using a social media site to provide authentication, and it offers the potential for uniquely identifying an individual without intruding on that individual’s privacy. That’s pretty cool. Two-factor systems may be more secure, but they are complex, irritating and prone to error; besides, there is nothing to stop someone intent on assuring secure access from using SQRL as part of a two-factor system. Brilliant.
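
To make the trick concrete, here is a minimal sketch of the core idea as I understand it – emphatically not Gibson’s actual protocol, and with the key handling simplified to the point of caricature. The master key, domain and nonce are all invented placeholders; Python and the PyNaCl signing library are just convenient for illustration.

    # A toy sketch of the SQRL idea: one master secret on the device,
    # a per-site keypair derived from the site's domain, and a signed
    # challenge instead of a password. Requires PyNaCl (pip install pynacl).
    import hmac, hashlib
    from nacl.signing import SigningKey

    MASTER_KEY = b"32-byte secret kept on the user's device..."[:32]  # placeholder

    def site_signing_key(domain):
        # HMAC the domain with the master key to get a deterministic,
        # site-specific seed: the same site always sees the same identity,
        # but no two sites can link their identities to each other.
        seed = hmac.new(MASTER_KEY, domain.encode(), hashlib.sha256).digest()
        return SigningKey(seed)

    def login(domain, server_nonce):
        key = site_signing_key(domain)
        signed = key.sign(server_nonce)
        # The site stores only the public key (the anonymous ID) and
        # verifies the signature; no password ever exists to be stolen.
        return key.verify_key.encode(), signed.signature

    public_id, signature = login("example.com", b"nonce-from-server")

The elegance lies in the derivation: one secret yields a different, unlinkable identity for every site, and the site itself stores nothing worth stealing.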

Address of the bookmark: https://www.grc.com/sqrl/sqrl.htm

Challenge Propagation: Towards a theory of distributed intelligence and the global brain

Fascinating paper from the always thought-provoking and often inspirational Francis Heylighen, in which he draws together various models of distributed intelligence, distributed cognition, evolution and complex adaptive systems, incorporating stigmergic and networked perspectives on ways that self-organizing systems can exhibit intelligent behaviour. This is very relevant to anyone interested in connectivism, collectives, learning, intelligence, complex systems or social software.

Heylighen’s central thesis revolves around a definition of intelligence as not just problem solving but also opportunity seeking: it’s about both overcoming obstacles and seeking new possibilities. This combination is encompassed by the term ‘challenge’, which Heylighen defines as ‘a phenomenon that invites action from an agent’. Given competing positive (proactive) and negative (reactive) challenges, he sees challenge in evolutionary terms as ‘a promise of fitness gain for action relative to inaction’. All of this is framed in a context of bounded rationality and different approaches to challenge resolution, from simple look-ups to complex heuristics, and a range of factors that may motivate or demotivate different actions. This is all good stuff but it gets really interesting when he reaches the ‘challenge propagation’ referred to in the title. In essence, this applies the logic of memetics to challenges. As he puts it:

In contrast to the standard paradigm of individual problem solving, the challenge propagation paradigm investigates processes that involve a potentially unlimited number of agents. To deal with this, our initial focus must shift from the agent to the challenge itself: what interests us is how an individual challenge is processed by a collective of agents distributed across some abstract space or network. Instead of an agent traveling (searching) across a space of challenges (problem space), we will consider a challenge traveling (propagating) across a space of agents.

This is a brilliant idea. I love the change in perspective that this brings. There are, I think, some very large and unresolved questions about what a ‘challenge’ means in the context of a collective. This follows from the fact that it is hard to understand what fitness in such a collective might consist of, save in its utility to the agents of which it is composed, though it might shed some light on our eusociality (evolution not for the benefit of selfish genes but for the benefit of a large social collective). I find it particularly hard to map his earlier discussion of how things are valued (with ‘valences’) by an individual agent onto how things might be valued by a collective. A challenge does not exist in isolation – it must have a subject, and it is not entirely clear what that subject might be here. Such fuzziness aside, as a way of understanding an otherwise massively complex intelligent system like a brain, an ant colony or human culture, it has a lot going for it.
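
For those who, like me, find the perspective shift easier to grasp in running form, here is a toy simulation – my own gloss on the idea, not Heylighen’s model, with every parameter invented for illustration. A challenge hops across a network of agents, each resolving what its skill allows and relaying the remainder to a neighbour.

    # Toy challenge propagation: the unit of analysis is the challenge's
    # trail across agents, not any one agent's search of a problem space.
    import random

    def propagate(network, start, difficulty, skill, max_hops=20):
        trail, agent = [], start
        for _ in range(max_hops):
            trail.append(agent)
            # Each agent reduces the challenge by what its skill allows...
            difficulty -= skill[agent] * random.random()
            if difficulty <= 0:
                return trail  # ...until the challenge is fully resolved
            agent = random.choice(network[agent])  # relay to a neighbour
        return trail  # unresolved within the hop budget

    network = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b']}
    skill = {'a': 0.2, 'b': 0.5, 'c': 0.9}
    print(propagate(network, 'a', difficulty=2.0, skill=skill))

The object of study is the trail, not any one agent: division of labour, workflow and aggregation all become properties of the path the challenge takes.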

While the foundations are very strong, I have some reservations about some of the examples Heylighen uses and some conclusions that he draws. While I can readily accept that there are some stigmergic aspects to Wikipedia, I do not believe that the act of editing a page is in any meaningful way analogous to the way that stigmergy operates in (say) termite mound building or the movements of currency markets. In the first place, unlike in a true stigmergic system, there is an infinite range of possible ‘algorithms’ that might influence agents making changes to a Wikipedia article. There are path dependencies, sure, but that doesn’t make it stigmergic. Apart from some stylistic patterns that tend to replicate, there is none of the emergent self-organized behaviour that is characteristic of all stigmergic systems. A Wikipedia page is largely just the sum of its parts, not an emergent artefact. In the second place, unlike in stigmergic systems, individual agents make deliberate contributions with a clear design purpose and end-goal in mind when building a Wikipedia page: their interactions are not local but planned and focused on the whole. It is no more stigmergic than a house that someone decides to extend or remodel. It’s a good model of cooperative action, but not of collective intelligence.
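
To illustrate the distinction I am drawing, here is a toy example (my own, with invented parameters) of what a genuinely stigmergic rule looks like: each agent reads only the local trace left by earlier agents and applies one fixed rule, and any global pattern emerges rather than being planned – quite unlike a Wikipedia editor working towards a conception of the whole page.

    # Toy stigmergy: agents on a ring of cells follow and reinforce a
    # pheromone-like trace. No agent intends the trail that emerges.
    import random

    cells = [0.1] * 10  # pheromone level at each position on the ring

    def step(pos):
        # Local rule: move towards whichever neighbour carries more
        # pheromone (weighted random), then strengthen the trace there.
        left, right = cells[(pos - 1) % 10], cells[(pos + 1) % 10]
        if random.random() < left / (left + right):
            pos = (pos - 1) % 10
        else:
            pos = (pos + 1) % 10
        cells[pos] += 0.1  # the deposit that shapes later agents
        return pos

    pos = 0
    for _ in range(1000):
        pos = step(pos)
    print([round(c, 1) for c in cells])  # a trail has self-organized

No agent plans the trail, yet positive feedback concentrates the traffic; it is that emergence from purely local rules that Wikipedia editing, with its purposeful, whole-directed contributions, lacks.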

I’m also not entirely happy with the notion of the Internet as a gigantic collection of forums (generalized by Heylighen as ‘meeting grounds’) for exchanging challenges, though the metaphor is appealing on many levels. The same could, of course, be said about any human artifacts or ways of ‘meeting’, from buildings to tools to doorknobs to forest footpaths to books to conversations to simply passing in a street. So far so good. He describes the propagation of challenges as involving division of labour, workflow and aggregation – this too makes sense. He then describes how such a system becomes self-organizing, using the growth of open source software as an example. Here I have problems, for much the same reasons that I have problems seeing Wikipedia article development as stigmergic. In real life, many large open source developments are a million miles from self-organizing. The archetypal Linux, for instance, is extremely tightly controlled by a very small number of people using very rigid processes that are in many ways more traditionally organized, from the top down, than most proprietary systems. While the challenges are indeed solved by individual agents acting largely independently, albeit building on what others have already built, the workflow and aggregation are firmly in a traditional designed mould and tightly controlled by a clique. This is even true of more open approaches, such as those encouraged by GitHub, although in this case workflow is managed by a ‘blind’, algorithmically driven system rather than by a clique.

My concerns are minor, and they are not with the basic ideas presented here, which I find very compelling. I think this is an important paper. While it certainly needs refinement, this feels like the beginnings of a new language for discussing and describing connectivist accounts of learning. It provides some much-needed solid underpinning theory and a very useful perspective on some of connectivism’s major tenets: that knowledge exists in the network, including non-human artefacts; that connections are learning; the significance of decision-making; the ways that more is different; and the value of diversity. Great stuff.

Address of the bookmark: http://cleamc11.vub.ac.be/papers/ChallengePropagation-Spanda.pdf

HP Stream 7 – $120 Windows tablet is remarkable value

I don’t normally link to Best Buy (the name of the company is not entirely accurate!) but, as I got my Stream 7 from there, I figure it’s as good a place as any. You can probably find it a little cheaper elsewhere but Best Buy/FutureShop (essentially the same company) are convenient for most people living in Canadian cities and let you play before you buy.

I’ve written elsewhere that e-book readers are an essential commodity if, as Athabasca University is increasingly demanding, students are required to use e-texts instead of paper books. This device is a serviceable Windows PC that can do pretty much anything any other Windows PC is capable of and that also makes a very decent e-reader. At $120 it is cheaper than many textbooks.

The Stream 7 is a basic 7″ tablet, with few bells and whistles: technically speaking, it has no 3G, no GPS, no 802.11ac wifi, only five touch points recognized at once (not normally a problem unless you have a piano keyboard app), slightly chunky construction, a battery life of 8 hours at best (5 or 6 would be more typical), no HDMI output, 1GB of RAM, dreadful (but usable for simple needs) front and back cameras, a measly speaker and poor headphone output. On the plus side, it does have Bluetooth, a pretty generous 32GB of flash storage (expandable via MicroSD), it charges using a standard Micro USB cable, and (notably) the screen is absolutely excellent, bright and vibrant, even if its resolution (1280×800) is not quite up there with Retina displays. As an e-reader and for videos it’s pretty good, and it runs pretty much any e-reading software, including the DRM-afflicted and web-based stuff we sometimes use at AU.

Perhaps the most amazing thing about it is that it runs the full and uncrippled version of Windows 8.1, not the ugly mess that was Windows RT. Admittedly it’s only 32-bit and is joined at the hip to Microsoft’s mediocre Bing service, but it smoothly runs almost any Windows program you can throw at it, even elderly Flash programs. It’s mostly fairly snappy too, given the constraints of its modest 1GB of RAM. I wouldn’t want to try running lots of programs at once, but for single-task activities like web browsing, e-reading or email, it’s absolutely fine and very usable. Windows 8.1 (admittedly a somewhat fuller version than this and without the Bing branding) currently retails at $120, so one way of looking at it is that you get a free tablet with a copy of Windows. Moreover, it even comes with a year’s subscription to Microsoft Office (usual price $100), for those who need it. An Office subscription is not a thing I’d recommend to anyone, given that there are equally good and often better free competitors available, but it’s astonishingly good value if you were actually thinking of forking out for it anyway. If you do have an existing subscription, you can add the year to what you already own, thereby getting a usable tablet for $20 (plus tax and recycle fee).


Of course, if you just want a tablet or e-reader, there are better and cheaper Android devices available, and much better iOS devices if you can afford them. My only reason for getting this was to test web sites using Microsoft’s IE browser and to use some ancient IE-only webmeeting software, and this was the cheapest way I could find to do that. Windows is a terrible, ugly mess that appears to have been designed by the makers of SpongeBob SquarePants, and it is so full of holes that it is more like a torn insect screen than a window. I am unlikely to use this device for much e-reading myself, because I have much better (and mostly costlier) devices that beat it hands down on almost every front. But, if you need to run Microsoft software and also need a means to read e-books and watch the odd video, this is a pretty cheap and effective way to do it.

Address of the bookmark: http://www.bestbuy.ca/en-CA/product/hewlett-packard-hp-stream-s7-5701ca-7-32gb-windows-8-1-tablet-with-intel-atom-z3735g-processor-black-licorice-s7-5701ca/10341178.aspx

For Sale: “Your Name Here” in a Prestigious Science Journal

A Scientific American article on the prevalence of plagiarism and contract cheating in journal articles.  The tl;dr version lies near the end of the article:

“Now that a number of companies have figured out how to make money off of scientific misconduct, that presumption of honesty is in danger of becoming an anachronism. ‘The whole system of peer review works on the basis of trust,’ Pattinson says. ‘Once that is damaged, it is very difficult for the peer review system to deal with.'” 

Very sad. The only heartening thing about all this is that there are now thousands of scam journals (I now get at least half a dozen solicitations from these every day, which I have learned to junk immediately) that would be more than willing to publish such articles. I rather like the idea that worse-than-useless fraudulent articles might get published in worse-than-useless scam journals: a nice little self-contained economy. Unfortunately, some of the cheats target real journals with real reputations and, worse, may be believed by genuine researchers who are taken in by the lies they purvey, endangering the whole academic research endeavour. Apparently the going price in China for authorship of such a paper is around 93,000 RMB, or about US$15,000.

This is very much like the issue we face in course assessment too. In some of my own courses I have designed what I reckon to be virtually foolproof methods of preventing most forms of cheating. They mostly work pretty well, but they don’t cope much better with contract cheating than more traditional assignment/exam-based courses. My only partial solution to that problem is to try to price cheats out of the market: most of my courses have to be done from start to finish in order to pass, which is a lot more time consuming than writing a few boilerplate essays, exams or exercises. For assignments and exams on most courses you can get a passing grade for as little as $5, if you are willing to take the risk. The risk of discovery is very high, because the essay mills tend to plagiarize or self-plagiarize (well, they are cheats – caveat emptor!) and, thanks to the semi-public nature of cheating sites, it is just as easy for us to discover students seeking ghost writers as it is for them to seek a ghost writer. In fact, when we find such sites, we tend to pass on our findings to colleagues in other institutions, a nice example of informal crowd-sourcing. However, I am absolutely sure some do get away with it, and it makes little or no difference whether teaching is online or face to face. There is an example of contract cheating in exams in today’s news, but it is hardly newsworthy, apart from the fact that the practice is endemic. Beyond contract cheating, I also know that some students have family members or friends who are motivated to ‘help’, sometimes quite considerably. There was a charmingly improbable example of a mother sitting her daughter’s exam a while back, for instance.

I suspect that the ultimate solution to this, in the case of courses, is structural, not technological nor even directly pedagogical. We are in an unwinnable arms race in which everyone loses, as long as the purpose of courses is seen to be to get accreditation rather than to enable learning. As long as a grade sits enticingly at the end, it will inevitably cause some students to seek shortcuts to getting it. Cheats destroy the credibility not just of their own qualifications but of those of every other student who has honestly completed the course. If we got rid of grades altogether, cheating during the learning process would dry up to the merest trickle (though, bizarrely, it might not go away altogether). Making accreditation a separate issue, completely dissociated from learning and teaching, would allow us to concentrate our firepower on preventing cheating at the point of accreditation rather than distracting us during a course, so we could make our courses far more engaging, enjoyable and useful: we could simply concentrate on pedagogy rather than trying to design cheating out of them.

For the (entirely separate) accreditation, we could let rip with all the weaponry at our disposal: biometrics, Faraday cages, style detectors, plagiarism detection tools and all the multifarious technologies and techniques we have developed to attempt to thwart cheats could be employed with relative ease by specialists trained to spot miscreants. Better still, we could use other means of proving authenticity, such as social network analysis combined with public-facing posts, or employer reports, or authentic portfolios created over long periods with multiple sources of authentication. This would also have the enormous benefit of largely solving what is perhaps the biggest challenge in all of education, that of motivation, by getting rid of the extrinsic driver that eats at the soul of learning in our educational systems. It would also allow learners to control how, when, with whom and what they learn, rather than having to take a course that might bore or confuse them. They could easily take a course elsewhere – even a MOOC – and prove their knowledge separately. It would make it easier for us to design courses that are apt for the learning need, rather than having to fit everything into one uniform size and shape. And it would overcome the insane contradiction of teachers telling students that they have failed to learn when, quite clearly, it is the teachers who have failed to teach. Athabasca does, of course, have the mechanisms for this, in its PLAR and challenge processes. It could easily be done.

A similar solution might work, at least a little, for journal cheats. There are different cultural norms around cheating in China, as I have observed previously, that perhaps play a role in the preponderance of Chinese culprits mentioned in the article, but a lot of the problem might be put down to the over-valuation of publication for career progression, prestige and reward in that country. If rewards and reputation were less tightly bound to publication and more intrinsic to the process, we might see some improvement. This could be done in many ways: for instance, greater value could be given to internal dissemination of results, open publication (inherently less liable to fraud thanks to many eyes), team work, blogging, supervisor reports, peer review (of people, not papers) and citations (though citation counts are inevitably going to be the next easy target for fraud, if they are not already, so should not be treated too seriously). There are lots of ways to measure academic value apart from through numbers of publications, many of which relate to hard-to-spoof process rather than an easily forged product. The worrisome trend of journals charging authors for publication is an extremely bad idea that can only exacerbate the problem: publication becomes a commodity that is bought and sold, of value in and of itself (like grades) rather than as a medium to disseminate research.

These are sad times for academia, eaten from the inside and out, but they also present an opportunity for us to rethink the process. The standards and values that have evolved over many centuries and that once stood us in good stead when adult education was an elite affair just don’t apply any more. What our forebears sought in opening up academia was to expand the reach of education to all. Instead, we turned it into a system to deliver accreditation. That system is on a self-destruct course as long as we continue to act as though nothing has really changed. 

Address of the bookmark: http://www.scientificamerican.com/article/for-sale-your-name-here-in-a-prestigious-science-journal/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%253A+ScientificAmerican-News+%2528Content%253A+News%2529

Defaults matter

I have often written about the subtle and not-so-subtle constraints of learning management systems (LMSs) that channel teaching down a limited number of paths, and so impose implicit pedagogies on us that may be highly counterproductive and dissuade us from teaching well – this paper is an early expression of my thoughts on the matter. I came across another example today.

When a teacher enters comments on assignments in Moodle (and in most LMSs), it is a one-time, one-way publication event. The student gets a notification and that’s it. While it is perfectly possible for a dialogue to continue via email or internal messaging, or to avoid having to use such a system altogether, or to overlay processes on top of it to soften the hard structure of the tool, the design of the software makes it quite clear this is not expected or normal. At best, it is treated as a separate process. The design of such an assignment submission system is entirely about delivering a final judgement. It is a tacit assertion of teacher power. The most we can do to subvert that in Moodle is to return an assignment for resubmission, but that carries its own meanings and, on resubmission, still returns us to the same single feedback box.

Defaults are very powerful things that profoundly shape how we behave (e.g. see here, here and here). Imagine how different the process would be if the comment box were, by default, part of a dialogue, inviting response from the student. Imagine how different it would be if the student could respond by submitting a new version (not replacing the old) or by posting amendments in a further submission, to keep going until it is just right, not as a process of replacement but of evolution and augmentation. You might think of this as being something like a journal submission system, where revisions are made in response to reviewers until the article is acceptable. But we could go further. What if it were treated as a debugging process, using approaches like those in Bugzilla or GitHub to track down issues and refine solutions until they were as good as they could be, incorporating feedback and help from students and others on or beyond the course? It seems to me that, if we are serious about assignments as a formative means of helping someone to learn (and we should be), that’s what we should be doing. There is really no excuse, ever, for a committed student to get less than 100% in the end. If students are committed and willing to persist until they have learned what they came to learn, it is never the students’ failure when they achieve less than the best: it is the teachers’.
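
To make the alternative default concrete, here is a hypothetical sketch – all names invented, nothing to do with Moodle’s actual data model – of assignment feedback treated as an issue-tracker-style thread: every submission, comment and amendment is appended rather than replaced, and the thread closes by agreement, not by a one-shot verdict.

    # Hypothetical data model: feedback as an evolving thread, not a
    # single one-way publication event.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Entry:
        author: str   # student, tutor, or peer
        kind: str     # "submission", "comment", or "amendment"
        body: str
        when: datetime = field(default_factory=datetime.now)

    @dataclass
    class AssignmentThread:
        student: str
        entries: list = field(default_factory=list)
        resolved: bool = False  # closed by agreement, not by a grade

        def post(self, author, kind, body):
            if self.resolved:
                raise ValueError("thread is closed")
            self.entries.append(Entry(author, kind, body))

    thread = AssignmentThread("alice")
    thread.post("alice", "submission", "Draft 1 of my essay")
    thread.post("tutor", "comment", "Good start; section 2 needs evidence")
    thread.post("alice", "amendment", "Added two citations to section 2")

The point of the sketch is the default it embodies: nothing in the structure suggests that the tutor’s first comment is the end of the conversation.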

This is, of course, one of the motivations behind the Landing. In part we built this site to enable pedagogies like this that do not fit the moulds that LMSs ever-so-subtly press us into. The Landing has its own set of constraints and assumptions, but it is an alternative and complementary set, albeit one that is designed to be soft and malleable in many more ways than a standard LMS. The point, though, is not that any one system is better than any other but that all of them embed pedagogical and process assumptions, some of which are inherently incompatible.

The solution is, I think, not to build a one-size-fits-all system. Yes, we could easily enough modify Moodle to behave the way I suggest and in myriad other ways (e.g. I’d love to see dialogue available in every component, to allow student-controlled spaces wherever we need them, to allow students to add to their own courses, etc) but that doesn’t work either. The more we pack in, the softer the system becomes, and so the harder it is to operate it effectively. Greater flexibility always comes at a high price, in cognitive load, technical difficulty and combinatorial complexity. Moreover, the more we make it suit one group of people, the less well it suits others. This is the nature of monolithic systems.

There are a few existing ways to greatly reduce this problem without massive reinvention and disruption. One is to disaggregate the pieces. We could build the LMS out of interoperable blocks so that we could, for instance, replace the standard submission system with a different one, without impacting other parts of the system. That was the goal of OKI and the now-defunct E-Framework although, in both cases, assembly was almost always a centralized IT management function and not available to those who most needed it – students and teachers. Neither has really made it to the mainstream. Sakai (an also-ran LMS that still persists) continues to use OKI technologies under the hood, but the E-Framework (a far better idea) seems dead in the water. These were both great ideas; there just wasn’t the will or the money, and competition from incumbents like Moodle and Blackboard was too strong. Other widget-based methods (e.g. using Wookie) offer more hope, because they do not demand significant retooling of existing systems, but they are currently far from in the ascendant, and the promising EU TENCompetence project that was a leader behind this seems moribund, its site offline.

Another approach is to use modules/plugins/building blocks within an existing system. However, this can be difficult or impossible to manage in a manner that delivers control to the end user without, at the same time, making things difficult for those who do not want or need such control, because LMSs are monoliths that have to address the needs of many people. Not everyone needs a big toolkit and, for many, having one would actively make things worse. Judicious use of templates can help with that, but the real problem is that one size does not fit all. This approach also locks you into a particular platform, making evolution dependent on designers whose goals may not align with how you want to teach.

Bearing that in mind, another way to cope with the problem is to use multiple independent systems bound by interoperability standards – LTI, OpenBadges or TinCan, for example. With such standards, different learning platforms can become part of the same federated environment, sharing data, processing, learning paths and so on, allowing records to be kept centrally while enabling incompatible pedagogies to run independently within each system. That seems to me to be the most sensible option right now. It’s still more complex for all concerned than taking the easy path, and it increases management burden as well as replicating too much functionality for no particularly good reason. But sometimes the easy path is the wrong one, and diversity drives growth and improvement.
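
As a flavour of how lightweight that glue can be, here is roughly what reporting a learning event to a shared Learning Record Store looks like with TinCan/xAPI. The actor/verb/object statement structure is the standard form, but the LRS URL, credentials and activity IDs below are placeholders, not a real endpoint.

    # Minimal xAPI (TinCan) statement: any system can report a learning
    # event to a shared LRS, whatever its internal pedagogy looks like.
    import requests

    statement = {
        "actor": {"mbox": "mailto:learner@example.com", "name": "A Learner"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
                 "display": {"en-US": "completed"}},
        "object": {"id": "http://example.com/courses/comp650/unit3",
                   "definition": {"name": {"en-US": "Unit 3"}}},
    }

    response = requests.post(
        "https://lrs.example.com/xapi/statements",   # placeholder LRS
        json=statement,
        headers={"X-Experience-API-Version": "1.0.1"},
        auth=("lrs_user", "lrs_password"),           # placeholder credentials
    )
    response.raise_for_status()  # the LRS returns the new statement id

Each system keeps its own pedagogy and merely agrees on the shape of the record it reports, which is precisely the kind of loose coupling argued for above.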

Great Firewall of China

Terry Anderson, on great form, discussing the problems of accessing scholarly and other content in China, with some nice insights into the environment in which Chinese scholars must conduct their research. I had not considered these particular issues with embedded Google services before, though I have long been uncomfortable with the potential privacy concerns of using Google Analytics. Great stuff, well worth a read.

Address of the bookmark: http://terrya.edublogs.org/2014/12/13/great-firewall-of-china/