Protocols Instead Of Platforms: Rethinking Reddit, Twitter, Moderation And Free Speech | Techdirt

Interesting article on the rights of companies to moderate posts, following the recent Reddit furore that, in microcosm, raises a bunch of questions about the future of the social net itself. The distinction between freedom of speech and the rights of hosts to do whatever they goddam please – legal constraints permitting – is a fair and obvious one to make.

The author’s suggestion is to decentralize social media systems (specifically Twitter and Reddit, though by extension others are implicated) by providing standards/protocols that could be implemented by multiple platforms, allowing the development of an ecosystem in which different sites operate different moderation policies but that, from an end-user perspective, would be no more difficult to use than email.

The general idea behind this is older than the Internet. Of course, there already exist many systems that post via proprietary APIs to multiple places, from WordPress plugins to Known, not to mention those ubiquitous ‘share’ buttons found everywhere, such as at the bottom of this page. More saliently, email (SMTP), Internet Relay Chat (IRC), Jabber (XMPP) and Usenet news (NNTP) are prototypical and hugely successful examples of exactly this kind of thing. In fact, NNTP is so close to Reddit’s pattern in form and intent that I don’t see why it could not be re-used, perhaps augmented to allow smarter ratings (not difficult within the existing standard). Famously, Twitter’s character limit comes entirely from fitting a whole Tweet, including metadata, into a single SMS message, so that part of the job is already essentially built on a standard. However, standards are not often in the interests of companies seeking lock-in and a competitive edge. Most notably, though they very much want to encourage posting in as many ways as possible, they equally want control of the viewing environment, as the gradual removal of RSS from prominent commercial sites like Twitter and Facebook shows in spades. That is where a standard like this would run into difficulties getting off the ground. That, and Metcalfe’s Law: people go where people go, and network value grows proportionally to the square of the number of users of a system (or far more than that, if Reed’s Law holds). Only a truly distributed, ubiquitously used system could avoid that problem. Such a thing has been suggested for Reddit and may yet arrive.
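
The gap between the two laws is stark even for small networks. A quick illustrative sketch (values in arbitrary units; this is just the textbook form of each law, not a claim about any real platform), taking Metcalfe’s value as proportional to n² and Reed’s as proportional to the number of possible subgroups, 2ⁿ:

```python
def metcalfe_value(n: int) -> int:
    # Metcalfe's Law: value grows with the number of possible
    # pairwise connections, i.e. proportional to n squared.
    return n * n

def reed_value(n: int) -> int:
    # Reed's Law: value grows with the number of possible
    # subgroups that can form, i.e. proportional to 2^n.
    return 2 ** n

for n in (10, 20, 30):
    print(n, metcalfe_value(n), reed_value(n))
```

Even at thirty users, the subgroup-forming value dwarfs the pairwise value by seven orders of magnitude, which is why incumbents with established networks are so hard to dislodge.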

As long as we are in thrall to a few large centralized commercial companies and their platforms – the Stacks, as Bruce Sterling calls them – it ain’t going to work. Though an incomplete, buggy and over-complex implementation played a role, proprietary interest is essentially what killed OpenSocial: a brilliant idea much along these lines but more open, which had virtually every large Internet company on board, bar one. Sadly, that one was the single most avaricious, amoral, parasitic company on the Web. Almost single-handedly, Facebook managed to destroy the best thing that might have happened to the social web, something that could have made it a genuine web rather than a bunch of centralized islands. OpenSocial is still out there, under the auspices of the W3C, but it doesn’t seem to be showing much sign of growth or deployment.

Facebook has even bigger and worse ambitions. It is now, cynically and under the false pretense of opening access to third world countries, after the Internet itself. I hope the company soon crashes and burns as fast as it rose to prominence – this is theoretically possible, because the same cascades that created it can almost as rapidly destroy it, as the once-huge MySpace and Digg discovered to their cost. Sadly, it is run by very smart people who totally get networks and how to exploit them, and it has no ethical qualms to limit its growth (though it does have some ethical principles about some things, such as open source development – its business model is evil, but not all of its practices are). It has so far staunchly resisted attack, notwithstanding its drop in popularity in established markets and a long history of truly stunning breaches of trust.

Do boycott Facebook if you can. If you need a reason, other than that you are contributing to the destruction of the open web by using it, remember that it tracks you hundreds of times in a single browsing session and, flouting all semblance of ethical behaviour, it attempts to track you even if you opt out of allowing that. You are its product. Sadly, with its acquisition of companies like Instagram and WhatsApp, even if we can kill the primary platform, the infection is deep. But, as Reed’s Law suggests, though each new user increases its value, every user who leaves Facebook, or even simply ignores it, reduces its value by a similarly dramatic amount. Your vote counts!

Address of the bookmark: https://www.techdirt.com/articles/20150717/11191531671/protocols-instead-platforms-rethinking-reddit-twitter-moderation-free-speech.shtml

The LMS as a paywall

I was writing about openness in education in a chapter I am struggling with today, and had just read Tony Bates’s comments on iQualify, an awful cloud rental service offering a monolithic locked-in throwback that just makes me exclaim, in horror, ‘Oh good grief! Seriously?’ And it got me thinking.

Learning management systems, as implemented in academia, are basically paywalls. You don’t get in unless you pay your fees. So why not pick up on what publishers infamously already do and allow people to pay per use? In a self-paced model like that used at Athabasca it makes perfect sense, and most of the infrastructure – role-based, time-based access and so on – and of course the content already exists. Not every student needs six months of access or the trimmings of a whole course but, especially for those taking a challenge route (just the assessment), it would often be useful to have access to a course for a little while in order to get a sense of the expectations, the scope of the content, and the norms and standards employed. On occasion, it might even be a good idea to interact with others. Perhaps we could sell daily, weekly or monthly passes. Or we could do it at a finer level of granularity too, or instead: a different pass for different topics, or for different components like forums, quizzes or assignment marking. Following the publishers’ lead, such passes might together cost 10 or 20 times the price of simply subscribing to a whole course if every option were purchased, but students could strategically pick the parts they actually need, so reducing their own overall costs.
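
As a back-of-envelope sketch of that arithmetic (every figure and component name here is invented purely for illustration; nothing reflects actual AU pricing), the unbundling logic might look like this:

```python
# Hypothetical a-la-carte pricing for course components (all figures invented).
course_fee = 800  # full six-month course subscription

passes = {
    "monthly_access": 150,       # per month
    "forums": 60,                # per month
    "quizzes": 90,               # one-off
    "assignment_marking": 300,   # one-off
    "topic_module": 40,          # per topic; assume 12 topics
}

# Buying everything piecemeal costs far more than subscribing outright...
everything = (passes["monthly_access"] * 6 + passes["forums"] * 6 +
              passes["quizzes"] + passes["assignment_marking"] +
              passes["topic_module"] * 12)

# ...but a challenge-route student might want only a month of access
# plus assignment marking, coming in well under the full fee.
challenge_route = passes["monthly_access"] + passes["assignment_marking"]

print(everything, course_fee, challenge_route)
```

The publisher-style trick is that the sum of all the parts vastly exceeds the whole, while the strategic buyer still pays less than the subscription price.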

This idea is, of course, stupid. This is not because it doesn’t make economic and practical sense: it totally does, notwithstanding the management, technical and administrative complexity it entails. It is stupid because it flips education on its head. It makes chunks of learning into profit centres rather than the stuff of life. It makes education into a product rather than celebrating its role as an agent of personal and societal growth. It reduces the rich, intricately interwoven fabric of the educational experience to a set of instrumentally-driven isolated events and activities. It draws attention to accreditation as the be-all and end-all of the process. It is aggressively antisocial, purpose-built to reduce the chances of forming a vibrant learning community. This is beginning to sound eerily familiar. Is that not exactly what, in too high a percentage of our courses, we are doing already?

If we and other universities are to survive and thrive, the solution is not to treat courses and accreditation as products or services. The enduring value of a university is to catalyze the production and preservation of knowledge: that is what we are here for, and that is what makes us worth having. Courses are just tools that support that process, though they are far from the only ones, while accreditation is not even that: it is a byproduct, effluent from the educational process that happens to have some practical societal value (albeit at enormous cost to learning). In physical universities there are vast numbers of alternatives that support the richer purpose of creating and sustaining knowledge: cafes, quads, hallways, common rooms, societies, clubs, open lectures, libraries, smoking areas, student accommodation, sports centres, theatres, workshops, studios, research labs and so on. Everywhere you go you are confronted with learning opportunities and people to learn with and from, and the taught courses are just part of the mix, often only a small part. At least, that is true in a slightly idealized world – sadly, the vast majority of physical universities are as stupidly focused on the tools as we are, so those benefits are an afterthought rather than the main thing to celebrate, and are often the first things to suffer when cuts come along. Online, such beyond-the-course opportunities are few and far between: the Landing is (of course) built with exactly that concern in mind, but there is precious little sign of it anywhere else at AU, one of the most advanced online universities in the world. The nearest thing most students get to it is the odd Facebook group or Twitter interaction, which seems an awful waste to me, though a fascinating phenomenon that blurs the lines between the institution and the broader community.

It is already possible to take a high-quality course for free in almost any subject that interests you and, more damagingly, before long there will be sources of accreditation as prestigious as those awarded by universities but orders of magnitude cheaper, not to mention compellingly cut-price options from universities that can leverage their size and economies of scale (and, perhaps, cheap labour) to out-price the rest of us. Competing on these grounds makes no sense for a publicly funded institution whose role is not to be an accreditation mill but to preserve, critique, observe, transform and support society as a whole. We need to celebrate and cultivate the iceberg, not just its visible tip. Our true value is not in our courses but in our people (staff and students) and the learning community that they create.

Open access: beyond the journal

Interesting and thoughtful argument from Savage Minds, mainly comparing the access models of two well-known anthropology journals, one of which has gone open and seems to be doing fine, while the other is in dire straits and almost certainly needs to open up, though it may already be too late. I like two quotes in particular. The first is from the American Anthropologist’s editorial, explaining the difficulties the journal is in:

“If you think that making money by giving away content is a bad idea, you should see what happens when the AAA tries to make money selling it. To put it kindly, our reader-pays model has never worked very well. Getting over our misconceptions about open access requires getting over misconceptions of the success of our existing publishing program. The choice we are facing is not that of an unworkable ideal versus a working system. It is the choice between a future system which may work and an existing system which we know does not.”

The second is from the author of the article:

“… Collabra, Open Library of the Humanities, Knowledge Unlatched, and SciELO — blur the distinction between journal, platform, and community the same way Duke Ellington blurred the boundary between composer, performer, and conductor.”

I like that notion of blurring and believe this is definitely the way to go. We are greatly in need of new models for the sharing, review, and discussion of academic works, because the old ones no longer make sense: they are expensive, untimely, exclusionary and altogether over-populous. There have been many attempts to build dedicated platforms for this kind of thing over the years (one of my favourites being the early open peer-reviewing tools of JIME in the late 1990s, now a much more conventional journal, to its loss). But perhaps one of the most intriguing approaches of all comes not from academic presses but from the world of student newspapers. This article reports on a student newspaper shifting entirely into the (commercial but free) social media of Medium and Twitter, abandoning the notion of a published newspaper altogether while still retaining some kind of coherent identity. I don’t love the use of these proprietary platforms one bit, though it makes a lot of sense for cash-strapped journalists trying to reach and interact with a broad readership, especially of students. Even so, there might be more manageable, more open and more persistent ways (e.g. syndicating from a platform like WordPress or Known). But I do like the purity of this approach, and the general idea is liberating.

It might be too radical an idea for academia to embrace at the moment, but I see no reason at all why a reliable curatorial team, with some of the benefits of editorial control, posting exclusively to social media, might not entirely replace the formal journal, for both process and product. It already happens to an extent, including through blogs (I have cited many), though it would still be a brave academic who chose to cite only from social media sources, at least for most papers and research reports. But what if those sources had the credibility of a journal editorial team behind them and were recognized in similar ways, with the added benefit of the innate peer review that social media enables? We could go further than that and use a web of trust to assert the validity and authority of posts – again, that already occurs to some extent, and there are venerable protocols and standards that could be re-used or further developed for the purpose, from open badges to PGP, from trackbacks to WebMention. We are reaching the point where subtle distinctions between social media posts are fully realizable – they are not all one uniform stream of equally reliable content – where identity can be fairly reliably asserted, and where such an ‘unjournal’ could be entirely distributed, much like a Connectivist MOOC. Maybe more so: there is no reason there should even be a ‘base’ site to aggregate it all, as long as trust and identity were well established. It might even be unnecessary to have a name, though a hashtag would probably be worth using.
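
The plumbing for this is already refreshingly simple. Under the W3C Webmention spec, a page advertises an endpoint (via an HTTP Link header or an HTML link element) and a sender simply POSTs its own URL and the target URL to that endpoint. A minimal sketch of the HTML half of endpoint discovery (deliberately ignoring the Link-header route the spec also allows, so this is a partial illustration rather than a conforming implementation):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class _WebmentionLinkFinder(HTMLParser):
    """Finds the first <link> or <a> element advertising rel="webmention"."""
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        if "webmention" in rels and "href" in attrs:
            self.endpoint = attrs["href"]

def discover_webmention_endpoint(page_url: str, html: str):
    # Relative endpoint URLs resolve against the URL of the page
    # that advertised them, per the spec.
    finder = _WebmentionLinkFinder()
    finder.feed(html)
    if finder.endpoint is None:
        return None
    return urljoin(page_url, finder.endpoint)

html = '<html><head><link rel="webmention" href="/mention"></head></html>'
print(discover_webmention_endpoint("https://example.org/post/1", html))
# https://example.org/mention
```

Once discovered, notifying the endpoint is a single form-encoded POST with `source` and `target` parameters; the receiver then verifies that the source really does link to the target, which is where a web of trust could plausibly hook in.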

I wonder what the APA format for such a thing might be?

Address of the bookmark: http://savageminds.org/2015/05/27/open-access-what-cultural-anthropology-gets-right-and-american-anthropologist-gets-wrong/

Ivy League For Free: What One Man Learned By Crashing Elite Colleges For 4 Years

A thought-provoking article on a man who gate-crashed courses at some top-tier US universities, as well as some here in Canada. This was both politically and personally motivated. He not only attended lectures and seminars but also got access to networks, direct interaction with professors, and many of the other benefits of a traditional university education, bar the accreditation and support.

Attendance at lectures is rarely monitored, and it has long been touted as one of the best free entertainment options available, if you choose your lectures wisely. I think I first recall reading about it in Playpower, a book on alternative culture from the early 70s. As the article suggests, it is generally pretty well accepted in many institutions and, as long as it doesn’t harm paying students (in this case I’d guess it was actually beneficial), it is hard to see why anyone would object. It might be a little trickier to get away with attending small tutorials or workshops involving restricted resources (e.g. needing logins to university computers), and it would not always be easy to get the accompanying documentation and schedules, but this seems eminently do-able for many courses. I don’t think it scales well – most universities would probably start to institute controls if a large number of non-paying students joined in, especially if they used up rival resources like handouts or materials, or hogged tutor time in ways that harmed others. In some subjects it would not work at all. But it is quite an interesting perspective on openness. Face-to-face can be open too – in fact, it might be the default.

Makes me wonder about closed online courses and open MOOCs. Are we really as open as we claim? I think not so much.

Address of the bookmark: http://www.fastcompany.com/3043053/my-creative-life/ivy-league-free-what-one-man-learned-by-crashing-elite-colleges-for-4-years

Instructional quality of Massive Open Online Courses (MOOCs)

This is a very interesting, if (I will argue) flawed, paper by Margaryan, Bianco and Littlejohn using a Course Scan instrument to examine the instructional design qualities of 76 randomly selected MOOCs (26 cMOOCs and 50 xMOOCs – the imbalance was caused by difficulties finding suitable cMOOCs). The conclusions drawn are that very few MOOCs, if any, show much evidence of sound instructional design strategies. In fact they are, according to the authors, almost all an instructional designer’s worst nightmare, on at least some dimensions.  
I like this paper, but I have some fairly serious concerns with the way this study was conducted, which means a very large pinch of salt is needed when considering its conclusions. The central problem lies in using prescriptive criteria to identify ‘good’ instructional design practice, and then treating those criteria as quantitative measures of things deemed essential to any completed course design.

Doubtful criteria 

It starts reasonably well. Margaryan et al use David Merrill’s well-accepted abstracted principles for instructional design to identify kinds of activities that should be there in any course and that, being somewhat derived from a variety of models and theories, are pretty reasonable: problem centricity, activation of prior learning, expert demonstration, application and integration. However, the chinks begin to show even here, as it is not always essential that all of these are explicitly contained within a course itself, even though consideration of them may be needed in the design process – for example, in an apprenticeship model, integration might be a natural part of learners’ lives, while in an open ‘by negotiated outcome’ course (e.g. a typical European PhD) the problems may be inherent in the context. But, as a fair approximation of what activities should be in most conventional taught courses, it’s not bad at all, even though it might show some courses as ‘bad’ when they are in fact ‘good’. 
The authors also add five more criteria abstracted from literature relating rather loosely to ‘resources’, including: expert feedback; differentiation (i.e. personalization); collaboration; authentic resources; and use of collective knowledge (i.e. cooperative sharing). These are far more contentious, with the exception of feedback, which almost all would agree should be considered in some form in any learning design (and which is a process thing anyway, not a resource issue). However, even this does not always need to be the expert feedback that the authors demand: automated feedback (which is, to be fair, a kind of ossified expert feedback, at least when done right), peer feedback or, best of all, intrinsic feedback can often be at least as good in most learning contexts. Intrinsic feedback (e.g. when learning to ride a bike, falling off it or succeeding to stay upright) is almost always better than any expert feedback, albeit that it can be enhanced by expert advice. None of the rest of these ‘resources’ criteria are essential to an effective learning design. They can be very useful, for sure, although it depends a great deal on context and how it is done, and there are often many other things that may matter as much or more in a design, like including support for reflection, for example, or scope for caring or passion to be displayed, or design to ensure personal relevance. It is worth noting that Merrill observes that, beyond the areas of broad agreement (which I reckon are somewhat shoehorned to fit), there is much more in other instructional design models that demands further research and that may be equally if not more important than those identified as common.

It ain’t what you do…

Like all things in education, it ain’t what you do but how you do it that makes all the difference, and it is all massively dependent on subject, context, learners and many other things. Prescriptive measures of instructional design quality like these make no sense when applied post-hoc because they ignore all this. They are very reasonable starting frameworks for a designer that encourage focus on things that matter and can make a big difference in the design process, but real life learning designs have to take the entire context into account and can (and often should) be done differently. Learning design (I shudder at the word ‘instructional’ because it implies so many unhealthy assumptions and attitudes) is a creative and situated activity. It makes no more sense to prescribe what kinds of activities and resources should be in a course than it does to prescribe how paintings should be composed. Yes, a few basics like golden ratios, rules of thirds, colour theory, etc can help the novice painter produce something acceptable, but the fact that a painting disobeys these ‘rules’ does not make it a bad painting: sometimes, quite the opposite. Some of the finest teaching I have ever seen or partaken of has used the most appalling instructional design techniques, by any theoretical measure.

Over-rigid assumptions and requirements

One of the biggest troubles with such general-purpose abstractions is that they make some very strong prior assumptions about what a course is going to be like and about the context of delivery. Thanks to their closer resemblance to traditional courses (from which, it should be clearly noted, the design criteria are derived) this is, to an extent, fair-ish for xMOOCs. But, even in the case of xMOOCs, the demand that collaboration, say, must occur is a step too far: as decades of distance learning research (and Athabasca University’s long record) have shown, great learning can happen without it and, while cooperative sharing is pragmatic and cost-effective, it is not essential in every course. Yes, these things are often a very good idea. No, they are not essential. Terry Anderson’s well-verified (and possibly self-confirming, though none the worse for it) interaction equivalency theorem makes this pretty clear.

cMOOCs are not xMOOCs

Prescriptive criteria as a tool for evaluation make no sense whatsoever in a cMOOC context. This is made worse because the traditional model is carried to extremes in this paper, to the extent that the authors bemoan the lack of clear learning outcomes. This does not follow naturally from the design principles at all, so I don’t understand why it is even mentioned; it seems an arbitrary criterion with no validity or justification beyond the fact that learning outcomes are typically used in university teaching. As teacher-prescribed learning outcomes are anathema to Connectivism, it is very surprising indeed that the cMOOCs actually scored higher than the xMOOCs on this metric, which makes me wonder whether the means of differentiation were sufficiently rigorous. A MOOC that genuinely followed Connectivist principles would not provide learning outcomes at all: foci and themes, for sure, but not ‘at the end of this course you will be able to x’. And, anyway, as a lot of research and debate has shown, learning outcomes are of far greater value to teachers and instructional designers than to learners, for whom, unless handled with great care, they may actually get in the way of effective learning. It’s a process thing – helpful for creating courses, almost useless for taking them. The same problem occurs in the use of course organization as a criterion – cMOOC content is organized bottom-up by learners, so it is not very surprising that such courses lack careful top-down planning; that is part of the point.

Apparently, some cMOOCs are not cMOOCs either

As well as concerns about the means of differentiating courses and the metrics used, I am also concerned with how they were applied. It is surprising that there was even a single cMOOC that didn’t incorporate use of ‘collective knowledge’ (the authors’ term for cooperative sharing and knowledge construction) because, without that, it simply isn’t a cMOOC: it’s there in the definition of Connectivism. As for differentiation, part of the point of cMOOCs is that learning happens through the network which, by definition, means people are getting different options or paths, and choosing those that suit their needs. The big point in both cases is that the teacher-designed course does not contain the content in a cMOOC: beyond the process support needed to build and sustain a network, any content that may be provided by the facilitators of such a course is just a catalyst for network formation and a centre around which activity flows and learner-generated content and activity is created. With that in mind, it is worth pointing out that problem-centricity in learning design is an expression of teacher control which, again, is anathema to how cMOOCs work. Assuming that a cMOOC succeeds in connecting and mobilizing a network, it is all but certain that a great deal of problem-based and inquiry-based learning will be going on as people post, others respond, and issues become problematized. Moreover, the problems and issues will be relevant and meaningful to learners in ways that no pre-designed course can ever be. The content of a cMOOC is largely learner-generated, so of course a problem focus is often simply not there in the static materials supplied by the people running it. cMOOCs do not tell learners what to do or how to do it, beyond the very broad process support needed to help those networks accrete.
It would therefore be more than a little weird if they adhered to instructional design principles derived from teacher-led face-to-face courses in their designed content because, if they did, they would not be cMOOCs. Of course, it is perfectly reasonable to criticize cMOOCs as a matter of principle on these grounds: given that (depending on the network) few will know much about learning and how to support it, one of the big problems with connectivist methods is that of getting lost in social space, with insufficient structure or guidance to suit all learning needs, insufficient feedback, inefficient paths and so on. I’d have some sympathy with such an argument, but it is not fair to judge cMOOCs on criteria that their instigators would reject in the first place and that they are actively avoiding. It’s like criticizing cheese for not being chalky enough.

It’s still a good paper though

For all that I find the conclusions of this paper very arguable and the methods highly criticizable, it does provide an interesting portrait of MOOCs through an unconventional lens. We need more research along these lines because, though the conclusions are debatable, what is revealed in the process is a much richer picture of the kinds of things that are and are not happening in MOOCs. These are fine researchers who have told an old story in a new way, and this is enlightening stuff that is worth reading.
 
As an aside, we also need better editors and reviewers for papers like this: little tell-tales like the fact that ‘cMOOC’ gets to be defined as ‘constructivist MOOC’ at one point (I’m sure it’s just a slip of the keyboard as the authors are well aware of what they are writing about) and more typos than you might expect in a published paper suggest that not quite enough effort went into quality control at the editorial end. I note too that this is a closed journal: you’d think that they might offer better value for the money that they cream off for their services.

Address of the bookmark: http://www.sciencedirect.com/science/article/pii/S036013151400178X

Microsoft Open Sources .NET, Saying It Will Run On Linux and Mac | WIRED

This is a sign of what appear to be some remarkable seismic shifts at Microsoft. To be fair, Microsoft has long been a contributor to open source initiatives, but .NET was, until fairly recently, seen as one of the crown jewels, only slightly less significant than Windows and Office, which makes me and the writer of this article wonder whether they might be heading towards open sourcing those at some point (the mobile version of Windows is already free, albeit with many provisos, terms and conditions, but that’s just common sense: otherwise no one would use the substandard pile of pants at all).

Note that they are apparently only open-sourcing the core of .NET, which is not that wonderful without all the accompanying framework and goodies. The open source Mono project has provided this functionality for many years, thanks to Microsoft’s wisely open decision to treat .NET and C# as specifications rather than completely closed technologies in the first place. But, and it’s a big but, there are few Windows .NET apps that can run on Mono under Unix without significant tweaking or acceptance of limitations and bugs, because so much relies on the premium libraries, controls and other proprietary closed tools that only paying Windows users can take advantage of. It’s much better than it used to be, but Mono is still a shim rather than a solution. I’m guessing few would use it in preference to, say, Java unless their primary target were Windows machines or they were inveterate C# or VB fans.

This is probably not a sign of deeper openness, however. Microsoft, like most others in the industry, clearly sees the future in net-delivered, cloud-based subscription services. Azure, Office 365, Skype, Exchange Online and so on are likely to be where most of the money comes from in the years ahead. .NET is nothing like as effective at locking people in as providing a service that handles all the data, communication and business processes of an individual or organization. Moreover, if more .NET developers can be sucked into developing for other platforms, that means more who can be pulled into Microsoft’s cloud systems, though, to be fair, it does mean Microsoft has to actually compete on even ground to win, rather than relying solely on market dominance. But it does have a lot of cash to outspend many of its rivals, and raw computing power, together with the money to support it, plays a large role in achieving success in this area.

The cloud is a new (well, extremely old, but now accepted and dominant) form of closed system in which the development technology really shouldn’t matter much any more. I worry a great deal about this, though. In the past we were just locked in by data formats, closed licences and closed software (perniciously driven by upgrade cycles that rendered what we had purchased obsolete and unsupported), but at least the data were under our control. Now they are not. I know of no cloud-based services that have not at some point changed their terms and conditions, often for the worse, few that I would trust with my data any further than I could throw them, and none at all that are impervious to bankruptcy, takeovers and mergers. When this happened in the past we always had a little lead time to look for an alternative solution, and our systems kept running. Nowadays, a business can be destroyed in the seconds it takes to shut down or alter a system in the cloud.

Address of the bookmark: http://www.wired.com/2014/11/microsoft-open-sources-net-says-will-run-linux-mac/

Agoraphobia and the modern learner

Abstract: Read/write social technologies enable rich pedagogies that centre on sharing and constructing content, but they have two notable weaknesses. Firstly, beyond the safe, nurturing environment of closed groups, students participating in more or less public network- or set-oriented communities may be insecure in their knowledge and skills, leading to resistance to disclosure. Secondly, it is hard to know who and what to trust in an open environment where others may be equally unskilled or, sometimes, malevolent. We present partial solutions to these problems through the use of collective intelligence, discretionary disclosure controls and mindful design.

Address of the bookmark: http://www-jime.open.ac.uk/jime/article/viewArticle/2014-03/html

Professor forces students to buy his own $200 textbook

This article is purportedly about the very unsurprising discovery that students who can’t afford textbooks are downloading them illegally, even for ethics classes. Shocking! Not. However, the thing that really shocks me is the example given of a professor demanding that his students purchase his own $200 etextbook. Piracy seems a pretty minor crime compared with this apparently outrageous, blatant, extortionate abuse of power.


Address of the bookmark: http://www.washingtonpost.com/blogs/answer-sheet/wp/2014/09/17/more-students-are-illegally-downloading-college-textbooks-for-free/

Teaching Crowds: Learning and Social Media

The free PDF preview of the new book by me and Terry Anderson is now available from the AU Press website. It is a complete and unabridged version of the paper book. It’s excellent value!

The book is about both how to teach crowds and how crowds can teach us, particularly at a distance and especially with the aid of social software.

For the sake of your health we do not recommend trying to read the whole thing in PDF format unless you have a very big, high-resolution tablet or e-reader, or are unusually comfortable reading from a computer screen, but the PDF file is not a bad way to get a flavour of the thing, skip-read it, and/or find or copy passages within it. You can also download individual chapters and sections if you wish.

The paper and epub versions should be available for sale at the end of September, 2014, at a very reasonable price. 

Address of the bookmark: http://www.aupress.ca/index.php/books/120235