Measuring transactional distance in web-based learning environments: an initial instrument development

From the ironically named Taylor & Francis journal ‘Open Learning’ (which is closed), an interesting attempt to come up with a means of measuring transactional distance. Regular readers will know that I am a fan of Moore’s theory of transactional distance, a systems theory that explains some of the central dynamics of educational systems and that can be extremely valuable both in designing distance learning and in predicting its effects, but that is susceptible to multiple interpretations and is fuzzy around the edges. Coming up with a reliable instrument to measure it would therefore be quite useful.

Abstract:

“This study was an initial attempt to operationalise Moore’s transactional distance theory by developing and validating an instrument measuring the related constructs: dialogue, structure, learner autonomy and transactional distance. Data were collected from 227 online students and analysed through an exploratory factor analysis. Results suggest that the instrument, in general, shows promise as a valid and reliable measure of the constructs related to transactional distance theory. Potential refinement of the instrument and future research directions are included at the end of the article.”
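To make the method concrete, here is a minimal sketch (mine, not the authors’) of the kind of exploratory factor analysis the abstract describes. The file name, the standardization step and the four-factor target (dialogue, structure, autonomy and transactional distance) are illustrative assumptions, not details taken from the paper:

```python
# A hedged sketch of an exploratory factor analysis on Likert-scale survey
# items, using scikit-learn. The data file and item layout are hypothetical;
# the four factors mirror the constructs the paper sets out to measure.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

X = np.loadtxt("survey_responses.csv", delimiter=",")  # rows: students, columns: items (hypothetical file)
X_std = StandardScaler().fit_transform(X)              # put all items on a common scale

fa = FactorAnalysis(n_components=4, rotation="varimax")  # four candidate factors
fa.fit(X_std)

loadings = fa.components_.T  # shape (n_items, 4): how strongly each item loads on each factor
for i, row in enumerate(loadings):
    print(f"item {i:2d}: " + "  ".join(f"{v:+.2f}" for v in row))
```

Items that load cleanly on one factor and weakly on the others are the ones that survive this kind of validation; the paper’s finding that the transactional distance items split into learner–instructor and learner–learner groupings is exactly the sort of pattern such a loading matrix reveals.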

There’s lots of good discussion of previous work in this paper and some fair attempts to dismantle the mechanisms and meanings of transactional distance, as well as a good research process capable of revealing some interesting insights. However, I am unconvinced by some of the very basic assumptions, so the instrument remains a bit blunt. I am a bit disappointed that one of my papers is cited only for its minor criticism of the fuzziness of the theory, while the authors do not consider its major point (and a solution to much of that fuzziness): that the fundamental dynamic of transactional distance is concerned with control. I have a very strong suspicion that they might have found far more useful things in this study if they had explicitly taken that on board and tried to examine the exchange of control in the system. Instead, they got caught in the well-known trap of seeing autonomy as a personal and unsituated characteristic, and made rough assumptions about structure/dialogue that take no account of the scale (or, as the late John Holland would have more accurately put it, the boundaries) of the systems being looked at. These are not separate or separable categories – the dynamics shift according to where and when you place the boundaries. They would also have benefitted greatly from considering the various presences in the community of inquiry model, which would have made it easier to lose that very arbitrary one-to-one correspondence of teacher, student and content roles that constrains the model in quite artificial ways. Teachers are also other students, writers of content, and creators of the surrounding physical and organizational environment. Again, the boundaries are not fixed, nor are they mutually exclusive.

The most disappointing thing, though, is that their initial hypotheses about the nature of transactional distance (which is, after all, what the study was supposed to be about, and which might have been a really valuable contribution, if validated) got completely lost in the process. The one thing that they really needed to show is the one thing that they did not. This is not a bad thing at all, and it is a discovery that is worthy of discussion. However, that is not quite how they see it:

“Transactional distance included learner–instructor transactional distance and learner–learner transactional distance. The original closeness, shared understanding and perceived learning did not merge; yet, the related items merged into the learner–instructor transactional distance and learner–learner transactional distance, respectively.”

This raises the question: if their initial model was not correct, what is this transactional distance that they are talking about and attempting to measure? Their initial model, though fuzzy, was interesting and based on some thoughtful analysis but, in the final model, all they have done is to say that there are two different kinds of transactional distance, depending on whether the learner is transacting with an instructor or with other learners, without saying what either kind actually is, offering only a sweeping sub-categorization that is just an artefact of the initial assumptions. I think another closely related part of the problem is that they assumed at the start that transactional distance is in some way additional and separate from structure, dialogue and autonomy, rather than strictly following Moore’s meaning that it is a function of them. Their worthwhile attempt to analyze it further, by unpicking aspects of that, turned out to be fruitless because the aspects they picked were not the right ones.
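To be clear about what ‘a function of them’ means here, Moore’s usual framing can be sketched roughly as follows (my shorthand, not a formula from the paper): transactional distance grows as dialogue falls and structure rises, with learner autonomy moderating how much distance a learner can handle.

```latex
% A rough shorthand for Moore's framing, not a formula from the paper:
% transactional distance (TD) is a function of dialogue (D), structure (S)
% and learner autonomy (A), not a separate construct measured alongside them.
\[
  \mathrm{TD} = f(D, S, A), \qquad
  \frac{\partial \mathrm{TD}}{\partial D} < 0, \qquad
  \frac{\partial \mathrm{TD}}{\partial S} > 0
\]
```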

This is not to suggest that the results are valueless. Far from it. This is a nicely conducted study that models a little of the complexity of learning transactions in a useful, if fuzzy, way, that explores the various meanings of transactional distance expressed in the literature pretty well, and that helps to show some relatively unfruitful lines of enquiry. It’s just that it doesn’t meet the objectives set out in its own title, and it does not do much to reduce the fuzziness of the construct, which is the main problem it set out to solve.

Address of the bookmark: http://www.tandfonline.com/doi/abs/10.1080/02680513.2015.1065720#.Vco2kbcgpf9

Punishing a Child Is Effective If Done Correctly

The title of this post is the title of the paper, and very much not a statement of my opinion. The paper explains how. The questions that immediately spring to mind are ‘effective for what?’ and ‘compared with what?’ The answers from the paper are that it is effective (ish) for making children behave the way you want them to behave, if done in the recommended manner for a limited subset of contexts and people, compared with explaining to kids why their behaviour is unacceptable. Sigh.

Behaviourist approaches do often work as a way of producing the desired behaviour – that is their appeal and that is their point. They do not work at all well when compared with alternatives (explaining is only one of thousands of alternatives, the choice of which depends entirely on context), and they almost always have extremely undesirable side-effects. There are many subsidiary lessons that punishment teaches, including that you should obey those with more power, that you are less worthy than those with more power, that forcible manipulation is an acceptable thing to do to other people, etc. The same applies to rewards.

When I was young, untutored, and overwhelmed with the hassles of parenthood, I did sometimes use punishment on my kids in much the same way that this research recommends, as well as, occasionally, in anger. I am not proud of that. I think it is entirely understandable, but it is a thing to be ashamed of, not to be celebrated. It is a lazy, short-termist shortcut whose unpleasant side-effects far outweigh the benefits it brings. There is always an alternative and, though it may take longer, be uncomfortable and demand more patience, that alternative is almost always better in the long run. If we treat children like dogs (and behaviourist methods aren’t even that great for dogs) they will likely grow up obedient – unless they react against it, which is a strong possibility – and, like dogs, if we let the leash slip or they spot a way to avoid punishment while doing something bad, they are likely to take it. Even if they don’t, if the only reason they don’t do the bad thing is habitual fear, the world will be a much sadder place.

Address of the bookmark: http://apa.org/news/press/releases/2015/08/punishing-child.aspx

Welcome to The Internet of Compromised Things

Jeff Atwood clearly and coherently explains why connecting to the Internet is scary. It’s especially scary when all of our devices – cars, lights, heating, gas pumps, locks, surveillance cameras, TVs, etc. – are connected. Most of us have learned to be at least a bit careful with our computers, but we tend to be more careless and trusting with simple plug-in devices. Unfortunately, among the weakest links are our routers and, once they are compromised, it is really hard to escape the malware that controls them. Worse, like many of our devices, their updates and configuration tend to be ignored or forgotten. As more and more devices embed powerful and dangerous net-connected computers, this problem is going to get a lot worse over the coming years. There is some good advice in this article on protecting yourself as best you can.

Address of the bookmark: http://blog.codinghorror.com/welcome-to-the-internet-of-compromised-things/

We're heading Straight for AOL 2.0 · Jacques Mattheij

Interesting commentary on the hijacking and usurpation of open protocols by web companies intent on making a profit by closing their ecosystems via non-standard apps layered over HTTP. As Mattheij notes, this is very similar to the way AOL, CompuServe and other commercial providers used to lock in their users. Now, instead of running proprietary systems over layer 2–4 protocols (as AOL et al used to do), vendors are running them over application-layer protocols (layer 5 or, for OSI purists, layer 7), with proprietary APIs designed to hook others into their closed systems (think Facebook or Google logins). The end result is the same, and it’s a very bad result.

Mattheij writes:

“Please open up your protocols, commit to keeping them open and publish a specification. And please never do what twitter did (start open, then close as soon as you gain traction).”

I completely concur.
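To make the difference concrete, here is a minimal sketch of my own (not from Mattheij’s post), with placeholder URLs and a made-up vendor token: both requests travel over the same transport, but only the first uses an open, documented protocol that any client can implement.

```python
# A hedged illustration of an open protocol vs a proprietary API layered over HTTP.
# All URLs and the token below are placeholders, not real endpoints.
import urllib.request

# Open protocol: an Atom/RSS feed published to a documented standard.
# Any standards-compliant client can fetch and parse it, no permission needed.
with urllib.request.urlopen("https://example.org/feed.atom") as resp:
    feed_xml = resp.read()

# Proprietary API over the same transport: access is gated by a vendor-issued,
# revocable credential and a non-standard contract the vendor can change at will.
token = "VENDOR_ISSUED_ACCESS_TOKEN"  # hypothetical
req = urllib.request.Request(f"https://api.example-vendor.com/v1/me?access_token={token}")
with urllib.request.urlopen(req) as resp:
    profile_json = resp.read()
```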

Address of the bookmark: http://jacquesmattheij.com/aol-20

elearnspace › White House: Innovation in Higher Education

Brilliant piece by our own George Siemens, reflecting on his visit to the White House for a special meeting on higher education. There’s a strong US-centric focus to George’s report, understandably enough, but many of the issues he speaks of resonate internationally. Things are changing, more change is coming soon, and George is very good at pinning down the implications. A worthwhile read for anyone with an interest in the future of higher education.

Address of the bookmark: http://www.elearnspace.org/blog/2015/08/03/white-house-innovation-in-higher-education/

Education software company Blackboard is looking to sell for $3 billion

Unwanted gift: a few careless owners, many botched repair jobs, not firing on all cylinders, tarnished reputation, some wheels missing, but only slightly used.

A bargain for anyone with a 19th Century attitude to education, seeking thousands of locked-in, resentful customers who will continue to complainingly pay through the nose for any old rubbish because it is too difficult and expensive to move to a different platform. Get it before they all go!

Address of the bookmark: http://www.businessinsider.com/education-software-company-blackboard-is-looking-to-sell-for-3-billion-2015-7

After Internet.org Backlash, Facebook Opens Portal To Court More Operators

TechCrunch article by Jon Russell on how Facebook is pretending (very badly, like one unpracticed in the art) to be nice by opening up its Internet.org platform to a few more developers.

In case you are not familiar with this bit of exploitation of the poor: the claimed ‘public service’ aspect of Internet.org is that it gets people online who would otherwise be unable to afford it, specifically in the third world, by making access to (some) online services free of data charges. I’d have to agree that sounds nice enough, and that’s certainly the spin Zuckerberg puts on it.

The evil side of it is that it is essentially a portal to Facebook and a few hand-filtered other sites, not the Internet as we know it; it is immensely destructive to net neutrality; and it is nothing more than a bare-faced attempt to make money out of people who have too little of it, and to hook them into Facebook’s all-consuming centralized people farm. Zuckerberg is allegedly proud of the fact that around half of the millions who have signed up thus far have moved on to paid plans that actually do allow access to the Internet – likely the reason for the (otherwise odd) inclusion of Google Search in the original small lineup of options, inasmuch as non-approved sites come with a warning that users need to buy the real thing. Of course, by that time, they are already Facebook sign-ups too, which is what this is really about. This is much the same tactic that drug dealers use to find new customers by giving out free samples, and it is similarly immoral.

It is absurd to suggest, as Zuckerberg apparently does, that allowing a few more people to develop for the platform, and suggesting that they in turn allow access to further sites (as long as they conform to Facebook’s conditions), makes it in any way more open. It is coercing companies into the app with much the same techniques it applies to building people’s social networks. A filtered internet via a Facebook-controlled app is not the free (as in speech) and open Internet and, ultimately, the most notable beneficiary is Facebook, though it is certainly doing the partner operators no harm either. The choice of domain name is cynical in the extreme – I’d admire the chutzpah if it were not so ugly. My respect goes to the many Indian companies that are pulling out in protest at its shameless destruction of net neutrality and its greedy marketing under the false banner of philanthropy.

Address of the bookmark: http://techcrunch.com/2015/07/27/facebook-internet-org-one/

Interview with Kinshuk (part II) in AUSU's Voice Magazine

The second part of AUSU’s Voice Magazine’s interview with Kinshuk (first part here), in which he talks about some of his rich ideas around smart learning, the interplay between digital technologies and pedagogies, fine-grained accreditation, and the value of social interaction in learning. Excellent insights into the thinking of one of AU’s finest profs, who also happens to be one of the smartest (and most prolific) edtech researchers on the planet. His bubbly personality and his deeply humanistic, caring perspective on such things come across very well in this interview.

Address of the bookmark: http://www.voicemagazine.org/articles/featuredisplay.php?ART=10648

Expertise and the Illusion of Knowledge

A post about the Dunning-Kruger effect, which basically claims (and, in a series of studies, demonstrates) that ignorance is often typified not by the absence of knowledge but by the illusion of it. People think they know more than they do and, at least in many cases, the less they know, the more they think they know. People as in us.

For teachers, this is one of the trickiest things to overcome when we want to give learners control: how do learners distinguish between ignorance and knowledge? If you do not know that you need to know more, you have neither the power nor the motivation to take the steps to change that. The role of a teacher (whether an appointed individual or not) in challenging misconceptions and highlighting ignorance is a crucial one. But it should not be about proving someone wrong or, worse still, telling someone less able than yourself that they are wrong: that’s just a power trip. Ideally, learners should develop ways to uncover their own ignorance – to be surprised or confounded, to see their own mistakes – rather than have someone do it for them. I think this means that teachers, amongst other things, should create conditions for surprise to occur, opportunities to safely fail (without judgement), opportunities to reflect, and support for those seeking to uncover the cause of their new-found ignorance.

Address of the bookmark: http://theness.com/neurologicablog/index.php/expertise-and-the-illusion-of-knowledge/