From Representation to Emergence: Complexity's challenge to the epistemology of schooling – Osberg – 2008 – Educational Philosophy and Theory

This is my second post for today on the subject of boundaries and complex systems (yes, I am writing a paper!), this time pointing to a paper by Osberg, Biesta and Cilliers from 2008 that applies the concepts to knowledge and education. It’s a fascinating paper, drawing a theory of knowledge out of complex systems that the authors rather deftly fit with Dewey’s transactional realism and (far less compellingly) a bit of deconstructionism.

I think this sits very firmly within the connectivist family of theories (Stephen Downes may disagree!), albeit approached from a slightly different angle. The context is the realm of complex (mostly complex adaptive) systems, but the notion of knowledge as an emergent and shifting phenomenon born of engagement – a process, not a product – and the significance of the connected whole in both enabling and embodying it all is firmly in the connectivist tradition. It is a perspective that is well grounded in theory and that comes to quite a similar conclusion, aptly put:

education (becoming educated) is no longer about understanding a finished  universe, or even about participating in a finished and stable universe. It is the result, rather, of participating in the creation of an unfinished universe.

The authors begin by defining what they describe as a ‘representational’ or ‘spatial’ epistemology that underpins most education. This is not quite as simplistic as it sounds – they include models and theories in this, at least. Their point is that education takes people out of ‘real life’ and therefore must rely on a means to represent ‘real life’ to do its job properly. I think this is pushing it a bit: yes, that is true of a fair amount of intentional teaching, but a lot of what goes on in education systems is unintentional, or emerges as a by-product of interaction, or happens in playgrounds, cafes, or common rooms; it is very different and is not just incidental to the process but quite critical to it. To pretend that educational systems are nothing but the explicit things we intentionally do to people is, I think deliberately, to create a bit of a straw man; the authors more or less concede as much. I guess it is done to distinguish this representational view from their solution, which is an ‘emergentist’ epistemology.

The really interesting stuff for me comes from Cilliers’s contribution (I’m guessing) on boundaries, which makes the simple and obvious point that complex systems (as opposed to complicated ones) are inherently incompressible, so any model we make of them is inaccurate: in leaving out even the tiniest thing, we make deterministic prediction impossible, except insofar as we can draw boundaries around particular aspects we care about and come up with probabilistic inferences (e.g. predicting the weather). Those boundaries are thus, of necessity, created (or, more accurately, negotiated), not discovered. They are value-laden. Thus:

“…models and theories that reduce the world to a system of rules or laws cannot be understood as pure representations of a universe that exists independently, but should rather be understood as valuable but provisional and temporary tools by means of which we constantly re-negotiate our understanding of and being in the world”

They go on…

We need boundaries around our regularities before we can model or theorise them, before we can find their rules of operation, because rules make sense only in terms of boundaries. The point is that the setting of the boundary creates the condition of possibility for a rule or a law to exist. When a boundary is not naturally given, as is the case with natural complex systems, the rules that we ‘discover’ also cannot be understood as naturally given. Rules and ‘laws’ are not ‘real’ features of the systems we theorise about. Theories that attempt to reduce complexity to a system of rules or laws, like our models which do precisely this, therefore cannot be understood as pictures of reality.

So, the rules that we find are pragmatic ones – they are tools, rather than pictures of reality, that help us to renegotiate our world and the meaning we make in and of it:

“From this perspective, knowledge is not about ‘the world’ as such, it is not about truth; rather, it is about what we can do in the world, how we can change it. One could say ‘acquiring’ knowledge does not ‘solve’ problems for us: it creates problems for us to solve.”
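To make the incompressibility point a little more concrete, here is a minimal sketch of my own (not from the paper), using the chaotic logistic map as a stand-in for a complex system: leave out the tiniest thing and deterministic prediction fails, but drawing a boundary around an ensemble of nearby states still supports useful probabilistic inference, much as in weather forecasting.

```python
# My own illustration, not the paper's: the logistic map as a toy 'complex system'.
# A model that leaves out the tiniest thing (here, a 1e-10 error in the initial
# state) loses all deterministic predictive power, but statistics taken over a
# bounded ensemble of nearby states remain usable.

def logistic(x, r=3.99):
    """One step of the chaotic logistic map."""
    return r * x * (1 - x)

def trajectory(x0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

exact = trajectory(0.4)
approx = trajectory(0.4 + 1e-10)   # the model that left out 'the tiniest thing'

print(f"step 60, 'true' system:   {exact[-1]:.6f}")
print(f"step 60, imperfect model: {approx[-1]:.6f}")   # typically wildly different

# Probabilistic inference within a chosen boundary: run an ensemble of nearby
# initial states and report a likelihood rather than a point prediction.
ensemble = [trajectory(0.4 + i * 1e-6)[-1] for i in range(100)]
chance_high = sum(x > 0.5 for x in ensemble) / len(ensemble)
print(f"chance the state ends above 0.5: {chance_high:.0%}")
```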

At this point they come round to Dewey, whose transactional model is not about finding out about the world but leads to a constantly emerging and ever renegotiated state of being.

“…in acting, we create knowledge, and in creating knowledge, we learn to act in different ways and in acting in different ways we bring about new knowledge which changes our world, which causes us to act differently, and so on, unendingly. There is no final truth of the matter, only increasingly diverse ways of interacting in a world that is becoming increasingly complex.”

One of the more significant aspects of this, which is not dwelt on nearly enough in the paper but which forms a consistent subtext, is that this is a fundamentally social pursuit. This is a complex system not just of individuals negotiating an active relationship with the world, but of people doing it together, as part of a complex system that drives its own adaptation, at every scale and within every (overlapping, interpenetrating) boundary.

They continue with an, I think, unsuccessful attempt to align this perspective with postmodernist/poststructuralist/deconstructionist theory, claiming that Dillon’s differentiation between the radical relationality of complexity and that of poststructuralist theory is illusory, because a complex system is always in a state of becoming without being, and so is much the same kind of thing. Whether or not this is true, I don’t think it adds anything significant to the arguments.

The paper rushes to a rather unsatisfactory conclusion – at last hitting the promised topic of the title – about the role of this emergentist epistemology in schooling:

Acquisition is no longer the name of the game …. This means questions about what to present in the curriculum and whether these things should be directly presented or should be represented (such that children may acquire knowledge of these things most efficiently or effectively) are no longer relevant as curricular questions. While content is important, the curriculum is less concerned with what content is presented and how, and more with the idea that content is engaged with and responded to …. Here the content that is engaged is not pre-given, but emerges from the educative situation itself. With this conception of knowledge and the world, the curriculum becomes a tool for the emergence of new worlds rather than a tool for stabilisation and replication

This follows quite naturally and makes sense, but it diminishes the significance of a pretty obvious elephant in the room, which is that the educational institution itself is one of those boundaried systems, one that plays a huge role in and of itself, not to mention in interaction with other boundaried systems, regardless of the processes enacted within its boundaries. I think this is symptomatic of a big gap that the paper very much implies but barely attempts to address: all of these complex systems involve processes, structures, rules, tools, objects, content (whatever that is!), media, and a host of other things, all of which are themselves part of those complex systems. Knowledge is indeed a dynamic process, a state of becoming or of being, but it incorporates a great many things, only a limited number of which are in the minds of individuals. It’s not about people learning – it’s about that whole, massive, complex adaptive system itself.

Address of the bookmark: http://onlinelibrary.wiley.com/doi/10.1111/j.1469-5812.2007.00407.x/abstract

Boundaries and Hierarchies in Complex Systems

This rather elderly paper by Paul Cilliers peters out into an unsatisfyingly vague and obvious conclusion, but it does have some quite useful clarifications and observations about the nature of boundaries as they relate to hierarchies, networks and complex systems in general. I particularly like:

“We often fall into the trap of thinking of a boundary as something that separates one thing from another. We should rather think of a boundary as something that constitutes that which is bounded.”

This simple observation leads to further thoughts on how we choose those boundaries and the (necessary) ways we create models that make use of them. The thing is, we are the creators of those boundaries, at least in any complex system – Cilliers mentions neural networks as a good example – so what we choose to model is always determined by us and, like any model, it is and must be a partial representation, not an analogue, of the impossible complexities of the world it models. In a very real sense, we shape our understanding of the world through the boundaries that we choose to (or are hard-wired to) consider significant, and there are always other places to draw those boundaries that would change the meaning of what we are observing. This makes the analysis of complex systems quite hard, because we can seldom see beyond the boundaries we create to simplify the complexity within them, and we have a tendency to over-simplify: as Cilliers points out, even apparently clear hierarchies shift and interpenetrate one another. This is more than, though related to, the categories and metaphors of the sort examined by the likes of Hofstadter or Lakoff.

Since this paper was written, John Holland has done some mind-bending and deeply thought-provoking work on signals and boundaries in complex systems that delves far deeper and begins to address the problem head-on, but which I have been struggling to understand properly for many months: I’m pretty certain that Holland is onto something of staggering importance, if I could only grasp precisely what that might be! He is not the clearest of writers and he tends to leave a lot unsaid and assumed, leaving the reader to fill in the gaps. It’s also complicated stuff – suffice to say, stochastic urns play a significant role. This paper by Cilliers is a good stab at the issue from a high-altitude philosophical perspective that makes a few of the wicked and profound issues quite clear.
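For what it’s worth, here is a minimal sketch of the kind of stochastic urn process Holland draws on (a simple Pólya urn of my own devising, not Holland’s formulation): early chance events get reinforced, so each run settles into its own stable mixture, an emergent and history-dependent structure rather than a predetermined one.

```python
import random

# A minimal Pólya urn sketch (my illustration, not Holland's own model):
# draw a ball at random, return it along with another of the same colour.
# Early chance events are amplified, so every run converges, but each to
# its own limiting proportion: structure emerges from history.

def polya_urn(steps=1000, seed=None):
    rng = random.Random(seed)
    urn = ["red", "blue"]            # start with one ball of each colour
    for _ in range(steps):
        ball = rng.choice(urn)       # draw at random
        urn.append(ball)             # reinforce the colour drawn
    return urn.count("red") / len(urn)

# Different seeds settle on quite different, but stable, proportions of red.
print([round(polya_urn(seed=s), 2) for s in range(5)])
```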

Address of the bookmark: http://blogs.cim.warwick.ac.uk/complexity/wp-content/uploads/sites/11/2014/02/Cilliers-2001-Boundaries-Hierarchies-and-Networks.pdf

Over two dozen people with ties to India’s $1-billion exam scam have died mysteriously in recent months

“… the scale of the scam in the central state of Madhya Pradesh is mind-boggling. Police say that since 2007, tens of thousands of students and job aspirants have paid hefty bribes to middlemen, bureaucrats and politicians to rig test results for medical schools and government jobs.

So far, 1,930 people have been arrested and more than 500 are on the run. Hundreds of medical students are in prison — along with several bureaucrats and the state’s education minister. Even the governor has been implicated.

A billion-dollar fraud scheme, perhaps dozens murdered, nearly 2,000 arrested and hundreds more on the run. How can we defend a system that does this to people? Though opportunities for corruption may be greater in India, this is not peculiar to the culture. It is worth remembering that more than two-thirds of Canadian high school students cheat (I have seen some estimates that are notably higher – this was just the first in the search results and it illustrates the point well enough):

According to a survey of Canadian university & college students:

  • Cheated on written work in high school: 73%
  • Cheated on tests in high school: 58%
  • Cheated on a test as undergrads: 18%
  • Helped someone else cheat on a test: 8%

According to a survey of 43,000 U.S. high school students:

  • Used the internet to plagiarize: 33%
  • Cheated on a test last year: 59%
  • Did it more than twice: 34%
  • Think you need to cheat to get ahead: 39%

Source: http://www.cbc.ca/manitoba/features/universities/

When it is a majority phenomenon, this is the moral norm, not an aberration. The problem is a system that makes this a plausible and, for many, a preferable solution, despite knowing it is wrong. This means the system is flawed, far more than the people in it. The problems emerge primarily because, in the cause of teaching, we make people do things they do not want to do, and threaten or reward them to enforce compliance. It’s not a problem with human nature; it’s a rational reaction to extrinsic motivation, especially when the threat is as great as we make it. Even my dog cheats under those conditions if she can get away with it. When the point of learning is the reward, then there is no point to learning apart from the reward and, when it’s to avoid punishment, it’s even worse. The quality of learning is always orders of magnitude lower than when we learn something because we want to learn it, or as a side-effect of doing something that interests us, but the direct consequence of extrinsic motivation is to sap intrinsic motivation, so even those with an interest mostly have at least some of it kicked or cajoled out of them. That’s a failure on a majestic scale. If the tests given in schools and universities had some discriminatory value it might still be justifiable, but perhaps the dumbest thing of all about the whole crazy mess is that a GPA has no predictive value at all when it comes to assessing competence.

Address of the bookmark: http://www.theprovince.com/health/Over+dozen+people+with+ties+India+billion+exam+scam+have+died/11191722/story.html

Exam focus damaging pupils' mental health, says NUT – BBC News

A report on a survey of 8,000 teachers and a review of the research.

The report sponsors observe…

“Many of the young people Young Minds works with say that they feel completely defined by their grades and that this is very detrimental to their wellbeing and self-esteem.”

It seems that at least some of their teachers do indeed (reluctantly) define them that way…

One junior school teacher said: “I am in danger of seeing them more in terms of what colour they are in my pupils’ list eg are they red (below expectation), green (above expectation) or purples (Pupil Premium) – rather than as individuals.”

Indeed, it appears to be endemic…

Kevin Courtney, deputy general-secretary of the NUT, said: “Teachers at the sharp end are saying this loud and clear, ‘If it isn’t relevant to a test then it is not seen as a priority.’

“The whole culture of a school has become geared towards meeting government targets and Ofsted expectations. As this report shows, schools are on the verge of becoming ‘exam factories’.”

He argued the accountability agenda was “damaging children’s experience of education”, which should be joyful and leave them with “a thirst for knowledge for the rest of their lives”.

This is terrible and tragic. So surely the British government is trying to do something about it? Not so much…

A Department for Education spokesperson said: “Part of our commitment to social justice is the determination to ensure every child is given an education that allows them realise their potential.

“That’s why we are raising standards with a rigorous new curriculum, world class exams and new accountability system that rewards those schools which help every child to achieve their best.”

Helping people to realise their potential is a noble aim. A “rigorous new curriculum, world class exams and new accountability system” is a guaranteed way to prevent that from happening. Duh. Didn’t those that run the UK government learn anything in their expensive private schools? Oh…

Address of the bookmark: http://www.bbc.co.uk/news/education-33380155

The death of the exam: Canada is at the leading edge of killing the dreaded annual ‘final’ for good | National Post

Good news!

There’s not much to disagree with in this article, which reports on some successful efforts to erode the monstrously ugly blight of exams in Canada and beyond, and some of the more obvious reasoning behind the initiatives to kill them. They don’t work, they’re unfair, they’re antagonistic to learning, they cause pain, etc. All true.

Address of the bookmark: http://news.nationalpost.com/news/canada/the-death-of-the-exam-canada-is-at-the-leading-edge-of-killing-the-final-for-good

The LMS as a paywall

I was writing about openness in education in a chapter I am struggling with today, and had just read Tony Bates’s comments on iQualify, an awful cloud rental service offering a monolithic locked-in throwback that just makes me exclaim, in horror, ‘Oh good grief! Seriously?’ And it got me thinking.

Learning management systems, as implemented in academia, are basically paywalls. You don’t get in unless you pay your fees. So why not pick up on what publishers infamously already do and allow people to pay per use? In a self-paced model like that used at Athabasca it makes perfect sense, and most of the infrastructure – role-based, time-limited access and so on – and of course the content, already exists. Not every student needs six months of access or the trimmings of a whole course but, especially for those taking a challenge route (just the assessment), it would often be useful to have access to a course for a little while in order to get a sense of the expectations, the scope of the content, and the norms and standards employed. On occasion, it might even be a good idea to interact with others. Perhaps we could sell daily, weekly or monthly passes. Or we could do it at a finer level of granularity instead (or as well): a different pass for different topics, or for different components like forums, quizzes or assignment marking. Following the publishers’ lead, such passes might, if every option were purchased, together cost 10 or 20 times the price of simply subscribing to a whole course, but students could strategically pick the parts they actually need, so reducing their own overall costs.
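For what it’s worth, here is a minimal sketch of the sort of pass-based access check this would imply. Everything here is hypothetical: the component names, prices and course code are mine for illustration, not an actual Athabasca or LMS API.

```python
from datetime import datetime, timedelta

# A hypothetical sketch of pay-per-use course access: each pass grants one
# component of one course for a limited period, mirroring the role-based,
# time-limited access that LMSs already implement. Names and prices are
# made up for illustration.

PRICES = {"content": 15, "forums": 10, "quizzes": 20, "assignment_marking": 40}

class Pass:
    def __init__(self, student, course, component, days):
        self.student = student
        self.course = course
        self.component = component                    # e.g. 'forums', 'quizzes'
        self.expires = datetime.now() + timedelta(days=days)
        self.price = PRICES[component] * max(days // 7, 1)

    def grants_access(self, student, course, component):
        return (student == self.student and course == self.course
                and component == self.component
                and datetime.now() < self.expires)

# A challenge-route student buys a week of content access to scope out the course.
weekly = Pass("student42", "XYZ650", "content", days=7)
print(weekly.grants_access("student42", "XYZ650", "content"))   # True
print(weekly.grants_access("student42", "XYZ650", "quizzes"))   # False: separate pass needed
```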

This idea is, of course, stupid. This is not because it doesn’t make economic and practical sense: it totally does, notwithstanding the management, technical and administrative complexity it entails. It is stupid because it flips education on its head. It makes chunks of learning into profit centres rather than the stuff of life. It makes education into a product rather than celebrating its role as an agent of personal and societal growth. It reduces the rich, intricately interwoven fabric of the educational experience to a set of instrumentally-driven isolated events and activities. It draws attention to accreditation as the be-all and end-all of the process. It is aggressively antisocial, purpose-built to reduce the chances of forming a vibrant learning community. This is beginning to sound eerily familiar. Is that not exactly what, in too high a percentage of our courses, we are doing already?

If we and other universities are to survive and thrive, the solution is not to treat courses and accreditation as products or services. The ongoing value of a university is to catalyze the production and preservation of knowledge: that is what we are here for, that is what makes us worth having. Courses are just tools that support that process, though they are far from the only ones, while accreditation is not even that: it’s just a byproduct, effluent from the educational process that happens to have some practical societal value (albeit at enormous cost to learning). In physical universities there are vast numbers of alternatives that support the richer purpose of creating and sustaining knowledge: cafes, quads, hallways, common rooms, societies, clubs, open lectures, libraries, smoking areas, student accommodation, sports centres, theatres, workshops, studios, research labs and so on. Everywhere you go you are confronted with learning opportunities and people to learn with and from, and the taught courses are just part of the mix, often only a small part. At least, that is true in a slightly idealized world – sadly, the vast majority of physical universities are as stupidly focused on the tools as we are, so those benefits are an afterthought rather than the main thing to celebrate, and are often the first things to suffer when cuts come along. Online, such beyond-the-course opportunities are few and far between: the Landing is (of course) built with exactly that concern in mind, but there’s precious little sign of it anywhere else at AU, one of the most advanced online universities in the world. The nearest thing most students get to it is the odd Facebook group or Twitter interaction, which seems an awful waste to me, though a fascinating phenomenon that blurs the lines between the institution and the broader community.

It is already possible to take a high quality course for free in almost any subject that interests you and, more damagingly, there will soon be sources of accreditation that are as prestigious as those awarded by universities but orders of magnitude cheaper, not to mention compellingly cut-price options from universities that can leverage their size and economies of scale (and, perhaps, cheap labour) to out-price the rest of us. Competing on these grounds makes no sense for a publicly funded institution whose role is not to be an accreditation mill but to preserve, critique, observe, transform and support society as a whole. We need to celebrate and cultivate the iceberg, not just its visible tip. Our true value is not in our courses but in our people (staff and students) and the learning community that they create.

What is a BOOC?

This acronym is very ripe for satire. My local public library is full of BOOCs, quite a few of which might be thought of as big, open, offline courses.

Dan Hickey explains: a BOOC is in fact a big open online course with up to about 500 members. This size limit is to enable more interactive, participatory and socially-driven, though still teacher-managed, pedagogies in a large-ish setting without breaking the bank. It’s an open and pedagogically enlightened version of the recently hyped SPOC or, as we normally refer to it, ‘online course’.

Dan’s model is most interesting as a testbed for OpenBadges and the use of WikiFolios, allowing participatory, learner-driven and on-demand assessment in some very admirable and interesting ways. So, though the acronym is a little painful, the ideas behind the implementation itself are most cool. One to follow.

Address of the bookmark: http://www.indiana.edu/~booc/what-is-a-booc/

The EDUCAUSE NGDLE and an API of One's Own (Michael Feldstein)

Michael Feldstein responds on NGDLEs with a brilliant in-depth piece on the complex issues involved in building standards for online learning tool interoperability and more. I wish I’d read this before posting my own most recent response because it addresses several of the same issues with similar conclusions, but in greater depth and with more eloquence, as well as bringing up some other important points such as the very complex differences in needs between different contexts of application. My post does add things that Michael’s overlooks and the perspective is a little different (so do read it anyway!), but the overlapping parts are far better and more thoroughly expressed by Michael.

This is an idea that has been in the air and ripe for exploitation for a very long time but, as Michael says in his post and as I also claim in mine, there are some very big barriers when it comes down to implementing such a thing and a bunch of wicked problems that are very hard to resolve to everyone’s satisfaction. We have been here before, several times: let’s hope the team behind NGDLE finds ways to avoid the mistakes we made in the past.

Address of the bookmark: http://mfeldstein.com/the-educause-ngdle-and-an-api-of-ones-own/

Niggles about NGDLEs – lessons from ELF

Malcolm Brown has responded to Tony Bates and me in an Educause guest post in which he defends the concept of the NGDLE and expands a bit on the purposes behind it. This does help to clarify the intent although, as I mentioned in my earlier post, I am quite firmly in favour of the idea, so I am already converted on the main points. I don’t mind the Lego metaphor if it works, but I do think we should concentrate more on the connections than the pieces. I also see that it is fairly agnostic to pedagogy, at least in principle. And I totally agree that we desperately need to build more flexible, assemblable systems along these lines if we are to enable effective teaching and management of the learning process and, much, much more importantly, if we are to support effective learning. Something like the proposed environment (more of an ecosystem, I’d say) is crucial if we want to move on.

But…

It has been done before, over ten years ago in the form of ELF, in much more depth and detail and with large government and standards bodies supporting it, and it is important to learn the lessons of what was ultimately a failed initiative. Well – maybe not failed, but certainly severely stalled. Parts persist and have become absorbed, but the real value of it was as a model for building tools for learning, and that model is still not as widespread as it should be. The fact that the Educause initiative describes itself as ‘next generation’ is perhaps the most damning evidence of its failure.

Elves

Why ELF ‘failed’

I was not part of, nor close to, the ELF project but, as an outsider, I suspect that it suffered from four major and interconnected problems:

  1. It was very technically driven and framed in the language of ICTs, not educators or learners. Requirements from educators were gathered in many ways, with workshops, working groups and a highly distributed team of experts in the UK, Australia, the US, Canada, the Netherlands and New Zealand (it was a very large project). Some of the central players had a very deep understanding of the pedagogical and organizational needs of not just learners but organizations that support them, and several were pioneers in personal learning environments (PLEs) that went way beyond the institution. But the focus was always on building the technical infrastructure – indeed, it had to be, in order to operationalize it. For those outside the field, who had not reflected deeply on the reasons this was necessary, it likely just seemed like a bunch of techies playing with computers. It was hard to get the message across.
  2. It was far too ambitious, perhaps bolstered by the large amounts of funding and support from several governments and large professional bodies. The e-learning framework was just one of several strands – e-science, e-libraries and so on – that went to make up the e-framework. After a while, it simply became the e-framework and, though conceptually wonderful, in practical terms it was attempting far too much in one fell swoop. It became so broad, complex and fuzzy that it collapsed under its own weight. It was not helped by commercial interests that were keen to keep things as proprietary and closed as they could get away with. Big players were not really on board with the idea of letting thousands of small players enter their locked-in markets, which was one of the avowed intents behind it. So, when government funding fizzled out, there was no one left to take up such a huge banner. A few small flags might have been far more successful.
  3. It was too centralized (oddly, given its aggressively decentralized intent and the care taken to attempt to avoid that). With the best of intent, developers built over-engineered standards relying on web service architectures that the rest of the world was abandoning because they were too clunky, insufficiently agile and much too troublesome to implement. I am reminded, when reading many of the documents that were produced at the time, of the ISO OSI network standards of the late 80s that took decades to reach maturity through ornate webs of committees and working groups, were beautifully and carefully engineered, and that were thoroughly and completely trounced by the lighter, looser, more evolved, more distributed TCP/IP standards that are now pretty much ubiquitous. For large complex systems, evolution beats carefully designed engineering every single time.
  4. The fact that it was created by educators whose framing was entirely within the existing system meant that most of the pieces that claimed to relate to e-learning (as opposed to generic services) had nothing to do with learning at all, but were representative of institutional roles and structures: marking, grading, tracking, course management, resource management, course validation, curriculum, reporting and so on. None of this has anything to do with learning and, as I have argued on many occasions elsewhere, may often be antagonistic to learning. While there were also components that were actually about learning, they tended to be framed in the context of existing educational systems (writing lessons, creating formal portfolios, sequencing of course content, etc). Though very much built to support things like PLEs as well as institutional environments, the focus was the institution far more than the learner.

As far as I can tell, any implementation of the proposed NGDLE is going to run into exactly the same problems. Though the components described are contemporary and the odd bit of vocabulary has evolved a bit, all of them can be found in the original ELF model and the approach to achieving it seems pretty much the same. Moreover, though the proposed architecture is flexible enough to support pretty much anything – as was ELF – there is a tacit assumption that this is about education as we know it, updated to support the processes and methods that have been developed since (and often in response to) the heinous mistakes we made when we designed the LMSs that dominate education today. This is not surprising – if you ask a bunch of experts for ideas you will get their expertise, but you will not get much in the way of invention or new ideas. The methodology is therefore almost guaranteed to miss the next big thing. Those ideas may come up but they will be smoothed out in an averaging process and dissenting models will not become part of the creed. This is what I mean when I criticize it as a view from the inside.

Much better than the LMS

If implemented, an NGDLE will undoubtedly be better than any LMS, with which there are manifold problems. In the first place, LMSs are uniformly patterned on mediaeval educational systems, with all their ecclesiastic origins, power structures and rituals intact. This is crazy, and actually reinforces a lot of things we should not be doing in the first place, like courses, intimately bound assessment and accreditation, and laughably absurd attempts to exert teacher control, without the slightest consideration of the fact that pedagogies determined by the physics of spaces in which we lock doors and keep learners controlled for an hour or two at a time make no sense whatsoever in online learning. In the second place, centralized systems have to maintain an uneasy and seldom great balance between catering to every need and remaining usably simple. This inevitably leads to compromises, from small things (e.g. minor formatting annoyances in discussion forums) to the large (e.g. embedded roles or units of granularity that make everything a course). While customization options can soften this a little, centralized systems are structurally flawed by their very nature. I have discussed such things in some depth elsewhere, including in both my published books. Suffice to say, the LMS shapes us in its own image, and its own image is authoritarian, teacher-controlled and archaic. So, a system that componentizes things so that we can disaggregate any or all of it, provide local control (for teachers and other learners as well as institutions and administrators) and allow creative assemblies is devoutly to be wished for. Such a system architecture can support everything from the traditional authoritarian model to the loosest of personal learning environments, and much in between.

Conclusion

NGDLE is a misnomer. We have already seen that generation come and go. But, as a broad blueprint for where we should be going and what we should be doing now, both ELF and NGDLE provide patterns that we should be using and thinking about whenever we implement online learning tools and content and, for that, I welcome it. I am particularly appreciative that NGDLE provides reinvigorated support for approaches that I have been pushing for over a decade but that ICT departments and even faculty resist implacably. It’s great to be able to point to the product of so many experts and say ‘look, I am not a crank: this is a mainstream idea’. We need a sea-change in how we think of learning technologies and such initiatives are an important part of creating the culture and ethos that lets this happen. For that I totally applaud this initiative.

In practical terms, I don’t think much of this will come from the top down, apart from the development of lightweight, non-prescriptive standards and the norming of the concepts behind it. Of current standards, I think TinCan is hopeful, though I am a bit concerned that it is becoming over-ornate as it develops. LTI is a good idea, sufficiently mature, and light enough to be usable but, again, in its new iteration it is aiming higher than might be wise. Caliper is OK but also showing signs of excessive ambition. Open Badges are great, but I gather they are becoming less lightweight in their latest incarnation. We need more of such things, not more elaborate versions of them. Unfortunately, the nature of technology is that it always evolves towards increasing complexity. It would be much better if we stuck with small, working pieces and assembled those together rather than constantly embellishing good working tools. Unix provides a good model for that, with tools that have worked more or less identically for decades but that constantly gain new value in recombination.
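To illustrate the sort of lightweight standard I mean, this is roughly the shape of a minimal TinCan (xAPI) statement: an actor, a verb and an object expressed as plain JSON. The identifiers below are made up for illustration (apart from the standard ADL ‘completed’ verb URI), and a real statement would be sent to a learning record store rather than printed.

```python
import json

# Roughly the shape of a minimal TinCan/xAPI statement: actor, verb, object.
# Identifiers are illustrative; in practice the statement would be POSTed to
# a learning record store (LRS) endpoint.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/boundaries-reading",
    },
}

print(json.dumps(statement, indent=2))
```

The appeal, in the Unix spirit above, is that something this small can be produced and consumed by tools that otherwise know nothing about one another; the danger, as with all of these standards, is the accretion of ever more elaborate extensions around it.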

Footnote: what became of ELF?

It is quite hard to find information about ELF today. It seems (as an outsider) that the project just ground to a halt rather than being deliberately killed. There were lots of exemplar projects, lots of hooks and plenty of small systems built that applied the idea and the standards, many of which are still in use today, but it never achieved traction. If you want to find out more, here is a small reading list:

http://www.elframework.org/ – the main site (the link to the later e-framework site leads to a broken page)

http://www.elframework.org/projects.html  – some of the relevant projects ELF incorporated.

https://web.archive.org/web/20061112235250/http://www.jisc.ac.uk/uploaded_documents/Altilab04-ELF.pd – good, brief overview from 2004 of what it involved and how it fitted together

 https://web.archive.org/web/20110522062036/http://www.jisc.ac.uk/uploaded_documents/AltilabServiceOrientedFrameworks.pdf  – spooky: this is about ‘Next Generation E-Learning Environments’ rather than digital ones. But, though framed in more technical language, the ideas are the same as NGDLE.

http://www.webarchive.org.uk/wayback/archive/20110621221935/http://www.elearning.ac.uk/features/nontechguide2 – a slightly less technical variant (links to part 1, which explains web services for non-technical people)

See also https://web.archive.org/web/20090330220421/http://www.elframework.org/general/requirements/scenarios/Scenario%20Apparatus%20UK%205%20(manchester%20lipsig).doc and https://web.archive.org/web/20090330220553/http://www.elframework.org/general/requirements/use_cases/EcSIGusecases.zip, a set of scenarios and use cases that are eerily similar to those proposed for NGDLE.

If anyone has any information about what became of ELF, or documents that describe its demise, or details of any ongoing work, I’d be delighted to learn more!