Innovations in learning and teaching

Recently I received an email asking me to identify, with almost no constraints, some examples of innovative teaching and learning practices in universities. Gosh, that’s a tricky one. I don’t think I can provide a sensible answer, for several reasons:

 

  • I’m aware of no teachers (including learning designers, mentors, tutors, coordinators, professors, etc.) who have *not* innovated in teaching, and of few who don’t do so as a matter of course. There are differing degrees of innovation, naturally, but to teach is to learn and it is necessarily a creative process. I don’t see how it would be possible to teach without innovating. The innovations might not be very astounding or good, of course.
  • Maybe it depends on the scale at which you are looking. If I made no significant innovations in every course that I write, and maybe in every lesson or activity I design, then I think I would give up now. It could be as small a thing as finding a new way to express an old problem or using a trick from elsewhere in a new setting, or as big a thing as a whole new way of conducting the process. It’s all innovation.
  • Innovation in learning is trickier still to pin down, which reflects an important issue: there are many teaching activities that fail to lead to effective learning, and even more learning that involves nothing much like teaching. The use of paper mills for contract cheating and hint sites for exam cheating is pretty innovative, sometimes.
  • And then there’s the issue of innovation vs invention – in many universities it is undeniably innovative to use an LMS or to get rid of exams, while many have dissed such things as prehistoric dinosaurs that are not fit for purpose for over a decade (for the LMS) and over 200 years (for the exam). In each case, since about the time it was invented, in fact.
  • Similarly, the kinds of innovation that would matter somewhere like Athabasca would not be the same as for a conventional campus-based university – approaches to self-paced learning, for instance, would have little applicability elsewhere. 
  • Much of this relates to the fact that innovation is very context-sensitive. For some contexts, simply using a different tone of voice might be a major innovation. In others, one might have to try harder.
  • This also relates back to the re-invention problem: much of what we still identify as innovative was suggested by Dewey a hundred years ago. 
  • Is an innovation in making more reliable summative assessments an example of an innovation in learning and teaching? Or a means to improve the efficiency of student script processing using OCR or LSA tools? Or a citation management tool? I’m not sure. It depends on context.
  • What about MOOCs? The teaching is often from within a university but the learning is not.

 

An innovation, by and large, is a novel application of an existing idea in a different setting. It’s not about inventing something never seen before, but about doing something in a context where it has not been tried previously. This comes back to the adjacent possible and some stronger variants of technological determinism. Once some technologies and systems are in place, it is inevitable that other things will follow. In some cases, this is obvious and indisputable: for instance, a combination of LMS availability and an institutional mandate to use it means that simply using it is not an innovation – you may innovate in the ways you use it, but not merely in using it. In other cases, the effect is subtler but no less compelling. For example, we have long known that dialogue can be a very powerful tool for learning but, for those involved in distance education, the opportunities to use it used to be expensive and impractical, for the most part. When large-scale, ubiquitous, cheap and simple communication became available, it was not innovative to use it – it would be totally bizarre not to use it, in fact: a sign of idiocy or extreme complacency. There may be some details about the implementation, and about adapting cost-effectively to specific technologies, that could be described as innovative, but the imperative to use the tools for learning in the first place is as compelling as the institutional edict: it’s too obvious to be described as an innovation, unless we describe everything we do as an innovation. Which, of course, in some ways it probably is.

So – does anyone have any ideas for answers to the question?  At a large scale I’m thinking that some of the more interesting innovations of the last couple of decades might include (bearing in mind these are not new inventions and there are lots of uninnovative ways to go about them):

  • Google search and Wikipedia: the two most successful online learning tools ever created, I think. Everyone who has ever used them to learn has probably found innovative ways to learn as a result. In terms of impact, these two tools (and their ilk) are having a greater transformative effect on learning in universities and elsewhere than anything since the invention of the printing press. They are the tip of the wedge that will, eventually, completely transform formal education.
  • e-portfolios: nothing new in concept, but the associated pedagogies, benefits of electronic aggregation, supporting tools and processes mean they seem to be gaining a lot of traction the world over and are a darn good learner-centred idea whose time has come.
  • action learning: an old-ish idea (at least early 90s, probably before) but one of the few truly andragogic pedagogies that has achieved some transformative effects where it has been used.
  • MOOCs: connectivist approaches, openness, large scaling, lack of coercion to learn, and a genuinely different approach to semi-formal learning make these and their cousins still pretty innovative. Not all of it is good innovation: they probably only benefit a very small proportion of the participants or, more accurately, those who really do participate probably gain a lot more than those who participate less. But the use of emergence, crowds, distributed networks, reified connections and so on shows what I believe to be the right direction to be heading, even if the pedagogies, supporting infrastructures, formal processes for recognition and tools are not quite there yet.

I could probably think of hundreds of smaller innovations, ways of using pedagogies and other tools differently, new tools, new processes, new combinations. But that’s just the problem – it’s really hard for me to see the wood for the trees.

 

 

Google to Launch Major New Social Network Called Circles, Possibly Today (Updated)

It sounds like Google is heading in the same direction as we are on the Landing, offering different ways of interacting with different people. This is a necessary step in the evolution of social software. It will be interesting to discover whether they are also thinking of personal as well as social contexts – not only do we present different facets of ourselves to different people at different times (and to the same people at different times – a much trickier problem) but we also adopt very different roles at different times in our personal lives. I think differently, need different things, talk to different people and read different things depending on what I am doing and what I mean to do.

That’s the idea behind the poorly named ‘context switcher’ that is being developed at Athabasca – to adopt different personas at different times and in different contexts, both for other people and for our own personal purposes. I just wish we had a better name that made the meaning more obvious. ‘Circles’ is pretty good in a social context but less meaningful in a personal context, so I would reject that. Lately I’ve been thinking that ‘facets’ captures the meaning better (it is about different facets of ourselves, whether for our own benefit or the benefit of others) but ‘facets’ is (like ‘context switching’) maybe a little technical. It works well for me and anyone else who has ever read Ranganathan, but maybe lacks popular appeal.
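To make the facet idea a little more concrete, here is a minimal sketch in Python of how a ‘facet’ might be modelled: a named bundle of audience, feeds and tools that a person can switch between. This is purely illustrative and not the Landing’s actual design; all of the class and field names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Facet:
    """A hypothetical 'facet' of a person: one persona for one context."""
    name: str                                      # e.g. "teaching", "research", "family"
    audience: set = field(default_factory=set)     # who sees activity in this facet
    feeds: list = field(default_factory=list)      # what I read in this context
    tools: list = field(default_factory=list)      # what I use in this context

@dataclass
class Person:
    """A person is a collection of facets plus whichever one is active now."""
    facets: dict = field(default_factory=dict)
    active: str = "default"

    def switch(self, facet_name: str) -> Facet:
        """Context switching: change which facet filters what I see and share."""
        if facet_name not in self.facets:
            raise KeyError(f"No such facet: {facet_name}")
        self.active = facet_name
        return self.facets[facet_name]

# Usage: the same person presents (and experiences) different things per facet.
jon = Person(facets={
    "teaching": Facet("teaching", audience={"students"}, feeds=["course news"]),
    "research": Facet("research", audience={"colleagues"}, feeds=["journals"]),
})
current = jon.switch("research")
print(current.name, current.audience)
```

The point of the sketch is only that a facet serves two purposes at once: it is a filter on what others see of me and a filter on what I see of the world while I am in that mode.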

Any and all ideas appreciated!

Address of the bookmark: http://www.readwriteweb.com/archives/google_to_launch_major_new_social_network_called_c.php?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+readwriteweb+%28ReadWriteWeb%29

How The New York Times Is Incorporating Social & Algorithmic Recommendations

Interesting report about the use of various forms of recommendation in the NYT. The article suggests a likely division into human-edited, friend-recommended and algorithmically recommended stories that neatly captures what Terry Anderson and I have been discussing in terms of groups, networks and collectives. The transition from hierarchical group (the editor decides) to network (your friends suggest) to collective (data sets are mined for crowd opinions) mirrors the traditional classroom, the network and the collective intelligence of some Web systems in online learning.
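As a rough illustration of that triad (not the NYT’s actual system; the data, weights and function names are invented for the example), the three sources can be treated as three scores over the same pool of stories, blended together:

```python
# A toy sketch of blending three recommendation sources: hierarchical group
# (editors), network (friends) and collective (mined crowd behaviour).
# Purely illustrative; all data and weights are invented.

stories = ["budget", "hockey", "election", "recipe"]

editor_picks = {"election": 1.0, "budget": 0.8}                   # the group: editor decides
friend_shares = {"hockey": 3, "recipe": 1}                        # the network: friends suggest
crowd_clicks = {"election": 950, "hockey": 400, "recipe": 2000}   # the collective: crowd behaviour

def blended_score(story, w_group=0.4, w_network=0.3, w_collective=0.3):
    """Combine the three sources into one score (each crudely normalised)."""
    group = editor_picks.get(story, 0.0)
    network = friend_shares.get(story, 0) / max(friend_shares.values())
    collective = crowd_clicks.get(story, 0) / max(crowd_clicks.values())
    return w_group * group + w_network * network + w_collective * collective

for story in sorted(stories, key=blended_score, reverse=True):
    print(f"{story}: {blended_score(story):.2f}")
```

The interesting design choice, in learning terms as much as in news, is how much weight to give each of the three modes, and whether the learner or reader gets to adjust those weights themselves.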

Address of the bookmark: http://mashable.com/2011/03/10/new-york-times-recommendations-2/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Mashable+%28Mashable%29

Technologies and learning

I’ve spent far more time than is healthy over the past few years thinking about technology, learning and education, and how they fit with each other. I was interested to read this recent meta-meta-study on the effects of computers in education, but it really tells us nothing we did not already know (though it has some good insights into why it is tricky and what may be needed).

The trouble with a focus on a tool, especially something like a computer that is potentially an infinite number of tools, is that it tells us practically nothing of value about the learning technology. All education, bar none, is technology-enhanced learning and all, bar none, involves tools – minimally, cognitive/social tools like pedagogies that are assumed to lead to learning (and, usually, tools to assess that it has happened), organisational tools to support bringing people together, clocks to assist that process, spaces constructed not to hinder it too much, not to mention the ultimate toolset, language itself. That’s just a small part of the list, of course. Most education, especially in a formal context, involves dozens, hundreds or even thousands of tools, assembled into technologies which may themselves be parts of technological assemblies. The issue is not whether a technology like language (say) is used, but how it is used. And that’s what no metastudy that focuses on a single set of tools will ever tell us in any useful way. For that matter, it seldom comes out properly in the original studies themselves.

You might just as stupidly ask what effect chalk has on learning. Used well, in conjunction with other tools like blackboards, classroom seating arrangements, intelligent pedagogies and a caring teacher, it can have a hugely beneficial effect and, without it (assuming other co-occurring variables like the presence of a blackboard and a pedagogy that requires it), things can go terribly wrong. Defining a technology must include thinking about what it uses and what it is being used for – otherwise we are just talking about objects that are of no interest or value. So we should be looking at the technological assemblies that we use and how they work together, of which specific tools are a necessary but nowhere near sufficient component.

We are not going to show anything valuable about computers per se because they are universal tools, media and environments: because of that flexibility, they can be used to improve learning. They let us do pretty much anything we want if we can program and use them effectively. If tools can improve learning, and computers can be pretty much any tool, then of course they can be of phenomenal value. That’s just basic logic. It would be stupid to suggest otherwise. It’s not even worth asking the question. We might ask reasonable questions about the economics of using them, access or health issues and so on, yet it is as certain as night follows day that computers can help people to learn. But how? Now, that is a really good (and less well answered) research question, one which actually strikes at the heart of what all education is about.

Bearing that in mind, I have been wondering of late about the differences between social interactions online and face to face. Some differences appear to be obvious, even in the most immersive of online communication systems – the lack of important cues like scent, touch and peripheral vision, limitations on hearing background noises, limitations in the rendition of video (even in 3D at high resolution), the fact that no commitment to meet in one place has been made (and therefore no continuation beyond the communication event itself), the fact that each participant exists in an environment where they are differently distracted, and so on. But, of course, such things may occur in face to face environments too. People have disabilities that limit shared sensations; if I sit opposite you at a table, my distractions are different from yours (I once failed an interview at least partly because I alone was facing a window over the sea and thought I could see whales playing in the waves, but that’s another story); and my commitment to go to a class down the hall may be very different from yours to come from a poorly connected village 50 miles away. In most respects, the most mundane of face to face meetings contain situations analogous to those we experience routinely in online scenarios and, though the scale of the effects, the ease of dealing with them and their ubiquity may all vary, we still have to face them.

I’d be really interested to hear of any research that has looked into such constraints in a face to face setting without the intervention of computers – differences caused by seating arrangements, differences caused by being at the front of the class or the back, the effects of a teacher with body odour issues, the effects of distance traveled to class on commitment, and so on. Does anyone know of such studies? I’ve read a few here and there but not looked too carefully at the literature. I’m guessing some work must have been done on this, especially with regard to the effects of disabilities. My suspicion is that such easy and commonplace problems might tell us some useful things about how to fill the transactional distance gap in online systems.

 

How Facebook Is Killing Your Authenticity

Yet another article bemoaning the uni-dimensionality of Facebook identity – something we have been banging on about for a really long time. I guess there are two potential outcomes for this groundswell of revolt:

  1. A mass (and probably slow) move away from Facebook to a federated and more or less loosely joined set of identities and/or the kind of context-switching functionality we are working on (still) for the Landing.
  2. Facebook waking up to the problem and doing more about it than adding some group functionality.

I think both are clearly happening, but I fear the second option might be more likely to succeed in the short term than the first. Facebook’s developers are very smart and I’m certain they have been working hard on the problem for some while. But the last thing the Web needs is centralised control. We need to own our multiple identities and to be free to adopt innovative solutions. Unfortunately, reliance on a central provider reduces our capacity to manage multiple identities (it’s not a technical limitation, but Metcalfe’s and Reed’s laws ensure that alternatives have a geometrically dwindling chance of success) and constrains innovation in exactly the place it is needed most right now.
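A back-of-the-envelope sketch shows why the odds dwindle so quickly. Treating Metcalfe’s law as value growing with n² and Reed’s law as value growing with 2ⁿ (both crude heuristics, and the user counts here are invented purely for illustration), the gap between an incumbent and a smaller rival is large under the first and astronomical under the second:

```python
# Crude illustration of network-value heuristics; the numbers are invented.

def metcalfe(n: int) -> float:
    """Metcalfe's law: network value grows roughly with n squared."""
    return n ** 2

def reed(n: int) -> float:
    """Reed's law: value of group-forming networks grows roughly with 2**n."""
    return 2.0 ** n

incumbent, rival = 500, 50   # hypothetical user counts

print(f"Metcalfe advantage: {metcalfe(incumbent) / metcalfe(rival):.0f}x")   # 100x
print(f"Reed advantage:     {reed(incumbent) / reed(rival):.2e}x")           # ~3e135x
```

Whatever one thinks of the precise exponents, the shape of the curve is the point: the value of staying where everyone else already is grows far faster than linearly, which is exactly what squeezes out federated alternatives.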

Address of the bookmark: http://www.businessinsider.com/how-facebook-is-killing-your-authenticity-2011-3

Woz to educators: “be brave, use the new technology”

Steve Wozniak in great inspirational form, discussing a very straightforward, pragmatic and obvious approach to education that is hard to argue with. It is, as he observes, very, very far from the norm.

The interview ends with a few comments on the much-maligned Apple Newton which make me wonder a bit – the idea was to make the computer do the work for you, but one of the more memorable things about the Newton was the high failure rate in how it interpreted what was written on it. This is a big risk in hardening technologies – the more the computer does for you, the fewer decisions you need to make, the more control the programmer has over your life. This is particularly bad when the programmer fails but, even when the program works as it should, we need to be acutely aware of how our work is being shaped by the design of the system. I think a big difference between the Newton and the iPad (which he also mentions) is that the iPad gives much greater control to the end-user, not at an individual app level but in the wide range of apps that may be selected. The problem becomes one of finding the right app rather than of battling with the machine – and finding the right app is, of course, still quite a big problem. But it is a problem that is soluble by ordinary mortals, not programmers. And that is a big difference.

Address of the bookmark: http://arstechnica.com/apple/news/2011/03/woz-to-educators-be-brave-use-the-new-technology.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss

learner-teaching-learning analytics

I’ve been having some interesting discussions in Banff this week with folks interested in ‘learning analytics’. I put it in quotes because I’m not convinced that it is a) a distinct field or b) one thing.

Ignoring issues of massive overlaps and shared values with other fields (such as data mining, collaborative filtering, adaptive hypermedia, natural language processing, learning design and evaluation, and so on), which make it hard to distinguish at times, it seems to me that there are at least three subfields:

learner analytics: used by admins, policy makers, governments and so on to see what learners are doing with a view to taking some action at a pragmatic or policy level as a result. May also be used by teachers to monitor and understand learners and their needs. Rarely, but potentially, of use to learners.

teaching analytics: looking at the success or otherwise of teaching interventions – courses, assessments, teaching acts, content construction, learning design, etc, with a view to changing the teaching process to make it better. Pretty much exclusively the domain of those involved in the teaching process like teachers and instructional designers.

learning analytics: looking at how people are learning, including construction of artefacts, interactions with others, progression, etc, with a view to taking direct action to improve it, usually (but by no means necessarily) by and for the learner.

I care about learning analytics and see great practical value in teaching analytics. Analysing learning and teaching is almost entirely about helping people to learn and, while it may be poorly done, the intentions are almost all aimed at making learners’ lives better. Analysing learners involves some murkier areas: it may have many motivations, including potentially risky ones like implementing efficiencies, targeting for marketing, allocating resources and so on as well as clearly good things like identifying under-represented groups or at-risk learners. I suspect that it may become the most popular analytics domain in education but, because of the dangers, it demands more serious cross-disciplinary and ethically well-considered research than the others. 
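One way to see the distinction between the three subfields is that each asks a different question of the same underlying data, for a different audience. A toy sketch (invented events and field names, not any real system’s schema):

```python
from collections import Counter

# Toy event log: (learner, course, action). Invented data; no real schema implied.
events = [
    ("ana", "MATH101", "submitted"), ("ana", "MATH101", "posted"),
    ("ben", "MATH101", "viewed"),    ("ben", "COMP200", "submitted"),
    ("cara", "COMP200", "viewed"),   ("cara", "COMP200", "viewed"),
]

# Learner analytics: an institutional view of what learners are doing overall,
# e.g. to spot under-engaged or at-risk students.
activity_per_learner = Counter(learner for learner, _, _ in events)

# Teaching analytics: how well a teaching intervention (here, a course) is working,
# e.g. how many distinct learners engage with each course.
learners_per_course = {
    course: len({l for l, c, _ in events if c == course})
    for _, course, _ in events
}

# Learning analytics: feedback to (or for) an individual learner about their own
# learning activity, e.g. what they have actually done so far.
def my_learning_trace(learner: str):
    return [(course, action) for l, course, action in events if l == learner]

print(activity_per_learner)
print(learners_per_course)
print(my_learning_trace("ana"))
```

Same data, three questions, three audiences: which is one reason the ethical weight of each subfield differs so much.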

Survey: 85% of Employees Under 25 Use Personal E-Mail Accounts for Work

This accords with my experience – in fact, I’m amazed that the figure isn’t higher across the board.

Email is the boring plumbing of the Internet, but it is so totally essential that it seems remarkable more effort is not put into supporting it properly so that such problems do not arise. I don’t mean management of the software and hardware, removal of spam, virus protection, etc. – that part I take as a given. I mean the real management, in which managers listen to what the people using their machines need, proactively anticipate their wants, and monitor how they are doing. A free email account from Google can run rings round what most organisations can provide, in reliability, security, spam protection, capacity, speed, accounting and manageability. If corporate email could serve needs better then people would be less inclined to bypass it, but the investment needed for networking departments to listen and act on what people want and need would be huge, given decades of underinvestment. Necessary, though.

 

Address of the bookmark: http://www.readwriteweb.com/enterprise/2011/02/survey-85-of-employees-under-2.php?utm_source=pulsenews&utm_medium=referral&utm_campaign=Feed%3A+readwriteweb+%28ReadWriteWeb%29

The Institutions of Theseus

From Wikipedia…

The ship wherein Theseus and the youth of Athens returned [from Crete] had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their place, insomuch that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same.

—Plutarch, Theseus (http://en.wikipedia.org/wiki/Ship_of_Theseus)

Over the past few days I have been in Japan thanks to the OU of Japan, talking a bit but mainly listening to the great and good talking about how their interest groups (universities, publishers and libraries) should react to a world of increasingly rapid and disruptive change. What interests me greatly about all of this (apart from the research, analyses and arguments presented, of course, much of which has been wonderful) is the assumption, by all, that their particular institutional sector should persist, no matter how much it may change. Libraries become learning centres, universities become publishers, publishers become universities… but the fundamental unit and mindset in each endangered institution persists. We imagine many interesting new futures for our institutions but the end of the institution remains unimaginable. 

I wonder, at what point would we stand up and say ‘we’ve had a good run and we did a good job in our time, but now we are irrelevant and getting in the way of making life better. It’s time to call it a day. What shall we do next?’ I don’t think most of us would ever do this. We just have too much investment in what we have done and it is too much trouble to change the whole thing at once. It’s a bit like realizing that your house is not very well designed or efficient and therefore knocking it down and building a new one. But, though we may resist that for decades or even centuries, at some point, it has to happen. The trouble with the incessant expansion of the adjacent possible is that the argument for demolition gets stronger at an accelerating rate.

We have accepted this with computers for some time now. From the late 1980s to the early 2000s I had one computer, albeit one very much like the Ship of Theseus, which had no parts in its final incarnation that were present in the original of the late 1980s. But since then, while some information has stayed on my machines since the mid 1980s, when one machine no longer suits my needs, I get another. It’s cheaper and simpler than modifying an old one. The same has been true of cellphones for as long as they have been with us.

It is, of course, a lot, lot harder to do that with big institutions and businesses. Hundreds or even thousands of years of slow modification and adaptation have made things like libraries, universities and publishers a very fixed and deeply interwoven part of culture and society, not to mention infrastructure. Replacing an education system on which almost every other institution and industry in some way depends – especially once we move beyond higher education to schools – is not as simple as replacing a computer. We can keep modifying and replacing parts for a long time, I guess, but at some point we will need a new ship.

Interestingly, the analogy holds in the extended Ship of Theseus paradox, in which the rotten planks of the original are collected and eventually reassembled to form the original ship while the ship that bears the name no longer contains any physical parts from the original (which is the ‘real’ ship? At what point does one change to the other?) – libraries are starting to pick up the discarded bits that schools used to own, publishers are taking over various roles of universities, universities are taking pieces of community libraries and trying to become publishers…the list goes on. But maybe we could build better ships instead?

When radical change to universities happens it will probably happen quite fast and it will probably slip in rather unexpectedly from somewhere else entirely. Erik Duval made a great provocative throwaway comment in a panel with assembled publishers that Wikipedia had made other encyclopedias irrelevant already. No one argued that one, but a few looked uncomfortable. Some tried to suggest a hybrid of open and closed content as the best way forward but they were unconvincing. Thanks to the Web, libraries too are already finding their traditional role of custodians of information largely usurped and are becoming something almost entirely unrecognisable to librarians of the past – far closer to educators and infrastructure providers than librarians in many cases. Education, however, is far more deeply intertwined with other things, from work and family patterns to accreditation, from relatively unencumbered knowledge generation to the preservation of culture. And it comes with quasi-religious trappings. It’s a tougher nut to crack.

The separation of accreditation and learning support may be the catalyst for a cascade of change. PLAR/APEL and challenge for credit processes already largely separate learning from accreditation and may be the means to achieve a disruptive and positive evolution that is more widespread in its effects. Accreditation is the biggest single link between educational institutions and the rest of the social ecology so, when that goes, the rest becomes more open to change, competition and evolutionary pressure.

This is a risky process – high-quality methods of accreditation are vital if we are to see positive change: if the separation leads to more standardised exams, for example, then the result would be a disaster for learning – teaching the most efficient way to pass tests will trump deeper learning every time, and we could expect to see education mills tuning themselves to fit the tests in the worst ways possible.

Competence-based methods that valorise diversity and creativity, like portfolios, are vital if we are to see a positive revolution and not make life worse than it is already. Richer competence-based assessment is also essential if we are to preserve the value of the expertise of professors and lecturers: if standard tests become the norm, assessor roles will be significantly deskilled, and the diversity that is one of the great parts of the university ship – a part that must be preserved and nurtured – will be largely stamped out. It’s not that there is no place for that kind of test, but it should not become the norm. It is important that, in separating learning and assessment, both should remain useful roles for a university, even if other companies, networks and organisations may provide one or other part for learners. There should be nothing to prevent learners from choosing to have both provided by a university if they want, and it might be good for them to do so in many cases. But they should also be able to choose to learn elsewhere and be assessed by the university, or learn at the university and be assessed elsewhere, or bypass the university altogether, if they wish.

If we can cut the direct rope linking higher education and promotion/job finding, then the stage will be set for potentially radical and positive change which will, peculiarly, be a return to traditional values, with universities again set firmly in their role as creators and nurturers of knowledge.

I’m quite looking forward to that.