Thomas Frey: By 2030 over 50% of Colleges will Collapse

Thomas Frey provides an analysis of current trends in education (and, more broadly, learning) and predicts a grim future for colleges and, by extension, schools and universities. This is not a uniformly well-informed article – Frey is clearly an outsider with a somewhat caricatured, or at least highly situated and US-centric, view of the educational system – but, though it repeats arguments that have been made for decades and offers no novel insights, the issues are well summarized and well expressed, and the overall thrust of the article is hard to argue with.

His main points are summarized in a list:

  1. Overhead costs too high – Even if the buildings are paid for and all money-losing athletic programs are dropped, the costs associated with maintaining a college campus are very high. Everything from utilities, to insurance, to phone systems, to security, to maintenance and repair is an expense that online courses do not have. Some of the less visible expenses involve the bonds and financing instruments used to cover new construction, campus projects, and revenue inconsistencies. The cost of money itself will be a huge factor.
  2. Substandard classes and teachers – Many of the exact same classes are taught in thousands of classrooms simultaneously every semester. The law of averages tells us that 49.9% of these will be below average. Yet any college that is able to electronically pipe in a top 1% teacher will suddenly have a better class than 99% of all other colleges.
  3. Increasingly visible rating systems – Online rating systems will begin to torpedo tens of thousands of classes and teachers over the coming years. Bad ratings of one teacher and one class will directly affect the overall rating of the institution.
  4. Inconvenience of time and place – Yes, classrooms help focus our attention and the world runs on deadlines. But our willingness to flex schedules to meet someone else’s time and place requirements is shrinking. Especially when we have a more convenient option.
  5. Pricing competition – Students today have many options for taking free courses without credits vs. expensive classes with credits and very little in between. That, however, is about to change. Colleges focused primarily on course delivery will be facing an increasingly price sensitive consumer base.
  6. Credentialing system competition – Much like a doctor’s ability to write prescriptions, a college’s ability to grant credits has given them an unusual competitive advantage, something every startup entrepreneur is searching for. However, traditional systems for granting credits only work as long as people still have faith in the system. This “faith in the system” is about to be eroded with competing systems. Companies like Coursera, Udacity, and iTunesU are well positioned to start offering an entirely new credentialing system.
  7. Relationships formed in colleges will be replaced with other relationship-building systems – Social structures are changing and relationships built in college, while often quite valuable, are equally often overrated. Just as a dating relationship today is far more likely to begin online, business and social relationships in the future will also happen in far different ways.
  8. Sudden realization that “the emperor has no clothes!” – Education, much like our money supply, is a system built on trust. We are trusting colleges to instill valuable knowledge in our students, and in doing so, create a more valuable workforce and society. But when those who find no tangible value begin to openly proclaim, “the emperor has no clothes!” colleges will find themselves in a hard-to-defend downward spiral.

It is notable that many of the issues raised are fully addressed by online universities like AU, and have been for decades. We have moved on to bigger and more intractable problems! In particular, the idea that classes and teachers are a fixture that cannot be changed is a bit quaint. It is also fair to say that Frey has only a rough idea of how education works: the notion that high-quality lectures have anything much to do with learning or the university experience shows a failure to understand the beast – but then, the same is true of potential students and more than a few professors. But pricing competition, credentialing competition, relationship-building and, above all, the ‘emperor has no clothes’ arguments hit home, and I think will have the effects he anticipates much sooner than 2050. Nothing new here, and a bit coarse, but it clearly expresses the stark reality of the consequences.

Address of the bookmark: http://www.futuristspeaker.com/2013/07/by-2030-over-50-of-colleges-will-collapse/

Assessing teachers’ digital competencies – Virtual Canuck

Terry Anderson on an Estonian approach to assessing teacher competencies (and other projects) using Elgg – the same framework that underpins the Landing. I’ve downloaded the tool they have developed, Digimina, and will be trying it out, not just for exactly the purposes for which it was developed, but as the foundation for a more generalized toolset for sharing the process of assessment. It may spark some ideas, I think.

A nice approach to methodology: Terry prefers the development of design principles as the ‘ultimate’ aim of design-based research (DBR), but I like the notion, used here, of software as a hypothesis. It’s essentially a ‘sciency’ way of describing the process of trying out an idea to see whether it works: one that makes no particular claims to generality, but that both derives from and feeds a model of what can be done, what needs to be done, and why it should be done. The generalizable part is not the final stage, but the penultimate stage of design in this DBR model. In this sense, it formalizes the very informal notion of bricolage, capturing some of its iterative nature. It’s not quite enough, I think, any more than other models of DBR quite capture the process in all its richness. This is because the activity of formulating that hypothesis itself follows a very similar pattern, at a much finer grain, to that of the bigger model. When building code, you try out ideas, see where they take you, and that inspires new ideas through the process of writing as much as of designing and specifying. Shovelling that into a large-scale process model hides where at least an important amount of the innovation actually happens, perhaps over-emphasizing the importance of explicit evaluation phases and underplaying the role of construction itself.

Address of the bookmark: http://terrya.edublogs.org/2015/04/24/assessing-teachers-digital-competencies/

Wait for It: Delayed Feedback Can Enhance Learning – Scientific American

Report on a nice bit of cognitivist research – as the title suggests, delayed feedback (ie don’t give the answer right away) assists retention, and is best done after an unpredictable delay of a few seconds. What’s most interesting about it is the hypothesized reason: it’s curiosity. It only works if it piques your interest enough to want to know the answer, and your level of attention is raised when the timing of an upcoming anticipated event is uncertain. Like so many things in learning, motivation plays a big role here. 

Address of the bookmark: http://www.scientificamerican.com/article/wait-for-it-delayed-feedback-can-enhance-learning/

Government to close two in every five universities – University World News

This is pretty bad news for universities in Russia, coming on top of existing major cuts. It is notable that this mostly affects certificate mills with dubious credentials and very shady practices that have sprung up since the 90s, but it will also affect some state-funded institutions. While there is clearly a long-overdue crackdown in progress on unethical companies pretending to offer education but really just selling certification, this doesn’t seem to tell the whole story, and the article does not explain the underlying problems to which this is a solution. It makes me wonder whether this is just a local problem in Russia, or whether it is part of a more general trend. I presume there may be some places where universities are gaining ground but, for the most part, the news I read suggests that most, the world over, are in more or less worsening straits. Is there any research out there on this as a global phenomenon?

Address of the bookmark: http://www.universityworldnews.com/article.php?story=20150417043945585

Open University’s numbers dive 28% as pool of part-timers dries up

Quite a slide in just five years!

The incoming VC blames it on general drops in student numbers in the UK, which most notably affect part-time students (as at AU, all of the OU’s students are part-time). A recent article suggested a 37% drop in recent years in the UK HE sector overall, so 28% is perhaps not that awful, in relative terms.

The drop is not too surprising, given that UK fees have risen precipitously in recent years thanks to decades of attack on higher education by both Labour and Conservative governments. In the OU’s case, according to one of the commenters, this equates to an increase from £500 (a bit over $900 Canadian) to £1600 (about $3000) for a course. When course costs rise, part-timers (many of whom are self-funding and were, in the past, doing it for love as often as for career change or advancement) are inevitably going to be the first casualties.
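
As a rough sanity check on those figures (a minimal sketch: the exchange rate is my assumption, roughly the CAD/GBP rate at the time of writing, and is not taken from the article or the commenter):

```python
# Rough check of the fee figures quoted above (illustrative only).
old_fee_gbp = 500
new_fee_gbp = 1600
cad_per_gbp = 1.85  # assumed exchange rate, circa 2015

increase_factor = new_fee_gbp / old_fee_gbp      # 3.2x
percent_increase = (increase_factor - 1) * 100   # 220%
old_fee_cad = old_fee_gbp * cad_per_gbp          # ~925 CAD ("a bit over $900")
new_fee_cad = new_fee_gbp * cad_per_gbp          # ~2960 CAD ("about $3000")

print(f"Fee rise: {increase_factor:.1f}x ({percent_increase:.0f}% increase)")
print(f"In Canadian dollars: ~${old_fee_cad:.0f} -> ~${new_fee_cad:.0f}")
```

In other words, the cost of a course has more than tripled in short order – more than enough to deter anyone who was studying for love rather than credentials.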

Address of the bookmark: http://www.timeshighereducation.co.uk/news/open-universitys-numbers-dive-28-as-pool-of-part-timers-dries-up/2018593.article

Instructional quality of Massive Open Online Courses (MOOCs)

This is a very interesting, if (I will argue) flawed, paper by Margaryan, Bianco and Littlejohn using a Course Scan instrument to examine the instructional design qualities of 76 randomly selected MOOCs (26 cMOOCs and 50 xMOOCs – the imbalance was caused by difficulties finding suitable cMOOCs). The conclusions drawn are that very few MOOCs, if any, show much evidence of sound instructional design strategies. In fact they are, according to the authors, almost all an instructional designer’s worst nightmare, on at least some dimensions.

I like this paper but I have some fairly serious concerns with the way this study was conducted, which means a very large pinch of salt is needed when considering its conclusions. The central problem lies in using prescriptive criteria to identify ‘good’ instructional design practice, and then treating those criteria as quantitative measures of things deemed essential to any completed course design.

Doubtful criteria 

It starts reasonably well. Margaryan et al use David Merrill’s well-accepted abstracted principles for instructional design to identify kinds of activities that should be there in any course and that, being somewhat derived from a variety of models and theories, are pretty reasonable: problem centricity, activation of prior learning, expert demonstration, application and integration. However, the chinks begin to show even here, as it is not always essential that all of these are explicitly contained within a course itself, even though consideration of them may be needed in the design process – for example, in an apprenticeship model, integration might be a natural part of learners’ lives, while in an open ‘by negotiated outcome’ course (e.g. a typical European PhD) the problems may be inherent in the context. But, as a fair approximation of what activities should be in most conventional taught courses, it’s not bad at all, even though it might show some courses as ‘bad’ when they are in fact ‘good’. 
The authors also add five more criteria abstracted from literature relating rather loosely to ‘resources’, including: expert feedback; differentiation (i.e. personalization); collaboration; authentic resources; and use of collective knowledge (i.e. cooperative sharing). These are far more contentious, with the exception of feedback, which almost all would agree should be considered in some form in any learning design (and which is a process thing anyway, not a resource issue). However, even this does not always need to be the expert feedback that the authors demand: automated feedback (which is, to be fair, a kind of ossified expert feedback, at least when done right), peer feedback or, best of all, intrinsic feedback can often be at least as good in most learning contexts. Intrinsic feedback (e.g. when learning to ride a bike, falling off it or succeeding in staying upright) is almost always better than any expert feedback, albeit that it can be enhanced by expert advice. None of the rest of these ‘resources’ criteria are essential to an effective learning design. They can be very useful, for sure, although it depends a great deal on context and how it is done, and there are often many other things that may matter as much or more in a design: support for reflection, for example, or scope for caring or passion to be displayed, or design to ensure personal relevance. It is worth noting that Merrill observes that, beyond the areas of broad agreement (which I reckon are somewhat shoehorned to fit), there is much more in other instructional design models that demands further research and that may be equally if not more important than those identified as common.

It ain’t what you do…

Like all things in education, it ain’t what you do but how you do it that makes all the difference, and it is all massively dependent on subject, context, learners and many other things. Prescriptive measures of instructional design quality like these make no sense when applied post-hoc because they ignore all this. They are very reasonable starting frameworks for a designer that encourage focus on things that matter and can make a big difference in the design process, but real life learning designs have to take the entire context into account and can (and often should) be done differently. Learning design (I shudder at the word ‘instructional’ because it implies so many unhealthy assumptions and attitudes) is a creative and situated activity. It makes no more sense to prescribe what kinds of activities and resources should be in a course than it does to prescribe how paintings should be composed. Yes, a few basics like golden ratios, rules of thirds, colour theory, etc can help the novice painter produce something acceptable, but the fact that a painting disobeys these ‘rules’ does not make it a bad painting: sometimes, quite the opposite. Some of the finest teaching I have ever seen or partaken of has used the most appalling instructional design techniques, by any theoretical measure.

Over-rigid assumptions and requirements

One of the biggest troubles with such general-purpose abstractions is that they make some very strong prior assumptions about what a course is going to be like and the context of delivery. Thanks to their closer resemblance to traditional courses (from which it should be clearly noted that the design criteria are derived) this is, to an extent, fair-ish for xMOOCs. But, even in the case of xMOOCs, the demand that collaboration, say, must occur is a step too far: as decades of distance learning research has shown (not least at Athabasca University), great learning can happen without it and, while cooperative sharing is pragmatic and cost-effective, it is not essential in every course. Yes, these things are often a very good idea. No, they are not essential. Terry Anderson’s well-verified (and possibly self-confirming, though none the worse for it) interaction equivalency theorem – roughly, that deep and meaningful learning can be sustained as long as at least one of student–teacher, student–student or student–content interaction is at a high level – makes this pretty clear.

cMOOCs are not xMOOCs

Prescriptive criteria as a tool for evaluation make no sense whatsoever in a cMOOC context. This is made worse because the traditional model is carried to extremes in this paper, to the extent that the authors bemoan the lack of clear learning outcomes. Learning outcomes don’t naturally fall out from the design principles at all, so I don’t understand why they are even mentioned; it seems an arbitrary criterion that has no validity or justification beyond the fact that they are typically used in university teaching. As teacher-prescribed learning outcomes are anathema to Connectivism it is very surprising indeed that the cMOOCs actually scored higher than the xMOOCs on this metric, which makes me wonder whether the means of differentiation were sufficiently rigorous. A MOOC that genuinely followed Connectivist principles would not provide learning outcomes at all: foci and themes, for sure, but not ‘at the end of this course you will be able to x’. And, anyway, as a lot of research and debate has shown, learning outcomes are of far greater value to teachers and instructional designers than they are to learners, for whom they may, if not handled with great care, actually get in the way of effective learning. It’s a process thing – helpful for creating courses, almost useless for taking them. The same problem occurs in the use of course organization in the criteria – cMOOC content is organized bottom-up by learners, so it is not very surprising that they lack careful top-down planning, and that is part of the point.

Apparently, some cMOOCs are not cMOOCs either

As well as concerns about the means of differentiating courses and the metrics used, I am also concerned with how they were applied. It is surprising that there was even a single cMOOC that didn’t incorporate use of ‘collective knowledge’ (the authors’ term for cooperative sharing and knowledge construction) because, without that, it simply isn’t a cMOOC: it’s there in the definition of Connectivism. As for differentiation, part of the point of cMOOCs is that learning happens through the network which, by definition, means people are getting different options or paths, and choosing those that suit their needs.

The big point in both cases is that the teacher-designed course does not contain the content in a cMOOC: beyond the process support needed to build and sustain a network, any content that may be provided by the facilitators of such a course is just a catalyst for network formation and a centre around which activity flows and learner-generated content and activity is created. With that in mind it is worth pointing out that problem-centricity in learning design is an expression of teacher control which, again, is anathema to how cMOOCs work. Assuming that a cMOOC succeeds in connecting and mobilizing a network, it is all but certain that a great deal of problem-based and inquiry-based learning will be going on as people post, others respond, and issues become problematized. Moreover, the problems and issues will be relevant and meaningful to learners in ways that no pre-designed course can ever be. The content of a cMOOC is largely learner-generated so of course a problem focus is often simply not there in static materials supplied by people running it. cMOOCs do not tell learners what to do or how to do it, beyond very broad process support which is needed to help those networks to accrete. It would therefore be more than a little weird if they adhered to instructional design principles derived from teacher-led face-to-face courses in their designed content because, if they did, they would not be cMOOCs.

Of course, it is perfectly reasonable to criticize cMOOCs as a matter of principle on these grounds: given that (depending on the network) few will know much about learning and how to support it, one of the big problems with connectivist methods is that of getting lost in social space, with insufficient structure or guidance to suit all learning needs, insufficient feedback, inefficient paths and so on. I’d have some sympathy with such an argument, but it is not fair to judge cMOOCs on criteria that their instigators would reject in the first place and that they are actively avoiding. It’s like criticizing cheese for not being chalky enough.

It’s still a good paper though

For all that I find the conclusions of this paper very arguable and the methods highly criticizable, it does provide an interesting portrait of MOOCs using an unconventional lens. We need more research along these lines because, though the conclusions are mostly arguable, what is revealed in the process is a much richer picture of the kinds of things that are and are not happening in MOOCs. These are fine researchers who have told an old story in a new way, and this is enlightening stuff that is worth reading.
 
As an aside, we also need better editors and reviewers for papers like this: little tell-tales like the fact that ‘cMOOC’ gets to be defined as ‘constructivist MOOC’ at one point (I’m sure it’s just a slip of the keyboard as the authors are well aware of what they are writing about) and more typos than you might expect in a published paper suggest that not quite enough effort went into quality control at the editorial end. I note too that this is a closed journal: you’d think that they might offer better value for the money that they cream off for their services.

Address of the bookmark: http://www.sciencedirect.com/science/article/pii/S036013151400178X

Learning in an introductory physics MOOC: All cohorts learn equally, including an on-campus class | Colvin | The International Review of Research in Open and Distance Learning

Thanks to Tony Bates for pointing to and providing a fine review of this interesting article which shows evidence of learning gain in people who were taking an xMOOC.

I have little to add to Tony’s comments apart from to mention the very obvious elephant in this room: the sampling was skewed by the fact that it only considered the small fraction – considerably less than 10% – of the MOOC’s original populace who actually got close to finishing it. It is not too surprising that most of those who had the substantial motivation demanded to finish the course (a large percentage of whom were very experienced learners in related fields) actually did pretty well. What it does not tell us is whether, say, a decent open textbook might have been equally or more effective for these manifestly highly motivated and proficient students. If so, a MOOC might not be a particularly cost-effective way of learning.
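
To illustrate why that selection matters, here is a minimal toy simulation (entirely invented numbers, not drawn from the paper) of what happens when gains are measured only among the small minority who finish a course, assuming only that the more motivated are both more likely to finish and likely to learn more:

```python
import random

# Toy model of completion bias (illustrative assumptions only, not data from the study).
random.seed(1)

enrolees = []
for _ in range(10_000):
    motivation = random.random()                      # 0 (low) to 1 (high), uniform
    finishes = random.random() < motivation ** 10     # only the highly motivated tend to finish (~9%)
    gain = random.gauss(0.2 + 0.5 * motivation, 0.1)  # learning gain on an arbitrary scale
    enrolees.append((finishes, gain))

finisher_gains = [g for finished, g in enrolees if finished]
all_gains = [g for _, g in enrolees]

print(f"Completion rate: {len(finisher_gains) / len(all_gains):.1%}")
print(f"Mean gain, finishers only: {sum(finisher_gains) / len(finisher_gains):.2f}")
print(f"Mean gain, all enrolees:   {sum(all_gains) / len(all_gains):.2f}")
```

The finishers’ gains come out much higher than those of the cohort as a whole, for no reason other than who they are. None of this means the course taught nothing; it just means that gains measured among finishers tell us little about what a typical enrolee – or a motivated learner handed a good open textbook instead – would have achieved.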

The study does compare the performance of a remedial class of students (i.e. students who had failed an initial course) who received plentiful further face-to-face support with that of the voluntarily subscribed online students. But the authors rightly note that it would be foolish to read anything into any of the differences found, including the fact that the campus-based students seemed to gain nothing from additional remedial tuition (they may be overly pessimistic about that: without the remedial effort, they might have done even worse), because the demographics and motivations of these students were a million miles removed from those of the rest of the cohort. Chalk and cheese.

One other interesting thing that is worth highlighting: this is one in a long line of articles focusing on interventions that, when looked at closely, suggest that people who spend more time learning learn more. I suspect that a lot of the value of this and indeed many courses comes from being given permission to learn (or, for the campus students, being made to do so) along with having a few signposts to show the way, a community to learn with, and a schedule to follow. Note that almost none of this has anything to do with how well or how badly a specific course is designed or implemented: it is in the nature of the beast itself. Systems teach as much as teachers. The example of the campus-based students suggests that this may not always be enough although, sadly, the article doesn’t compare the time on task for this group with that of the rest. It may well be that, despite an extra 4 hours in class each week, they still spent less time actually learning. In fact, given a prima facie case that these students had already mostly demonstrated a lack of interest and/or ability in the subject, even that tutorial time may not have been dedicated learning time.

A small niggle: the comparison with in-class learning on different courses, studied by Hake in 1998 and mentioned a couple of times in the article, is quite spurious. There is a world of difference between predominantly extrinsically motivated classroom-bound students and those doing it because, self-evidently, they actually want to do it. If you were to extract the most motivated 10% of any class you might see rather different learning patterns too. The nearest comparison that would make a little sense here is with the remedial campus-bound students though, for the aforementioned reasons, that would not be quite fair either.
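
For readers unfamiliar with Hake’s study, its learning-gain comparisons rest (if I recall correctly) on the normalized gain measure, roughly:

```latex
\langle g \rangle = \frac{\%\langle \text{post} \rangle - \%\langle \text{pre} \rangle}{100\% - \%\langle \text{pre} \rangle}
```

i.e. the fraction of the available room for improvement over the pre-test score that is actually achieved on the post-test. It is a handy way of comparing gains across groups with different starting points, but it says nothing about who those groups are, which is precisely the problem here.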

Little or none of this is news to the researchers, who in their conclusion carefully write:

“Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course (but over 50% of students making a serious attempt at the second weekly test received certificates) and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.”

This is a good and fair account of reasons to be wary of these results. What it boils down to is that there are almost no notable firm conclusions to be drawn from them about MOOCs in general, save that people taking them sometimes learn something or, at least, are able to pass tests about them. This is also true of most people who read Wikipedia articles.

For all that, the paper is very well written, the interventions are well-described (and include some useful statistics, like the fact that 95% of the small number that attempted more than 50% of the questions went on to gain a certificate), the research methods are excellent, the analysis is very well conducted, and, in combination with others that I hope will follow, this very good paper should contribute a little to a larger body of future work from which more solid conclusions can be drawn. As Tony says, we need more studies like this.

Address of the bookmark: http://www.irrodl.org/index.php/irrodl/article/view/1902/3009

Five myths about Moocs | Opinion | Times Higher Education

Diana Laurillard chipping in with a perceptive set of observations, most interestingly describing education as a personal client industry, in which tutor/student ratios are remarkably consistent at around 1:25, so it is no great surprise that education doesn’t scale up. Seems to me that she is quite rightly attacking a particular breed of EdX/Coursera-style xMOOC, but it doesn’t have to be this way, and she carefully avoids discussing *why* that ratio is really needed – her own writings and her variant on conversation theory suggest there might be alternative ways of looking at this.

Her critique that xMOOCs appear to succeed only for those that already know how to be self-guided learners is an old chestnut that hits home. She is right in saying that MOOCs (xMOOCs) are pretty poor educational vehicles if the only people who benefit are those that can already drive, and it supports her point about the need for actual teachers for most people *if* we continue to teach in a skeuomorphic manner, copying the form of traditional courses without thinking why we do what we do and how courses actually work.

For me this explains clearly once again that the way MOOCs are being implemented is wrong and that we have to get away from the ‘course’ part of the acronym, and start thinking about what learners really need, rather than what universities want to give them.

Address of the bookmark: http://www.timeshighereducation.co.uk/comment/opinion/five-myths-about-moocs/2010480.article