Cocktails and educational research

A lot of progress has been made in medicine in recent years through the application of cocktails of drugs. Those used to combat AIDS are perhaps the best known, but there are many other applications of the technique to everything from lung cancer to Hodgkin’s lymphoma. The logic is simple. Different drugs attack different vulnerabilities in the pathogens or cancerous cells they seek to kill. Though evolution means that some bacteria, viruses or cancers are likely to be adapted to escape one attack, the more distinct attacks you make, the less likely it is that any will survive.

Simulated learning

Unfortunately, combinatorial complexity means this is not simply a question of throwing a bunch of the best drugs of each type together and gaining their benefits additively. I have recently been reading John H. Miller’s ‘A crude look at the whole: the science of complex systems in business, life and society’, which is, so far, excellent, and which addresses this and many other problems in complexity science. Miller uses the nice analogy of fashion to help explain the problem: if you simply choose the most fashionable belt, the trendiest shoes, the latest greatest shirt, the snappiest hat, and so on, the chances of walking out with the most fashionable outfit by combining them are virtually zero. In fact, there’s a very strong chance that you will wind up looking pretty awful. The problem is not easily susceptible to reductive science because the variables all affect one another deeply. If your shirt doesn’t go with your shoes, it doesn’t matter how good either is separately.

The same is true of drugs. You can’t simply pick those that are best on their own without understanding how they all work together. Not only may they fail to combine additively, they may have highly negative effects, may prevent one another being effective, or may behave differently in a different sequence or in different relative concentrations. To make matters worse, side effects multiply as well as therapeutic benefits so, at the very least, you want to aim for the smallest number of compounds in the cocktail that you can get away with. Even if the effects of combining drugs were positive, it would be premature to believe you had found the best possible solution unless you had actually tried them all. And therein lies the rub, because there are a great many ways to combine them.

Miller and colleagues have been using the ideas behind simulated annealing to create faster, better ways to discover working cocktails of drugs. They started with 19 drugs which, a small bit of math shows, could be combined in 2 to the power of 19 different ways – about half a million possible combinations (not counting sequencing or relative strength issues). As only 20 such combinations could be tested each week, the chances of finding an effective, let alone the best, combination within any reasonable timeframe were slim. Simplifying a bit, rather than attempting to cover the entire range of possibilities, their approach finds a local optimum within one locale by picking a point and iterating variations from there until the best combination is found for that patch of the fitness landscape. It then checks another locale and repeats the process, continuing until enough of the fitness landscape has been covered to give confidence of having found at least a good solution: there are then several peaks to compare. This also lets them follow up on hunches and use educated guesses to speed up the search. It seems pretty effective, at least when compared with alternatives that attempt a theory-driven intentional design (too many non-independent variables), and it is certainly vastly superior to methodically trying every alternative, which is not actually possible within acceptable timescales.
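Simplifying further still, the locale-by-locale search described above is essentially hill climbing with random restarts. Here is a minimal sketch in Python; the 19-bit cocktail encoding, the fitness function and its hidden optimum are all invented for illustration (in reality, fitness would come from a week of lab tests, and real drug interactions would make the landscape far more rugged):

```python
import random

N_DRUGS = 19  # as in Miller's example: 2**19, about half a million combinations

def fitness(combo):
    """Hypothetical stand-in for a week of lab testing on one cocktail.
    Scores a 19-bit combination against an arbitrary hidden optimum.
    NOTE: this toy landscape is smooth; real drug interactions make it
    rugged, which is exactly why random restarts matter."""
    target = 0b1010110010101100101  # invented 'best' cocktail
    return N_DRUGS - bin(combo ^ target).count("1")

def hill_climb(start):
    """Find the local optimum for one patch of the landscape: toggle
    one drug in or out at a time, keeping any improving change."""
    current, improved = start, True
    while improved:
        improved = False
        for bit in range(N_DRUGS):
            neighbour = current ^ (1 << bit)  # flip one drug in/out
            if fitness(neighbour) > fitness(current):
                current, improved = neighbour, True
    return current

def restart_search(n_restarts=10, seed=1):
    """Climb from several randomly chosen starting cocktails and keep
    the best peak found, rather than trusting a single locale."""
    rng = random.Random(seed)
    peaks = [hill_climb(rng.getrandbits(N_DRUGS)) for _ in range(n_restarts)]
    return max(peaks, key=fitness)

best = restart_search()
```

Each restart is one "locale"; comparing the resulting peaks is what gives confidence that at least a good solution has been found.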

The central trick is to deliberately go downhill on the fitness landscape at times, rather than always following an uphill route of continuous improvement, which may simply get you to the top of an anthill rather than the peak of Everest. Miller very effectively shows that this is the fundamental error committed by followers of the Six Sigma approach to management, an iterative method of process improvement originally invented to reduce errors in the manufacturing process: it may work well in a manufacturing context with a small number of variables to play with in a fixed and well-known landscape, but it is much worse than useless when applied in a creative industry like, say, education, because the chances that we are climbing a mountain and not an anthill are slim to negligible. In fact, the same is true even in manufacturing: if you are just making something inherently weak as good as it can be, it is still weak. There are lessons here for those who work hard to make our educational systems work better. For instance, attempts to make examination processes more reliable are doomed to fail because it’s exams that are the problem, not the processes used to run them. As I finish this while listening to a talk on learning analytics, I see dozens of such examples: most of the analytics tools described are designed to make the various parts of the educational machine work ‘better’, i.e. (for the most part) to help ensure that students’ behaviour complies with teachers’ intent. Of course, the only reason such compliance was ever needed was for efficient use of teaching resources, not because it is good for learning. Anthills.
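The downhill trick can be made concrete. In classic simulated annealing, a "temperature" parameter controls how willing the search is to accept a worse position. The following is a generic sketch, not Miller's actual procedure; the toy landscape `f` and all the parameter values are invented for illustration:

```python
import math
import random

def anneal(fitness, start, neighbours, t_start=5.0, t_end=0.001,
           cooling=0.99, seed=0):
    """Generic simulated-annealing maximizer. While the temperature t
    is high, moves that go DOWNHILL on the fitness landscape are still
    accepted with probability exp(delta / t), letting the search hop
    off anthills; as t cools, it settles into ordinary hill climbing."""
    rng = random.Random(seed)
    current, t = start, t_start
    best = current
    while t > t_end:
        candidate = rng.choice(neighbours(current))
        delta = fitness(candidate) - fitness(current)
        # Accept every uphill move, and downhill moves with a
        # probability that shrinks as the temperature falls.
        if delta > 0 or rng.random() < math.exp(delta / t):
            current = candidate
        if fitness(current) > fitness(best):
            best = current
        t *= cooling  # the cooling schedule
    return best

# A toy one-dimensional landscape: many small bumps (anthills) riding
# on one big mountain whose summit is at x = 50.
def f(x):
    return -abs(x - 50) + 5 * math.sin(x)

result = anneal(f, start=0, neighbours=lambda x: [x - 1, x + 1])
```

A pure hill climber started at the same point would stall on the first sinusoidal bump it reached; the annealer's occasional downhill steps are what give it a chance to keep moving toward the mountain.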

This way of thinking seems to me to have potentially interesting applications in educational research. We who work in the area are faced with an irreducibly large number of recombinable and mutually affective variables that make any ethical attempt to do experimental research on effectiveness (however we choose to measure that – so many anthills here) impossible. That doesn’t stop a lot of people doing it, and telling us about p-values that prove their point in more or less scrupulous studies, but such studies are – not to put too fine a point on it – almost always completely pointless. At best, they might be telling us something useful about a single, non-replicable anthill, from which we might draw a lesson or two for our own context. But even a single omitted word in a lecture or a small change in inflection, let alone an impossibly vast range of design, contextual, historical and human factors, can have a substantial effect on learning outcomes and effectiveness for any given individual at any given time. We are always dealing with far more than 2 to the power of 19 possible mutually interacting combinations in real educational contexts. For even the simplest of research designs in a realistic educational context, the number of possible combinations of relevant variables is likely closer to 2 to the power of 100 (in base 10, that’s 1,267,650,600,228,229,401,496,703,205,376). To make matters worse, the effects we are looking for may sometimes not be apparent for decades (having recombined and interacted with countless others along the way) and, for anything beyond trivial reductive experiments that would tell us nothing really useful, experiments could seldom be done at a rate of more than a handful per semester, let alone 20 per week. This is a very good reason to do a lot more qualitative research, seeking meanings, connections, values and stories, rather than trying to prove our approaches using experimental results.
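The arithmetic behind these numbers is easy to check: even the "small" 19-variable case would take roughly five centuries of exhaustive testing at Miller's lab rate of 20 combinations per week.

```python
combos_19 = 2 ** 19      # every possible subset of 19 drugs
per_week = 20            # testing capacity in Miller's example
years_19 = combos_19 / per_week / 52

combos_100 = 2 ** 100    # a modest estimate for a real educational context

print(combos_19)         # 524288
print(round(years_19))   # 504 -- about five centuries of lab work
print(combos_100)        # 1267650600228229401496703205376
```

At a handful of experiments per semester rather than 20 per week, the 2-to-the-100 case is not merely slow but absurd, which is the point.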
Education is more comparable to psychology than medicine and suffers the same central problem, that the general does not transfer to the specific, as well as a whole bunch of related problems that Smedslund has recently and coherently summarized. The article is paywalled, but Smedslund’s abstract states his main points succinctly:

“The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project.”

You could simply substitute ‘education’ for ‘psychology’ in this, and it would read the same. But it gets worse, because education is as much about technology and design as it is about states of mind and behaviour, so it is orders of magnitude more complex than psychology. The potential for invention of new ways of teaching and new states of learning is essentially infinite. Reductive science thus has a very limited role in educational research, at least as it has hitherto been done.

But what if we took the lessons of simulated annealing to heart? I recently bookmarked an approach to more reliable research suggested by the Christensen Institute that might provide a relevant methodology. The idea behind this is (again, simplifying a bit) to do the experimental stuff, then to sweep the normal results to one side and concentrate on the outliers, performing iterations of conjectures and experiments on an ever more diverse and precise range of samples until a richer, fuller picture results. Although it would be painstaking and long-winded, it is a good idea. But one cycle of this is a bit like a single iteration of Miller’s simulated annealing approach: a means to reach the top of one peak in the fitness landscape, which may still be a low-lying peak. However, if, having done that, we jumbled up the variables again and repeated the process starting in a different place, we might stand a chance of climbing some higher anthills and, perhaps, over time we might even hit a mountain and begin to have something that looks like a true science of education, in which we might make some reasonable predictions that do not rely on vague generalizations. It would either take a terribly long time (which itself might preclude it because, by the time we had finished researching, the discipline would have moved somewhere else) or would hit some notable ethical boundaries (you can’t deliberately mis-teach someone), but it seems more plausible than most existing techniques, if a reductive science of education is what we seek.

To be frank, I am not convinced it is worth the trouble. It seems to me that education is far closer as a discipline to art and design than it is to psychology, let alone to physics. Sure, there is a lot of important and useful stuff to be learned about how we learn: no doubt about that at all, and a simulated annealing approach might speed up that kind of research. Painters need to know what paints do too. But from there to prescribing how we should therefore teach spans a big chasm that reductive science cannot, in principle or practice, cross. This doesn’t mean that we cannot know anything: it just means it’s a different kind of knowledge than reductive science can provide. We are dealing with emergent phenomena in complex systems that are ontologically and epistemologically different from the parts of which they consist. So, yes, knowledge of the parts is valuable, but we can no more predict how best to teach or learn from those parts than we can predict the shape and function of the heart from knowledge of cellular organelles in its constituent cells. But knowledge of the cocktails that result – that might be useful.

Interview with George Siemens in AU student union's Voice magazine (part 3)

Final part of a three-part interview with George Siemens (following from the first and second parts), in which he describes some thoughts about the future and nature of educational systems, and in which he has some great stuff to say about motivation and assessment in particular. I like this:

“Make things relevant to students, but also give students an opportunity to write themselves into the curriculum. That is, to be able to see the outcome of the benefits, the way in which it can make them a better person, and the way it can make the world a better place. You can’t directly motivate someone, but you can set conditions under which people of different attributes will become motivated.”

Exactly so – it’s about creating conditions, not about telling or controlling. It’s about making and supporting a space (physical, virtual, social, conceptual, organizational, temporal, curricular, etc) that learners both belong to and own. 

Address of the bookmark: http://www.voicemagazine.org/articles/featuredisplay.php?ART=10462

Hacking Our Brains: Motivating Others By Snatching Back Rewards

Ingenious approach to extrinsic motivation – give something, then use the threat of taking it away to ‘motivate’ people to do what you want them to do. It’s an old idea, but one that has not seen as much use as you would expect in things like student grading or occupational performance assessments. Though tied up in the language of the endowment effect, the essence of this method is punishment rather than reward, and we tend to be more punishment-averse than reward-seeking, so it works ‘better’. It’s still rampant behaviourism, presented in a cognitivist wrapper to make it look shinier.

As with all forms of extrinsic motivation, this does two things, both inimical to learning. Firstly, it leads to a focus on avoiding the punishment, rather than on the pleasure of the learning activity itself. I don’t see this as a great leap forward from rewarding with grades in a learning context – it just makes it even more extrinsic and even more likely to destroy any intrinsic motivation a learner might have had in the first place so that, once punishment has been avoided, the value of the activity itself is diminished and, mostly, the things that make it useful are forgotten. Secondly, it is an even worse assertion of power than a reward. Again, I don’t see this as having any meaningful value in a learning context. It teaches greater compliance, not the topic at hand. That’s a bad lesson, unless you think that education is preparation for life in which you should be a compliant tool that reluctantly does the bidding of those in power through fear of punishment. A society organized that way is not the kind of society I want to live in. Surely we have grown out of this? If not, surely we should?

The notion that people need to be forced to comply in order to learn what we want to teach them is barbaric, distasteful and, ultimately, deeply counter-productive. Countless generations of learners have had their love of learning viciously attacked by such attitudes, and have learned with less efficiency, less depth, and less value to society as a result. It’s a systemic failure on an unbelievably massive scale, embedded so deeply in our educational systems that we hardly even notice it any more. Done to one person it is bad enough but, done systematically at a worldwide scale, to ever younger generations of children, it hampers the intelligence and compassion of our species in ways that cut deep and leave us bleeding. Despite this, most of us still manage to come out of it without all of our innate love of learning completely destroyed: our intrinsic motivation can be a powerful counter-force; just occasionally, what we are taught aligns well enough with what we want and need to learn; we discover other ways and things to learn that are meaningful and not imposed upon us; and there are quite a lot of great teachers out there who manage to enthuse and inspire despite the odds stacked against them. Few if any of us survive unscathed, though most of us get something useful here and there despite the obstacles. But we could be so much more.

Address of the bookmark: http://readwrite.com/2015/05/07/reward-then-deduct-loss-aversion-brain-hack

Half an Hour: The Study, and Other Stuff

This is the latest in a fascinating ongoing argument between George Siemens and Stephen Downes over the value, reliability and focus of Preparing for the Digital University, a report created by George, Dragan Gasevic, Shane Dawson and many others on the current state of research and practice in (mainly) online and distance learning. Because I am quoted in Stephen’s post, and in George’s post to which it is a response, as agreeing with Stephen, I’d like to clarify just what I agree with.

Where I disagree with Stephen is that I do think it is a good report that pulls together a lot of good research as well as other sources to provide a rich and informative picture of what universities have been doing in the field of online and distance learning, and how they got there. I think its audience is mainly not seasoned edtech researchers, though there is a lot of valuable synthesis and analysis in it that those of us in the field can and will certainly use. I see it as a strength that it does not just limit itself to ‘reliable’ research (whatever that may be – I’ve seldom found an unequivocal example of that elusive beast) and I am quite happy with the range and depth of the sources it uses. This is an expert summary and analysis by some of the top experts in the field who know whereof they speak. Of course it misses some things and over-emphasizes others, but that is the nature of the animal and I think it does a very good job of remaining broad, informative and clear.

I think Stephen and I are in rough agreement, though, in observing the boundary that the report does not try too hard to cross: the challenge that some of the research presents to the very notion of the university as we now know it and, to a lesser extent, the under-representation of ideas and research that relate to that. The latter point is a tricky systemic problem because, on the whole, the majority of work and writing in that space is under-represented in literature that, because it tends to come from universities, tends to focus on universities. As this report is about universities, it is quite reasonable that this is the body of literature it uses.

The relative lack of beyond-the-institution thinking is, I believe, a concern for Stephen but, for me, it’s just something that, if I were writing it, I would want to add more of. It seems to me that this report will have most value in providing information for policy makers, managers of institutions, and those who are beginning to discover the field. It will open some eyes, help people to avoid old mistakes, and open up some important discussions. But, thanks to the intentional focus on the university and how we got to now, the structures, processes and measures are mostly rooted in an assumption that the university as we know it can and should persist. The discussion that emerges will inevitably tend to focus on how digital technologies can be used to do what we already do in (from a birds-eye perspective) only slightly different ways. In doing so, it may blind participants to the very real threats to their whole way of life as well as opportunities that are worth grasping. This may not be the best idea but it is not a weakness in the report as such – it is, after all, doing exactly what it says on the box. In fact, it is to its credit that it does address some approaches and tools that are transformative. 

I agree with George (and with Stephen’s hopes) that universities are and will continue to be really important institutions that can and should offer great value to our societies for a long time to come. We would invent them very differently, or maybe not invent anything like them at all, if we started afresh, knowing what we now know. The reality is, however, that this is what we have, and it has enormous momentum that is not going to stop any time soon so, if we are to make the best use of it, we should both understand and make improvements to it. This report is a solid foundation for that. There are some risks that it might, without further reflection, lead to ‘improvement’ of the wrong things – those that are counter-productive to the goals of increasing knowledge and learning in the world – and so further entrench harmful practices. My pet hobby horses include courses, grades, and the unholy linking of learning and accreditation, for instance. But there are other huge problems, like the trend to systemic exclusion of disadvantaged people, the treatment of students as customers, and the unnatural separation of disciplines and fields. Stephen mentions more. With that in mind, it would be useful to think a bit further about the ways that the foundations of the university – teaching, accreditation, community, knowledge production, knowledge dissemination, being a knowledge repository, a source of expertise, and so on – are being not-too-subtly eroded by things that are enabled by the net, as well as to further critique the embedded patterns, limitations, biases, and blind spots that make those foundations brittle and liable to crack or crumble.

The basis for that is all there in this report but, on reflection, I think the discussion of those issues is something for a further report, as it demands a different level and kind of analysis. I am not at all sure that the Gates Foundation would want to fund such a thing, but it should. Actually, maybe that line of thinking is a bit too narrow. After all, the exchange between George and Stephen, as well as contributions by others (e.g. George Veletsianos), is already, and self-referentially, doing much the same job such a report might do, and maybe doing it better. The learning dialogue and knowledge creation that is occurring through this distributed conversation is as rich as the report itself and, in its own way, at least as valuable. If the report had not been written then that dialogue might not have occurred, so it is a good anchor, but it is part of a richer knowledge network. And that’s exactly the point: technologies like social media are deeply subversive because they enable us to do some of the job that universities traditionally do without requiring a university as a necessary intermediary, with all the limitations and exclusions that implies. The patterns, technologies, economic models, checks and balances are not there yet to replace all of a university’s functions – we have much research to do and many inventions yet to invent, and I am very aware that it is only because of the university that I for one am able to participate in this – but the change is already happening, and it is quite profound.

Address of the bookmark: http://halfanhour.blogspot.ca/2015/05/the-study-and-other-stuff.html

Half an Hour: Research and Evidence

Stephen Downes defends his attack on the recent report on the current state of online (etc) learning developed by George Siemens, Dragan Gasevic and Shane Dawson.

I have mixed feelings about this. As such reports go, I think it is a good one. It does knit together a fair sample of the literature, including bits from journalists and bloggers as well as more and less credible research, in a form that I think is digestible enough and sufficiently broad for corporate folk who need to get up to speed, including those running academies. Its methods are clear and its outputs are accessible. Though written by very well-informed researchers (not just George) and making use of copious amounts of research, I don’t think it is really aimed at researchers in the field. My impression is that it’s mainly for the under-informed policy makers who need to be better informed, not for those of us who already know this stuff, and it does that job well. It’s a lot more than journalism, but a little less than an academic paper. I can also see a useful role for it for those who need to know roughly where we are now in online learning (e.g. edtech developers), but who are not seeking to become researchers in the field.

I think the more fundamental problem, and one that both George and Stephen seem to be fencing around, is in its title. The suggestion that it is about ‘preparing for the digital university’ is tricky on two counts. First of all, ‘preparing’ seems a funny word to use: it’s like saying we are ‘preparing’ for a storm when the waves are high around us and we are on the verge of capsizing. Secondly, and more tellingly, ‘digital university’ implies an expected outcome that is rather at odds with a lot of both George’s and Stephen’s work. The assumption that a university is the answer to the problem (which is what the title implies) is tricky, to say the least, especially given quite a lot of the discussion surrounding incursions by commercial and alternative (especially bottom-up) forms of accreditation and learning that step far outside the realms of traditional academia and challenge its very foundations. The final chapter mentions quite a few tools and approaches that relegate the institution to a negligible role, but there are hints of this scattered through much of the report, from commercial incursions to uses of reputation measures in Stack Overflow. If we are thinking of preparing for the future, the language and methods of formal education, courses, and mediaeval institutions might be a fair place to start, but maybe not the place to aim for. There’s a tension throughout the report between the soft, disruptive nature of digital technologies (not so much the tools but what people do with them) and the hard mechanization of arbitrarily evolved patterns: between, for instance, social recommendation and automated marking. The latter reinforces the university as an institution even if it does upset some power structures and working practices a little. The former (potentially) disrupts the notion and societal role of the university itself. For the most part, this report is a review of the history and current state of online/distance/blended learning in formal education. This is in keeping with the title, but not with the ultimate thrust of at least a few of the findings. That does rather stifle the potential for really getting under the skin of the problem. It’s a view from the inside, not from above. Though it hints at transformation, it is ultimately in defence of the realm. Personally speaking, I would have liked to see a bit more critique of the realm itself. The last chapter, in particular, provides some evidence that could be used to make such a case, but does not really push it where it wants to go. But I’m not the one this report is aimed at.

 

Address of the bookmark: http://halfanhour.blogspot.ca/2015/05/research-and-evidence.html

Can Behavioral Tools Improve Online Student Outcomes? Experimental Evidence from a Massive Open Online Course

Well-written and intelligently argued paper from Richard W. Patterson, using an experimental (well, nearly) approach to discover the effects of a commitment device, a reminder, and a focus tool on course completion and performance in a MOOC. It seems that providing a tool that supports students in pre-committing to limits on ‘distracting Internet time’ (and that both measures and enforces those limits) has a striking positive effect, though largely on those who appear to be extrinsically motivated: they want to successfully complete the course rather than to enjoy the process of learning. Reminders are pretty useless for anyone (I concur – personally I find them irritating and, after a while, guilt-inducing and thus more liable to cause procrastination) and blocking distracting websites has very little if any effect – unsurprising, really, because such blocks don’t really block distractions at all: if you want to be distracted, you will simply find another way. This is good information.

It seems to me that those who have learned to be extrinsically motivated might benefit from this, though it will reinforce their dangerous predilection, encourage bad habits, and benefit most those who have already figured out how to work within a traditional university system and who are focused on the end point rather than the journey. While I can see some superficially attractive merit in providing tools that help you to achieve your goals by managing the process, it reminds me a little of diet plans and techniques that, though sometimes successful in the short term, are positively harmful in the long term. This is the pattern that underlies all behaviourist models – it sort-of works up to a point (the course-setter’s goals are complied with), but the long-term impact on the learner is generally counter-productive. This approach will lead to more people completing the course, not more people learning to love the subject and hungry to apply that knowledge and learn more. In fact, it opposes such a goal. This is not about inculcating habits of mind but about making people do things that, though they want to reach some further end as a result, they do not actually want to do and, once the stimulus is taken away, will likely never want to do again. It is far better to concentrate on supporting intrinsic motivation and to build learning activities that people will actually want to do – challenges that they feel impelled to solve, that support social needs, and over which they feel some control. For that, the instructivist course format is ill-suited to the needs of most.

Abstract

Online education is an increasingly popular alternative to traditional classroom-based courses. However, completion rates in online courses are often very low. One explanation for poor performance in online courses is that aspects of the online environment lead students to procrastinate, forget about, or be distracted from coursework. To address student time-management issues, I leverage insights from behavioral economics to design three software tools including (1) a commitment device that allows students to pre-commit to time limits on distracting Internet activities, (2) a reminder tool that is triggered by time spent on distracting websites, and (3) a focusing tool that allows students to block distracting sites when they go to the course website. I test the impact of these tools in a large-scale randomized experiment (n=657) conducted in a massive open online course (MOOC) hosted by Stanford University. Relative to students in the control group, students in the commitment device treatment spend 24% more time working on the course, receive course grades that are 0.29 standard deviations higher, and are 40% more likely to complete the course. In contrast, outcomes for students in the reminder and focusing treatments are not statistically distinguishable from the control. These results suggest that tools designed to address procrastination can have a significant impact on online student performance.

Address of the bookmark: http://www.human.cornell.edu/pam/academics/phd/upload/PattersonJMP11_18.pdf

Thomas Frey: By 2030 over 50% of Colleges will Collapse

Thomas Frey provides an analysis of current trends in education (and, more broadly, learning) and predicts a grim future for colleges and, by extension, schools and universities. This is not a uniformly well-informed article – Frey is clearly an outsider with a somewhat caricatured or at least highly situated US-centric view of the educational system – but, though repeating arguments that have been made for decades and offering no novel insights, the issues are well summarized, well expressed, and the overall thrust of the article is hard to argue with.

His main points are summarized in a list:

  1. Overhead costs too high – Even if the buildings are paid for and all money-losing athletic programs are dropped, the costs associated with maintaining a college campus are very high. Everything from utilities, to insurance, to phone systems, to security, to maintenance and repair are all expenses that online courses do not have. Some of the less visible expenses involve the bonds and financing instruments used to cover new construction, campus projects, and revenue inconsistencies. The cost of money itself will be a huge factor.
  2. Substandard classes and teachers – Many of the exact same classes are taught in thousands of classrooms simultaneously every semester. The law of averages tells us that 49.9% of these will be below average. Yet any college that is able to electronically pipe in a top 1% teacher will suddenly have a better class than 99% of all other colleges.
  3. Increasingly visible rating systems – Online rating systems will begin to torpedo tens of thousands of classes and teachers over the coming years. Bad ratings of one teacher and one class will directly affect the overall rating of the institution.
  4. Inconvenience of time and place – Yes, classrooms help focus our attention and the world runs on deadlines. But our willingness to flex schedules to meet someone else’s time and place requirements is shrinking. Especially when we have a more convenient option.
  5. Pricing competition – Students today have many options for taking free courses without credits vs. expensive classes with credits and very little in between. That, however, is about to change. Colleges focused primarily on course delivery will be facing an increasingly price sensitive consumer base.
  6. Credentialing system competition – Much like a doctor’s ability to write prescriptions, a college’s ability to grant credits has given them an unusual competitive advantage, something every startup entrepreneur is searching for. However, traditional systems for granting credits only work as long as people still have faith in the system. This “faith in the system” is about to be eroded with competing systems. Companies like Coursera, Udacity, and iTunesU are well positioned to start offering an entirely new credentialing system.
  7. Relationships formed in colleges will be replaced with other relationship-building systems – Social structures are changing and the value of relationships built in college, while often quite valuable, are equally often overrated. Just as a dating relationship today is far more likely to begin online, business and social relationships in the future will also happen in far different ways.
  8. Sudden realization that “the emperor has no clothes!” – Education, much like our money supply, is a system built on trust. We are trusting colleges to instill valuable knowledge in our students, and in doing so, create a more valuable workforce and society. But when those who find no tangible value begin to openly proclaim, “the emperor has no clothes!” colleges will find themselves in a hard-to-defend downward spiral.

It is notable that many of the issues raised are fully addressed by online universities like AU, and have been for decades. We have moved on to bigger and more intractable problems! In particular, the idea that classes and teachers are a fixture that cannot be changed is a bit quaint. It is also fair to say that Frey has only a rough idea of how education works: the notion that high-quality lectures have anything much to do with learning or the university experience shows a failure to understand the beast – but then, the same is true of potential students and more than a few professors. But the pricing competition, credentialing competition, relationship-building and, above all, the 'emperor has no clothes' arguments hit home, and I think they will have the effects he anticipates much sooner than 2030. Nothing new here, and a bit coarse, but it clearly expresses the stark reality of the consequences.

Address of the bookmark: http://www.futuristspeaker.com/2013/07/by-2030-over-50-of-colleges-will-collapse/

Assessing teachers’ digital competencies – Virtual Canuck

Terry Anderson on an Estonian approach to assessing teacher competences (and other projects) using Elgg – the same framework that underpins the Landing. I’ve downloaded the tool they have developed, Digimina, and will be trying it out, not just for exactly the purposes it was developed, but as the foundation for a more generalized toolset for sharing the process of assessment. May spark some ideas, I think.

A nice approach to methodology: Terry prefers the development of design principles as the ‘ultimate’ aim of design-based research (DBR), but I like the notion, used here, of software as a hypothesis. It’s essentially a ‘sciency’ way of describing the practice of trying out an idea to see whether it works: one that makes no particular claims to generality, but that both derives from and feeds a model of what can be done, what needs to be done, and why it should be done. The generalizable part is not the final stage, but the penultimate stage of design in this DBR model. In this sense, it formalizes the very informal notion of bricolage, capturing some of its iterative nature. It’s not quite enough, I think, any more than other models of DBR quite capture the process in all its richness. This is because the activity of formulating that hypothesis itself follows a very similar pattern, at a much finer-grained scale, to that of the bigger model. When building code, you try out ideas, see where they take you, and that inspires new ideas through the process of writing as much as of designing and specifying. Shovelling that into a large-scale process model hides where a significant amount of the innovation actually happens, perhaps over-emphasizing the importance of explicit evaluation phases and underplaying the role of construction itself.

Address of the bookmark: http://terrya.edublogs.org/2015/04/24/assessing-teachers-digital-competencies/

Wait for It: Delayed Feedback Can Enhance Learning – Scientific American

Report on a nice bit of cognitivist research – as the title suggests, delayed feedback (i.e. not giving the answer right away) assists retention, and is best done after an unpredictable delay of a few seconds. What’s most interesting about it is the hypothesized reason: it’s curiosity. It only works if it piques your interest enough to want to know the answer, and your level of attention is raised when the timing of an upcoming anticipated event is uncertain. Like so many things in learning, motivation plays a big role here.
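For anyone tempted to build this into a quiz tool, the mechanism is easy to sketch. This is a minimal illustration, not the researchers' actual procedure: the function names, the 2–8 second range, and the injectable `sleep` parameter are my own assumptions, chosen only to show the idea of an unpredictable pause between a learner's answer and the feedback.

```python
import random
import time


def check_after_delay(response, answer, min_delay=2.0, max_delay=8.0,
                      sleep=time.sleep):
    """Score a learner's response, but only after an unpredictable pause.

    The randomized delay mimics the manipulation described in the article:
    because the learner cannot anticipate exactly when feedback will arrive,
    attention (per the curiosity hypothesis) stays raised while waiting.
    Returns (correct, delay_used); `sleep` is injectable for testing.
    """
    delay = random.uniform(min_delay, max_delay)  # a few seconds, unpredictable
    sleep(delay)
    correct = response.strip().lower() == answer.strip().lower()
    return correct, delay


if __name__ == "__main__":
    # Hypothetical item, purely for illustration.
    ok, waited = check_after_delay("Tallinn", "tallinn", sleep=lambda s: None)
    print(f"correct={ok}, waited {waited:.1f}s before revealing the answer")
```

The point of making the delay a draw from a range, rather than a fixed constant, is exactly the finding reported: a predictable pause lets attention lapse, an uncertain one keeps the learner wondering when the answer will appear.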

Address of the bookmark: http://www.scientificamerican.com/article/wait-for-it-delayed-feedback-can-enhance-learning/