Activity trackers flop without cash motivation – Futurity

http://www.futurity.org/activity-trackers-motivation-1281832-2/

Another from the annals of unnecessary and possibly harmful research on motivation. Unsurprisingly, fitness trackers do nothing for motivation and, even less surprisingly, if you offer a reward then people do exercise more, but are significantly less active when the reward is taken away…

…at the end of twelve months, six months after the incentives were removed, this group showed poorer step outcomes than the tracker only group, suggesting that removing the incentives may have demotivated these individuals and caused them to do worse than had the incentives never been offered.

This effect has been demonstrated countless times. Giving rewards reliably kills intrinsic motivation. When will we ever learn?

One interesting take-away is that (whether or not the subjects took more steps) there were no noticeable improvements in health outcomes across the entire experimental group. Perhaps this is because six months is not long enough to register the minor improvements involved, or maybe the instrument for measuring improved outcomes was too coarse. More likely, and as I have previously observed, subjects probably did things to increase their step counts at the expense of other healthy activities, like cycling.

What is education for?

Dave Cormier, in typically excellent form, reflects on the differences between education and learning in his latest post. I very much agree with pretty much everything he writes here. This extract condenses the central point that, I think, matters more than any other:

Learning is a constant. It is what humans do. They don’t, ever, learn exactly what you want them to learn in your education system. They may learn to remember that 7+5=12 as my children are currently being taught to do by rote, but they also ‘learn’ that math is really boring. We drive them to memorise so their tests will be higher, but is it worth the tradeoff? Is a high score on addition worth “math is boring?”

This is crucial: it is impossible to live and not to learn. Failure to learn is not an option. What matters is what we learn and how we learn it. The thing is, as Dave puts it:

Education is a totally different beast than learning. Learning is a thing a person does. Education is something a society does to its citizens. When we think about what we want to do with ‘education’ suddenly we need to start thinking about what we as a society think is important for our citizens to know. There was a time, in a previous democracy, where learning how to interact in your democracy was the most important part of an education system. When i look through my twitter account now I start to think that learning to live and thrive with difference without hate and fear might be a nice thing for an education system to be for.

My take on this

I have written here and there about the deep intertwingled relationship between education and indoctrination (e.g., most recently, here). Most of its early formal incarnations were, and a majority of them still are, concerned with passing on doctrine, often of a religious, quasi-religious, or political nature. To do that also requires the inculcation of values, and the acquisition of literacies (by my definition, the set of hard, human-enacted technologies needed to engage with a given culture, be that culture big or small). The balance between indoctrination, inculcation and literacy acquisition has shifted over the years and varies according to culture, context, and level, but education remains, at its heart, a process for helping learners learn to be in a given society or subset of it. This remains true even at the highest levels of terminal degrees: PhDs are almost never about the research topic so much as they are about learning to be an academic, a researcher, someone that understands and lives the norms, values and beliefs of the academic research community in which their discipline resides: to speak the language of a discipline. It is best to speak multiple languages, of course. One of the reasons I am a huge fan of crossing disciplinary boundaries is that it slightly disrupts that process by letting us compare, contrast, and pick between the values of different cultures, but such blurring is usually relatively minor: hard-core physicists share much in common with the softest literary theorists. Much has been written about the quality of ‘graduateness’, typically with some further intent in mind (e.g. employability), but what the term really refers to is a gestalt of ways of thinking, behaving, and believing that have what Wittgenstein thought of as family likenesses. No single thing or cluster of things typifies a graduate, but there are common features spread between them. We are all part of the same family.

Education has a lot to do with replication and stability but it is, and must always have been, at least as much about being able to adapt and change that society. While, in days gone by, it might have been enough to use education as a means to produce submissive workers, soldiers, and priests, and to leave it to higher echelons to manage change (and manage their underlings), it would be nice to think that we have gone beyond that now. In fact, we must go beyond that now, if we are to survive as a species and as a planet. Our world is too complex for hierarchical management alone.

I believe that education must be both replicative and generative. It must valorize challenge to beliefs and diversity as much as it preserves wisdom and uniformity. It must support both individual needs and social needs, the needs of people and the needs of the planet, the needs of all the societies within and intersecting with its society. This balance between order and chaos is about sustaining evolution. Evolution happens on the edge of chaos, not in chaos itself (the Red Queen Regime), and not in order (the Stalinist Regime). This is not about design so much as it is about the rules of change in a diverse complex adaptive system. The ever-burgeoning adjacent possible means that our societies, as much as ecosystems, can do nothing but evolve to ever greater complexity, ever greater interdependence but, equally, ever greater independence, ever greater diversity. We are not just one global society, we are billions of them, overlapping, cross-cutting, independent, interdependent. And there is not just one educational system that needs to change. There are millions of them, millions of pieces of them, and more of them arriving all the time. We don’t need to change Education: that’s too simplistic and would, inevitably, just replace one set of mistakes with another. We need to change educations.

Address of the bookmark: http://davecormier.com/edblog/2016/10/24/planning-for-educational-change-what-is-education-for/

A Devil’s Dictionary of Educational Technology – Medium

Delightful compendium from Bryan Alexander. I particularly like:

Analytics, n. pl. “The use of numbers to confirm existing prejudices, and the design of complex systems to generate these numbers.”

Big data, n. pl. 1. When ordinary surveillance just isn’t enough.

Failure, n. 1. A temporary practice educators encourage in students, which schools then ruthlessly, publicly, and permanently punish.

Forum, n. 1. Social Darwinism using 1980s technology.

World Wide Web, n. A strange new technology, the reality of which can be fended off or ignored through the LMS, proprietary databases, non-linking mobile apps, and judicious use of login requirements.

Address of the bookmark: https://medium.com/@bryanalexander/a-devils-dictionary-of-educational-technology-1c3ea9a0b932#.aqn3aqsho

Curiosity Is Not Intrinsically Good

Interesting reflections in Scientific American on morbid curiosity – that we are driven by our curiosity, sometimes even when we actually know that there is a strong likelihood it will hurt us. In the article, as the title implies, this is portrayed as a bad thing. I disagree.

“The drive to discover is deeply ingrained in humans, on par with the basic drives for food or sex, says Christopher Hsee of the University of Chicago, a co-author of the paper. Curiosity is often considered a good instinct—it can lead to new scientific advances, for instance—but sometimes such inquiry can backfire. “The insight that curiosity can drive you to do self-destructive things is a profound one,” says George Loewenstein, a professor of economics and psychology at Carnegie Mellon University who has pioneered the scientific study of curiosity.”

Bub in a box

This is not exactly a novel insight, nor a profound one: we even have a popular proverb for it that I mention to my cats on an almost daily basis. They don’t listen.

There is a strong relationship between curiosity and the desire for competence: a need to know how things work, how to do something we cannot yet do, why things are the way they are, where our limits lie, how to become more capable of acting in the world. From an evolutionary perspective we are curious with a purpose. It allows us to make effective use of our environment, to become competent within it. This is really good for survival so, of course, it is selected for. That it sometimes drives us to do things that harm us is actually a very positive feature, as long as it is balanced with a sufficient level of caution and the harm it causes is not too great. It helps us to know what to avoid, as well as what is useful to us. It also helps us to be more adaptable to bad things that we cannot avoid. It makes us more flexible, and lets us both know and extend our limits.

The first experiment described here involved people playing with pens even knowing that some were novelty items that would give them an electric shock. I’m not sure why the researchers mixed some harmless pens into this because, even when pain is an absolute certainty, curiosity can drive us to experience it. I have long used electrostatic zappers that are designed to alleviate the itch in mosquito bites by administering a sharp and slightly painful shock to the skin. I have yet to meet a single child, and have met very few adults, that did not want to try it out on their own skin, regardless of whether they had any bites, in the full and certain knowledge that it would hurt. This is described in the article as self-destructive curiosity, but I don’t think that’s right at all. If subjects had been convincingly warned that some pens would kill or maim them, then I am quite certain that very few would have played with them (some might, of course – evolution thrives on variation and, in some environments, high-risk strategies might pay off). But being curious about what kind of pain it might cause is really just a way of discovering or achieving competence, of discovering how we cope with this kind of shock, of testing hypotheses about ourselves and the environment, as well as finding out whether such joke pens actually work as advertised. This is potentially useful information: it will make you less likely to be a victim of a practical joke, or perhaps inspire you to perform one more effectively. Either way, it’s probably not a big thing in the grand scheme of things but, then again, very few learning experiences are. The value is more about how we integrate and connect such experiences.

The article describes another experiment in which participants were encouraged to predict their feelings after being shown an unpleasant image. Those so primed were less likely to choose to see it. Again, this makes sense in the light of what we already know. We are curious with a purpose – to learn – so, if we reflect a bit on what we have already learned, then it might dull our curiosity to experience something bad again. That’s potentially useful. I’m not sure that it is always a good thing, though. I happen to like, say, some horror movies that disgust me, or comedies that rely on discomfort for their humour. In fact, the anticipation of fear or disgust is often one of the main things that drives their plots and keeps my eyes glued to them. If the zombie apocalypse comes, I will be totally prepared. It also prepares me better for things that are going to really upset me. Likewise for funfair rides, sailing on a breezy day, exercising until it hurts, eating hot chili, or struggling with difficult deadlines.

So while, yes, we absolutely should learn from experience, we also need to remember that it can lead us into fixed ways of thinking that can, when conditions change, be less adaptable and adaptive. There is an ever-shifting balance between fear and curiosity that we need to embrace, perhaps especially when curiosity leads to the likelihood of something unpleasant (though not too unpleasant) happening. And, even when the danger is great, there are also risks that are sometimes worth taking. ‘What if…?’ is one of the most powerful phrases in any language.

Address of the bookmark: http://www.scientificamerican.com/article/curiosity-is-not-intrinsically-good/

Cocktails and educational research

A lot of progress has been made in medicine in recent years through the application of cocktails of drugs. Those used to combat AIDS are perhaps the most well-known, but there are many other applications of the technique to everything from lung cancer to Hodgkin’s lymphoma. The logic is simple. Different drugs attack different vulnerabilities in the pathogens (or cancer cells, etc.) they seek to kill. Though evolution means that some bacteria, viruses or cancers are likely to be adapted to escape one attack, the more different attacks you make, the less likely it is that any will survive: if, say, one organism in ten can resist each of three independent attacks, only about one in a thousand will resist all three.

Simulated learning

Unfortunately, combinatorial complexity means this is not simply a question of throwing a bunch of the best drugs of each type together and gaining their benefits additively. I have recently been reading John H. Miller’s ‘A crude look at the whole: the science of complex systems in business, life and society’, which is, so far, excellent, and which addresses this and many other problems in complexity science. Miller uses the nice analogy of fashion to help explain the problem: if you simply choose the most fashionable belt, the trendiest shoes, the latest greatest shirt, the snappiest hat, etc., the chances of walking out with the most fashionable outfit by combining them together are virtually zero. In fact, there’s a very strong chance that you will wind up looking pretty awful. The problem is not easily susceptible to reductive science because the variables all affect one another deeply. If your shirt doesn’t go with your shoes, it doesn’t matter how good either is separately. The same is true of drugs. You can’t simply pick those that are best on their own without understanding how they all work together. Not only may they not combine additively, they may often have highly negative effects, or may prevent one another being effective, or may behave differently in a different sequence, or in different relative concentrations. To make matters worse, side effects multiply as well as therapeutic benefits so, at the very least, you want to aim for the smallest number of compounds in the cocktail that you can get away with. Even were the effects of combining drugs positive, it would be premature to believe that the combination is the best possible unless you had actually tried them all. And therein lies the rub, because there are really a great many ways to combine them.

Miller and colleagues have been using the ideas behind simulated annealing to create faster, better ways to discover working cocktails of drugs. They started with 19 drugs which, a small bit of math shows, could be combined in 2 to the power of 19 different ways – 524,288 possible combinations (not counting sequencing or relative strength issues). As only 20 such combinations could be tested each week, exhaustively trying them all would have taken around five centuries, so the chances of finding an effective, let alone the best, combination within any reasonable timeframe were slim. Simplifying a bit: rather than attempting to cover the entire range of possibilities, their approach finds a local optimum within one locale by picking a point and iterating variations from there until the best combination is found for that patch of the fitness landscape. It then checks another locale and repeats the process, iterating until they have covered a large enough portion of the fitness landscape to be confident of having found at least a good solution: they then have at least several peaks to compare. This also lets them follow up on hunches and use educated guesses to speed up the search. It seems pretty effective, at least when compared with alternatives that attempt a theory-driven intentional design (too many non-independent variables), and it is certainly vastly superior to methodically trying every alternative, inasmuch as it can actually be done within acceptable timescales.

The central trick is to deliberately go downhill on the fitness landscape, rather than following an uphill route of continuous improvement all the time, which may simply get you to the top of an anthill rather than the peak of Everest. Miller very effectively shows that this is the fundamental error committed by followers of the Six Sigma approach to management, an iterative method of process improvement originally invented to reduce errors in manufacturing: it may work well in a manufacturing context, with a small number of variables to play with in a fixed and well-known landscape, but it is much worse than useless when applied in a creative industry like, say, education, because the chances that we are climbing a mountain and not an anthill are slim to negligible. In fact, the same is true even in manufacturing: if you are just making something inherently weak as good as it can be, it is still weak. There are lessons here for those that work hard to make our educational systems work better. For instance, attempts to make examination processes more reliable are doomed to fail because it’s exams that are the problem, not the processes used to run them. As I finish this while listening to a talk on learning analytics, I see dozens of such examples: most of the analytics tools described are designed to make the various parts of the educational machine work ‘better’, i.e. (for the most part) to help ensure that students’ behaviour complies with teachers’ intent. Of course, the only reason such compliance was ever needed was for efficient use of teaching resources, not because it is good for learning. Anthills.
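
For the technically curious, the search procedure is easy to sketch. Miller’s actual protocol and fitness measure aren’t given here, so the following Python sketch is only an illustration of the general technique under invented assumptions: the fitness function is a made-up rugged stand-in for the real assay (main effects plus pairwise interactions, so drugs that help individually may clash in combination), and the cooling schedule, restart count, and side-effect penalty are arbitrary choices.

```python
import math
import random

# Toy stand-in for the expensive weekly lab assay: scores a cocktail of 19
# drugs, encoded as a list of 0/1 flags. In reality each evaluation is a
# real experiment; here it is an arbitrary rugged function so the sketch runs.
N_DRUGS = 19  # 2**19 = 524,288 possible combinations

random.seed(42)
_main = [random.uniform(-1, 1) for _ in range(N_DRUGS)]
_pairs = {(i, j): random.uniform(-1, 1)
          for i in range(N_DRUGS) for j in range(i + 1, N_DRUGS)}

def fitness(combo):
    """Rugged, invented fitness: main effects plus pairwise interactions,
    so drugs that are good alone may be bad together, with a small penalty
    per drug to stand in for cumulative side effects."""
    score = sum(w * c for w, c in zip(_main, combo))
    score += sum(v * combo[i] * combo[j] for (i, j), v in _pairs.items())
    return score - 0.3 * sum(combo)

def anneal(steps=5000, t_start=2.0, t_end=0.01):
    """One run of simulated annealing from a random starting cocktail:
    always accept uphill moves, sometimes accept downhill ones (more often
    while the temperature is high) so the search can leave anthills."""
    combo = [random.randint(0, 1) for _ in range(N_DRUGS)]
    f = fitness(combo)
    best, best_f = combo[:], f
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        neighbour = combo[:]
        neighbour[random.randrange(N_DRUGS)] ^= 1  # toggle one drug
        nf = fitness(neighbour)
        # Metropolis rule: accept a worse cocktail with probability exp(delta/t)
        if nf >= f or random.random() < math.exp((nf - f) / t):
            combo, f = neighbour, nf
            if f > best_f:
                best, best_f = combo[:], f
    return best, best_f

# Restart from several random 'locales' and keep the best peak found.
best, best_f = max((anneal() for _ in range(10)), key=lambda run: run[1])
print(f"best cocktail uses {sum(best)} of {N_DRUGS} drugs, fitness {best_f:.2f}")
```

Each call to anneal() explores one ‘locale’; the occasional acceptance of downhill moves, more frequent while the ‘temperature’ is high, is what lets the search climb down from anthills, and the random restarts provide several peaks to compare, much as described above.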

This way of thinking seems to me to have potentially interesting applications in educational research. We who work in the area are faced with an irreducibly large number of recombinable and mutually affecting variables that make any ethical attempt to do experimental research on effectiveness (however we choose to measure that – so many anthills here) impossible. It doesn’t stop a lot of people doing it, and telling us about p-values that prove their point in more or less scrupulous studies, but they are – not to put too fine a point on it – almost always completely pointless. At best, they might be telling us something useful about a single, non-replicable anthill, from which we might draw a lesson or two for our own context. But even a single omitted word in a lecture, a small change in inflection, let alone an impossibly vast range of design, contextual, historical and human factors, can have a substantial effect on learning outcomes and effectiveness for any given individual at any given time. We are always dealing with a lot more than 2 to the power of 19 possible mutually interacting combinations in real educational contexts. For even the simplest of research designs in a realistic educational context, the number of possible combinations of relevant variables is more likely closer to 2 to the power of 100 (in base 10, that’s 1,267,650,600,228,229,401,496,703,205,376). To make matters worse, the effects we are looking for may sometimes not be apparent for decades (having recombined and interacted with countless others along the way) and, for anything beyond trivial reductive experiments that would tell us nothing really useful, could seldom be done at a rate of more than a handful per semester, let alone 20 per week. This is a very good reason to do a lot more qualitative research, seeking meanings, connections, values and stories rather than trying to prove our approaches using experimental results. Education is more comparable to psychology than medicine and suffers the same central problem, that the general does not transfer to the specific, as well as a whole bunch of related problems that Smedslund recently coherently summarized. The article is paywalled, but Smedslund’s abstract states his main points succinctly:

“The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project.”

You could simply substitute ‘education’ for ‘psychology’ in this, and it would read the same. But it gets worse, because education is as much about technology and design as it is about states of mind and behaviour, so it is orders of magnitude more complex than psychology. The potential for invention of new ways of teaching and new states of learning is essentially infinite. Reductive science thus has a very limited role in educational research, at least as it has hitherto been done.

But what if we took the lessons of simulated annealing to heart? I recently bookmarked an approach to more reliable research suggested by the Christensen Institute that might provide a relevant methodology. The idea behind this is (again, simplifying a bit) to do the experimental stuff, then to sweep the normal results to one side and concentrate on the outliers, performing iterations of conjectures and experiments on an ever more diverse and precise range of samples until a richer, fuller picture results. Although it would be painstaking and long-winded, it is a good idea. But one cycle of this is a bit like a single iteration of Miller’s simulated annealing approach, a means to reach the top of one peak in the fitness landscape, which may still be a low-lying peak. However if, having done that, we jumbled up the variables again and repeated the process starting in a different place, we might stand a chance of climbing some higher anthills and, perhaps, over time we might even hit a mountain and begin to have something that looks like a true science of education, in which we might make some reasonable predictions that do not rely on vague generalizations. It would either take a terribly long time (which itself might preclude it because, by the time we had finished researching, the discipline would have moved somewhere else) or would hit some notable ethical boundaries (you can’t deliberately mis-teach someone), but it seems more plausible than most existing techniques, if a reductive science of education is what we seek.

To be frank, I am not convinced it is worth the trouble. It seems to me that education is far closer as a discipline to art and design than it is to psychology, let alone to physics. Sure, there is a lot of important and useful stuff to be learned about how we learn: no doubt about that at all, and a simulated annealing approach might speed up that kind of research. Painters need to know what paints do too. But from there to prescribing how we should therefore teach spans a big chasm that reductive science cannot, in principle or practice, cross. This doesn’t mean that we cannot know anything: it just means it’s a different kind of knowledge than reductive science can provide. We are dealing with emergent phenomena in complex systems that are ontologically and epistemologically different from the parts of which they consist. So, yes, knowledge of the parts is valuable, but we can no more predict how best to teach or learn from those parts than we can predict the shape and function of the heart from knowledge of cellular organelles in its constituent cells. But knowledge of the cocktails that result – that might be useful.

Be less pigeon

I love the slogan that Audrey Watters has chosen for her new branding:

Be less pigeon

As she puts it…

“I wanted my work to both highlight the longstanding relationship between behaviorism and testing – built into the ideology and the infrastructure since ed-tech’s origins in the early twentieth century – and to remind people that there are also alternatives to treating students like animals to be trained.”

Absolutely.

Address of the bookmark: http://hackeducation.com/2016/06/08/pigeons

This is the Teenage Brain on Social Media

An article in Neuroscience News about a recent (paywalled – grr) brain-scan study of teenagers, predictably finding that having your photos liked on social media sparks off a lot of brain activity, notably in areas associated with reward, as well as social activity and visual attention. So far, so-so, and a bit odd that this is what Neuroscience News chose to focus on, because that’s only a small subsection of the study and by far the least interesting part. What’s really interesting to me about the study is that the researchers mainly investigated the effects of existing likes (or, as they put it, ‘quantifiable social endorsements’) on whether teens liked a photo, and scanned their brains while they did so. As countless other studies (including mine) have suggested, and not just for teens, the effects were significant: photos endorsed by peers – even strangers – are a great deal more likely to be liked, regardless of their content. The researchers actually faked the likes and noted that the effect was the same whether showing ‘neutral’ content or risky behaviours like smoking and drinking. Unlike most existing studies, the researchers feel confident describing this in terms of peer approval and conformity, thanks to the brain scans. As the abstract puts it:

“Viewing photos with many (compared with few) likes was associated with greater activity in neural regions implicated in reward processing, social cognition, imitation, and attention.”

The paper itself is a bit fuzzy about which areas are activated under which conditions: not being adept at reading brain scans, I am still unsure about whether social cognition played a similarly important role when seeing likes of one’s own photos compared with others liked by many people, though there are clearly some significant differences between the two. This bothers me a bit because, within the discussion of the study itself, they say:

“Adolescents model appropriate behavior and interests through the images they post (behavioral display) and reinforce peers’ behavior through the provision of likes (behavioral reinforcement). Unlike offline forms of peer influence, however, quantifiable social endorsement is straightforward, unambiguous, and, as the name suggests, purely quantitative.”

I don’t think this is a full explanation, as it is confounded by the instrument used. An alternative plausible explanation is that, when unsure of our own judgement, we use other cues (which, in this case, can only ever come from other people, thanks to the design of the system) to help make up our minds. A similar effect would have been observed using other cues such as, for example, list position or size, with no reference to how many others had liked the photos or not. Most of us (at least, most that don’t know how Google works) do not see the ordering of Google Search results as social endorsement, though that is exactly what it is, but list position is incredibly influential in our choice of links to click and, presumably, in our neural responses to such items on the page. It would be interesting to further explore the extent to which the perception of value comes from the fact that an image is liked by peers, as opposed to the fact that the system itself (a proxy expert) is highlighting it as important. My suspicion is that there might be a quantifiable social effect, at least in some subjects, but it might not be as large as that shown here. There’s very good evidence that subjects scanned much-liked photos with greater care, which accords with other studies in the area, though it does not necessarily correlate with greater social conformity. As ever, we look for patterns and highlights to help guide our behaviours – we do not and cannot treat all data as equal.

There’s a lot of really interesting stuff in this apart from that, though. I am particularly interested in the activation of the frontal gyrus, previously associated with imitation, when looking at much-liked photos. This is highly significant in the transmission of memes, as well as in social learning generally.

Address of the bookmark: http://neurosciencenews.com/nucleus-accumbens-social-media-4348/

Bigotry and learning analytics

Unsurprisingly, when you use averages to make decisions about actions concerning individual people, they reinforce biases. This is exactly the basis of bigotry, racism, sexism and a host of other well-known evils, so programming such bias into analytics software is beyond a bad idea. This article describes how algorithmic systems are used to help make decisions about things like bail and sentencing in courts. Though race is not explicitly taken into account, correlates like poverty and acquaintance with people that have police records are included. In a perfectly vicious circle, the system reinforces biases over time. To make matters worse, this particular system uses secret algorithms, so there is no accountability and not much of a feedback loop to improve them if they are in error.
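
The article doesn’t reveal the algorithms themselves (their secrecy is part of the problem), but the vicious circle is easy to demonstrate with a toy model. Here is a deliberately simplistic Python sketch, with every number invented for illustration: two districts have identical true crime rates, one merely starts with more crimes on record, and patrols (and hence the chance that a crime is observed and recorded) are allocated in proportion to the existing record.

```python
# Toy model of algorithmic bias reinforcement. All numbers are invented.
TRUE_CRIMES = 50                 # actual crimes per district per round: identical
recorded = {"A": 100, "B": 120}  # the historical record starts out biased

for rnd in range(1, 11):
    total = recorded["A"] + recorded["B"]
    for district in ("A", "B"):
        # The 'analytics' step: patrols, and hence the chance of a crime
        # being observed and recorded, follow each district's share of
        # the existing record.
        share = recorded[district] / total
        recorded[district] += round(2 * TRUE_CRIMES * share)
    gap = recorded["B"] - recorded["A"]
    print(f"round {rnd:2d}: recorded A={recorded['A']} "
          f"B={recorded['B']} (gap {gap})")

# With unbiased recording, both districts would gain the same amount each
# round and the initial 20-crime gap would dwindle into insignificance.
# Here the system's output feeds its own input: the recorded gap widens
# every round, even though the two districts' true behaviour is identical.
```

Note that no race variable appears anywhere in the model, yet the initial disparity is amplified rather than corrected: the correlates do all the work.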

This matters to educators because this is very similar to what much learning analytics does too (there are exceptions, especially when used solely for research purposes). It looks at past activity, however that is measured, compares it to more or less discriminatory averages or similar aggregates of other learners’ past activity, and then attempts to guide future behaviour of individuals (teachers or students) based on the differences. This latter step is where things can go badly wrong, but there would be little point in doing it otherwise. The better examples inform rather than adapt, allowing a human intermediary to make decisions, but that’s exactly what the algorithmic risk assessment described in the article does too and it is just as risky. The worst examples attempt to directly guide learners, sometimes adapting content to suit their perceived needs. This is a terribly dangerous idea.

Address of the bookmark: http://boingboing.net/2016/05/24/algorithmic-risk-assessment-h.html

A blueprint for breakthroughs: Federally funded education research in 2016 and beyond | Christensen Institute

An interesting proposal from Horn & Fisher that fills in one of the most gaping holes in conventional quantitative research in education (specifically randomized controlled trials, but also less rigorous efforts like A/B testing) by explicitly looking at the differences in those that do not fit the average curve – the ones that do not benefit, or that benefit to an unusual degree: the outliers. As the authors say:

“… the ability to predict what works, for which students, in what circumstances, will be crucial for building effective, personalized-learning environments. The current education research paradigm, however, stops short of offering this predictive power and gets stuck measuring average student and sub-group outcomes and drawing conclusions based on correlations, with little insight into the discrete, particular contexts and causal factors that yield student success or failure. Those observations that do move toward a causal understanding often stop short of helping understand why a given intervention or methodology works in certain circumstances, but not in others.”

I have mixed feelings about this. Yes, this process of iterative refinement is a much better idea than simply looking at improvements in averages (with no clear causal links) and they are entirely right to critique those that use such methods but:

a) I don’t think it will ever succeed in the way it hopes, because every context is significantly different and this is a complex design problem, where even minuscule differences can have huge effects. Learning never repeats twice. Though much improved on what it replaces, it is still trying to make sense through the tools of reductive materialism, whereas what we are dealing with, and what the authors’ critique implies, is a different kind of problem. Seeking this kind of answer is like seeking the formula for painting a masterpiece. It’s only ever partially (at best) about methodologies and techniques, and it is always possible to invent new ones that change everything.

b) It relies on the assumption that we know exactly what we are looking for: that what we seek to measure is the thing that matters. It might be exactly what is needed for personalized education (where you find better ways to make students behave the way you want them to behave) but exactly the opposite for personal education (where every case is different, where education is seen as changing the whole person in unfathomably rich and complex ways).

That said, I welcome any attempts to stop the absurdity of trying to intervene in ways that benefit the (virtually non-existent) average student, and that instead attempt to focus on each student. This is a step in the right direction.

augmented research cycle

Address of the bookmark: http://www.christenseninstitute.org/publications/a-blueprint-for-breakthroughs/

Universities can’t solve our skills gap problem, because they caused it | TechCrunch

Why this article is wrong

This article is based on a flawed initial premise: that universities are there to provide skills for the marketplace. From that perspective, as the writer, Jonathan Munk, suggests, there are gaps both between what universities generally provide and what employers generally need, and between students’ and employers’ perceptions of the skills graduates actually possess. If we assume that the purpose of universities is to churn out market-ready workers with employer-friendly skills, they are indeed singularly failing and will likely continue to do so. As Munk rightly notes:

“… universities have no incentive to change; the reward system for professors incentivizes research over students’ career success, and the hundreds of years of institutional tradition will likely inhibit any chance of change. By expecting higher education to take on closing the skills gap, we’re asking an old, comfortable dog to do new tricks. It will not happen.”

Actually quite a lot of us, and even quite a few governments (USA notwithstanding), are pretty keen on the teaching side of things, but Munk’s analysis is substantially correct and, in principle, I’m quite comfortable with that. There are far better, cheaper and faster ways to get most marketable job skills than to follow a university program, and providing such skills is not why we exist. This is not to say that we should not do such things. For pedagogical and pragmatic reasons, I am keen to make it possible for students to gain useful workplace skills from my courses, but it has little to do with the job market. It’s mainly because it makes the job of teaching easier, leads to more motivated students, and keeps me on my toes, having to stay in touch with the industry in my particular subject area. Without that, I would not have the enthusiasm needed to build or sustain a learning community, I would be seen as uninterested in the subject, and what I’d teach would be perceived as less relevant, and would thus be less motivating. That’s also why, in principle, combining teaching and research is a great idea, especially in strongly non-vocational subjects that don’t actually have a marketplace. But, if it made more sense to teach computing with a 50-year-old language and machine that should be in a museum, I would do so at the drop of a hat. It matters far more to me that students develop the intellectual tools to be effective lifelong learners, develop values and patterns of thinking that are commensurate with both a healthy society and personal happiness, become part of a network of learners in the area, engage with the community/network of practice, and see bigger pictures beyond the current shiny things that attract attention like moths to a flame. This focus on being, rather than on specific skills, is good for the student, I hope, but it is mainly good for everyone. Our customer is neither the student nor the employer: it is our society. If we do our jobs right then we both stabilize and destabilize societies, feeding them with people that are equipped to think, to create, to participate, reflectively, critically, and ethically: to make a difference. We also help to feed societies with ideas, theories, models and even the occasional artefact that make life better and richer for all, though, to be honest, I’m not sure we do so in the most cost-effective ways. However, we do provide an open space with freedom to explore things that have no obvious economic value, without the constraints or agendas of the commercial world, nor those of dangerously partisan or ill-informed philanthropists (Zuckerberg, Gates – I’m thinking of you). We are a social good. At least, that’s the plan – most of us don’t quite live up to our own high expectations. But we do try. The article acknowledges this role:

“Colleges and universities in the U.S. were established to provide rich experiences and knowledge to their students to help them contribute to society and improve their social standing.”

Politely ignoring the US-centricity of this claim and its mild inaccuracy, I’d go a bit further: in the olden days, it was also about weeding out the lower achievers and/or, in many countries (the US was again a notable offender), those too poor to get in. Universities were (and most, AU being a noble and rare exception, still are) a filter that makes the job of recruiters easier by removing the chaff from the wheat before we even get to them, and then again when we give out the credits: that’s the employment advantage. It’s very seldom (directly) because of our teaching. We’re just big expensive sieves, from that perspective. However, the article goes on to say:

“But in the 1930s, with millions out of work, the perceived role of the university shifted away from cultural perspective to developing specific trades. Over time, going to college began to represent improved career prospects. That perception persists today. A survey from 2015 found the top three reasons people chose to go to college were:

  • improved employment opportunities
  • make more money
  • get a good job”

I’m glad that Munk correctly uses the term ‘perception’, because this is not a good reason to go to a university. The good job is a side-effect, not the purpose, and it is becoming less important with each passing year. Partly this is due to market saturation and degree inflation, partly due to better alternatives becoming more widespread, especially thanks to the Internet. One of the ugliest narratives of modern times is that the student should pay for their education because they will earn more money as a result. Utter nonsense. They will earn more money because they would have earned more money anyway, even if universities had never existed. The whole point of that filtering is that it tends to favour those that are smarter and thus more likely to earn more. In fact, were it not for the use of university qualifications as a pre-filter that would exclude them from a (large but dwindling) number of jobs, they would have earned far more money by going straight into the workforce. I should observe in passing that open universities like AU are not entirely immune from this role. Though they do little filtering for ability on entry, AU and other open universities do nonetheless act as filters, inasmuch as those that are self-motivated enough to handle the rigours of a distance-taught university program while otherwise engaged, usually while working, are far better candidates for most jobs than those who simply went to a university because that was the natural next step. A very high proportion of our students that make it to the end do so with flying colours, because those that survive are incredibly good survivors. I’ve seen the quality of work that comes out of this place and been able to compare it with that from the best of traditional universities: our students win hands down, almost every time. The only time I have seen anything like as good was in Delhi, where 30 students were selected into a program each year from over 3,000 fully qualified applicants (i.e. those with top grades from their schools). This despite, or perhaps because of, the fact that computing students had to sit an entrance exam that, bizarrely and along with other irrelevances, required them to know about Brownian motion in gases. I have yet to come across a single computing role where such knowledge was needed. Interestingly, they were not required to know about poetry, art, or music, though I have certainly come across computing roles where appreciation of such things would have been of far greater value.

Why this article is right

If it were just about job-ready skills like, in computing, the latest frameworks, languages and systems, the lack of job-readiness would not bother me in the slightest. However, as the article goes on to say, it is not just the ‘technical’ (in the loosest sense) skills that are the problem. The article mentions, as key employer concerns, critical thinking, creativity, and oral and written communication skills. These are things that we should very much be supporting and helping students to develop, however we perceive our other roles. In fact, though the communication stuff is mainly a technical skillset, creativity and problem-solving are pretty much what it is all about so, if students lack these things, we are failing even by our own esoteric criteria.

I do see a tension here, and a systematic error in our teaching. A goodly part of it is down to a misplaced belief that we are teaching stuff, rather than teaching a way of being. A lot of courses focus on a set of teacher-specified outcomes, and on accreditation of those set outcomes, and treat the student as (at best) input for processing or (at worst) a customer for a certificate. When the process is turned into a mechanism for outputting people with certificates, with fixed outcomes and criteria, the process itself loses all value. ‘We become what we behold’ as McLuhan put it: if that’s how we see it, that’s how it will be. This is a vicious circle. Any mechanism that churns students out faster or more efficiently will do. In fact, a lot of discussion and design in our universities is around doing exactly that. For example, the latest trend in personalization (a field, incidentally, that has been around for decades) is largely based on that premise: there is stuff to learn, and personalization will help you to learn it faster, better and cheaper than before. As a useful by-product, it might keep you on target (our target, not yours).  But one thing it will mostly not do is support the development of critical thinking, nor will it support the diversity, freedom and interconnection needed for creative thinking. Furthermore, it is mostly anything but social, so it also reduces capacity to develop those valuable social communication skills. This is not true of all attempts at personalization, but it is true of a lot of them, especially those with most traction. The massive prevalence of cheating is directly attributable to the same incorrect perception: if cheating is the shortest path to the goal (especially if accompanied by a usually-unwarranted confidence in avoiding detection) then of course quite a few people will take it. The trouble is, it’s the wrong goal. Education is a game that is won through playing it well, not through scoring.

The ‘stuff’ has only ever been raw material, a medium and context for the really important ways of being, doing and thinking that universities are mostly about. When the stuff becomes the purpose, the purpose is lost. So, universities are trying and, inevitably, failing to be what employers want, and in the process failing to do what they are actually designed to do in the first place. It strikes me that everyone would be happier if we just tried to get back to doing what we do best. Teaching should be personal, not personalized. Skills should be a path to growth, not to employment. Remembered facts should be the material, not the product. Community should be a reason for teaching, not a means by which it occurs. Universities should be places we learn to be, not places we be to learn. They should be purveyors of value, not of credentials.

Address of the bookmark: http://techcrunch.com/2016/05/08/universities-cant-solve-our-skills-gap-problem-because-they-caused-it/