EdTech Books

This is a great, well-presented, and nicely curated selection of open books on education and educational technology, ranging from classics (and compilations of chapters by classic authors) to modern guides, textbooks, and blog compilations, covering everything from learning theory to choice of LMS. Some are peer-reviewed, there’s a mix of licences from public domain to more restrictive Creative Commons variants, and there’s good guidance provided about the type and quality of content. There’s also support for collaboration and publication. All books are readable online, and most can be downloaded as (at least) PDF. I think the main target audience is students of education/online learning, and practitioners – at least, there’s a strong practical focus.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/7161867/edtech-books (where you can find some really interesting comments, including the one that my automated syndicator mistakenly turned into the main post the first time it ran)

How distance changes everything: slides from my keynote at the University of Ottawa

These are the slides from my keynote at the University of Ottawa’s “Scaffolding a Transformative Transition to Distance and Online Learning” symposium today. In the presentation I discussed why distance learning really is different from in-person learning, focusing primarily on the fact that the two are motivational inverses of one another. In-person teaching methods evolved in response to the particular constraints and boundaries imposed by physics, and consist of many inventions – pedagogical and otherwise – that are counter-technologies designed to cope with the consequences of teaching in a classroom, not all of them wise. Many of those constraints do not exist online, and yet we continue to do very similar things, especially those that control and dictate what students should do, when they should do it, and how. This makes no sense, and is actually antagonistic to the natural flow of online learning. I provided a few simple ideas and prompts for thinking about how to go more with the flow.

The presentation was only 20 minutes of a lively and inspiring hour-long session, which was fantastic fun and provided me with many interesting questions and a chance to expand further on the ideas.

uottawa2020HowDistanceChangesEverything

Joyful assessment: beyond high-stakes testing

Here are my slides from my presentation at the Innovate Learning Summit yesterday. It’s not world-shattering stuff – just a brutal attack on proctored, unseen written exams (PUWEs, pronounced ‘pooies’), followed by a description of the rationale, process, benefits, and unwanted consequences behind the particular portfolio-based approach to assessment employed in most of my teaching. It includes a set of constraints that I think are important to consider in any assessment process, grouped into pedagogical, motivational, and housekeeping (mainly relating to credentials) clusters. I list 13 benefits of my approach relating to each of those clusters, which I think make a pretty resounding case for using it instead of traditional assignments and tests. However, I also discuss outstanding issues, most of which relate to the external context and expectations of students or the institution, but a couple of which are fairly fundamental flaws (notably the extreme importance of prompt, caring, helpful instructor/tutor engagement in making it all work, which can be highly problematic when it doesn’t happen) that I am still struggling with.

Skills lost due to COVID-19 school closures will hit economic output for generations (hmmm)

This CBC report is one of many dozens of articles in the world’s press highlighting one rather small but startling assertion in a recent OECD report on the effects of COVID-19 on education – that the ‘lost’ third of a year of schooling in many countries will lead to an overall lasting drop in GDP of 1.5% across the world. Though the report contains many more fascinating and useful insights that are far more significant and helpful, it does make this assertion quite early on and repeats it for good measure, so it is not surprising that journalists have jumped on it. It is important to observe, though, that the reasoning behind it is based on a model developed by Hanushek and Woessmann over several years, and on an unpublished article by the authors that tries to explain variations in global productivity according to the amount and – far more importantly – the quality of education: the claim is that long-run productivity is a direct consequence of the cognitive skills (or knowledge capital) of a nation, which can be mapped directly to how well and how much the population is educated.

As an educator I find this model, at a glance, reassuring and confirmatory, because it suggests that we do actually have a positive effect on our students. However, there are a few grounds on which it might be challenged (disclaimer: this is speculation).

The first and most obvious is that correlation does not equal causation. The fact that countries that invest in improving education consistently see matching productivity gains in the years that follow is interesting, but it raises the question of what led to that investment in the first place, and whether that might be the ultimate cause rather than the education itself. A country that has invested in increasing the quality of education would normally be doing so as a result of values and circumstances that may lead to other consequences and/or be enabled by other things (such as rising prosperity, competition from elsewhere, a shift to more liberal values, and so on).

The second objection might be that, sure, increased quality of education does lead to greater productivity, but it is not the educational process as such that is causing it. Perhaps, for instance, an increased focus on attainment raises aspirations.

A further objection might be that the definition of ‘quality’ does not measure what they think it measures. A brief skim of the model suggests that it makes extensive use of scores from the likes of TIMSS, PIRLS, and PISA: standardized test approaches used to compare educational ‘effectiveness’ in different regions that embody quite a lot of biases, are often manipulated at a governmental level, and, as I have mentioned once or twice before, are extremely dubious indicators of learning. In fact, even when they are not manipulated, they may indicate willingness to comply with the demands of the powerful more than learning (does that improve GDP? Probably).

Another objection might be that absence of time spent in school does not equate to absence of education. Indeed, Hanushek and Woessmann’s central thesis is that it is not the amount but the quality of schooling that matters, so it seems bizarre that they might fall back on quantifying learning by time spent in school. We know for sure that, though students may not have been conforming to curricula at the rate desired by schools and colleges, they have not stopped learning. In fact, in many ways and in many places, there are grounds to believe that there have been positive learning benefits: better family learning, more autonomy, more thoughtful pedagogies, more intentional learning community forming, and so on. Out of this may spring a renewed focus on how people learn and how best to support them, rather than on maintaining a system that evolved in mediaeval times to support very different learning needs, and that is so solidly packed with counter-technologies, and so embedded in so many other systems that have nothing to do with learning, that we have lost sight of the ones that actually matter. If education improves as a result, then (if it is true that better and more education improves the bottom line) we may even see gains in GDP. I expect that there are other reasons for doubt: I have only skimmed the surface of the possible concerns.

I may be wrong to be sceptical – in fairness, I have not read the many papers and books produced by Hanushek and Woessmann on the subject, I am not an economist, and I do not have sufficient expertise (or interest) to analyze the regression model that they use. Perhaps they have fully addressed such concerns in that unpublished paper, and the simplistic cause-effect prediction distorts their claims. But, knowing a little about complex adaptive systems, my main objection is that this is an entirely new context, to which models that have worked before may no longer apply, and that, even if they do, there are countless other factors that will affect the outcome in both positive and negative ways: this is not so much a prediction as an observation about one small part of a small part of a much bigger emergent change that is quite unpredictable. I am extremely cautious at the best of times whenever I see people attempting to find simple causal linear relationships of this nature, especially when they are so precisely quantified, when past indicators are applied to something wholly novel with such widespread effects, and given the complex relationships at every level, from individual to national. I’m glad they are telling the story – it is an interesting one that no doubt contains grains of important truths – but it is an informative story, not predictive science. The OECD has a bit of a track record on this kind of misinterpretation, especially in education. This is the same organization that (laughably, if it weren’t so influential) claimed that educational technology in the classroom is bad for learning. There’s not a problem with the data collection or analysis, as such. The problem is with the predictions and recommendations drawn from it.

Beyond methodological worries, though, even if their predictions about GDP are correct (I am pretty sure they are not – there are too many other factors at play, including huge ones like the destruction of the environment, which makes the odd 1.5% seem like a drop in the bucket), the predicted drop might even be a good thing. It may be that we are moving – rather reluctantly – into a world in which GDP serves as an even less effective measure of success than it already is. There are already plentiful reasons to find it wanting, from its poor consideration of ecological consequences, to its wilful blindness to (and causal effect upon) inequalities, to its simple inadequacy to capture the complexity and richness of human culture and wealth. I am a huge fan of Bhutan’s rejection of GDP, which it has replaced with the Gross National Happiness (GNH) index. The GNH makes far more sense, and is what has led Bhutan to be one of the only countries in the world to be carbon positive, as well as being (arguably, but demonstrably) one of the happiest countries in the world. What would you rather have: money (at least for a few, probably not you), or happiness and a sustainable future? For Bhutan, education is not for economic prosperity: it is about improving happiness, which includes good governance, sustainability, and the preservation of (but not ossification of) culture.

Many educators – and I am very definitely one of them – share Bhutan’s perspective on education. I think that my customer is not the student, or a government, or companies, but society as a whole, and that education makes (or should make) for happier, safer, more inventive, more tolerant, more stable, more adaptive societies, as well as many other good things. It supports dynamic meta-stability and thus the evolution of culture. It is very easy to lose sight of that goal when we have to account to companies, governments, other institutions, and to so many more deeply entangled sets of people with very different agendas and values, not to mention our inevitable focus on the hard methods and tools of whatever it is that we are teaching, as well as the norms and regulations of wherever we teach it. But we should not ever forget why we are here. It is to make the world a better place, not just for our students but for everyone. Why else would we bother?

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6578662/skills-lost-due-to-covid-19-school-closures-will-hit-economic-output-for-generations-hmmm

How Assessment is Changing in The Digital Age – Five Guiding Principles | teachonline.ca

This article from teachonline.ca draws from a report by JISC (the UK academic network organization) to provide 5 ‘principles’ for assessment. I put the scare quotes around ‘principles’ because they are mostly descriptive labels for trends, and they are woefully non-inclusive. There is also a subtext here – one that, I do understand, is incredibly hard to avoid, because I failed to fully avoid it myself in my own post last week – that assessment is primarily concerned with proving competence for the sake of credentials (it isn’t). Those caveats aside, most of what is written here makes some sense.

Principle 1: authentic assessment. I completely agree that assessment should at least partly be of authentic activities. It is obvious how that plays out in applied disciplines with a clear workplace context. If you are learning how to program, for instance, then of course you should write programs that have some value in a realistic context, and it goes without saying that this is what should be assessed. That includes aspects of the task that we might not traditionally assess in a typical programming course, such as analysis, user experience testing, working with others, interacting with StackOverflow, sharing via GitHub, copying code from others, etc. It is less obvious in the case of something like, say, philosophy, or history, or Latin, or, indeed, any subject that is primarily found in academia. Authentic assessment for such things would probably be an essay or conference presentation, or perhaps some kind of argument, most of the time, because that’s what real life is like for most people in such fields (whether that should be the case remains an open issue). We should be wary, though, of making this the be-all and end-all, because there’s a touch of behaviourism lurking behind the idea: can the student perform as expected? There are other things that matter. For instance, I think it is incredibly important to reflect on any learning activity, even though that might not mirror what is typically done in an authentic context. Reflection can significantly contribute to learning, but it can also reveal things that may not be obvious when we judge what is done in an authentic context, such as why people did what they did, or whether they would do it the same way again. There may also be stages along the way that are not particularly authentic, but that contribute to learning the hard skills needed to perform effectively in the authentic context: learning a vocabulary, for example, or doing something dangerous in a cut-down, safe environment.
We should probably not summatively assess such things (they should rarely contribute to a credential, because they do not demonstrate applied capability), but formative assessment – including of this kind of activity – is part of all learning.

Principle 2: accessible and inclusive assessment. Well, duh. Of course this is how it should be done: not so much a principle as plain common decency. Was this not ever so? Yes, it was. It only becomes an issue when careless people forget that some media are less inclusive than others, or that not everyone knows or cares about golf. Nothing new here.

Principle 3: appropriately automated assessment. This is a reaction to bad assessment, not a principle for good assessment. There is a principle that really matters here, but it is not appropriate automation: it is that assessment should enhance and improve the student experience. Automation can sometimes do that. It is appropriate for some kinds of formative feedback (see the examples of non-authentic learning above), but for very little else – which, in the context of this article (with its implicit focus on the final judgment), means it is a bad idea to use it at all.

Principle 4: continuous assessment. I don’t mind this one at all. Again, though, the principle is not what the label claims. The principle here is that assessment should be designed to improve learning. For sure, if it is used as a filter to sort the great from the not great, then the filter should be authentic, which, for the most part, means no high-stakes, high-stress, one-chance tests, and that overall behaviours and performance over time are what matter. However, there is a huge risk of thereby assessing learning in progress rather than capability once a course is done. If we are interested in assessing competence for credentials, then I’d rather do it at the end, once learning has been accomplished (ignoring the inconvenient detail that this is not a terminal state, and that learning must always undergo ever-dynamic renewal and transformation until the day we die). Of course, the work done along the way will make up the bulk of the evidence for that final judgment, but assessing at the end allows for the fact that learning changes people, and that what we did early on in the journey seldom represents what we are able to do in the light of later learning.

Principle 5: secure assessment. Why is this mentioned in an article about assessment in the digital age? Is cheating a new invention? Was assessment (intentionally) insecure before? This is just a description of how some people have noticed that traditional forms of assessment are really dumb in a context that includes Wikipedia, Google, and communications devices the size of a peanut. It is pointless, and certainly not a new principle for the Digital Age. In fairness, if the principles above are followed in spirit as well as in letter, security is not likely to be a huge issue – but then why make it a principle? It’s more a report on what teachers are thinking and talking about.

The summary is motherhood and apple pie, albeit that it doesn’t entirely follow from the principles (choice over when to be assessed, and peer assessment, for instance, are not really covered in the principles, though they are very good ideas).

I’m glad that people are sharing ideas about this but I think that there are more really important principles than these: that students should have control over their own assessment, that it should never reward or punish, that it should always support learning, and so on. I wrote a bit about this the other day, and, though that is a work in progress, I think it gets a little closer to what actually matters than this.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6531701/how-assessment-is-changing-in-the-digital-age-five-guiding-principles-teachonlineca

Evaluating assessment

A group of us at AU have begun discussions about how we might transform our assessment practices, in the light of the far-reaching AU Imagine plan and principles. This is a rare and exciting opportunity to bring about radical and positive change in how learning happens at the institution. Hard technologies influence soft ones more than vice versa, and assessments (particularly when tied to credentials) tend to be among the hardest of all technologies in any pedagogical intervention. They are therefore a powerful lever for change. Equally, and for the same reasons, they are too often the large, slow, structural elements that infest systems, stunting progress and innovation.

Almost all learning must involve assessment, whether it be of one’s own learning, or provided by other people or machines. Even babies constantly assess their own learning. Reflection is assessment. It is completely natural and it only gets weird when we treat it as a summative judgment, especially when we add grades or credentials to the process, thus normally changing the purpose of learning from achieving competence to achieving a reward. At best it distorts learning, making it seem like a chore rather than a delight, at worst it destroys it, even (and perhaps especially) when learners successfully comply with the demands of assessors and get a good grade. Unfortunately, that’s how most educational systems are structured, so the big challenge to all teachers must be to eliminate or at least to massively reduce this deeply pernicious effect. A large number of the pedagogies that we most value are designed to solve problems that are directly caused by credentials. These pedagogies include assessment practices themselves.

With that in mind, before the group’s first meeting I compiled a list of some of the main principles that I adhere to when designing assessments, most of which are designed to reduce or eliminate the structural failings of educational systems. The meeting caused me to reflect a bit more. This is the result:

Principles applying to all assessments

  • The primary purpose of assessment is to help the learner to improve their learning. All assessment should be formative.
  • Assessment without feedback (whether from teacher, peer, machine, or self) is judgement, not assessment, and it is pointless.
  • Ideally, feedback should be direct and immediate or, at least, as prompt as possible.
  • Feedback should only ever relate to what has been done, never the doer.
  • No criticism should ever be made without also at least outlining steps that might be taken to improve the work.
  • Grades (with some very rare minor exceptions where the grade is intrinsic to the activity, such as some gaming scenarios or, arguably, objective single-answer quizzes with T/F answers) are not feedback.
  • Assessment should never, ever be used to reward or punish particular prior learning behaviours (e.g. using exams to encourage revision, treating grades as goals, giving marks for participation, etc.).
  • Students should be able to choose how, when and on what they are assessed.
  • Where possible, students should participate in the assessment of themselves and others.
  • Assessment should help the teacher to understand the needs, interests, skills, and gaps in knowledge of their students, and should be used to help to improve teaching.
  • Assessment is a way to show learners that we care about their learning.

Specific principles for summative assessments

A secondary (and always secondary) purpose of assessment is to provide evidence for credentials. This is normally described as summative assessment, implying that it assesses a state of accomplishment when learning has ended. That is a completely ridiculous idea. Learning doesn’t end. Human learning is not in any meaningful way like programming a computer or storing stuff in a database. Knowledge and skills are active, ever-transforming, forever actively renewed, reframed, modified, and extended. They are things we do, not things we have.

With that in mind, here are my principles for assessment for credentials (none of which supersede or override any of the above core principles for assessment, which always apply):

  • There should be no assessment task that is not in itself a positive learning activity. Anything else is at best inefficient, at worst punitive/extrinsically rewarding.
  • Assessment for credentials must be fairly applied to all students.
  • Credentials should never be based on comparisons between students (norm-referenced assessment is always, unequivocally, and irredeemably wrong).
  • The criteria for achieving a credential should be clear to the learner and other interested parties (such as employers or other institutions), ideally before it happens, though this should not forestall the achievement and consideration of other valuable outcomes.
  • There is no such thing as failure, only unfinished learning. Credentials should only celebrate success, not punish current inability to succeed.
  • Students should be able to choose when they are ready to be assessed, and should be able to keep trying until they succeed.
  • Credentials should be based on evidence of competence and nothing else.
  • It should be impossible to compromise an assessment by revealing either the assessment or solutions to it.
  • There should be at least two ways to demonstrate competence, ideally more. Students should only have to prove it once (though may do so in many ways and many times, if they wish).
  • More than one person should be involved in judging competence (at least as an option, and/or on a regularly taken sample).
  • Students should have at least some say in how, when, and where they are assessed.
  • Where possible (accepting potential issues with professional accreditation, credit transfer, etc) they should have some say over the competencies that are assessed, in weighting and/or outcome.
  • Grades and marks should be avoided except where mandated elsewhere. Even then, all passes should be treated as an ‘A’ because students should be able to keep trying until they excel.
  • Great success may sometimes be worthy of an award – e.g. a distinction – but such an award should never be treated as a reward.
  • Assessment for credentials should demonstrate the ability to apply learning in an authentic context. There may be many such contexts.
  • Ideally, assessment for credentials should be decoupled from the main teaching process, because of risks of bias, the potential issues of teaching to the test (regardless of individual needs, interests and capabilities) and the dangers to motivation of the assessment crowding out the learning. However, these risks are much lower if all the above principles are taken on board.

I have most likely missed a few important issues, and there is a bit of redundancy in all this, but this is a work in progress. I think it covers the main points.

Further random reflections

There are some overriding principles and implied specifics in all of this. For instance, respect for diversity, accessibility, respect for individuals, and recognition of student control all fall out of or underpin these principles. It implies that we should recognize success, even when it is not the success we expected, so outcome harvesting makes far more sense than measurement of planned outcomes. It implies that failure should only ever be seen as unfinished learning, not as a summative judgment of terminal competence, so appreciative inquiry is far better than negative critique. It implies flexibility in all aspects of the activity. It implies, above and beyond any other purpose, that the focus should always be on learning. If assessment for credentials adversely affects learning then it should be changed at once.

In terms of implementation, while objective quizzes and their cousins can play a useful formative role in helping students to self-assess and to build confidence, machines (whether implemented by computers or rule-following humans) should normally be kept out of credentialling. There’s a place for AI but only when it augments and informs human intelligence, never when it behaves autonomously. Written exams and their ilk should be avoided, unless they conform to or do not conflict with all the above principles: I have found very few examples like this in the real world, though some practical demonstrations of competence in an authentic setting (e.g. lab work and reporting) and some reflective exercises on prior work can be effective.

A portfolio of evidence, including a reflective commentary, is usually going to be the backbone of any fair, humane, effective assessment: something that lets students highlight successes (whether planned or not), that helps them to consolidate what they have learned, and that is flexible enough to demonstrate competence shown in any number of ways. Outputs or observations of authentic activities are going to be important contributors to that. My personal preference in summative assessment is to judge success using only the intended (including student-generated) and/or harvested outcomes, rather than mandated assignments. This gives flexibility, it works for every subject, and it provides unequivocal and precise evidence of success. It is also often good to talk with students, perhaps formally (e.g. in a presentation or oral exam), in order to tease out what they really know and to give instant feedback. It is worth noting that, unlike written exams and their ilk, such methods are actually fun for all concerned, albeit that the pleasure comes from solving problems and overcoming challenges, so it is seldom easy.

Interestingly, there are occasions in traditional academia where these principles are, for the most part, already widely applied. A typical doctoral thesis/dissertation, for example, is often quite close to them (especially in more modern professional forms that put more emphasis on recording the process), as are some student projects. We know that such things are a really good idea, and that they lead to far richer, more persistent, more fulfilling learning for everyone. We do not do them ubiquitously for reasons of cost and time. It does take a long time to assess something like this well, and it can take more time during the rest of the teaching process, thanks to the personalization (real personalization, not the teacher-imposed form popularized by learning analytics aficionados) and extra care that it implies. It is an efficient use of our time, though, because of its active contribution to learning, unlike a great many traditional assessment methods such as teacher-set assignments (minimal contribution) and exams (negative contribution).

A lot of the reason for our reluctance, though, is the typical university’s schedule and class timetabling, which makes everything pile on at once in an intolerable avalanche of submissions. If we really take autonomy and flexibility on board, it doesn’t have to be that way. If students submit work when it is ready to be submitted, if they are not all working in lock-step, and if it is a work of love rather than compliance, then assessment is often a positively pleasurable task, and is naturally staggered. Yes, it probably costs a bit more time in the end (though there are plenty of ways to mitigate that, from peer groups to pedagogical design), but every part of it is dedicated to learning, and the results are much better for everyone.

Some useful further reading

This is a fairly random selection of sources that relate to the principles above in one way or another. I have definitely missed a lot. Sorry for any missing URLs or paywalled articles: you may be able to find downloadable online versions somewhere.

Boud, D., & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment & Evaluation in Higher Education, 31(4), 399-413. Retrieved from https://www.jhsph.edu/departments/population-family-and-reproductive-health/_docs/teaching-resources/cla-01-aligning-assessment-with-long-term-learning.pdf

Boud, D. (2007). Reframing assessment as if learning were important. Retrieved from https://www.researchgate.net/publication/305060897_Reframing_assessment_as_if_learning_were_important

Cooperrider, D. L., & Srivastva, S. (1987). Appreciative inquiry in organizational life. Research in organizational change and development, 1, 129-169.

Deci, E. L., Vallerand, R. J., Pelletier, L. G., & Ryan, R. M. (1991). Motivation and education: The self-determination perspective. Educational Psychologist, 26(3/4), 325-346.

Hussey, T., & Smith, P. (2002). The trouble with learning outcomes. Active Learning in Higher Education, 3(3), 220-233.

Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A’s, praise, and other bribes (Kindle ed.). Mariner Books. (this one is worth forking out money for).

Kohn, A. (2011). The case against grades. Educational Leadership, 69(3), 28-33.

Kohn, A. (2015). Four Reasons to Worry About “Personalized Learning”. Retrieved from http://www.alfiekohn.org/blogs/personalized/ (check out Alfie Kohn’s whole site for plentiful other papers and articles – consistently excellent).

Reeve, J. (2002). Self-determination theory applied to educational settings. In E. L. Deci & R. M. Ryan (Eds.), Handbook of Self-Determination Research (pp. 183-203). Rochester, NY: The University of Rochester Press.

Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. Guilford Publications. (may be worth paying for if such things interest you).

Wilson-Grau, R., & Britt, H. (2012). Outcome harvesting. Cairo: Ford Foundation. Retrieved from http://www.managingforimpact.org/sites/default/files/resource/outome_harvesting_brief_final_2012-05-2-1.pdf

Technology, technique, and teaching

These are the slides from my recent talk with students studying the philosophy of education at Pace University.

This is a mashup of various talks I have given in recent years, with a little new stuff drawn from my in-progress book. It starts with a discussion of the nature of technology, and the distinction between hard and soft technologies that sees relative hardness as the amount of pre-orchestration in a technology (be it a machine or a legal system or whatever). I observe that pedagogical methods (‘pedagogies’ for short) are soft technologies to those who are applying them, if not to those on the receiving end. It is implied (though I forgot to explicitly mention) that hard technologies are always more structurally significant than soft ones: they frame what is possible.

All technologies are assemblies, and (in education), the pedagogies applied by learners are always the most important parts of those assemblies. However, in traditional in-person classrooms, learners are (by default) highly controlled due to the nature of physics – the need to get a bunch of people together in one place at one time, scarcity of resources,  the limits of human voice and hearing, etc – and the consequent power relationships and organizational constraints that occur.  The classroom thus becomes the environment that frames the entire experience, which is very different from what are inaccurately described as online learning environments (which are just parts of a learner’s environment).

Because of physical constraints, the traditional classroom context is inherently very bad for intrinsic motivation. It leads to learners who don’t necessarily want to be there, having to do things they don’t necessarily want to do, often being either bored or confused. By far the most common solution to that problem is to apply externally regulated extrinsic motivation, such as grades, punishments for non-attendance, rules of classroom behaviour, and so on. This just makes matters much worse, and makes the reward (or the avoidance of punishment) the purpose of learning. Intelligent responses to this situation include cheating, short-term memorization strategies, satisficing, and agreeing with the teacher. It’s really bad for learning. Such issues are not at all surprising: all technologies create as well as solve problems, so we need to create counter-technologies to deal with them. Thus, what we normally recognize as good pedagogy is, for the most part, a set of solutions to the problems created by the constraints of in-person teaching, to bring back the love of learning that is destroyed by the basic set-up. A lot of good teaching is therefore to do with supporting at least better, more internally regulated forms of extrinsic motivation.

Because pedagogies are soft technologies, skill is needed to use them well. Harder pedagogies, such as Direct Instruction, that are more prescriptive of method tend (on average) to work better than softer pedagogies such as problem-based learning, because most teachers tend towards being pretty average: that’s implicit in the term, after all. Lack of skill can be compensated for through the application of a standard set of methods that only need to be done correctly in order to work. Because such methods work for good teachers as well as the merely average or bad, their average effectiveness is, of course, high. Softer pedagogical methods such as active learning, problem-based learning, inquiry-based learning, and so on rely heavily on passionate, dedicated, skilled, time-rich teachers and so, on average, tend to be less successful. However, when done well, they outstrip more prescriptive methods by a large margin, and lead to richer, more expansive outcomes that go far beyond those specified in a syllabus or test. Softer technologies, by definition, allow for greater creativity, flexibility, adaptability, and so on than harder technologies, but are therefore more difficult to implement. There is no such thing as a purely hard or purely soft technology, though: all exist on a spectrum. Because all pedagogies are relatively soft technologies, even those that are quite prescriptive, almost any pedagogical method can work if it is done well: clunky, ugly, weak pedagogies used by a fantastic teacher can lead to great, persistent, enthusiastic learning. As Hattie observes, almost everything works – at least, that’s true of most things that are reported on in educational research studies :-). But (and this is the central message of my book, the consequences of which are profound) it ain’t what you do, it’s the way that you do it, that’s what gets results.

Problems can occur, though, when we use the same methods that work in person in a different context for which they were not designed. Online learning is by far the most dominant mode of learning (for those with an Internet connection – some big social, political, economic, and equity issues here) on the planet. Google, YouTube, Wikipedia, Reddit, StackExchange, Quora, etc, etc, etc, not to mention email, social networking sites, and so on, are central to how most of us in the online world learn anything nowadays. The weird thing about online education (in the institutional sense) is that online learning is far less obviously dominant, and tends to be viewed in a far less favourable light when offered as an option. Given the choice, and without other constraints, most students would rather learn in-person than online. At least in part, this is due to the fact that those of us working in formal online education continue to apply pedagogies and organizational methods that solved problems in in-person classrooms, especially with regard to teacher control: the rewards and punishments of grades, fixed length courses, strictly controlled pathways, and so on are solutions to problems that do not exist or that exist in very different forms for online learners, whose learning environment is never entirely controlled by a teacher.

The final section of the presentation is concerned with what – in very broad terms – native distance pedagogies might look like. Distance pedagogies need to acknowledge the inherently greater freedoms of distance learners and the inherently distributed nature of distance learning. Truly learner-centric teaching does not seek to control, but to support, and to acknowledge the massively distributed nature of the activity, in which everyone (including emergent collective and networked forms arising from their interactions) is part of the gestalt teacher, and each learner is – from their perspective – the most important part of all of that. To emphasize that none of this is exactly new (apart from the massive scale of connection, which does matter a lot), I include a slide of Leonardo’s to-do list that describes much the same kinds of activity as those that are needed of modern learners and teachers.

For those seeking more detail, I list a few of what Terry Anderson and I described as ‘Connectivist-generation’ pedagogical models. These are far more applicable to native online learning than earlier pedagogical generations that were invented for an in-person context. In my book I am now describing this new, digitally native generation as ‘complexivist’ pedagogies, which I think is a more accurate and less confusing name. It also acknowledges that many theories and models in the family (such as John Seely Brown’s distributed cognitive apprenticeship) predate Connectivism itself. The term comes from Davis and Sumara’s 2006 book, ‘Complexity and Education’, which is a great read that deserves more attention than it received when it was published.

Slides: Technology, technique and teaching

Beyond learning outcomes

What we teach, what a student learns, what we assess

This is a slide deck for a talk I’m giving today, at a faculty workshop, on the subject of learning outcomes.

I think that well-considered learning outcomes can be really helpful when planning and designing learning activities, especially where there is a need to assess learning. They can help keep a learning designer focused, and serve as a reminder to ensure that assessment activities actually make a positive contribution to learning. They can also be helpful to teachers while teaching, as a framework to keep them on track (if they wish to remain on track). However, that’s about it. Learning outcomes are not useful when applied to bureaucratic ends, they are very poor descriptors of the learning that actually happens, as a rule, and they are of very little (if any) use to students under most circumstances (there are exceptions – it’s a design issue, not a logical flaw).

The big point of my talk, though, is that we should be measuring what students have actually learned, not whether they have learned what we think we have taught, and that the purpose of everything we do should be to support learning, not to support bureaucracy.

I frame this in terms of the relationships between:

  • what we teach (what we actually teach, not just what we think we are teaching, including stuff like attitudes, beliefs, methods of teaching, etc),
  • what a student learns in the process (an individual student, not students as a whole), and
  • what we assess (formally and summatively, not necessarily as part of the learning process).

There are many things that we teach that any given student will not learn, albeit that (arguably) we wouldn’t be teaching at all if learning were not happening for someone. Most students get a small subset of that. There are also many things that we teach without intentionally teaching, not all of them good or useful.

There are also very many things that students learn that we do not teach, intentionally or otherwise. In fact, it is normal for us to mandate this as part of a learning design: any mildly creative or problem-solving/inquiry-oriented activity will lead to different learning outcomes for every learner. Even in the most horribly regimented teaching contexts, students are the ones that connect everything together, and that’s always going to include a lot more than what their teachers teach.

Similarly, there are lots of things that we assess that we do not teach, even with great constructive alignment. For example, the students’ ability to string a sentence together tends to be not just a prerequisite but something that is actively graded in typical assessments.

My main points are that, though it is good to have a teaching plan (albeit that it should be flexible, responsive to student needs, and should accommodate serendipity):

  • students should be participants in planning outcomes and
  • we should assess what students actually learn, not what we think we are teaching.

From a learning perspective, there’s less than no point in summatively judging what learners have not learned. However, that’s exactly what most institutions actually do. Assessment should be about how learners have positively changed, not whether they have met our demands.

This also implies that students should be participants in the planning and use of learning outcomes: they should be able to personalize their learning, and we should recognize their needs and interests. I use andragogy to frame this, because it is relatively uncontroversial, is easily understood, and doesn’t require people to change everything in their world view to become better teachers, but I could have equally used quite a large number of other models. Connectivism, Communities of Practice, and most constructivist theories, for instance, force us to similar conclusions.

I suggest that appreciative inquiry may be useful as an approach to assessment, inasmuch as the research methodology is purpose-built to bring about positive change, and its focus on success rather than failure makes sense in a learning context.

I also suggest the use of outcome mapping (and its close cousin, outcome harvesting) as a means of capturing unplanned as well as planned outcomes. I like these methods because they only look at changes, and then try to find out what led to those changes. Again, it’s about evaluation rather than judgment.

DT&L2018 spotlight presentation: The Teaching Gestalt

The teaching gestalt  presentation slides (PDF, 9MB)

This is my Spotlight Session from the 34th Distance Teaching & Learning Conference, at the University of Wisconsin–Madison, August 8th, 2018. Appropriately enough, I did this online and at a distance, thanks to my ineptitude at dealing with the bureaucracy of immigration. Unfortunately, my audio died as we moved to the Q&A session so, if anyone who was there (or anyone else) has any questions or observations, do please post them here! Comments are moderated.

The talk was concerned with how online learning is fundamentally different from in-person learning, and what that means for how (or even whether) we teach, in the traditional formal sense of the word.

Teaching is always a gestalt process, an emergent consequence of the actions of many teachers, including most notably the learners themselves, which is always greater than (and notably different from) the sum of its parts. This deeply distributed process is often masked by the inevitable (thanks to physics in traditional classrooms) dominance of an individual teacher in the process. Online, the mask falls off. Learners invariably have both far greater control and far more connection with the distributed gestalt. This is great, unless institutional teachers fight against it with rewards and punishments, in a pointless and counter-productive effort to try to sustain the level of control that is almost effortlessly attained by traditional in-person teachers, and that is purely a consequence of solving problems caused by physical classroom needs, not of the needs of learners. I describe some of the ways that we deal with the inherent weaknesses of in-person teaching, especially relating to autonomy and competence support, and observe how such pedagogical methods are a solution to problems caused by the contingent side effects of in-person teaching, not to learning in general.

The talk concludes with some broad characterization of what is different when teachers choose to let go of that control.  I observe that what might have been Leonardo da Vinci’s greatest creation was his effective learning process, without which none of the rest of his creations could have happened. I am hopeful that now, thanks to the connected world that we live in, we can all learn like Leonardo, if and only if teachers can learn to let go.

Turns out the STEM ‘gender gap’ isn’t a gap at all

Grace Hopper and Univac (image from en.wikipedia.org/wiki/Grace_Hopper)

At least in Ontario, it seems that there are about as many women as men taking STEM programs at undergraduate level. This represents a smaller percentage of women taking STEM subjects overall because there are way more women entering university in the first place. A more interesting reading of this, therefore, is not that we have a problem attracting women to science, technology, engineering, and mathematics, but that we have a problem attracting men to the humanities, social sciences, and the liberal arts. As the article puts it:

“it’s not that women aren’t interested in STEM; it’s that men aren’t interested in poetry—or languages or philosophy or art or all the other non-STEM subjects.”

That’s a serious problem.

As someone with qualifications in both (incredibly broad) areas, and interests in many sub-areas of each, I find the arbitrary separation between them to be ludicrous, leading to no end of idiocy at both extremes, and little opportunity for cross-fertilization in the middle. It bothers me greatly that technology subjects like computing or architecture should be bundled with sciences like biology or physics, but not with social sciences or arts, which are way more relevant and appropriate to the activities of most computer professionals. In fact, it bothers me that we feel the need to separate out large fields like this at all. Everyone pays lip service to cross-disciplinary work but, when we try to take that seriously and cross the big boundaries, there is so much polarization between the science and arts communities that they usually don’t even understand one another, let alone work in harmony. We don’t just need more men in the liberal arts – we need more scientists, engineers, and technologists to cross those boundaries, whatever their gender. And, vice versa, we need more liberal artists (that sounds odd, but I have no better term) and social scientists in the sciences and, especially, in technology.

But it’s also a problem of category errors in the other direction. This clumping together of the whole of STEM conceals the fact that in some subjects – computing, say – there actually is a massive gender imbalance (including in Ontario), no matter how you mess with the statistics. This is what happens when you try to use averages to talk about specifics: it conceals far more than it reveals.

I wish I knew how to change that imbalance in my own designated field of computing, an area that I deliberately chose precisely because it cuts across almost every other field and did not limit me to doing one kind of thing. I do arts, science, social science, humanities, and more, thanks to working with machines that cross virtually every boundary.

I suspect that fixing the problem has little to do with marketing our programs better, nor with any such surface efforts that focus on the symptoms rather than the cause. A better solution is to accept and to celebrate the fact that the field of computing is much broader and vastly more interesting than the tiny subset of it that can be described as computer science, and to build up from there. It’s especially annoying that the problem exists at Athabasca where a wise decision was made long ago not to offer a computer science program. We have computing and information systems programs, but not any programs in computer science. Unfortunately, thanks to a combination of lazy media and computing profs (suffering from science envy) that promulgate the nonsense, even good friends of mine that should know better sometimes describe me as a computer scientist (I am emphatically not), and even some of our own staff think of what we do as computer science. To change that perception means not just a change in nomenclature, but a change in how and what we, at least in Athabasca, teach. For example, we might mindfully adopt an approach that contextualizes computing around projects and applications, rather than its theory and mechanics. We might design a program that doesn’t just lump together a bunch of disconnected courses and call it a minor but that, in each course (if courses are even needed), actively crosses boundaries – to see how code relates to poetry, how art can inform and be informed by software, how understanding how people behave can be used in designing better systems, how learning is changed by the tools we create, and so on.

We don’t need disciplines any more, especially not in a technology field. We need connections. We don’t need to change our image. We need to change our reality. I’m finding that to be quite a difficult challenge right now.

 

Address of the bookmark: http://windsorstar.com/opinion/william-watson-turns-out-the-stem-gender-gap-isnt-a-gap-at-all/wcm/ee4217ec-be76-4b72-b056-38a7981348f2

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2929581/turns-out-the-stem-%E2%80%98gender-gap%E2%80%99-isn%E2%80%99t-a-gap-at-all