Are experienced online teachers best-placed to help in-person teachers cope with suddenly having to teach online? Maybe not.

I recently downloaded What Teacher Educators Should Have Learned From 2020. This is an open edited book, freely downloadable from the AACE site, for teachers of teachers whose lives were disrupted by the sudden move to emergency remote teaching over the past year or so. I’ve only skimmed the contents and read a couple of the chapters, but my first impressions are positive. Edited by Richard Ferdig and Kristine Pytash, it springs from the very active and engaged AACE SITE community, which is a good indicator of expertise and experience. It seems well organized into three main sections:

  1. Social and Emotional Learning for Teacher Education
  2. Online Teaching and Learning for Teacher Education
  3. eXtended Reality (XR) for Teacher Education

I like the up-front emphasis on social and emotional aspects, addressing things like belongingness, compassion, and community, mainly from theoretical/model-oriented perspectives, and the other sections seem wisely chosen to meet practitioner needs. The chapters adopt a standardized structure:

  • Introduction
  • What We Know
  • Lessons Learned for Research
  • Lessons Learned for Practice
  • What You Should Read
  • References

Again, this seems pretty sensible, maintaining a good focus on actionable knowledge and practical steps to be taken. It’s not quite a textbook, but it’s a useful teach-yourself resource with good coverage. I look forward to dipping into it a bit more deeply. I expect to find some good ideas, good practices, and good theoretical models to support my teaching and my understanding of the issues. And I’m really pleased that it is being released as an open publication: well done, AACE, for making this openly available.

But I do wonder a little about who else will read this.

Comfort zones and uncomfortable zones

The other day I was chatting with a neighbour who teaches a traditional hard science subject at one of the local universities, and who was venting about the problems of teaching via Zoom. He knew that I had a bit of interest and experience in this area, so he asked whether I had any advice. I started to suggest some ways of rethinking it as a pedagogical opportunity, but he was not impressed. Even something as low-threshold and straightforward as flipping the classroom, or focusing on what students do rather than what he has to tell them, was a step too far. He patiently explained that he has classes with hundreds of students and fixed topics that they need to learn, and he really didn’t see it as desirable or even possible to depart from his well-tried lecture format. At the very least it would be too much work, and he didn’t have the time for it. I did try to push back on that a bit, and I may have mentioned the overwhelming body of research suggesting this might not be a wise move, but he was pretty clear and firm about this. What he actually wanted was for someone to make (or tell him how to make) the digital technology as easy and as comfortably familiar as the lecture theatre, which would somehow make the students as engaged as he perceived them normally to be in his lectures, without notably changing how he taught. The problem was the darn technology, not the teaching. I bit my tongue at this point. I eventually came up with a platitude or two about trying to find different ways to make learning visible, about explicitly showing that he cares, about taking time to listen, about modelling the behaviour he wanted to see, about using the chat to good advantage, and about how motivation differs online and off, but I don’t think it helped. I suspect that the only things that really resonated with him were suggestions about how to get the most out of a webcam and a recommendation to get a better microphone.

Within the context in which he usually teaches, he is probably a very good teacher. He’s a likeable person who clearly cares a lot about his students, he knows a lot about his subject, and he knows how to make it appealing within the situation that he normally works. His courses, as he described them, are very conventional, relying a lot on the structure given to them by the industry-driven curriculum and the university’s processes, norms, and structures, and he fills his role in all that admirably. I think he is pretty typical of the vast majority of teachers. They’re good at what they do, comfortable with how they do it, and they just want the technology to accommodate them continuing to do so without unnecessary obstacles.

Unfortunately, technology doesn’t work that way.

The main reason it doesn’t work is very simple: technologies (including pedagogies) affect one another in complex and recursive ways, so (with some trivial exceptions) you can’t change one element (especially a large one) and expect the rest to work as they did before. It’s a simple, intuitive, and obvious point but, unless you are already well immersed in both systems theories and educational theory, really taking it to heart and understanding how it must affect your practice demands a pretty big shift in Weltanschauung, which is not the kind of thing I was keen to set in motion while on my way to the store in the midst of a busy day.

To make matters worse, even if teachers do acknowledge the need to change, their assumption that things will eventually (maybe soon) return to normal means that they are – reasonably enough –  not willing and probably not able to invest a lot of time into it. A big part of the reason for this is that, thanks to the aforementioned interdependencies, they are probably running round like blue-arsed flies just trying to keep things together, and filling their time with fixing the things that inevitably break in the process. Systems thrive on this kind of self-healing feedback loop. I guess teachers figure that, if they can work out how to tread water until the pandemic has run its course, it will be OK in the end.

If only.

Why in-person education works

The hallmark technologies (mandatory lectures, assignments, grades, exams, etc, etc) of in-person teaching are worse than awful but, just as a talented musician can make beautiful noises with limited technical knowledge and sub-standard instruments, so there are countless teachers who use atrocious methods in dreadful contexts but who successfully lead their students to learn. As long as the technologies are soft and flexible enough to allow them to paper over the cracks of bad tools and methods with good technique, talent, and passion, it works well enough for enough people enough of the time and can (with enough talent and passion) even be inspiring.

It would not work at all, though, without the massive machinery that surrounds it.

An institution (including its systems, structures, and tools) is itself designed to teach, no matter how bad the teachers are within it. The opportunities for students to learn from and with others around them, including other students, professors, support staff, administrators, and so on; the supporting technologies, including rules, physical spaces, structures, furnishings, and tools; the common rooms, the hallways, the smokers’ areas (best classrooms ever), the lecture theatres, the bars and the coffee shops; the timetables that make students physically travel to a location together (and thus massively increase salience); the notices on the walls; the clubs and societies; the librarians, the libraries, the students reading and writing within those libraries, echoing and amplifying the culture of learning that pervades them; the student dorms and shared kitchens where even more learning happens; the parties; even the awful extrinsic motivation of grades, teacher power, and norms and rules of behaviour that emerged in the first place due to the profound motivational shortcomings of in-person teaching. All of this and more conspires to support a basic level of at least mediocre (but good enough) learning, whether or not teachers teach well. It’s a massively distributed technology enacted by many coparticipants, of which designated teachers are just a part, and in which students are the lead actors among a cast of thousands. Online, those thousands are often largely invisible. At best, their presence tends to be highly filtered, channeled, or muted.

Why in-person methods don’t transfer well online

When most of that massive complex machinery is suddenly removed, leaving nothing but a generic interface better suited to remote business meetings than learning or, much worse, some awful approximation of all the evil, hard, disempowering technologies of traditional teaching wrapped around Zoom, or nightmarishly inhuman online proctoring systems, much of the teaching (in the broadest sense) disappears with it. Teaching in an institution is not just what teachers do. It’s the work of a community; of all the structures the community creates and uses; of the written and unwritten rules; of the tacit knowledge imparted by engagement in a space made for learning; of the massive preparation of schooling and the intricate loops that connect it with the rest of society; of attitudes and cultures that are shaped and reinforced by all the rest.  It’s no wonder that teachers attempting to transfer small (but the most visible) parts of that technology online struggle with it. They need to fill the ever-widening gaps left when most of the comfortable support structures of in-person institutions that made it possible in the first place are either gone or mutated into something lean and hungry. It can be done, but it is really hard work.

More abstractly, a big part of the problem with this transfer-what-used-to-work-in-person approach is that it is a technology-first approach to the problem that focuses on one technology rather than the whole. The technology of choice in this case happens to be a set of pedagogical methods, but it is no different in principle than picking a digital tool and letting that decide how you will teach. Neither makes much sense. All the technologies in the assembly – including pedagogies, digital tools, regulations, designs, and structures – have to work together. No single technology has precedence, beyond the one that results from assembling the rest. To make matters worse, what-used-to-work-in-person pedagogies were situated solutions to the problems of teaching in physical classrooms, not universally applicable methods of teaching. Though there are some similarities here and there, the problems of teaching online are not at all the same as those of in-person teaching so of course the solutions are different. Simply transferring in-person pedagogies to an online context is much like using the paddles from a kayak to power a bicycle. You might move, but you won’t move far, you won’t move fast, you won’t move where you want to go, and it is quite likely to end in injury to yourself or others.

Such problems have, to a large extent, been adequately solved by teachers and institutions that work primarily online. Online institutions and organizations have infrastructure, processes, rules, tools, cultures, and norms that have evolved to work together, starting with the baseline assumption that little or none of the physical stuff will ever be available. Anything that didn’t work never made it to first base, or has not survived. Those that have been around a while might not be perfect, but they have ironed out most of the kinks and filled in most of the gaps. Most of my work, and that of my smarter peers, begins in this different context. In fact, in my case, it mainly involves savagely critiquing that context and figuring out ways to improve it, so it is yet another step removed from where in-person teachers are now.

OK, maybe I could offer a little advice or, at least, a metaphor

Roughly 20 years ago I did share a similar context. Working in an in-person university, I had to lead a team of novice online teachers from geographically dispersed colleges to create and teach a blended program with 28 new online courses. We built the whole thing in 6 months from start to finish, including the formal evaluations and approvals process. I could share some generic lessons from what I discovered then, the main one being to put most of the effort into learning to teach online, not into designing course materials. Put dialogue and community first, not structure. For instance, make the first thing students see in the LMS the discussion, not your notes or slides, and use the discussion to share content and guide the process. However, I’d mostly feel like the driver of a Model T Ford trying to teach someone to drive a Tesla. Technologies have changed, I have changed, my memory is unreliable.

In fact, I haven’t driven a car of any description in years. What I normally do now is, metaphorically, much closer to riding a bicycle, which I happen to do and enjoy a lot in real life too. A bike is a really smart, well-adapted, appropriate, versatile, maintainable, sustainable soft technology for getting around. The journey tends to be much more healthy and enjoyable, traffic jams don’t bother you, you can go all sorts of places cars cannot reach, and you can much more easily stop wherever you like along the way to explore what interests you. You can pretty much guarantee that you will arrive when and where you planned to arrive, give or take a few minutes. In the city, it’s often the fastest way to get around, once you factor in parking etc. It’s very liberating. It is true that more effort is needed to get from A to B, bad weather can be a pain, and it would not be the fastest or most comfortable way to reach the other side of the continent: sometimes, alternative forms of transport are definitely worth taking and I’m not against them when it’s appropriate to use them. And the bike I normally ride does have a little electric motor in one of the wheels that helps push me up hills (not much, but enough) but it doesn’t interfere with the joy (or most of the effort) of riding. I have learned that low-threshold, adaptable, resilient systems are often much smarter in many ways than high-tech platforms because they are part-human. They can take on your own smartness and creativity in ways no amount of automation can match. This is true of online learning tools as much as it is true of bicycles. Blogs, wikis, email, discussion forums, and so on often beat the pants off learning management systems, commercial teaching platforms, learning analytics tools or AI chatbots for many advanced pedagogical methods because they can become what you want them to be, rather than what the designer thought you wanted, and they can go anywhere, without constraint.
Of course, the flip side is that they take more effort, sometimes take more time, and (without enormous care) can make it harder for all concerned to do things that are automated and streamlined in more highly engineered tools, so they might not always be the best option in all circumstances, any more than a bike is the best way to get up a snowy mountain or to cross an ocean.

Why you shouldn’t listen to my advice

It’s sad but true that most of what I would really like to say on the subject of online learning won’t help teachers on the ground right now, and it is actually worse than the help their peers could give them because what I really want to tell them is to change everything and to see the world completely differently. That’s pretty threatening, especially in these already vulnerable times, and not much use if you have a class to teach tomorrow morning.

The AACE book is more grounded in where in-person teachers are now. The chapter “We Need to Help Teachers Withstand Public Criticism as They Learn to Teach Online”, for example, delves into the issues well, in accessible ways that derive from a clear understanding of the context.  However, the book cannot help but be an implicit (and, often, explicit) critique of how teachers currently teach: that’s implied in the title, and in the chapter structures.  If you’re already interested enough in the subject and willing enough to change how you teach that you are reading this book in the first place, then this is great. You are 90% of the way there already, and you are ready to learn those lessons. One of the positive sides of emergency remote teaching has been that it has encouraged some teachers to reflect on their teaching practices and purposes, in ways that will probably continue to be beneficial if and when they return to in-person teaching. They will enjoy this book, and they may be the intended audience. But they are not the ones that really need it.

I would quite like to see (though maybe not to read) a different kind of book containing advice from beginners. Maybe it would have a title something like ‘What I learned in 2020’ or ‘How I survived Zoom.’ Emergency remote teachers might be more inclined to listen to the people who didn’t know the ‘right’ ways of doing things when the crisis began, who really didn’t want to change, who maybe resented the imposition, but who found ways to work through it from where they were then, rather than where the experts think (or know) they should be aiming now. It would no doubt annoy me and other distance learning researchers because, from the perspective of recognized good practice, much of it would probably be terrible but, unlike what we have to offer, it would actually be useful. A few chapters in the AACE book are grounded in concrete experience of this nature, but even they wind up saying what should have happened, framing the solutions in the existing discourse of the distance learning discipline. Most chapters consist of advice from experts who already knew the answers before the pandemic started. It is telling that the word ‘should’ occurs a lot more frequently than it should. This is not a criticism of the authors or editors of the book: the book is clear from the start that it is going to be a critique of current practice and a practical guidebook to the territory, and most of the advice I’ve seen in it so far makes a lot of sense. It’s just not likely to affect many of the ones who have no wish to change not just their practices but their fundamental attitudes to teaching. Sadly, that’s also true of this post which, I think, is therefore more of an explanation of why I’ve been staring into the headlights for most of the pandemic, rather than a serious attempt to help those in need. 
I hope there’s some value in that because it feels weird to be a (slight, minor, still-learning) expert in the field with very strong opinions about how online learning should work, but to have nothing useful to say on the subject at the one time it ought to have the most impact.

Read the book:

Ferdig, R.E. & Pytash, K.E. (2021). What Teacher Educators Should Have Learned From 2020. Association for the Advancement of Computing in Education (AACE). Retrieved March 22, 2021 from https://www.learntechlib.org/primary/p/219088/.

EdTech Books

This is a great, well-presented, and nicely curated selection of open books on education and educational technology, ranging from classics (and compilations of chapters by classic authors) to modern guides, textbooks, and blog compilations, covering everything from learning theory to choice of LMS. Some are peer-reviewed, there’s a mix of licences from public domain to restrictive Creative Commons, and there’s good guidance provided about the type and quality of content. There’s also support for collaboration and publication. All books are readable online, and most can be downloaded as (at least) PDF. I think the main target audience is students of education/online learning, and practitioners – at least, there’s a strong practical focus.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/7161867/edtech-books (where you can find some really interesting comments, including the one that my automated syndicator mistakenly turned into the main post the first time it ran)

Joyful assessment: beyond high-stakes testing

Here are my slides from my presentation at the Innovate Learning Summit yesterday. It’s not world-shattering stuff – just a brutal attack on proctored, unseen written exams (PUWEs, pronounced ‘pooies’), followed by a description of the rationale, process, benefits, and unwanted consequences behind the particular portfolio-based approach to assessment employed in most of my teaching. It includes a set of constraints that I think are important to consider in any assessment process, grouped into pedagogical, motivational, and housekeeping (mainly relating to credentials) clusters. I list 13 benefits of my approach relating to each of those clusters, which I think make a pretty resounding case for using it instead of traditional assignments and tests. However, I also discuss outstanding issues, most of which relate to the external context and expectations of students or the institution, but a couple of which are fairly fundamental flaws (notably the extreme importance of prompt, caring, helpful instructor/tutor engagement in making it all work, which can be highly problematic when it doesn’t happen) that I am still struggling with.

Skills lost due to COVID-19 school closures will hit economic output for generations (hmmm)

This CBC report is one of many dozens of articles in the world’s press highlighting one rather small but startling assertion in a recent OECD report on the effects of Covid-19 on education: that the ‘lost’ third of a year of schooling in many countries will lead to an overall lasting drop in GDP of 1.5% across the world. Though it contains many more fascinating and useful insights that are far more significant and helpful, the report itself does make this assertion quite early on and repeats it for good measure, so it is not surprising that journalists have jumped on it. It is important to observe, though, that the reasoning behind it is based on a model developed by Hanushek and Woessmann over several years, and on an unpublished article by the authors that tries to explain variations in global productivity according to the amount and, far more importantly, the quality of education: that long-run productivity is a direct consequence of the cognitive skills (or knowledge capital) of a nation, which can be mapped directly to how well and how much the population is educated.

As an educator I find this model, at a glance, to be reassuring and confirmatory because it suggests that we do actually have a positive effect on our students. However, there may be a few grounds on which it might be challenged (disclaimer: this is speculation). The first and most obvious is that correlation does not equal causation. The fact that countries that do invest in improving education consistently see productivity gains to match in years to come is interesting, but it raises the question of what led to that investment in the first place and whether that might be the ultimate cause, not the education itself.  A country that has invested in increasing the quality of education would, normally, be doing so as a result of values and circumstances that may lead to other consequences and/or be enabled by other things (such as rising prosperity, competition from elsewhere, a shift to more liberal values, and so on).  The second objection might be that, sure, increased quality of education does lead to greater productivity, but that it is not the educational process that is causing it, as such. Perhaps, for instance, an increased focus on attainment raises aspirations. A further objection might be that the definition of ‘quality’ does not measure what they think it measures. A brief skim of the model used suggests that it makes extensive use of scores from the likes of TIMSS, PIRLS and PISA, standardized test approaches used to compare educational ‘effectiveness’ in different regions that embody quite a lot of biases, are often manipulated at a governmental level, and that, as I have mentioned once or twice before, are extremely dubious indicators of learning: in fact, even when they are not manipulated, they may indicate willingness to comply with the demands of the powerful more than learning (does that improve GDP? Probably).  Another objection might be that absence of time spent in school does not equate to absence of education. 
Indeed, Hanushek and Woessmann’s central thesis is that it is not the amount but the quality of schooling that matters, so it seems bizarre that they might fall back on quantifying learning by time spent in school. We know for sure that, though students may not have been conforming to curricula at the rate desired by schools and colleges, they have not stopped learning. In fact, in many ways and in many places, there are grounds to believe that there have been positive learning benefits: better family learning, more autonomy, more thoughtful pedagogies, more intentional learning community forming, and so on. Out of this may spring a renewed focus on how people learn and how best to support them, rather than on maintaining a system that evolved in mediaeval times to support very different learning needs, and that is so solidly packed with counter technologies and so embedded in so many other systems that have nothing to do with learning that we have lost sight of the ones that actually matter. If education improves as a result, then (if it is true that better and more education improves the bottom line) we may even see gains in GDP. I expect that there are other reasons for doubt: I have only skimmed the surface of the possible concerns.

I may be wrong to be sceptical – in fairness, I have not read the many papers and books produced by Hanushek and Woessmann on the subject, I am not an economist, nor do I have sufficient expertise (or interest) to analyze the regression model that they use. Perhaps they have fully addressed such concerns in that unpublished paper, and the simplistic cause-effect prediction distorts their claims. But, knowing a little about complex adaptive systems, my main objection is that this is an entirely new context to which models that have worked before may no longer apply and that, even if they do, there are countless other factors that will affect the outcome in both positive and negative ways, so this is not so much a prediction as an observation about one small part of a small part of a much bigger emergent change that is quite unpredictable. I am extremely cautious at the best of times whenever I see people attempting to find simple causal linear relationships of this nature, especially when they are so precisely quantified, especially when past indicators are applied to something wholly novel with such widespread effects that we have never seen before, especially given the complex relationships at every level, from individual to national. I’m glad they are telling the story – it is an interesting one that no doubt contains grains of important truths – but it is just an informative story, not predictive science. The OECD has a bit of a track record of this kind of misinterpretation, especially in education. This is the same organization that (laughably, if it weren’t so influential) claimed that educational technology in the classroom is bad for learning. There’s not a problem with the data collection or analysis, as such. The problem is with the predictions and recommendations drawn from it.

Beyond methodological worries, though, and even if their predictions about GDP are correct (I am pretty sure they are not – there are too many other factors at play, including huge ones like the destruction of the environment, which makes the odd 1.5% seem like a drop in the bucket), it might be a good thing. It might be that we are moving – rather reluctantly – into a world in which GDP serves as an even less effective measure of success than it already is. There are already plentiful reasons to find it wanting, from its poor consideration of ecological consequences to its wilful blindness to (and causal effect upon) inequalities, to its simple inadequacy to capture the complexity and richness of human culture and wealth. I am a huge fan of the state of Bhutan’s rejection of GDP, which it has replaced with the Gross National Happiness (GNH) index. The GNH makes far more sense, and it is what has led Bhutan to be one of the only countries in the world to be carbon negative, as well as being (arguably, but provably) one of the happiest countries in the world. What would you rather have: money (at least for a few, probably not you), or happiness and a sustainable future? For Bhutan, education is not for economic prosperity: it is about improving happiness, which includes good governance, sustainability, and preservation (but not ossification) of culture.

Many educators – and I am very definitely one of them – share Bhutan’s perspective on education. I think that my customer is not the student, or a government, or companies, but society as a whole, and that education makes (or should make) for happier, safer, more inventive, more tolerant, more stable, more adaptive societies, as well as many other good things. It supports dynamic meta-stability and thus the evolution of culture. It is very easy to lose sight of that goal when we have to account to companies, governments, other institutions, and to so many more deeply entangled sets of people with very different agendas and values, not to mention our inevitable focus on the hard methods and tools of whatever it is that we are teaching, as well as the norms and regulations of wherever we teach it. But we should not ever forget why we are here. It is to make the world a better place, not just for our students but for everyone. Why else would we bother?

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6578662/skills-lost-due-to-covid-19-school-closures-will-hit-economic-output-for-generations-hmmm

How Assessment is Changing in The Digital Age – Five Guiding Principles | teachonline.ca

This article from teachonline.ca draws from a report by JISC (the UK academic network organization) to provide 5 ‘principles’ for assessment. I put the scare quotes around ‘principles’ because they are mostly descriptive labels for trends, and they are woefully non-inclusive. There is also a subtext here – one that I do understand is incredibly hard to avoid, because I failed to fully do so myself in my own post last week – that assessment is primarily concerned with proving competence for the sake of credentials (it isn’t). Given these caveats, though, most of what is written here makes some sense.

Principle 1: authentic assessment. I completely agree that assessment should at least partly be of authentic activities. It is obvious how that plays out in applied disciplines with a clear workplace context. If you are learning how to program, for instance, then of course you should write programs that have some value in a realistic context, and it goes without saying that you should assess the same. This includes aspects of the task that we might not traditionally assess in a typical programming course, such as analysis, user experience testing, working with others, interacting with Stack Overflow, sharing via GitHub, copying code from others, etc. It is less obvious in the case of something like, say, philosophy, or history, or Latin, though, or, indeed, in any subject that is primarily found in academia. Authentic assessment for such things would probably be an essay or conference presentation, or perhaps some kind of argument, most of the time, because that’s what real life is like for most people in such fields (whether that should be the case remains an open issue). We should be wary, though, of making this the be-all and end-all, because there’s a touch of behaviourism lurking behind the idea: can the student perform as expected? There are other things that matter. For instance, I think that it is incredibly important to reflect on any learning activity, even though that might not mirror what is typically done in an authentic context. It can significantly contribute to learning, but it can also reveal things that may not be obvious when we judge what is done in an authentic context, such as why people did what they did or whether they would do it the same way again. There may also be stages along the way that are not particularly authentic, but that contribute to learning the hard skills needed in order to perform effectively in the authentic context: learning a vocabulary, for example, or doing something dangerous in a cut-down, safe environment.
We should probably not summatively assess such things (they should rarely contribute to a credential, because they do not demonstrate applied capability), but formative assessment – including of this kind of activity – is part of all learning.

Principle 2: accessible and inclusive assessment. Well, duh. Of course this is how it should be done. Not so much a principle as plain common decency. Was this not always so? Yes, it was. It only becomes an issue when careless people forget that some media are less inclusive than others, or that not everyone knows or cares about golf. Nothing new here.

Principle 3: appropriately automated assessment. This is a reaction to bad assessment, not a principle for good assessment. There is a principle that really matters here, but it is not appropriate automation: it is that assessment should enhance and improve the student experience. Automation can sometimes do that. It is appropriate for some kinds of formative feedback (see the examples of non-authentic learning above) but for very little else, which, in the context of this article (which implicitly focuses on the final judgment), means it is a bad idea to use it at all.
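To make the distinction concrete, here is a minimal sketch (my own hypothetical example, not from the article or the JISC report) of what appropriately automated formative feedback might look like for a programming exercise: the automation offers hints that support learning, and deliberately produces no mark or grade.

```python
# Hypothetical sketch: automated *formative* feedback on a programming exercise.
# The automation returns hints, never a score.

def student_median(values):
    # A made-up student submission with a common bug:
    # it forgets to sort the input before picking the middle element.
    n = len(values)
    return values[n // 2]

def formative_feedback(submission):
    """Run the submission against a few cases and return hints, not grades."""
    hints = []
    if submission([1, 2, 3]) != 2:
        hints.append("Check the simplest case: the median of [1, 2, 3] is 2.")
    if submission([3, 1, 2]) != 2:
        hints.append("Does your function work when the input is unsorted? "
                     "Try sorting a copy of the list first.")
    if not hints:
        hints.append("All sample cases pass. Consider: what about "
                     "even-length lists?")
    return hints

for hint in formative_feedback(student_median):
    print(hint)
```

Even passing submissions get a prompt to think further, so the automation supports learning in progress rather than delivering a final judgment.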

Principle 4: continuous assessment. I don’t mind this one at all. Again, though, the principle is not what the label claims. The principle here is that assessment should be designed to improve learning. For sure, if it is used as a filter to sort the great from the not great, then the filter should be authentic, which, for the most part, means no high-stakes, high-stress, one-chance tests, and that overall behaviours and performance over time are what matters. However, there is, therefore, a huge risk of assessing learning in progress rather than capability once a course is done. If we are interested in assessing competence for credentials, then I’d rather do it at the end, once learning has been accomplished (ignoring the inconvenient detail that this is not a terminal state, and that learning must always undergo ever-dynamic renewal and transformation until the day we die). Of course, the work done along the way will make up the bulk of the evidence for that final judgment, but assessing at the end allows for the fact that learning changes people, and that what we did early on in the journey seldom represents what we are able to do in the light of later learning.

Principle 5: secure assessment. Why is this mentioned in an article about assessment in the digital age? Is cheating a new invention? Was it (intentionally) insecure before? This is just a description of how some people have noticed that traditional forms of assessment are really dumb in a context that includes Wikipedia, Google, and communications devices the size of a peanut. Pointless, and certainly not a new principle for the Digital Age. In fairness, if the principles above are followed in spirit as well as in letter, it is not likely to be a huge issue but, then, why make it a principle? It’s more a report on what teachers are thinking and talking about.

The summary is motherhood and apple pie, albeit that it doesn’t entirely follow from the principles (choice over when to be assessed and peer assessment, for instance, are not really covered by them, though they are very good ideas).

I’m glad that people are sharing ideas about this but I think that there are more really important principles than these: that students should have control over their own assessment, that it should never reward or punish, that it should always support learning, and so on. I wrote a bit about this the other day, and, though that is a work in progress, I think it gets a little closer to what actually matters than this.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/6531701/how-assessment-is-changing-in-the-digital-age-five-guiding-principles-teachonlineca

Evaluating assessment

A group of us at AU have begun discussions about how we might transform our assessment practices, in the light of the far-reaching AU Imagine plan and principles. This is a rare and exciting opportunity to bring about radical and positive change in how learning happens at the institution. Hard technologies influence soft more than vice versa, and assessments (particularly when tied to credentials) tend to be among the hardest of all technologies in any pedagogical intervention. They are therefore a powerful lever for change. Equally, and for the same reasons, they are too often the large, slow, structural elements that infest systems to stunt progress and innovation.

Almost all learning must involve assessment, whether it be of one’s own learning, or provided by other people or machines. Even babies constantly assess their own learning. Reflection is assessment. It is completely natural, and it only gets weird when we treat it as a summative judgment, especially when we add grades or credentials to the process, thus normally changing the purpose of learning from achieving competence to achieving a reward. At best this distorts learning, making it seem like a chore rather than a delight; at worst it destroys it, even (and perhaps especially) when learners successfully comply with the demands of assessors and get a good grade. Unfortunately, that’s how most educational systems are structured, so the big challenge for all teachers must be to eliminate, or at least massively reduce, this deeply pernicious effect. A large number of the pedagogies that we most value are designed to solve problems that are directly caused by credentials. These pedagogies include assessment practices themselves.

With that in mind, before the group’s first meeting I compiled a list of some of the main principles that I adhere to when designing assessments, most of which are designed to reduce or eliminate the structural failings of educational systems. The meeting caused me to reflect a bit more. This is the result:

Principles applying to all assessments

  • The primary purpose of assessment is to help the learner to improve their learning. All assessment should be formative.
  • Assessment without feedback (whether from teacher, peer, machine, or self) is judgement, not assessment, and it is pointless.
  • Ideally, feedback should be direct and immediate or, at least, as prompt as possible.
  • Feedback should only ever relate to what has been done, never the doer.
  • No criticism should ever be made without also at least outlining steps that might be taken to improve on it.
  • Grades (with some very rare minor exceptions where the grade is intrinsic to the activity, such as some gaming scenarios or, arguably, objective single-answer quizzes with T/F answers) are not feedback.
  • Assessment should never ever be used to reward or punish particular prior learning behaviours (e.g. use of exams to encourage revision, grades as goals, marks for participation, etc.).
  • Students should be able to choose how, when and on what they are assessed.
  • Where possible, students should participate in the assessment of themselves and others.
  • Assessment should help the teacher to understand the needs, interests, skills, and gaps in knowledge of their students, and should be used to help to improve teaching.
  • Assessment is a way to show learners that we care about their learning.

Specific principles for summative assessments

A secondary (and always secondary) purpose of assessment is to provide evidence for credentials. This is normally described as summative assessment, implying that it assesses a state of accomplishment when learning has ended. That is a completely ridiculous idea. Learning doesn’t end. Human learning is not in any meaningful way like programming a computer or storing stuff in a database. Knowledge and skills are active, ever-transforming, forever actively renewed, reframed, modified, and extended. They are things we do, not things we have.

With that in mind, here are my principles for assessment for credentials (none of which supersede or override any of the above core principles for assessment, which always apply):

  • There should be no assessment task that is not in itself a positive learning activity. Anything else is at best inefficient, at worst punitive/extrinsically rewarding.
  • Assessment for credentials must be fairly applied to all students.
  • Credentials should never be based on comparisons between students (norm-referenced assessment is always, unequivocally, and irredeemably wrong).
  • The criteria for achieving a credential should be clear to the learner and other interested parties (such as employers or other institutions), ideally before it happens, though this should not forestall the achievement and consideration of other valuable outcomes.
  • There is no such thing as failure, only unfinished learning. Credentials should only celebrate success, not punish current inability to succeed.
  • Students should be able to choose when they are ready to be assessed, and should be able to keep trying until they succeed.
  • Credentials should be based on evidence of competence and nothing else.
  • It should be impossible to compromise an assessment by revealing either the assessment or solutions to it.
  • There should be at least two ways to demonstrate competence, ideally more. Students should only have to prove it once (though may do so in many ways and many times, if they wish).
  • More than one person should be involved in judging competence (at least as an option, and/or on a regularly taken sample).
  • Students should have at least some say in how, when, and where they are assessed.
  • Where possible (accepting potential issues with professional accreditation, credit transfer, etc) they should have some say over the competencies that are assessed, in weighting and/or outcome.
  • Grades and marks should be avoided except where mandated elsewhere. Even then, all passes should be treated as an ‘A’ because students should be able to keep trying until they excel.
  • Great success may sometimes be worthy of an award – e.g. a distinction – but such an award should never be treated as a reward.
  • Assessment for credentials should demonstrate the ability to apply learning in an authentic context. There may be many such contexts.
  • Ideally, assessment for credentials should be decoupled from the main teaching process, because of risks of bias, the potential issues of teaching to the test (regardless of individual needs, interests and capabilities) and the dangers to motivation of the assessment crowding out the learning. However, these risks are much lower if all the above principles are taken on board.

I have most likely missed a few important issues, and there is a bit of redundancy in all this, but this is a work in progress. I think it covers the main points.

Further random reflections

There are some overriding principles and implied specifics in all of this. For instance, respect for diversity, accessibility, respect for individuals, and recognition of student control all fall out of or underpin these principles. It implies that we should recognize success, even when it is not the success we expected, so outcome harvesting makes far more sense than measurement of planned outcomes. It implies that failure should only ever be seen as unfinished learning, not as a summative judgment of terminal competence, so appreciative inquiry is far better than negative critique. It implies flexibility in all aspects of the activity. It implies, above and beyond any other purpose, that the focus should always be on learning. If assessment for credentials adversely affects learning then it should be changed at once.

In terms of implementation, while objective quizzes and their cousins can play a useful formative role in helping students to self-assess and to build confidence, machines (whether implemented by computers or rule-following humans) should normally be kept out of credentialling. There’s a place for AI but only when it augments and informs human intelligence, never when it behaves autonomously. Written exams and their ilk should be avoided, unless they conform to or do not conflict with all the above principles: I have found very few examples like this in the real world, though some practical demonstrations of competence in an authentic setting (e.g. lab work and reporting) and some reflective exercises on prior work can be effective.
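As a concrete illustration of the formative role described above, here is a minimal sketch (my own hypothetical example, not drawn from any real course) of an objective quiz item used for self-assessment: every choice receives explanatory feedback, and nothing is scored or recorded, so the quiz informs the learner rather than judging them.

```python
# Hypothetical sketch: an objective quiz item used purely formatively.
# Every answer, right or wrong, gets explanatory feedback; no grade is given.

QUIZ_ITEM = {
    "question": "Which best describes formative assessment?",
    "options": {
        "a": "A final judgement of competence for a credential",
        "b": "Feedback intended to help the learner improve",
        "c": "A ranking of students against each other",
    },
    "feedback": {
        "a": "Not quite: that describes summative assessment for credentials.",
        "b": "Yes: the primary purpose of assessment is to support learning.",
        "c": "Not quite: that is norm-referencing, which is best avoided.",
    },
}

def respond(choice):
    """Return feedback for a choice; deliberately returns no mark or grade."""
    return QUIZ_ITEM["feedback"].get(choice, "Please choose a, b, or c.")

print(QUIZ_ITEM["question"])
print(respond("b"))
```

Because the item never records a result, it can help students build confidence and self-assess without the motivational damage that grades bring.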

A portfolio of evidence, including a reflective commentary, is usually going to be the backbone of any fair, humane, effective assessment: something that lets students highlight successes (whether planned or not), that helps them to consolidate what they have learned, and that is flexible enough to demonstrate competence shown in any number of ways. Outputs or observations of authentic activities are going to be important contributors to that. My personal preference in summative assessments is to judge success only against the intended (including student-generated) and/or harvested outcomes, not against mandated assignments. This gives flexibility, it works for every subject, and it provides unequivocal and precise evidence of success. It’s also often good to talk with students, perhaps formally (e.g. a presentation or oral exam), in order to tease out what they really know and to give instant feedback. It is worth noting that, unlike written exams and their ilk, such methods are actually fun for all concerned, albeit that the pleasure comes from solving problems and overcoming challenges, so it is seldom easy.

Interestingly, there are occasions in traditional academia where these principles are, for the most part, already widely applied. A typical doctoral thesis/dissertation, for example, is often quite close to it (especially in more modern professional forms that put more emphasis on recording the process), as are some student projects. We know that such things are a really good idea, and lead to far richer, more persistent, more fulfilling learning for everyone. We do not do them ubiquitously for reasons of cost and time. It does take a long time to assess something like this well, and it can take more time during the rest of the teaching process thanks to the personalization (real personalization, not the teacher-imposed form popularized by learning analytics aficionados) and extra care that it implies. It is an efficient use of our time, though, because of its active contribution to learning, unlike a great many traditional assessment methods such as teacher-set assignments (minimal contribution) and exams (negative contribution). A lot of the reason for our reluctance, though, is the typical university’s schedule and class timetabling, which makes everything pile on at once in an intolerable avalanche of submissions. If we really take autonomy and flexibility on board, it doesn’t have to be that way. If students submit work when it is ready to be submitted, if they are not all working in lock-step, and if it is a work of love rather than compliance, then assessment is often a positively pleasurable task and is naturally staggered. Yes, it probably costs a bit more time in the end (though there are plenty of ways to mitigate that, from peer groups to pedagogical design) but every part of it is dedicated to learning, and the results are much better for everyone.

Some useful further reading

This is a fairly random selection of sources that relate to the principles above in one way or another. I have definitely missed a lot. Sorry for any missing URLs or paywalled articles: you may be able to find downloadable online versions somewhere.

Boud, D., & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment & Evaluation in Higher Education, 31(4), 399-413. Retrieved from https://www.jhsph.edu/departments/population-family-and-reproductive-health/_docs/teaching-resources/cla-01-aligning-assessment-with-long-term-learning.pdf

Boud, D. (2007). Reframing assessment as if learning were important. Retrieved from https://www.researchgate.net/publication/305060897_Reframing_assessment_as_if_learning_were_important

Cooperrider, D. L., & Srivastva, S. (1987). Appreciative inquiry in organizational life. Research in organizational change and development, 1, 129-169.

Deci, E. L., Vallerand, R. J., Pelletier, L. G., & Ryan, R. M. (1991). Motivation and education: The self-determination perspective. Educational Psychologist, 26(3/4), 325-346.

Hussey, T., & Smith, P. (2002). The trouble with learning outcomes. Active Learning in Higher Education, 3(3), 220-233.

Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A’s, praise, and other bribes (Kindle ed.). Mariner Books. (this one is worth forking out money for).

Kohn, A. (2011). The case against grades. Educational Leadership, 69(3), 28-33.

Kohn, A. (2015). Four Reasons to Worry About “Personalized Learning”. Retrieved from http://www.alfiekohn.org/blogs/personalized/ (check out Alfie Kohn’s whole site for plentiful other papers and articles – consistently excellent).

Reeve, J. (2002). Self-determination theory applied to educational settings. In E. L. Deci & R. M. Ryan (Eds.), Handbook of Self-Determination research (pp. 183-203). Rochester, NY: The University of Rochester Press.

Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. Guilford Publications. (may be worth paying for if such things interest you).

Wilson-Grau, R., & Britt, H. (2012). Outcome harvesting. Cairo: Ford Foundation. http://www.managingforimpact.org/sites/default/files/resource/outome_harvesting_brief_final_2012-05-2-1.pdf.

Technology, technique, and teaching

These are the slides from my recent talk with students studying the philosophy of education at Pace University.

This is a mashup of various talks I have given in recent years, with a little new stuff drawn from my in-progress book. It starts with a discussion of the nature of technology, and the distinction between hard and soft technologies that sees relative hardness as the amount of pre-orchestration in a technology (be it a machine or a legal system or whatever). I observe that pedagogical methods (‘pedagogies’ for short) are soft technologies to those who are applying them, if not to those on the receiving end. It is implied (though I forgot to explicitly mention) that hard technologies are always more structurally significant than soft ones: they frame what is possible.

All technologies are assemblies, and (in education), the pedagogies applied by learners are always the most important parts of those assemblies. However, in traditional in-person classrooms, learners are (by default) highly controlled due to the nature of physics – the need to get a bunch of people together in one place at one time, scarcity of resources,  the limits of human voice and hearing, etc – and the consequent power relationships and organizational constraints that occur.  The classroom thus becomes the environment that frames the entire experience, which is very different from what are inaccurately described as online learning environments (which are just parts of a learner’s environment).

Because of physical constraints, the traditional classroom context is inherently very bad for intrinsic motivation. It leads to learners who don’t necessarily want to be there, having to do things they don’t necessarily want to do, often being either bored or confused. By far the most common solution to that problem is to apply externally regulated extrinsic motivation, such as grades, punishments for non-attendance, rules of classroom behaviour, and so on. This just makes matters much worse, and makes the reward (or the avoidance of punishment) the purpose of learning. Intelligent responses to this situation include cheating, short-term memorization strategies, satisficing, and agreeing with the teacher. It’s really bad for learning. Such issues are not at all surprising: all technologies create as well as solve problems, so we need to create counter technologies to deal with them. Thus, what we normally recognize as good pedagogy is, for the most part, a set of solutions to the problems created by the constraints of in-person teaching, to bring back the love of learning that is destroyed by the basic set-up. A lot of good teaching is therefore to do with supporting at least better, more internally regulated forms of extrinsic motivation.

Because pedagogies are soft technologies, skill is needed to use them well. Harder pedagogies, such as Direct Instruction, that are more prescriptive of method tend (on average) to work better than softer pedagogies such as problem-based learning, because most teachers tend towards being pretty average: that’s implicit in the term, after all. Lack of skill can be compensated for through the application of a standard set of methods that only need to be done correctly in order to work. Because such methods can also work for good teachers as well as the merely average or bad, their average effectiveness is, of course, high. Softer pedagogical methods such as active learning, problem-based learning, inquiry-based learning, and so on rely heavily on passionate, dedicated, skilled, time-rich teachers and so, on average, tend to be less successful. However, when done well, they outstrip more prescriptive methods by a large margin, and lead to richer, more expansive outcomes that go far beyond those specified in a syllabus or test. Softer technologies, by definition, allow for greater creativity, flexibility, adaptability, and so on than harder technologies, but are therefore more difficult to implement well. There is no such thing as a purely hard or purely soft technology, though, and all exist on a spectrum. Because all pedagogies are relatively soft technologies, even those that are quite prescriptive, almost any pedagogical method can work if it is done well: clunky, ugly, weak pedagogies used by a fantastic teacher can lead to great, persistent, enthusiastic learning. As Hattie observes, almost everything works – at least, that’s true of most things that are reported on in educational research studies :-). But (and this is the central message of my book, the consequences of which are profound) it ain’t what you do, it’s the way that you do it, that’s what gets results.

Problems can occur, though, when we use the same methods that work in person in a different context for which they were not designed. Online learning is by far the most dominant mode of learning (for those with an Internet connection – some big social, political, economic, and equity issues here) on the planet. Google, YouTube, Wikipedia, Reddit, StackExchange, Quora, etc, etc, etc, not to mention email, social networking sites, and so on, are central to how most of us in the online world learn anything nowadays. The weird thing about online education (in the institutional sense) is that online learning is far less obviously dominant, and tends to be viewed in a far less favourable light when offered as an option. Given the choice, and without other constraints, most students would rather learn in-person than online. At least in part, this is due to the fact that those of us working in formal online education continue to apply pedagogies and organizational methods that solved problems in in-person classrooms, especially with regard to teacher control: the rewards and punishments of grades, fixed length courses, strictly controlled pathways, and so on are solutions to problems that do not exist or that exist in very different forms for online learners, whose learning environment is never entirely controlled by a teacher.

The final section of the presentation is concerned with what – in very broad terms – native distance pedagogies might look like. Distance pedagogies need to acknowledge the inherently greater freedoms of distance learners and the inherently distributed nature of distance learning. Truly learner-centric teaching does not seek to control, but to support, and to acknowledge the massively distributed nature of the activity, in which everyone (including emergent collective and networked forms arising from their interactions) is part of the gestalt teacher, and each learner is – from their perspective – the most important part of all of that. To emphasize that none of this is exactly new (apart from the massive scale of connection, which does matter a lot), I include a slide of Leonardo’s to-do list that describes much the same kinds of activity as those that are needed of modern learners and teachers.

For those seeking more detail, I list a few of what Terry Anderson and I described as ‘Connectivist-generation’ pedagogical models. These are far more applicable to native online learning than the earlier pedagogical generations that were invented for an in-person context. In my book I am now describing this new, digitally native generation as ‘complexivist’ pedagogies, which I think is a more accurate and less confusing name. It also acknowledges that many theories and models in the family (such as John Seely Brown’s distributed cognitive apprenticeship) predate Connectivism itself. The term comes from Davis and Sumara’s 2006 book, ‘Complexity and Education’, which is a great read that deserves more attention than it received when it was published.

Slides: Technology, technique and teaching

Beyond learning outcomes

What we teach, what a student learns, what we assess This is a slide deck for a talk I’m giving today, at a faculty workshop, on the subject of learning outcomes.

I think that well-considered learning outcomes can be really helpful when planning and designing learning activities, especially where there is a need to assess learning. They can help keep a learning designer focused, and serve as a reminder to ensure that assessment activities actually make a positive contribution to learning. They can also be helpful to teachers while teaching, as a framework to keep them on track (if they wish to remain on track). However, that’s about it. Learning outcomes are not useful when applied to bureaucratic ends, they are, as a rule, very poor descriptors of the learning that actually happens, and they are of very little (if any) use to students under most circumstances (there are exceptions – it’s a design issue, not a logical flaw).

The big point of my talk, though, is that we should be measuring what students have actually learned, not whether they have learned what we think we have taught, and that the purpose of everything we do should be to support learning, not to support bureaucracy.

I frame this in terms of the relationships between:

  • what we teach (what we actually teach, not just what we think we are teaching, including stuff like attitudes, beliefs, methods of teaching, etc),
  • what a student learns in the process (an individual student, not students as a whole), and
  • what we assess (formally and summatively, not necessarily as part of the learning process).

There are many things that we teach that any given student will not learn, albeit that (arguably) we wouldn’t be teaching at all if learning were not happening for someone. Most students get a small subset of that. There are also many things that we teach without intentionally teaching, not all of them good or useful.

There are also very many things that students learn that we do not teach, intentionally or otherwise. In fact, it is normal for us to mandate this as part of a learning design: any mildly creative or problem-solving/inquiry-oriented activity will lead to different learning outcomes for every learner. Even in the most horribly regimented teaching contexts, students are the ones that connect everything together, and that’s always going to include a lot more than what their teachers teach.

Similarly, there are lots of things that we assess that we do not teach, even with great constructive alignment. For example, the students’ ability to string a sentence together tends to be not just a prerequisite but something that is actively graded in typical assessments.

My main points are that, though it is good to have a teaching plan (albeit that it should be flexible,  reponsive to student needs, and should accommodate serendipity)learning :

  • students should be participants in planning outcomes and
  • we should assess what students actually learn, not what we think we are teaching.

From a learning perspective, there’s less than no point in summatively judging what learners have not learned. However, that’s exactly what most institutions actually do. Assessment should be about how learners have positively changed, not whether they have met our demands.

This also implies that students should be participants in the planning and use of learning outcomes: they should be able to personalize their learning, and we should recognize their needs and interests. I use andragogy to frame this, because it is relatively uncontroversial, is easily understood, and doesn’t require people to change everything in their world view to become better teachers, but I could have equally used quite a large number of other models. Connectivism, Communities of Practice, and most constructivist theories, for instance, force us to similar conclusions.

I suggest that appreciative inquiry may be useful as an approach to assessment, inasmuch as the research methodology is purpose-built to bring about positive change, and its focus on success rather than failure makes sense in a learning context.

I also suggest the use of outcome mapping (and its close cousin, outcome harvesting) as a means of capturing unplanned as well as planned outcomes. I like these methods because they only look at changes, and then try to find out what led to those changes. Again, it’s about evaluation rather than judgment.

DT&L2018 spotlight presentation: The Teaching Gestalt

The teaching gestalt  presentation slides (PDF, 9MB)

This is my Spotlight Session from the 34th Distance Teaching & Learning Conference, at Wisconsin Madison, August 8th, 2018. Appropriately enough, I did this online and at a distance thanks to my ineptitude at dealing with the bureaucracy of immigration. Unfortunately my audio died as we moved to the Q&A session so, if anyone who was there (or anyone else) has any questions or observations, do please post them here! Comments are moderated.

The talk was concerned with how online learning is fundamentally different from in-person learning, and what that means for how (or even whether) we teach, in the traditional formal sense of the word.

Teaching is always a gestalt process, an emergent consequence of the actions of many teachers, including most notably the learners themselves, which is always greater than (and notably different from) the sum of its parts. This deeply distributed process is often masked by the inevitable (thanks to physics in traditional classrooms) dominance of an individual teacher in the process. Online, the mask falls off. Learners invariably have both far greater control and far more connection with the distributed gestalt. This is great, unless institutional teachers fight against it with rewards and punishments, in a pointless and counter-productive effort to try to sustain the level of control that is almost effortlessly attained by traditional in-person teachers, and that is purely a consequence of solving problems caused by physical classroom needs, not of the needs of learners. I describe some of the ways that we deal with the inherent weaknesses of in-person teaching especially relating to autonomy and competence support, and observe how such pedagogical methods are a solution to problems caused by the contingent side effects of in person teaching, not to learning in general.

The talk concludes with some broad characterization of what is different when teachers choose to let go of that control.  I observe that what might have been Leonardo da Vinci’s greatest creation was his effective learning process, without which none of the rest of his creations could have happened. I am hopeful that now, thanks to the connected world that we live in, we can all learn like Leonardo, if and only if teachers can learn to let go.

Evidence mounts that laptops are terrible for students at lectures. So what?

The Verge reports on a variety of studies showing that taking notes with laptops during lectures results in decreased learning compared with notes taken using pen and paper. This tells me three things, none of which is what the article is aiming to tell me:

  1. That the institutions are teaching very badly. Countless decades of far better evidence than that provided in these studies show that giving lectures with the intent of imparting information is close to being the worst way to teach. Don’t blame the students for poor note-taking; blame the institutions for poor teaching. Students should not be put in such an awful situation (nor should teachers, for that matter). If students have to take notes in your lectures then you are doing it wrong.
  2. That the students are not skillful laptop note-takers. These studies do not imply that laptops are bad for note-taking, any more than giving students violins that they cannot play implies that violins are bad for making music. It ain’t what you do, it’s the way that you do it. If their classes depend on effective note-taking then teachers should be teaching students how to do it. But, of course, most of them probably never learned to do it well themselves (at least using laptops). It becomes a vicious circle.
  3. That laptop and, especially, software designers have a long way to go before their machines disappear into the background like a pencil and paper. This may be inherent in the medium, inasmuch as a) they are vastly more complex toolsets with much more to learn about, and b) interfaces and apps constantly evolve so, as soon as people have figured one out, everything changes under their feet. It becomes a vicious circle.

The extra cognitive load involved in manipulating a laptop app (and stopping the distractions that manufacturers seem intent on providing, even if you have the self-discipline to avoid proactively seeking them yourself) can be a hindrance unless you are proficient to the point that it becomes an unconscious behaviour. Few of us are.

Tablets are a better bet, for now, though they too are becoming overburdened with unsought complexity and unwanted distractions. I have for a couple of years now been taking most of my notes at conferences and the like with an Apple Pencil and an iPad Pro, because I like the note-taking flexibility, the simplicity, the lack of distraction (albeit that I have to actively manage that), and the tactile sensation of drawing and doodling. All of that likely contributes to making it easier to remember the stuff that I want to remember. The main downside is that, though I still gain the laptop-like benefits of everything being in one place, of digital permanence, and of it being distributed to all my devices, I have, in the process, lost a bit in terms of searchability and reusability. I may regret it in future, too, because graphic formats tend to be less persistent over decades than text.

On the bright side, using a tablet, I am not stuck in one app. If I want to remember a paper or URL (which is most of what I normally want to remember, other than my own ideas and the connections sparked by the speaker) I tend to look it up immediately and save it to Pocket so that I can return to it later, and I still make use of a simple notepad for things I know I will need later. Horses for courses, and you get a lot more of both with a tablet than you do with a pencil and paper. And, of course, I can still use pen and paper if I want a throwaway single-use record – conference programs can be useful for that.

Address of the bookmark: https://www.theverge.com/2017/11/27/16703904/laptop-learning-lecture

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2871283/evidence-mounts-that-laptops-are-terrible-for-students-at-lectures-so-what