Educational ends and means: McNamara’s Fallacy and the coming robot apocalypse (presentation for TAMK)


These are the slides that I used for my talk with a delightful group of educational leadership students from TAMK University of Applied Sciences in Tampere, Finland at (for me) a somewhat ungodly hour Wednesday night/Thursday morning after a long day. If you were in attendance, sorry for any bleariness on my part. If not, or if you just want to re-live the moment, here is the video of the session (thanks Mark!).

The brief that I was given was to talk about what generative AI means for education and, if you have been following any of my reflections on this topic then you’ll already have a pretty good idea of what kinds of issues I raised about that. My real agenda, though, was not so much to talk about generative AI as to reflect on the nature and roles of education and educational systems because, like all technologies, the technology that matters in any given situation is the enacted whole rather than any of its assembled parts. My concerns about uses of generative AI in education are not due to inherent issues with generative AIs (plentiful though those may be) but to inherent issues with educational systems that come to the fore when you mash the two together at a grand scale.

The crux of this argument is that, as long as we think of the central purposes of education as being the attainment of measurable learning outcomes or the achievement of credentials, especially if the focus is on training people for a hypothetical workplace, the long-term societal effects of inserting generative AIs into the teaching process are likely to be dystopian. That’s where Robert McNamara comes into the picture. The McNamara Fallacy is what happens when you pick an aspect of a system to measure, usually because it is easy, and then you use that measure to define success, choosing to ignore or to treat as irrelevant anything that cannot be measured. It gets its name from Robert McNamara, US Secretary of Defense during the Vietnam war, who famously measured who was winning by body count, which is probably among the main reasons that the US lost the war.

My concern is that measurable learning outcomes (and, still less, the credentials that signify having achieved them) are not the ends that matter most. They are, rather, means to achieve far more complex, situated, personal and social ends that lead to happy, safe, productive societies and richer lives for those within them. While it does play an important role in developing skills and knowledge, education is thus more fundamentally concerned with developing values, attitudes, ways of thinking, ways of seeing, ways of relating to others, ways of understanding and knowing what matters to ourselves and others, and finding how we fit into the social, cultural, technological, and physical worlds that we inhabit. These critical social, cultural, technological, and personal roles have always been implicit in our educational systems but, at least in in-person institutions, they seldom need to be made explicit because they are inherent in the structures and processes that have evolved over many centuries to meet this need. This is why naive attempts to simply replicate the in-person learning experience online usually fail: they replicate the intentional teaching activities but neglect to cater for the vast amounts of learning that occur simply due to being in a space with other people, and all that emerges as a result of that. It is for much the same reasons that simply inserting generative AI into existing educational structures and systems is so dangerous.

If we choose to measure the success or failure of an educational system by the extent to which learners achieve explicit learning outcomes and credentials, then the case for using generative AIs to teach is extremely compelling. Already, they are far more knowledgeable, far more patient, far more objective, far better able to adapt their teaching to support individual student learning, and far, far cheaper than human teachers. They will get better. Much better. As long as we focus only on the easily measurable outcomes and the extrinsic targets, simple economics combined with their measurably greater effectiveness means that generative AIs will increasingly replace teachers in the majority of teaching roles.  That would not be so bad – as Arthur C. Clarke observed, any teacher that can be replaced by a machine should be – were it not for all the other more important roles that education plays, and that it will continue to play, except that now we will be learning those ways of being human from things that are not human and that, in more or less subtle ways, do not behave like humans. If this occurs at scale – as it is bound to do – the consequences for future generations may not be great. And, for the most part, the AIs will be better able to achieve those learning outcomes themselves – what is distinctive about them is that they are, like us, tool users, not simply tools – so why bother teaching fallible, inconsistent, unreliable humans to achieve them? In fact, why bother with humans at all? There are, almost certainly, already large numbers of instances in which at least part of the teaching process is generated by an AI and where generative AIs are used by students to create work that is assessed by AIs.

It doesn’t have to be this way. We can choose to recognize the more important roles of our educational systems and redesign them accordingly, as many educational thinkers have been recommending for considerably more than a century. I provide a few thoughts on that in the last few slides that are far from revolutionary but that’s really the point: we don’t need much novel thinking about how to accommodate generative AI into our existing systems. We just need to make those systems work the way we have known they should work for a very long time.

Download the slides | Watch the video

I am a professional learner, employed as a Full Professor and Associate Dean, Learning & Assessment, at Athabasca University, where I research lots of things broadly in the area of learning and technology, and I teach mainly in the School of Computing & Information Systems. I am a proud Canadian, though I was born in the UK. I am married, with two grown-up children, and three growing-up grandchildren. We all live in beautiful Vancouver.

5 Comments

  1. Yes, if we take a cynical, uncaring attitude to education, we can mess up learning without computer assistance.

    The transcripts of conversations between President Nixon and senior people in his administration show that the McNamara Fallacy was the least of their worries. What might be called the Robodebt fallacy is blaming a technical system for what is actually a deliberate, callous human decision. Nixon’s priority in Vietnam was not to win the war, but to win the next US election. In the case of the Australian Robodebt debacle, a group of senior government people decided to use a crude computer algorithm to target vulnerable welfare recipients for debt recovery. Similarly with the UK Post Office fiasco.

    1. Jon Dron says:

      Absolutely: the point is that we have to look at the whole technology, very much including the things that we do with it, not the individual parts in isolation. But it is very rarely cynicism and lack of care that causes a laser-sharp focus on learning outcomes and credentials to the exclusion of everything else. For the most part it is a genuine and often passionately held belief that such measures will improve quality. This was certainly true of McNamara. In my role as associate dean I spend far too much of my time making sure that outcomes are measured and achieved and, in fairness, this is important, because we should at least be doing what we intend and claim to do. The problem is that this is the stuff that helps the machine of education achieve its purpose, but it is not its purpose. When we replace the human parts – teachers and students – with machines that can teach and achieve the measured outcomes at least as well as humans, we are also replacing the very many things they do that we do not or cannot measure, and that actually matter more. We take the signals to be the thing itself. That’s the problem. Evil individuals can sometimes make use of this phenomenon to achieve their own ends, and they thrive on damn lies and statistics that give the illusion of objectivity, but it’s mostly the good but blinkered folk caught up in the system, doing things for all the right reasons, who cause the greater harm.

      1. You can’t see and do everything, just do your bit as best you can. We can use AI where it makes sense to do so, both in terms of meeting the measures set, and to benefit the students. We can also educate the educators to avoid falling into the trap of relying too much on the technology. As an Honorary Lecturer I am only a small cog in the education machine, but I still have a reasonable level of autonomy to do my teaching the way I want. Where not, I go to a meeting of my professional body, and ask them to change the accreditation rules for all the universities in Australia and, through international accords, the world.

        1. Jon Dron says:

          We all rely on technology, whether it be ways of teaching, language, books, whiteboards, accreditation rules, or generative AIs. The skill is in choosing the right ones for the right purposes, and in the techniques that we bring to assembling them. All will have unwanted side-effects – that’s what Postman called the Faustian Bargain of technology – and all will be affected by how they are assembled with all the rest, many of which are likely to be counter-technologies to others, which is why the whole matters more than the parts. It’s true that we can’t and shouldn’t even want to see everything – there are very good reasons to black-box a lot of them – and there is no way we can predict in any detail the complex interactions of the ones we use, the ones the learners use, and the myriad other technologies in the system. And it is true that immediate concerns will inevitably shout louder than the things beyond our immediate responsibilities. However, I think we should try: very often, terrible problems result directly from everyone doing what is reasonable in their immediate context but that leads to something profoundly wrong on a greater scale, our effects on the environment being an obvious and pressing example. It’s a systems thing.

          1. Yes, we need to weigh up the benefits and costs of technology, and particularly which fall on whom. There may be unexpected side effects, but we can do something about the “Somebody Else’s Problem” expected side effects, where someone gets the benefits and someone else gets the problems. Recently I was interviewed by Sky News in Australia about a boom in data centers being installed, and their increased energy use. This is likely to increase due to AI requiring more computing power. When you ask AI to do something, you don’t think “How much coal did that burn?”. It could be a good question for students of Green ICT Strategies at Athabasca.
