Can The Sims Show Us That We’re Inherently Good or Evil?

As it turns out, yes – despite temptations to be unkind.

The good news is that we are intuitively altruistic. This doesn’t necessarily mean we are born that way. It is probably learned behaviour that co-evolves with that of those around us. The hypothesis on which this research is based (with good grounding) is that we learn through repeated interactions to behave kindly to others. At least, the vast majority of us do. A few jerks (as the researchers discovered) are not intuitively generous, and everyone behaves selfishly or unkindly sometimes. This is mainly because there are such jerks around, though sometimes because the perceived rewards for being a jerk might outweigh the benefits. Indeed, in almost all moral decisions, we tend to weigh benefits against harm, and it is virtually impossible to do anything at all without at least some harm being caused in some way, so even the nicest of us are jerks to at least some people. It might upset the person who gave you a beautiful scarf that you wrecked it while saving a drowning child, for instance. Donating to a charity might reduce the motivation of governments to intervene in humanitarian crises. Letting a car change lanes in front of you slows everyone in the queue behind you. Very many acts of kindness have costs to others. But, on the whole, we tend towards kindness, if only as an attitude. There is plentiful empirical evidence that this is true, some of which is referred to in the article. The researchers sought an explanation at a systemic, evolutionary level.

The researchers developed a simulation of a Prisoner’s Dilemma scenario. Traditional variants on the game use rational agents that weigh up defection and cooperation over time in deciding whether or not to defect, using a variety of different rules (the most effective of which is usually the simplest, ‘tit-for-tat’). Their twist was to allow agents to behave ‘intuitively’ under some circumstances. Some agents were intuitively selfish, some not. In predominantly multiple-round games, “the winning agents defaulted to cooperating but deliberated if the price was right and switched to betrayal if they found they were in a one-shot game.” In predominantly one-shot games – not the norm in human societies – the always-cooperative agents died out completely. Selfish agents that deliberated did not do well in any scenario. As ever, ubiquitous selfish behaviour in a many-round game means that everyone loses, especially the selfish players. So, wary cooperation is a winning strategy when most other people are kind, and, because it benefits everyone, it is a winning strategy for societies and favoured by evolution. The explanation, they suggest, is that:

when your default is to betray, the benefits of deliberating—seeing a chance to cooperate—are uncertain, depending on what your partner does. With each partner questioning the other, and each partner factoring in the partner’s questioning of oneself, the suspicion compounds until there’s zero perceived benefit to deliberating. If your default is to cooperate, however, the benefits of deliberating—occasionally acting selfishly—accrue no matter what your partner does, and therefore deliberation makes more sense.

This accords with our natural inclinations. As Rand, one of the researchers, puts it: “It feels good to be nice—unless the other person is a jerk. And then it feels good to be mean.” If there are no rewards for being a jerk under any circumstances, or the rewards for being kind are greater, then perhaps we can all learn to be a bit nicer.
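The game mechanics are simple enough to sketch. Here is a minimal iterated Prisoner’s Dilemma in Python – a toy illustration using conventional payoff values, not the researchers’ actual model, which added intuitive versus deliberative modes and an evolutionary layer – showing why ubiquitous defection in a many-round game means everyone loses:

```python
# A minimal iterated Prisoner's Dilemma, for illustration only.
# Conventional payoffs: mutual cooperation = 3 each, mutual defection = 1 each,
# successful betrayal = 5, being betrayed = 0.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play_match(strategy_a, strategy_b, rounds):
    """Play an iterated match in which each strategy sees only the
    opponent's previous move (None on the first round)."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

def tit_for_tat(opponent_last):
    """Cooperate first, then copy whatever the opponent did last."""
    return 'C' if opponent_last is None else opponent_last

def always_defect(opponent_last):
    """Unconditional selfishness."""
    return 'D'
```

With these payoffs, a pair of tit-for-tat players settles into mutual cooperation and scores 30 each over ten rounds, while a pair of defectors scores just 10 each; the defector only profits when paired with an unwary cooperator.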

The really good news is that, because such behaviour is learned, selfish behaviour can be modified and intuitive responses can change. In experiments, the researchers have demonstrated that this can occur within less than half an hour, albeit in a single, very limited and artificial context. The researchers suggest that, in situations that reward back-stabbing and ladder-climbing (the norm in corporate culture), all it should take is a little top-down intervention, such as bonuses and recognition for helpful behaviour, to set in motion a cultural change that will ultimately become self-sustaining. I’m not totally convinced by that – extrinsic reward does not make lessons stick, and the learning is lost the moment the reward is taken away. However, because cooperation is inherently better for everyone than selfishness, perhaps those that are driven by such things might realize that the extrinsic rewards they crave are far better achieved through altruism than through selfishness, as long as most people are acting that way most of the time, and this might be a way to help create such a culture. Getting rid of divisive and counter-productive extrinsic motivation, such as performance-related pay, might be a better (or at least complementary) long-term approach.

Address of the bookmark: http://nautil.us/issue/37/currents/selfishness-is-learned

This is the Teenage Brain on Social Media

An article in Neuroscience News about a recent (paywalled – grr) brain-scan study of teenagers, predictably finding that having your photos liked on social media sparks off a lot of brain activity, notably in areas associated with reward, as well as social activity and visual attention. So far, so-so – and a bit odd that this is what Neuroscience News chose to focus on, because it is only a small subsection of the study and by far the least interesting part. What’s really interesting to me about the study is that the researchers mainly investigated the effects of existing likes (or, as they put it, ‘quantifiable social endorsements’) on whether teens liked a photo, scanning their brains while they did so. As countless other studies (including mine) have suggested, not just for teens, the effects were significant. As many studies have previously shown, photos endorsed by peers – even strangers – are a great deal more likely to be liked, regardless of their content. The researchers actually faked the likes, and noted that the effect was the same whether showing ‘neutral’ content or risky behaviours like smoking and drinking. Unlike most existing studies, the researchers feel confident to describe this in terms of peer approval and conformity, thanks to the brain scans. As the abstract puts it:

“Viewing photos with many (compared with few) likes was associated with greater activity in neural regions implicated in reward processing, social cognition, imitation, and attention.”

The paper itself is a bit fuzzy about which areas are activated under which conditions: not being adept at reading brain scans, I am still unsure whether social cognition played a similarly important role when subjects saw likes of their own photos as when they saw others’ photos liked by many people, though there are clearly some significant differences between the two. This bothers me a bit because, within the discussion of the study itself, they say:

“Adolescents model appropriate behavior and interests through the images they post (behavioral display) and reinforce peers’ behavior through the provision of likes (behavioral reinforcement). Unlike offline forms of peer influence, however, quantifiable social endorsement is straightforward, unambiguous, and, as the name suggests, purely quantitative.”

I don’t think this is a full explanation, as it is confounded by the instrument used. An alternative, plausible explanation is that, when unsure of our own judgement, we use other cues (which, in this case, can only ever come from other people, thanks to the design of the system) to help make up our minds. A similar effect might well have been observed using other cues such as, for example, list position or size, with no reference to how many others had liked the photos. Most of us (at least, most that don’t know how Google works) do not see the ordering of Google Search results as social endorsement, though that is exactly what it is, but list position is incredibly influential in our choice of links to click and, presumably, in our neural responses to such items on the page. It would be interesting to further explore the extent to which the perception of value comes from the fact that an image is liked by peers, as opposed to the fact that the system itself (a proxy expert) is highlighting it as important. My suspicion is that there might be a quantifiable social effect, at least in some subjects, but it might not be as large as that shown here. There’s very good evidence that subjects scanned much-liked photos with greater care, which accords with other studies in the area, though it does not necessarily correlate with greater social conformity. As ever, we look for patterns and highlights to help guide our behaviours – we do not and cannot treat all data as equal.

There’s a lot of really interesting stuff in this apart from that, though. I am particularly interested in the activation of the frontal gyrus, previously associated with imitation, when looking at much-liked photos. This is highly significant in the transmission of memes, as well as in social learning generally.

Address of the bookmark: http://neurosciencenews.com/nucleus-accumbens-social-media-4348/

Bigotry and learning analytics

Unsurprisingly, when you use averages to make decisions about actions concerning individual people, those decisions reinforce biases. This is exactly the basis of bigotry, racism, sexism and a host of other well-known evils, so programming such bias into analytics software is beyond a bad idea. This article describes how algorithmic systems are used to help make decisions about things like bail and sentencing in courts. Though race is not explicitly taken into account, correlates like poverty and acquaintance with people that have police records are included. In a perfectly vicious circle, the system reinforces its biases over time. To make matters worse, this particular system uses secret algorithms, so there is no accountability and not much of a feedback loop to improve them if they are in error.

This matters to educators because this is very similar to what much learning analytics does too (there are exceptions, especially when used solely for research purposes). It looks at past activity, however that is measured, compares it to more or less discriminatory averages or similar aggregates of other learners’ past activity, and then attempts to guide future behaviour of individuals (teachers or students) based on the differences. This latter step is where things can go badly wrong, but there would be little point in doing it otherwise. The better examples inform rather than adapt, allowing a human intermediary to make decisions, but that’s exactly what the algorithmic risk assessment described in the article does too and it is just as risky. The worst examples attempt to directly guide learners, sometimes adapting content to suit their perceived needs. This is a terribly dangerous idea.
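To see how such a vicious circle can work, here is a deliberately crude toy model (my own illustration, not drawn from the article or from any real analytics system) in which a rule that allocates support according to group averages turns a tiny initial difference between two otherwise identical groups into a widening gap:

```python
# Toy model of a vicious circle: decisions driven by group averages
# amplify an initially tiny difference between two identical groups.

def run_feedback_loop(avg_a, avg_b, steps, boost=2.0):
    """At each step, give extra support (the boost) only to the group
    with the higher current average, as a naive adaptive rule might.
    Returns the two averages after the given number of steps."""
    for _ in range(steps):
        if avg_a >= avg_b:
            avg_a += boost
        else:
            avg_b += boost
    return avg_a, avg_b

# A one-point head start becomes an eleven-point gap after five rounds.
final_a, final_b = run_feedback_loop(51.0, 50.0, steps=5)
```

The point is not the particular numbers but the structure: because the rule’s outputs feed its future inputs, any initial bias compounds over time.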

Address of the bookmark: http://boingboing.net/2016/05/24/algorithmic-risk-assessment-h.html

A blueprint for breakthroughs: Federally funded education research in 2016 and beyond | Christensen Institute

An interesting proposal from Horn & Fisher that fills in one of the most gaping holes in conventional quantitative research in education (specifically randomized controlled trials, but also less rigorous efforts like A/B testing) by explicitly looking at the differences in those that do not fit the average curve – the ones that do not benefit, or that benefit to an unusual degree: the outliers. As the authors say:

“… the ability to predict what works, for which students, in what circumstances, will be crucial for building effective, personalized-learning environments. The current education research paradigm, however, stops short of offering this predictive power and gets stuck measuring average student and sub-group outcomes and drawing conclusions based on correlations, with little insight into the discrete, particular contexts and causal factors that yield student success or failure. Those observations that do move toward a causal understanding often stop short of helping understand why a given intervention or methodology works in certain circumstances, but not in others.”

I have mixed feelings about this. Yes, this process of iterative refinement is a much better idea than simply looking at improvements in averages (with no clear causal links), and they are entirely right to critique those that use such methods, but:

a) I don’t think it will ever succeed in the way it hopes, because every context is significantly different and this is a complex design problem, in which even minuscule differences can have huge effects. Learning never repeats twice. Though much improved on what it replaces, it is still trying to make sense through the tools of reductive materialism, whereas what we are dealing with, and what the authors’ critique implies, is a different kind of problem. Seeking this kind of answer is like seeking the formula for painting a masterpiece. It is only ever partially (at best) about methodologies and techniques, and it is always possible to invent new ones that change everything.

b) It relies on the assumption that we know exactly what we are looking for: that what we seek to measure is the thing that matters. It might be exactly what is needed for personalized education (where you find better ways to make students behave the way you want them to behave) but exactly the opposite for personal education (where every case is different, where education is seen as changing the whole person in unfathomably rich and complex ways).

That said, I welcome any attempt to stop the absurdity of intervening in ways that benefit the (virtually non-existent) average student, and instead to focus on each student. This is a step in the right direction.

 

augmented research cycle

Address of the bookmark: http://www.christenseninstitute.org/publications/a-blueprint-for-breakthroughs/

Universities can’t solve our skills gap problem, because they caused it | TechCrunch

Why this article is wrong

This article is based on a flawed initial premise: that universities are there to provide skills for the marketplace. From that perspective, as the writer, Jonathan Munk, suggests, there are gaps both between what universities generally support and what employers generally need, and between students’ and employers’ perceptions of the skills students actually possess. If we assume that the purpose of universities is to churn out market-ready workers with employer-friendly skills, they are indeed singularly failing and will likely continue to do so. As Munk rightly notes:

“… universities have no incentive to change; the reward system for professors incentivizes research over students’ career success, and the hundreds of years of institutional tradition will likely inhibit any chance of change. By expecting higher education to take on closing the skills gap, we’re asking an old, comfortable dog to do new tricks. It will not happen.”

Actually, quite a lot of us, and even quite a few governments (USA notwithstanding), are pretty keen on the teaching side of things, but Munk’s analysis is substantially correct and, in principle, I’m quite comfortable with that. There are far better, cheaper and faster ways to get most marketable job skills than to follow a university program, and providing such skills is not why we exist. This is not to say that we should not do such things. For pedagogical and pragmatic reasons, I am keen to make it possible for students to gain useful workplace skills from my courses, but it has little to do with the job market. It’s mainly because it makes the job of teaching easier, leads to more motivated students, and keeps me on my toes, having to stay in touch with the industry in my particular subject area. Without that, I would not have the enthusiasm needed to build or sustain a learning community, I would be seen as uninterested in the subject, and what I’d teach would be perceived as less relevant, and would thus be less motivating. That’s also why, in principle, combining teaching and research is a great idea, especially in strongly non-vocational subjects that don’t actually have a marketplace. But, if it made more sense to teach computing with a 50-year-old language and a machine that should be in a museum, I would do so at the drop of a hat. It matters far more to me that students develop the intellectual tools to be effective lifelong learners, develop values and patterns of thinking that are commensurate with both a healthy society and personal happiness, become part of a network of learners in the area, engage with the community/network of practice, and see bigger pictures beyond the current shiny things that attract attention as a flame attracts moths. This focus on being, rather than on specific skills, is good for the student, I hope, but it is mainly good for everyone. Our customer is neither the student nor the employer: it’s our society.
If we do our jobs right then we both stabilize and destabilize societies, feeding them with people that are equipped to think, to create, to participate, reflectively, critically, and ethically: to make a difference. We also help to feed societies with ideas, theories, models and even the occasional artefact that make life better and richer for all though, to be honest, I’m not sure we do so in the most cost-effective ways. However, we do provide an open space with freedom to explore things that have no obvious economic value, without the constraints or agendas of the commercial world, nor those of dangerously partisan or ill-informed philanthropists (Zuckerberg, Gates – I’m thinking of you). We are a social good. At least, that’s the plan – most of us don’t quite live up to our own high expectations. But we do try. The article acknowledges this role:

“Colleges and universities in the U.S. were established to provide rich experiences and knowledge to their students to help them contribute to society and improve their social standing.”

Politely ignoring the US-centricity of this claim and its mild inaccuracy, I’d go a bit further: in the olden days, it was also about weeding out the lower achievers and/or, in many countries (the US again being a notable offender), those too poor to get in. Universities were (and most, AU being a noble and rare exception, still are) a filter that makes the job of recruiters easier by removing the chaff from the wheat before recruiters even get to them, and then again when we give out the credits: that’s the employment advantage. It is very seldom (directly) because of our teaching. We’re just big, expensive sieves, from that perspective. However, the article goes on to say:

“But in the 1930s, with millions out of work, the perceived role of the university shifted away from cultural perspective to developing specific trades. Over time, going to college began to represent improved career prospects. That perception persists today. A survey from 2015 found the top three reasons people chose to go to college were:

  • improved employment opportunities
  • make more money
  • get a good job”

I’m glad that Munk correctly uses the term ‘perception’, because this is not a good reason to go to a university. The good job is a side-effect, not the purpose, and it is becoming less important with each passing year. Partly this is due to market saturation and degree inflation, partly to better alternatives becoming more widespread, especially thanks to the Internet. One of the ugliest narratives of modern times is that the student should pay for their education because they will earn more money as a result. Utter nonsense. They will earn more money because they would have earned more money anyway, even if universities had never existed. The whole point of that filtering is that it tends to favour those that are smarter and thus more likely to earn more. In fact, were it not for the use of university qualifications as a pre-filter that would exclude them from a (large but dwindling) number of jobs, they would have earned far more money by going straight into the workforce. I should observe in passing that open universities like AU are not entirely immune from this role. Though they do little filtering for ability on entry, AU and other open universities do nonetheless act as filters, inasmuch as those that are self-motivated enough to handle the rigours of a distance-taught university program while otherwise engaged, usually while working, are far better candidates for most jobs than those who simply went to a university because that was the natural next step. A very high proportion of our students that make it to the end do so with flying colours, because those that survive are incredibly good survivors. I’ve seen the quality of work that comes out of this place and been able to compare it with that from the best of traditional universities: our students win hands down, almost every time. The only time I have seen anything like as good was in Delhi, where 30 students were selected for a program each year from over 3,000 fully qualified applicants (i.e. those with top grades from their schools). This despite, or perhaps because of, the fact that computing students had to sit an entrance exam that, bizarrely and along with other irrelevances, required them to know about Brownian motion in gases. I have yet to come across a single computing role where such knowledge was needed. Interestingly, they were not required to know about poetry, art, or music, though I have certainly come across computing roles where appreciation of such things would have been of far greater value.

Why this article is right

If it were just about job-ready skills like, in computing, the latest frameworks, languages and systems, the lack of job-readiness would not bother me in the slightest. However, as the article goes on to say, it is not just the ‘technical’ (in the loosest sense) skills that are the problem. The article mentions, as key employer concerns, critical thinking, creativity, and oral and written communication skills. These are things that we should very much be supporting and helping students to develop, however we perceive our other roles. In fact, though the communication stuff is mainly a technical skillset, creativity and problem-solving are pretty much what it is all about so, if students lack these things, we are failing even by our own esoteric criteria.

I do see a tension here, and a systematic error in our teaching. A goodly part of it is down to a misplaced belief that we are teaching stuff, rather than teaching a way of being. A lot of courses focus on a set of teacher-specified outcomes, and on accreditation of those set outcomes, and treat the student as (at best) input for processing or (at worst) a customer for a certificate. When the process is turned into a mechanism for outputting people with certificates, with fixed outcomes and criteria, the process itself loses all value. ‘We become what we behold’ as McLuhan put it: if that’s how we see it, that’s how it will be. This is a vicious circle. Any mechanism that churns students out faster or more efficiently will do. In fact, a lot of discussion and design in our universities is around doing exactly that. For example, the latest trend in personalization (a field, incidentally, that has been around for decades) is largely based on that premise: there is stuff to learn, and personalization will help you to learn it faster, better and cheaper than before. As a useful by-product, it might keep you on target (our target, not yours).  But one thing it will mostly not do is support the development of critical thinking, nor will it support the diversity, freedom and interconnection needed for creative thinking. Furthermore, it is mostly anything but social, so it also reduces capacity to develop those valuable social communication skills. This is not true of all attempts at personalization, but it is true of a lot of them, especially those with most traction. The massive prevalence of cheating is directly attributable to the same incorrect perception: if cheating is the shortest path to the goal (especially if accompanied by a usually-unwarranted confidence in avoiding detection) then of course quite a few people will take it. The trouble is, it’s the wrong goal. Education is a game that is won through playing it well, not through scoring.

The ‘stuff’ has only ever been raw material, a medium and context for the really important ways of being, doing and thinking that universities are mostly about. When the stuff becomes the purpose, the purpose is lost. So, universities are trying and, inevitably, failing to be what employers want, and in the process failing to do what they are actually designed to do in the first place. It strikes me that everyone would be happier if we just tried to get back to doing what we do best. Teaching should be personal, not personalized. Skills should be a path to growth, not to employment. Remembered facts should be the material, not the product. Community should be a reason for teaching, not a means by which it occurs. Universities should be places we learn to be, not places we be to learn. They should be purveyors of value, not of credentials.

 

Address of the bookmark: http://techcrunch.com/2016/05/08/universities-cant-solve-our-skills-gap-problem-because-they-caused-it/

What’s So New about the New Atheists? – Virtual Canuck

This is a nicely crafted, deeply humanist, gentle and thought-provoking sermon, given by Terry Anderson to members of his Unitarian church on atheistic thinking and values.

I have a lot of sympathy with the Unitarians. A church that does not expect belief in any gods or higher powers; that welcomes members with almost any theistic, deistic, agnostic or atheistic persuasions; that mostly eschews hierarchies and power structures; that focuses on the value of community; that is open to exploring the mysteries of being, wherever they may be found; that does good things for and with others; and that promotes tolerance and understanding of all people and all ideas is OK with me. It’s kind of a club for the soul (as in ‘soul music’, not as in ‘immaterial soul’). As Terry observes, though, it does have some oddness at its heart. It’s a bit like Christianity, without the Christ and without the mumbo jumbo, but it still retains some relics of its predominantly Christian ancestry. Terry focuses (amongst other things) on the word ‘faith’ as being a particularly problematic term in at least one of its meanings.

For all their manifest failings and the evils they are used to justify or permit, religious teachings can often provide a range of useful perspectives on the universe, as long as we don’t take them any more seriously than fairy tales or poetry: which is to say, very seriously at some levels, not at all seriously in what they tell us of how to act, what to believe, or what they claim to have happened. And, while the whole ‘god’ idea is, at the very best, metaphorical, I think the metaphor has potential value. Whether you believe in, disbelieve in, or dismiss deities as nonsense (to be clear, depending on the variant, I veer between disbelief and outright dismissal), it is extremely important to retain a notion of the sacred – a sense of wonder, humbleness, awe, majesty, etc. – and a strong reflective awareness of the deeply connected, meaning-filled lives of ourselves and others, and of our place in the universe. For similar reasons, I am happy to use an equally fuzzy word like ‘soul’ for something lacking existential import, but meaningful as a placeholder for something that the word ‘mind’ fails to address. It can be helpful in reflection, discussion and meditation, as well as poetry. There are beautiful souls, tortured souls, and more: few other words will do. I also think that there is great importance in rituals and shared, examined values, in things that give us common grounding to explore the mysteries and wonders of what is involved in being a human being, living with other human beings, on a fragile and beautiful planet, itself a speck in a staggeringly vast cosmos. This sermon, then, offers useful insights into a way of quasi-religious thinking that does not rely on a nonsensical belief system but that still retains much of the value of religions.
I’m not tempted to join the Unitarians (like Groucho, I am suspicious of any club that would accept me as a member), but I respect their beliefs (and lack of beliefs), and respect even more their acknowledgement of their own uncertainties and their willingness to explore them.

Address of the bookmark: http://virtualcanuck.ca/2016/04/27/whats-so-new-about-the-new-atheists/

Online Learning: Why Libraries Could Be the Key to MOOCs’ Success | MindShift

Thanks to Gerald Ardito for pointing this one out to me. It’s about the growing use of libraries for learning circles, where groups of learners get together locally to study, in this case around MOOCs provided via P2PU. Librarians – rarely subject-matter experts – organize these groups and provide support for the process, but most of the learning engagement is peer-to-peer. As the article notes, the process is quite similar to that of a book club.

 

Learning circle at a library

 

As the article suggests, such learning circles are popping up all over the place, not just in libraries. Indeed, the Landing has been used by our students to arrange quite similar study-buddy groups at AU, albeit with less formal organization and intent, and not always working on the same courses together. Though there are benefits to be had from co-constructing knowledge together, people do not necessarily need to be working on the same thing. Simply being there to support, enthuse, or inspire one another is often enough to bring real benefits. There are two models, both of which work. The first, as in the case of these learning circles, is to use central coordination online, with local communities working on the same things at roughly the same times. The second is organized the other way round, with the local communities providing the centre, but with individuals working online in different contexts.

This blurring between local and online is a growing and significant trend. It somewhat resembles the pattern of business and innovation centres that bring together people from many organizations, each working remotely from their own, in a shared local space. Doing different things in physical spaces shared with other people helps to overcome many of the issues of isolation experienced by online workers and learners, especially in terms of motivation, without the need to move everyone in an organization (be it a university, a class, or a company) into the same physical location. It adds economies of scale, too, allowing the use of shared resources (printers, 3D printers, heating, conferencing facilities, and so on), and reduces environmentally and psychologically costly issues around commuting and relocating. Moreover, decoupling location and work while supporting physical community brings all the benefits of diversity that, in a traditional organization or classroom, tend to get lost. Working online does not and should not interfere with local connection with real human beings, and this is a great way to support our need to be with other people, and the value that we get from being with them. From the perspective of the environment, our local communities, our psychological well-being, our relationships, our creativity, and our bank balances, local communities and remote working, or remote communities and local working, both seem far more sensible, at least for many occupations and many kinds of learning.

The article reports completion rates of 45-55%, which is at least an order of magnitude greater than the norm for MOOCs, although it would be unwise to read too much into that because of the self-selection bias inherent in this: it might well be that those who were sufficiently interested to make the effort to visit the libraries would be those that would persist anyway. However, theory and experience both suggest that the benefits of getting together at one place and time should lead to far greater motivation to persist. Going somewhere with other people at a particular time to do something is, after all, pretty much the only significant value in most lectures. This is just a more cost-effective, learning-effective, human way of doing that.


Address of the bookmark: http://ww2.kqed.org/mindshift/2016/04/25/online-learning-why-libraries-could-be-the-key-to-moocs-success/

Interview with Maiga Chang

A nice interview in AUSU’s Voice Magazine – continued at https://www.voicemagazine.org/articles/featuredisplay.php?ART=11372 – with SCIS’s own Maiga Chang, describing his teaching and research. Maiga’s bubbly enthusiasm comes through strongly in this, and his responses are filled with great insights. I particularly like (in the second part of the interview) his thoughts on what makes Athabasca University so distinctive, and its value in the future of learning:

What are the benefits of teaching at AU compared to traditional universities?
"There are differences. They are different from traditional university and AU because we are almost purely online as a university. We teach students with a lot of help from technology. So, in that case, I would say that teaching at AU that we are the
pioneers of teaching students with technology, artificial intelligence applications, learning analytics – everything. I would say that this kind of teaching and learning should be the future. As you know, some people start to work on full time jobs after K-12 and some of them go to university for another four years, which means they only learn in traditional classroom or in traditional setting for 12 to 16, maybe 18 years.

How long will you live? How long will you need to learn? You will need to learn for your whole life. When you graduate from high school and university, you cannot go back to university unless you want to quit a job when you want to learn once again. You will need another way of doing life-long learning.

AU gives us the opportunity to create a kind of smart learning environment. So if we can use our research results to make a smarter learning environment, then we can provide students with more personalized learning experiences, which can make them learn more efficient, and learn the things that they really need and want to see on their own way and own pace. That is another good thing for students, I would say, teaching at AU.

What do you think are the strengths of learning at AU?
This is the future. Like the students right now in high school and in primary school, you can ask them. They are trying to use mobile devices to learn. Also, as you know, they will post something on their Facebook or their blog. That is the future. As a parent, around 50% of students at AU have family, even children. When they learn at AU, they are adapting to the future of learning, and, in that case, when their child or children have a question. In my upbringing, I could not ask questions of my parents about using Facebook, but right now, you can, because people use Facebook. Now when you’re taking an AU course, you are sometimes asked to make a video, put it on YouTube, and then you can teach your children, your child.

One more thing is very important. It is self-regulated learning skill. It is very important for everyone because it helps you efficiently learn, or digest, or plan your goal. When you learn with AU, you will learn that kind of skills. You can teach your child and children, and other family members.”

Great stuff! I have one comment to add on a small part of this: I am firmly with Alfie Kohn and, more recently and in a similar vein, Stephen Downes on the side of ‘personal’ rather than ‘personalized’. Personalized learning does have a place in the rich tapestry of tools and methods to help with meeting a range of learning needs, but it is very important that personalization is not something done to learners. Too often, it is the antithesis of self-direction; too often it reinforces and automates teacher control; too often it is isolating and individually focused; too often it sacrifices caring, breadth, and serendipity in the service of efficiency, and that efficiency is too often narrowly defined in terms of teacher goals. Knowing Maiga, and seeing what else he talks about in this interview, I’m pretty sure that’s not what he means here! Personal learning means focusing on what learners need, want, find exciting, interesting, challenging, problematic, or mind-expanding. It is inherently and deeply a social activity supported by and engaged with others, and it is, at the same time, inherently a celebration of diversity and individuality. For some skills – mechanical foundations, for example, or as controllable advisory input – personalization can contribute to that, but it should never usurp the personal.

Address of the bookmark: https://www.voicemagazine.org/archives/articledisplay.php?ART=11338&issue=2414

Recording of my TCC2016 keynote: The Distributed Teacher

This is the recording of my keynote at the TCC2016 online conference, on the nature of learning and teaching: the inherently social, distributed nature of it, why e-learning is fundamentally different from p-learning, and how we harmfully transfer pedagogies and processes from physical classrooms to online contexts in which they do not belong. If you want to watch it, skip the first 5 minutes because there was a problem with the sound and video (I hate you, Adobe Connect): the talk itself begins a few seconds after the 5-minute mark.

Downloadable slides and details of the themes are at https://landing.athabascau.ca/file/view/1598774/the-distributed-teacher-slides-from-my-tcc-2016-keynote

Address of the bookmark: http://squirrel.adobeconnect.com/p1bvy7grca7/