Be less pigeon

I love the slogan that Audrey Watters has chosen for her new branding:

Be less pigeon

As she puts it…

“I wanted my work to both highlight the longstanding relationship between behaviorism and testing – built into the ideology and the infrastructure since ed-tech’s origins in the early twentieth century – and to remind people that there are also alternatives to treating students like animals to be trained.”

Absolutely.

Address of the bookmark: http://hackeducation.com/2016/06/08/pigeons

Can The Sims Show Us That We’re Inherently Good or Evil?

As it turns out, yes.

The good news is that we are intuitively altruistic. This doesn’t necessarily mean we are born that way: it is probably learned behaviour that co-evolves with that of those around us. The hypothesis on which this research is based (with good grounding) is that we learn, through repeated interactions, to behave kindly to others. At least, by far the majority of us do. A few jerks (as the researchers discovered) are not intuitively generous, and everyone behaves selfishly or unkindly sometimes. This is mainly because there are such jerks around, though sometimes because the perceived rewards for being a jerk might outweigh the perceived benefits of kindness. Indeed, in almost all moral decisions we tend to weigh benefits against harm, and it is virtually impossible to do anything at all without at least some harm being caused in some way, so even the nicest of us are jerks to at least some people. It might upset the person who gave you a beautiful scarf to learn that you wrecked it while saving a drowning child, for instance. Donating to a charity might reduce the motivation of governments to intervene in humanitarian crises. Letting a car change lanes in front of you slows everyone in the queue behind you. Very many acts of kindness have costs to others. But, on the whole, we tend towards kindness, if only as an attitude. There is plentiful empirical evidence that this is true, some of which is referred to in the article. The researchers sought an explanation at a systemic, evolutionary level.

The researchers developed a simulation of a Prisoner’s Dilemma scenario. Traditional variants of the game use rational agents that weigh up defection and cooperation over time in deciding whether or not to defect, following a variety of different rules (the most effective of which is usually the simplest: ‘tit-for-tat’). Their twist was to allow agents to behave ‘intuitively’ under some circumstances. Some agents were intuitively selfish, some not. In predominantly multiple-round games, “the winning agents defaulted to cooperating but deliberated if the price was right and switched to betrayal if they found they were in a one-shot game.” In predominantly one-shot games – not the norm in human societies – the always-cooperative agents died out completely. Selfish agents that deliberated did not do well in any scenario. As ever, ubiquitous selfish behaviour in a many-round game means that everyone loses, especially the selfish players. So wary cooperation is a winning strategy when most other people are kind and, because it benefits everyone, it is a winning strategy for societies and favoured by evolution. The explanation, they suggest, is that:

when your default is to betray, the benefits of deliberating—seeing a chance to cooperate—are uncertain, depending on what your partner does. With each partner questioning the other, and each partner factoring in the partner’s questioning of oneself, the suspicion compounds until there’s zero perceived benefit to deliberating. If your default is to cooperate, however, the benefits of deliberating—occasionally acting selfishly—accrue no matter what your partner does, and therefore deliberation makes more sense.

This accords with our natural inclinations. As Rand, one of the researchers, puts it: “It feels good to be nice—unless the other person is a jerk. And then it feels good to be mean.” If there are no rewards for being a jerk under any circumstances, or the rewards for being kind are greater, then perhaps we can all learn to be a bit nicer.
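
To make the mechanics a little more concrete, here is a minimal agent-based sketch of the kind of dual-process model the researchers describe. The payoff matrix, deliberation cost, selection rule and all parameter values are my own illustrative assumptions, not those of the actual study:

```python
import random

# Payoff matrix for one Prisoner's Dilemma round: (my_payoff, their_payoff).
# These values, like the deliberation cost and selection rule below, are
# illustrative assumptions, not the parameters used in the actual study.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class Agent:
    """A dual-process agent: an intuitive default plus optional deliberation."""
    def __init__(self, default, deliberates):
        self.default = default          # intuitive move: "C" or "D"
        self.deliberates = deliberates  # whether it pays the cost of thinking
        self.score = 0

    def choose(self, one_shot, cost=1):
        if self.deliberates:
            self.score -= cost  # deliberation is not free
            # Deliberation reveals the game type: betray strangers in
            # one-shot games, cooperate with repeat partners.
            return "D" if one_shot else "C"
        return self.default  # otherwise, act on gut feeling

def play(a, b, one_shot, rounds=5):
    for _ in range(1 if one_shot else rounds):
        move_a, move_b = a.choose(one_shot), b.choose(one_shot)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        a.score += pay_a
        b.score += pay_b

def generation(pop, p_one_shot):
    """Pair agents at random, play, then let the fitter half reproduce."""
    random.shuffle(pop)
    for a, b in zip(pop[::2], pop[1::2]):
        play(a, b, one_shot=random.random() < p_one_shot)
    pop.sort(key=lambda agent: agent.score, reverse=True)
    top = pop[:len(pop) // 2]
    return [Agent(p.default, p.deliberates) for p in top for _ in (0, 1)]

pop = [Agent(random.choice("CD"), random.choice((True, False)))
       for _ in range(200)]
for _ in range(100):
    pop = generation(pop, p_one_shot=0.1)  # mostly repeated games

print({(a.default, a.deliberates) for a in pop})
# With mostly repeated games, ("C", True) tends to dominate: intuitive
# cooperators that deliberate, and defect, only in one-shot encounters.
```

The real model is considerably richer than this crude version, but the shape of the result is the point: which intuitions survive depends on the mix of one-shot and repeated games in the environment.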

The really good news is that, because such behaviour is learned, selfish behaviour can be modified and intuitive responses can change. In experiments, the researchers demonstrated that this can occur in less than half an hour, albeit in a single, very limited and artificial context. The researchers suggest that, in situations that reward back-stabbing and ladder-climbing (the norm in corporate culture), all it should take is a little top-down intervention, such as bonuses and recognition for helpful behaviour, to set in motion a cultural change that will ultimately become self-sustaining. I’m not totally convinced by that – extrinsic reward does not make lessons stick, and the learning is lost the moment the reward is taken away. However, because cooperation is inherently better for everyone than selfishness, perhaps those that are driven by such things might realize that the extrinsic rewards they crave are far better achieved through altruism than through selfishness, as long as most people are acting that way most of the time, and this might be a way to help create such a culture. Getting rid of divisive and counter-productive extrinsic motivation, such as performance-related pay, might be a better (or at least complementary) long-term approach.

Address of the bookmark: http://nautil.us/issue/37/currents/selfishness-is-learned

This is the Teenage Brain on Social Media

An article in Neuroscience News about a recent (paywalled – grr) brain-scan study of teenagers, predictably finding that having your photos liked on social media sparks off a lot of brain activity, notably in areas associated with reward, as well as social activity and visual attention. So far, so-so, and a bit odd that this is what Neuroscience News chose to focus on, because that’s only a small subsection of the study and by far the least interesting part. What’s really interesting to me about the study is that the researchers mainly investigated the effects of existing likes (or, as they put it, ‘quantifiable social endorsements’) on whether teens liked a photo, scanning their brains while they decided. The effects were significant and, as countless other studies (including mine) have suggested, not just for teens. As many studies have previously shown, photos endorsed by peers – even strangers – are a great deal more likely to be liked, regardless of their content. The researchers actually faked the likes, and noted that the effect was the same whether showing ‘neutral’ content or risky behaviours like smoking and drinking. Unlike the authors of most existing studies, these researchers felt confident in describing this in terms of peer approval and conformity, thanks to the brain scans. As the abstract puts it:

“Viewing photos with many (compared with few) likes was associated with greater activity in neural regions implicated in reward processing, social cognition, imitation, and attention.”

The paper itself is a bit fuzzy about which areas are activated under which conditions: not being adept at reading brain scans, I am still unsure whether social cognition played a similarly important role when subjects saw likes of their own photos as when they saw others’ photos liked by many people, though there are clearly some significant differences between the two. This bothers me a bit because, within the discussion of the study itself, they say:

“Adolescents model appropriate behavior and interests through the images they post (behavioral display) and reinforce peers’ behavior through the provision of likes (behavioral reinforcement). Unlike offline forms of peer influence, however, quantifiable social endorsement is straightforward, unambiguous, and, as the name suggests, purely quantitative.”

I don’t think this is a full explanation, as it is confounded by the instrument used. An alternative plausible explanation is that, when unsure of our own judgement, we use other cues (which, in this case, can only ever come from other people, thanks to the design of the system) to help make up our minds. A similar effect would have been observed using other cues such as, for example, list position or size, with no reference to how many others had liked the photos. Most of us (at least, most that don’t know how Google works) do not see the ordering of Google Search results as social endorsement, though that is exactly what it is, but list position is incredibly influential in our choice of links to click and, presumably, in our neural responses to such items on the page. It would be interesting to further explore the extent to which the perception of value comes from the fact that an image is liked by peers, as opposed to the fact that the system itself (a proxy expert) is highlighting it as important. My suspicion is that there might be a quantifiable social effect, at least in some subjects, but it might not be as large as that shown here. There’s very good evidence that subjects scanned much-liked photos with greater care, which accords with other studies in the area, though it does not necessarily correlate with greater social conformity. As ever, we look for patterns and highlights to help guide our behaviours – we do not and cannot treat all data as equal.

There’s a lot of really interesting stuff in this apart from that, though. I am particularly interested in the activation of the frontal gyrus, previously associated with imitation, when looking at much-liked photos. This is highly significant in the transmission of memes, as well as in social learning generally.

Address of the bookmark: http://neurosciencenews.com/nucleus-accumbens-social-media-4348/

Bigotry and learning analytics

Unsurprisingly, when you use averages to make decisions about actions concerning individual people, those decisions reinforce biases. This is exactly the basis of bigotry, racism, sexism and a host of other well-known evils, so programming such bias into analytics software is beyond a bad idea. This article describes how algorithmic systems are used to help make decisions about things like bail and sentencing in courts. Though race is not explicitly taken into account, correlates like poverty and acquaintance with people that have police records are included. In a perfectly vicious circle, the system reinforces biases over time. To make matters worse, this particular system uses secret algorithms, so there is no accountability and not much of a feedback loop to improve them if they are in error.

This matters to educators because much learning analytics does something very similar (there are exceptions, especially when it is used solely for research purposes). It looks at past activity, however that is measured, compares it to more or less discriminatory averages or similar aggregates of other learners’ past activity, and then attempts to guide the future behaviour of individuals (teachers or students) based on the differences. This latter step is where things can go badly wrong, but there would be little point in doing it otherwise. The better examples inform rather than adapt, allowing a human intermediary to make decisions, but that’s exactly what the algorithmic risk assessment described in the article does too, and it is just as risky. The worst examples attempt to directly guide learners, sometimes adapting content to suit their perceived needs. This is a terribly dangerous idea.
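
To see how quickly such a loop compounds, here is a toy sketch. Everything in it is an illustrative assumption of mine (two groups, a ‘model’ that scores each group by its historical success rate, an intervention gated on that score); it is not a description of any actual analytics product:

```python
# A toy model of the vicious circle. All numbers and rules here are
# illustrative assumptions, not a description of any real system.

def simulate(rounds=5, initial_gap=0.05):
    # Two groups of learners start with only a small gap in success rates.
    success_rate = {"A": 0.60, "B": 0.60 - initial_gap}
    for r in range(rounds):
        # "Training": each group's score is just its observed historical
        # success rate (standing in for correlated proxies like poverty).
        favoured = max(success_rate, key=success_rate.get)
        # "Action": the higher-scoring group is steered towards a
        # beneficial intervention; the other group is steered away.
        for group in success_rate:
            boost = 0.03 if group == favoured else -0.03
            success_rate[group] = min(1.0, max(0.0, success_rate[group] + boost))
        gap = success_rate["A"] - success_rate["B"]
        print(f"after round {r + 1}: gap = {gap:.2f}")

simulate()
# The predictions are 'accurate' at every step, yet the initial
# 5-point gap widens by 6 points per round: the system manufactures
# the disparity it claims merely to measure.
```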

Address of the bookmark: http://boingboing.net/2016/05/24/algorithmic-risk-assessment-h.html

A blueprint for breakthroughs: Federally funded education research in 2016 and beyond | Christensen Institute

An interesting proposal from Horn & Fisher that fills in one of the most gaping holes in conventional quantitative research in education (specifically randomized controlled trials but also less rigorous efforts like A/B testing etc) by explicitly looking at the differences in those that do not fit in the average curve – the ones that do not benefit, or that benefit to an unusual degree, the outliers. As the authors say:

“… the ability to predict what works, for which students, in what circumstances, will be crucial for building effective, personalized-learning environments. The current education research paradigm, however, stops short of offering this predictive power and gets stuck measuring average student and sub-group outcomes and drawing conclusions based on correlations, with little insight into the discrete, particular contexts and causal factors that yield student success or failure. Those observations that do move toward a causal understanding often stop short of helping understand why a given intervention or methodology works in certain circumstances, but not in others.”

I have mixed feelings about this. Yes, this process of iterative refinement is a much better idea than simply looking at improvements in averages (with no clear causal links), and they are entirely right to critique those that use such methods, but:

a) I don’t think it will ever succeed in the way it hopes, because every context is significantly different and this is a complex design problem, where even minuscule differences can have huge effects. Learning never repeats twice. Though much improved on what it replaces, it is still trying to make sense through the tools of reductive materialism, whereas what we are dealing with, and what the authors’ critique implies, is a different kind of problem. Seeking this kind of answer is like seeking the formula for painting a masterpiece. It’s only ever partially (at best) about methodologies and techniques, and it is always possible to invent new ones that change everything.

b) It relies on the assumption that we know exactly what we are looking for: that what we seek to measure is the thing that matters. It might be exactly what is needed for personalized education (where you find better ways to make students behave the way you want them to behave) but exactly the opposite for personal education (where every case is different, where education is seen as changing the whole person in unfathomably rich and complex ways).

That said, I welcome any attempts to stop the absurdity of trying to intervene in ways that benefit the (virtually non-existent) average student and that instead attempt to focus on each student. This is a step in the right direction.

 

[Image: augmented research cycle]

Address of the bookmark: http://www.christenseninstitute.org/publications/a-blueprint-for-breakthroughs/

Former Facebook Workers: We Routinely Suppressed Conservative News

The unsurprising fact that Facebook selectively suppresses and promotes different things has been getting a lot of press lately. I am not totally convinced yet that this particular claim of political bias itself is 100% credible: selectively chosen evidence that fits a clearly partisan narrative from aggrieved ex-employees should at least be viewed with caution, especially given the fact that it flies in the face of what we know about Facebook. Facebook is a deliberate maker of filter bubbles, echo chambers and narcissism amplifiers and it thrives on giving people what it thinks they want. It has little or no interest in the public good, however that may be perceived, unless that drives growth. It just wants to increase the number and persistence of eyes on its pages, period. Engagement is everything. Zuckerberg’s one question that drives the whole business is “Does it make us grow?” So, it makes little sense that it should selectively ostracize a fair segment of its used/users.

This claim reminds me of those that attack the BBC for both its right-wing and its left-wing bias. There are probably those that critique it for being too centrist, too. Indeed, in the news today, NewsThump, noting exactly that point, sums it up well. The parallels are interesting. The BBC is a deliberately created institution, backed by a government, with an aggressively neutral mission, so it is imperative that it does not show bias. Facebook has also become a de facto institution, likely with higher penetration than the BBC: in terms of direct users it is twenty times the size of the entire UK population, though BBC programs likely reach a similar number of people. But it has very little in the way of ethical checks and balances beyond legislation and popular opinion, is autocratically run, and is beholden to no one but its shareholders. Any good that it does (and, to be fair, it has been used for some good) is entirely down to the whims of its founder or incidental affordances. For the most part, what is good for Facebook is not good for its used/users. This is a very dangerous way to run an institution.

Whether or not this particular bias is accurately portrayed, it does remain highly problematic that what has become a significant source of news, opinion and value setting for about a sixth of the world’s population is clearly susceptible to systematic bias, even if its political stance remains, at least in intent and for purely commercial reasons, somewhat neutral. For a site in such a position of power, though, almost every decision becomes a political decision. For instance, though I approve of its intent to ban gun sales on the site, it is hard not to see this as a politically relevant act, albeit one that is likely more driven by commercial/legal concerns than morality (it is quite happy to point you to a commercial gun seller instead). It is the same kind of thing as its reluctant concessions to support basic privacy control, or its banning of drug sales: though ignoring such issues might drive more engagement from some people, it would draw too much flak and ostracize too many people to make economic sense. It would thwart growth.

The fact that Facebook algorithmically removes 95% or more of potentially interesting content, and then uses humans to edit what else it shows, makes it far more of a publisher than a social networking system. People are farmed to provide stories, rather than paid to produce them, and everyone gets a different set of stories chosen to suit their perceived interests, but the effect is much the same. As it continues with its unrelenting and morally dubious efforts to suck in more people and keep them for more of the time, with ever more-refined and more ‘personalized’ (not personal) content, its editorial role will become ever greater. People will continue to use it because it is extremely good at doing what it is supposed to do: getting and keeping people engaged. The filtering is designed to get and keep more eyes on the page and the vast bulk of effort in the company is focused wholly and exclusively on better ways of doing that. If Facebook is the digital equivalent of a drug pusher (and, in many ways, it is) what it does to massage its feed is much the same as refining drugs to increase their effects and their addictive qualities. And, like actual drug pushing that follows the same principles, the human consequences matter far less than Facebook’s profits. This is bad.

There’s a simple solution: don’t use Facebook. If you must be a Facebook user, for whatever reason, don’t let it use you. Go in quickly and get out (log out, clear your cookies) right away, ideally using a different browser and even a different machine than the one you would normally use. Use it to tell people you care about where to find you, then leave. There are hundreds of millions of far better alternatives – small-scale vertical social media like the Landing, special purpose social networks like LinkedIn (which has its own issues but a less destructive agenda) or GitHub, less evil competitors like Google+, junctions and intermediaries like Pinterest or Twitter, or hundreds of millions of blogs or similar sites that retain loose connections and bottom-up organization. If people really matter to you, contact them directly, or connect through an intermediary that doesn’t have a vested interest in farming you.

Address of the bookmark: http://gizmodo.com/former-facebook-workers-we-routinely-suppressed-conser-1775461006

Universities can’t solve our skills gap problem, because they caused it | TechCrunch

Why this article is wrong

This article is based on a flawed initial premise: that universities are there to provide skills for the marketplace. From that perspective, as the writer, Jonathan Munk, suggests, there are gaps both between what universities generally support and what employers generally need, and between students’ and employers’ perceptions of the skills graduates actually possess. If we assume that the purpose of universities is to churn out market-ready workers with employer-friendly skills, they are indeed singularly failing and will likely continue to do so. As Munk rightly notes:

“… universities have no incentive to change; the reward system for professors incentivizes research over students’ career success, and the hundreds of years of institutional tradition will likely inhibit any chance of change. By expecting higher education to take on closing the skills gap, we’re asking an old, comfortable dog to do new tricks. It will not happen.”

Actually, quite a lot of us, and even quite a few governments (the USA notwithstanding), are pretty keen on the teaching side of things, but Munk’s analysis is substantially correct and, in principle, I’m quite comfortable with that. There are far better, cheaper and faster ways to get most marketable job skills than to follow a university program, and providing such skills is not why we exist. This is not to say that we should not do such things. For pedagogical and pragmatic reasons, I am keen to make it possible for students to gain useful workplace skills from my courses, but that has little to do with the job market. It’s mainly because it makes the job of teaching easier, leads to more motivated students, and keeps me on my toes, having to stay in touch with the industry in my particular subject area. Without that, I would not have the enthusiasm needed to build or sustain a learning community, I would be seen as uninterested in the subject, and what I’d teach would be perceived as less relevant, and would thus be less motivating. That’s also why, in principle, combining teaching and research is a great idea, especially in strongly non-vocational subjects that don’t actually have a marketplace. But, if it made more sense to teach computing with a 50-year-old language and a machine that should be in a museum, I would do so at the drop of a hat. It matters far more to me that students develop the intellectual tools to be effective lifelong learners, develop values and patterns of thinking that are commensurate with both a healthy society and personal happiness, become part of a network of learners in the area, engage with the community/network of practice, and see bigger pictures beyond the current shiny things that attract attention the way a flame attracts moths. This focus on being, rather than on specific skills, is good for the student, I hope, but it is mainly good for everyone. Our customer is neither the student nor the employer: it is our society. If we do our jobs right then we both stabilize and destabilize societies, feeding them with people that are equipped to think, to create, to participate, reflectively, critically, and ethically: to make a difference. We also help to feed societies with ideas, theories, models and even the occasional artefact that make life better and richer for all, though, to be honest, I’m not sure we do so in the most cost-effective ways. However, we do provide an open space with freedom to explore things that have no obvious economic value, without the constraints or agendas of the commercial world, nor those of dangerously partisan or ill-informed philanthropists (Zuckerberg, Gates – I’m thinking of you). We are a social good. At least, that’s the plan – most of us don’t quite live up to our own high expectations. But we do try. The article acknowledges this role:

“Colleges and universities in the U.S. were established to provide rich experiences and knowledge to their students to help them contribute to society and improve their social standing.”

Politely ignoring the US-centricity of this claim and its mild inaccuracy, I’d go a bit further: in the olden days, it was also about weeding out the lower achievers and/or, in many countries (the US was again a notable offender), those too poor to get in. Universities were (and most, AU being a noble and rare exception, still are) a filter that makes the job of recruiters easier by removing the chaff from the wheat before we even get to them, and then again when we give out the credits: that’s the employment advantage. It’s very seldom (directly) because of our teaching. We’re just big, expensive sieves, from that perspective. However, the article goes on to say:

“But in the 1930s, with millions out of work, the perceived role of the university shifted away from cultural perspective to developing specific trades. Over time, going to college began to represent improved career prospects. That perception persists today. A survey from 2015 found the top three reasons people chose to go to college were:

  • improved employment opportunities
  • make more money
  • get a good job”

I’m glad that Munk correctly uses the term ‘perception’, because this is not a good reason to go to a university. The good job is a side-effect, not the purpose, and it is becoming less important with each passing year. Partly this is due to market saturation and degree inflation, and partly due to better alternatives becoming more widespread, especially thanks to the Internet. One of the ugliest narratives of modern times is that the student should pay for their education because they will earn more money as a result. Utter nonsense. They will earn more money because they would have earned more money anyway, even if universities had never existed. The whole point of that filtering is that it tends to favour those that are smarter and thus more likely to earn more. In fact, were it not for the use of university qualifications as a pre-filter that would exclude them from a (large but dwindling) number of jobs, they would have earned far more money by going straight into the workforce. I should observe in passing that open universities like AU are not entirely immune from this role. Though they do little filtering for ability on entry, AU and other open universities do nonetheless act as filters, inasmuch as those that are self-motivated enough to handle the rigours of a distance-taught university program while otherwise engaged, usually in work, are far better candidates for most jobs than those who simply went to a university because it was the natural next step. A very high proportion of our students that make it to the end do so with flying colours, because those that survive are incredibly good survivors. I’ve seen the quality of work that comes out of this place and been able to compare it with that from the best of traditional universities: our students win hands down, almost every time. The only time I have seen anything like as good was in Delhi, where 30 students were selected for a program each year from over 3,000 fully qualified applicants (i.e. those with top grades from their schools). This was despite, or perhaps because of, the fact that computing students had to sit an entrance exam that, bizarrely, and along with other irrelevances, required them to know about Brownian motion in gases. I have yet to come across a single computing role where such knowledge was needed. Interestingly, they were not required to know about poetry, art, or music, though I have certainly come across computing roles where appreciation of such things would have been of far greater value.

Why this article is right

If it were just about job-ready skills – in computing, things like the latest frameworks, languages and systems – the lack of job-readiness would not bother me in the slightest. However, as the article goes on to say, it is not just the ‘technical’ (in the loosest sense) skills that are the problem. The article mentions, as key employer concerns, critical thinking, creativity, and oral and written communication skills. These are things that we should very much be supporting and helping students to develop, however we perceive our other roles. In fact, though the communication stuff is mainly a technical skillset, creativity and problem-solving are pretty much what it is all about, so if students lack these things, we are failing even by our own esoteric criteria.

I do see a tension here, and a systematic error in our teaching. A goodly part of it is down to a misplaced belief that we are teaching stuff, rather than teaching a way of being. A lot of courses focus on a set of teacher-specified outcomes, and on accreditation of those set outcomes, and treat the student as (at best) input for processing or (at worst) a customer for a certificate. When the process is turned into a mechanism for outputting people with certificates, with fixed outcomes and criteria, the process itself loses all value. ‘We become what we behold,’ as McLuhan put it: if that’s how we see it, that’s how it will be. This is a vicious circle. Any mechanism that churns students out faster or more efficiently will do. In fact, a lot of discussion and design in our universities is around doing exactly that. For example, the latest trend in personalization (a field, incidentally, that has been around for decades) is largely based on that premise: there is stuff to learn, and personalization will help you to learn it faster, better and cheaper than before. As a useful by-product, it might keep you on target (our target, not yours). But one thing it will mostly not do is support the development of critical thinking, nor will it support the diversity, freedom and interconnection needed for creative thinking. Furthermore, it is mostly anything but social, so it also reduces the capacity to develop those valuable social communication skills. This is not true of all attempts at personalization, but it is true of a lot of them, especially those with most traction. The massive prevalence of cheating is directly attributable to the same incorrect perception: if cheating is the shortest path to the goal (especially if accompanied by a usually unwarranted confidence in avoiding detection) then of course quite a few people will take it. The trouble is, it’s the wrong goal. Education is a game that is won through playing it well, not through scoring.

The ‘stuff’ has only ever been raw material, a medium and context for the really important ways of being, doing and thinking that universities are mostly about. When the stuff becomes the purpose, the purpose is lost. So, universities are trying and, inevitably, failing to be what employers want, and in the process failing to do what they are actually designed to do in the first place. It strikes me that everyone would be happier if we just tried to get back to doing what we do best. Teaching should be personal, not personalized. Skills should be a path to growth, not to employment. Remembered facts should be the material, not the product. Community should be a reason for teaching, not a means by which it occurs. Universities should be places we learn to be, not places we be to learn. They should be purveyors of value, not of credentials.

 

Address of the bookmark: http://techcrunch.com/2016/05/08/universities-cant-solve-our-skills-gap-problem-because-they-caused-it/

What’s So New about the New Atheists? – Virtual Canuck

This is a nicely crafted, deeply humanist, gentle and thought-provoking sermon, given by Terry Anderson to members of his Unitarian church on atheistic thinking and values.

I have a lot of sympathy with the Unitarians. A church that does not expect belief in any gods or higher powers; that welcomes members with almost any theistic, deistic, agnostic or atheistic persuasions; that mostly eschews hierarchies and power structures; that focuses on the value of community; that is open to exploring the mysteries of being, wherever they may be found; that is doing good things for and with others, and that is promoting tolerance and understanding of all people and all ideas is OK with me. It’s kind of a club for the soul (as in ‘soul music’, not as in ‘immaterial soul’). As Terry observes, though, it does have some oddness at its heart. It’s a bit like Christianity, without the Christ and without the mumbo jumbo, but it still retains some relics of its predominantly Christian ancestry. Terry focuses (amongst other things) on the word ‘faith’ as being a particularly problematic term in at least one of its meanings.

For all their manifest failings and the evils they are used to justify or permit, religious teachings can often provide a range of useful perspectives on the universe, as long as we don’t take them any more seriously than fairy tales or poetry: which is to say, very seriously at some levels, not at all seriously in what they tell us of how to act, what to believe, or what they claim to have happened. And, while the whole ‘god’ idea is, at the very best, metaphorical, I think the metaphor has potential value. Whether you believe in, disbelieve in, or dismiss deities as nonsense (to be clear, depending on the variant, I veer between disbelief and outright dismissal), it is extremely important to retain a notion of the sacred – a sense of wonder, humbleness, awe, majesty, and so on – and a strong reflective awareness of the deeply connected, meaning-filled lives of ourselves and others, and of our place in the universe. For similar reasons I am happy to use an equally fuzzy word like ‘soul’ for something lacking existential import, but meaningful as a placeholder for something that the word ‘mind’ fails to address. It can be helpful in reflection, discussion and meditation, as well as poetry. There are beautiful souls, tortured souls, and more: few other words will do. I also think that there is great importance in rituals and shared, examined values, in things that give us common grounding to explore the mysteries and wonders of what is involved in being a human being, living with other human beings, on a fragile and beautiful planet, itself a speck in a staggeringly vast cosmos. This sermon, then, offers useful insights into a way of quasi-religious thinking that does not rely on a nonsensical belief system but that still retains much of the value of religions. I’m not tempted to join the Unitarians (like Groucho, I am suspicious of any club that would accept me as a member), but I respect their beliefs (and lack of beliefs), and respect even more their acknowledgement of their own uncertainties and their willingness to explore them.

Address of the bookmark: http://virtualcanuck.ca/2016/04/27/whats-so-new-about-the-new-atheists/