Space is the Machine

Space is the Machine, a book by Bill Hillier, is available online for free, and is also back in print again after too long an absence. Fifteen or so years ago this book changed how I see the world. As my own well-thumbed paper copy has suffered a lot over the years, and is a very large, heavy object that attracts a lot of dust and not much reading, it is delightful to be able to dip into the pristine electronic version and again be inspired.

This site has each chapter individually downloadable. A full 368-page copy is available at

The book is as much a work of philosophy as it is of architecture and urban planning (its main subject matter). It incorporates insights from sociology, psychology, anthropology, network theory, linguistics, complexity theory, distributed cognition, systems theory, aesthetics, engineering, ecology, collective intelligence, topology, emergence and more. The ideas it embodies have far broader potential applications than the built environment, including to ways we think about the purpose and practice of education, as well as to more obviously related things like the design of online social applications. In brief, it provides a way of understanding complex human systems and environments as interconnected configurations of structure, objects, time, and movement, in constant dynamic and emergent interplay with abstract, social and psychological phenomena. There are strong echoes of Jane Jacobs (uncited) and Christopher Alexander (cited) in all of this, but it goes farther up and farther in.

I don’t know whether the book and the theories of space syntax it describes impress most architects and urban planners. As I am neither, that’s not the point for me. Whether all the arguments and conclusions make sense in its intended context or not (and some are a bit suspect, even to an outsider like me) this book repeatedly makes strikingly novel connections between diverse and otherwise incommensurate fields, and it constantly provides new perspectives that make the familiar strange and fascinating. It is inspiring stuff.


Address of the bookmark:

Study shows Facebook spreads nonsense more effectively than fact

An interesting side-effect of the way Facebook relentlessly and amorally drives the growth of its network no matter what the costs: stupidity thrives at the expense of useful knowledge.

This study looks at how information and misinformation spread in a Facebook network, finding that the latter has way more long-term staying power and thus, thanks to EdgeRank and the reification of communication, continues to spread and grow while more ephemeral factual pieces of news disappear from the stream. I suspect this is because actual news has a sell-by date so people move on to the next news. Misinformation of the sort studied (conspiracy theories, etc) has a more timeless and mythic quality that is only loosely connected with facts or events, but it has a high emotional impact and is innately interesting (if true, the world would be a much more surprising place), so it can persist without becoming any more or less relevant. It doesn’t have to spread fast nor even garner much interest at first, because it persists in the network. All it needs to do is wait around for a while – the Matthew Effect and Facebook’s algorithms see to the rest.

There is not much difference between interest in scientific and anti-scientific articles at the start. There is a wave of activity for the first 120 minutes after posting, then a second one 20 hours later (a common pattern). But then the fun starts…

It’s over the long term that serious differences were observed. While the science news had a relatively short tail, petering out quickly, conspiracy theories tended to grow momentum more slowly, but have a much longer tail. They stick around for a longer period of time, meaning they can reach far more people.
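That persistence dynamic is easy to illustrate with a toy model (my own invention, not the study's methodology): suppose each item's sharing rate decays exponentially, with news decaying fast and conspiracy material slowly. All the parameters below are made up purely for illustration.

```python
import math

def cumulative_shares(rate0, half_life_hours, horizon_hours):
    """Total shares up to the horizon for an item whose share rate starts at
    rate0 per hour and decays exponentially with the given half-life."""
    lam = math.log(2) / half_life_hours
    # integral of rate0 * exp(-lam * t) dt from 0 to horizon
    return rate0 / lam * (1 - math.exp(-lam * horizon_hours))

# News: high initial interest, ~12-hour half-life. Conspiracy: a tenth of the
# initial interest, but a two-week half-life (all figures invented).
for horizon in (6, 24, 24 * 30):
    news = cumulative_shares(100, 12, horizon)
    conspiracy = cumulative_shares(10, 24 * 14, horizon)
    print(f"{horizon:>4}h  news={news:8.0f}  conspiracy={conspiracy:8.0f}")
```

Over the first hours the news item dominates but, over a month, the slowly decaying item accumulates far more total shares: the long tail at work.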

Then there’s another problem with the way Facebook works – the much-discussed echo-chamber effect. This effect is far more active in Facebook than in other networks, with algorithms favouring content from people and groups you regularly interact with. So if you share, Like or even click on conspiracy theories a lot, you’re more likely to be shown them in future, reinforcing the misinformation, rather than challenging it.


Address of the bookmark:

Social Influence Bias: A Randomized Experiment

Fascinating article from 2013 on an experiment on a live website in which the experimenters manipulated rating behaviour by giving an early upvote or downvote. An early upvote had a very large influence on future voting, increasing by nearly a third the chances that a randomly chosen piece of content would gain more upvotes in future, with final ratings increased by 25% on average. Interestingly, downvotes did not have the same effect, making very little overall difference. Topics and prior relationships made some difference.

This accords closely with many similar studies and experiments, including a social navigation study I performed about a decade ago, involving clicking on a treasure map, the twist being that participants had to try to guess where, on average, most other people would click. About half the subjects could see where others had already clicked, and the other half could not. The participants were aware that the average was taken from those that could not see where others had clicked. The click patterns of each set were radically different…

Mob effects in social navigation

On closer analysis, of those that could see where others had clicked, around a third of the subjects followed what others had done (as this recent experiment suggests), around a third followed a similar pattern to the ‘blind’ participants, and around a third actively chose an option because others had not done so – on the face of it this latter behaviour was a bit bizarre, given the conditions of the contest, though it is quite likely that they were assuming just such a bias would occur and acting accordingly.

One thing that might be useful, though very difficult, would be to try to weed out the herd followers and downgrade their ratings. StackExchange tries to do something like this by giving more weight to those that have shown expertise in the past, but it has not fully sorted out the problem of the super-influential that have a lot of good karma as a result of gaming the system, as well as the networks that form within it leading to bias (a problem shared by the less-sophisticated but also quite effective Reddit). At the very least, it might be helpful to introduce a delay to feedback being shown until a certain amount of time has passed or a threshold has been reached.

One thing is certain, though: simple aggregated ratings that are fed back to prospective raters (including those voting in elections) are almost purpose-built to make stupid mobs. As several people have shown, including Surowiecki and Page, crowds are normally only wise when they do not know what the rest of the crowd is thinking. 
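To see how easily such fed-back ratings make stupid mobs, here is a minimal simulation of herding (a toy model of my own with invented parameters, not the experimenters' design): each voter has a fixed base tendency to upvote, nudged by the sign of the aggregate score they can see.

```python
import random

def mean_final_score(initial_vote, voters=100, base_up=0.55, herd=0.3, trials=2000):
    """Average final score of an item whose displayed score nudges each
    successive voter: a visible positive score makes an upvote more likely,
    a visible negative score makes a downvote more likely."""
    total = 0
    for _ in range(trials):
        score = initial_vote
        for _ in range(voters):
            p = base_up
            if score > 0:
                p += herd * (1 - base_up)  # herd toward the visible consensus
            elif score < 0:
                p -= herd * base_up
            score += 1 if random.random() < p else -1
        total += score
    return total / trials

random.seed(1)
control = mean_final_score(0)  # no manipulation
treated = mean_final_score(1)  # a single manipulated early upvote
print(control, treated)        # the treated mean is noticeably higher
```

In this sketch a single early upvote reliably raises the mean final score, because it tips more runs into a self-reinforcing positive cascade. The real experiment found the effect is asymmetric (early downvotes tend to get corrected), which this deliberately symmetric model does not capture.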


Our society is increasingly relying on the digitized, aggregated opinions of others to make decisions. We therefore designed and analyzed a large-scale randomized experiment on a social news aggregation Web site to investigate whether knowledge of such aggregates distorts decision-making. Prior ratings created significant bias in individual rating behavior, and positive and negative social influences created asymmetric herding effects. Whereas negative social influence inspired users to correct manipulated ratings, positive social influence increased the likelihood of positive ratings by 32% and created accumulating positive herding that increased final ratings by 25% on average. This positive herding was topic-dependent and affected by whether individuals were viewing the opinions of friends or enemies. A mixture of changing opinion and greater turnout under both manipulations together with a natural tendency to up-vote on the site combined to create the herding effects. Such findings will help interpret collective judgment accurately and avoid social influence bias in collective intelligence in the future.

Address of the bookmark:

Google Launches Revamped Google Plus Around Interests, Streams

This deserves more than a brief analysis, but it is such an interesting development I feel compelled to comment on it now. If I can find time, I hope to return to it in more depth later when I’ve had a chance to think more carefully about it, and to play with the system some more. The interesting news is that, while there is still a binding role that links disparate Google services together, Google Plus’s focus is now Communities (basically, what we call groups on the Landing) and Collections (on the Landing, a mix of tags – especially in the form they will have in our forthcoming upgrade – and pinboards). In brief, it’s about connecting around what interests people, not about connecting with interesting people.

This new fork of Google Plus interests me most because it is very strongly focused on the social form that Terry Anderson and I describe as the set, as opposed to the network (like Facebook, LinkedIn and others). It is, like Pinterest, Reddit, Stack Exchange or SlashDot, much more about clusters of people around topics and areas of interest and, only as a side-effect, the networks or organized groups that might develop as a result. Some people talk of such things as networks of interest, but I think that is misleading as it implies a meaningful connection between people: as a social form, sets often involve little or no persistent social connection at all. This harks back to pre-web days, performing a technologically advanced version of the same kind of things Usenet newsgroups and bulletin boards used to do. That is still arguably the most interesting way the Internet changes things, because it benefits from the breakdown of physical boundaries and the presence of large, diverse crowds. This enables both crowd wisdom and the long tail and, as a learning tool, it is incredibly powerful. In a slightly different way, Wikipedia is also set-based, and so is YouTube. Apart from Google Search itself, these are probably the most phenomenally successful examples of e-learning in the world today. What is particularly interesting about Google’s move is that, to a greater extent than has previously been possible, it offers a little bit of identity assurance, controllable privacy, in-built scalability, and the means to seamlessly shift into other social forms when needed or desired. There is some super-cool technology behind this, and some careful design. One of the biggest problems, as well as an occasional benefit, of sets has always been their relative anonymity. The worst flaming, trolling and griefing occurs in sets, rather than networks or closed, organized groups, because they are less intimate and people are less accountable to one another.
I don’t think the revamped Google Plus will totally solve that, but it’s a step in the right direction. It also offers the opportunity for growth and evolution of other social forms, including networks. The fact that it offers communities, which can be as set-like or group-like as their owners wish (again, very like Elgg) helps with that a lot, and it seamlessly blends in to other group-oriented toolsets like (notably) Google Docs and Calendars. I hope that it picks up a few hints from Reddit, Stack Exchange or SlashDot (in increasing order of complexity and ingenuity) to help sustain those sets.

Google Plus has, from the start, had this kind of idea in mind. Its ‘Circles’ feature (that mirrors what Elgg and consequently the Landing had many years before) is about sets within networks – about recognizing that people are different in different contexts, wish to disclose different things to different people, and have many overlapping and/or separate spheres of interest at different times. This is fundamentally different from Facebook’s single-identity network model, and fundamentally stronger. Facebook’s model is focused solidly on building vast networks and driving adoption, which it does do incredibly efficiently, but it is a shallowing, smoothing model that devalues and ignores much of what makes us distinctively human. For all its addictive qualities it is also quite dull, and it leads to filter bubbles, echo chambers, narcissism, and a focus on breadth, not depth, of growth and knowledge. It’s a soft toolset that can do more than that, but its business model and basic shape are firmly centred on building the network at any cost. I suppose I should mention Twitter too, though that is a different kind of animal. Using both sets (hashtags) and networks (following), Twitter works because it connects people and other things. It is not a social network (though it has one) but is more of a hybrid between SMS and a social bookmarking service. If only it were not so intent on locking itself in and trying to embrace more than it should, it would be an excellent complement to Google+.

I think this is a minor reshaping of Google Plus, not a major overhaul. It is mostly about better marketing what it already does. I am surprised that anyone, least of all Google, ever imagined it was going head-to-head with Facebook. Google primarily wanted to know more about people so that it could integrate that knowledge into better search, not to build a vast social network. Though it might have liked the idea of stemming the flow of data into a closed system it could not access easily, it almost certainly knew that was a battle it could not win. But it was always attempting something much smarter, in the long term. Google Plus had (and has) a social networking toolset, sure, but that was not what gave it its primary character. It was always much more about stuff people shared, not people sharing stuff, which is of course what Google has done best for a long time and what really interests the company. Unfortunately, it was perceived as an unsuccessful Facebook competitor, and that has not helped its cause one bit. This new development is just a refinement of the system that makes that central differentiating aspect of it clearer and easier to understand.

I hope people get it, even though it is far from perfect. As a matter of principle I’m against any system that seeks to suck in and centralize what should be open and controlled by its users, so this is far from the ideal way things should be. Unfortunately, none of the open initiatives that would give genuine ownership and control to users have gained market dominance yet, with the possible exception of WordPress. So, of all the larger companies that occupy this user-farming space, Google is perhaps the least objectionable and the most forward-looking. For all its smart AI and glitz, it might be the most human and, perhaps, the most genuinely open. At least, it tries not to lock its users in so they cannot get out, and it seldom breaks standards to lock people in. It also does have some incredibly smart technology that is genuinely useful. Though there are many ways that its famous ‘don’t be evil’ mantra has not worked out as well as it should (it is way too centralized, it does not give true ownership to its users, and it seems to be getting greedier as it grows up), at least it’s not Facebook.

Address of the bookmark:

Why One Social Network Just Turned Off Followers And Hashtags

Storehouse, a sharing app for photo-driven stories, has reversed its decision to embrace social networking of the coarser kind and has created a more intimate and intentional focus on real circles of friends – no feeds, no followers, no hashtags: basically, almost none of the trappings of network-oriented or, especially, set-oriented social media. It has done this in an attempt to diminish the Matthew Effects, echo chambers and filter bubbles of  typical social media sites, where a single individual shouts out what they had for breakfast to thousands or even millions of followers without differentiation, pandering to the perceived interests of the crowd rather than engaging in a more human and intentionally focused exchange. As the founder, Kawano puts it:

“The reality is, you look at your camera roll, and the things that are in there [prove] people are multidimensional, and you don’t have a single set of frames that match up with [everyone else’s tastes],” Kawano says. Storehouse 2.0 wants to support these aspects of your personality across your social sphere. “I’ll share the food photos with friends I know will appreciate the food stuff, and photos with my kids, I’ll share that with family and friends who care about my kids.”

It’s an obvious thing to try to do. This is exactly what we have tried to do with the Landing, with its fine-grained per-post permissions and circles (thanks to its use of Elgg, which normally calls such things ‘collections’), and our own additions of context switching tabs, pinboards and customizable widgets that allow individuals and groups to present not just differently filtered content but differently presented content to different people. The posts you see of mine on the Landing are different from those seen by others and, if you visit my profile, you will see a different facade depending on who you are. Elgg collections came long before others of their ilk, but they are very similar indeed to what Google has tried to do with its circles and Diaspora tried to do with its aspects. It’s not unrelated to the less embedded and less flexible lists used by Twitter and Facebook. Kawano’s use of the word ‘frames’ suggests a similar inspiration to what has informed our own work, grounded in the work of Erving Goffman.

The notion that we are all single-dimensional self-publicists all the time is embedded deeply into the business model of Facebook, most of its competitors and most of its predecessors: they feed on narcissism. In fact, they rely on that to make money and drive it relentlessly. But they are exploiting some very limited aspects of what makes human relationships special, to the exclusion of richer, more personal engagement. There are plenty of things that can and should be shared with a large crowd, there is value in self-organized networks where popular things bubble up and memes spread, and there is a huge amount of value to be had from things like tags, that make it easy to discover and learn from one another in lots of different ways. Such networks are rich in learning and great for sustaining weak connections. But these are far from the only communications that matter, and they tend to be the least meaningful and salient. It all depends on context, and nuance is very important.

The big trouble with our system on the Landing, and others like it (including Storehouse and Google Plus) is that, unless you are logged into the system, it doesn’t know you from Adam. We need open, distributed protocols for this, not centralized vaults that lock us in to the whims and capabilities of companies that are in the business of making money from their role as connectors or that are simply constrained by the toolsets they rely on. On the Landing we actively try to avoid lock-in and have less than no interest in exploiting our users – it’s all about openness and control – but you still need to have an account to use it or see anything apart from public posts like this one. It’s a very serious constraint.

There are solutions that do not rely on everyone having a Facebook account (subject to the whims and invasions of Facebook), but their future is currently looking very bleak. I’m sad that OpenID, OAuth and OpenSocial are struggling to survive, mainly thanks to the onslaught from Facebook and its peers, because these were really hopeful standards that promised a lot, especially in conjunction with smart open architectures like Backplane or applications like OneSocialWeb or Diaspora. The Landing would be so much more useful if anyone – at least among its users – could selectively share anything with anyone, not just either the whole public or subsets of other Landing users.

Even if we can fix these issues, there remain some big complexities. The Landing is very capable of highly nuanced ways of presenting different facades but, the more choices we give, the harder it becomes to make them – soft technologies are hard to use, hard technologies are easy. Our most flexible tool – the Pinboard – takes a huge amount of effort and a learning curve to even produce the simplest of pages. The more rigid we make it the less nuanced it can be, but the simpler it is to understand and use. Managing circles and permissions is not a trivial task. Even Google Plus – a great design – fails to solve this problem. I will be interested to see how Storehouse copes with this.

Address of the bookmark:

After Backlash, Facebook Opens Portal To Court More Operators

Techcrunch article by Jon Russell on how Facebook is pretending (very badly, like one unpracticed in the art) to be nice by opening up its platform to a few more developers.

In case you are not familiar with this bit of exploitation of the poor, the claimed ‘public service’ aspect of is that it gets people online who would otherwise be unable to afford it, specifically in the third world, by making access to (some) online services free of data charges. I’d have to agree, that sounds nice enough, and that’s certainly the spin Zuckerberg puts on it. The evil side of it is that it is essentially a portal to Facebook and a few hand-filtered other sites, not the Internet as we know it; it is immensely destructive to net neutrality, and is nothing more than a bare-faced attempt to make money out of people that have too little of it, and to hook them into Facebook’s all-consuming centralized people farm. Zuckerberg is allegedly proud of the fact that around half of the millions that have signed up thus far have moved on to paid plans that actually do allow access to the Internet – likely the reason for the (otherwise odd) inclusion of Google Search in the original small lineup of options, inasmuch as non-approved sites come with a warning that users need to buy the real thing now. Of course, by that time, they are already Facebook sign-ups too, which is what this is really about. This is much the same tactic used by drug dealers seeking new customers by giving out samples, and it is similarly immoral. It is absurd to suggest, as Zuckerberg apparently does, that allowing a few more people to develop for the platform and suggesting that they in turn allow access to further sites (as long as they conform to Facebook’s conditions) makes it in any way more open. It is coercing companies into using the app using much the same techniques it applies to building people’s social networks. A filtered internet via a Facebook-controlled app is not the free (as in speech) and open Internet and, ultimately, the most notable beneficiary is Facebook, though it is certainly doing the partner operators no harm either.
The choice of domain name is cynical in the extreme – I’d admire the chutzpah if it were not so ugly. My respect goes to the many Indian companies that are pulling out in protest at its shameless destruction of net neutrality and greedy marketing under the false banner of philanthropy.

Address of the bookmark:

Protocols Instead Of Platforms: Rethinking Reddit, Twitter, Moderation And Free Speech | Techdirt

Interesting article on the rights of companies to moderate posts, following the recent Reddit furore that, in microcosm, raises a bunch of questions about the future of the social net itself. The distinction between freedom of speech and the rights of hosts to do whatever they goddam please – legal constraints permitting – is a fair and obvious one to make.

The author’s suggestion is to decentralize social media systems (specifically Twitter and Reddit though, by extension, others are implicated) by providing standards/protocols that could be implemented by multiple platforms, allowing the development of an ecosystem where different sites operate different moderation policies but, from an end-user perspective, being no more difficult to use than email.

The general idea behind this is older than the Internet. Of course, there already exist many systems that post via proprietary APIs to multiple places, from WordPress plugins to Known, not to mention those ubiquitous ‘share’ buttons found everywhere, such as at the bottom of this page. But, more saliently, email (SMTP), Internet Relay Chat (IRC), Jabber (XMPP) and Usenet news (NNTP) are prototypical and hugely successful examples of exactly this kind of thing. In fact, NNTP is so close to Reddit’s pattern in form and intent that I don’t see why it could not be re-used, perhaps augmented to allow smarter ratings (not difficult within the existing standard). Famously, Twitter’s choice of character limit is entirely down to fitting a whole Tweet, including metadata, into a single SMS message, so that is already essentially done. However, standards are not often in the interests of companies seeking lock-in and a competitive edge. Most notably, though they very much want to encourage posting in as many ways as possible, they very much want control of the viewing environment, as the gradual removal of RSS from prominent commercial sites like Twitter and Facebook shows in spades. I think that’s where a standard like this would run into difficulties getting off the ground. That and Metcalfe’s Law: people go where people go, and network value grows proportionally to the square of the number of users of a system (or far more than that, if Reed’s Law holds). Only a truly distributed, ubiquitously used system could avoid that problem. Such a thing has been suggested for Reddit and may yet arrive.
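The growth laws mentioned here are easy to make concrete. A quick sketch (the constant factors are arbitrary; these laws claim proportionality, not absolute value):

```python
# Value of a network of n users under the classic growth laws:
#   Sarnoff:  proportional to n              (broadcast audience)
#   Metcalfe: proportional to n*(n-1)/2      (possible pairwise connections)
#   Reed:     proportional to 2**n - n - 1   (possible subgroups of 2 or more)

def sarnoff(n):
    return n

def metcalfe(n):
    return n * (n - 1) // 2

def reed(n):
    return 2 ** n - n - 1

for n in (10, 20, 30):
    print(f"n={n:2}  sarnoff={sarnoff(n)}  metcalfe={metcalfe(n)}  reed={reed(n)}")
```

Even at 30 users the subgroup count dwarfs the pairwise count, which is why, if Reed's Law holds, each user who joins (or leaves) a dominant network changes its value so disproportionately.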

As long as we are in thrall to a few large centralized commercial companies and their platforms – the Stacks, as Bruce Sterling calls them – it ain’t going to work. Though an incomplete, buggy and over-complex implementation played a role, proprietary interest is essentially what has virtually killed OpenSocial, a brilliant idea that was much along these lines but more open, and that had virtually every large Internet company on board, bar one. Sadly, that one was the single most avaricious, amoral, parasitic company on the Web. Almost single-handedly, Facebook managed to virtually destroy the best thing that might have happened to the social web, one that could have made it a genuine web rather than a bunch of centralized islands. It’s still out there, under the auspices of the W3C, but it doesn’t seem to be showing much sign of growth or deployment.

Facebook has even bigger and worser ambitions. It is now, cynically and under the false pretense of opening access to third world countries, after the Internet itself. I hope the company soon crashes and burns as fast as it rose to prominence – this is theoretically possible, because the same cascades that created it can almost as rapidly destroy it, as the once-huge MySpace and Digg discovered to their cost. Sadly, it is run by very smart people who totally get networks and how to exploit them, and it has no ethical qualms to limit its growth (though it does have some ethical principles about some things, such as open source development – its business model is evil, but not all of its practices). It has so far staunchly resisted attack, notwithstanding its drop in popularity in established markets and a long history of truly stunning breaches of trust.

Do boycott Facebook if you can. If you need a reason, other than that you are contributing to the destruction of the open web by using it, remember that it tracks you hundreds of times in a single browsing session and, flouting all semblance of ethical behaviour, it attempts to track you even if you opt out from allowing that. You are its product. Sadly, with its acquisition of companies like Instagram and Whatsapp, even if we can kill the primary platform, the infection is deep. But, as Reed’s Law shows, though each new user increases its value, every user that leaves Facebook or even that simply ignores it reduces its value by an identically exponential amount. Your vote counts!

Address of the bookmark:

Super-private social network launched to take on Facebook with support of Anonymous

The first question that emerges for a free, encrypted, ad-free, unsurveilled, intentionally private, anonymity-celebrating social networking site and mobile app like this is ‘How does it make enough money to support itself?’ The answer appears to be a freemium model – you pay to use the API more than a basic amount, for storage, and for a premium service. I am a little concerned that the terms and conditions seem to give the site owners free access and perpetual rights to use any public content. I don’t see why a creative commons licence could not have been applied, especially given the claimed open nature of the thing. None the less, this is a good step in the right direction, though I have to wonder whether it is really sustainable. A lot depends on its open source software: if content and identity can be distributed further and not limited to this one site, this could be a really interesting alternative to other systems based on a similar business model, like WordPress and Known.

The software on which it runs is allegedly open source and available via – unfortunately, though, almost all of it, apart from a mobile client, is disappointingly listed as ‘coming soon’. Definitely one to watch, assuming the server software is to be open-sourced. It will be interesting to compare it with Elgg – the site itself seems slicker than most Elgg installations but .

Address of the bookmark:

Open access: beyond the journal

Interesting and thoughtful argument from Savage Minds, mainly comparing the access models of two well-known anthropology journals, one of which has gone open and seems to be doing fine, the other of which is in dire straits and almost certainly needs to open up, though it may be too late. I like two quotes in particular. The first is from the American Anthropologist’s editorial, explaining the difficulties they are in:

“If you think that making money by giving away content is a bad idea, you should see what happens when the AAA tries to make money selling it. To put it kindly, our reader-pays model has never worked very well. Getting over our misconceptions about open access requires getting over misconceptions of the success of our existing publishing program. The choice we are facing is not that of an unworkable ideal versus a working system. It is the choice between a future system which may work and an existing system which we know does not.”

The second is from the author of the article:

“…Collabra, Open Library of the Humanities, Knowledge Unlatched, and SciELO — blur the distinction between journal, platform, and community the same way Duke Ellington blurred the boundary between composer, performer, and conductor.”

I like that notion of blurring and believe that this is definitely the way to go. We are greatly in need of new models for the sharing, review, and discussion of academic works because the old ones make no sense any more. They are expensive, untimely, exclusionary and altogether over-populous. There have been many attempts to build dedicated platforms for that kind of thing over the years (one of my favourites being the early open peer-reviewing tools of JIME in the late 1990s, now a much more conventional journal, to its loss). But perhaps one of the most intriguing approaches of all comes not from academic presses but from the world of student newspapers. This article reports on a student newspaper shifting entirely into the (commercial but free) social media of Medium and Twitter, getting rid of the notion of a published newspaper altogether but still retaining some kind of coherent identity. I don’t love the notion of using these proprietary platforms one bit, though it makes a lot of sense for cash-strapped journalists trying to reach and interact with a broad readership, especially of students. Even so, there might be more manageable and more open, persistent ways (eg. syndicating from a platform like WordPress or Known). But I do like the purity of this approach and the general idea is liberating.

It might be too radical an idea for academia to embrace at the moment, but I see no reason at all that a reliable curatorial team, with some of the benefits of editorial control, posting exclusively to social media, might not entirely replace the formal journal, for both process and product. It already happens to an extent, including through blogs (I have cited many), though it would still be a brave academic that chose to cite only from social media sources, at least for most papers and research reports. But what if those sources had the credibility of a journal editorial team behind them and were recognized in similar ways, with the added benefit of the innate peer review that social media enables? We could go further than that and use a web of trust to assert the validity and authority of posts – again, that already occurs to some extent, and there are venerable protocols and standards that could be re-used or further developed for that, from open badges to PGP, from trackbacks to WebMention. We are reaching the point where subtle distinctions between social media posts are fully realizable – they are not all one uniform stream of equally reliable content – where identity can be fairly reliably asserted, and where such an ‘unjournal’ could be entirely distributed, much like a Connectivist MOOC. Maybe more so: there is no reason there should even be a ‘base’ site to aggregate it all, as long as trust and identity were well established. It might even be unnecessary to have a name, though a hashtag would probably be worth using.
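The web-of-trust idea can be made concrete with a toy sketch. Everything here is invented for illustration – the names, the weights, and the simple best-path scoring rule – and it is much cruder than PGP’s actual trust model or anything WebMention specifies; it just shows how a reader could derive a credibility score for an author by following chains of endorsements from people they already trust, with no central journal involved.

```python
# Toy web of trust: who vouches for whom, with a weight in (0, 1].
# All names and weights are hypothetical examples, not real data.
endorsements = {
    "editor":    {"reviewer1": 0.9, "reviewer2": 0.8},
    "reviewer1": {"author": 0.7},
    "reviewer2": {"author": 0.6},
}

def trust(source, target, depth=3):
    """Best multiplicative trust along any endorsement path from
    source to target, searching no deeper than `depth` hops."""
    if source == target:
        return 1.0
    if depth == 0:
        return 0.0
    best = 0.0
    for nxt, weight in endorsements.get(source, {}).items():
        best = max(best, weight * trust(nxt, target, depth - 1))
    return best

# A reader who trusts "editor" can score a post signed by "author":
print(trust("editor", "author"))  # best path: editor -> reviewer1 -> author
```

The depth limit keeps the search cheap and guards against endorsement cycles; a real system would also need signatures to make the endorsements tamper-evident, which is exactly what PGP-style tooling provides.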

I wonder what the APA format for such a thing might be?

Address of the bookmark:

Automated Collaborative Filtering and Semantic Transports – draft 0.72

I had to look up this article by the late Sasha Chislenko for a paper I was reviewing today, and I am delighted that it is still available at its original URL, though Chislenko himself died in 2000. I’ve bookmarked the page on systems dating back to 1997 but I don’t think I’ve ever done so on this site, so here it is, still open to the world. Chislenko was writing in public way before it was fashionable and, I think, probably before the first blogs – this is still, and sadly always will be, a work in progress.

This particular page was one of a handful of articles that deeply influenced my early research and set me on a course I’m still pursuing to this day. Back in 1997, as I started my PhD, I had conceived of and started to build a web-based tagging and bookmark sharing system to gather learner-generated recommendations of resources and people so that the crowd could teach itself. It seemed like a common sense idea but I was not aware of anything else like it (this was long before social bookmarking sites, and Slashdot was just a babe in arms), so I was looking for related work, and then I found this. It depressed me a little that my idea was not quite as novel as I had hoped, but this article knocked me for six then and it continues to impress me now. It’s still great reading, though many of the suggestions and hopes/fears expressed in it are so commonplace that we seldom give them a second thought any more.

This, along with a special issue of Communications of the ACM released the same year, was my first introduction to collaborative filtering, the technology that would soon sit behind Amazon and, later, everything from Google Search to Netflix and eBay. It gave a name to what I was doing and to the system I was building, which was consequently christened ‘CoFIND’ (Collaborative Filter in N-Dimensions).

Chislenko was a visionary who foresaw many of the developments of the past couple of decades and, as importantly, understood many of their potential consequences. More of his work is available at – just a small sample of his astonishing range, most of it incomplete notes and random ideas, but packed with inspiration and surprisingly accurate predictions. He died far too young.

Address of the bookmark: