Signal : now with proper desktop apps

Signal is arguably the most open, and certainly the most secure, privacy-preserving instant messaging, voice-, and video-calling system available today. It is open source, ad-free, standards-based, simple, and very well designed. Though not filled with bells and whistles, for most purposes it is a far better alternative to Facebook-owned WhatsApp or other near-competitors like Viber, FaceTime, Skype, etc., especially if you have any concerns about your privacy. Like all such things, Metcalfe’s Law means its value increases with every new user added to the network. It’s still at the low end of the uptake curve, but you can help to change that – get it now and tell your friends!

Like most others of its ilk, it hooks into your cellphone number rather than a user name but, once you have installed it on your smartphone, you can associate that number (via a simple 2D barcode) with a desktop client. Until recently it only supported desktop machines via a Chrome browser (or equivalent – I used Vivaldi) but the new desktop clients are standalone, so you don’t have to grind your system to a halt or share data with Google to install them. The desktop apps are still a bit limited when it comes to audio (simple messaging only) and there still appears to be no video support (which is available in the smartphone clients), but this is good progress.

Address of the bookmark: https://signal.org/download/

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2813683/signal-now-with-proper-desktop-apps

The return of the weblog – Ethical Tech

Blogs have evolved a bit over the past 20 years or so, and diversified. The always terrific Ben Werdmuller here makes the distinction between thinkpieces (what I tend to think of as vaguely equivalent to keynote presentations at a conference, less than a journal article, but carefully composed and intended as a ‘publication’) and weblogging (kind of what I am doing here when I bookmark interesting things I have been reading, or simply a diary of thoughts and observations). Among the surprisingly large number of good points that he makes in such a short post is that a weblog is best seen as a single evolving entity, not as a bunch of individual posts:

Blogging is distinct from journalism or formal writing: you jot down your thoughts and hit “publish”. And then you move on. There isn’t an editorial process, and mistakes are an accepted part of the game. It’s raw.

A consequence of this frequent, short posting is that the product isn’t a single post: it’s the weblog itself. Your website becomes a single stream of consciousness, where one post can build on another. The body of knowledge that develops is a reflection of your identity; a database of thoughts that you’ve put out into the world.

This is in contrast to a series of thinkpieces, which are individual articles that live by themselves. With a thinkpiece, you’re writing an editorial; with a blog, you’re writing the book of you, and how you think.

This is a good distinction. I also think that, especially in the posts of popular bloggers like Ben, the blog also comprises the comments, trackbacks, and pings that develop around it, as well as tweets, pins, curations, and connections made in other social media. Ideas evolve in the web of commentary and become part of the thing itself. The post is a catalyst and attractor, but it is only part of the whole, at least when it is popular enough to attract commentary.

This distributed and cooperative literary style can also be seen in other forms of interactive publication and dialogue – a Slashdot or Reddit thread, for instance, can sometimes be an incredibly rich source of knowledge, as can dialogue around a thinkpiece or (less commonly) the comments section of online newspaper articles. What makes the latter less commonly edifying is that their social form tends to be that of the unmoderated set, perhaps with a little human editorial work to weed out the more evil or stupid comments: basically, what matters is the topic, not the person. Unmoderated sets are a magnet for trolls, and their impersonal nature, which obscures the individual, can lead to flaming, stupidity, and extremes of ill-informed opinion that crowd out the good stuff. Sites like Slashdot, StackExchange, and Reddit are also mostly set-based, but they use the crowd and an algorithm (a collective) to modulate the results, usually far more effectively than human editors, and to provide shape and structure so that dialogues become useful and informative. At least, they do when they work: none is close to perfect (though Slashdot, when used well, is closer than the rest, because its algorithms and processes are far more evolved and complex, and individuals have far more control over the modulation) but the results can often be amazingly rich.
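The kind of collective modulation these sites use can be sketched in a few lines. To be clear, this is an illustrative toy, not Slashdot’s actual algorithm: the `Comment` class, the score bounds, and the visibility threshold are all my own assumptions, chosen only to show how crowd votes plus a simple algorithm can filter a set-based discussion without a human editor.

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    votes: list = field(default_factory=list)  # crowd moderations, e.g. +1 or -1

    def score(self) -> int:
        # Start each comment at a base score of 1 and bound the
        # crowd-adjusted result to the range [-1, 5], Slashdot-style.
        return max(-1, min(5, 1 + sum(self.votes)))

def visible(comments, threshold=1):
    """Show only comments the crowd has rated at or above the threshold."""
    return [c for c in comments if c.score() >= threshold]

comments = [
    Comment("insightful point", votes=[1, 1, 1]),
    Comment("troll bait", votes=[-1, -1, -1]),
]
# visible(comments) keeps the insightful comment and hides the troll bait
```

Readers who browse at a higher threshold see less noise; no editor ever touches the individual comments, which is exactly the set-plus-collective shape described above.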

Blogs, though, tend to develop the social form of a network, with the blogger(s) at the centre. It’s a more intimate dialogue, more personal, yet also more public as they are almost always out in the open web, demanding no rituals of joining in order to participate, no membership, no commitment other than to the person writing the blog. Unlike dedicated social networks there is no exclusion, no pressure to engage, no ulterior motives of platforms trying to drive engagement, less trite phatic dialogue, more purpose, far greater ownership and control. There are plenty of exceptions that prove the rule and plenty of ways this egalitarian structure can be subverted (I have to clean out a lot of spam from my own blogs, for instance) but, as a tendency, it makes blogs still very relevant and valuable, and may go some way to explaining why around a quarter of all websites now run on WordPress, the archetypal blogging platform.

Address of the bookmark: https://words.werd.io/the-return-of-the-weblog-f6b702a7cf99

Originally posted at: https://landing.athabascau.ca/bookmarks/view/2740999/the-return-of-the-weblog-%E2%80%93-ethical-tech

Instagram uses 'I will rape you' post as Facebook ad in latest algorithm mishap

Another in a long line of algorithm fails from the Facebook stable, this time from Instagram…

"I will rape you" post from Instagram used for advertising the service

This is a postcard from our future when AI and robots rule the planet. Intelligence without wisdom is a very dangerous thing. See my recent post on Amazon’s unnerving bomb-construction recommendations for some thoughts on this kind of problem, and how it relates to attempts by some researchers and developers to use learning analytics beyond its proper boundaries.

 

Address of the bookmark: https://www.theguardian.com/technology/2017/sep/21/instagram-death-threat-facebook-olivia-solon


The Ghost in the Machines of Loving Grace | Library Babel Fish

An article from Barbara Fister about the role and biases of large providers like Google and Facebook in curating, sorting, filtering their content, usefully contrasted with academic librarians’ closely related but importantly different roles. Unlike a library, such systems (and especially Facebook) are not motivated to provide things that are in the interests of the public good. As Fister writes:

“The thing is, Facebook literally can’t afford to be an arbiter. It profits from falsehoods and hype. Social media feeds on clicks, and scandalous, controversial, emotionally-charged, and polarizing information is good for clicks. Things that are short are more valuable than things that are long. Things that reinforce a person’s world view are worth more than those that don’t fit so neatly and might be passed over. Too much cruft will hurt the brand, but too little isn’t good, either. The more we segment ourselves into distinct groups through our clicks, the easier it is to sell advertising. And that’s what it’s about.”

These are not new points but they are well stated and well situated. I particularly like the point that lies and falsehoods are not a reason to censor a resource in and of themselves. We need the ugliness in order to better understand and value the beauty, and we need the whole story, not filtered parts of it that suit the criteria of some arbitrary arbiter. As Fister writes:

“There’s a level of trust there, that our students can and will approach a debate with genuine curiosity and integrity. There’s also a level of healthy distrust. We don’t believe it’s wise to leave decisions about truth and falsehood up to librarians.”

Indeed. She also has good things to say about personalization:

“If libraries were as personalized, you would wave your library card at the door and enter a different library than the next person who arrives. We’d quickly tidy away the books you haven’t shown interest in before; we’d do everything we could to provide material that confirms what you already believe. That doesn’t seem a good way to learn or grow. It seems dishonest.”

Exactly so.  She does, though, tell us about how librarians do influence things, and there’s only a fine and fuzzy (but significant) line between this and the personalization she rejects:

“Newer works on the topic will be shelved nearby that will problematize the questionable work and put it in context.”

I’m not sure that there is much difference in kind between this approach to influencing students and the targeted ads of Google or Facebook. However, there is a world of difference in the intent. What the librarian does is about sense making, and it accords well with one of the key principles I described in my first book: providing signposts, not fenceposts. To give people control, they must first have choices, but they also need to know why those choices are worth making. Organizing relevant works together on the shelf helps students to make informed choices, scaffolding the research process by showing alternative perspectives. Offering relevant ads, though it might be dishonestly couched in terms of helping people to find the products they want, is not about helping them with what they want to do, but about exploiting them to encourage them to do what you want them to do, for your benefit, not theirs. That’s all the difference in the world.

That difference in intent is one of the biggest differentiators between a system like the Landing and a general-purpose public social media site, and that’s one big reason why it could never make any sense for us to replace the Landing with, say, a Facebook group (a suggestion that still gets aired from time to time, on the utterly mistaken assumption that they duplicate each other’s functionality). The Landing is a learning commons, a network of people that, whatever they might be doing here, share an intent to learn, where people are valued for what they bring to one another, not for what they bring to the owners and shareholders of the company that runs the site. Quite apart from other issues around ownership, privacy and functionality, that’s a pretty good reason to keep it.

 

Address of the bookmark: https://www.insidehighered.com/blogs/library-babel-fish/ghost-machines-loving-grace

Wisdom of the Confident: Using Social Interactions to Eliminate the Bias in Wisdom of the Crowds

A really interesting paper on making crowds smarter.  I find the word ‘confident’ in the title a bit odd because it seems (and I may have misunderstood) that the researchers are actually trying to measure independent thinking rather than confidence. As far as I can tell, this describes a method for separating sheep (those more influenced by others) from goats (those making more independent decisions), at least when you have a sequence of decisions/judgments to work with. The reason it bothers me is that sheep can be confident too (see the US election or Brexit, for example).

We know that crowds can be wise if and only if the agents in the crowd are unaware of the decisions of other agents. If there’s a feedback loop (more accurately, I believe, if there is an insufficiently delayed feedback loop) then you wind up with stupid mobs, driven by preferential attachment and similar dynamics. This is a big problem in many political systems that allow publication of polls and early results. However, some people are, for one reason or another, less influenced by the crowd than others. It would be useful to be able to aggregate their decisions while ignoring those that simply follow the rest, in order to achieve wiser crowds. That’s what the method described here seeks to do.
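A toy simulation makes the feedback-loop problem concrete. Everything here is my own illustrative assumption (the true value, the noise model, the `social_weight` parameter), not anything from the paper: the point is simply that when each agent sees the running average of earlier guesses, total social influence collapses the whole ‘crowd’ to whatever the first guesser happened to say, while a fully independent crowd averages out its errors.

```python
import random

TRUE_VALUE = 100.0  # the quantity the crowd is trying to estimate

def crowd_estimate(n, social_weight, seed=0):
    """Average of n guesses. social_weight pulls each guess toward the
    running public mean of earlier guesses (0 = fully independent crowd,
    1 = pure herding)."""
    rng = random.Random(seed)
    guesses = []
    for _ in range(n):
        g = TRUE_VALUE + rng.gauss(0, 20)  # noisy but unbiased private judgement
        if guesses:
            public_mean = sum(guesses) / len(guesses)
            g = (1 - social_weight) * g + social_weight * public_mean
        guesses.append(g)
    return sum(guesses) / len(guesses)
```

With `social_weight=1.0`, `crowd_estimate(5, 1.0)` and `crowd_estimate(5000, 1.0)` return exactly the same number – adding agents adds no information at all – whereas with `social_weight=0.0` the estimate converges on the true value as the crowd grows.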

The paper is more concerned with describing its model than with describing or analyzing the experiment itself, which is a pity, as I’d like to know more about the populations used and tasks performed, and whether it really is discriminating confident from independent behaviour. I’ve also done some work in this area and have written about how useful it would be to automatically identify independent thinkers, and to use their captured behaviour instead of that of the whole crowd to make decisions, but I have never implemented that because, in real life, it is quite hard to do. In this experiment, it seems quite possible that the ‘independent’ people might simply have been those who knew more about the domain. That’s great if we are using a sequence of captured data from the same domain (in this case, the length of country borders) because we get results from those that know rather than those that guess. But it won’t transfer when the domain changes even slightly: knowing the length of the Swiss border might not predict knowledge of, say, the length of the Nigerian border, though I guess it might improve things slightly because those that care about such things would be better represented in the sample.

It would take a fair bit of evidence, I suspect, to identify someone as a context-independent independent thinker. Given enough time, though, it could be done, it would be well worth doing, and this model might provide the means to do it. I’d like to see it applied in a real context. There are less lengthy and privacy-invading alternatives. For instance, we might capture both a rating/value/judgement/whatever and some measure of confidence. Some kinds of prediction market capture that sort of data and, because of the personal stake involved, might achieve better results when we do not have a long history of data to analyze. Whether and to what extent confidence is related to independence, and whether the results would be better, remain to be discovered, of course – there’s a good little research project to be done here – but it would be a good start.
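Aggregating a rating together with a self-reported confidence could be as simple as a weighted mean. This is only a sketch of the idea, not anything proposed in the paper; the pair format and the weighting scheme are my own assumptions.

```python
def confidence_weighted_mean(judgements):
    """judgements: iterable of (estimate, confidence) pairs, with
    confidence in (0, 1]; confidence is used here as a crude stand-in
    for the independence weighting discussed above."""
    total_confidence = sum(c for _, c in judgements)
    return sum(e * c for e, c in judgements) / total_confidence

# A confident judge pulls the aggregate toward her estimate far more
# strongly than a hesitant one.
result = confidence_weighted_mean([(100, 0.9), (40, 0.1)])
```

Whether confidence actually tracks independence is, as noted above, exactly the open question: this weighting is only as good as that correlation.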

Address of the bookmark: https://arxiv.org/abs/1406.7578

Commons In A Box

Landing-like software from CUNY, based on BuddyPress, intended to provide a learning commons with relatively little effort or configuration. It’s a nice bit of packaging: slick, with good collaboration tools and a simple, activity-stream-oriented social network. Commons In A Box is definitely worth looking at if you need a site to support a bottom-up social community or network and you don’t have a wealth of resources to put into building your own.

I came across this software because it is being used in the University of Brighton’s newly reborn community site at https://community.brighton.ac.uk which, until it was killed off last year, used to run on Elgg. I remain a fan of Elgg for building such things: it has a lot more options than BuddyPress available by default, richer access control, and a much more elegant technological design that makes customization more robust and flexible. But this seems to be a great, simple solution that just works without demanding much effort and that, thanks to its WordPress foundations, could be customized to do pretty much anything you’d want a bit of social software to do.

Address of the bookmark: http://commonsinabox.org/

Former Facebook Workers: We Routinely Suppressed Conservative News

The unsurprising fact that Facebook selectively suppresses and promotes different things has been getting a lot of press lately. I am not totally convinced yet that this particular claim of political bias itself is 100% credible: selectively chosen evidence that fits a clearly partisan narrative from aggrieved ex-employees should at least be viewed with caution, especially given the fact that it flies in the face of what we know about Facebook. Facebook is a deliberate maker of filter bubbles, echo chambers and narcissism amplifiers and it thrives on giving people what it thinks they want. It has little or no interest in the public good, however that may be perceived, unless that drives growth. It just wants to increase the number and persistence of eyes on its pages, period. Engagement is everything. Zuckerberg’s one question that drives the whole business is “Does it make us grow?” So, it makes little sense that it should selectively ostracize a fair segment of its used/users.

This claim reminds me of those that attack the BBC for both its right wing and its left wing bias. There are probably those that critique it for being too centrist too. Actually, in the news today, NewsThump, noting exactly that point, sums it up well. The parallels are interesting. The BBC is a deliberately created institution, backed by a government, with an aggressively neutral mission, so it is imperative that it does not show bias. Facebook has also become a de facto institution, likely with higher penetration than the BBC. In terms of direct users it is twenty times the size of the entire UK population, albeit that BBC programs likely reach a similar number of people. But it has very little in the way of ethical checks and balances beyond legislation and popular opinion, is autocratically run, and is beholden to no one but its shareholders. Any good that it does (and, to be fair, it has been used for some good) is entirely down to the whims of its founder or incidental affordances. For the most part, what is good for Facebook is not good for its used/users. This is a very dangerous way to run an institution.

Whether or not this particular bias is accurately portrayed, it does remain highly problematic that what has become a significant source of news, opinion and value setting for about a sixth of the world’s population is clearly susceptible to systematic bias, even if its political stance remains, at least in intent and for purely commercial reasons, somewhat neutral. For a site in such a position of power, though, almost every decision becomes a political decision. For instance, though I approve of its intent to ban gun sales on the site, it is hard not to see this as a politically relevant act, albeit one that is likely more driven by commercial/legal concerns than morality (it is quite happy to point you to a commercial gun seller instead). It is the same kind of thing as its reluctant concessions to support basic privacy control, or its banning of drug sales: though ignoring such issues might drive more engagement from some people, it would draw too much flak and ostracize too many people to make economic sense. It would thwart growth.

The fact that Facebook algorithmically removes 95% or more of potentially interesting content, and then uses humans to edit what else it shows, makes it far more of a publisher than a social networking system. People are farmed to provide stories, rather than paid to produce them, and everyone gets a different set of stories chosen to suit their perceived interests, but the effect is much the same. As it continues with its unrelenting and morally dubious efforts to suck in more people and keep them for more of the time, with ever more-refined and more ‘personalized’ (not personal) content, its editorial role will become ever greater. People will continue to use it because it is extremely good at doing what it is supposed to do: getting and keeping people engaged. The filtering is designed to get and keep more eyes on the page and the vast bulk of effort in the company is focused wholly and exclusively on better ways of doing that. If Facebook is the digital equivalent of a drug pusher (and, in many ways, it is) what it does to massage its feed is much the same as refining drugs to increase their effects and their addictive qualities. And, like actual drug pushing that follows the same principles, the human consequences matter far less than Facebook’s profits. This is bad.

There’s a simple solution: don’t use Facebook. If you must be a Facebook user, for whatever reason, don’t let it use you. Go in quickly and get out (log out, clear your cookies) right away, ideally using a different browser and even a different machine than the one you would normally use. Use it to tell people you care about where to find you, then leave. There are hundreds of millions of far better alternatives – small-scale vertical social media like the Landing, special purpose social networks like LinkedIn (which has its own issues but a less destructive agenda) or GitHub, less evil competitors like Google+, junctions and intermediaries like Pinterest or Twitter, or hundreds of millions of blogs or similar sites that retain loose connections and bottom-up organization. If people really matter to you, contact them directly, or connect through an intermediary that doesn’t have a vested interest in farming you.

Address of the bookmark: http://gizmodo.com/former-facebook-workers-we-routinely-suppressed-conser-1775461006

Google’s new media apocalypse: How the search giant wants to accelerate the end of the age of websites – Salon.com

A sad article, if ever there was one. This is about Google’s in-kind response to Facebook’s depressingly successful attempts to be a bigger and better AOL/Compuserve (amongst other things, through its ‘philanthropic’ internet.org arm, that people in developing countries afflicted with it sometimes think of as the Internet). The general idea is that Google will host content, rather than linking to it.

This is not the way the Internet should go, and it is not in line with Google’s avowed intent not to be evil. On the bright side, though the trend is poisonous and virulent, it is not the way the Internet is really going: the Internet is, ultimately, self-healing, in both technical and social terms. It might look like a fairly closed system to people who generally interact with it through Facebook or Google Search (or any of hundreds of thousands of other less successful attempts to lock people in) but it is heartening that WordPress dwarfs all of them put together in terms of sites and the people that visit them (more than a quarter of all sites), and WordPress sites are, to a very large extent, controlled and owned by the people that run them. And that’s just the most popular content management system: the Web is many times bigger and more distributed than that, and the Internet is vastly bigger still. And, of course, Google is not the only search engine. You can find the rest of the Web in many other ways.

So, though the article claims doom and gloom all round, I remain optimistic that common sense and decency (or indecency, if that happens to be your thing) will triumph in the end, and the game will never be over. A few successful parasitic corporations/applications – Facebook (including its subsidiaries like Instagram, WhatsApp, etc), Google, Apple, Amazon, Twitter, Snapchat, LinkedIn, Yahoo, Microsoft, Pinterest, etc – are doing their darnedest to wreck the open Internet, and are definitely shaping much of it, killing open standards, and sucking billions of people into their locked-in lairs, but those billions of people are just a click away (often, from within those systems themselves) from what the Internet is actually composed of, especially the Web side of it. Sure, these are parasites that suck the life out of openness and diversity but, like all parasites, they would be more than foolish to kill their host. And, hearteningly, the network effect (especially the rich-get-richer Matthew Effect) works just as effectively in reverse, as any former MySpace or Friendster aficionado will tell you. Or AOL, for that matter.

Address of the bookmark: http://www.salon.com/2016/05/01/googles_new_media_apocalypse_how_the_search_giant_wants_to_accelerate_the_end_of_the_age_of_websites/

Reactions to Facebook's reactions

I quite like the word ‘reactions’ that Facebook is using to describe its new options for expressing feelings about a post. I wish I’d thought of it. This is a matter of much more than passing interest to me, as it relates closely to something that occupied a lot of my time over some years of my life. In my own CoFIND social bookmarking system (which first saw the light of day about 18 years ago and underpinned my PhD work) I used to refer to something quite similar as ‘qualities’ – metadata (tags) that show not just that something is good or interesting but how it is good or interesting, and that could then be used to rate and thus help to filter and rank a feed of bookmarked resources. CoFIND is an acronym – Collaborative Filter in N Dimensions – that refers to this n-dimensionality of ratings. Facebook’s Reactions feature is a simplified version of this: it’s about categories more than tags, but the thinking behind it is broadly similar. The differences, though, are interesting.

Fuzzy ratings

One of the things that is most notable about Facebook Reactions is that ratings are, like the Likes before them, binary: a simple ‘yes’ or ‘not-rated’. In most versions of CoFIND (it iterated a lot), users could choose to what extent something was good/loved/annoying/interesting/etc through a Likert scale. Giving the option to choose the strength of a feeling seems much more sensible when talking about fuzzy values like this. I want to be able to signify that I quite like something, or that it is mildly amusing, especially if my intent is to communicate my feelings to others. Facebook’s Reactions are coarse as a means of expression: it is quite appropriate that its emoticons are literal caricatures. In all the methods I tried – radio buttons, clickable links, etc – introducing scalar ratings turned out to be way too complex to be usable, but web interfaces were not as rich in those days: I think things like popup draggable sliders (not dissimilar to Facebook’s interface) might make it more feasible nowadays.
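The Likert-scale approach can be sketched in a few lines. This is not CoFIND’s actual implementation (which went through radio buttons, clickable links, and more); the class names and the simple mean-based ranking are my own illustrative choices, but they show how scalar ratings on a named quality can drive the filtering and ranking of a feed.

```python
from collections import defaultdict
from statistics import mean

class Resource:
    """A bookmarked resource rated on named qualities with Likert scores."""
    def __init__(self, title):
        self.title = title
        self.ratings = defaultdict(list)  # quality -> list of 1..5 scores

    def rate(self, quality, score):
        if not 1 <= score <= 5:
            raise ValueError("Likert score must be between 1 and 5")
        self.ratings[quality].append(score)

    def strength(self, quality):
        # How strongly, on average, raters feel the quality applies
        scores = self.ratings.get(quality)
        return mean(scores) if scores else 0.0

def rank(resources, quality):
    """Order a feed by how strongly the crowd feels the quality applies."""
    return sorted(resources, key=lambda r: r.strength(quality), reverse=True)
```

The key difference from a binary Like is that `rate` captures degree, so ‘mildly amusing’ and ‘hilarious’ produce different rankings rather than identical clicks.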

Evolving metadata

Facebook Reactions are not just binary but fixed. CoFIND – I think, still uniquely – allowed individuals to create new qualities (reactions), which could then be used by anyone else. It was an n-dimensional rating system where ‘n’ could be any number at all. Qualities quite literally evolved within each community, with more-used qualities surviving (remaining immediately available for use) and less-used ones being relegated to backwaters of the system (effectively dying, albeit with the possibility of resurrection if added again). This allowed such metadata to mirror the values that mattered most within a given community or network, rather than being imposed uniformly on everyone, and allowed those values to evolve as the community itself evolved. While I appreciate the simplicity of Facebook’s interface (CoFIND’s most fatal flaw was always that its interface was far too complex to be usable) I still think that user-created ways of emoting – what I have since called ‘fuzzy tags’ – lead to much more useful reactions that matter within a given community, especially when users can choose the degree to which a fuzzy tag applies. When CoFIND was used in an educational setting, qualities like ‘good for beginners’, ‘authoritative’, or ‘comprehensive’ tended to emerge – they were pedagogical metadata. When used in other contexts, such as to discover what HCI students considered important in a website, site-ranking qualities like ‘slow’, ‘boring’, ‘artistic’ and ‘informative’ appeared.
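The survival mechanism can be sketched as a usage-ranked pool. Again, this is a toy rather than CoFIND’s real code: the `QualityPool` class and the fixed number of visible slots are my own assumptions, but they capture the idea that heavily used qualities stay immediately available while neglected ones fade, and that reuse can resurrect a ‘dead’ quality.

```python
from collections import Counter

class QualityPool:
    """A community-created rating vocabulary in which frequently used
    qualities stay visible and rarely used ones fade away."""
    def __init__(self, visible_slots=5):
        self.usage = Counter()
        self.visible_slots = visible_slots

    def use(self, quality):
        # Creating and using a quality are the same operation:
        # any user may introduce a new one at any time.
        self.usage[quality] += 1

    def visible(self):
        # Only the most-used qualities survive into the visible set;
        # the rest effectively die, though further use resurrects them.
        return [q for q, _ in self.usage.most_common(self.visible_slots)]
```

Because each community (or, with parcellation, each topic) gets its own pool, the vocabulary that evolves reflects that community’s values rather than a fixed set imposed on everyone.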

CoFIND qualities

 

Parcellation

One of the things I hate most vehemently about Facebook is that it same-ifies everything: a person in Facebook has a single unchanging (and permanently reified) identity, with a single network, a single facade, a single caricatured way of being in the world, notwithstanding the odd nod to diversity like pages and lists. Facebook’s business model relies on this, because any clustering or parcellation reduces the potential to connect, and connections are everything to Facebook. This makes me highly sceptical of its claimed ‘discovery’ that people are actually separated by only 3.57 degrees rather than six. Given that the system very deliberately drives them to friend as many others as possible, on the most tenuous grounds of connection, this is hardly surprising. It shows not that previous studies are mistaken but the extent to which Facebook has manipulated human networks for profit. Apart from evolving to fit a single community, another of the things CoFIND did was deliberately parcellate the environment, allowing different sets of values to evolve in different contexts. What is ‘good’ in the context of learning to read is not likely to be ‘good’ in the context of learning geometry, so different topics each evolved a (largely) separate set of qualities. This might not have been the best way to drive the growth of large networks, but it was a much better way to enable the self-organized emergence of meaningful communities. It also allowed individuals to express and embrace different facets of themselves, which in turn made it easy to accommodate changing needs and interests: essential in the context of learning, which is (if nothing else) about change.

You can read about the tortuous process of CoFIND’s development and the thinking behind it in my PhD thesis. I continued to develop CoFIND into the mid 2000s but, though the final version was a bit more usable and scalable (I rewrote it in PHP and changed a lot of the mechanisms, simplifying a fair number of things, including losing the fuzzy ratings) I’m still most fond of the final version that is described in the thesis.

Address of the bookmark: http://www.huffingtonpost.com/entry/facebook-reactions-update_us_56ccb128e4b0ec6725e42861?ir=Weird+News&section=us_weird-news&utm_hp_ref=weird-news