12 Awesome Social Media Facts and Statistics for 2013

An interesting summary of the GlobalWebIndex for Q2 2013. 

Some takeaways are:

Google+ is catching up with Facebook in numbers, but not in active usage. This makes sense, as Google has a very different and less unpleasant agenda than Facebook: one that is all about search, not lock-in. Google+ gets by far the largest number of visits, which is exactly what Google is aiming for.

Pinterest is still the fastest growing social media system. The report calls it a ‘social network’ but I think that is a slight mischaracterization that doesn’t quite capture its distinctiveness – it is at least as much about interest sets as about networks between people, with a focus on content and themes far more than on individuals connected with one another. Tumblr is not far behind.

The trend is towards increasing mobility, of course. As an aside, it is interesting that Microsoft recently redefined smartphones as PCs. Probably an unwise statement from the point of view of their shareholders as it reduces Windows PCs to a very small percentage of the total.

Address of the bookmark: http://www.jeffbullas.com/2013/09/20/12-awesome-social-media-facts-and-statistics-for-2013/

EdTechnology Ideas – Education Technology Journal

A new open-access educational technology journal. It looks slick, has a CC licence and a social approach, and I know and respect a couple of the editorial team, so I think it should be reliable and interesting.

I’m slightly less clear about the need for yet another journal in a crowded market, though I guess it’s good to have a thriving ecosystem with plenty of competing species. However, there is a balance to be struck between those benefits and the relatively small amount of attention that can be spread around. Now that there are plenty of open-access journals of this nature, I see a strong place for metajournals that consolidate writings around particular themes and/or that use curatorial skills to identify the best of the best. To some extent this already occurs in isolated pockets like blogs and curated sites such as Pinterest, but there is scope for more concerted and formalized efforts in this field.

Address of the bookmark: http://edtechnologyideas.com/

Fiverr: Graphics, marketing, fun, and more online services for $5

A marketplace for services, many of which start at $5, hence the name. Compared with long-established competitors like Amazon’s Mechanical Turk, this is very simple to use and easy to understand: you hire someone for a ‘gig’ and they do the work for you, whether it is proofreading, choosing a gift, teaching you to juggle, turning your room design into a CAD drawing, correcting your code or whatever. Mostly, you pay $5 or some multiple of $5. Because it is a global site, some of the prices are amazingly low. It has a simple collective approach to reputation management so, like most such sites, it is not too hard to find reliable service providers. I’m torn between concern about the ease with which it could facilitate contract cheating and delight that people can distribute workload in such a simple and convenient manner. I’ve not come up with a personal use for it yet but can see the potential value in many different areas.

Address of the bookmark: http://fiverr.com/

When Did Human Speech Evolve?

Balanced critique by Barbara J. King for NPR of a study that reveals a strong correlation between the brain processes involved in technology use (flint knapping) and those involved in language. The study itself uses fTCD (functional transcranial Doppler ultrasound) to show brain activity while subjects are engaged in language and tool-use tasks, with remarkably consistent patterns for both.

The authors suggest that ‘tool-making and language share a basis in more general human capacities for complex, goal-directed action’. The critique linked here provides grounds for being wary of drawing firm conclusions of this nature, because there are other confounding factors (we already use language, so it is possible that we are using it to conceptualize how we go about using tools) and the fTCD approach is a bit coarse. However, the study’s results accord well with the widely held view that language is a technology. Whether tool use or language use evolved first is still up for debate, though I strongly suspect that they evolved in tandem. Language is a technology that makes other technologies possible and vice versa: all technologies are mutually constitutive assemblies, evolving as a result of being combined and recombined.

Address of the bookmark: http://www.npr.org/blogs/13.7/2013/09/05/219236801/when-did-human-speech-evolve?ft=1&f=

IGI Global: Open Access

This is a very interesting development. I’ve not looked fully into the fine print, but on the face of it IGI, publisher of my first book and a number of chapters and articles I have written over the years, may have seen the light and be partially moving to an open publishing model, with free and open sharing and a Creative Commons licensing structure. This is big news, as IGI is quite a significant player in the academic publication market.

Until now, IGI’s draconian terms and conditions and shameless profiteering at the expense of hard-working academics had put me off working with them ever again, but this looks like something that might well change my mind. At the moment it is in beta and appears to be intended only for papers, but I applaud them for taking this initiative and hope that it will be extended into their book publishing business too.

Address of the bookmark: http://www.igi-global.com/open-access/

Guesses and Hype Give Way to Data in Study of Education – NYTimes.com

This is a report on the What Works Clearinghouse, a set of ‘evidence-based’ experimental studies of things that affect learning outcomes in US schools, measured in the traditional ‘did they do better on the tests’ manner. It’s a great series of reports.

I have a number of big concerns with this approach, however, quite apart from the simplistic measurement of learning outcomes, which ignores what is arguably the most important role of education: changing how you think, not just knowing stuff or acquiring specific skills. There is not much measurement of that apart from, indirectly, the acquisition of the metaskill of passing tests, which seems counter-productive to me. What bothers me more, though, is the naive analogy between education and clinical practice. The problem is an old one that Checkland expressed quite nicely when talking of soft systems:

“Thus, if a reader tells the author ‘I have used your methodology and it works’, the author will have to reply ‘How do you know that better results might not have been obtained by an ad hoc approach?’ If the assertion is: ‘The methodology does not work’, the author may reply, ungraciously but with logic, ‘How do you know the poor results were not due simply to your incompetence in using the methodology?’”

Not only can good methodologies be used badly, bad methodologies can be used well. Teaching and learning are creative acts, each transaction unique and unrepeatable. The worst textbook in the world can be saved by the best teacher, the best methodology can be wrecked by an incompetent or uncaring implementation. Viewed by statistical evidence alone, lectures are rubbish, but most of us who have been educated for long enough using such methods can probably identify at least the odd occasion when our learning has been transformed by one. Equally, if we have been subjected to a poorly conducted active learning methodology, we may have been untouched or, worse, put off learning about the subject. It ain’t what you do, it’s the way that you do it.

Comparing education with medicine is a category mistake. It would be better to compare it with music or painting, for instance. ‘Experimental studies show that children make better art with pencils than with paints’ might be an interesting finding as a statistical oddity, but it would be a crass mistake to therefore no longer allow children to have access to paintbrushes. ‘On average, children playing violins make a horrible noise’ would not be a reason to stop children from learning to play the violin, though it is undoubtedly true. But it is no more ridiculous than telling us that ‘textbook X leads to better outcomes than textbook Y’, that a particular pedagogy is more effective than another, or that a particular piece of educational software produces no measurable improvement over not using it. Interestingly, the latter point is made in a report from the ‘What Works Clearinghouse’ site at http://ies.ed.gov/ncee/pubs/20094041/pdf/20094041.pdf which, amongst other interesting observations, makes the point that the only thing that does make a statistical difference in the study is student/teacher ratios. Low ratios allow teachers to exhibit artistry, to adapt to learners’ needs, to demonstrate caring for individuals’ learning more easily. This is not about a method that works – it is about enabling multiple methods, adapted to needs. It is about allowing the teacher to be an artist, not an assembly worker implementing a fixed set of techniques.

I am not against experimental studies, as long as we are very clear and critical in our interpretation of them and do not over-generalize the results. It would be very useful to know that something really does not ever work for anyone, but I’m not aware of many unequivocal examples of this. Even reward and punishment, which fails in the overwhelming majority of cases, has at least some evidence of success in some cases for some people – very few, but enough to show it is not always wrong.

Even doing nothing which, surely, must be a prime candidate for universal failure, sometimes works very well. I was once in a maths class at school taken by a teacher who, for the last few months of the two-year course, was taken ill. His replacements (for some time we had a different teacher every week, most of whom were not maths teachers and knew nothing of the syllabus) did very little more than sit at the front of the class and keep order while we studied the textbook and chatted amongst ourselves. The average class grade in the national exams sat at the end of it all was considerably higher than had ever been achieved in that school previously – over half of us got A grades where, in the past, twenty percent would have been a good showing. Of course, ‘nothing’ does not begin to describe what actually happened in the class in the absence of a teacher. The textbook itself was a teacher and, more importantly, we were one another’s teachers. Our sick teacher had probably inspired us, and the very fact that we were left adrift probably pulled us closer together and made us focus differently than we would have done in the presence of a teacher. Maybe we benefited from the diversity of stand-in teachers. We were probably the kind of group that would benefit from being given more control over our own learning – we were the top set in a school that operated a streaming policy so, had it happened to a different group, the results might have been disastrous. Perhaps we were just a statistically improbable group of maths geniuses (not so for me, certainly, so we might rule that one out!). Maybe the test was easier that year (unlikely, as about half a dozen other groups didn’t show such improvement, but perhaps we just happened to have learned the right things for that particular test). I don’t know.

And that is the point: the process of learning is hugely complex, multi-faceted, influenced by millions of small and large factors. Again, this is more like art than medicine. The difference between a great painting and a mediocre one is, in many cases, quantitatively small, and often a painting that disobeys the ‘rules’ may be far greater than one that keeps to them. The difference between a competent musician and a maestro is not that great, viewed objectively. In fact, many of my favourite musicians have objectively poor technique, but I would listen to them any day rather than a ‘perfect’ rendition of a MIDI file played by an unerring computer. The same is true of great teaching, although this doesn’t mean it is necessarily the result of a single great teacher – the role may be distributed among other learners, creators of content, designers of education systems, etc. I’m fairly sure that, on average, removing a teacher from a classroom at a critical point would not be the best way to ensure high grades in exams, but in this case it appeared to work, for reasons that are unclear but worth investigating. An experimental study might have overlooked us and, even if it did not, would tell us very little about the most important thing here: why it worked.

We can use experimental studies as a starting point for exploring how and why things fail and how and why they succeed. They are the beginning of a design process, or steps along the way, but they are not the end. It is useful to know that low student/teacher ratios are a strong predictor of success, but only because it encourages us to investigate why that is so. It is even more interesting to investigate why it does not always appear to work. Unlike clinical studies, the answer is seldom reducible to science and definitely not to statistics, but knowing such things can make us better teachers.

I look forward to the corollary of the What Works Clearinghouse – the Why it Works Clearinghouse.

Address of the bookmark: http://www.nytimes.com/2013/09/03/science/applying-new-rigor-in-studying-education.html?_r=0

LinkedIn launches LinkedIn for Education

This is about connecting with people at colleges, or people you went to college with, rather than being a service for academics like academia.edu or others of that ilk. It is an incremental change from the ways LinkedIn already pulls together people who claim the same institutional background, but an interesting development none the less.

Address of the bookmark: http://pro.gigaom.com/blog/linkedin-launches-linkedin-for-education/

Killing stupid software patents is really easy, and you can help

I’ve very rarely come across a software patent that is not really stupid, that does not harm everyone apart from patent trolls and lawyers, and that is not predated by earlier examples of prior art. This article explains how anyone can easily put a stop to such patents before they do any damage. Great stuff.

Address of the bookmark: http://boingboing.net/2013/07/24/killing-stupid-software-patent.html

MOOPhD accreditation

A recent post at http://www.insidehighered.com/views/2013/06/05/essay-two-recent-discussions-massive-open-online-education reminded me that the half-formed plan that Torsten Reiners, Lincoln Wood and I dreamt up needs a bit of work.

So, to add a little kindling to get this fire burning…

Our initial ideas centred around supporting the process of doing research and writing papers for a PhD by publication. This makes sense and, we have learned, PhDs by publication are actually the norm in many countries, including Sweden, Malaysia and elsewhere, so it is, in principle, do-able and does not require us to think more than incidentally about the process of accreditation. However, there are often invisible or visible obstacles that institutions put in place to limit the flow of PhDs by publication: residency requirements, only allowing them for existing staff, high costs, and so on.

So why stop there?

Cranking the levers of this idea pump a little further, a mischievous thought occurs to me. Why not get a PhD on reputation alone? That is, after all, exactly how any doctorate is awarded, when it comes down to it: it is basically a means of using transferable reputation (think of this as more like a disease than a gift – reputations are non-rival goods), passing it on from an institution to an awardee, with a mutational process built in whereby the institution itself gets its own research reputation enhanced by a similar pass-it-on process. This system honours the institution at least as much as the awardee, so there’s a rich interchange of honour going on here. Universities are granted the right to award PhDs, typically through a government mandate, but they sustain their reputation and capacity to do so through ongoing scholarship, publication and related activities, and through the activities of those that it honours. A university that awarded PhDs without itself being a significant producer of research, or that produced doctors who never achieved any further research of any note, would not get very far. So, a PhD is only a signal of the research competence in its holder because an awarding body with a high reputation believes the holder to be competent, and it sustains its own reputation through the activities of its members and alumni. That reputation occurs because of the existence of a network of peers, and the network has, till now, mostly been linked through journals, conferences and funding bodies. In other words, though someone goes to the trouble of aggregating the data, the actual vector of reputation transmission is through individuals and teams that are linked via a publication process. 

So why not skip the middle man? What if you could get a PhD based on the direct measures of reputation that are currently aggregated at an institutional level rather than those that have been intentionally formalized and aggregated using conventional methods?

Unpicking this a little further, the fact that someone has had papers published in journals implies that they have undergone the ordeal by fire of peer review, which should mean they are of doctoral quality. But that doesn’t mean they are any good. Journals are far from equal in their acceptance rates and the quality of their reviewers – there are those with good reputations, those with bad ones, and a lot in between. Citations by others help to assure us that they may have something of value in them, but citations often come as a result of criticism, and do not imply approval of the source. We need a means to gauge quality more accurately. That’s why the h-index was invented. There are lots of reasons to be critical of this and similar measures: they fail to value great contributions (Einstein would have had a very low h-index had he only published his most important contributions), they embody the Matthew Effect in ways that make their real value questionable, they poorly distinguish large and small contributions to collaborative papers, and the way they rank the importance of journals etc. is positively mediaeval. It is remarkable to me to surf through Google Scholar’s rankings and find that people who are among the most respected in my field have relatively low indexes, while those who just plug away at good but mundane research have higher ones. Such indexes do none-the-less imply the positive judgements of many peers with more rigour and fairness than would normally be found in a doctoral committee, and they give a usable number to grade contributions. So, a high h-index or i10-index (Google’s measure of papers with at least 10 citations) would satisfy at least part of the need for validation of quality of research output. But, by definition, they undervalue the work of new researchers, so they would be poor discriminators if they were the only means to evaluate most doctorates. On the other hand, funding councils have already developed fairly mature processes for evaluating early-career researchers, so perhaps some use could be made of those. Indeed, the fact that someone has successfully gained funding from such a council might be used as partial evidence towards accreditation.
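For what it’s worth, both measures are mechanically simple to compute from a list of per-paper citation counts. Here is a minimal Python sketch, using made-up citation counts, that also illustrates the problem noted above: a researcher with a few hugely influential papers scores lower than one who steadily produces solidly cited but mundane work.

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


def i10_index(citations):
    """i10-index (Google Scholar's measure): the number of papers
    with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)


# Hypothetical citation counts, invented purely for illustration.
few_but_great = [1200, 950, 40]                       # few papers, enormous impact
steady_plodder = [15, 14, 13, 12, 11, 11, 10, 10, 9, 8]

print(h_index(few_but_great), i10_index(few_but_great))    # 3 3
print(h_index(steady_plodder), i10_index(steady_plodder))  # 9 8
```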

A PhD, even one by publication, is more than just an assortment of papers. It is supposed to show a sustained research program and an original contribution to knowledge. I hope that there are few institutions that would award a PhD to someone who had simply had a few unrelated papers published over a period of years, or to someone who had done a lot of mundane but widely cited reports with no particular research merit. So, we need a bit more than citation indexes or other evidence of being a world-class researcher to offer a credible PhD-standard alternative form of certification.

One way to do this would be to broadly mirror the PhD by publication process within the MOOC. We could require peer ‘marking’, by a suitable panel, of a paper linking a range of others into a coherent piece of doctoral research, perhaps defended in a public webmeeting. This would be a little like common European defence processes, in which theses are defended not just in front of professors but also before any member of the public (typically colleagues, friends and families) who wants to come along. We could increase the rigour a little by requiring that those participating in such a panel have a sufficiently high h-index or i10-index of their own in a similar subject area, and/or a relevant doctorate. Eventually the system could become self-supporting, once a few graduates had emerged. In time, being part of such a panel would become a mark of prestige in itself. Perhaps, for pedagogic and systemic reasons, engagement in such a panel would be a prerequisite for making your own ‘doctoral’ defence. Your rating might carry a weighting that accorded with your own reputational index, with those starting out weighted quite low and those with doctorates, ‘real’ doctoral students, etc. weighted more highly. The candidates themselves and other more experienced examiners might rate these novice examiners, so a great review from an early-career candidate might increase their own ranking. It might be possible to make use of OpenBadges for this, with badges carrying different weights according to who awarded them and for what they were awarded.
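To make the weighting idea a little more concrete, here is a purely hypothetical Python sketch of how a panel’s scores might be combined, with each examiner’s vote weighted by a crude reputational measure. Every name, threshold and weight here is invented for illustration, not a proposed specification.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Examiner:
    name: str
    h_index: int = 0
    has_doctorate: bool = False

    @property
    def weight(self) -> float:
        """Reputational weighting (invented heuristic): doctorate holders and
        well-cited researchers count for more; novices still count, but less."""
        base = 1.0 if self.has_doctorate else 0.5
        return base + min(self.h_index, 20) / 20.0  # cap the citation bonus


def panel_verdict(scores):
    """Weighted mean of examiners' scores (each on a 0-10 scale),
    given a dict mapping Examiner -> score."""
    total_weight = sum(examiner.weight for examiner in scores)
    weighted_sum = sum(examiner.weight * score for examiner, score in scores.items())
    return weighted_sum / total_weight


panel = {
    Examiner("established professor", h_index=25, has_doctorate=True): 8.0,
    Examiner("early-career researcher", h_index=3, has_doctorate=True): 6.0,
    Examiner("fellow candidate", h_index=1): 9.0,
}
print(round(panel_verdict(panel), 2))  # 7.53 – pulled towards the heavily weighted professor
```

Capping the citation bonus stops a single highly cited examiner from completely swamping the panel; a real scheme would obviously need far more careful design, and could draw on OpenBadges or similar evidence rather than raw numbers.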

Apart from issues of motivation, the big problem with the peer-based approach is that it could be seen as a case of the blind leading the blind, as well as potentially raising ethical issues in terms of bias and lack of accountability. A ‘real’ PhD committee/panel/etc. is made up of carefully chosen gurus with an established reputation or, at least, it should be. In North America these are normally the people who supervise the student, which is dodgy, but which normally works OK due to accountability and professional ethics. Elsewhere, examiners are external and deliberately unconnected with the candidate, or consist of a mix of supervisors and externals. Whatever the details, the main point here is that the examiners are fully accredited experts, chosen and vetted by the institutional processes that make universities reliable judges in the first place. So, to make it more accountable, more use needs to be made of the reputational network that sustains traditional institutions, at least at the start. To make this work, we would need to get a lot of existing academics with the relevant skills on board. Once it had been rolling for a few years, it ought to become self-sustaining.

This is just the germ of an idea – there’s lots of ways we could build a very cheap system that would have at least as much validity as the accreditation procedures used by most universities. If I were an employer, I’d be a lot more impressed by someone with such a qualification than I would by someone with a PhD from most universities. But I’m just playing with ideas here. My intent is not to create an alternative to the educational system, though that would be very interesting and I don’t object to the idea at all, but to highlight the often weird assumptions on which our educational systems are based and ask some hard questions about them. Why and on what grounds do we set ourselves up as arbiters of competence? What value do we actually add to the process? How, given propensities of new technologies and techniques, could we do it better? 

Our educational systems are not broken at all: they are actually designed not to work. Well, ‘design’ is too strong a word as it suggests a central decision-making process has led to them, whereas they are mainly the result of many interconnected decisions (most of which made sense at the time but, in aggregate, result in strange outcomes) that stretch back to mediaeval times. Things like MOOCs (and related learning tools like Wikipedia, the Khan Academy, StackOverflow, etc) provide a good opportunity to think more clearly and concretely about how we can do it better and why we do it the way we do in the first place.

Doug Engelbart, American inventor and computing legend, has passed away — Tech News and Analysis

Sad news of the death, at 88, of one of the greatest thinkers and inventors of the past century. Although the headlines all proclaim him as the inventor of the mouse, that was only one of his many achievements, and far from the most profoundly influential. Among the many other things that he invented or played a significant role in inventing were the first working hypertext system (and hence the Web), the word processor, the Internet (his lab was the second node on its forerunner, the ARPANET), email, video conferencing, and windowing systems like those of the Mac and Windows. A modest and inspiring genius whose vision of augmenting, not replacing, human intellect reverberates loudly to this day.

Address of the bookmark: http://gigaom.com/2013/07/03/doug-engelbart-american-inventor-computing-legend-passes-away/