Interactive Learning Online at Public Universities: Evidence from Randomized Trials | Ithaka S+R

Yet another no-significant-difference paper.

I’d feel a lot more positive towards this report if its abstract did not begin: “Online learning is quickly gaining in importance in U.S. higher education, but little rigorous evidence exists as to its effect on student learning outcomes.”

For all the ‘rigour’ of their method, it appears that they failed to do a literature review, because an absolutely massive amount of rigorous evidence already exists that shows exactly the same thing. Anyway, here is some more. And, like all the rest, the graphs look nice but otherwise it is pretty pointless. Like all the rest, you might just as well look at the effect of transistors or buildings on student learning outcomes. It ain’t what you do, it’s the way that you do it: that’s what’s missing here.

For the record, they are not actually looking at online learning but at blended (or, as they prefer, ‘hybrid’) approaches. 

Address of the bookmark: http://www.sr.ithaka.org/research-publications/interactive-learning-online-public-universities-evidence-randomized-trials

Texting frequency and moral shallowing

An interesting study that reveals, in accordance with Nicholas Carr’s predictions, that there is a close positive correlation between what most of us would consider moral ugliness and frequent texting, at least among young people in Winnipeg. The correlations between frequent texting and moral dissolution are unsurprising, as the study appears to suggest that 42% of students in Winnipeg text more than 200 times a day, and 12% of them do so more than 300 times a day. That leaves little time for thought: 300 texts averages out at one every 3 minutes for 15 hours of the day. I guess they read the replies too. And eat and use the bathroom (I don’t want to even think about that in the context of texting). And indulge what appear to be quite prodigious and positively correlated sexual appetites (or that). Luckily for the rest of us, that leaves little time to pursue their interests in wealth and status. My suspicion would be that most activities, apart from breathing, that we engage in 300+ times a day are unlikely to do us much good.
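As a quick sanity check of that arithmetic (the 15 waking hours is my assumption, not a figure from the study):

```python
# 300 texts a day, spread over roughly 15 waking hours,
# works out to one text every 3 minutes.
texts_per_day = 300
waking_hours = 15  # assumed hours per day available for texting

minutes_between_texts = (waking_hours * 60) / texts_per_day
print(minutes_between_texts)  # → 3.0
```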

Address of the bookmark: http://news-centre.uwinnipeg.ca/wp-content/uploads/2013/04/texting-study.pdf

The pedagogical foundations of massive open online courses | Glance | First Monday

A charmingly naive article taking a common-sense, straightforward approach to asking whether the woefully uniform pedagogies of the more popular Coursera-style MOOCs might actually work. The authors identify the common pedagogies of popular MOOCs then use narrative analysis to see whether there has been empirical research to show whether those pedagogies can work. The answer, unsurprisingly, is that they can. It would have been a huge surprise if they couldn’t. This is a bit like asking whether email can be used to communicate.

I like the way this article is constructed and the methods used. Its biggest contribution is probably the very simple (arguably simplistic) description of the central pedagogies of MOOCs. Its ‘discoveries’ are, however, spurious. The fact that countless millions of people do learn online using some or all of the pedagogical approaches used by MOOCs is evidence enough that their methods can work, and it really doesn’t demand narrative analysis to demonstrate this blindingly obvious fact – one for the annals of obvious research, I think. Like all soft technologies, it ain’t what you do, it’s the way that you do it, that’s what gets results. ‘Can work well’ in general does not mean ‘does work well’ in the particular. We know that billions of people have learned well from books, but that does not mean that all books teach well, nor that books are the best way to teach any given subject.

Address of the bookmark: http://firstmonday.org/ojs/index.php/fm/article/view/4350/3673

Elgg source code evolution (before 4th May 2013) – YouTube

A fascinating visualization showing developer contributions to the open source core of the Elgg project (used here on the Landing) over the past 5 years or so. Compelling to watch, and especially pleasing to see how the number of contributors has grown over the past year or so, probably as much due to moving to GitHub from Trac as anything else, though the great work of the Elgg Foundation team in building on and employing the work of the community goes hand in hand with that. It makes me feel quite a lot more secure about the future of the technology to know that so many people are active in pushing it forward. It would be intriguing to look at the larger ecosystem of plugins that sits around the core using a similar visualization.

Address of the bookmark:

Discourse – rebooted forum software

Discourse is an extremely cool, open source reinvention of forum software, replete with modern features like real-time AJAX loading of threads (which are not the usual tree-like things but more a flat form with contextual threading as and when needed) and lots of collective features including reputation management, tagging, rating and ranking, what’s-hot lists and so on. It looks slick and hooks into plenty of other services. I’d like to see something like this on the Landing instead of its simple discussion boards. Not trivial to integrate, but it does have an open and rich API, so it can be called easily from other systems.

Address of the bookmark: http://www.discourse.org/

Knewton is cool, but it is very dangerous

At the Edtech Innovation 2013 conference last week I attended an impressive talk from Jose Ferreira on Knewton, a tool that does both large-scale learning analytics and adaptive teaching. Interesting and ingenious though the tool is, its implications are chilling. 

Ferreira started his talk with a view of the history of educational technology that somewhat mirrors my own, starting with language as the seminal learning technology that provided the foundation for the rest (I would also consider other thinking tools like drawing, dance and music to be important here, but language is definitely a huge one). He then traced technology innovations like writing, printing, etc., and, a little inaccurately, mapped these to their reach within the world population. So printing reached more people than writing, for instance, and formal schooling opened up education to more people than earlier cottage-industry approaches. That mapping was a bit selective, as it ignored the near-100% reach of language, as well as the high penetration of broadcast technologies like TV, radio and cinema. But I was OK with the general idea – that educational technologies offer the potential for more people to learn more stuff. That is good.

The talk continued with a commendable dismissal of the industrial model of education that developed a couple of hundred years ago. This model made good economic sense at the time and made much of the improvement to the human condition since then possible (and the improvements are remarkable), but it relies on a terrible process that was a necessary evil then and that, with modern technologies and needs, no longer makes sense. From a learning perspective it is indeed ludicrous to suggest that groups of people of a similar age should learn the same way at the same time. But there is more. Ferreira skipped over an additional, crucial concern with this model of education. A central problem with the industrial model, when used for more than basic procedural knowledge, is not just that everyone is learning the same way at the same time but that they are (at least if it works, which it thankfully doesn’t) learning the same things. That is a product of the process, not its goal. No one but a fool would deliberately design a system that way: it is simply what happens when you have to find a solution to teaching a lot of people at once, with only simple technologies like timetables, classrooms and books to help, and a very limited set of teaching resources to handle it. It is not something to strive for, unless your goal is cultural and socio-economic subjugation. Although, as people like Illich and Freire eloquently demonstrated a long time ago, such oppression may be the implicit intent, most of us would prefer that not to be the case. Thankfully, what and how we think we teach is very rarely, if ever, precisely what and how people actually learn. At least, that has been the case till now. The Knewton system might actually make that process work.

Knewton has two distinct functions that were not clearly separated in Ferreira’s talk but that are fundamentally different in nature. The first is the feedback on progress that the system provides for teachers and learners. With a small proviso that bad interpretations of such data may do much harm, I think the general idea behind that is great, assuming a classroom model and the educational system that surrounds it remain much as they are now. The technology provides information about learner progress and teaching effectiveness in a palatable form that is genuinely useful in guiding teachers to better understand both how they teach and the ways that students are engaging with the work. It is technically impressive and visually appealing – little fleas on an ontology map showing animated versions of students’ learning paths are cool. Given the teaching context that it is trying to deal with, I have no problems with that idea and applaud the skill and ingenuity of the Knewton team in creating a potentially useful tool for teachers. If that were all it did, it would be excellent. However, the second and far more worrying broad function of Knewton is to channel and guide learners themselves in the ‘right’ direction. This is adaptive hypermedia writ large, and it is emphatically not great. It is particularly problematic because it is based on a (large) ontology of facts and concepts that represent what is ‘right’ from an expert perspective, not on the values of such things nor on the processes for achieving mastery, which may be very different from their ontological relationships with one another.

There is one massive problem with adaptive hypermedia of this nature, notwithstanding the technical problems caused by the inordinate complexity of the algorithms and mass of data points used here, and ignoring the pedagogical weaknesses of treating expert understanding as a framework for teaching. The big problem is more basic: it assumes there is a right answer to everything. This is a model of teaching and learning (in that order) that is mired in an objectives-driven model. But my reaction to Ferreira’s talk (both now and while he was speaking), which I assume was meant to teach me about Knewton, self-referentially shows that reaching defined objectives is not always the main value in effective teaching and learning. Basically, what he wanted to tell me is clearly not, mainly, what I learned. And that is always the case in any decent learning experience worthy of the name. In fact, the backstories, interconnections, and recursive, iterative constructions and reconstructions of knowledge that go on in most powerful learning contexts are typically the direct result of what might be perceived, by those seeking efficient mastery of learning outcomes, as inefficiency. In educational transactions that work as they should, some of what we learn can be described by learning outcomes, but the real big learning that goes on is usually under the waterline and goes way beyond the defined objectives. While skill acquisition is a necessary part of the process and helps to provide foci and tools to think with, meaningful learning is also transformative, creative and generative, and it hooks into what we already know in unpredictable ways.

So Knewton reinforces a model that deals with a less-than-complete subset of the value of education. So what? There’s nothing wrong with that in principle, and that’s fine if that is all it does. We don’t have to listen to its recommendations, the whole Web is just a click away and, most importantly, we can construct our own interpretations and make our own connections based on what it helps to teach us. It gives us tools to think with. If Knewton is part of a learning experience, surely there is nothing wrong with making it easier to reach certain objectives? If nothing else, teaching should make learning less painful and difficult than it would otherwise have been, and that’s exactly what the system is doing. The problem, though, is that if Knewton works as advertised, the paths it provides probably are the most efficient way to learn whatever fact or procedure the system is trying to teach us. This leads to the crucial problem: assuming it works, Knewton reinforces our successful learning strategies (as measured by the narrow objectives of the teacher) and encourages us to take those paths again and again. Because it adapts to us, rather than making us adapt ourselves, we are not stretched to find our own ways through something confusing or vague, and we don’t get to explore less fruitful paths that sometimes lead to serendipity and, less commonly but more importantly, to transformation – paths that stretch us to learn differently. Knewton, if it works as intended, makes a filter bubble that restricts the range of ways that we have to learn, creating habits of behaviour that send us ever more efficiently to our goals. Fundamentally, learning changes people, and learning how to learn in different ways, facing different problems differently, is precisely what it is all about: mechanical skills just give us better tools for doing that. The Knewton model does not encourage change and diversity in how we learn: it encourages reinforcement.
That is probably fine if we want to learn (say) how to operate a machine, perform mathematical operations, or remember facts, as part of a learning process intended to achieve something more. However, though important, this is not the be-all and end-all of what effective education is all about and is arguably the lesser part of its value. Effective education is about changing how we think. Something that reinforces how we already think is therefore very bad. Human teachers model ways of knowing and thinking that open us up to different ways of thinking and learning – that’s what makes Knewton a useful tool for teachers, because it helps to better reveal how that happens and allows them to reflect and adapt.

None of this would matter all that much if Knewton remained simply one of an educational arsenal of weapons against ignorance in a diverse ecosystem of tools and methods. However, that does not match Ferreira’s ambitions for it: he wants it to reach and teach 100% of the world’s population. He wants it to be freely available and used by everyone, to be the Google of education. That makes it far more dangerous, and that’s why it worries me. I am pleased to note that Ferreira is not touting the tool as having value in the teaching of softer subjects like art, literature, history, philosophy, or education, and that’s good. But there are those, and I hope Ferreira is not among them, who would like to analyse development in such learning contexts and build tools that make learning in such areas easier in much the same way as Knewton currently does in objectives-driven skill learning. In fact, that is almost an inevitability, an adjacent possible that is too tempting to ignore. This is the thin end of a wedge that could, without much care, critical awareness and reflection about the broader systemic implications, be even more disastrous than the industrial model that Ferreira rightly abhors. Jose Ferreira is a likeable person with good intentions and some neat ideas, so I hope that Knewton achieves modest success for him and his company, especially as a tool for teachers. But I hope even more that it doesn’t achieve the ubiquitous penetration that he intends.

MOOCs do not represent the best of online learning (essay) | Inside Higher Ed

Another post about MOOCs that misses the point. The author, Ronald Legon, seems hopeful that ‘MOOC 2.0’ will arrive with better pedagogy, more support and better design. I have no doubt that what he describes will happen, at least in places, but it is certainly not worthy of the ‘2.0’ moniker. It is simply an incremental natural evolution that adds efficiency and refinement to a weak model, but it’s not a paradigm shift. 

The trouble is that Legon hasn’t bothered to check the history of the genre. The xMOOCs under attack here are not far removed from the attempts by organizations and companies in the 1990s to replicate the strategies that had worked for old-fashioned mass media. They were not so much ‘Web 1.0’ as a bastardization of what the Web was meant from the start to be – which is why those of us who had been doing ‘Web 2.0’ stuff since the early nineties hate the term. Similarly, xMOOCs are a bastardization of what MOOCs started out to achieve, and they miss the point entirely. What is the point? George Siemens explains this better than I could, so here is his take on the topic:

Happily, many people are using xMOOCs in a cMOOC-like way, so they are succeeding in learning with one another despite weak pedagogies, unsuitable structures, and excessive length. While the intentions of the people who run them are quite different, many of the people using them to learn are doing so as part of a personal learning journey, in networks and learning communities with others, taking pieces that interest them from different MOOCs and mashing them up. They are in control, not the MOOC creators. Completion rates of less than 10% are a worry to the people who run them, not to those who don’t complete them (true, there may be some who are discouraged by the process, but I hope not).

MOOC 2.0, like Web 2.0, is likely to be what MOOC 1.0 (the real MOOC 1.0) tried to be – a cMOOC.  

I do see a glowing future for great content of the sort created for these xMOOCs (big information-heavy sites of the sort found in the 1990s have never gone away and continue to flourish) but they may have to adapt a little. I think that they will have to disaggregate the chunks and let go of the control. It is encouraging to see an increasing tendency to reduce their size to 4-week versions, but the whole notion of a fixed-length course is crazy. Sometimes, 4 weeks will do. Sometimes, 4 minutes would be better. Occasionally, 4 years might be the ideal length. Whatever they turn out to be, they must be seen as parts of an individually assembled whole, not as large-scale equivalents of traditional approaches to teaching that only exist because of physical constraints in the first place and that are sustained not only by continuing constraints of that nature but by a ludicrously clunky, counter-productive and unreliable accreditation process.

Address of the bookmark: http://www.insidehighered.com/views/2013/04/25/moocs-do-not-represent-best-online-learning-essay

Spy ebook reader

I like this device but I love the way it is being sold. This and other products on the site are sold (at exorbitant prices, incidentally, that can all be greatly bettered elsewhere) as devices intended to help people cheat in exams. Excellent.

While a cheat reading carefully from a watch of this size is unlikely to fool any but the least attentive of invigilators, it and other technologies available on the site demonstrate rather nicely that the arms race between examiners and cheats will never be won by either side. It inevitably leads to spiralling costs that cannot be sustained by schools, universities and other organizations that use exams, and some cheats will always be caught. This is in the nature of technological evolution. It cannot be otherwise. Sometimes cheats will be in the ascendant, sometimes invigilators, but neither faction can ever win.

I think there is a place for summative exams in some limited areas, like driving cars or journalism, where the method of assessment is authentic for the task being assessed. For formative assessment purposes, they can be a good idea, as long as nothing rides on them, they are ungraded, and the resulting feedback is positive and helpful. In most other cases, decades of research show that they are antagonistic to motivation and thus to learning. Studies reveal that the majority of school students, a large number of undergraduates, and a smaller number of postgraduates admit to having cheated in exams. Even the least sophisticated exams are expensive, because they contribute less than nothing to the learning process, so their costs always come on top of the cost of teaching. Most fail even in their most basic role: to provide a reliable measure of skill or achievement. It’s way past time to get rid of them.

Address of the bookmark: http://www.spystudy.com/ebook-reader-Mp3/Mp4-spy-watch.html

MOOCs are so unambitious: introducing the MOOPhD

Massive Open Online PhDs

During my recent visit to Curtin University, Torsten Reiners, Lincoln Wood and I started brainstorming what we think might be an interesting idea. In brief, it is to build and design what should eventually become a massive, open, online PhD program. Well, nearly. This is a work in progress, but we thought it might be worth sharing the idea to help spark other ideas, get feedback and maybe gather a few people around us who might be interested in it.
The starting point for this was thinking about ways of arranging crowd funding for PhD students, which evolved into thinking about other crowd-based/funded research support tools and systems to support that. For example, we looked at possible ways to not only crowd-fund research projects but to provide structures and tools to assist the process: forming and setting up project teams, connecting with others, providing project management support, proposal writing assistance, presenting and sharing results, helping with the process of writing reports and papers for publication, and so on. Before long, what we were designing began to look a little like a research program. And hence, the MOOPhD (or MOOD – massive open online doctorate).
A MOOPhD is a somewhat different kind of animal from a MOOC. It is much longer and much bigger, for a start – more of a program than a course. For many students it might, amongst other things, encapsulate a variety of MOOCs that would help them to gain knowledge of the research process, including a range of research methods courses and perhaps some more specific subject-related courses.  This is quite apart from the central process of supporting the conduct of original research that would form the ‘course’ itself.
A MOOPhD will also attract a very different kind of learner from those found in most MOOCs, notwithstanding the fact that, so far, a lot of MOOC-takers already have at least a first degree, not uncommonly in the same subject area as the MOOC.
Perhaps the biggest difference between a MOOPhD and a MOOC, at least of the xMOOC variety, is the inevitable lack of certainty about the path to the destination. MOOCs usually have a fairly fixed and clear trajectory, as well as moderately fixed content and coverage.  Even cMOOCs that largely lack specified resources, outcomes and assessments, have topics and timetables mapped out in advance. While the intended outcomes of a PhD are typically pretty clear (the ability to perform original and rigorous research, to write academically sound papers and reports, to design a methodology, review literature, etc), and there are commonalities in the process and landmarks along the way, the paths to reaching those goals are anything but determined. A PhD, to a far greater degree than most courses and lower level programs, specifies a method and processes, but not the content or pathways that will be taken along the way. This raises some very interesting and challenging questions about what we mean by ‘course’ and the wisdom and validity of MOOCs in general, but discussion of that can wait for another post. Suffice to say, it is a bit different from what we have seen so far.
There are many existing sites and systems that provide at least some of the tools and methods needed. I have had peripheral involvement with a support network for students investigating learning analytics, for example, and have helped to set up a site to provide resources for graduate students and their supervisors. There are commercial sites like academia.edu and ResearchGate that connect academics, including graduate students. There are some existing MOOCs on research methods and crowd-funding sites to help with fees and kick-starting projects such as http://www.rockethub.com/ or www.razoo.com.  And, of course, there is the complete system of journal and conference reviewing that provides invaluable feedback for nascent researchers. Like all technologies, what we are thinking about involves very little if anything that is radically new, but is mostly an assembly of existing pieces. 
It is likely that, for many, a PhD or other doctorate would not be the final outcome. People would pick and choose the parts that are of value, helping them to set up projects, write papers or form networks. Others might treat it as a useful resource for a more traditional doctoral learning journey.
 

So what might a MOOPhD look like? 

A MOOPhD would, of necessity, be highly modular, offering student-controlled support for all parts of the research process, from research process teaching, through initial proposals, through project management, through community support, through paper writing etc. Students would choose the parts that would be of value to them at different times. Different students would have different needs and interests, and would need different support at different points along the journey. For some, they might just need a bit of help with writing papers. For others, the need might be for gaining specific skills such as statistical analysis or learning how to do reviews.  More broadly, the role of a supervisory team in modelling practice and attitudes would be embedded throughout.
Importantly, apart from badges and certificates of ‘attendance’, a MOOPhD would not be concerned with accreditation. We would normally expect existing processes for PhDs by publication that are available at many institutions to provide the summary assessment, so the program itself would simply be preparation for that. As a result of this process, students would accrue a body of research publications that could be used as evidence of a sustained research journey, and a set of skills that would prepare them for viva voces and other more formal assessment methods. This would be good for universities as they would be able to award more PhDs without the immense resources that are normally needed, and good for students who would need to invest less money (and maybe be surrounded by a bigger learning community).
 

Some features and tools

A MOOPhD might contain (amongst other things):
  • A community of other research students, with opportunities to build and sustain networks of both peers (other students) and established researchers
  • MOOCs to help cover research methods, subject specialisms, etc
  • A great deal of scaffolding: resources to help explain the process, information about everything from ethics to citation, means and criteria to self-assess such as wizards, forms and questionnaires, guidelines for reviewing papers, etc
  • Mentors (not exactly supervisors – too tricky to deal with the numbers), including both experienced academics and others further along in the PhD process. Mentors might provide input to a group/action learning set of students rather than to individuals, and thus allow students to observe the behaviours that the academics model.
  • Exemplars – e.g. marked-up reviews of papers. This is vital as one of the ways of allowing established academics to provide role models and show what it means to be an academic
  • Plentiful resources and links relevant to the field (crowd-generated)
  • A filtering and search system to help identify people and things 
  • A means to provide peer review to others (akin to an online journal submission system)
  • A means to have one’s own ideas and papers reviewed by peers
  • Tutorial support – most likely a variant on action learning sets to support the process. This would cover the whole process from brainstorming, to literature review, to methodology design, to conduct and analysis of research, to evaluation etc. Ideally, each set would be facilitated by a professional academic or at least an experienced peer.
  • A professionally peer reviewed journal system, with experienced academic editorial committees and reviewers (who would only see papers already ranked highly in peer review), leading to publication
  • Support for gaining funding – including crowd funding – for the research, particularly with regard to projects needing resources not already available
  • Support for finding collaborators
  • Support for managing the process – both of the whole venture as well as specific projects
  • Non-academic support – counselling and advice
  • Tools and resources to find accreditors – this is not about providing qualifications but preparing students so that they can easily get them

Some issues

There are some complex and significant problems to solve before this becomes a reality, including:

Accreditation

The main idea behind this is to prepare students for a PhD by publication, not to award doctorates. It is essentially about managing a research learning process and helping students to publish results. However, sustaining motivation over a long period without the promise of accreditation might be an issue.

Access to resources

One of the biggest benefits of an institution for a PhD student is access to closed journals and libraries. While it is possible to pay for such access separately from a course, and a system would certainly contain links to ways of discovering open articles, this could be an obstacle. Of course, while we would not and could not condone the use of the community to share closed articles, it is hard to see how we could police such sharing. 

Ethics

Without an institutional backdrop, there would be no easy way to ensure ethical research. Resources could be provided, action learning sets could be used to discuss such concerns, and counselling might be available (perhaps at a price) to help ensure that a process would be followed that wouldn’t pose an obstacle to gaining accreditation, but it would be difficult to ensure an ethically sound process was followed. This is an area where different countries, regions and universities follow different procedures anyway, and there is only broad uniformity around the world, so some flexibility would be needed.

Governance

Beyond issues of ethics, there is a need to find solutions to disputes, grievances, allegations of cheating etc. This might be highly distributed and enabled through crowd-based processes. A similar issue relates to ‘approvals’ of research projects: there would probably need to be something akin to the typical review processes that determine whether a student’s progress and/or proposed path are sufficient. It is likely that action learning sets could play a big role in assisting this process.

Subject specificity

The skills (and resources) needed for different types of PhD can vary enormously – the skills and resources needed by a mathematician are worlds away from those needed by someone engaged in literary criticism, which are worlds away from those needed by a physicist, astronomer or biologist. It would probably be too big a task to cater for all, and some might be all but impossible (e.g. if they require access to large hadron colliders or telescopes, or are performing dangerous, large scale or simply complex experiments). To some extent this is not the huge problem it first appears to be. It is likely that most of those interested in pursuing this process would already be either working in a relevant field (and thus have resources to call upon) or already be enrolled in an academic program, which would reduce some of the problem, but the chances are that the most likely areas where this process could successfully be applied would be those requiring few resources beyond a good brain, commitment and a computer. There are opportunities for multiple instances of this process across multiple subject areas and disciplines. Given our interests and constraints, we would probably aim in the first instance for people interested in education, technology, business, or some combination of these. However, there is scope for a much broader diversity of systems, probably linked in some ways to gain the benefits of common shared resources and a larger community.

Cold start

As the point of this is to leverage the crowd, it will be of little value if there is not already a crowd involved. The availability of high-quality resources, links and MOOCs might be sufficient to provide an initial boost to draw people to the system, as would a team of interesting mentors and participants, but it would still take a while to pick up steam.

Trust

In some fields, students are already reluctant to share information about their research, so this might be especially tricky in an open PhD process. Building sufficient trust in action learning sets and across the broader community may be problematic. The openness needed for many MOOCs already poses a challenge for some, but this process would require more disclosure, on an ongoing basis, than normal. This might be the price to be paid for an otherwise free program. However, the anticipated high drop-out rate would make it difficult to sustain tight-knit research groups/action learning sets over a prolonged period, so we would probably need to think more in terms of cooperative than collaborative processes, which may be difficult to manage.

Start-up costs and maintenance

This will not be a cheap system to build, though development might be staggered. Resources would be needed for building and maintaining the server(s), creating content, managing the editing process for the journal, and so on. Potential funding models include start-up grants, company sponsorship (the value to organizations of a process like this could be immense), crowd-funding, subscription, advertising/marketing, etc. Selling lists of participants bothers me, ethically, but a voluntary entry onto a register that might be passed on to interested companies for a fee might have high value. While we might not award doctorates, those who could stay the course would clearly be very desirable potential employees or research team members.

Encouraging academics to participate

Altruism and social capital can sustain a relatively brief open course, but this kind of process would (unless a different approach can be discovered) require long-term commitment and engagement by professional academics. There may be ways to provide value to academics beyond the pleasure of contributing and learning from students. For instance, students might be expected or required to credit academics as co-authors where those academics have had some input into the process, whether through feedback along the way or in reviewing or completing papers, or academics might be granted access to data collected by students. This would provide some incentive for academics to help ensure the quality of the research, and would help students by letting them see an experienced academic’s thinking processes in action.

Summary

This is a work in progress and there are some big obstacles in the way of making it a reality. We would welcome any ideas, suggestions or expressions of interest!