ToyRep 3D Printer – Costs Under $85 to Build Using Super Cheap 28BYJ-48 Motors

This is interesting – a fully functional 3D printer for (potentially) under $85. Of course, there are caveats. Though the printer itself seems very capable, even compared with those that cost at least ten or fifteen times as much, a fair amount of skill is needed to build it. It also relies on a fair number of 3D printed parts, so you need access to a 3D printer to make one. That said, even if you had to rely on a company to produce those parts for you, and even if you invested in a better print head than the cheap one described here, it would still be possible to build one of these for a few hundred dollars. This might not be the perfect solution for schools and the like, where reliability and safety are paramount, but it looks like a great alternative for hobbyists wanting to explore Santa Claus machines.

Any moment now, 3D printing looks set to hit the mainstream. I’m still not quite sure what such machines can really do, given their current reliance on PLA or ABS filaments, their slow print speeds and their unreliable operation. I have spent a while browsing Thingiverse looking for projects and have been amused by printable guitars and violins (some gluing and extra components required). I’ve had a few thoughts about designing bits and pieces like cord organizers, replacement parts for broken devices and instruments, home gadgets, and so on, but I have yet to come up with any really compelling use cases: everything I think of is more trouble, and not significantly cheaper, than simply buying the things ready-made. Most of the objects available on Thingiverse look a lot like uses of Sugru – great fun, ingenious, but embarrassingly amateurish, garish and crude. And 3D printers are not compact things – you need to put them and their raw materials somewhere. For low-utilization scenarios it’s still more sensible, and not much more expensive, to simply send a design to a 3D printing service.

I feel almost certain that there are educational uses for such things. They are most obviously valuable for kids and for those in physical design disciplines (architecture, engineering, interior design, sculpture, etc), and I can think of a few ways of using printed artefacts to help make concepts more concrete in a physical classroom (physical models of routers, logic gates, and the like, for instance), but I have yet to work out a way to incorporate them into the things I teach online, all of which are conceptual and/or virtual. I’m hoping that, when I get one, the possible will become more adjacent.

Address of the bookmark: http://3dprint.com/89620/toyrep-3d-printer

Welcome to The Internet of Compromised Things

Jeff Atwood clearly and coherently explains why connecting to the Internet is scary. It’s especially scary when all of our devices – cars, lights, heating, gas pumps, locks, surveillance cameras, TVs, etc – are connected. Most of us have learned to be at least a bit careful with our computers, but we tend to be more careless and trusting with those simple plug-in devices. Unfortunately, among the weakest links are our routers and, once they are owned, it is really hard to escape the malware that controls them. Worse, like many of our devices, their updates and configuration tend to be ignored or forgotten. As more and more devices embed powerful and dangerous net-connected computers, this problem is going to get a lot worse over the coming years. There is some good advice in this article on protecting yourself as best you can.

Address of the bookmark: http://blog.codinghorror.com/welcome-to-the-internet-of-compromised-things/

We're heading Straight for AOL 2.0 · Jacques Mattheij

Interesting commentary on the hijacking and usurpation of open protocols by web companies intent on making a profit by closing their ecosystems via non-standard apps layered over HTTP. As Mattheij notes, this is very similar to the way AOL, CompuServe and other commercial providers used to lock in their users. Now, instead of running proprietary systems over layer 2-4 protocols (as AOL et al used to do), vendors are running them over layer 5 (or, for OSI purists, layer 7) protocols, with proprietary APIs designed to hook others into their closed systems (think Facebook or Google logins). The end result is the same, and it’s a very bad result.

Mattheij writes:

Please open up your protocols, commit to keeping them open and publish a specification. And please never do what twitter did (start open, then close as soon as you gain traction).

I completely concur.

Address of the bookmark: http://jacquesmattheij.com/aol-20

Protocols Instead Of Platforms: Rethinking Reddit, Twitter, Moderation And Free Speech | Techdirt

Interesting article on the rights of companies to moderate posts, following the recent Reddit furore that, in microcosm, raises a bunch of questions about the future of the social net itself. The distinction between freedom of speech and the rights of hosts to do whatever they goddam please – legal constraints permitting – is a fair and obvious one to make.

The author’s suggestion is to decentralize social media systems (specifically Twitter and Reddit though, by extension, others are implicated) by providing standards/protocols that could be implemented by multiple platforms, allowing the development of an ecosystem in which different sites operate different moderation policies but which, from an end-user perspective, is no more difficult to use than email.

The general idea behind this is older than the Internet. Of course, there already exist many systems that post via proprietary APIs to multiple places, from WordPress plugins to Known, not to mention those ubiquitous ‘share’ buttons found everywhere, such as at the bottom of this page. But, more saliently, email (SMTP), Internet Relay Chat (IRC), Jabber (XMPP) and Usenet news (NNTP) are prototypical and hugely successful examples of exactly this kind of thing. In fact, NNTP is so close to Reddit’s pattern in form and intent that I don’t see why it could not be re-used, perhaps augmented to allow smarter ratings (not difficult within the existing standard). Famously, Twitter’s choice of character limit is entirely down to fitting a whole Tweet, including metadata, into a single SMS message, so that part is already essentially done. However, standards are not often in the interests of companies seeking lock-in and a competitive edge. Most notably, though they very much want to encourage posting in as many ways as possible, they also want to control the viewing environment, as the gradual removal of RSS from prominent commercial sites like Twitter and Facebook shows in spades. I think that’s where a standard like this would run into difficulties getting off the ground. That, and Metcalfe’s Law: people go where people go, and network value grows proportionally to the square of the number of users of a system (or far more than that, if Reed’s Law holds). Only a truly distributed, ubiquitously used system could avoid that problem. Such a thing has been suggested for Reddit and may yet arrive.
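To make the network arithmetic concrete, here is a minimal sketch in Python, assuming the standard formulations of Metcalfe’s Law (value roughly proportional to the number of possible pairwise connections) and Reed’s Law (value proportional to the number of possible sub-groups, i.e. exponential in the number of users); the user counts are arbitrary, purely for illustration:

```python
# Rough illustration of why 'people go where people go': estimated network
# value under Metcalfe's Law (~ n^2) vs Reed's Law (~ 2^n sub-groups).
# The user counts below are arbitrary and purely illustrative.

def metcalfe_value(n: int) -> float:
    """Value proportional to the number of possible pairwise connections."""
    return n * (n - 1) / 2

def reed_value(n: int) -> int:
    """Value proportional to the number of possible sub-groups of two or more."""
    return 2 ** n - n - 1

for n in (10, 20, 40):
    print(f"n={n:>3}  Metcalfe ~ {metcalfe_value(n):,.0f}  Reed ~ {reed_value(n):,.0f}")

# The same maths cuts both ways: one user leaving a 40-person network removes
# every sub-group that included them - roughly half of the Reed-style value.
loss = 1 - reed_value(39) / reed_value(40)
print(f"Share of Reed-style value lost when 1 of 40 users leaves: {loss:.0%}")
```

This is only a caricature – real networks do not behave so neatly – but it shows why a departing or indifferent user matters rather more than intuition suggests.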

As long as we are in thrall to a few large centralized commercial companies and their platforms – the Stacks, as Bruce Sterling calls them – it ain’t going to work. Though an incomplete, buggy and over-complex implementation played a role, proprietary interest is essentially what has virtually killed OpenSocial, despite its being a brilliant idea much along these lines but more open, and despite its having virtually every large Internet company on board, bar one. Sadly, that one was the single most avaricious, amoral, parasitic company on the Web. Almost single-handedly, Facebook managed to virtually destroy the best thing that might have happened to the social web, one that could have made it a genuine web rather than a bunch of centralized islands. It’s still out there, under the auspices of the W3C, but it doesn’t seem to be showing much sign of growth or deployment.

Facebook has even bigger and worse ambitions. It is now, cynically and under the false pretence of opening access to third-world countries, after the Internet itself. I hope the company soon crashes and burns as fast as it rose to prominence – this is theoretically possible, because the same cascades that created it can almost as rapidly destroy it, as the once-huge MySpace and Digg discovered to their cost. Sadly, it is run by very smart people who totally get networks and how to exploit them, and it has no ethical qualms to limit its growth (though it does have some ethical principles about some things, such as open source development – its business model is evil, but not all of its practices are). It has so far staunchly resisted attack, notwithstanding its drop in popularity in established markets and a long history of truly stunning breaches of trust.

Do boycott Facebook if you can. If you need a reason, other than that you are contributing to the destruction of the open web by using it, remember that it tracks you hundreds of times in a single browsing session and, flouting all semblance of ethical behaviour, it attempts to track you even if you opt out. You are its product. Sadly, with its acquisition of companies like Instagram and WhatsApp, even if we can kill the primary platform, the infection runs deep. But, as Reed’s Law shows, though each new user increases its value, every user who leaves Facebook, or even who simply ignores it, reduces its value by a similarly exponential amount. Your vote counts!

Address of the bookmark: https://www.techdirt.com/articles/20150717/11191531671/protocols-instead-platforms-rethinking-reddit-twitter-moderation-free-speech.shtml

Everything Science Knows About Reading On Screens

Well, maybe not everything!

This article contains some interesting and useful information about the current state of the research comparing e-reading with p-reading. In brief, there are no simple, unequivocal findings. The biggest issues with e-texts apparently relate to the propensity of screen users to skim and/or be distracted, though there are also issues with knowing where you are in an e-text, which makes it both harder to get the bigger picture of how it all hangs together and more difficult to remember some aspects of what you are reading. On the other hand, there’s good evidence that screens are better for people with some disabilities, such as age-related sight impairment and dyslexia, and the advantages of things like easy search, instant word lookup, shared annotations, variable fonts and, of course, cost and information density are pretty compelling. In the past I’ve shared some thoughts on potential solutions to the known problems with e-readers, as well as on the relative merits and demerits of each technology. Like all technologies, it ain’t what you do, it’s the way that you do it. Research like this is useful because it helps to identify design problems that we need to solve, not because it provides definitive answers. I don’t think we are going to see much improvement in paper books in the near future, but there’s plenty to work on in e-reading!

Address of the bookmark: http://www.fastcodesign.com/3048297/evidence/everything-science-knows-about-reading-on-screens

The EDUCAUSE NGDLE and an API of One's Own (Michael Feldstein)

Michael Feldstein responds to the EDUCAUSE NGDLE with a brilliant, in-depth piece on the complex issues involved in building standards for online learning tool interoperability, and more. I wish I’d read it before posting my own most recent response, because it addresses several of the same issues with similar conclusions, but in greater depth and with more eloquence, as well as bringing up other important points, such as the very complex differences in needs between different contexts of application. My post does add things that Michael’s overlooks, and its perspective is a little different (so do read it anyway!), but the overlapping parts are far better and more thoroughly expressed by Michael.

This is an idea that has been in the air and ripe for exploitation for a very long time but, as Michael says in his post and as I also claim in mine, there are some very big barriers when it comes down to implementing such a thing, and a bunch of wicked problems that are very hard to resolve to everyone’s satisfaction. We have been here before, several times: let’s hope the team behind the NGDLE finds ways to avoid the mistakes we made in the past.

Address of the bookmark: http://mfeldstein.com/the-educause-ngdle-and-an-api-of-ones-own/

How Do You Google? New Eye Tracking Study Reveals Huge Changes

Over the past ten years, the ‘golden triangle’ (the sequence of where people look when viewing Google search results and, indeed, many web pages) has changed to a fuzzy line straight down the left of the page. It used to be that people started on the left, scanned to the right, then moved on down the page – that’s what we have taught in interaction design classes, at least for web designers, for quite a while. Now, they just scroll down. They also make faster (but are they better?) decisions about where to click.

There are clearly many factors that influence this, not least Google’s UI changes and improvements in its algorithms, as well as increasing familiarity with the tools – people are getting better at knowing what to ignore, and are perhaps less influenced by a lifetime of reading on paper – not to mention the effects of the massive increase in mobile device usage, in which scrolling is pretty much the only game in town. It’s a massively complex self-organizing system, and it is fascinating to see how design and use responsively interact on a web-wide scale. So, now, designers will work on the assumption that people are going to be scrolling down, so that’s what users will learn to do, more and more, and what they will come to expect. But will it last?

It’s intriguing to wonder what will happen next. Though I remain a bit sceptical about wearables like the Apple Watch (at least until battery life gets better and app makers get away from behaviourist models of user psychology), I suspect they might be the next thing to stir up this complex ecosystem. I expect to see more single-glance sites coming soon.

Address of the bookmark: http://www.forbes.com/sites/roberthof/2015/03/03/how-do-you-google-new-eye-tracking-study-reveals-huge-changes/

A $77 3D Printer is Unveiled! Say Hello to the Lewihe Play – 3DPrint.com

To be fair, there’s not much you could do with this $77 printer – it needs a fair bit more added to it before it is fully functional, and more than a bit of assembly and skill is required to make it work. Nonetheless, this is a sign of a more general trend. Good 3D printers that are easy to use (albeit mind-numbingly slow and not as reliable as 2D printers) are at least as affordable as laser printers were 10–15 years ago. They are increasing in quality, dropping in price, getting faster, becoming more flexible, and getting closer to standard commodity items with each passing week. There is still a big leap in price from hobbyist machines that do fun and occasionally useful stuff (with some effort) to commercial machines that do really useful stuff (with relative ease), but the gap is closing fast. I want one.

Address of the bookmark: http://3dprint.com/67280/lewihe-play-cheapest-3d-print

Smart learning – a new approach or simply a new name? | Smart Learning

Kinshuk has begun a blog on smart learning and, in this post, defines what that means. I particularly like:

 I have come to realize that while technology can help us in improving learning, a fundamental change is needed in the overall perception of educators and learners to see any real effect. Simply trying to create adaptive systems, intelligent systems, or any sort of mobile/ubiquitous environments is going to have only superficial impact, if we do not change the way we teach, and more importantly, the way we think of learning process (and assessment process). 

This very much echoes my own view – at least, that such a fundamental change is needed in the context of formal education. Outside our ivory towers, that fundamental change has already happened and continues to accelerate. Google Search, Wikipedia, Twitter, Reddit, StackExchange, Facebook and countless others of their net-enabled ilk are amongst the most successful learning technologies (more accurately, components of learning technologies) ever created, arguably up there with language and writing, and ultimately way beyond printing or schools.

Kinshuk goes on to talk of an ecosystem of technology and pedagogy, which I think is a useful way of looking at it. Terry Anderson, too, talks of the dance between technology and pedagogy with much the same intent. I agree that we have to take a total systems view of this. My own take on it is that pedagogies are technologies – learning technologies are simply those with pedagogies in the assembly, whether human-instantiated or embedded in tools. Technologies and pedagogies are not separate categories. Within the ecosystem there are many other technologies involved in the assembly apart from those we traditionally label as ‘learning technologies’, such as timetables, organizational structures, regulations, departmental roles, accreditation frameworks, curricula, organizational methods, processes and rituals, not to mention pieces like routers, protocols, software programs and whiteboards. But, though important, technologies are not the only objects in this ecology. We need to think of the entire ecosystem and consider things that are not technologies at all, like friendship, caring, learning, creativity, belief, environment, ethics and, of course, people. As soon as you get past the ‘if intervention x, then result y’ mindset that plagues much learning technology (and education) research, and start to see it as a complex adaptive system that is ultimately about what it means to be human, you enter a world of rich complexity that I think is far more productive territory. It’s an ecosystem that is filled not just with process but with meaning and value.

On a more mundane and pragmatic note, I think it is worth observing that learning and accreditation of competence must be entirely separated – accreditation is an invasive parasite in this ecosystem that feeds on and consumes learning. Or maybe it is more like the effluent that poisons it. Either way, I’d prefer that accreditation should not be lumped under the ‘smart learning’ banner at all. ‘Smart accreditation’ is fine – I have no particular concerns about that, as a separate field of study. In some ways it is worthy of study in smart learning because of its effects. That is somewhat along the lines of studying oil spills when considering natural ecosystems.  Assessment (feedback, critical reflection, judgement, etc), on the other hand, is a totally different matter. Assessment is a critical part of almost any pedagogy worthy of the name and so of course must be part of a smart learning ecology. I’m not sure that it warrants a separate category of its own but it is certainly important. It is, however, highly dangerous to take the ‘easy’ next step of using it to assert competence, especially when that assertion becomes the reason for learning in the first place, or is used as a tool to manipulate learners. That is what predominantly drives education now, to the point that it threatens the entire ecosystem. 

That said, I’d like to think that the paths of accreditation and assessment might one day rejoin, because they do have a great deal in common. It would be great to find ways that the smart stuff we are doing to support learning might, as a byproduct, also provide useful evidence for accreditation, without clogging up the whole ecosystem. Technologies like Caliper, TinCan, and portfolios offer much promise for that.

Address of the bookmark: http://www.kinshuk.info/2015/05/smart-learning/

The cost of time

A few days back, an email was sent to our ‘allstaff’ mailing list inviting us to join in a bocce tournament. This took me a bit of time to digest, not least because I felt impelled to look up what ‘bocce’ means (it’s an Italian variant of pétanque, if you are interested). I guess this took a couple of minutes of my time in total. And then I realized I was probably not alone in this – that over a thousand people had also been reading it and, perhaps, wondering the same thing. So I started thinking about how we measure costs.
 

The cost of reading an email

A single allstaff email at Athabasca will likely be read by about 1200 people, give or take. If such an email takes one minute to read, that’s 1200 minutes – 20 hours – of the institution’s time taken up by a single message. This does not, however, count the cost of disrupting someone’s train of thought, which may be quite substantial. For example, this study from 2002 reckons that, not counting the time taken to read an email, it takes an average of 64 seconds to return to previous levels of productivity after reading one. Other estimates based on different studies are much higher – some suggest the real recovery time from interruptions to tasks could be as much as 15–20 minutes. Conservatively, though, it is probably safe to assume that, taking interruption costs into account, an allstaff email that is read but not acted upon consumes an average of two minutes of a person’s time: in total, that’s about 40 hours of the institution’s time for every message sent. Put another way, we could hire another member of staff for a week for the time taken to deal with a single allstaff message, not counting the work entailed for those who do act on the message, nor the effort of writing it. It would therefore take roughly 48 such messages to account for a whole year of staff time. We get hundreds of such messages each year.
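For what it’s worth, here is the same back-of-envelope arithmetic as a tiny Python sketch; the figures (1,200 readers, two minutes per message, a 40-hour week, roughly 48 working weeks in a staff year) are the rough assumptions used above, not measured data:

```python
# Back-of-envelope cost of a single 'allstaff' email, using the rough
# assumptions from the text above (not measured data).

READERS = 1200            # approximate number of staff who read an allstaff message
MINUTES_PER_MESSAGE = 2   # reading time plus interruption/recovery cost, per person
HOURS_PER_WEEK = 40       # one notional staff working week
WEEKS_PER_YEAR = 48       # notional working weeks in a staff year

hours_per_message = READERS * MINUTES_PER_MESSAGE / 60
staff_weeks_per_message = hours_per_message / HOURS_PER_WEEK
messages_per_staff_year = WEEKS_PER_YEAR / staff_weeks_per_message

print(f"Institutional time per message: {hours_per_message:.0f} hours "
      f"(~{staff_weeks_per_message:.1f} staff weeks)")
print(f"Messages needed to consume one staff year: ~{messages_per_staff_year:.0f}")
```

Change the assumptions as much as you like; the point is simply that the total scales linearly with headcount, so small per-person costs become large institutional ones.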
 
But it’s not just about such tangible interruptions. Accessing emails can take a lot of time before we even get as far as reading them. Page rendering just to view a list of messages on the web front end for our email system takes an admirably efficient 2 seconds (i.e. 40 minutes of the organization’s time for everyone to see a page of emails once, before even reading their titles). Let’s say we all did that an average of 12 times a day – that’s 8 hours, more than a full working day of the institution’s time, taken up with waiting for that one page to render each day. Put another way, if it took four seconds, the extra waiting would add up to roughly another full working day every day and, as we measure such things, we would have to fire someone to pay for it. As it happens, at another university where I have an account, one that uses MS Exchange, simply getting to the login screen of its web front end takes 4 seconds. Once logged in (a further few seconds, thanks to Exchange’s insistence on forcing you to tell it that your computer is not shared even though you have told it that a thousand times before), loading the page containing the list of emails takes a further 17 seconds. If AU were using the same system then, using the same metric of 12 visits each day, that could equate to around 68 hours of the institution’s time every single day, simply to view a list of emails, not including a myriad of other delays and inefficiencies when it comes to reading, responding to and organizing such messages. Of course, we could just teach people to use a proper email client and reduce the delay to one that is imperceptible, because it occurs in the background – webmail is a truly terrible idea for daily use – or simply remind them not to close their web browsers so often, or to read their emails less regularly. There are many solutions to this problem. Like all technologies, especially softer ones that can be used in millions of ways, it ain’t what you do, it’s the way that you do it.
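The render-time arithmetic works the same way; here is a minimal sketch using the estimates above (1,200 staff, 12 visits a day, and the example per-load timings, which are illustrative rather than benchmarks):

```python
# Daily institutional cost of waiting for the email list page to render,
# using the rough estimates in the text above (1200 staff, 12 visits a day).
# The per-load timings are the examples given, not measured benchmarks.

STAFF = 1200
VISITS_PER_DAY = 12

def daily_waiting_hours(seconds_per_load: float) -> float:
    """Total hours per day the institution spends waiting for one page to load."""
    return STAFF * VISITS_PER_DAY * seconds_per_load / 3600

print(f"Current webmail (~2 s per load):        {daily_waiting_hours(2):.0f} hours/day")
print(f"Hypothetical 4 s per load:              {daily_waiting_hours(4):.0f} hours/day")
print(f"Exchange-style front end (~17 s load):  {daily_waiting_hours(17):.0f} hours/day")
```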
 

But wait – there’s more

Email is just a small part of the problem, though: we use a lot of other websites each day. Let’s conservatively assume that, on average, everyone at AU visits, say, 24 pages in a working day (for me that figure is always vastly higher) and that each page takes about 5 seconds to load. That’s two minutes per person. Multiplied by 1200 people, it’s another week of the institution’s time ‘gone’ every day, simply spent waiting for pages to load.
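And the same again for ordinary page loads; a quick sketch using the conservative assumptions above (24 pages a day at roughly 5 seconds each, across 1,200 people):

```python
# Daily cost of ordinary page loads across the institution, using the
# conservative assumptions in the text (24 pages/day at ~5 s each, 1200 people).

STAFF = 1200
PAGES_PER_DAY = 24
SECONDS_PER_PAGE = 5
HOURS_PER_STAFF_WEEK = 40   # notional working week, as above

waiting_hours_per_day = STAFF * PAGES_PER_DAY * SECONDS_PER_PAGE / 3600
print(f"Waiting time: {waiting_hours_per_day:.0f} hours/day "
      f"(~{waiting_hours_per_day / HOURS_PER_STAFF_WEEK:.0f} staff week per day)")
```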
 
And then there are the madly inefficient, bureaucratized processes that are dictated and mediated by poorly tailored software. When I need to log into our CRM system, I reckon that simply reading my tasks takes a good five minutes. Our leave reporting system typically eats 15 minutes of my time each time I request leave (it replaced one that took 2–3 minutes). Our finance system used to take me about half an hour to enter expenses for a conference but, since we downgraded to a baseline version, now takes me several hours, and it takes even more time from the others who have to give approvals along the way. Ironically, the main intent behind implementing it was to save money spent on staffing.
 
I could go on, but I think you see where this is heading. Bear in mind, though, that I am just scratching the surface. 
 

Time and work

My point in writing this is not to ask for more efficient computer and admin systems, though that would indeed likely be beneficial. Much more to the point, I hope that you are feeling uncomfortable, or even highly sceptical, about how I am measuring this. Not about the figures: it doesn’t much matter whether I am wrong in the detailed timings or even the math. It is indisputable that we spend a lot of time dealing with computer systems and the processes that surround them every day, and small inefficiencies add up. There’s nothing particularly peculiar to ICTs about this either – think, for instance, of the time taken to walk from one office to another, to visit the mailroom, to read a noticeboard, to chat with a colleague, and so on. But is that actually time lost, and does it even equate precisely to time spent? I hope you are wondering about the complex issues involved in equating time and dollars, how we learn, why and how we account for project costs in time, the nature of technologies, the cost versus the value of ICTs, the true value of bocce tournament messages to people who have no conceivable chance of participating in them (much greater than you might at first imagine), and a whole lot more. I know I am. If there is even a shred of truth in my analysis, it does not automatically follow that the solution is simply more efficient computer systems and organizational procedures. It certainly does bring into question how we account for such things, though, and, more interestingly, it highlights even bigger intangibles: the nature and value of work itself, the nature and value of communities of practice, the role of computers in distributed intelligence, and the meaning, identity and purpose of organizations. I will get to that in another post, because it demands more time than I have to spend right now (perhaps because I receive around 100 emails a day, on average).