A modest proposal for improving exam invigilation

There has been a lot of negative reaction of late to virtual proctors of online exams. Perhaps students miss the cheery camaraderie of traditional proctored exams: sitting silently in a sweaty room with pen and paper, doing one of the highest-stakes, highest-stress tasks of their lives, while someone whose adverse judgment may destroy their hopes and careers scrutinizes their every nervous tic, all for the benefit of an invisible examiner whose motives and wishes are unclear but whose approval they dearly seek. Lovely. Traditional. Reassuring. A ritual for us all to cherish. It’s enough to bring a tear to the eye.

But exams cost a huge amount of money to host and to invigilate. It is even worse when one of the outcomes might, for the student or the invigilator, be death or disability due to an inconvenient virus.

I have a better solution.

[Photo of a toy robot]

Instead of costly invigilators and invigilation centres, all we need to do is to send out small (returnable, postage-paid) robots to students’ homes. A little robot would sit on the student’s desk or kitchen table as they sat their written exam (on paper, of course – tradition matters), recording every blink and watching their fingers move across the paper, with a 360-degree panoramic camera and the ability to zoom in on anything suspicious or interesting. Perhaps it could include microphones, infrared and microwave sensors, and maybe sensors to monitor skin resistance, pulse, and so on, in order to look for nefarious activities or to call an ambulance if the student seemed to be having a heart attack or stroke due to the stress. It could be made to talk, too. Perhaps it could offer spoken advice on the process, and alerts about the time left at carefully selected intervals. Students could choose the voice. It would also allow students to sit exams wherever and whenever they please: we are all in favour of student choice. With a bit of ingenuity it could scan what the students have written or drawn and send it back to an examiner. Or, with a bit more ingenuity and careful use of AI, it could mark the paper on the spot, saving yet more money. Everyone wins.

It would be important to be student-centric in its design. It could, for instance, be made to look like a cute little furry animal with googly eyes to put students more at ease. Maybe it could make soothing cooing noises like a tribble, or like a cat purring. Conversely, it could be made to scuttle ominously around the desk, appearing like a spider with venomous-looking fangs and making gentle hissing noises, to remind students of the much-lamented presence of in-person invigilators. Indeed, maybe it could be made to look like a caricature of a professor. More advanced models could emit bad smells to replicate invigilator farts or secret smoking habits. It could be made small and mobile, so that students could take it with them if they needed a bathroom break, during which it might play soothing muzak to put the student at ease, while recording everything they do. It would have to be tough, waterproof, and sterilizable, in order to cope with the odd frustrated student throwing or dunking it.

Perhaps it could offer stern spoken warnings if anomalies or abuses were found, and maybe connect itself to a human invigilator (I hear that they are cheaper in developing nations) who could control it and watch more closely. Perhaps it could be equipped with non-lethal weaponry to punish inappropriate behaviour if the warnings failed, and/or register students on an offenders’ database. It could be built to self-destruct if tampered with.

Though this is clearly something every university, school, and college would want, and the long-term savings would be immense, such technologies don’t come cheap. Quite apart from the hardware and software development costs, there would be a need for oodles of bandwidth and storage for the masses of data the robot would generate.

I have a solution to that, too: commercial sponsorship.

We could partner with, say, Amazon, who would be keen to mine useful information about the students’ surroundings and needs identified using the robot’s many sensors. A worn curtain? Stubborn stains? A shirt revealing personal interests? Send them to Amazon! Maybe Alexa could provide the voice for interactions and offer shopping advice when students stop to sharpen their pencils (need a better pencil? We have that in stock and can deliver it today!). And, of course, AWS would provide much of the infrastructure needed to support it, at fair educational prices. I expect early adopters would be described as ‘partners’ and offered slightly better (though still profitable) deals.

And there might be other things that could be done with the content. Perhaps the written answers could be analyzed to identify potential Amazon staffers. Maybe students expressing extremist views could be reported to the appropriate government agency, or at least added to a watch-list for the institution’s own use.

Naysayers might worry about hackers breaking into it or subverting its transmissions, or the data being sent to a country with laughable privacy laws, or the robot breaking down at a critical moment, or errors in handwriting recognition, but I’m sure all that could be dealt with, the same as we deal with every other privacy, security, and reliability issue in IT in education. No problem. No sir. We have lawyers.

The details still need to be ironed out here and there, but the opportunities are endless. What could possibly go wrong? I think we should take this seriously. Seriously.

At last, a serious use for AI: Brickit

https://brickit.app/

Brickit is what AI was made for. You take a picture of your pile of LEGO with your phone or tablet, then the app figures out what pieces you have, and suggests models you could build with it, including assembly plans. The coolest detail, perhaps, is that, having done so, it highlights the bricks you will need in the photo you took of your pile, so you can find them more easily. I’ve not downloaded it yet, so I’m not sure how well it works, but I love the concept.
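I have no idea how Brickit actually implements any of this, but the suggestion step is easy to imagine: treat the recognized pile as a multiset of bricks and keep only the designs whose parts lists fit inside it. Here’s a toy sketch of that idea in Python, with entirely made-up part numbers and design names:

```python
from collections import Counter

# A pile is a multiset of (part_id, colour) -> count.
# All part numbers and designs below are invented for illustration.
pile = Counter({("3001", "red"): 4, ("3002", "yellow"): 2, ("3003", "blue"): 8})

designs = {
    "little car": Counter({("3001", "red"): 2, ("3003", "blue"): 4}),
    "big house": Counter({("3001", "red"): 10, ("3002", "yellow"): 6}),
}

def buildable(pile, required):
    """True if every required brick is in the pile in sufficient quantity."""
    return all(pile[brick] >= count for brick, count in required.items())

print([name for name, parts in designs.items() if buildable(pile, parts)])
# ['little car']
```

The hard part, of course, is the computer-vision step that turns a photo into that inventory in the first place, which is presumably where the AI earns its keep.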

The fan-made app is iOS only for now, but an Android version is coming in the fall. It’s free, but I’m guessing it may make money in future from in-app purchases giving access to more designs, options to purchase missing bricks, or something along those lines.

It would be cooler if it connected LEGO enthusiasts so that they could share their MOCs (My Own Creations) with others. I’m guessing it might use the LXFML format, which LEGO® itself uses to export designs from its (unsupported, discontinued, but still available) LEGO Digital Designer app, so this ought to be easy enough. It would be even cooler if it supported a swap-and-share feature, so users could connect via the app to get hold of or share missing bricks. The fact that it should in principle be able to catalogue all your pieces would make this fairly straightforward to do. There are lots of existing sites and databases that share MOCs, such as https://moc.bricklink.com/pages/moc/index.page, or the commercial marketplace https://rebrickable.com/mocs/#hottest; there are brick databases like https://rebrickable.com/downloads/ that allow you to identify and order the bricks you need; there are even swap sites like http://swapfig.com/ (minifigures only); and, of course, there are many apps for designing MOCs or downloading others. However, this app seems to be the…er…missing piece that could make them much more useful.
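If the app really can catalogue every piece you own, then a swap or shopping list is just the difference between two inventories. Continuing the hypothetical sketch above, with invented data:

```python
from collections import Counter

# Your catalogued pile and a MOC's parts list (both invented for illustration).
my_pile = Counter({("3001", "red"): 4, ("3020", "grey"): 1})
moc_parts = Counter({("3001", "red"): 6, ("3020", "grey"): 1, ("3062", "clear"): 2})

# Counter subtraction keeps only positive counts, so the result is
# precisely the set of bricks you would still need to buy or swap for.
missing = moc_parts - my_pile
print(missing)  # Counter({('3001', 'red'): 2, ('3062', 'clear'): 2})
```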

Reviews suggest that it doesn’t always succeed in finding a model and might not always identify all the pieces. Also, I don’t think there’s a phone camera in the world with fine enough resolution to capture my son’s remarkably large LEGO collection. Even spreading the bricks out to take pictures would require more floor-space than any of us have in our homes. But what a great idea!

Originally posted at: https://landing.athabascau.ca/bookmarks/view/9558928/at-last-a-serious-use-for-ai-brickit

Amazon helps and teaches bomb makers

Amazon’s recommender algorithm works pretty well: if people start to gather together ingredients needed for making a thermite bomb, Amazon helpfully suggests other items that may be needed to make it, including hardware like ball bearings, switches, and battery cables. What a great teacher!
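There is nothing mysterious about the mechanism: a ‘frequently bought together’ recommender needs no understanding of what the items are for, only of which items co-occur in past orders. Here is a minimal sketch of that kind of co-purchase counting (emphatically not Amazon’s actual, unpublished algorithm), with deliberately innocuous example data:

```python
from collections import Counter
from itertools import combinations

# Each past order is a set of item ids (toy data).
orders = [
    {"tent", "sleeping bag", "camping stove"},
    {"tent", "sleeping bag", "head torch"},
    {"frying pan", "spatula"},
]

# Count how often each unordered pair of items appears in the same order.
co_bought = Counter()
for order in orders:
    co_bought.update(combinations(sorted(order), 2))

def recommend(item, k=3):
    """The k items most often bought alongside `item` (no semantics needed)."""
    scores = Counter()
    for (a, b), n in co_bought.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(k)]

print(recommend("tent"))  # ['sleeping bag', 'camping stove', 'head torch']
```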

It is disturbing that this seems to imply that there are enough people ordering such things for the algorithm to recognize a pattern. However, it would seem remarkably dumb for a determined terrorist to leave such a (figuratively and literally) blazing trail behind them, so it is just as likely to be the result of a very slightly milder form of idiot, perhaps a few Trump voters playing in their backyards. It’s a bit worrying, though, that the ‘wisdom’ of the crowd might suggest uses of and improvements to some stupid kids’ already dangerous backyard experiments that could make them way more risky, and potentially deadly.

Building intelligent systems is not too hard, as long as the activity demanding intelligence can be isolated and kept within a limited context or problem domain. Computers can beat any human at Go, Chess, or Checkers. They can drive cars more safely and more efficiently than people (as long as there are not too many surprises or ethical dilemmas to overcome, and as long as no one tries deliberately to fool them). In conversation, as long as the human conversant keeps within a pre-specified realm of expertise, they can pass the Turing Test. They are even, remarkably, better than humans at identifying from a picture whether someone is gay or not. But it is really hard to make them wise. This latest fracas is essentially a species of the same problem as the one reported last week of Facebook offering adverts targeted at haters of Jews. It’s crowd-based intelligence, without the wisdom to discern the meaning and value of what the crowd (along with the algorithm) chooses. Crowds (more accurately, collectives) are never wise: they can be smart, they can be intelligent, they can be ignorant, they can be foolish, they can even (with a really smart algorithm to assist) be (or at least do) good; but they cannot be wise. Nor can AIs that use them.

Human wisdom is a result of growing up as a human being, with human needs, desires, and interests, in a human society, with all the complexity, purpose, meaning, and value that it entails. An AI that can even come close to that is at best decades away, and may never be possible, at least not at scale, because computers are not people: they will always be treated differently, and have different needs (there’s an interesting question to explore as to whether they can evolve a different kind of machine-oriented wisdom, but let’s not go there – SkyNet beckons!). We do need to be working on artificial wisdom, to complement artificial intelligence, but we are not even close yet. Right now, we need to be involving people in such things to a much greater extent: we need to build systems that informate, that enhance our capabilities as human beings, rather than ones that automate and diminish them. It might not be a bad idea, for instance, for Amazon’s algorithms to learn to report things like this to real human beings (though there are big risks of error, reinforcement of bias, and some fuzzy boundaries of acceptability that it is way too easy to cross), but it would definitely be a terrible idea for Amazon to preemptively automate prevention of such recommendations.
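To put the informate-versus-automate distinction in concrete terms: it can come down to a single design decision about where an algorithm’s output goes. A deliberately simplistic, entirely hypothetical sketch:

```python
def handle_suspicious_pattern(pattern, review_queue):
    """Informate: put the pattern in front of a person who can judge its
    meaning and context. The automating alternative (silently suppressing
    the recommendations) would bake the algorithm's guess into policy,
    with nobody accountable for the judgment."""
    review_queue.append(pattern)

review_queue = []
handle_suspicious_pattern({"items": ["item-a", "item-b"], "co_purchases": 42},
                          review_queue)
print(review_queue)  # a human reviewer decides what, if anything, to do
```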

There are lessons here for those working in the field of learning analytics, especially those trying to use the results to automate the learning process, like Knewton and its kin. Learning, and that subset of learning that is addressed in the field of education in particular, is about living in a human society, integrating complex ideas, skills, values, and practices in a world full of other people, all of them unique and important. It’s not about learning to do; it’s about learning to be. Some parts of teaching can be automated, for sure, just as shopping for bomb parts can be automated. But those are not the parts that do the most good, and they should be part of a rich, social education, not of a closed, value-free system.

Address of the bookmark: http://www.alphr.com/politics/1007077/amazon-reviewing-algorithms-that-promoted-bomb-materials


Update: it turns out that the algorithm was basing its recommendations on things used by science teachers and people who like to make homemade fireworks, so this is nothing like as sinister as it at first seemed. Nonetheless, the point still stands. Collective stupidity is just as probable as collective intelligence, possibly more so, and wisdom can never be expected from an algorithm, no matter how sophisticated.