At last, a serious use for AI: Brickit

https://brickit.app/

Brickit is what AI was made for. You take a picture of your pile of LEGO with your phone or tablet, then the app figures out what pieces you have and suggests models you could build with them, complete with assembly plans. The coolest detail, perhaps, is that it then highlights the bricks you will need in the photo you took of your pile, so you can find them more easily. I’ve not downloaded it yet, so I’m not sure how well it works, but I love the concept.

The fan-made app is iOS only for now, but an Android version is coming in the fall. It’s free, but I’m guessing it may make money in future from in-app purchases giving access to more designs, options to purchase missing bricks, or something along those lines.

It would be cooler if it connected LEGO enthusiasts so that they could share their MOCs (My Own Creations) with others. I’m guessing it might use the LXFML format, which LEGO® itself uses to export designs from its (unsupported, discontinued, but still available) LEGO Digital Designer app, so this ought to be easy enough. It would be even cooler if it supported a swap-and-share feature, so users could connect via the app to get hold of or share missing bricks. The fact that it should in principle be able to catalogue all your pieces would make this fairly straightforward to do. There are lots of existing sites and databases that share MOCs, such as https://moc.bricklink.com/pages/moc/index.page, or the commercial marketplace https://rebrickable.com/mocs/#hottest; there are brick databases like https://rebrickable.com/downloads/ that allow you to identify and order the bricks you need; there are even swap sites like http://swapfig.com/ (minifigures only); and, of course, there are many apps for designing MOCs or downloading others. However, this app seems to be the…er…missing piece that could make them much more useful.

Reviews suggest that it doesn’t always succeed in finding a model and might not always identify all the pieces. Also, I don’t think there’s a phone camera in the world with fine enough resolution to capture my son’s remarkably large LEGO collection. Even spreading the bricks out to take pictures would require more floor-space than any of us have in our homes. But what a great idea!

Originally posted at: https://landing.athabascau.ca/bookmarks/view/9558928/at-last-a-serious-use-for-ai-brickit

Amazon helps and teaches bomb makers

Amazon’s recommender algorithm works pretty well: if people start to gather together ingredients needed for making a thermite bomb, Amazon helpfully suggests other items that may be needed to make it, including hardware like ball bearings, switches, and battery cables. What a great teacher!
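As a rough illustration of how such “customers also bought” suggestions can emerge, here is a minimal sketch of item-to-item co-occurrence counting, the simplest form of this kind of recommendation. All item names, function names, and the ranking scheme are hypothetical; Amazon’s actual algorithm is far more sophisticated, but the underlying idea — recommend whatever is most often purchased alongside an item — is the same:

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(orders):
    """Count how often each pair of items appears in the same order."""
    pairs = Counter()
    for order in orders:
        # Each unordered pair of distinct items in an order counts once.
        for a, b in combinations(sorted(set(order)), 2):
            pairs[(a, b)] += 1
    return pairs

def also_bought(item, pairs, top_n=3):
    """Rank the items most frequently co-purchased with `item`."""
    scores = Counter()
    for (a, b), count in pairs.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

# Hypothetical purchase history: no judgment of intent, only patterns.
orders = [
    ["magnesium ribbon", "ball bearings", "battery cables"],
    ["magnesium ribbon", "ball bearings", "switches"],
    ["magnesium ribbon", "switches"],
]
print(also_bought("magnesium ribbon", build_cooccurrence(orders)))
```

The point of the sketch is that the algorithm sees only co-purchase frequency: it has no notion of what the items are for, which is exactly why it can “helpfully” complete a shopping list it should not.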

It is disturbing that this seems to imply that there are enough people ordering such things for the algorithm to recognize a pattern. However, it would seem remarkably dumb for a determined terrorist to leave such a (figuratively and literally) blazing trail behind them, so it is just as likely to be the work of a slightly milder form of idiot, perhaps a few Trump voters playing in their backyards. It’s a bit worrying, though, that the ‘wisdom’ of the crowd might suggest uses of and improvements to some stupid kids’ already dangerous backyard experiments that could make them way more risky, and potentially deadly.

Building intelligent systems is not too hard, as long as the activity demanding intelligence can be isolated and kept within a limited context or problem domain. Computers can beat any human at Go, Chess, or Checkers. They can drive cars more safely and more efficiently than people (as long as there are not too many surprises or ethical dilemmas to overcome, and as long as no one tries deliberately to fool them). In conversation, as long as the human conversant keeps within a pre-specified realm of expertise, they can pass the Turing Test. They are even remarkably better than humans at identifying, from a picture, whether someone is gay or not. But it is really hard to make them wise. This latest fracas is essentially a species of the same problem as the one reported last week of Facebook offering adverts targeted at haters of Jews. It’s crowd-based intelligence, without the wisdom to discern the meaning and value of what the crowd (along with the algorithm) chooses. Crowds (more accurately, collectives) are never wise: they can be smart, they can be intelligent, they can be ignorant, they can be foolish, they can even (with a really smart algorithm to assist) be (or at least do) good; but they cannot be wise. Nor can AIs that use them.

Human wisdom is a result of growing up as a human being, with human needs, desires, and interests, in a human society, with all the complexity, purpose, meaning, and value that entails. An AI that can even come close to that is at best decades away, and may never be possible, at least not at scale, because computers are not people: they will always be treated differently, and have different needs (there’s an interesting question to explore as to whether they can evolve a different kind of machine-oriented wisdom, but let’s not go there – SkyNet beckons!). We do need to be working on artificial wisdom, to complement artificial intelligence, but we are not even close yet. Right now, we need to be involving people in such things to a much greater extent: we need to build systems that informate, that enhance our capabilities as human beings, rather than ones that automate and diminish them. It might not be a bad idea, for instance, for Amazon’s algorithms to learn to report things like this to real human beings (though there are big risks of error, reinforcement of bias, and some fuzzy boundaries of acceptability that are way too easy to cross), but it would definitely be a terrible idea for Amazon to preemptively automate prevention of such recommendations.

There are lessons here for those working in the field of learning analytics, especially those trying to use its results to automate the learning process, like Knewton and its kin. Learning, and that subset of learning addressed by the field of education in particular, is about living in a human society, integrating complex ideas, skills, values, and practices in a world full of other people, all of them unique and important. It’s not about learning to do, it’s about learning to be. Some parts of teaching can be automated, for sure, just as shopping for bomb parts can be automated. But those are not the parts that do the most good, and they should be part of a rich, social education, not of a closed, value-free system.

Address of the bookmark: http://www.alphr.com/politics/1007077/amazon-reviewing-algorithms-that-promoted-bomb-materials


Update: it turns out that the algorithm was basing its recommendations on things used by science teachers and people that like to make homemade fireworks, so this is nothing like as sinister as it at first seemed. Nonetheless, the point still stands. Collective stupidity is just as probable as collective intelligence, possibly more so, and wisdom can never be expected from an algorithm, no matter how sophisticated.