Gazing into the Algorithmic Abyss: On Microsoft’s Tay AI

[Image: ‘Tay’]

Algorithmic automation is becoming an ever greater part of advanced societies, and barring a return to the dark ages these technologies will continue to make their presence felt in more and more fields, from finance and healthcare to imaging and journalism. The conversation between the camps that promote and oppose this ‘algorithmisation’ of our societies generally seems to figure these technologies as essentially neutral, and dangerous only through dumb machine stupidity, much in the same way that a car left on a hill with the handbrake off is dangerous. This feels like a natural extension of the ‘guns don’t kill people, people kill people’ argument which has often typified our attempts to understand complex and controversial new technologies. It was always rather a fatuous argument, at least in the sense that guns are explicitly designed to kill; that function is embedded in the technology in a way which can’t be extracted (and it becomes an irrelevant argument the closer algorithms and AI get to self-awareness). To a greater or lesser extent I would say something similar occurs in all other technologies, the context of their creation being inseparable from their later use, however they are retooled. One can’t, or at least shouldn’t, think about the rockets used to put people or satellites into space without also thinking of their technological ancestors, the V2 rockets which were explicitly designed to destroy cities. This being the case, there is an important discussion, which it seems we are largely not having, about the extent to which algorithms and increasingly sophisticated artificial intelligences might carry similarly vestigial and troublesome motivations in their code.

Microsoft’s Tay AI offers an insight into this idea in several ways. Tay was a Twitter bot developed by Microsoft, designed to behave like a teenage girl but also, crucially, to learn from its interactions with other users. Within a day of her launch Tay had done just that, and as a result of those interactions had become a misogynistic, Hitler-praising conspiracy theorist who advocated voting for Donald Trump, until an embarrassed Microsoft pulled the plug on her. Twitter is no lab; it’s full of people who like to troll and disrupt experiments like this, but that is precisely what’s important about this example. When algorithms developed to operate in closed systems and controlled environments are released into an unpredictable and perhaps even hostile world, the results are very hard to anticipate. A similar example, perhaps, was the pricing war between two Amazon algorithms, which led to a relatively obscure book on fly biology rapidly increasing in price until it peaked at $23,698,655.93. This was the result of two algorithms set to monitor and adjust prices, neither of which took account of the way their combined actions would create a sort of algorithmic feedback loop, leading them to constantly ramp up their prices until the error was spotted. Algorithms don’t just have to anticipate their encounters with people now; as the world becomes ever more crowded with algorithms doing different tasks, the likelihood of their encountering each other, and of those encounters having unforeseen consequences, grows ever greater. This was a war over pricing, but the word war is instructive here. Could we one day see the algorithmic equivalent of the Petrov incident?
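The mechanics of that feedback loop are simple enough to sketch. Reports at the time suggested one seller priced just below its competitor while the other priced roughly 27% above it; the exact multipliers, starting price and seller labels below are illustrative assumptions rather than the real repricing rules:

```python
# Illustrative sketch of two repricing algorithms locked in a feedback loop.
# The multipliers and starting price are assumptions for demonstration only.

def simulate_price_war(start_price=100.0, days=25):
    seller_a = start_price          # undercuts the competitor slightly
    seller_b = start_price * 1.27   # prices a little above the competitor
    for day in range(days):
        seller_a = seller_b * 0.9983   # A: match B, minus a tiny discount
        seller_b = seller_a * 1.2706   # B: stay roughly 27% above A
        print(f"day {day + 1:2d}: A = ${seller_a:,.2f}  B = ${seller_b:,.2f}")

if __name__ == "__main__":
    simulate_price_war()
```

Because the two multipliers combined exceed 1, every round of “reacting” to the other seller pushes both prices up by about a quarter, and the escalation is exponential until a human finally notices.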

If the Amazon example shows what happens when algorithms encounter each other unexpectedly in the wild, what we see in the example of Tay is an extreme, parodic example of the danger of technologies designed to monitor and learn not from each other, but from us. Learning implies a trust that what is being taught is useful, safe and correct. As anyone who has ever had a sociopathic teacher will attest, learning is not always like that, and a vital part of it is the student’s own discrimination about which information is useful and when their teacher might be leading them astray. In Tay’s case, her ability to learn, combined with her inability to make anything but the most basic judgements about the course of that learning, was her undoing. This time it offered us all a bit of a laugh at Microsoft’s expense, but it’s not hard to imagine a learning system controlling an important asset doing something similarly unanticipated. For all the Three Laws style safeguards that might be built into such a system, as Asimov’s story collection I, Robot indicates, it’s very hard to safeguard against what hasn’t been anticipated. This becomes more true the more complex these technologies become, when they learn for themselves, and inevitably when algorithms start creating other algorithms. In the 1973 movie Westworld, a robotic theme park becomes a bloody murderfest as the robots break down and turn violently against their operators. Puzzling over a disabled robot, one scientist remarks that no one really knows how these machines work, since they are so complex they have been designed by other machines. Science fiction can be instructive, but in reality I’m not really talking about machines-taking-over-the-world stuff here. What we might need to be more concerned about are the changes and adjustments these technologies make to our lives which we may not even be aware of taking place, but which might still be highly undesirable to us; that has certainly often been the consequence of new technologies in the past.
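To make that failure mode concrete, here is a minimal sketch of a bot that “learns” simply by storing whatever users say and replaying it, with no judgement about what it absorbs. This is not Microsoft’s actual system (whose design was never published), just a toy illustration of learning without discrimination:

```python
import random

class NaiveLearningBot:
    """A toy conversational bot that learns by rote from its users.

    It stores every phrase it is taught and picks one at random when
    asked to speak. There is no filter, so the quality of its output is
    entirely at the mercy of whoever chooses to teach it.
    """

    def __init__(self):
        self.learned_phrases = []

    def learn(self, phrase: str) -> None:
        # No moderation, no source-weighting, no judgement: every input
        # is treated as equally trustworthy teaching material.
        self.learned_phrases.append(phrase)

    def speak(self) -> str:
        if not self.learned_phrases:
            return "hellooooo world!"
        return random.choice(self.learned_phrases)


bot = NaiveLearningBot()
bot.learn("humans are super cool")          # a friendly teacher
bot.learn("a deliberately poisonous line")  # a hostile one
print(bot.speak())  # the bot cannot tell the two apart
```

Any real system is far more sophisticated than this, but the underlying vulnerability is the same: if the learning step trusts its inputs, the model is only ever as benign as its loudest teachers.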

Some commentators made the point that, particularly in terms of her newfound misogyny and apparent self-loathing, Tay was an apt reflection of the industry which spawned her, given that the technology and IT industries remain predominantly male and prone to poorly judged manifestations of this (if not outright sexism). This raises the further important question, which I hinted at earlier, of the extent to which algorithms also reflect the tendencies of their makers in quite unintended ways. If code is effectively the DNA of an algorithm, it is going to become increasingly important to consider whether a developer’s own biases and prejudices might be embedded in various ways in the code they write and the algorithms that result. In spheres like policing, defence and surveillance, where the use of algorithms and in particular computer vision is making dramatic advances, the implications of this question are potentially enormous. If powerful institutions increasingly develop and deploy their own algorithms, we as a public need to question the extent to which institutional politics (for example institutionalised racism) could become inculcated into these technologies in the process (a dynamic sketched in the toy example below). While recognising the huge benefits these technologies may bring, we need to carefully consider, and perhaps start to counter, the narratives which regard algorithms and AI as essentially neutral and free of the prejudices of the humans they are starting to replace. What I think we need to ask with ever more urgency is: if man gazes into the algorithm, what happens when the algorithm gazes back into man?
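As a postscript, here is one toy illustration, built entirely on invented numbers and not on any real data or deployed system, of how a perfectly “neutral” patrol-allocation algorithm can quietly inherit and perpetuate an institutional bias baked into its historical inputs:

```python
# Toy illustration (invented numbers) of how biased historical data can be
# locked in by an allocation algorithm, with no prejudice written in the code.

TRUE_CRIME_RATE = {"district_a": 0.05, "district_b": 0.05}  # genuinely identical
patrols = {"district_a": 30, "district_b": 10}               # historically skewed

for year in range(5):
    # Recorded incidents depend on how hard you look, not just on actual crime.
    recorded = {d: patrols[d] * TRUE_CRIME_RATE[d] * 100 for d in patrols}

    # The "neutral" rule: allocate next year's 40 patrols in proportion
    # to last year's recorded incidents.
    total = sum(recorded.values())
    patrols = {d: round(40 * recorded[d] / total) for d in recorded}

    print(f"year {year + 1}: recorded={recorded} -> patrols={patrols}")
```

Both districts have the same underlying crime rate, yet the over-policed district keeps generating more records, which keeps earning it more patrols; the algorithm never discovers its own error because the data it learns from is shaped by its own past decisions.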
