At the end of last year I started a PhD in the Media and Communications department at the London School of Economics, where I’m researching the intersections of machine intelligence and photojournalism under the supervision of Dr. Dylan Mulvin and Professor Lilie Chouliaraki. Specifically, I’m looking at a series of questions around the relationship between visual media and deliberative democracy, the interplay between people and technologies in complex systems, and the essential debate over whether technologies are products of culture, or shapers of culture, and why this matters in the context of journalism particularly.
For the benefit of this work and with Dylan’s encouragement, I’ll occasionally be blogging here on ideas that I’m working through but which don’t necessarily make sense to integrate fully into my actual PhD. Their relevance to the photography audience that has built up around this blog will vary, and sometimes the questions under consideration may seem quite peripheral to the issues I’ve tended to write about in the past, but I hope they are still of some interest. In the end I started this blog to encourage myself to write, irrespective of who reads it, so this approach feels like something of a return to Disphotic’s roots nearly ten years ago. These pieces will be quick, rough and ready, ideas still in formation.
The first question that’s come up is one of authorship, and specifically of appropriate citation, when it comes to machine intelligence systems designed to generate or ‘author’ media forms. This arose specifically in response to the problem of how to cite a number of articles which claimed to have been written entirely by machine learning systems without human intervention (for example this recent one published in the Guardian). At what point, I began to wonder, should one seriously consider a machine intelligence system sufficiently autonomous, or even ‘intelligent’, as to deserve the recognition we normally accord to a human author? It’s a seemingly simple question that opens up into a series of broader issues which I have been grappling with, and discussing with Dylan and others in the field.
A simple response which a few people have suggested would be to treat it as an article without a named author, or if there is an identifiable team or academic paper behind the system used to create the article, to cite them instead, the authors of the author as it were. This seems completely tenable for cruder, simpler systems dealing with structured, quantitative data, like say the LA Times’s Quakebot, which really just grabs information from a data source and repackages it as an article according to quite linear parameters set by human developers and journalists. Citing this system as an author would be about as strange as citing the camera as the author of an image taken by a photographer.
This answer seems less tenable to me when it comes to more sophisticated systems based on machine learning principles, where the developer might have set a large number of the parameters for the system, but the actual formulation of the final piece has occurred relatively independently (the article above being, at least on the surface, a case in point). One could say that the developer of the system has played a large part in directing what the system writes about, but one could also make that claim of a human editor directing a human journalist. In other words, this discussion starts to open up the recognition that authorship resides in multiple locations, and the desire and expectation (not least in academia) to precisely specify the authorship of an item like an article obviously doesn’t capture the subtlety that sometimes exists.
I think one final argument in favour of citing a machine intelligence applies where the publication in question has specifically given the system a byline on their site (as again in the case above). On Twitter, Nick Seaver made the very valid point that this runs the risk of playing into the sometimes rather misleading mythologies of AI autonomy (to say nothing of opening up old questions about where responsibility lies, for example if a machine intelligence were to libel someone in an article). I agree with him, but I also think these mythologies are an interesting and important thing to reflect in the research I’m doing. The choice of a specific publication, either to foreground a machine learning system as a sort of proxy author, or to background it as merely a tool used by human journalists, may be one of those things that helps to explain the different ways organisations are evolving alongside and in response to these technologies.
(Photo: a Mk.1 Perceptron, c. 1957)