Operational Listening and Computational Eugenics
185 Pelham Street
Operational Listening: Mark Andrejevic
This presentation takes as its starting point the work of Harun Farocki and Trevor Paglen on the 'operational image' to consider the related emergence of what might be described as automatic or 'operationalised' listening. This type of listening is becoming increasingly familiar thanks to the deployment of smart speakers and a growing array of networked audio sensors (from gunshot sensors to workplace monitors, smartphones, and audio surveillance on public transport).
The talk describes the stakes of operationalism as the displacement of symbolic interpretation by action, and draws on psychoanalytic theory to consider the implications for subjectivity. Its goal is to ask what is lost in the shift from comprehension and interpretation to operation. What does it mean to say that Alexa will gather information about your words but does not know what you mean (beyond purely operational commands)? The talk makes some speculative claims about the emergence of a world in which symbolic efficiency is replaced by operational efficiency. This is, perhaps needless to say, a fundamentally undemocratic process that is already becoming all-too-familiar in these post-truth, post-deliberative, post-political times.
Computational Eugenics: Jake Goldenfein
Over the past decade, researchers have been investigating new technologies for categorising people based on physical attributes alone. Unlike profiling with behavioural data generated by interacting with informational environments, these technologies record and measure data from the physical world (i.e. signal) and use it to make a decision about the 'world state' – in this case, a judgement about a person.
Automated personality analysis and automated personality recognition, for instance, are growing sub-disciplines of computer vision, computer listening, and machine learning. This family of techniques has been used to generate personality profiles and assessments of sexuality, political position and even criminality using facial morphologies and speech expressions. These profiling systems do not attempt to comprehend the content of speech or to understand actions or sentiments, but rather to read personal typologies and build classifiers that can determine personal characteristics.
While the knowledge claims of these profiling techniques are often tentative, they increasingly deploy a variant of ‘big data epistemology’ that suggests there is more information in a human face or in spoken sound than is accessible or comprehensible to humans. This paper explores the bases of those claims and the systems of measurement that are deployed in computer vision and listening. It asks if there is something new in these claims beyond ‘big data epistemology’, and attempts to understand what it means to combine computational empiricism, statistical analyses, and probabilistic representations to produce knowledge about people.