Dialogues on Artificial Intelligence

In December 2020, a group of social scientists gathered virtually at the LSE Department of Anthropology to discuss the relationship between data science and the social sciences. We all agreed on the pressing need to create space for meaningful dialogues between data science and the social sciences at a time when technical systems based on big data and machine-learning algorithms are increasingly touted as sources of ‘real-time’ and granular truth about individuals, social interactions, and the world writ large. These dialogues are difficult to set up and sustain, not least on account of issues of scale and power: anthropologists and sociologists are few and far between compared to the engineers, statisticians, and computer scientists who throng contemporary governance and industry.

But there is also a problem of framing here. Interdisciplinary engagement on ‘AI’ and data-intensive digital technologies means joining discussions where the terms of debate are specific and variably alien versions of the very concepts that are foundational to the humanities and the social sciences – the ‘human’, the ‘social’, ‘ethics’, ‘trust’. How can we reclaim some space within these conversations, say, as anthropologists? Key leads emerging from our meetings are listed and detailed below. Overall, what emerged from our presentations and discussions was a sense that anthropology has yet to contribute to data and computer science what it should have and still can: fresh means of comparison and critical thought, as well as guidance for more creative and judicious technological design.

 

We need to look within and beyond artificial intelligence systems to reveal the inequalities they generate in order to transform them. ‘Artificial Intelligence’ systems are intersections of the material and the virtual within which accumulation and inequalities are generated. 

Rather than claiming a monopoly on the ‘social’ or the ‘human’, stressing the design of digital technologies as inextricable from a broader world history of colonial projects may offer a more constructive way to find an audience, particularly among the data and computer scientists who are themselves dismayed by the ease with which computational systems lend themselves to deepening pre-existing inequalities while enabling dominant groups or parts of the world to exploit others. Scope for building such rapport is readily at hand, most notably in the traffic of ideas between psychology, economics, and mathematics in the 20th century via cybernetics, information theory, and structural linguistics. The time is ripe for exploring this traffic, especially since data scientists and statisticians increasingly approach data visualisation or the collection and curation of data for machine learning in ways that draw on intersectional feminism rather than the techno-libertarian canon.

A critique of ‘Artificial Intelligence’ must begin, we reasoned, by calling ‘AI’ out for enabling and extending ‘natural’ postcolonialism. This point of departure is inevitable, rather than simply preferable. Much as this provocation can create scope for an outward-facing anthropology to cultivate interdisciplinary forms of solidarity, evidence of continuities between digital technologies and past forms of colonialism and capitalism is incontrovertible (Couldry). Time and again we see such evidence emerge when the operationalisation of ideas about data and algorithmic analytics, often imagined as the mimesis of one or another human faculty or capacity (Amarianakis and Akasiadis), raises the prospect of enabling capital to discipline and exploit labour in ever-more violent and degrading ways (Anyadike-Danes), to render and monetise human capacities like attention (Seaver), emotion (White), or care (De Togni) as resources for extraction, or to profile, police, and ‘other’ (Jones). Future iterations of such discussions would also need to attend to the material infrastructures and environmental consequences of digital technologies.

 

We need to move beyond a numeric, computational understanding of algorithms to reveal their linguistic formations. By recognising computation as a form of linguistic labour, we can strengthen our understanding of algorithms and broaden the influence of anthropological and social scientific engagements with AI.

For a truly constructive critique that offers more than a knee-jerk politics of resistance, we need to take another, more ethnographic look at computation and data science as a field of technical activity. This may mean putting a concern with ‘the social’ aside for a moment, if only to better attend to the centrality of language and linguistic forms in computation. Indeed, computation can itself be considered as a form of linguistic labour. This marked a point of connection between multiple presentations and ensuing discussions. What animates ‘AI’ is quite simply the writing of code or instructions along with the inscription and collation of data points in databases. These written texts differ not just according to developers’ goals, but also depending on their methods of data collection, the programming languages they use, and their ideas about what language is and what it does, refers to, or signifies.

We learned, for example, that programming languages – like ‘natural’ languages – give rise to speech communities whose members identify and relate to one another as such (Heurich). In fact, the very mechanisms whereby computers read inputs and produce outputs are themselves designed based on ideas about language and communication as distinctly human capacities (Heurich; Bear and Zidaru-Bărbulescu). Importantly, these language ideologies are often expressions of the ways in which users and designers of AI technologies imagine sociality or relationality. As such, the language ideologies that guide computation provide leads for comparison. For example, at an AI lab based in Oxford, data scientists developing an online content moderation tool employed online workers to label social media interactions as toxic or not, such that the dataset that the algorithm learns from is diverse and incorporates the ‘wisdom of the crowd’ (Roichman). By contrast, for macroeconomists at the Bank of England, text mining and sentiment analysis techniques are neutral prosthetics that enable them to stabilise and correct wayward trends in the economy, conceived of as a complex field of signals and narratives where agents can come to act in irrational and disorderly ways (Bear and Zidaru-Bărbulescu). 
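
As an aside for readers curious about the mechanics: the ‘wisdom of the crowd’ arrangement Roichman describes typically reduces, at the data level, to aggregating many annotators’ judgements into a single training label. The sketch below is a minimal, hypothetical illustration in Python; the workshop did not specify the Oxford lab’s actual implementation, so the majority-vote rule and all names here are assumptions.

```python
# Hypothetical sketch of 'wisdom of the crowd' label aggregation: several
# annotators label each post, and a majority vote fixes the training label.
# Nothing here reproduces the actual Oxford tool; names are illustrative.
from collections import Counter

def aggregate_labels(annotations: dict[str, list[str]]) -> dict[str, str]:
    """Map each post ID to the label most annotators chose for it."""
    aggregated = {}
    for post_id, labels in annotations.items():
        most_common_label, _count = Counter(labels).most_common(1)[0]
        aggregated[post_id] = most_common_label
    return aggregated

# Invented crowd labels for three social media posts.
crowd = {
    "post_1": ["toxic", "toxic", "not_toxic"],
    "post_2": ["not_toxic", "not_toxic", "not_toxic"],
    "post_3": ["toxic", "not_toxic", "toxic"],
}
print(aggregate_labels(crowd))
# -> {'post_1': 'toxic', 'post_2': 'not_toxic', 'post_3': 'toxic'}
```

Even in this toy form, the design choice is visible: whose labels count, and how disagreement is resolved, are decisions written into the dataset before any learning begins.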

 

However, focusing on the technics that typify computation and data science does not necessarily mean abandoning ‘the social’.

The ability to tack back and forth between computation as a creative act and the contexts that define its inner workings as well as its affordances and implications in everyday life is the chief merit of grounding comparison in the very ‘technics’ that constitute AI systems and other digital technologies. This has everything to do with the intimate connection between technology and society, a key point for Marcel Mauss and Gilbert Simondon alike, for whom the materials and techniques that go into making technical objects are ultimately coterminous with wider moral, political, and cosmological orders. Thus, what distinguishes AI systems developed in China as Chinese are the specific continuities between, on the one hand, the techniques and inner workings of digital infrastructures, and, on the other, a longer history of technological development pursued through techniques, materials, and philosophies of technology different from those that typified European societies. This history may explain why, in China, privacy is more readily traded for convenience (Steinmüller).

 

Indeed, the technics employed in designing trustworthy or ‘trustless’ digital systems show that trust is an irreducibly social activity.

A focus on technics and techniques themselves also opens up possibilities for disrupting received assumptions about trust in society, especially in relation to technology. Cryptographers, for instance, view multi-party computation as a way of obviating the need to place trust in other human beings or in third-party institutional arbiters. Instead, they claim, trust can be placed in numbers, code, and mathematics. Yet, even when data are scrambled and distributed in a decentralised network, new intermediaries and forms of mediation arise, suggesting that the quest for ‘trustless trust’ is a techno-libertarian fantasy (Bruun). Importantly, critiques of prevailing ways in which trust is modelled in computational systems need not lead to a refusal of, or withdrawal from, technological design. On the contrary, translating these critiques into technical activity can create scope for rebuilding trust on new terms. NeuroSpeculative AfroFeminism, a project developed by the Hyphen-Labs collective, is one such instance. Through immersive installations and VR technologies, NSAF invites visitors to enter a speculative conversation about possible futures from the perspectives of gendered and racialised subjectivities. As such, the project as a whole can be read as materialising the anthropological insistence that trust is a relational process and a performative activity which always involves elements of intimacy, doubt, contestation, and uncertainty (Jones).
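
To make the cryptographers’ claim above concrete, here is a minimal sketch of additive secret sharing, one of the building blocks behind multi-party computation. It is a toy under stated assumptions, not any particular protocol: a value is split into random shares that are individually meaningless, so ‘trust’ appears to move into the mathematics, yet reconstruction still depends on every share-holder (and whoever convenes them) cooperating, which is exactly where Bruun locates the new intermediaries.

```python
# Toy additive secret sharing over a prime field: no single share reveals
# the secret, but recombining requires all parties to cooperate. This is a
# pedagogical sketch, not a production MPC protocol.
import secrets

MODULUS = 2**61 - 1  # an arbitrary prime chosen for the example

def share(secret_value: int, n_parties: int) -> list[int]:
    """Split a value into n additive shares modulo MODULUS."""
    partial = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    final = (secret_value - sum(partial)) % MODULUS
    return partial + [final]

def reconstruct(shares: list[int]) -> int:
    """Recombine shares; every party must contribute theirs."""
    return sum(shares) % MODULUS

salary = 52_000  # a value no single party should learn
shares = share(salary, n_parties=3)
assert reconstruct(shares) == salary
# Each share alone is uniformly distributed and uninformative; the social
# question of who holds shares, and who convenes them, does not go away.
```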

 

Data and computer scientists can draw on anthropology to inform technological design and address questions of trust. 

Our conversations about trust and technology departed from the standard way of thinking about trust and trustworthiness as a bounded object or quality that individuals, societies, technical systems, and communities of practice either have or do not have. Instead, the understanding that shone through in our discussions was of trust as intrinsic to the relationship between humans and technological objects. There is always some form of delegation at play in that relationship. For example, trusting toasters makes it possible to prepare tea at the same time as toasting bread. In this respect, AI technologies – like other computational systems – mark the advent of new forms of delegation at the interface between humanity and technology. As a result, new problems and possibilities arise. For example, it is unclear whether automated caregiving can be designed in a way that respects moral values and fulfils (immaterial) human needs while overcoming the negative relational dynamics that can arise between caregivers and those in need of care. The goal, then, is not mimicking human caregiving, but reflecting on the kinds of care and social relations that are possible and desirable at the interface between humans and machines (De Togni). Anthropologists are uniquely placed to guide such reflection and have already started formulating practical guidelines for engineers (White).

 

By making itself comprehensible to data and computer scientists, anthropology can expand the horizons of tech humanism.

A key question today is whether digital technologies can be entrusted with the power to influence what people pay attention to and how. Yet the discursive emphasis on attention also obscures the arbitrariness of defining both the human and the techniques under discussion in terms of attention. In effect, attention is cast as a universal virtue and source of value, when in fact it is only the basis of the liberal-humanist subject 2.0: the attentional subject, defined by control, or the lack thereof, over one’s own emotions and neural biochemistry (Seaver). If attention overdetermines debates on big data and machine learning today, it is because computer science selectively takes theories from psychology and neuroscience as models for design. Anthropology could be an alternative source for technical innovation and creativity in data and computer science. What that might herald for anthropology itself is less clear and equally pressing to explore further.

In any case, a comparative anthropology of AI would have to be an exercise in public, outward-facing anthropology. To this end, it is worth noting that many data scientists are themselves critical of slapdash applications of machine learning. For a case in point, one need only look to the interdisciplinary outcry that ensued after Nature published a paper which wrongly claimed to have evidenced – through machine-learning techniques – that increases in living standards led to an increase in social trust over the past 500 years. There is, in other words, plenty of scope for interdisciplinary dialogues that could precipitate anthropological interventions in AI research and design. We can pursue these interventions by setting up or joining activist collectives of citizens and scholars, such as Tierra Común. We can cultivate spaces and languages through which to make anthropological critiques intelligible and actionable. And, by approaching computational systems as written and linguistic forms, we can work collaboratively with data scientists in new ways. For example, we can reflect on the merits of re-writing social media algorithms to expose people to a variety of views other than the ones they already hold. These are precisely the kinds of openings that activist and citizen traders are currently foreshadowing through r/wallstreetbets. Anthropologists can and should be part of midwifing these new possibilities.

 

Workshop presentations:

Amarianakis, Stamatis and Charilaos Akasiadis. ‘Mimesis and alterity in the AI age: Revisiting the concept of the mimetic faculty’.

Anyadike-Danes, Chima. ‘Verify and verify: Trust, AI, and communication in South Yorkshire’s logistics sector’.

Bear, Laura and Teodor Zidaru-Bărbulescu. ‘Artificial Intelligence as linguistic colonialism’.

Bruun, Maja Hojer. ‘Trustless trust in emerging cryptographic technologies’.

Couldry, Nick. ‘Artificial Intelligence seen from the perspective of data colonialism’.

De Togni, Giulia. ‘AI and health: What makes AI “intelligent” and “caring”?’

Heurich, Guilherme Orlandini. ‘What’s in an algorithm? Towards a linguistic anthropological approach to the study of machine learning code’.

Jones, Surya. ‘Making spaces: Innovation in the absence of trust’. 

Roichman, Maayan. ‘“The Black Box”: The use of the imagination in the design of AI systems for online content moderation’. 

Seaver, Nick. ‘Knowing where to look: Attention as value and virtue in machine learning worlds’.

Steinmüller, Hans. ‘Cosmo-technics and complexity in Chinese AI: Anthropological perspectives’.

White, Daniel. ‘The robot’s wink: Anthropological and data science approaches to artificial emotional intelligence’.

 

Workshop discussants:

Louise Amoore (Department of Geography, Durham University)

Daniel Allington (Department of Digital Humanities, King’s College London)

Antonia Walford (Department of Anthropology, University College London)

Hannah Knox (Department of Anthropology, University College London)

Ludovic Coupaye (Department of Anthropology, University College London)

 

 

Featured image by Markus Spiske (courtesy of Pexels)

Cite this article as: Zidaru-Bărbulescu, Teodor & Laura Bear. February 2021. 'Dialogues on Artificial Intelligence'. Allegra Lab. https://allegralaboratory.net/dialogues-on-artificial-intelligence/
