Wild Computing: a view from anthropology 

Star computer scientist Stephen Wolfram has said that, in order to accelerate the capacity of artificial intelligence towards a much more powerful artificial general intelligence (whatever that might be), it is first necessary to “determine” the main “symbols” of “our civilization” (Wolfram 2023). These “symbols” should act as more abstract algorithmic categories or “pockets of reducibility” (ibid.), that is, as computational shortcuts that would allow data to be processed more rapidly and efficiently within “neural nets” (the brain-inspired name given to current AI algorithmic structures). Determining and mathematically codifying the “symbols” of “our civilization” would add a more abstract dimension to computational taxonomies, an overarching algorithmic layer that could enhance the capacity of current AI systems and allow them to rapidly process and articulate data without being ‘slowed down’ by the minutiae of real life and its difficult-to-classify nuances. The endeavor sounds ambitious and intriguing, but it also invites some urgent questions: What is “our civilization”? What should its “symbols” be? And, above all, who could “determine” them?

What computer scientists mean by “our civilization” or its “symbols” surely differs from how people in anthropology, history, and related disciplines understand these terms. To begin with, the latter recognize the existence not of one but of many civilizations. Also, at least among anthropologists, there is unease with the not-so-clear context outlined by the pronoun “we” (Chua and Mathur 2018). The “our” in “our civilization” makes for a fuzzy and problematic positionality. Moreover, the hasty treatment of the notion of “civilization” is but a reverberation of how “we” understand the very idea of “intelligence”. It is thus of vital importance that we revisit and, perhaps, reclaim the concepts that drive the current boom of artificial intelligence.

In this short commentary, I argue that some of the currently proliferating computational narratives are running “wild”, in the sense that they seem to be out of control in their blatant appropriation and reduction of central terms and analogies from the cognitive and social sciences and the humanities. These narratives not only involve conceptual appropriations but also enable new forms of concentration of epistemic and technopolitical power. I suggest contesting and even taming wild computational narratives by resituating key terms in a radically pluralistic and open-ended world, a world still full of in-flux civilizations, irreducible intelligences, and incomputable possibilities.

Recursive asymmetries

The path toward so-called artificial general intelligence is often depicted as one leading to a “superintelligence” of sorts. The superlative makes the asymmetry obvious. It is not only assumed that machine intelligence will be superior to that of humans; it is also that this grand computational achievement is being designed by a handful of scientists and businesspeople in elite, mainly Western, research institutions and companies. These people imagine and design technologies according to their own definitional parameters for “civilization”, “intelligence”, and their superlative versions, and they do so without much cultural self-reflection or attention to historical precedent. The iteration of “superintelligence” resonates, in fact, with old affirmations of the superiority of the Western soul, civilization, culture, or rationality. In the age of computation, artificial intelligence emerges as the new bible, the ultimate script or tool that all humans should embrace. Here is where the supposedly futurist scenario of “superintelligence” reveals itself as the potential repetition of an unfortunate gesture of the past, a colonial move that involves the erasure of the Other. In other words, AI could become yet another techno-hegemonic venture built on Western epistemic principles.


From a cultural and historical perspective, the current unfolding of AI from technocapitalism (not AI as a cybernetic achievement) could be seen as the culmination of what philosopher Yuk Hui calls “synchronization” (2019), that is, a reduction of the diverse human “cosmotechnics” (encompassing the intermingling of cosmologies and technologies) to a Western, and now globalized, technoscientific canon. Hui rightfully suggests that the challenge lies not in rejecting AI, but in ensuring sufficient heterogeneity of technologies (i.e. of the “artificial”) so as to guarantee cross-cultural and cross-civilizational “technodiversity” (Hui 2020). I would add that we also need an explicitly pluralistic and down-to-earth framework for the very idea of “intelligence”. The problematic move is how “intelligence” is conceptualized (something at which Hui himself hints), and not just how it becomes mathematized in a given algorithmic structure.

I highlight “down-to-earth” to make an important distinction. Yuk Hui suggests that the recursive self-organization of contemporary systems of computation challenges the separation between mechanisms and organisms that has defined Western philosophy (and technology) for centuries. Inspired by the practical and epistemic possibilities unveiled by the field of cybernetics, current digital machines do not operate according to a linear, mechanistic logic, but through recursive algorithmic loops that enable them to constantly revise and reattune their responses and, in a sense, reconfigure themselves in a contingent manner. This constitutes, according to Hui, a sort of machine vitalism that requires us to go beyond the organism-machine duality.

Cybernetics has brought machines to another level of autonomy and agency, but they remain a key part of human strategies of power accumulation. Since their inception, machines have served to redistribute energy, time, and space for the benefit of a few (Hornborg 2001). From an economic, political, and ecological point of view, machines have not only enabled or reproduced the asymmetries of the modern era, they have also created new forms of inequality. One example of what AI may create is a world where automation displaces not only factory workers but also the labour involved in many intellectual and creative professions. Translators, programmers, graphic designers, and the like will perhaps find new ways of working, but everything suggests that AI will not undo the divide between the mightiest and the most vulnerable.

We should underestimate neither the history nor the politics that technologies embody. Thus, while philosophical and technical challenges to mechanic-organic dualisms are timely and relevant, we should continue to make some distinctions. Even though the cybernetic properties of contemporary machines (like the self-organizing or recursive nature of neural nets) make them resemble organisms, the fact is that these “intelligent” systems are embedded in a technocapitalist industry that enables a high concentration of automated power. This is why, beyond the insights of cybernetic philosophy, we need to make sure that the concept of “intelligence” does not end up serving recursive machines that are also historically recursive, that is, machines that reinstate already-known social, economic, and ecological asymmetries.

On legitimate aliens

A related way of depicting the virtues and pitfalls of the current trend toward wild computing is the figure of the algorithmic alien. Philosopher of computation Luciana Parisi, for instance, suggests that we should embrace the “alien subject” of AI as a potentially emancipatory force (2019), something that could liberate humans from a master-slave model by challenging the “servo-mechanic” or instrumental idea of the machine. The “alienness” of AI is the condition of its exemplary autonomy or freedom. Such freedom is expressed in the fact that we can no longer understand the kind of thinking behind complex, self-constituting algorithms, because they obey neither natural laws nor logical reason. The idea resonates with what Stephen Wolfram calls “computational irreducibility” (Wolfram 2023) and the fact that, because neural nets construct their own logic of functioning, AI is already unpredictable. The “alien” argument also resonates (even if loosely) with some alarmist prophecies of guru-historian Yuval Harari, according to which alien autonomous machines are about to subjugate the entirety of humanity.

Parisi rightly considers the “political possibilities” of a “denaturalized” form of thinking or intelligence that goes beyond patriarchal and colonial reason (2019: 31). But one keeps wondering about the very fetishization of a human-made algorithm that becomes, by the magic of computation, suddenly “alien”, truly autonomous, or superintelligent. Does the philosophical construction of an epistemic “alienation” with respect to machine thinking have an emancipatory potential after all? Is this particular alien fit for the task of denaturalizing our current problems? What if we consider self-organizing neural nets not only as a cybernetic achievement, but also as an instrument of technocapitalism which is de facto owned by a few big tech lords? Are then these autopoietic systems still a realistic way out of patriarchal and colonial forms of concentration of power?

“Machine thinking”, by the author. 

Anthropology has classically been occupied with tracing how colonial narratives conflate wildness with alienness. Connecting some aliens to some sort of inherent ferocity and lack of civilization has mostly been about legitimating the exercise of power. The story usually goes that, after verifying the wildness of the alien, uncivilized Others, a group of Selves find a casus belli: a reason to intervene and liberate these alien Others from their struggle with uncertainty, anarchy, nature, and savagery. The story usually ends with a violent transformation of Others into aspirants to Selves. The “true” Selves then take hold of the alien world, while the aliens renounce their alienness and, of course, their wildness.

Wild computational narratives also play with the notions of alienness and wildness as part of a neocolonial gesture. But they do so differently from previous colonial narratives. A key concept in the AI world is “computational irreducibility”. This idea implies that intelligent machines are already unpredictable (we cannot foresee their responses and creations). Because of computational irreducibility, AI machines embody a new, algorithmic form of wildness, and hence, we need to accept that the runaway world they will create for us can only be mastered again if we embrace, literally, “a new kind of science” (Wolfram 2002), an all-encompassing form of computation. 


So, when it comes to the alien subject of AI or its computational irreducibility, there is an interesting shift in the narrative, since the alienness and the wildness of machines seem in this case to be a model, a reference, or even a source of metaphysical and technopolitical inspiration. The underlying problem, however, remains pretty much the same as in old colonial gestures, for it is a ridiculously small fraction of all participants in the story who can “determine” which are the appropriate forms of alienness, wildness, and civilization. They say, paraphrasing Wolfram’s opening words (2023): here are the “symbols” of “our civilization”, and here are the sheer computational premises and tools with which we may go beyond them. In other words, we cannot just celebrate the emancipatory force of AI while neglecting that, behind the bewildering dance of zeros and ones, there is a bunch of guys who are monopolizing the right to both define civilization and become its legitimate aliens.
 

Partial incomputability

In fact, the problem is not only how a few lords of algorithms aim at reducing and propagating an anthropocentric and overtly ethnocentric version of “civilization”. The issue is perhaps even more deeply rooted in the epistemics of the very idea of “intelligence”. Wild computing is indeed operating with a rather poor conceptualization of intelligence and cognition. However, as with the hasty treatment of civilization, the scientifically disputed and culturally biased approach to the concept of intelligence has a possible upside. A good thing about the popularization of AI is, probably, that it has sparked an intense quest to define the limits (if any) between machine and human intelligence, thus requiring an increasingly nuanced definition of the idea. In this area, there are different epistemic positions.

Unlike people within the world of computing, many scientists in the social, cognitive, and biological sciences remain cautious (if not skeptical) about claims of having replicated human-like minds artificially. Among other things, AI cannot generalize what it learns, as humans do, nor act across contexts and sensory modalities (Mitchell 2019). Also, Large Language Models are designed to give statistically plausible but not necessarily true information, thus irreverently inventing facts. This pervasive phenomenon is now popularly known as “hallucinations” (Marcus and Davis 2023), yet some say it would be more accurate to identify it as a mere computational form of “bullshit” (Hicks, Humphries, and Slater 2024).

Another crucial point is that intelligence cannot really be dissociated from the feelings, emotions, and sensations that allow organisms to constantly attune and reattune to their ever-changing environments, something that bodiless machines cannot do, at least for now (Damasio 2006). These and other shortcomings in computer-based notions of intelligence partly explain the puzzling fact that AI can beat master chess players and yet cannot load a dishwasher better than a six-year-old (Bennett 2023).

So, my concluding question is: if we cannot even replicate a child’s embodied “intelligence”, how could anyone algorithmically “determine” a whole “civilization”?

After all, there might be an important part of intelligence that has to remain, in Yuk Hui’s terms, “incomputable” (2019: 158). Interestingly, partial incomputability does not even need to have humans, with their complex intelligences and civilizations, in the equation. In fact, it just takes a very simple worm like C. elegans, with its roughly 300 neurons (humans seem to have about 86 billion), to prove the point: we do not fully understand its choices or how those 300 neurons work together, and therefore we cannot claim to be able to algorithmically model anything close to its “general intelligence” (Sumpter 2019).

These computational limitations seem to deeply pervade both human and nonhuman cognitive worlds.   

As a possible path to taming the most radical AI narratives, partial incomputability can also be traced by attending to the difference between what we could call probabilistic and possibilistic notions of intelligence. Playing with statistical probabilities and working with what already exists is the gift of the mathematized aliens that already write, design, or compose music for us. By contrast, organic intelligence, engaged in ever-changing environments, seems equipped to face new situations, situations an organism does not know and for which it lacks a robust model (Mitchell 2023; Krakauer 2024).


Organic forms of cognition do not seem to be reducible to algorithmic models: unlike AIs, animals do not need to see a million predators before recognizing one that looks different from those they have met in past experience. Unlike AIs, human babies do not need a huge set of training data to identify and play with a toy ball independently of its exact shape, size, colour, texture, and the context in which it is found. How do organisms do that? We still don’t know. But we know that this kind of intuitive, generalizing, and amazingly fast intelligence has less to do with data-hungry, thoroughly calculated probabilities than with the felt patterns, qualities, and potentials that emerge from corporeally engaging in an ever-changing world. These forms of embodied cognition are not only what makes life viable, but also, and despite wild computing aspirations, what turns “intelligences” and “civilizations” into open-ended and hence incomputable arrays of possibilities.



Photo credit: “Wild computing”, image (ironically) created by deepai.org.


References:

Bennett, Max S. 2023. A brief history of intelligence: evolution, AI, and the five breakthroughs that made our brains. First edition. New York: Mariner Books.

Chua, Liana, and Nayanika Mathur. 2018. Who Are “We”? Reimagining Alterity and Affinity in Anthropology. Methodology and History in Anthropology, volume 34. New York: Berghahn Books.

Damasio, Antonio R. 2006. Descartes’ Error: Emotion, Reason and the Human Brain. Rev. ed. with a new preface. London: Vintage.

Hicks, Michael Townsen, James Humphries, and Joe Slater. 2024. “ChatGPT Is Bullshit”. Ethics and Information Technology 26 (2): 38. https://doi.org/10.1007/s10676-024-09775-5.

Hornborg, Alf. 2001. The power of the machine: global inequalities of economy, technology, and environment. [Globalization and the environment, 1]. Walnut Creek, CA: AltaMira Press.

Hui, Yuk. 2019. Recursivity and Contingency. Media Philosophy. London; New York: Rowman & Littlefield International.

———. 2020. Fragmentar el futuro: ensayos sobre tecnodiversidad. Buenos Aires: Caja Negra.

Krakauer, John. n.d. “John Krakauer Returns… Again”. Brain Inspired (podcast). https://www.youtube.com/watch?v=B_QoSVyi7Fs.

Marcus, Gary, and Ernest Davis. 2023. “Hello, Multimodal Hallucinations”. Marcus on AI (blog). October 21, 2023. https://garymarcus.substack.com/p/hello-multimodal-hallucinations.

Mitchell, Kevin J. 2023. “Why Free Will Is Required for True Artificial Intelligence”. Big Think (blog). October 8, 2023. https://bigthink.com/the-future/free-will-required-true-artificial-general-intelligence/.

Mitchell, Melanie. 2019. Artificial intelligence: a guide for thinking humans. New York: Farrar, Straus and Giroux.

Parisi, Luciana. 2019. “The Alien Subject of AI”. Subjectivity 12 (1): 27-48. https://doi.org/10.1057/s41286-018-00064-3.

Sumpter, David. 2019. “What Is the Most Complex Animal for Which We Can Model Its General Intelligence?” Medium (blog). November 12, 2019.

Wolfram, Stephen. 2002. A new kind of science. Champaign, IL: Wolfram Media.

———. 2023. “ChatGPT and the Nature of Truth, Reality & Computation”. Lex Fridman Podcast. https://www.youtube.com/watch?v=PdE-waSx-d8.

Abstract: The commercial unfolding of AI has sparked wild computational narratives that seem to be out of control in their blatant appropriation and reduction of concepts such as “intelligence” or “civilization”. These narratives not only involve conceptual appropriations but also enable new forms of concentration of epistemic and technopolitical power. Seen from anthropology, there is an emergent need to “tame” these runaway narratives by resituating them in a radically pluralistic and open-ended world, a world still full of in-flux civilizations, irreducible intelligences, and incomputable possibilities. 

This article is peer reviewed. See our review guidelines.
Cite this article as: Garcia Arregui, Aníbal. December 2024. 'Wild Computing: a view from anthropology'. Allegra Lab. https://allegralaboratory.net/wild-computing-a-view-from-anthropology/
