Modeling Value in the Anthropocene: Contributions to a metacosmics (Project Proposal)

Abstract

Modeling Value in the Anthropocene attempts to accomplish two goals, one more general and one more specific. The first, more general goal is to provide an example of how computational tools can be used in the world of philosophy. Doing so could enhance philosophy with methods that process quantitative information at a speed and scale no human reader could match. I argue that the particular tool of vector semantics can model large corpora of text in such a way that a new perspective emerges, one that can guide a subsequent close reading. My hope is that this prior modeling can open onto a close reading that leads to the creation of a concept, which is the aim of philosophy.

The particular question this project will try to answer is something like: what concept should ground individual and collective value in the world beyond the Anthropocene? After using word2vec to model the conceptual networks of the concepts “profit,” “Anthropocene,” and “Neganthropocene” in the writings of philosopher Bernard Stiegler, I will perform calculations on these vectors to unveil a new vector, whose corresponding concept will guide my close reading of one of his texts. This close reading will lend itself to the production of an article and potential book chapter. It will form part of a larger project of modeling the concepts of “entropy” and “negentropy” in the history of Western thought, which will result in a book on what Daniel Ross calls metacosmics.

List of Participants

2-3 participants will be responsible for collaborating on digitizing the texts, operationalizing them, performing the close reading, and producing an article/book chapter.

Enhancing the Humanities through Innovation

In proposing a reading of philosophical texts that makes use of algorithms for natural language processing, one has to consider what, if anything, these computational methods of reading and writing bring to the work of philosophy. Natural language processing consists of performing mathematical calculations on language. In text analysis specifically, we use digital text as data that we can model in different ways, making quantitative calculations across amounts of text that no human mind could process on its own through close reading. There are many arguments for and against this kind of practice, and many native to the world of philosophy are likely to be especially skeptical of a computational approach. One way of arguing for how text analysis could contribute to philosophical practice runs through Michael Witmore’s blog post, “Text: A Massively Addressable Object.” In this post, he situates distant reading as simultaneously continuous with the practice of reading since its genesis and discontinuous with past modalities of reading.

Witmore writes that what separates digital text from older forms of text is that “it is massively addressable at different levels of scale” (Witmore). Understanding this requires understanding what he means by “scale.” According to Witmore, reading has always been the practice of abstracting a material text at the ideal level at which one is to read it. For example, one can choose to address a text at the level of the word, the sentence, the paragraph, or the book, just to name a few. These are not material characteristics of the text itself, but rather subjective ways of dividing the text up, divisions projected into and out of the text. A reader could just as easily choose to underline all the words of a text beginning with the letter ‘B’ and address it at that scale. How text is addressed is a matter of convention, contingent on the technical and normative limits of the given socio-historical context of the act of reading. Even the fact that we tend to read text as “books” or “words” is conventional. As Witmore writes, “The idea of a word is itself an artifact of manuscript culture, one that could be perpetuated in print through the affordances of moveable type” (Witmore). What makes digital text different, then, is the scale of addressability, or rather the many different scales at which text can now be addressed. Instead of having to read one word or one sentence at a time, we can query and compare many words from many different documents at once. A popular form of this in the digital humanities is topic modeling. Topic models are generated by running an unsupervised machine learning algorithm on a group of documents and approximating which words tend to be used together in the same document. This allows us to address the texts at the level of “topic,” creating ideal topics we can say the texts appear to be drawing from (ideal in the sense of non-real; in the realm of the potential or virtual, discoverable in the imagination, as distinguished from the actual).
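
To make the mechanics concrete, here is a minimal topic-modeling sketch in Python, assuming the gensim library; the three toy “documents” and the topic count are invented for illustration, not drawn from the project corpus.

```python
# A minimal topic-modeling sketch, assuming the gensim library.
# The toy documents below are invented for illustration only.
from gensim import corpora, models

docs = [
    "entropy disorder heat energy closed system universe",
    "negentropy life organism organization struggle energy",
    "technics memory writing tools exteriorization organism",
]
tokenized = [doc.split() for doc in docs]

dictionary = corpora.Dictionary(tokenized)            # map each word to an integer id
bow = [dictionary.doc2bow(doc) for doc in tokenized]  # bag-of-words counts per document

# LDA approximates which words tend to co-occur in the same documents.
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)  # each "topic" is a weighted list of co-occurring words
```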

The form of computational address I propose using in a philosophical context is called vector semantic analysis. It is a technique of computational linguistics wherein we run an algorithm that groups together words that bear a semantic similarity to one another. This similarity is represented using a particular kind of vector analysis called word embeddings. Word embeddings assign each word in the text(s) a vector of numerical values based on its distributional relations to every other word in the text. We can then map these words graphically, representing concepts as networks of words used “synonymously” in the text. This is useful for conceptual analysis because words that have similar vectors will appear clustered together within the vector space, indicating that they are distributed throughout the texts in a similar way. The linguistic theory operating behind this model is that words deployed in similar contexts have some conceptual resonance: “The latent idea here is that different words will tend to appear in different contexts, and therefore one can guess at a word’s meaning by simply counting the words that appear near it” (Gavin et al.). Bringing it back to the language of Witmore, vector semantics is a way of addressing large amounts of text at the level of semantic similarity. It is an address at the level of the word, but a level at which each word exists in relation to every other word, becoming finer-grained as we narrow our investigation. We can thus say this method allows us to address text at the level of concept. This level of address is obviously not new in philosophy. Finding continuity at the level of conceptuality is the modus operandi of philosophy, and we have been addressing concepts at the scale of the entire history of philosophy since at least Hegel. What is new here is the way we can address this history and its concepts.
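
The standard measure of “similar vectors” here is cosine similarity. A toy illustration follows, with three-dimensional vectors invented for the example; real embeddings typically have 100-300 dimensions.

```python
# Cosine similarity between word vectors; the 3-dimensional vectors
# are invented for illustration (real embeddings have 100-300 dimensions).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

entropy    = np.array([0.9, 0.1, 0.3])   # hypothetical embedding for "entropy"
disorder   = np.array([0.8, 0.2, 0.25])  # similar contexts -> similar vector
negentropy = np.array([0.1, 0.9, 0.6])   # different contexts -> distant vector

print(cosine_similarity(entropy, disorder))    # close to 1.0
print(cosine_similarity(entropy, negentropy))  # noticeably lower
```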

The vector semantic analysis I am proposing concerns the concepts of entropy and negentropy in the history of Western thought. Bernard Stiegler teaches us that “the relation entropy/negentropy is really the question of life par excellence” (Stiegler, 39). Entropy is a term taken from thermodynamics denoting the tendency of any closed system to tend toward a state of disorder. Physicists of the 19th century came to apply this to the entire universe conceived of as a closed system, effectively reversing the Western conception of the cosmos, which since Socrates had been understood as an equilibrium. The concept of negentropy was introduced by Erwin Schrödinger in his lectures published as What Is Life? to give an account of life understood as the temporary struggle against entropy, organic matter being organized matter that wards off the dissipation of energy. The physicist Alfred Lotka takes a step beyond Schrödinger, arguing that humans are a particular kind of negentropy, temporarily suspending entropy by means other than the biological organs generated by natural selection. In other words, humans produce negentropy through artificial means of various kinds: tools, writing, machines, etc. According to Stiegler, the implications of these breakthroughs in scientific thought have yet to be thought through by philosophy. They must be thought through in the context of the Anthropocene, which he claims is an epoch of history marked by extreme accelerations of entropy through industrialization, accelerations that threaten the survival of the human race. The Anthropocene is a proposed geological epoch in which human beings become a geophysical force affecting the biosphere. However, for Stiegler, it is important to recognize that this epoch is firstly a technological, economic, and political epoch. Thinking through the concepts of entropy and negentropy in this context is the work that he called for, and it is the work he attempted until his death in August 2020.

The long-term project I would like to embark on is an investigation of these concepts (in their inclusion as well as in their absence) over a long period of time, covering the histories of physics, biology, philosophy, economics, and information science. Using vector semantics, I wish to find out what other words bear a semantic resemblance to the words “entropy” and “negentropy.” The goal of this project would be a contribution to what Daniel Ross calls “metacosmics,” a destruction of, and continuation of, metaphysics centered on a general theory of entropy/negentropy. This project would be oriented toward understanding how these concepts change over time, as well as seeing what resonance certain thinkers (like Nietzsche) have with these concepts despite not using their names. This would complexify and intensify our conceptions of entropy and negentropy. The scope of the present project would start much smaller, though, beginning with modeling these concepts in Stiegler’s work alone and branching out to other works and disciplines in future projects. In the following section I describe a conceptual analysis of his work, an analysis that would form part of a close reading, which will in turn be a small part of this larger project.

Environmental Scan

Much of the work of distant reading in the digital humanities utilizes methods of topic modeling. Topic modeling can be incredibly useful for philosophy, especially the history of philosophy. It can address large amounts of text at a scale that would allow us to understand which topics are characteristic of a particular school of thought (like German idealism) or a particular epoch of history (like the Enlightenment). Malaterre et al. ran an unsupervised machine learning algorithm on the complete corpus of the journal Philosophy of Science, mining articles from 1934 to 2015. One result was the revelation that the topics consistent across the whole period are “metaphysical and ontological questioning” about “space and time, causation, natural kinds, [and] realism” (Malaterre et al., 215). Another was the discovery of how the topics of logic and language rose and fell over time.
Despite topic modeling’s potential fruitfulness, I wish to show why vector semantics is a better method specifically for doing the work of philosophy, for two reasons. The first is that it allows us to model semantic similarity among different words. As Gavin et al. argue,

concepts are not words. The distinction can be glimpsed by considering any collection of synonyms, such as rubbish, trash, waste, junk. Each of these words has a distinct use, and they imply different connotations and meanings, but there remains a principle of synonymy among them, an underlying concept that ties them together (Gavin et al.)

Topic models show us which words tend to be used together. Word embeddings show us which words tend to be used with the same words. Topic modeling could perhaps guide a close reading that explores a topic more deeply, or the role a key word plays in a topic, but it cannot reach the semantic depth we want in the work of philosophy.

The other reason I prefer vector semantics for philosophy is that philosophy is not just the modeling of concepts. As Deleuze and Guattari write, “The philosopher is the concept’s friend; he is the potentiality of the concept…philosophy is the discipline that involves creating concepts” (Deleuze et al., 5). The operations of word embedding described so far are useful for clarifying and perhaps deepening concepts, and could lend themselves to some conceptual reevaluation. However, there is another operation made possible by word embeddings that holds much more potential for philosophy. Because the words are embedded with numerical values, once the conceptual networks have been modeled in vector space, we can perform calculations on the vectors that produce new vectors, and these new vectors can open a path to the creation of concepts, or what Stiegler calls “categorial invention,” which is the goal of philosophy, and perhaps of all theoretical work. A well-cited example is that of V(Queen) = V(King) + V(Woman) – V(Man). Taking the vector for “king,” adding the vector for “woman,” and subtracting the vector for “man,” the model has been shown to output a vector closest to that of “queen.” What this means conceptually is that if we add the qualities of women to the concept of kings, we have some concept which would have all the qualities of women and all the qualities of kings. If we then subtract from this new vector everything associated with men, we get the concept of queen. This is a simple example, but this functionality can prove exceptionally useful for philosophy.
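
In code, this analogy query is nearly a one-liner over pretrained embeddings. Here is a sketch assuming gensim’s downloader API and the publicly available “word2vec-google-news-300” vectors; any trained word2vec model would work the same way.

```python
# The king/queen analogy over pretrained embeddings, assuming gensim.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # pretrained word2vec embeddings

# V(king) + V(woman) - V(man): positives are added, negatives subtracted.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# Typically prints [('queen', ...)] with a high similarity score.
```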

One of the principal things Stiegler calls for is a rethinking of value to escape the Anthropocene, inaugurating what he calls the epoch of the Neganthropocene. One chief problem with capitalism, he claims, is that, under the conditions it initiates, all use value is reduced to exchange value. The usefulness of a thing is reified into how much it costs, or how much money it could make. This reduces everything to the rules of the market. The progression of this dynamic is how things like the law or works of art have been devalued, not to mention the health of the biosphere and the future itself. Thus the Neganthropocene, which would be the epoch following the Anthropocene (if there is to be one), would have to be generated on the basis of a new valuation. The question, then, is: if the value of everything is no longer to be based on profit, what is to be the new value founding this society? We can contribute to thinking through this question by treating Stiegler’s works with vector semantics. I propose querying an equation that looks something like V(profit) – V(Anthropocene) + V(Neganthropocene). This would take the concept of profit, which grounds value in this current stage of capitalism, subtract that which is characteristic of the Anthropocene, and add the vector representing what Stiegler writes about the Neganthropocene. This analogical calculation might point us toward a cluster of related words, all having to do with how we should re-ground value beyond the Anthropocene. I will run word2vec, a vector semantic algorithm, on two of Stiegler’s texts: Nanjing Lectures 2016-2019, where he lays out his theories of entropy, negentropy, Anthropocene, and Neganthropocene most systematically, and Technics and Time, Vol. 1: The Fault of Epimetheus, his first text, written before he began to speak of these concepts at all, but where he laid the grounds for his philosophical work to come.
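
A minimal sketch of this pipeline, assuming gensim’s Word2Vec implementation: the file names, tokenization, and hyperparameters below are placeholders to be tuned, not settled choices (in particular, min_count must be low enough that rarer terms like “neganthropocene” survive the vocabulary cut).

```python
# Training word2vec on the two texts and running the proposed query.
# File names and hyperparameters are placeholders, assuming gensim.
import re
from gensim.models import Word2Vec

def load_sentences(path):
    """Lower-case a plain text file and tokenize each line into words."""
    with open(path, encoding="utf-8") as f:
        return [re.findall(r"[a-z]+", line.lower()) for line in f]

sentences = (load_sentences("nanjing_lectures.txt")
             + load_sentences("technics_and_time_1.txt"))

model = Word2Vec(sentences, vector_size=100, window=5, min_count=2, epochs=20)

# V(profit) - V(Anthropocene) + V(Neganthropocene)
neighbors = model.wv.most_similar(
    positive=["profit", "neganthropocene"],
    negative=["anthropocene"],
    topn=10,
)
print(neighbors)  # candidate words for re-grounding value beyond the Anthropocene
```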

It should be made very clear that this type of calculation is not a magic wand that can reveal new concepts on its own. Witmore’s account of distant reading focuses on the scale of the address, but it does not take into full account the shape or contours of the address itself. I would argue that there are two main modes with which one can address text: analytic and synthetic. These neo-Kantian faculties that Stiegler articulates are two forces that make up the dialectic of knowledge production. The full explication of these arguments is beyond the scope of this proposal, but they show that calculating text (or any data) requires the synthetic work of the imagination to think according to standards of reason, and more importantly to dream up new concepts that do not fit into the analytic schema of the understanding. Information or data is the externalization of a prior synthetic act of reason, calculable now that it is materialized. This act decomposes a line of reasoning into discrete elements that can be quantified and calculated. The act is entropic in and of itself, but it can produce new knowledge, new concepts, if it leads to a surprise that causes one to run it through the filter of reason and create a new idea that re-organizes the analytical understanding as it stands. In other words, by modeling text, one divides it up into an enormous number of pieces (in this case, vectors) on which one can perform calculations. On their own, these models and calculations are useless. However, an act like querying Stiegler’s texts for the answer to V(profit) – V(Anthropocene) + V(Neganthropocene) could open up a path to wander down. And by wandering down this path, which would include careful thought, reasoning, and close reading, one could perhaps experience a surprise in the text. This surprise could cause one to rethink the text being read closely, and potentially lead to the production of a concept. There is of course no way to guarantee this, but it is only by seeking out that which is incalculable that philosophy can be done. Perhaps vector semantics could be a kind of calculation that leads the way toward thinking value anew, and toward how a new society can be built upon this new concept of value. This could then guide a close reading of some of Stiegler’s texts that could concretize this new, currently unknown, concept.

Work Plan

Digitizing
The work of making the texts digital will take place over the course of a week. Both texts are available online in PDF format. The week will be spent turning them into plain text files manually by typing them into a plain text editor.
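
If a programmatic first pass would help before the manual work, text can be extracted from the PDFs in a few lines; a sketch assuming the pypdf library, with hypothetical file names:

```python
# First-pass PDF-to-text extraction, assuming pypdf; the output will
# still need manual correction. File names are hypothetical.
from pypdf import PdfReader

def pdf_to_text(pdf_path, txt_path):
    reader = PdfReader(pdf_path)
    with open(txt_path, "w", encoding="utf-8") as out:
        for page in reader.pages:
            out.write((page.extract_text() or "") + "\n")  # quality varies by PDF

pdf_to_text("nanjing_lectures.pdf", "nanjing_lectures.txt")
pdf_to_text("technics_and_time_1.pdf", "technics_and_time_1.txt")
```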

Operationalization
Participants will spend a few weeks learning how to use word2vec, a popular algorithm for performing vector semantics. Once familiar with the tool, they will train the algorithm on the two texts in question, producing word vectors that can be compared by cosine similarity. These vectors will then be operationalized to determine the new vector-concept. This step will remain open to the possibility of failure and to alternative lines of questioning opened up by experimenting with the tool.

Close reading/writing
Utilizing the fruits of the vector analysis, the participants will then perform a close reading of the texts at hand, guided by the vector the algorithm produces. This will result in an article on the subject of value in the Neganthropocene.

Final Product and Dissemination

As stated, this project is to form a small part of a larger project about entropy and negentropy in the history of Western thought more generally. This particular project will lend itself to a shorter piece of writing specifically about the question of value in the Neganthropocene. It will initially be published online as a blog post. It will not only be provisionally about the conceptual framework needed to reevaluate value; it will also lay the groundwork for the larger project on metacosmics. Thus, this close reading and writing will also involve forming the questions I would like to pose in future work, as well as identifying the kinds of texts that may need to be addressed.

Works Cited

Deleuze, Gilles, and Félix Guattari. What Is Philosophy? Translated by Hugh Tomlinson and Graham Burchell, Verso, 2015.

Gavin, Michael, et al. “Spaces of Meaning: Conceptual History, Vector Semantics, and Close Reading.” Debates in the Digital Humanities 2019, University of Minnesota Press, Minneapolis, MN, 2019.

Malaterre, Christophe, et al. “What Is This Thing Called Philosophy of Science? A Computational Topic-Modeling Perspective, 1934–2015.” HOPOS: The Journal of the International Society for the History of Philosophy of Science, vol. 9, no. 2, 2019, pp. 215–249, https://doi.org/10.1086/704372.

Stiegler, Bernard. The Neganthropocene. Translated by Daniel Ross, 1st ed., Open Humanities Press, 2018.

Witmore, Michael. “Text: A Massively Addressable Object.” Debates in the Digital Humanities, The University of Minnesota Press, Minneapolis, MN, 2012, https://dhdebates.gc.cuny.edu/read/untitled-88c11800-9446-469b-a3be-3fdb36bfbd1e/section/402e7e9a-359b-4b11-8386-a1b48e40425a#p4b3. Accessed 28 Oct. 2021.

1 thought on “Modeling Value in the Anthropocene: Contributions to a metacosmics (Project Proposal)”

  1. H. A. Dodd (He/him)

    Hey Brian!

    Excellent work on this proposal. Despite the fact that tools such as vector semantics and topic modeling were either distant or unperceivable blips on my radar prior to digging through your work, the clarity of your goals and the way in which you communicate the necessary steps to reach them allowed even a DH rookie like myself to follow along and, more importantly, be inspired by the scope of your proposed project. Since we spoke in class, I’ve had the pleasure of working through Daniel Ross’s introduction to Stiegler’s The Neganthropocene and briefly revisited the long-dormant Nanjing Lectures PDF on my laptop. Stiegler’s work encompasses the blend of philosophical criticism of capitalism and technology that I’ve recently been enamored with through the (perhaps more accessible) works of thinkers such as Byung-Chul Han and Franco ‘Bifo’ Berardi, so the opportunity to do a semester-long deep dive into Technics and Time, 1: The Fault of Epimetheus and the aforementioned lecture series seems like a total treat. I’d be thrilled to join your project, should you choose to pursue it this semester.

    All of this said, I do have questions regarding the technical elements of this project:
    How familiar are you at this point with the word2vec algorithm, and are you comfortable onboarding new project members who lack familiarity? Or are we all going to be getting up to speed in unison? Either is A-OK; I’m just trying to get a sense of where we’d be starting from. I know you mentioned in your skill set that you’d prefer to take on a secondary position in relation to the more technical elements of the project, so what are you thinking with regard to filling the developer role?

    Otherwise, I’m without critiques! I really enjoyed reading through this and look forward to discussing it more in class tomorrow.
