Modeling Value in the Anthropocene – White Paper

Project Narrative

Modeling Value in the Anthropocene is the prototype for a computational method that can assist in philosophical research and the theoretical work of reading and writing more generally. As we see it, the project is both 1) a contribution to the theory and practice of the work of Bernard Stiegler, the Internation Collective, and the Association of Friends of the Thunberg Generation via the introduction of machine learning in general and word embedding in particular to reading and writing philosophy; and 2) a contribution to the fields of text analysis and distant reading, and the digital humanities more broadly, with an application of the philosophy of Bernard Stiegler that can situate natural language processing as an element of a technique of reading and writing in the digital age, while also scoping out the limits of such a technique.

The project began with a question about the potential usefulness of distant reading for theoretical work. By theoretical, we mean all of the scientific disciplines through which bodies of researchers generate new knowledges of all kinds. Distant reading is the application of mathematical models to large databases of text: the text is made quantifiable, and thus calculable, by datafying it, and computations are then performed on it, analytical operations that the human mind does not have the capacity to perform. Most of the text analysis work done in the humanities (such as that of Franco Moretti, Ted Underwood, and Michael Gavin) takes up the literary field to study things like changes in language and style across different time periods or different literary movements. There are many arguments for and against this kind of work. Having a particular interest in the discipline of philosophy, we wanted to address the question of whether this kind of work could be used in the activity that philosophers engage in.

One way to approach the question of distant reading philosophy is through the thought of Michael Witmore, especially in his blog post, “Text: A Massively Addressable Object.” Here he defends distant reading by positioning it as simultaneously continuous with the practice of reading since its genesis and discontinuous with past modalities of reading.
For Witmore, what separates digital text from older forms of text is that “it is massively addressable at different levels of scale” (Witmore). To understand this requires an understanding of what he means by “scale.” According to Witmore, reading has always been the practice of projecting a material text onto an ideal level at which one reads it. For example, one can choose to address a text at the level of the word, the sentence, the paragraph, or the book, just to name a few. These levels are not material characteristics of the text itself, but rather subjective ways of dividing the text up and projecting those divisions into (or out of) the text. A reader could just as easily choose to underline all the words of a text beginning with the letter ‘B’ and address it at that scale. How text is addressed is a matter of convention, a mode of attention contingent on the technical and normative limits of the given socio-historical context in which the reading occurs. Reading text as “books” or even “words” is a socially constructed mode of reading. As Witmore notes, “The idea of a word is itself an artifact of manuscript culture, one that could be perpetuated in print through the affordances of moveable type” (Witmore). This implies that there are other possible scales at which one could address a text.
What makes digital text different is its massive addressability, which in fact comprises many different scales contingent on the new capacities of the technology. Instead of having to read one word or one sentence at a time, we can query and compare many words from many different documents at once. Perhaps the most popular form of this in the digital humanities is topic modeling. Topic models are generated by running an unsupervised machine learning algorithm on a group of documents and approximating which words tend to be used together in the same document. This allows us to address the texts at the level of the “topic,” creating ideal topics that the texts can be said to draw from. This kind of modeling could prove useful for making clear what discourses various texts, authors, and schools of thought might be drawing from, as Lisa Rhody has discussed with reference to ekphrastic poetry.
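To illustrate, a topic model of the kind described here can be fit in a few lines with the gensim library; the toy documents, the choice of gensim, and the parameter values below are our own assumptions, offered as a minimal sketch rather than as the setup of any study cited in this report.

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy corpus: each "document" is a list of tokens (hypothetical examples)
documents = [
    ["urn", "ode", "image", "poem", "ekphrasis"],
    ["painting", "image", "gaze", "poem", "description"],
    ["entropy", "technics", "knowledge", "value", "economy"],
    ["value", "exchange", "market", "economy", "labor"],
]

dictionary = corpora.Dictionary(documents)                   # word <-> id mapping
bow_corpus = [dictionary.doc2bow(doc) for doc in documents]  # word counts per document

# Fit an LDA topic model; the number of topics is a modeling choice, not a given
lda = LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=2, passes=50)

# Each topic is a weighted list of words that tend to occur together
for topic_id, topic in lda.print_topics():
    print(topic_id, topic)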

In philosophy, this kind of analysis could address large amounts of text at a scale that would allow us to understand what topics are characteristic of a particular school of thought (like German idealism) or a particular epoch of history (like the Enlightenment). Malaterre et al. ran an unsupervised machine learning algorithm on the complete corpus of the journal Philosophy of Science, mining articles from 1934 to 2015. One result was the revelation that the topics which remain consistent throughout are those related to “metaphysical and ontological questioning” about “space and time, causation, natural kinds, [and] realism” (Malaterre et al., 215). Another was the discovery of how the topics of logic and language rose and fell over time.
Despite topic modeling’s potential fruitfulness, though, we wish to show why word embedding is a better method for philosophy specifically. There are two reasons for this. The first is that it allows us to model conceptual similarity among different words. As Gavin et al. argue,

concepts are not words. The distinction can be glimpsed by considering any collection of synonyms, such as rubbish, trash, waste, junk. Each of these words has a distinct use, and they imply different connotations and meanings, but there remains a principle of synonymy among them, an underlying concept that ties them together (Gavin et al.)

With a word embedding model, an algorithm turns each unique word in a text corpus into a vector encoding the relationship that word bears to every other word in the corpus. Because the numerical values are based on a given word’s distributional distance from every other word in the text, semantic resonance can be calculated between words that bear a similar relationship to the matrix of the text. This is useful for conceptual analysis because words with similar vectors cluster together in the vector space, indicating that they are distributed throughout the texts in a similar way. The linguistic theory behind this model is that words deployed in similar contexts have some conceptual resonance: “The latent idea here is that different words will tend to appear in different contexts, and therefore one can guess at a word’s meaning by simply counting the words that appear near it” (Gavin et al.). To return to Witmore’s language, word embedding is a way of addressing large amounts of text, through the calculation of semantic similarity, at the level of the concept. It addresses the text at the level of the word, but at a level where each word exists in relation to every other word, becoming more discrete as we narrow our investigation. Topic modeling could perhaps guide a close reading by exploring a topic more deeply, or the role a key word plays within a topic, but it cannot reach the semantic depth we might want in the work of philosophy.
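As a minimal sketch of these mechanics (the gensim library, the toy sentences, and the parameters are our own illustrative assumptions; on a corpus this small the similarity scores are noisy and only demonstrate the interface):

```python
from gensim.models import Word2Vec

# Hypothetical tokenized sentences standing in for a real corpus
sentences = [
    ["take", "the", "rubbish", "out", "to", "the", "curb"],
    ["take", "the", "trash", "out", "to", "the", "curb"],
    ["the", "waste", "was", "left", "on", "the", "curb"],
    ["junk", "was", "piled", "up", "by", "the", "curb"],
]

# Each unique word becomes a vector; window sets how much context counts as "near"
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=1)

# Words used in similar contexts receive similar vectors (high cosine similarity)
print(model.wv.similarity("rubbish", "trash"))
print(model.wv.most_similar("trash", topn=3))
```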

The other reason we prefer word embedding for philosophy is that philosophy is not just the modeling of concepts that already exist. As Deleuze and Guattari write, “The philosopher is the concept’s friend; he is the potentiality of the concept…philosophy is the discipline that involves creating concepts” (Deleuze et al., 5). The operations of word embedding already described are useful for clarifying and perhaps deepening concepts, and could lend themselves to some conceptual re-evaluation. However, there is another operation made possible by word embeddings that holds far more potential for philosophy. Because the words are embedded as numerical values, once the conceptual networks have been modeled in vector space we can perform calculations on the vectors that create new vectors, and these new vectors can open a path to the creation of concepts, or what Stiegler calls “categorial invention,” which is the goal of philosophy, and perhaps of all theoretical work. A well-cited example is that of V(Queen) = V(King) + V(Woman) – V(Man). Taking the vector for “King,” adding the vector for “Woman” to it, and subtracting the vector for “Man,” the model has been shown to output a vector closest to that for “Queen.” What this means conceptually is that if we add the qualities of women to the concept of kings, we have a concept with all the qualities of women and all the qualities of kings. If we then subtract from this new vector everything associated with men, we get the concept of queen. This is a simple example, but this functionality can prove exceptionally useful for philosophy.
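This analogy can be reproduced with pre-trained vectors. Below is a sketch using gensim’s downloader; the model named is one of the publicly distributed GloVe vector sets, the first run downloads it over the internet, and “queen” typically, though not invariably, ranks first.

```python
import gensim.downloader as api

# Load publicly distributed pre-trained GloVe vectors (downloads on first use)
vectors = api.load("glove-wiki-gigaword-50")

# V(King) + V(Woman) - V(Man): positive vectors are added, negative subtracted,
# and the nearest vocabulary vectors by cosine similarity are returned
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```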
One of the principal things Stiegler called for is a rethinking of value in order to escape the Anthropocene, inaugurating what he calls the epoch of the Neganthropocene. A chief problem of capitalism, he claims, is that, under the conditions it initiates, all use value is reduced to exchange value. The usefulness of a thing is reified into how much it costs, or how much money it could make. This reduces everything to the rules of the market. The progression of this dynamic is the way through which knowledge, art, politics, and life have been devalued, not to mention the health of the biosphere and the future itself. Thus the Neganthropocene, the epoch following the Anthropocene (if there is to be one), would have to be generated on the basis of a new valuation. The question, then, is: if the value of everything is no longer to be based on profit and calculability, what is to be the new value founding this society? We hypothesized that we could contribute to the thinking through of this question by treating Stiegler’s works with word embedding. We proposed querying a sample equation that looks something like V(value) – V(Anthropocene) + V(Neganthropocene). This would take the concept of value, subtract what is characteristic of the Anthropocene from it, and add the vector representing what Stiegler writes about the Neganthropocene. This analogical calculation might point us toward the words related together as having to do with how we should re-ground value beyond the Anthropocene. We planned to train word2vec, a word embedding algorithm, on a collection of texts by Stiegler and other members of the Internation Collective. The Stiegler works used were “The Neganthropocene,” “Nanjing Lectures 2016-2019,” and “Technics and Time, 4: Faculties and Functions of Noesis in the Post-Truth Age.” We also used “Psychopolitical Anaphylaxis: Steps Towards a Metacosmics” by Daniel Ross and “Bifurcate: ‘There Is No Alternative’,” a collection written by the Internation Collective. We would then query the corpus for this new vector and see what insight could be granted into the question of value in the Neganthropocene.
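In outline, the procedure we proposed might look like the following sketch. The file names, the sentence splitting, and the hyperparameters are hypothetical stand-ins of ours rather than the project’s actual notebook code:

```python
from pathlib import Path
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

# Hypothetical plain-text files for the corpus described above
corpus_files = [
    "neganthropocene.txt",
    "nanjing_lectures.txt",
    "technics_and_time_4.txt",
    "psychopolitical_anaphylaxis.txt",
    "bifurcate.txt",
]

# Lowercase and tokenize each rough "sentence" in each file
sentences = []
for name in corpus_files:
    for raw_sentence in Path(name).read_text(encoding="utf-8").split("."):
        tokens = simple_preprocess(raw_sentence)
        if tokens:
            sentences.append(tokens)

# Train word2vec on the corpus; these hyperparameters are placeholders
model = Word2Vec(sentences, vector_size=200, window=10, min_count=5)

# V(value) - V(Anthropocene) + V(Neganthropocene), ranked by cosine similarity
for word, score in model.wv.most_similar(
    positive=["value", "neganthropocene"],
    negative=["anthropocene"],
    topn=15,
):
    print(f"{word}\t{score:.3f}")
```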

It should be made very clear that this type of calculation is not a magic wand that can reveal new concepts on its own. Witmore’s account of distant reading focuses on the scale of the address, but it does not take into full account the shape or contours of the address itself. We would argue that there are two main modes with which one can address text: analytic and synthetic. These neo-Kantian faculties, as Stiegler articulates them, together make up the process of knowledge production. The full explication of these arguments is beyond the scope of this report, but they show that the calculation of data requires the synthetic work of the imagination to think according to standards of reason, and more importantly to dream up new concepts that do not fit into the analytic schema of the understanding. Information or data is the externalization of a prior synthetic act of reason that is calculable now that it is materialized. This act is a decomposition of the line of reasoning into discrete elements that can thus be quantified and calculated. It is entropic in and of itself, but it can produce new knowledge, new concepts, if it leads to a surprise which causes one to think it through according to the criteria of knowledge and create a new idea that reorganizes the analytical understanding as it now stands. In other words, by modeling text, one divides it up into a large number of pieces (in this case, vectors) on which one can perform calculations. On their own, these models and calculations are useless. However, an act like querying Stiegler’s texts for the answer to V(profit) – V(Anthropocene) + V(Neganthropocene) could open up a path that one could wander down. And by wandering down this path, which would include careful thought, reasoning, and close reading, one could perhaps experience a surprise in the text. This surprise could cause one to rethink the text being closely read in a new way, and potentially lead to the production of a concept. There is of course no way to guarantee this, but it is only by seeking out that which is incalculable that philosophy can be done. Perhaps word embedding could provide a kind of calculation that leads the way toward thinking value anew, and toward how a new society could be built upon this new concept of value. This could then guide a close reading of some of Stiegler’s texts that might concretize this new, currently unknown, concept. This was the kind of work we hoped this project could make possible.

Audience

From the outset of Modeling Value in the Anthropocene, the esoteric nature of Bernard Stiegler’s philosophical thought, along with the niche branch of natural language processing that is word embedding, presented our project with a unique challenge: positioning our work so that it engages equally with scholars in the digital humanities and in philosophy, while communicating our intention, our theoretical foundation, and our results in a way that remains accessible to those on the periphery of these two disciplines. We predicted that our work would likely resonate most with those working closely with the theoretical and technical approaches employed in Modeling Value in the Anthropocene (and its sister project, Modeling Memory in the Anthropocene). Nevertheless, it is our hope that the presentation of our findings now available on the project’s website, the resources providing users an introduction to the theoretical framework of the Internation Collective, and the approachable, digestible nature of the NeganthropoZene will allow those findings to capture a broader scope of attention, including that of students of philosophy, digital humanists throughout the field, and recreational scholars of theory and technology.

Our initial audience proposal included ambitious social media outreach plans involving potential engagement and collaboration with popular philosophy and theory YouTube channels, podcasts, and blogs, in order to establish a social-networkless-social-network of thinkers to bolster our aim and philosophize over our findings. We quickly realized, however, that networking in this way is difficult before the results of a project are finalized, properly assessed, and understood entirely. As those we’ve been in contact with throughout this process know well at this point, Modeling Value in the Anthropocene’s findings were not immediately evident as anything interesting, exciting, or even usable. This was a source of brief anxiety and disappointment, triggering a critical reevaluation of our findings and a search for any subtle meaning that could be extracted from our word2vec results. With time and much discussion, the “value” within our findings illuminated itself, leaving us just enough time to bolster the project’s website and adequately prepare for its fast-approaching presentation at the GC Digital Showcase. This left little wiggle room in the remainder of our work plan to reconstruct an outreach plan, put together a “press kit,” and build the working relationships necessary to coordinate any worthwhile collaboration through the aforementioned mediums.

With this initial aim abandoned, the conclusion of our project has brought necessary reevaluations of the future social component of our work and of the scholarly channels through which we would like to see it critiqued, appreciated, developed, or collaborated on. Brian has managed to establish communication with Daniel Ross, Stiegler’s longtime friend, translator, and author of Psychopolitical Anaphylaxis: Steps Towards a Metacosmics, fulfilling one goal of scholarly outreach presented in our initial proposal. Though this dialogue is rooted in Brian’s work outside of Modeling Value in the Anthropocene, it elucidates the potential for such interlocution in future stages of this work and lends a sense of real possibility to our original, overly ambitious audience proposal, which sought out the likes of Ross Abbinnett and members of the Internation Collective as sounding boards. This, along with our last-minute realization that the University of South Carolina’s resident digital humanist and text analysis authority, Michael Gavin, had been waiting for our request to take on a larger role within our project, has provided us with an exciting notion of what the future of this type of work could look like given the opportunity and time to foster such scholarly and consultative relationships.

As we’ve noted multiple times at this point, perhaps the biggest lesson learned through this project regarding audience and outreach has been: never hesitate to ask for help, because you never know who might simply be waiting for you to ask. The accessibility we’ve discovered to the very thinkers who inspired this work has been as intimidating as it is thrilling, and we look forward to future iterations of this work, now cognizant of the brilliant minds willing to engage with it, however distantly.

Project Activities

Amidst this final stage of our project’s creation, reflecting on the initial work plan for Modeling Value in the Anthropocene set out in March reminds us of the experimental nature of our approach to this analysis and of the skill set we had to develop from the ground up in order to carry out our intended investigation of Stiegler’s Nanjing Lectures. Though our original scope included objectives such as an essay detailing our findings, the overly ambitious social media outreach goals noted above, and an unrealistic reading plan for the text that was quickly reassessed, the core outline of our inceptive work plan is surprisingly similar to the one we followed up to this point of retrospection. Our navigation of Python and text analysis via workshops, tutorials, and the NLTK workbook, along with countless YouTube videos and troubleshooting coding forums, was presciently outlined and planned for, allowing us to approach the immeasurable amount of information on such topics deliberately and assiduously so as to make the most of our limited time. As a result of this intentionality, in conjunction with our consultations with project mentors such as Leanne Fan, Michael Gavin, Filipa Colado, and Rafael Davis Portela, we were able to build the foundation of skills needed to properly and effectively carry out the text analysis equations described in our project’s proposal at the start of the semester.

Though we abandoned the notion of creating an essay to accompany our text analysis work in the early weeks of its development, we feel that our website, through the written portions throughout it detailing Stiegler’s philosophy as well as the philosophy and technicity of our approach, bolsters both our purpose in producing this project and the findings presented as a result of this process. Though a lack of time dissuaded us from writing such an essay, it was in the loosely structured final-month stretch of our work plan that opportunities revealed themselves for such efforts to communicate our theoretical and technical program and to enhance the Modeling Value in the Anthropocene website. The open nature of the last month of our work plan further illuminates the creative ambiguity that was left so that our work could mature without naively calculated restraints placed on our ideas and skill set before they could fully ripen. It was this openness that allowed us to expand our corpus to include texts from the Internation Collective and Daniel Ross, extend our vector analysis to include a wide array of unanticipated equations, and include Modeling Memory in the Anthropocene as a complementary element in our analysis.
Each stage of Modeling Value in the Anthropocene brought challenges that required us to reevaluate and restructure components of the project, eventually culminating in the briefly disappointing realization that the equation central to our word2vec analysis (V(value) – V(Anthropocene) + V(Neganthropocene)) had rendered less than immediately compelling results. However, upon further reflection and with direction provided by Bret Maney and Michael Gavin, we were able, and inspired, to salvage these “non-results” and transform them into the bountiful grounds of interpretation that produced the philosophical exegesis elaborated through our presentation at the Graduate Center’s Digital Showcase. Though our work has undoubtedly provided us with an opportunity to grow in our understanding of Bernard Stiegler and of the scholarly possibilities opened by text analysis, it has also been an exercise in interpersonal problem solving, troubleshooting, and skill development. Given the complex nature of Modeling Value in the Anthropocene’s proposal, it was fundamental to the project’s success that we expeditiously immerse ourselves in the world of Stieglerian thought and text analysis, regardless of our rudimentary knowledge of one or the other, and advance our understanding through a cohesive and ambitious methodology.

Accomplishments

After completing this work, we have three products, all hosted on our website at https://metacosmics.commons.gc.cuny.edu/: a Jupyter notebook containing the Python script for our text analysis, some basic and provisional writing reflecting on the results of our analyses’ queries, and a digital zine introducing readers to the philosophy of Bernard Stiegler. The Python script contains code for uploading the (or any) text, training the word2vec model on it, creating new vectors in the model, and querying for the vectors with the greatest cosine similarity. The code is annotated for the sake of intelligibility. The reflections on the results offer some provisional thoughts on where this work could go and how it could guide a close reading of the work of Stiegler and others. The zine became a seeming necessity after our engagement with the digital humanities community regarding this project came up against an almost complete absence of familiarity with Stiegler’s work. We felt such a zine could assist in our hopes for a wider appropriation of his ideas in the digital humanities world.
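For instance, the step of creating a new vector and querying for its nearest neighbors might look like the sketch below, written against gensim’s KeyedVectors interface; the saved model name is hypothetical, and this is an illustration of the operation rather than a copy of the notebook itself.

```python
from gensim.models import Word2Vec

# Load a previously trained and saved model (hypothetical file name)
model = Word2Vec.load("stiegler_word2vec.model")

# Compose a new vector by hand from existing word vectors
new_vector = (
    model.wv["value"]
    - model.wv["anthropocene"]
    + model.wv["neganthropocene"]
)

# Query the vocabulary for the vectors with the greatest cosine similarity;
# note that, unlike most_similar, the query words themselves may appear here
for word, score in model.wv.similar_by_vector(new_vector, topn=10):
    print(word, round(score, 3))
```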

Evaluation

As we progressed this semester, the feedback provided through our consultations with professors, digital fellows, and colleagues acted as intermittent lodestars that we predominantly chose to follow, only occasionally neglecting to fully internalize such directives and finding this out the hard way down the line. Aside from the thoughtful and supportive feedback provided by Bret Maney each week as we updated him on our progress, our first piece of notable feedback from outside the Graduate Center’s purview came from Michael Gavin at the University of South Carolina. In our initial meeting, Michael offered guidance regarding the struggles of interpretive clarity inherent in word2vec analyses, the benefits of utilizing a pre-trained model, the upsides of employing topic modeling and network graphs, and the necessity of breaking the corpus into subsections to be treated as individual documents and queried in comparison. Though this first meeting was wildly illuminating, our understanding of what we were immersing ourselves in was still too limited to incorporate his invaluable instruction effectively into the project’s operation and general direction.

Our second meeting with Michael came shortly before our presentation at the Graduate Center Digital Showcase. As we updated him on the progress we had made, along with the roadblocks and missteps, he invoked the cautionary advice he had provided months prior, suggesting that we had attempted exactly what he had advised us not to attempt. When he went on to ask why we hadn’t reached back out sooner to engage with him further and avoid such lapses in project production, we realized that his feedback could have (and should have) played a larger role in our work, providing us with a deeply beneficial but hard-learned lesson to embrace as we move forward in our scholarly pursuits.

The feedback we received from digital advisors such as Filipa Colado and Leanne Fan generally came in the form of collaborative working sessions via Zoom, allowing them to get their hands on our Python script, critique it, amend it, and provide recommendations for future development. These sessions were crucial to our advancement as coders, allowing us to troubleshoot and experiment under the instructive watch of some of the Graduate Center’s most talented digital scholars. It was primarily through these working sessions, along with our engagement with our peers in the early stages of script development, that we realized that the theoretical underpinning of this code’s production required an accessible elucidation to make the core objectives of our work both compelling and digestible to digital scholars unfamiliar with Stiegler’s philosophical project. To address this “weakness” of obscurity, brought about by the sea of neologisms one must swim through to grasp the core arguments of Stiegler’s work, we devised the “NeganthropoZene” to act as an introductory brochure for those interested in but perhaps intimidated by the occasionally abstruse nature of our work. As this opacity was also mentioned in the Digital Showcase dress rehearsal, we are excited to have produced a resource that might help shed light on Stiegler’s thought for curious citizens of the Anthropocene.

Lastly, the feedback received via the Digital Showcase was largely positive and restricted to brief kudos in the Zoom chat, providing little to extract and apply to the betterment of our project. However, after a semester of applying the critiques and directives of those we admire, we feel that we have recognized the project’s weaknesses, engaged with them as an element of our presentation at the Digital Showcase, and addressed them to the best of our ability on the project’s website.

Future of the Project

Modeling Value in the Anthropocene is just the beginning of the work we will be doing with text analysis in philosophical and other academic research. Our goals for the future of this work are twofold. On the one hand, we will take the lessons gained from this project into a more mature text analysis that lends itself to close reading and to the production of philosophical writing drawing on such analysis and reading. On the other hand, we hope to develop an application that can perform the kind of word embedding conducted here (adjusted in the ways our experience suggests) with a user-friendly GUI, allowing academics and other interested people to do this kind of work without needing to know how to program. This would give more researchers access to the tool and would hopefully contribute to the work that so desperately needs to be done in the Anthropocene. It could also become part of a larger idea: a word processor linking tools such as this with others to produce hyperlinked writing within a large collaborative research network, allowing new knowledge to be transmitted and shared in new ways. This dimension of the project is, however, much more long-term in scope.