Felicity Howlett, Bio and Contribution

Felicity Howlett received a CUNY BA degree in psychology from City College in 2021 and is currently a student in the MALS program at the Graduate Center. Previously, she earned a Ph.D. in musicology from Cornell University with a focus on the twentieth century; her thesis explored the solo piano interpretations of Art Tatum. She later became an executive assistant to David Judelson (a co-founder of Gulf+Western Corporation), who devoted his entrepreneurial skills to pharmaceutical research (developing a safe blood substitute) and the “last mile” problem (digital technology). Over the years, she transcribed and edited his recollections, and in 2016 she produced a two-volume memoir of his life and work. As a pianist, she has entertained in piano bars and in various hospital and elderly community settings. Presently, she volunteers in a “Music for Veterans” music therapy program under the direction of Concetta Tomaino at the Institute for Music and Neurologic Function.

The Sounds of Music project is inspired by the creative efforts of Concetta Tomaino. As Project Director, Felicity will contribute her personal experience and contacts, concentrating especially on helping to frame the model program, outreach, and research, including the examination of other interactive music programs, resources to enhance accessibility to the internet, and problems of latency and other interactive connection issues. She is fortunate to have creative, technically well-equipped partners who have solid experience in design and project management.

 

 

Modeling Value in the Anthropocene – Work Plan

3/2-3/9:

Stiegler Lectures: Finish the 2016 Lectures and work through the entirety of the 2017 Lectures (Pages 50 – 169)

Discuss and decide how to prepare the corpus

Continue expanding Python / Text Analysis skillset through workshops and tutorials
Work through the NLTK workbook

Reach out to Daniel Ross to establish contact with the Internation Collective as part of audience-building and scholarly outreach

3/10-3/16:

Stiegler Lectures: 2018 Lectures (Pages 169 – 269)

Begin to clean and prepare the corpus

Go to the Word Embeddings workshop

Continue expanding Python / Text Analysis skillset
Work through the “Automate the Boring Stuff with Python” lessons online

Meet with Filipa

3/17-3/23:

Stiegler Lectures: 2019 Lectures (Pages 269 – 345)

Lemmatize the corpus, remove punctuation, and start writing loops (a preprocessing sketch follows this week’s tasks)

Begin experimenting with the text in Jupyter (via topic modeling, word embedding, etc.)

Continue expanding Python / Text Analysis skillset through workshops and tutorials
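
A minimal sketch of what the lemmatization and punctuation-removal step might look like with NLTK, assuming the lectures have already been exported to a hypothetical plain-text file named nanjing_lectures.txt; tokenizer choice and filtering rules are placeholders to be refined during the project.

```python
import string
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("punkt")
nltk.download("wordnet")

# hypothetical plain-text export of the lectures
with open("nanjing_lectures.txt", encoding="utf-8") as f:
    raw_text = f.read()

lemmatizer = WordNetLemmatizer()
tokens = nltk.word_tokenize(raw_text.lower())

# drop punctuation tokens, then lemmatize each remaining token in a loop
cleaned = [
    lemmatizer.lemmatize(tok)
    for tok in tokens
    if tok not in string.punctuation
]

print(cleaned[:20])
```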

3/24 – 3/30:

Begin to formalize and finalize the research methods and functions through which we will be carrying out our close reading of the text.

Proceed with distant reading of the text.

3/31 – 4/6:

Finalize and begin to implement the methods through which we will be carrying out our distant and close reading of the text.

Continue to grow and apply our capabilities with Python, Text Analysis, & Word Embedding.

Develop essay outline (and section delegation) and formalize our method of presentation.

4/7 – 4/13:

Proceed with close reading and essay development.

4/14 – 4/20:

Continue to develop project.

4/21 – 4/27:

Assemble a rough draft of the project’s final product

Proceed with group revisions

Consultation with the “two-two’s” for project critiques

4/28 – 5/04:

Continue to develop and finalize the project.

Finalize method of presentation.

5/05 – 5/11:

Make final preparations.

Personal Blog Connie Cordon 03/01

H316 General Purpose Digital Computer Brochure, 1965


Connie Cordon graduated in 2018 from Pratt Institute with a BFA in Communication Design and a concentration in Illustration. Publications and exhibitions include 3×3 Illustration Annual No. 16, Creative Quarterly 55, Society of Illustrators 2019, Society of Illustrators 2018, and Melted City 4 at YUI Gallery in 2018.

Her main interests include collage, which later grew into an interest in archives, as she spent much of her time researching materials in both physical and digital collections. She became interested in the humanities while working on her illustration thesis, for which she created a series of images based on other individuals’ personal accounts of sexual trauma they experienced during childhood. In researching how trauma can impact an individual’s memory, she also explored the media’s interpretation of similar topics of abuse and sexuality, and how they can be misconstrued into something else entirely: for example, how the media has managed to make Lolita a positive cultural icon in our society despite the controversy the book caused. This research led her to questions about the notions of censorship and truth, censorship in advertising, and more specifically the ethics and morals of censorship in art and photography.

She is eager to explore how the humanities and social sciences are intertwined with visual media in storytelling, as well as the moral responsibility that visual storytellers have for its impact on culture. She hopes to gain the tools needed to practice multidisciplinary experimentation with emerging media technologies that deal critically and logically with subjective, complex, and imperfect information.

Her main contributions to the project are researching material, creating a visual identity, handling outreach and social media, and sharing project management with Kai, the main project leader.

Contribution Statement

Hampton Dodd graduated from the CUNY School of Professional Studies with a B.A. in Communication and Media Studies in 2021. Throughout his time there, he focused primarily on the relationship between technology and power through the theoretical lens of Neo-Marxist and Foucauldian analysis. These influences ultimately culminated in a senior thesis entitled Physiognomy, Facial Recognition Technology, & Biopolitics, which sought to uncover and trace the common genealogical thread of pseudoscientific physiognomic thought from the Age of Enlightenment through the Third Reich and into the emergent webs of facial recognition technologies presently proliferating across the world. Beyond this, his research interests encompass the development and application of technological tools in the advancement of digital cultural criticism, the critical analysis of big data and surveillance capitalism, and labor in the age of automation and the platform economy. Currently, Hampton is developing a collaborative project called Modeling Value in the Anthropocene: Contributions to a metacosmics, a vector semantics analysis of the Nanjing Lectures given by philosopher Bernard Stiegler between 2016 and 2019, alongside project manager and co-author Brian Millen.

Revised What is a Community Garden in the Digital Age?

Brief overview of project

New York City is often known as a “concrete jungle.” While we envision a gray and black landscape, few pay attention to the actual green foliage that quietly engulfs the city. During the Covid-19 pandemic, interest in green spaces heightened, highlighting how necessary green areas are to the well-being of people living in New York. One way to encounter and interact with green spaces is through the many community gardens spread all over NYC, which have a long history dating back to the 1970s. But as the world moves deeper and deeper into a tech-driven society, these gardens must do their due diligence to keep up, which can seem like a contradiction given the wholly untechnical nature (pun intended) of what these gardens are for. How can we come to understand community gardens by exploring their digital presence, and what does that presence mean for the gardens’ work? Those are the questions we hope to answer in our project. To complete this task, we will look into data sets and information taken from already established websites such as GrowNYC and GreenThumb, as well as census data on community gardens. Once we compile as much information as we can, we will visualize it through Tableau and upload our data visualization models onto an interactive website. Our goal for this project is not only to make this curated data more readily available, but also to highlight trends and oddities that have gone unnoticed in previous data collection. Additionally, to create a project that also reflects the people of these gardens, we will try to conduct a small ethnographic research study to better understand the importance of these gardens’ digital presence. Moreover, with the myriad of gardens in NYC, we anticipate that some will not have information available on the web; our project can also be an outlet for these “web-anonymous gardens” to gain a digital footprint. In all, we consider our work both a data collection effort and an awareness project. Our end goal is to help both the gardens and those interested in learning about or helping them.

The question/problem it helps answer:

In essence, we are aware there is already a plethora of digital information on community gardens available. Our objective is to help these gardens become even more digitally accessible in a way that is more engaging. The graphic representation we provide will be an effective and easier way to communicate the vast data already collected. As mentioned above, we will complete this task by looking into data sets and information taken from already established websites as well as public census data. We will then scan the digital presence of each and every community garden in New York City and create a visual map showing which gardens have a presence and which do not. This map will have different components that aggregate the data found in our research. We will also do a deep dive that showcases the different types of community gardens and how they label themselves. We believe this can be beneficial to local residents who would like to know what the gardens around them have to offer, as well as serving as an indicator of whether an individual garden’s information is available on a website or social media account. Highlighting the absence of a web presence can raise awareness in the community and possibly provide an incentive for those involved to create one for their garden. This can in turn highlight potential new ways of engaging volunteers.
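
As an illustration only (the project itself plans to use Tableau and WordPress), here is a minimal sketch of how the web-presence flag could be mapped in Python with folium, assuming a hypothetical CSV export with name, latitude, longitude, and has_website columns compiled from GreenThumb/GrowNYC data and our own scan.

```python
import pandas as pd
import folium

# hypothetical CSV compiled from GreenThumb/GrowNYC data and our own scan
gardens = pd.read_csv("community_gardens.csv")

# center the map on New York City
nyc_map = folium.Map(location=[40.7128, -74.0060], zoom_start=11)

for _, row in gardens.iterrows():
    folium.CircleMarker(
        location=[row["latitude"], row["longitude"]],
        radius=4,
        color="green" if row["has_website"] else "gray",
        fill=True,
        popup=row["name"],
    ).add_to(nyc_map)

nyc_map.save("gardens_digital_presence.html")
```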

Project audience

This project will be for those interested in getting a better understanding of what a community garden is, but we will mostly focus on people interested in joining one. Our work will also be beneficial to future data researchers and digital humanists who wish to expand on the digital work centered around these gardens.

Contribution to DH & potential impact:

We do not wish to recreate digital portals like GrowNYC and GreenThumb. Instead, we hope to explore, as digital humanists, what a community garden is by looking at the data available online. Our project will create a digital interactive map of NYC showing information about the different types of gardens, their sizes, accessibility, etc. It will let anyone explore what these gardens’ digital presence can tell us about what it means to be a community garden in New York City. This blueprint web resource can hopefully become a hub for those interested in learning about or interacting with these spaces, and help create awareness for the community gardens. In addition, we will re-engage dead or outdated websites and help reignite their digital presence. Our ethnographic research will attempt to find out why some of these gardens have not established a digital presence, among other unforeseen questions. Perhaps their answers will lead to the discovery of discrepancies within the community. As mentioned above, this will also be beneficial for other digital humanists working with community gardens.

Feasibility assessment:

The data needed to create a proof-of-concept model for this digital project can be collected in the duration of a college semester. We are aware that our small ethnographic study might only be possible later in the term, since many gardens are currently closed for the winter and not open to the public.

What digital tools/methods/team member skill sets will be needed to realize the project?

Project Manager/Documentation: Faihaa

Will be the general overseer of the group and plan the steps needed to move progress along; will maintain the Google spreadsheets listing the data found so far; will keep track of all contact info; and will be involved in collecting, organizing, and analyzing data.

Developer/Designer/ethnographic research: Benjamin

Will design and conduct a small ethnographic study with the target group, build a website (or other digital representation), and assist with research.

Documentation/Research: Nelson

Will be able to collect data. Able to travel and communicate with community garden leaders. Able to research historical information regarding community gardens in NYC. Eager to learn digital tools in order to present data visually, either by graphs or maps. 

All team members will be involved with Outreach. Our project also counts as Outreach!

TOOLS

We are interested in using Tableau Online and creating a WordPress interactive website

Legal or technical barriers

  1. Communicating with community leaders at the gardens: they may not respond or may not want to divulge information that can be made public.
  2. The data we collect might not be sufficient to create a meaningful project.
  3. Time constraints might prove too tight.
  4. Unforeseen events: weather, Covid restrictions, online tools becoming unavailable.

How will you tackle those potential pitfalls if your project is selected?:

  1. The team would have to find creative ways to collect data otherwise, via already established information on the web or by contacting people who run similar projects.
  2. Reduce and shift the number of gardens being contacted; possibly restrict the type of data being collected.
  3. The team would need to have the foresight to hold conversations among themselves on how to lead this project in the most ethical way possible.
  4. The team would need to react accordingly and be quick thinkers.

Final product (even if tentative):

It is our intention to build an interactive website/Tableau dashboard/map to present the collected data, which we hope will give a better understanding of the digital presence of community gardens and their accessibility.

Brian Millen – Personal Bio/Contribution Statement

Brian Millen received his B.A. in Philosophy from SUNY Purchase and is currently an M.A. student in Digital Humanities at the CUNY Graduate Center. During his time at Purchase, he submitted a senior thesis entitled Education and a Discipline Beyond Punish, which argued for a renewed role of education in global political transformation. His theoretical interests involve the relationship between humans and technology and the political consequences thereof. His research interests concern combining computational text analysis with more traditional scholarly work in philosophy to investigate technology, politics, economics, and strategies for overcoming the Anthropocene. Currently, Brian is working on a project called Modeling Value in the Anthropocene: Contributions to a metacosmics, a vector semantics analysis of Bernard Stiegler’s Nanjing Lectures, for which he serves as project manager while working co-extensively with research collaborator Hampton Dodd as co-developer and co-author.

Caitlin Cacciatore: Bio & Contribution Statement

Caitlin Cacciatore (she/hers) graduated from Macaulay Honors College at Baruch College, CUNY, in 2021 with a degree in Artificial Intelligence Studies. She comes from a background largely grounded in theory, focused on the ethical implications of the proliferation of automation, machine learning, intelligent agents, and the Internet, as well as the inherently problematic nature of the digital divide, issues of accessibility, and the problem of machine bias. Caitlin also took courses in Python, C++, and SQL, as well as Excel, Access, and other database management tools.

During her time at Baruch, Caitlin became a published author, and she possesses copyediting skills as well as website design and maintenance skills born of administering her own blog since 2016. In light of her intimate familiarity with WordPress, she will serve as the Sounds of Music team’s Technical Coordinator and Website Administrator. She will also work in a secondary capacity as a Copy Editor. Her duty is to polish content produced by herself and her colleagues and transfer this content to a public-facing website. Her role is also to ensure that everything is running smoothly behind the scenes and to co-create a seamless user experience for anyone visiting the website.

Caitlin is also responsible for co-creating a manifesto of best practices for music enrichment and engagement programs, as well as co-authoring a Sounds of Music Workshop Toolkit, ideal for allowing individuals, groups, nursing homes, hospitals, daycares, schools, and other public and private facilities to create their own music enrichment programs. Additionally, Caitlin will brainstorm and iterate with her team members to launch a virtual synchronous Sounds of Music trial program, as well as a pilot program for the team’s target population of homebound elders.

Sounds of Music Work Plan

Deliverable 1: Sounds of Music WordPress Website

  • Creating & populating a public-facing WordPress platform 

Responsible: Caitlin

Resources: WordPress via CUNY Academic Commons

Duration: Ongoing from February – May 2022

  • Research visual references for the website user interface

Responsible: Raquel

Resources: Web

Duration: March 2022

  • Create visual assets for the website

Responsible: Caitlin & Raquel

Resources: Web (Image Banks), Illustrator 

Duration: March – May 2022

  • Latency Information & Information about Troubleshooting 

Responsible: Caitlin & Felicity 

Resources: WordPress

Duration: March 2022

  • Produce a manifesto of best practices for music enrichment

Responsible: Caitlin & Felicity 

Resources: WordPress

Duration: March 2022

  • Launch the public-facing version of the website

Responsible: Caitlin

Resources: WordPress via CUNY Academic Commons

Duration: May 2022

 

Deliverable 2: Accessibility Toolkit

  • Outreach for select potential users to interview 

Responsible: Felicity

Duration: March 2022

  • Research about accessibility tools

Responsible: Raquel & Felicity

Resources: Databases, consultants, etc.

Duration: March 2022

  • Interview potential users

Responsible: Raquel & Felicity

Resources: Use interview methods to determine what potential users wish to get out of a music enrichment program

Duration: March 2022

  • Develop first version of the accessibility toolkit

Responsible: Raquel

Resources: WordPress, Youtube, Adobe Illustrator

Duration: March 2022

 

Deliverable 3: Sounds of Music Workshop (Synchronous Online Experience Pilot Program)

  • Interview Workshop facilitator (Jeremy)

Responsible: Felicity, Caitlin, Raquel 

Resources: Zoom

Duration: March 2022

  • Design first version of Sounds of Music Workshop

Responsible: Felicity, Caitlin, Raquel

Resources: Zoom

Duration: March – April 2022

  • Perform outreach to participants to join Sounds of Music Workshop  

Responsible: Felicity

Resources: E-mail, Facebook

Duration: March 2022 

  • Launch first version of Sounds of Music Workshop

Responsible: Felicity, Jeremy (?)

Resources: Zoom

Duration: April 2022

  • Collect feedback from the participants of Sounds of Music Workshop

Responsible: Felicity & Raquel

Resources: Zoom

Duration: April 2022

  • Define main changes/iterations about the Sounds of Music Workshop

Responsible: Felicity & Raquel

Resources: Zoom

Duration: April 2022

 

Deliverable 4: Sounds of Music Workshop Toolkit (Do-It-Yourself workshop version)

  • Brainstorm musical activities for groups using Zoom 

Responsible: Raquel, Felicity, Caitlin, Jeremy

Resources: Zoom, Jamboard

Duration: May 2022

  • Develop Sounds of Music Workshop Toolkit

Responsible: Raquel & Caitlin

Resources: WordPress, Illustrator 

Duration: May 2022

  • Collect feedback of Sounds of Music Workshop Toolkit

Responsible: Caitlin, Felicity, Raquel

Resources: WordPress, Zoom

Duration: May 2022

  • Define main changes/iterations about the Sounds of Music Workshop Toolkit

Responsible: Caitlin, Felicity, Raquel, Jeremy (?)

Resources: WordPress, Zoom 

Duration: June 2022

  • Launch Sounds of Music Workshop Toolkit 

Responsible: Caitlin

Resources: WordPress 

Duration: June 2022

 

For more details about our schedule, check out our Gantt chart:

work plan

 

Revised Mainframe Project Proposal

Overview

The Mainframe Project [new title needed] is a digital collection of archival media that is manipulated through deformance and other means to correlate and complicate our understanding of social relationships and human interactions with computers, using mainframes as a point of departure. By focusing on the computing devices in the workplace before the advent of personal computers, we hope to defamiliarize some of the common assumptions about computing informed by contemporary culture, and to think through how the social context around computing might have evolved differently given the set of perceptions about computing from the 1950s through the 1970s. Specifically, this project will probe the underlying assumptions of human-computer interaction from that time period, as well as the gender politics of computing in the workplace before the personalization of computing. It will conduct these investigations by looking at archived material such as advertising, manuals, and other ephemera, and relating it both to scholarly work tangentially related to mainframes and to scholarly work on human-computer interaction contemporaneous with mainframes, like cybernetics.

Problem addressed

There is a dearth of (digital) humanities scholarship about mainframe computers. Antecedents can be traced in works from various “media archaeologists”: Tung-Hui Hu’s A Prehistory of the Cloud (cultural poetics of cloud computing), Kirschenbaum’s Track Changes: A Literary History of Word Processing (word processing as software), Liu’s The Laws of Cool (knowledge work), Lisa Gitelman’s “Raw Data” Is an Oxymoron (various histories and pre-histories of data), and Friedrich Kittler (general theory), among a few others. But in these works, mainframe computing is an implementation detail in larger arguments, often glossed over, even though so much of our computing inherits key concepts from the features and limitations of the age of Big Iron.

This project may also turn its eye towards other scholarly and non-scholarly writing coterminous with the Mainframe and related to computing. Examples include Leary’s speculative work like How Have Computers Empowered Humans?, but also early scholarly efforts to understand human and machine interaction, like Ashby’s An Introduction to Cybernetics, the work of Norbert Wiener, and Bateson’s Steps to an Ecology of Mind. Books for larger audiences that retrospectively evaluate mainframe computers may also be helpful in probing some of the social and interactive nuances of mainframe computing.

Our project also involves the manipulation of visual media in the tradition of deformance as initially conceived by Lisa Samuels and Jerome McGann in Deformance and Interpretation. Scholars practicing deformance in the visual realm serve as an inspiration as well, including Jason Mittell’s many videographic deformations of Singin’ in the Rain (https://vimeo.com/jmittell) and the photo glitches of Michael J. Kramer (Glitching History).

Project audience

Scholars and students interested in imaginative explorations of what it was like to interact with and encounter mainframe interfaces in person, by proxy (timesharing), and as a cultural phenomenon in popular media and the workplace. We also believe there is a need for educational material aimed at non-technical audiences in the broader public, given their relative unfamiliarity with the technologies involved.

Contribution and impact

The study of mainframe computing is not only a historical exercise in preservation. Because of the foreignness of mainframe computing to contemporary experience, it becomes a prism through which to explore larger topics related to information technology in society. As mentioned above, we are interested in analyzing gender dynamics and human-computer interactions during the rise of mainframe computers. This is also a meaningful down payment on other computing topics that future projects could explore in a digital humanities context, such as the introduction of minicomputers and the rise of personal computing.

Final product

A digital collection of augmented photographs (and potentially video) with supporting written analysis and context will be hosted on a website. Material in the collection will come from marketing, manuals and other archival artifacts.

Feasibility assessment

Tool selection

At this moment, the team is leaning toward Wax as the platform on which to build our digital collection. That said, we will re-evaluate that choice against other options like CollectionBuilder and Omeka, depending on time constraints, the time necessary for research and media manipulation, the potential to include video, etc.

Our tools for manipulating photos and videos will vary. Physical collage will be done by hand, though we reserve the right to use digital tools like Adobe’s InDesign to perform the same operation. PhotoMosh, Hex editors and Python libraries may be used in glitching, pixelsorting and other deformance techniques.
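
As one example of the deformance techniques named above, here is a minimal pixel-sorting sketch using the Pillow library; the filename is hypothetical, standing in for an archival scan, and the glitching and hex-editing approaches would use different tools.

```python
from PIL import Image

# hypothetical scan of a mainframe advertisement
img = Image.open("h316_brochure_scan.png").convert("RGB")
pixels = img.load()
width, height = img.size

# sort each row of pixels by brightness, a simple deformance of the source image
for y in range(height):
    row = [pixels[x, y] for x in range(width)]
    row.sort(key=lambda p: sum(p))
    for x, value in enumerate(row):
        pixels[x, y] = value

img.save("h316_brochure_pixelsorted.png")
```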

Finally, depending on the media types included in our collection, we may use a third-party platform to host assets like video (e.g., YouTube or Vimeo).

Team composition

Our team is composed of two scholars (Connie Cordon and Kai Prenger) who cover most tasks in a primary/secondary structure.

  • Project Manager → Connie Cordon primary, Kai Prenger secondary
  • Developer → Kai Prenger primary, Connie Cordon secondary
  • Visual archival researcher → Connie Cordon primary, Kai Prenger secondary
  • Textual researcher → Kai Prenger primary, Connie Cordon secondary
  • Outreach → Connie Cordon primary, Kai Prenger secondary
  • Project updates → Drafted together, Kai Prenger primary, Connie Cordon secondary

Barriers and challenges

We believe the biggest challenges relate to original research and tracking down archival material that fits our specific subject matter, though university archives and websites and institutions like The Computer History Museum should help mitigate the risk of finding nothing worth writing about. We have some anxiety about how much customization we will want in the collection itself, leading us to evaluate whether Wax is appropriate or whether CollectionBuilder or even Omeka might be a better fit (using this gist to evaluate). The time available for developing the site in a short semester will have to be traded off against the time needed to research and develop intriguing assets.

Updated Project Proposal: Modeling Value in the Anthropocene

Abstract

Modeling Value in the Anthropocene is an attempt to accomplish two goals, one more general and one more specific. The first, more general goal is to provide an example of using computational tools in the world of philosophy. Doing this could enhance philosophy by drawing on methods that process quantitative information at a speed and scale no human reader could match. We argue that the particular tool of vector semantics can model large corpora of text in such a way that a new perspective emerges, one that could guide a subsequent close reading. Our hope is that this prior modeling can provide the potential for a close reading that leads to the creation of a concept, which is the aim of philosophy.
The particular question this project will try to answer is something like: what concept should ground individual and collective value in the world beyond the Anthropocene? After using word2vec to model the conceptual networks of the concepts “profit,” “Anthropocene,” and “Neganthropocene” in the corpus of philosopher Bernard Stiegler, we will perform calculations on these vectors to unveil a new vector, whose corresponding concept will guide our close reading of one of his texts. This close reading will lend itself to the production of an article and potential book chapter. It will be part of a larger project of modeling the concepts of “entropy” and “negentropy” in the history of Western thought, which might result in a book on what Daniel Ross calls metacosmics.

 

List of Participants

Brian Millen, Digital Humanities Master’s student at the CUNY Graduate Center, will be the project manager, co-developer, and co-researcher/author.
Hampton Dodd, Digital Humanities Master’s student at the CUNY Graduate Center, will be the secretary, co-developer, and co-researcher/author.

 

Enhancing the Humanities through Innovation

In proposing a reading of philosophical texts that makes use of algorithms for natural language processing, one would have to consider what, if anything, these computational methods of reading and writing bring to doing the work of philosophy. Natural language processing consists of performing mathematical calculations on language. In text analysis specifically, we use digital text as data that we can then model in different ways by making quantitative calculations that a human mind would never be able to do with such large amounts of text on its own through close reading. There are many arguments for and against this kind of practice, and it is likely that many native to the world of philosophy would be especially skeptical of accepting a computational approach. We believe one way of arguing how text analysis could contribute to philosophical practice is through Michael Witmore’s blog post, “Text: A Massively Addressable Object.” In this post, he situates distant reading as simultaneously continuous with the practice of reading since its genesis and discontinuous with past modalities of reading.

Witmore writes that what separates digital text from older forms of text is that “it is massively addressable at different levels of scale” (Witmore). To understand this requires understanding what he means by “scale.” According to Witmore, reading has always been the practice of abstracting a material text at the ideal level at which one is to read it. For example, one can choose to address a text at the level of the word, the sentence, the paragraph, or the book, just to name a few. These are not material characteristics of the text itself, but rather subjective ways of dividing the text up and projecting them into/out of the text. A reader could just as easily choose to underline all the words of a text beginning with the letter ‘B’ and address it at that scale. How text is addressed is a matter of convention, contingent on the technical and normative limits of the given socio-historical context of the act of reading. The fact that we tend to read text as “books” or even “words” is simply a matter of convention. As Witmore writes, “The idea of a word is itself an artifact of manuscript culture, one that could be perpetuated in print through the affordances of moveable type” (Witmore). What makes digital text different, then, is the scale of addressability, or rather the many different scales at which text can now be addressed. Instead of having to read one word at a time or one sentence at a time, we can query and compare many words from many different documents at once. A popular form of this found in the digital humanities is topic modeling. Topic models are generated by running an unsupervised machine learning algorithm on a group of documents and approximating which words tend to be used together in the same document. This allows us to address the texts at the level of “topic,” creating ideal topics we can say that the texts appear to be drawing from (ideal in the sense of non-real; in the realm of the potential or virtual, discoverable in the imagination, as distinguished from the actual).
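
To make the topic-modeling example concrete, here is a minimal sketch using gensim’s LDA implementation on a toy set of pre-tokenized documents; the word lists, topic count, and parameters are purely illustrative stand-ins for a real corpus.

```python
from gensim import corpora
from gensim.models import LdaModel

# toy pre-tokenized documents standing in for a much larger corpus
documents = [
    ["entropy", "system", "disorder", "thermodynamics"],
    ["negentropy", "life", "organism", "organization"],
    ["anthropocene", "technology", "economy", "epoch"],
    ["value", "profit", "exchange", "market"],
]

dictionary = corpora.Dictionary(documents)
bow_corpus = [dictionary.doc2bow(doc) for doc in documents]

# approximate which words tend to be used together across documents
lda = LdaModel(bow_corpus, num_topics=2, id2word=dictionary, passes=20, random_state=42)

for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```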

The form of computational address we propose using in a philosophical context is called vector semantics analysis. It is a technique of computational linguistics wherein we run an algorithm that groups together words that bear a semantic similarity to one another. This similarity is represented using a particular kind of vector analysis called word embeddings. Word embeddings assign each word in the text(s) a numerical vector based on its distributional relationship to every other word in the text. We can then map these words graphically, representing concepts as networks of words used “synonymously” in the text. This is useful for conceptual analysis because words that have similar vectors to each other will appear clustered together within the vector space, indicating that they are distributed throughout the texts in a similar way. The linguistic theory operating behind this model is that words that are deployed in similar contexts have some conceptual resonance: “The latent idea here is that different words will tend to appear in different contexts, and therefore one can guess at a word’s meaning by simply counting the words that appear near it” (Gavin et al.). Bringing it back to the language of Witmore, vector semantics is a way of addressing large amounts of text at the level of semantic similarity. It is an addressing of the text at the level of the word, but a level at which each word exists in relation to every other word, the level getting more discrete as we narrow our investigation. We can thus say this method allows us to address text at the level of the concept. This level of address is obviously not new in philosophy. Finding continuity at the level of conceptuality is the modus operandi of philosophy, and we have been used to addressing concepts at the scale of the entire history of philosophy since at least Hegel. What is new here is the way we can address this history and its concepts.
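
A minimal sketch of what this looks like in practice with gensim’s Word2Vec (version 4.x parameter names), using a placeholder list of tokenized sentences; the real corpus, window size, and vector dimensions would be chosen during the project.

```python
from gensim.models import Word2Vec

# placeholder: each inner list is one tokenized sentence from the corpus
sentences = [
    ["entropy", "names", "the", "tendency", "toward", "disorder"],
    ["negentropy", "is", "the", "temporary", "struggle", "against", "entropy"],
    ["the", "anthropocene", "accelerates", "entropy", "through", "industrialization"],
]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1, epochs=50)

# words distributed like "entropy" cluster near it in the vector space
print(model.wv.most_similar("entropy", topn=5))
```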

The vector semantic analysis we are proposing is that of the concepts of entropy and negentropy in the history of Western thought. Bernard Stiegler teaches us that “the relation entropy/negentropy is really the question of life par excellence” (Stiegler, 39). Entropy is a term taken from thermodynamics denoting the tendency of any closed system toward a state of disorder. Physicists of the 19th century came to apply this to the entire universe conceived of as a closed system, effectively reversing the Western conception of the cosmos, which since Socrates had been understood as equilibrium. The concept of negentropy was introduced by Erwin Schrödinger in his lectures published as What Is Life? to give an account of life understood as the temporary struggle against entropy, organic matter being organized matter that wards off the dissipation of energy. The physicist Alfred Lotka takes a step beyond Schrödinger to argue that humans are a particular kind of negentropy, temporarily suspending entropy by means other than just biological organs generated by natural selection. In other words, humans produce negentropy through artificial means of various kinds: tools, writing, machines, etc. According to Stiegler, the implications of these breakthroughs in scientific thought have yet to be thought through by philosophy. They must be thought through in the context of the Anthropocene, which he claims is an epoch of history marked by extreme accelerations of entropy through industrialization, accelerations that threaten the survival of the human race. The Anthropocene is a proposed geological epoch in which human beings become a geophysical force affecting the biosphere. However, for Stiegler, it is important to recognize that this epoch is firstly a technological, economic, and political epoch. Thinking through the concepts of entropy and negentropy in this context is the work that he called for, and it is the work he attempted until his death in August 2020.

The long-term project we would like to embark on is an investigation of these concepts (in their inclusion as well as in their absence) over a long period of time, covering the histories of physics, biology, philosophy, economics, and information science. Using vector semantics, we wish to find out what other words bear a semantic resemblance to the words “entropy” and “negentropy”. The goal of this project would be a contribution to what Daniel Ross calls “metacosmics”, which is a destruction of/continuation of metaphysics centered on a general theory of entropy/negentropy. This project would be oriented toward understanding how these concepts change over time, as well as seeing what resonance certain thinkers (like Nietzsche) have with these concepts, despite not using their names. This would complexify and intensify our conceptions of entropy and negentropy. The scope of this project would start much smaller though, beginning with just modeling these concepts in Stiegler’s work alone, further branching out to other works and disciplines in future projects. We wish to perform a conceptual analysis of his work that we will describe in the following section, an analysis that would be part of a close reading, a reading that will be a small part of this larger project.

 

Environmental Scan

Much of the work of distant reading in the digital humanities utilizes methods of topic modeling. Something like topic modeling can be incredibly useful for philosophy, especially the history of philosophy. It can address large amounts of text at a scale that would allow us to understand what topics are characteristic of a particular school of thought (like German idealism) or a particular epoch of history (like the Enlightenment). Malaterre et al. run an unsupervised machine learning algorithm on the complete corpus of the journal Philosophy of Science, mining articles from 1934 to 2015. One result of this was the revelation that the topics that are consistent throughout the whole period are “metaphysical and ontological questioning” about “space and time, causation, natural kinds, [and] realism” (Malaterre et al., 215). Another was the discovery of how the topics of logic and language rose and fell throughout time.
Despite topic modeling’s potential fruitfulness, we wish to show why vector semantics is a better method specifically for doing the work of philosophy. There are two reasons for this. The first is that it allows us to model semantic similarity among different words. As Gavin et al. argue,

concepts are not words. The distinction can be glimpsed by considering any collection of synonyms, such as rubbish, trash, waste, junk. Each of these words has a distinct use, and they imply different connotations and meanings, but there remains a principle of synonymy among them, an underlying concept that ties them together (Gavin et al.)

Topic models show us what words tend to be used together. Word embeddings show us which words tend to be used with the same words. Topic modeling could perhaps guide a close reading exploring a topic more deeply, or what role a key word might play in a topic, but it cannot reach the semantic depth we might want in the work of philosophy.

The other reason we prefer vector semantics for philosophy is that philosophy is not just the modeling of concepts. As Deleuze and Guattari write, “The philosopher is the concept’s friend; he is the potentiality of the concept…philosophy is the discipline that involves creating concepts” (Deleuze et al., 5). The word-embedding operations already described are useful for clarifying and perhaps deepening concepts, and could possibly lend themselves to some conceptual reevaluation. However, there is another operation made possible by word embeddings that contains much more potential for philosophy. Because the words are embedded with numerical values, once the conceptual networks have been modeled in vector space, we can actually perform calculations on the vectors that create new vectors based on this math, and these new vectors can open a path to the creation of concepts, or what Stiegler calls “categorial invention,” which is the goal of philosophy, and perhaps of all theoretical work. A well-cited example is that of V(Queen) = V(King) + V(Woman) – V(Man). When we take the vector for “King,” add the vector for “Woman,” and subtract the vector for “Man,” the model has been shown to output a vector closest to that of “Queen.” What this means conceptually is that if we add the qualities of women to the concept of kings, we have some concept which would have all the qualities of women and all the qualities of kings. If we then subtract from this new vector everything associated with men, we get the concept of queen. This is a simple example, but this functionality can prove exceptionally useful for philosophy.
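
In gensim, this kind of vector arithmetic is exposed through most_similar with positive and negative terms. A sketch of the well-known example, assuming the pretrained Google News vectors available through gensim’s downloader (not part of our project corpus):

```python
import gensim.downloader as api

# assumes the pretrained Google News vectors can be downloaded
vectors = api.load("word2vec-google-news-300")

# V(King) + V(Woman) - V(Man) should land near V(Queen)
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3)
print(result)
```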

One of the principal things Stiegler calls for is a rethinking of value to escape the Anthropocene, initiating what he calls the epoch of the Neganthropocene. One chief problem with capitalism, he claims, is that, under the conditions it initiates, all use value is reduced to exchange value. The usefulness of a thing is reified into how much it costs, or how much money it could make. This reduces everything to the standards of valuation of the market. The progression of this dynamic accounts for how things like the law or works of art have been devalued, not to mention the health of the biosphere and the future itself. Thus, the Neganthropocene, which would be the epoch following the Anthropocene (if there is to be one), would have to be generated on the basis of a new valuation. The question, then, is if the value of everything is no longer to be based on profit, what is to be the new value founding this society? We can contribute to the thinking through of this question by treating Stiegler’s works with vector semantics. We propose starting off by querying a sample equation that looks something like V(profit) – V(Anthropocene) + V(Neganthropocene). This would take the concept of profit, which grounds value in this current stage of capitalism, subtract that which is characteristic of the Anthropocene, and add the vector representing the things that Stiegler writes about the Neganthropocene. This analogical calculation might point us toward a cluster of related words that all have to do with how we should re-ground value beyond the Anthropocene. We will run word2vec, a vector semantics algorithm, on Stiegler’s Nanjing Lectures 2016-2019, where he lays out his theories of entropy, negentropy, the Anthropocene, and the Neganthropocene most systematically.
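
Once a word2vec model has been trained on the Nanjing Lectures, the proposed query could be posed in the same way. A sketch, assuming a trained gensim model named `model` and assuming the corpus has been lowercased so that the concept words survive as single tokens:

```python
# assuming `model` is a gensim Word2Vec model trained on the Nanjing Lectures corpus
# V(profit) - V(anthropocene) + V(neganthropocene)
candidates = model.wv.most_similar(
    positive=["profit", "neganthropocene"],
    negative=["anthropocene"],
    topn=10,
)

# the resulting neighbors are not an answer in themselves, only a prompt for close reading
for word, similarity in candidates:
    print(f"{word}\t{similarity:.3f}")
```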

It should be made very clear that this type of calculation is not a magic wand that can reveal new concepts for us on its own. Witmore’s account of distant reading focuses on the scale of the address, but it does not take into full account the shape or contours of the address itself. We would argue that there are two main modes with which one can address text: analytic and synthetic. These neo-Kantian faculties that Stiegler articulates are two forces that make up the dialectic of knowledge production. The full explication of these arguments is beyond the scope of this proposal, but they show that calculating text (or any data) requires the synthetic work of the imagination to think according to standards of reason, and more importantly to dream up new concepts that do not fit into the analytic schema of the understanding. Information or data is the externalization of a prior synthetic act of reason that is calculable now that it is materialized. This act is a decomposition of the line of reasoning into discrete elements that can thus be quantified and calculated. This act is entropic in and of itself, but can produce new knowledge, new concepts, if it leads to a surprise which causes one to run it through their filter of reason and create a new idea which re-organizes the analytical understanding as it now stands. In other words, by modeling text, one divides it up into an enormous number of different pieces (in this case, vectors) that one can perform calculations on. On their own, these models and these calculations are useless. However, an act like querying Stiegler’s texts for the answer to V(profit) – V(Anthropocene) + V(Neganthropocene) could open up a path that one could wander down. And perhaps, by wandering down this path, which would include careful thought, reasoning, and close reading, one could experience a surprise in the text. This surprise could potentially cause one to rethink the text they are reading closely in a new way, and lead to the production of a concept. There is of course no way to guarantee this, but it is only by seeking out that which is incalculable that philosophy can be done. Perhaps vector semantics could be a kind of calculation that leads the way toward thinking about value anew and how a new society can be built upon this new concept of value. This could then guide a close reading of some of Stiegler’s texts that could potentially concretize this new, currently unknown, concept.

 

Work Plan

Education
Both participants will spend the first portion of the semester learning the basics of Python and text analysis, followed by time spent learning how to use word2vec. The specific work plan will be fleshed out further once this is done. Regular (weekly or bi-weekly) re-assessment will be a necessary part of the workflow, which is by nature open and porous.

Digitizing
Making the text machine-readable will be a collaborative effort. The text is available in an open access searchable PDF format. Software will be used to convert it to plain text format and the results will be hand-checked against the original.
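
A minimal sketch of the PDF-to-plain-text conversion, assuming the pdfminer.six library and a hypothetical filename for the open-access PDF; the output would still be hand-checked against the original as described above.

```python
from pdfminer.high_level import extract_text

# hypothetical filename for the open-access searchable PDF
text = extract_text("nanjing_lectures_2016_2019.pdf")

with open("nanjing_lectures.txt", "w", encoding="utf-8") as out:
    out.write(text)
```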

Operationalization
Once participants are familiar with the word2vec tool, they will train the algorithm on the text in question, creating vectors based on cosine similarity. These vectors will then be operationalized to determine the new vector-concept. This step will be open to the possibility of failure and the potential need for alternative lines of questioning opened up by playing around with the tool. There are two options for doing the actual work of text analysis collaboratively. Participants will experiment with both a) doing the work separately and meeting regularly to exchange and discuss results and b) pair programming, doing the text analysis on the same machine or sharing a screen via video chat.
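
A sketch of how training and an initial cosine-similarity check might look, again assuming a gensim Word2Vec model trained on the hand-checked plain-text corpus; the tokenization, hyperparameters, and query words are placeholders to be tuned while experimenting.

```python
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

# read the hand-checked plain-text corpus and tokenize it line by line
with open("nanjing_lectures.txt", encoding="utf-8") as f:
    sentences = [simple_preprocess(line) for line in f if line.strip()]

model = Word2Vec(sentences, vector_size=200, window=10, min_count=2, sg=1, epochs=30)

# cosine similarity between two concept words of interest
print(model.wv.similarity("entropy", "negentropy"))

# nearest neighbors give a first look at a word's conceptual network
print(model.wv.most_similar("anthropocene", topn=10))
```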

Close reading/writing
Utilizing the fruits of the vector analysis, the participants will then perform a close reading of the texts at hand guided by the vector produced by the algorithm. This will require the development of a strategy for collaborative note-taking and writing. Participants may utilize a collaborative version of the Zettelkasten method to bridge emerging ideas into the production of a unified text. The scope of this project will be limited to producing a rough draft of a piece of writing; revision and publication will be the work of a future iteration.

 

Final Product and Dissemination

As stated, this project is to form a small part of a larger project about entropy and negentropy in the history of Western thought more generally. This particular project will lend itself to a shorter piece of writing specifically about the question of value in the Neganthropocene. It will initially be published online as a blog post. It will not only be provisionally about the conceptual framework needed to reevaluate value, but will also lay the groundwork for the larger project on metacosmics. Thus, this close reading and writing will also be the work of forming the questions we would like to pose in future work, as well as identifying the kinds of texts that may need to be addressed.