Category Archives: Project Proposal

Revised What is a Community Garden in the Digital Age?

Brief overview of project

New York City is often known as a "concrete jungle." While we envision a gray and black landscape, few pay attention to the green foliage that quietly engulfs the city. During the Covid-19 pandemic, interest in green spaces heightened, highlighting how necessary green areas are to the well-being of people living in New York. One way to encounter and interact with green spaces is through the many community gardens spread all over NYC, which have a long history dating back to the 1970s. But as the world progresses deeper into a tech-driven society, humanity must do its due diligence to keep up. This seems to be a contradiction given the wholly untechnical nature (pun intended) of these gardens. But how can we come to understand community gardens by exploring their digital presence? And what does this mean for the gardens' work? Those are the questions we hope to answer in our project. To complete this task we will look into data sets and information taken from already established websites such as GrowNYC and GreenThumb, as well as public census data on community gardens. Once we compile as much information as we can, we will visualize it in Tableau and upload our data visualizations to an interactive website. Our goal for this project is not only to make this curated data more readily available, but also to highlight trends and oddities that have gone unnoticed in previous data collection. Additionally, to create a project that also reflects the people of these gardens, we will try to conduct a smaller ethnographic study to better understand the importance of these gardens' digital presence. Moreover, with the myriad of gardens in NYC, we anticipate that some will have no information available on the web. Our project will also likely be an outlet for these "web anonymous gardens" to gain a digital footprint. In all, we consider our work to be both a data collection and an awareness project. Our end goal is to help the gardens as well as those interested in learning about or helping them.

The question/problem it helps answer:

In essence, we are aware there is already a plethora of digital information on community gardens available. Our objective is to help these gardens become even more digitally accessible in a way that is more engaging. The graphic representations we will provide will be an effective and easier way to communicate the vast data already collected. As mentioned above, we will complete this task by looking into data sets and information taken from already established websites as well as public census data. We will then scan the digital presence of each and every community garden in New York City and create a visual map distinguishing gardens that have a presence from those that do not. This map will have different components that aggregate the data found in our research. We will also do a deep dive showcasing the different types of community gardens and how they label themselves. We believe this can be beneficial to local residents who would like to know what the gardens around them have to offer, and it will also indicate whether information about an individual garden is available on a website or social media account. The absence of a web presence can bring awareness to the community and possibly provide an incentive for those involved to create one for their garden. This can in turn highlight potential new ways of engaging volunteers.

Project audience

This project will be for those interested in gaining a better understanding of what a community garden is, but we will mostly focus on people interested in joining one. Our work will also be beneficial to future data researchers and digital humanists who wish to expand on the digital work centered around these gardens.

Contribution to DH & potential impact:

We do not wish to recreate digital portals like GrowNYC and GreenThumb. Instead, we hope to explore, as digital humanists, what a community garden is by looking at data available online. Our project will create a digital interactive map of NYC showing information about the different gardens: their types, sizes, accessibility, etc. It will let anyone explore what these gardens' digital presence can tell us about what it means to be a community garden in New York City. This blueprint web resource can hopefully become a hub for those interested in learning about or interacting with these spaces, and help create awareness for the community gardens. In addition, we will identify dead or outdated websites and help reignite their digital presence. Our ethnographic research will attempt to find out why some of these gardens have not established a digital presence, among other questions that may arise. Perhaps their answers will lead to the discovery of discrepancies within the community. As mentioned above, this will also be beneficial for other digital humanists working with community gardens.

Feasibility assessment:

The data needed to create a proof-of-concept model for this digital project can be collected over the course of a college semester. We are aware that our small ethnographic study might only be possible later, since many gardens are currently closed due to winter weather and restrictions on public entry.

What digital tools/methods/team member skill sets will be needed to realize the project?

Project Manager/Documentation: Faihaa

Will be the general overseer of the group and plan the steps needed to move progress along; will maintain the Google spreadsheets listing the data found thus far; will keep track of all contact information; and will be involved with collecting, organizing, and analyzing data.

Developer/Designer/Ethnographic Research: Benjamin

Will design and conduct a small ethnographic study with the target group, build a website (or other digital representation), and assist with research.

Documentation/Research: Nelson

Will collect data; is able to travel and communicate with community garden leaders; can research historical information regarding community gardens in NYC; and is eager to learn digital tools in order to present data visually, through graphs or maps.

All team members will be involved with Outreach. Our project also counts as Outreach!

TOOLS

We are interested in using Tableau Online and creating an interactive WordPress website.

Legal or technical barriers

  1. Communicating with community leaders at the gardens: they may not respond or may not want to divulge information that can be made public.
  2. The data we collect might not be sufficient to create a meaningful project.
  3. Time constraints might prove too tight.
  4. Unforeseen events: weather, Covid restrictions, or online tools becoming unavailable.

How will you tackle those potential pitfalls if your project is selected?

  1. The team would have to find creative ways to collect data otherwise, via already established information on the web or by contacting people who work on similar projects.

  2. Reduce and shift the number of gardens being contacted. Possibly restrict the type of data being collected.

  3. The team would need to have foresight and hold conversations among themselves on how to lead this project in the most ethical way possible.

  4. The team would need to react accordingly and be quick thinkers.

Final product (even if tentative):

It is our intention to build an interactive website with Tableau visualizations and maps to present the collected data, which we hope will give a better understanding of the digital presence of community gardens and their accessibility.

Revised Mainframe Project Proposal

Overview

The Mainframe Project [new title needed] is a digital collection of archival media, manipulated through deformance and other means, that correlates and complicates our understanding of social relationships and human interactions with computers, using mainframes as a point of departure. By focusing on the computing devices in the workplace before the advent of personal computers, we hope to defamiliarize some of the common assumptions about computing informed by contemporary culture, and think through how the social context around computing may have evolved differently given the perceptions of computing from the 1950s to the 1970s. Specifically, this project will probe the underlying assumptions of human-computer interaction from that time period, as well as the gender politics of computing in the workplace before the personalization of computing. It will conduct these investigations by looking at archived material such as advertising, manuals, and other ephemera, and relating it to scholarly work tangentially related to mainframes as well as scholarly work on computer-human interaction contemporaneous with mainframes, like cybernetics.

Problem addressed

There is a dearth of (digital) humanities scholarship about mainframe computers. Antecedents can be traced in works from various "media archaeologists": Tung-Hui Hu's A Prehistory of the Cloud (cultural poetics of cloud computing), Kirschenbaum's Track Changes: A Literary History of Word Processing (word processing as software), Liu's The Laws of Cool (knowledge work), Lisa Gitelman's "Raw Data" Is an Oxymoron (various histories and pre-histories of data), Friedrich Kittler (general theory), among a few others. But in these larger arguments mainframe computing is an implementation detail, often glossed over, even though so much of our computing inherits key concepts from the features and limitations of the age of Big Iron.

This project may also turn its eye towards other scholarly and non-scholarly writing coterminous with the Mainframe and related to computing. Examples include Leary’s speculative work like How Have Computers Empowered Humans?, but also early scholarly efforts to understand human and machine interaction, like Ashby’s An Introduction to Cybernetics, the work of Norbert Wiener, and Bateson’s Steps to an Ecology of Mind. Books for larger audiences that retrospectively evaluate mainframe computers may also be helpful in probing some of the social and interactive nuances of mainframe computing.

Our project also involves the manipulation of visual media in the tradition of deformance as initially conceived by Lisa Samuels and Jerome McGann in "Deformance and Interpretation." Scholars practicing deformance in the visual realm serve as an inspiration as well, including the many videographic deformations of Singin' in the Rain by Jason Mittell (https://vimeo.com/jmittell) and the photo glitches of Michael J. Kramer (Glitching History).

Project audience

Scholars and students interested in imaginative explorations of what it was like interacting with and encountering mainframe interfaces in person, by proxy (timesharing), and as a cultural phenomenon in popular media and the workplace. We also believe there is a need for educational material aimed at non-technical audiences in the broader public, given their relative unfamiliarity with the technologies involved.

Contribution and impact

The study of mainframe computing is not only a historical exercise in preservation. Because mainframe computing is so foreign to contemporary experience, it becomes a prism for exploring larger topics related to information technology in societies. As mentioned above, we're interested in analyzing gender dynamics and computer-human interactions during the rise of mainframe computers. This is also a meaningful down payment on future projects that could explore other computing topics in a digital humanities context, such as the introduction of minicomputers and the rise of personal computing.

Final product

A digital collection of augmented photographs (and potentially video), with supporting written analysis and context, will be hosted on a website. Material in the collection will come from marketing materials, manuals, and other archival artifacts.

Feasibility assessment

Tool selection

At this moment, the team is leaning toward Wax as the platform on which to build our digital collection. That said, we will reevaluate that choice against other options like CollectionBuilder and Omeka, depending on time constraints, the time needed for research and media manipulation, the potential to include video, etc.

Our tools for manipulating photos and videos will vary. Physical collage will be done by hand, though we reserve the right to use digital tools like Adobe's InDesign to perform the same operation. PhotoMosh, hex editors, and Python libraries may be used for glitching, pixel sorting, and other deformance techniques.
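
To give a sense of what the Python route might look like, the following is a minimal pixel-sorting sketch. It assumes the Pillow imaging library, and the file names are placeholders rather than items from our actual collection:

```python
# A minimal pixel-sorting sketch, assuming Pillow is installed; file names
# are placeholders for scanned archival images.
from PIL import Image

def pixel_sort_rows(path, out_path, threshold=100):
    """Sort the brighter pixels in each row by brightness, a simple
    deformance that leaves the darker regions of the scan intact."""
    img = Image.open(path).convert("RGB")
    width, height = img.size
    pixels = list(img.getdata())
    result = []
    for y in range(height):
        row = pixels[y * width:(y + 1) * width]
        bright = iter(sorted((p for p in row if sum(p) > threshold * 3), key=sum))
        dark = iter(p for p in row if sum(p) <= threshold * 3)
        result.extend(next(bright) if sum(p) > threshold * 3 else next(dark) for p in row)
    out = Image.new("RGB", img.size)
    out.putdata(result)
    out.save(out_path)

pixel_sort_rows("mainframe_ad.jpg", "mainframe_ad_sorted.jpg")
```

Varying the threshold or the sort key yields very different deformations of the same scan, which is part of the appeal of scripting these manipulations.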

Finally, depending on the media types included in our collection, we may use a third-party platform to host assets like video (e.g., YouTube or Vimeo).

Team composition

Our team is composed of two scholars (Connie Cordon and Kai Prenger) who cover most tasks in a primary/secondary structure.

  • Project Manager → Connie Cordon primary, Kai Prenger secondary
  • Developer → Kai Prenger primary, Connie Cordon secondary
  • Visual archival researcher → Connie Cordon primary, Kai Prenger secondary
  • Textual researcher → Kai Prenger primary, Connie Cordon secondary
  • Outreach → Connie Cordon primary, Kai Prenger secondary
  • Project updates → Drafted together, Kai Prenger primary, Connie Cordon secondary

Barriers and challenges

We believe the biggest challenges are related to original research and tracking down archival material that matches our specific subject matter, though we imagine university archives/websites and institutions like The Computer History Museum will help mitigate the risk of finding nothing worth writing about. We have some anxiety about how much customization we'll want in the collection itself, leading us to evaluate whether Wax is appropriate, or whether CollectionBuilder or even Omeka might serve us better (using this gist to evaluate). Given the short timeframe, time spent developing the site will be traded off against time spent researching and developing intriguing assets.

Updated Project Proposal: Modeling Value in the Anthropocene

Abstract

Modeling Value in the Anthropocene is an attempt to accomplish two goals, one more general and one more specific. The first, more general goal is to provide an example of using computational tools in the world of philosophy. Doing this could enhance philosophy by utilizing methods that can process quantitative information at a speed and scale no human reader could match. We argue that the particular tool of vector semantics can model large corpora of text in such a way that a new perspective is created that can guide a subsequent close reading. Our hope is that this prior modeling can open up a close reading that leads to the creation of a concept, which is the aim of philosophy.
The particular question that this project will try to answer is something like: what concept should ground individual and collective value in the world beyond the Anthropocene? After using word2vec to model the conceptual networks of the concepts "profit," "Anthropocene," and "Neganthropocene" in the work of philosopher Bernard Stiegler, we will perform calculations on these vectors to unveil a new vector, and the concept it represents will guide our close reading of one of his texts. This close reading will lend itself to the production of an article or potential book chapter. It will be part of a larger project of modeling the concepts of "entropy" and "negentropy" in the history of Western thought, which might result in a book on what Daniel Ross calls metacosmics.

 

List of Participants

Brian Millen, Digital Humanities Master's student at the CUNY Graduate Center, will be the project manager, co-developer, and co-researcher/author.
Hampton Dodd, Digital Humanities Master's student at the CUNY Graduate Center, will be the secretary, co-developer, and co-researcher/author.

 

Enhancing the Humanities through Innovation

In proposing a reading of philosophical texts that makes use of algorithms for natural language processing, one has to consider what, if anything, these computational methods of reading and writing bring to the work of philosophy. Natural language processing consists of performing mathematical calculations on language. In text analysis specifically, we use digital text as data that we can model in different ways, making quantitative calculations across amounts of text that a human mind could never process through close reading alone. There are many arguments for and against this kind of practice, and it is likely that many native to the world of philosophy would be especially skeptical of accepting a computational approach. We believe one way of arguing for how text analysis could contribute to philosophical practice runs through Michael Witmore's blog post, "Text: A Massively Addressable Object." In this post, he situates distant reading as simultaneously continuous with the practice of reading since its genesis and discontinuous with past modalities of reading.

Witmore writes that what separates digital text from older forms of text is that "it is massively addressable at different levels of scale" (Witmore). To understand this requires understanding what he means by "scale." According to Witmore, reading has always been the practice of abstracting a material text at the ideal level at which one is to read it. For example, one can choose to address a text at the level of the word, the sentence, the paragraph, or the book, just to name a few. These are not material characteristics of the text itself, but rather subjective ways of dividing the text up and projecting those divisions onto it. A reader could just as easily choose to underline all the words of a text beginning with the letter 'B' and address it at that scale. How text is addressed is a matter of convention, contingent on the technical and normative limits of the given socio-historical context of the act of reading. The fact that we tend to read text as "books" or even "words" is simply a matter of convention. As Witmore writes, "The idea of a word is itself an artifact of manuscript culture, one that could be perpetuated in print through the affordances of moveable type" (Witmore). What makes digital text different, then, is the scale of addressability, or rather the many different scales at which text can now be addressed. Instead of having to read one word at a time or one sentence at a time, we can query and compare many words from many different documents at once. A popular form of this in the digital humanities is topic modeling. Topic models are generated by running an unsupervised machine learning algorithm on a group of documents and approximating which words tend to be used together in the same documents. This allows us to address the texts at the level of "topic," creating ideal topics we can say the texts appear to be drawing from (ideal in the sense of non-real; in the realm of the potential or virtual, discoverable in the imagination, as distinguished from the actual).
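
As a rough illustration of what generating a topic model involves, the sketch below uses the gensim library on a few toy, already-tokenized documents; the tokens, parameters, and number of topics are placeholders rather than anything drawn from a real corpus:

```python
# A minimal topic-modeling sketch, assuming the gensim library is installed.
# The "documents" below are toy stand-ins for a real, much larger corpus.
from gensim import corpora
from gensim.models import LdaModel

documents = [
    ["entropy", "energy", "system", "order", "dissipation"],
    ["concept", "philosophy", "reading", "text", "interpretation"],
    ["entropy", "information", "noise", "signal", "system"],
]

dictionary = corpora.Dictionary(documents)               # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in documents]  # bag-of-words counts

# Approximate two "topics": groups of words that tend to occur together.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10)
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)
```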

The form of computational address we propose using in a philosophical context is called vector semantic analysis. It is a technique of computational linguistics wherein we run an algorithm that groups together words that bear a semantic similarity to one another. This similarity is represented using a particular kind of vector analysis called word embeddings. Word embeddings assign each word in the text(s) a vector of numerical values based on its distributional relationship to every other word in the text. We can then map these words graphically, representing concepts as networks of words used "synonymously" in the text. This is useful for conceptual analysis because words that have similar vectors will appear clustered together within the vector space, indicating that they are distributed throughout the texts in a similar way. The linguistic theory operating behind this model is that words that are deployed in similar contexts have some conceptual resonance: "The latent idea here is that different words will tend to appear in different contexts, and therefore one can guess at a word's meaning by simply counting the words that appear near it" (Gavin et al.). Bringing it back to the language of Witmore, vector semantics is a way of addressing large amounts of text at the level of semantic similarity. It is an addressing of the text at the level of the word, but a level at which each word exists in relation to every other word, the level getting more discrete as we narrow our investigation. We can thus say this method allows us to address text at the level of concept. This level of address is obviously not new in philosophy. Finding continuity at the level of conceptuality is the modus operandi of philosophy, and we have been used to addressing concepts at the scale of the entire history of philosophy since at least Hegel. What is new here is the way we can address this history and its concepts.
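
In practice, training such embeddings and inspecting a word's nearest neighbors might look like the following sketch, which assumes the gensim implementation of word2vec and uses a toy list of tokenized sentences in place of the real corpus:

```python
# A minimal word-embedding sketch, assuming gensim; the sentences are toy
# stand-ins for a tokenized, lower-cased corpus.
from gensim.models import Word2Vec

sentences = [
    ["entropy", "increases", "in", "every", "closed", "system"],
    ["negentropy", "temporarily", "defers", "entropy", "through", "organization"],
    # ... many more tokenized sentences from the corpus
]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=4)

# The words with the highest cosine similarity to "entropy": a first glimpse
# of the conceptual network that word participates in.
print(model.wv.most_similar("entropy", topn=10))
```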

The vector semantic analysis we are proposing is that of the concepts of entropy and negentropy in the history of Western thought. Bernard Stiegler teaches us that "the relation entropy/negentropy is really the question of life par excellence" (Stiegler, 39). Entropy is a term taken from thermodynamics denoting the tendency of any closed system to tend toward a state of disorder. Physicists of the 19th century came to apply this to the entire universe conceived of as a closed system, effectively reversing the Western conception of the cosmos, which since Socrates had been understood as equilibrium. The concept of negentropy was introduced by Erwin Schrödinger in his lectures published as What Is Life? to give an account of life understood as the temporary struggle against entropy, organic matter being organized matter that wards off the dissipation of energy. The physicist Alfred Lotka takes a step beyond Schrödinger to argue that humans are a particular kind of negentropy, temporarily suspending entropy by means other than just the biological organs generated by natural selection. In other words, humans produce negentropy through artificial means of various kinds: tools, writing, machines, etc. According to Stiegler, the implications of these breakthroughs in scientific thought have yet to be thought through by philosophy. They must be thought through in the context of the Anthropocene, which he claims is an epoch of history marked by extreme accelerations of entropy through industrialization, accelerations that threaten the survival of the human race. The Anthropocene is a proposed geological epoch in which human beings become a geophysical force affecting the biosphere. However, for Stiegler, it is important to recognize that this epoch is firstly a technological, economic, and political epoch. Thinking through the concepts of entropy and negentropy in this context is the work that he called for, and it is the work he attempted until his death in August 2020.

The long-term project we would like to embark on is an investigation of these concepts (in their inclusion as well as in their absence) over a long period of time, covering the histories of physics, biology, philosophy, economics, and information science. Using vector semantics, we wish to find out what other words bear a semantic resemblance to the words "entropy" and "negentropy." The goal of this project would be a contribution to what Daniel Ross calls "metacosmics," a destruction of/continuation of metaphysics centered on a general theory of entropy/negentropy. This project would be oriented toward understanding how these concepts change over time, as well as seeing what resonance certain thinkers (like Nietzsche) have with these concepts, despite not using their names. This would complexify and intensify our conceptions of entropy and negentropy. The scope of this project would start much smaller, though, beginning with modeling these concepts in Stiegler's work alone and branching out to other works and disciplines in future projects. We wish to perform a conceptual analysis of his work, described in the following section, that will feed a close reading, a reading that will in turn be a small part of this larger project.

 

Environmental Scan

Much of the work of distant reading in the digital humanities utilizes methods of topic modeling. Something like topic modeling can be incredibly useful for philosophy, especially the history of philosophy. It can address large amounts of text at a scale that would allow us to understand what topics are characteristic of a particular school of thought (like German idealism) or a particular epoch of history (like the Enlightenment). Malaterre et al. ran an unsupervised machine learning algorithm on the complete corpus of the journal Philosophy of Science, mining articles from 1934 to 2015. One result was the revelation that the topics consistent throughout the whole period are "metaphysical and ontological questioning" about "space and time, causation, natural kinds, [and] realism" (Malaterre et al., 215). Another was the discovery of how the topics of logic and language rose and fell over time.
Despite topic modeling's potential fruitfulness, we wish to show why vector semantics is a better method specifically for doing the work of philosophy. There are two reasons for this. The first is that it allows us to model semantic similarity among different words. As Gavin et al. argue,

concepts are not words. The distinction can be glimpsed by considering any collection of synonyms, such as rubbish, trash, waste, junk. Each of these words has a distinct use, and they imply different connotations and meanings, but there remains a principle of synonymy among them, an underlying concept that ties them together (Gavin et al.)

Topic models show us which words tend to be used together. Word embeddings show us which words tend to be used with the same words. Topic modeling could perhaps guide a close reading that explores a topic more deeply, or the role a key word might play in a topic, but it cannot reach the semantic depth we might want in the work of philosophy.

The other reason why we prefer vector semantics for philosophy is that philosophy is not just the modeling of concepts. As Deleuze and Guattari write, "The philosopher is the concept's friend; he is the potentiality of the concept…philosophy is the discipline that involves creating concepts" (Deleuze et al., 5). The word-embedding operations already described are useful for clarifying and perhaps deepening concepts, and could possibly lend themselves to some conceptual reevaluation. However, there is another operation made possible by word embeddings that contains much more potential for philosophy. Because the words are embedded as numerical vectors, once the conceptual networks have been modeled in vector space, we can perform calculations on the vectors that create new vectors, and these new vectors can open a path to the creation of concepts, or what Stiegler calls "categorial invention," which is the goal of philosophy, and perhaps of all theoretical work. A well-cited example is that of V(Queen) = V(King) + V(Woman) – V(Man). When taking the vector for "King," adding the vector for "Woman," and subtracting the vector for "Man," the model has been shown to output a vector closest to that for "Queen." What this means conceptually is that if we add the qualities of women to the concept of kings, we get some concept which would have the qualities of both women and kings. If we then subtract from this new vector everything associated with men, we get the concept of queen. This is a simple example, but this functionality can prove exceptionally useful for philosophy.
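
As a concrete illustration, the same analogy can be posed directly to a trained model. The sketch below assumes gensim's downloader and the publicly available Google News vectors, used here purely as a convenient demonstration model:

```python
# A minimal sketch of the King/Woman/Man analogy, assuming gensim and an
# internet connection to fetch the pretrained "word2vec-google-news-300"
# vectors (used only for demonstration).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# V(King) + V(Woman) - V(Man): the nearest remaining vector should be "queen".
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```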

One of the principal things Stiegler calls for is a rethinking of value to escape the Anthropocene, initializing what he calls the epoch of the Neganthropocene. One chief problem with capitalism, he claims, is that, under the conditions it initiates, all use value is reduced to exchange value. The usefulness of a thing is reified into how much it costs, or how much money it could make. This reduces everything to the standards of valuation of the market. The progression of this dynamic accounts for how things like the law or works of art have been devalued, not to mention the health of the biosphere and the future itself. Thus, the Neganthropocene, which would be the epoch following the Anthropocene (if there is to be one), would have to be generated on the basis of a new valuation. The question, then, is: if the value of everything is no longer to be based on profit, what is to be the new value founding this society? We can contribute to thinking through this question by treating Stiegler's works with vector semantics. We propose starting off by querying a sample equation that looks something like V(profit) – V(Anthropocene) + V(Neganthropocene). This would take the concept of profit, which grounds value in this current stage of capitalism, subtract that which is characteristic of the Anthropocene, and add the vector representing what Stiegler writes about the Neganthropocene. This analogical calculation might point us toward a cluster of words related to how we should re-ground value beyond the Anthropocene. We will run word2vec, a vector semantics algorithm, on Stiegler's texts, beginning with Nanjing Lectures 2016-2019, where he lays out his theories of entropy, negentropy, Anthropocene, and Neganthropocene most systematically.
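
Once a model has been trained on that text (see the work plan below), the proposed query could be expressed roughly as follows. This is a sketch only, assuming a lower-cased corpus in which all three terms survive the model's vocabulary cut-off:

```python
# A sketch of the proposed query, assuming `model` is a gensim Word2Vec model
# trained on the digitized text (as outlined in the work plan); the spellings
# of the query terms depend on how the corpus is tokenized.
# V(profit) - V(Anthropocene) + V(Neganthropocene):
candidates = model.wv.most_similar(
    positive=["profit", "neganthropocene"],
    negative=["anthropocene"],
    topn=15,
)
for word, cosine in candidates:
    print(f"{word}\t{cosine:.3f}")
```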

It should be made very clear that this type of calculation is not a magic wand that can reveal new concepts for us on its own. Witmore's account of distant reading focuses on the scale of the address, but it does not take into full account the shape or contours of the address itself. We would argue that there are two main modes with which one can address text: analytic and synthetic. These neo-Kantian faculties that Stiegler articulates are two forces that make up the dialectic of knowledge production. The full explication of these arguments is beyond the scope of this proposal, but they show that calculating text (or any data) requires the synthetic work of the imagination to think according to standards of reason, and more importantly to dream up new concepts that do not fit into the analytic schema of the understanding. Information or data is the externalization of a prior synthetic act of reason that becomes calculable once it is materialized. This act is a decomposition of the line of reasoning into discrete elements that can thus be quantified and calculated. The act is entropic in and of itself, but it can produce new knowledge, new concepts, if it leads to a surprise which causes one to run it through one's filter of reason and create a new idea that reorganizes the analytical understanding as it now stands. In other words, by modeling text, one divides it up into an enormous number of different pieces (in this case, vectors) on which one can perform calculations. On their own, these models and these calculations are useless. However, an act like querying Stiegler's texts for the answer to V(profit) – V(Anthropocene) + V(Neganthropocene) could open up a path that one could wander down. And by wandering down this path, which would include careful thought, reasoning, and close reading, one could perhaps experience a surprise in the text. This surprise could cause one to rethink the text being read closely in a new way, and potentially lead to the production of a concept. There is of course no way to guarantee this, but it is only by seeking out that which is incalculable that philosophy can be done. Perhaps vector semantics could be a kind of calculation that leads the way toward thinking anew about value and how a new society can be built upon this new concept of value. This could then guide a close reading of some of Stiegler's texts that could potentially concretize this new, currently unknown, concept.

 

Work Plan

Education
Both participants will spend the first portion of the semester learning the basics of Python and text analysis, followed by time spent learning how to use word2vec. The specific work plan will be fleshed out further once this is done. Regular (weekly or bi-weekly) reassessment will be a necessary part of the workflow, which is by nature open and porous.

Digitizing
Making the text machine-readable will be a collaborative effort. The text is available in an open access searchable PDF format. Software will be used to convert it to plain text format and the results will be hand-checked against the original.
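
One possible way to do the conversion is sketched below; it assumes the pdfminer.six library, and the file names are placeholders for the open-access PDF we will actually use:

```python
# A minimal PDF-to-plain-text sketch, assuming pdfminer.six is installed.
# File names are placeholders; the output would still be hand-checked.
from pdfminer.high_level import extract_text

text = extract_text("nanjing_lectures.pdf")
with open("nanjing_lectures.txt", "w", encoding="utf-8") as f:
    f.write(text)
```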

Operationalization
Once participants are familiar with the word2vec tool, they will train the algorithm on the text in question, producing word vectors that can be compared by cosine similarity. These vectors will then be operationalized to determine the new vector-concept. This step will be open to the possibility of failure and to alternative lines of questioning opened up by playing around with the tool. There are two options for doing the actual work of text analysis collaboratively. Participants will experiment with both a) doing the work separately and meeting regularly to exchange and discuss results, and b) pair programming, doing the text analysis on the same machine or sharing a screen via video chat.
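
A rough sketch of this training step, continuing from the digitizing sketch above, is given below. The tokenization is deliberately naive and the file names are placeholders, so the real pipeline would involve more careful preprocessing:

```python
# A sketch of training word2vec on the digitized text, assuming gensim and
# the plain-text file produced in the digitizing step. Tokenization here is
# naive (lower-casing and splitting on whitespace) and would be refined.
from gensim.models import Word2Vec

with open("nanjing_lectures.txt", encoding="utf-8") as f:
    sentences = [line.lower().split() for line in f if line.strip()]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=2, workers=4)
model.save("stiegler_word2vec.model")  # saved so both participants can reuse it

# A quick sanity check on the trained vectors: the cosine similarity between
# two of the corpus's central terms.
print(model.wv.similarity("entropy", "negentropy"))
```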

Close reading/writing
Utilizing the fruits of the vector analysis, the participants will then perform a close reading of the texts at hand, guided by the vector produced by the algorithm. This will require the development of a strategy for collaborative note-taking and writing. Participants may utilize a collaborative version of the Zettelkasten method to bridge emerging ideas into the production of a unified text. The scope of this project will be limited to a rough draft of a piece of writing; revision and publication will be the work of a future iteration.

 

Final Product and Dissemination

As stated, this project is to form a small part of a larger project about entropy and negentropy in the history of Western thought more generally. This particular project will lend itself to a shorter piece of writing specifically about the question of value in the Neganthropocene. It will initially be published online as a blog post. It will not only be provisionally about the conceptual framework needed to reevaluate value, but will also lay the groundwork for the larger project on metacosmics. Thus, this close reading and writing will also be the work of forming the questions we would like to pose in future work, as well as identifying the kinds of texts that may need to be addressed.

Sounds of Music: Updated Project Proposal

Sounds of Music

By Caitlin Cacciatore, Felicity Howlett, and Raquel Neris

Team Members & Roles

Felicity Howlett

Primary Role: Project Director

Secondary Role(s): Notetaker, Outreach Specialist, & Primary Researcher

Caitlin Cacciatore

Primary Role: Web Designer & Technical Coordinator

Secondary Role(s): Copy Editor & Secondary Researcher

Raquel Neris

Primary Role: Operations Manager

Secondary Role(s): UX Designer & Visual Design Specialist 

Abstract

On March 11, 2020, when the WHO declared Covid-19 a global pandemic, the abrupt termination of in-person, interpersonal communication upended traditional practices in work, play, and study. The sudden social isolation that followed was alleviated somewhat by the rapid installation of dedicated online networks intended to re-establish patterns of communication. As time went by, it became apparent that these newly constructed online pipelines could be creatively employed to connect people who had, for reasons of age, health, handicap, or other disability, remained out of the loop, isolated from the mainstream of human social activity, and for the most part, overlooked.

Sounds of Music is an interactive online experience that strives to reach an audience of elderly, homebound, and handicapped populations. Those who find it difficult to leave their homes for these reasons often become isolated. We seek to remedy this through community participation opportunities for music aficionados for whom in-person activities are not always an accessible option. Our pilot program focuses on providing services and education for a small group in the NYC metropolitan area. We hope to build lasting connections, encourage neuroplasticity, and ignite a sense of passion for music within participants. This proposal invites the participation of the digital humanities community as its role expands from the academy into the broader public sphere.

The program operates from a WordPress website platform in order to disseminate information about Sounds of Music. Zoom will be used to facilitate interactive communication and screen sharing.

Environmental Scan

In 2018, Felicity volunteered for a music therapy program under the direction of Concetta Tomaino, musician, music therapist, educator, cognitive specialist, and director of the Institute of Music and Neurologic Function (IMNF). IMNF developed from the research and clinical studies that she and Oliver Sacks[1] conducted during their years of working together at the Beth Abraham Hospital in the Bronx. Her experience includes decades of exploring how music can assist, support, and enhance the quality of life for people afflicted with cognitive, physical, and/or emotional difficulties. The program, designed for veterans, did not exclude other interested individuals. Some demonstrated signs of dementia, while others suffered from anxiety or PTSD. Cognitive difficulties and physical handicaps were also in evidence.

There was an abundance of drums, tambourines, and shakers to choose from. Different combinations of flute, piano, guitar, and bass were performed, depending on who attended, and vocals and karaoke accompaniments were produced from online sources via cell phones. Sessions were jovial, lively, and compelling. People who swore they could not or would not sing sang, and even soloed with a microphone. One gentleman, long past 90, danced with amazing grace. The music often stimulated memories which would then inspire discussions. Songs were drawn from the deep traditions of American, Latin American, and Caribbean folk and popular music, as well as jazz. The atmosphere of well-being in the room reflected the joy of sharing this music together.

The gradual evolution of the program to an online interface brought with it the realization of its potential to bring music, interaction, enjoyment, and a sense of well-being to a largely untapped audience of elderly people—those who are isolated, confined, and unable to travel.

Participation in this program inspired the desire to build a similar project, one for elderly, aging, homebound populations. Sounds of Music was born of a wish to bring music enrichment to all who could benefit from it, and features a 'do-it-yourself' workshop component through which any interested party could create a framework for their own music enrichment experience. Sounds of Music differs from other programs of its kind in that it is more than just a pilot program and musical enrichment experience; it is a tool for creating, facilitating, curating, enhancing, and enabling music enrichment experiences for any age group or population demographic.

A Sample of Similar Programs and Activities      

Although we are not currently aware of online, interactive music programs created specifically for isolated, elderly, or disabled populations, a survey of online music programs reveals intentions ranging from pedagogy to entertainment, from therapy to general interest, and from deep dives into specific categories to simple, popular fare.

Daniel's Music Foundation – https://www.danielsmusic.org/virtualcommunity

New York City, NY

Daniel's Music Foundation is a thriving music center in New York City that offers musical activities and lessons for individuals with handicaps from age three through adult. Its enrollment has increased significantly since it went virtual. Activities include many types of instrumental lessons, games, music education, songwriting, and special videos. It is an astonishingly creative, vibrant source for music education, performance, and enjoyment.

In contrast to the educational and ensemble focus of Daniel’s Music Foundation, our program will focus on shared musical experiences through listening, singing, performing, and conversation. Opportunities for further exploration of specific items of interest may become part of the texture.

Fred Miller's Lectures in Song – https://www.fredmillermusic.com/lectures-in-song/

Sergeantsville, NJ 08557

Singer, entertainer, lecturer, sage, and raconteur, Fred Miller performs 60- to 75-minute theme-based musical programs specializing in the Golden Age of American popular song at various venues: libraries, community groups, and retirement facilities. From the piano, he performs, lectures, and offers insights into the material. Several dozen previews are available on YouTube, and, on occasion, he offers free online programs from a base at the Hunterdon Public Library.

The comfortable spirit of Fred Miller's highly entertaining and informative programs is something we would like to evoke. We are focusing our efforts on creating a program with a higher level of interactivity and engagement.

Music Mends Minds: Restoring the Rhythm of Life – https://www.musicmendsminds.org

Los Angeles, CA

Music Mends Minds is a nonprofit organization with a strong therapeutic focus that creates musical support groups for individuals with Alzheimer's, dementia, Parkinson's, traumatic brain injury, PTSD, and other neurological disorders. It uses music "to help elders cope with neurodegenerative diseases across the globe." Global online singalong sessions are conducted through Zoom every Monday, Wednesday, and Friday from 11:30 am to 12:30 pm. A board-certified music therapist leads each session in song.

This is a music therapy-based interactive group that incorporates activities that will also be part of our program, such as featuring a soloist/participant or singing along with a leader. In addition to live performances and group participation, we would also explore historical videos of famous performers, occasionally compare performances by different singers, and encourage a more humanities-based approach.

The Sofa Singers – thesofasingers.com

North Wales, UK

Hundreds of people from all over the world join together in real time for 45 minutes of simultaneous singing twice a week on Zoom, led and directed by James Sills. Signup for individual programs, for a small participation fee, is handled through Eventbrite. Sills successfully circumvents the latency problem for simultaneous choral participation by providing an upfront invitation and explanation of his format: "The Sofa Singers encourages you to sing as if no-one is listening because they won't be. Due to latency (delay) it is not possible to synchronize and hear all of the singers at the same time. But you will be able to see each other so sing with a smile. Afterwards, there's a virtual tea break followed by some open floor slots where individuals can say hello and share a song, a poem or a story."

Video clips of Italians singing from their balconies that emerged in the early days of the Covid-19 crisis (see https://www.youtube.com/watch?v=DDRiINXik00) inspired Sills, who had previous involvement with choral groups, to create a website for singers to participate online. Practically overnight, he had it up and running, and its popularity continues.

In contrast to the huge gatherings of The Sofa Singers, our program intends to bring together small numbers of people in an ambiance where they can get to know one another as well as participate. While certain activities will require inactive participants to be muted for better audio reception, most activities will encourage people to take part in the discussion.

Technologies Used

  • Zoom will be used to facilitate the Sounds of Music pilot project. We will need to consider how to navigate potential Zoom-bombing, as well as other technological failures and issues.
  • WordPress via CUNY Academic Commons will be used as a platform for the dissemination of relevant information about the pilot program of Sounds of Music, as well as the framework we create for future projects involving music engagement and enrichment programs for an aging/elderly, homebound population.
  • Assistive technologies, including hardware and software for the visually impaired, magnifiers, screen readers, and text readers, as well as various alternatives for operating equipment manually or by voice. Our team is still researching these technologies and learning how they operate.

Project Management

Felicity Howlett is the Project Director and is responsible for note-taking during meetings. Raquel Neris is the Project Manager and has set up a Trello board for our group where deadlines can be managed and tracked. We will communicate on Slack, with the expectation that each team member checks the platform daily on workdays.

Milestones

February:

  • Create a prototype of a WordPress website platform;
  • Populate the WordPress platform with information about best practices, frameworks, toolkits, research, resources, and more;
  • Explore various opportunities for accessibility on the website;
  • Identify and include links to other, similar projects that are of import;
  • Interview Jeremey Deloitte for information regarding other music enrichment programs.

March:

  • Continue to add to the WordPress website, including a section on addressing issues of latency and troubleshooting other problems that might arise;
  • Produce a manifesto of best practices for music enrichment;
  • Identify and interview potential users;
  • Continue Research about the benefits of music enrichment and enhancement programs;
  • Define a model for the Sounds of Music experience that involves a program itinerary;
  • Solicit feedback about the design, layout, and content from potential users.

April:

  • Launch the public-facing version of the website;
  • Promote the website and pilot program on social media;
  • Launch a single session of the pilot program on Zoom with 8+ participants;
  • Solicit feedback from the participants on how to improve the Sounds of Music program.

May:

  • Create a prototype of the Sounds of Music Workshop (the do-it-yourself guide to creating an interactive music enrichment experience with any given population);
  • Solicit feedback from potential users about how to improve Sounds of Music Workshop;
  • Evolve the website, framework, manifesto, and other elements of the Sounds of Music.

June:

  • Launch the second version of the Sounds of Music project with the Sounds of Music Workshop published on the website.

Sounds of Music

Dhpraxis2, Project Proposal, Felicity Howlett

Overview

The Sounds of Music project is a plan for an interactive music program designed to encourage participation, awaken memories, and foster well-being in a home-bound, fragile elderly population. Its focus is on people who, for one reason or another, have been isolated in their living quarters, unable to participate in activities outside their home. Ease of access is a crucial concern. The program operates from a WordPress website platform embedded with Zoom, a video communications application that enables interactive communication and screen sharing. The challenge of the project is to design a platform where people may enjoy a congenial social experience through singing (or other music making), listening, and discussion. At present, technical limitations in audio transfer prevent successful ensemble participation by people in different locations. Specifically, these are problems of synchronicity due to latency (time lag) issues. Sounds of Music offers a working model that can be put into effect immediately, with the flexibility to incorporate improvements in audio interactivity as they become available. This proposal invites the participation of the digital humanities community as its role expands from the academy into the broader public sphere. It encourages creative input and program design to ensure comfortable access for people who may previously have been excluded from opportunities for social participation.

Problems to Address

On March 11, 2020, when the WHO declared Covid-19 a global pandemic, the abrupt termination of in-person, interpersonal communication upended traditional practices in classrooms, office buildings, service industries, performing arts, social programs, and more. Those suddenly confined to their living quarters found themselves in situations not so far removed from an established group of outliers: the approximately ten percent of the United States population over sixty-five years of age "who are considered housebound and in need of home-based care" (Qiu et al. 2010). Overnight, traditional in-person activities became online encounters, and people began to appreciate what extraordinary opportunities were possible in an online format. What this means for us is that suddenly an overlooked population can participate with the same equipment that everyone else is using. Suddenly, people who cannot leave home do not have to be left out. It is hoped that this project can support such participation through the creative planning and structure of an accessible program that invites participation and encourages discussion, listening, and music making. Creative applications of available material, and perhaps new inventions, will be required to give participants the greatest possible accessibility.

Intended Audience

The intended audience includes people who have missed out on face-to-face encounters because of their inability to leave their living quarters, and the intention of the project is to address that issue by creating an inclusive, participatory, and comfortable forum focusing on music. We may receive assistance in establishing an audience for a pilot project from Concetta Tomaino, a CUNY instructor and executive director of the Institute for Music and Neurologic Function, for whom I have been a volunteer piano player for the past three years in both in-house and online music therapy sessions. While this project is not intended to be a therapy program, she has considerable experience in this area and may be especially helpful when considering how best to accommodate certain handicaps, as well as in our outreach activities.

Contribution to DH and Potential Impact

With "Sounds of Music," students of the digital humanities have an opportunity to "engage the world beyond the academy" (Gold and Klein, 2019) by offering an overlooked and underserved population access to internet communication and participation in a humanities-based online program. The project also inspires innovative thinking and opportunities to propose and invent creative tools to support those who have difficulty gaining access to digital portals without assistance. For participants, a program that shines a light on musical highlights from past years provides a background for the discovery of threads from previous life experiences and the opportunity to tie them into the present fabric of their lives. Enabling a group to share space and time on topics of mutual musical interest can be particularly fulfilling.

Final Product

Through the program format, the goal is to break down distinctions among participants and to create a participatory, informative, and engaging musical program available to anyone who would like to use it, but with a special focus on making it accessible to people unable to attend in-person events. A primary objective is to provide a way for people who are confined to their living quarters to enjoy music, musical participation, and social communication as adults. At the same time, it is hoped that the final product will serve to open gates that sometimes separate the worlds within and without the university. In the future, the flexible design intended for Sounds of Music might be adapted to a variety of projects, for example, programs coordinating with people experiencing a single cognitive or physical condition, or a musical/educational format for children.

Information about the program will be disseminated through appropriate internet links and keywords, and by notifying local information sites, community organizations, and municipal programs that share such information, as well as libraries, medical facilities, and rehab operations. The program may even be of interest to insurance companies and other entities seeking ways to enhance quality of life for their clients.

Initial Feasibility Assessment

A number of successful online music programs exist, although none was found with exactly this focus. Two particularly successful examples are:

Daniel's Music Foundation – https://www.danielsmusic.org/virtualcommunity

New York City, NY  

A thriving music center in New York City offers musical activities and lessons for individuals with handicaps from age three through adult. It shifted to an online presence almost immediately in March 2020, thanks to its excellent music studios. No one anticipated the power and value of its online presence. Enrollment has increased significantly since going virtual.

The Sofa Singers – thesofasingers.com

North Wales, UK

People from all over the world join in real time for 45 minutes of simultaneous singing twice a week on Zoom for a small participation fee. The latency problem is successfully circumvented with an upfront explanation: “The Sofa Singers encourages you to sing as if no-one is listening because they won’t be. Due to latency (delay) it is not possible to synchronise and hear all of the singers at the same time. But you will be able to see each other so sing with a smile.” Afterwards, there’s a break with group discussion.

Our proposal takes advantage of the practical and efficient advice shared by James Sills of The Sofa Singers. Until a better alternative comes along, when participatory singing occurs, singers will hear only the musical output from the moderator and their own voice. At other times, channels will open so that people can enjoy communicating with one another.

Special Applications and Equipment to Enhance Autonomy

Virtual assistants such as Siri and Alexa have transformed the accessibility landscape, and they are getting more capable by the moment. They can turn equipment on and off, set alarms and reminders, and produce programs on demand. Except for the advantages of having a physical presence available, these assistants can perform many tasks that, not so long ago, many people could not manage without assistance. Recently, new devices connected to these virtual assistants have gained the capacity to use video and to keep the viewer positioned properly in the video frame. New advances have been made in earphones and watches, and other, older enhancements such as headphones and audio enhancers should not be overlooked.

Continual attention is required to improvements in these areas, as well as to the synchronicity problem. These areas may inspire digital humanities students to discover ways to improve accessibility and user-friendliness. We do not plan to promise solutions in this area during the life of this project, but we do plan to pay attention to what works and what is needed. Along the way, creative solutions may be discovered.

In its initial form, the program will be free, and the music will be obtained from sources such as YouTube which are openly accessible, so there should not be licensing problems involved.

It is estimated that, during the semester, it would be possible to design a WordPress website embedded with a Zoom meeting link; produce a working description and structural program design detailing focus, agenda, protocol, audio-visual equipment, presentation, and interactive participation possibilities; explore options for a flexible program (defining the intended content and delivery options); and create a pilot model.

References 

Gold, Matthew K., and Lauren F. Klein. 2019. "A DH That Matters." In Debates in the Digital Humanities 2019, edited by Matthew K. Gold and Lauren F. Klein. University of Minnesota Press.

Qiu, Wei Qiao, et al. “Physical and Mental Health of Homebound Elderly: An Overlooked Population.” Journal of the American Geriatrics Society, vol. 58, no. 12, Dec. 2010, pp. 2423-28. PubMed Central. https://doi.org/10.1111/j.1532-5415.2010.03161.x.

Draft Project Proposal – Broken English Project

Broken English: an ethnographic study of the challenges faced by ESL schools in NYC during the 2020 Coronavirus pandemic

The field of Digital Pedagogy offers an excellent opportunity for us to rethink traditional education through digital media. Educommunication, on the other hand, is a field grounded in Latin American studies that analyzes practices related to the creation and strengthening of communicative ecosystems in educational contexts. Combining both lenses, the goal of this capstone project is to develop a critical view of the challenges faced by English as a Second Language (ESL) schools in New York City during the Coronavirus pandemic, engaging students and teachers in rethinking their online, hybrid, and in-person learning experiences. By building a web-based platform that presents ethnographic research, combined with a series of learning resources and other engagement activities for students and teachers, we aim to form a community of learners engaged in educational practices that are more meaningful to their real-life challenges, encouraging expression, access to information, and mutual care.

__

The question/problem it helps answer 

In 2020, when the Coronavirus pandemic forced students and teachers to adapt to online learning, the shift had to happen quickly, even though it was entirely new for most educational contexts. Amid uncertainty, anxiety, and the great effort to overcome the educational crisis, one of the most challenging problems faced by educators was discovering how to engage students in remote classes. In English as a Second Language (ESL) programs in particular, this challenge was even greater, since students often have motivations for attending classes beyond learning English. The desire to immerse oneself in a new culture, to be part of a community of friendship and support, or even to find an affordable way to stay in the USA legally can be as important as (or more important than) developing a new skill.

According to EducationUSA, Intensive English Programs (IEP), one of the most common types of ESL program, offer classes designed for any level of English proficiency. In addition, these programs are significantly more affordable for international students than other educational opportunities, such as graduate and undergraduate programs. Even though they require attendance of at least 20 hours per week, they are one of the most accessible options for foreigners to stay longer in the USA legally, making it quite obvious that their business success is more connected to visa issues than to providing a genuine educational experience.

Given this context, when ESL programs faced the need to shift to online classes, the weaknesses of their educational experience became starkly apparent. Fragilities regarding accessibility (difficulties in having proper devices and access to the internet), teaching preparation, and challenges in giving proper support for health and financial issues required significant resilience and flexibility from teachers, students, and schools’ staff. However, what kind of opportunity emerges from this experience? How can we rethink learning experiences in ESL programs in order to attend to students’ actual needs as foreigners in the USA? In this project, we will address these questions, which remain largely invisible among the many issues explored about the pandemic crisis.

__

The project’s intended audience

The primary audience of the Broken English project is teachers and students of ESL programs located in New York City.

__

Its contribution to DH & potential impact

Why is Broken English a relevant initiative to the Digital Humanities? The answer to this question has different perspectives. First, we should consider the changes that the DH scenario has faced over the years concerning its definition as a field of studies and practices. In the 2012 edition of Debates in the Digital Humanities, Matthew Gold presents this field as a place of discussion between those who relate Digital Humanities to an area that explores new digital tools to aid relatively traditional scholarly projects and those who believe that it is a disruptive political force, with the potential to reshape fundamental aspects of academic practice. In the 2019 edition, the same author demonstrates a need for scholars to engage in the second perspective, presenting DH as a field that must engage ethical, social, and political issues, making academic discussions more collaborative and exciting to the general public. In “A DH That Matters,” Gold and Klein point out that:

“Our work within the digital humanities is enabled by larger social, political, and technological systems. In the present moment, we need work that exposes the impact of our embeddedness in those larger systems, and that brings our technical expertise to bear on the societal problems that those systems sustain.” (Gold and Klein, 2019)

In addition to that, the Digital Humanities has also seen the rise of initiatives related to digital pedagogy, presenting the urge to enlighten blind spots in this arena. According to Curtis Fletcher (2019), the DH has to open its eyes to explore issues related to educational technologies, especially to help develop ecosystems of humanities-tailored tools for teaching. During the Coronavirus pandemic, this topic has become even more relevant since distance learning turned into a trending topic in educational institutions. 

By combining these perspectives, Broken English is aligned with how the Digital Humanities field has positioned itself, and it has an excellent chance of engaging scholars and practitioners in its development. In addition, it also brings a good opportunity to promote dialogue between Digital Humanities and Educommunication, a field of studies and practices founded on the Latin American theoretical currents of liberating pedagogy, popular communication, and cultural studies (Mateus and Quiroz, 2017). As its focus is building open, dialogic, and creative communication ecosystems in educational spaces, breaking down hierarchical distributions of knowledge (Consani, 2010), the educommunicative approach is a good starting point for the work of rethinking ESL communities, as well as for enriching discussions in DH related to digital pedagogy.

To complement these arguments, it’s essential to note that the effects of the Coronavirus pandemic, especially on ESL programs, remain largely unexplored in Digital Humanities projects.

__

Final product

The Broken English project is envisioned as a web-based platform that presents a narrative divided into two parts:

Part 1: Understanding the problem – What are ESL programs, and what were the challenges they faced during the Pandemic through the lenses of students and teachers in New York City?

In the first part of the platform, users will be introduced to English as a Second Language (ESL) programs and will learn about the challenges that their communities (students, teachers, and staff) faced during the Coronavirus pandemic. In order to give users a deep understanding of the problem through different lenses, this part will combine desk research (secondary data from government publications, websites, books, journal articles, etc.) with insights from the ethnographic research conducted for the project. Along with this deep dive into the problem, the first part of the website will also present stories of some research participants, exploring multimodal forms of expression such as photos, videos, audio, and text.

Part 2: ESL reimagined

Based on the insights presented in the first part of the platform, the second part intends to present alternatives for ESL communities to reimagine their relations as a community of learners. Initially, this part will provide educational resources aligned with their needs and contexts. In a second phase, after launching the platform and identifying early adopters, it will evolve into an environment that promotes connection between students and educators through initiatives such as in-person and online events.

Broken English will be developed mainly using ArcGIS StoryMaps, a web-based story-authoring application that enables users to share maps in the context of narrative text and other multimedia content. Because it has a free version available to general users and a user-friendly interface that offers the main features Broken English intends to present, StoryMaps is an affordable option that also enables a fast launch of the platform.

__

Work plan 

Broken English will be developed in two phases:

Phase 1: Pilot research, prototype, and testing

Phase 1 will be conducted by me and a small group of volunteer students from the Graduate Center, over an estimated period of four months. It will start in the form of pilot research, involving interviews with 8 to 12 people, including students and teachers, from two ESL schools based in New York City to which I have easy access: New York Language Center and ZONI.

The result expected in this phase is an interactive prototype of the platform, which will be used for testing sessions with ESL students and teachers. The prototype should present the results of the ethnographic research and a first version of a learning object/experience, and it should use the free version of StoryMaps.

Phase 2: Complete research and platform evolution

Based on the learnings acquired in Phase 1, Phase 2 consists of evolving the platform and communicating its launch to the general public. It will involve conducting more interviews with different ESL schools in New York City and implementing new features. Learning experiences, resources, events, and a collaborative archive of pandemic stories from ESL communities are some of the ideas that can be developed in this phase, depending on which are confirmed as relevant for users in Phase 1.

With an estimated period of four months, Phase 2 should involve investment in a dedicated multidisciplinary team composed of specialists in qualitative research, learning experience design, and ESL programs, as well as access to the full paid version of StoryMaps.

__

Team

For the development of this project, the team is expected to include:

  • one qualitative researcher, who will develop the ethnographic research with the ESL communities;
  • one learning experience designer, who will work on transforming the research insights into educational resources for ESL communities;
  • one English as a Second Language program specialist, who will provide guidance for creating the learning resources.

My role in this team is to be the project director, with responsibilities for all aspects of the project.

__

References

Consani, Marciel. 2010. Como usar o rádio na sala de aula. São Paulo: Contexto.

Fletcher, Curtis. 2019. “Educational Technology and the Humanities: A History of Control.” In Debates in the Digital Humanities 2019, edited by Matthew K. Gold and Lauren F. Klein. University of Minnesota Press. Available at: https://dhdebates.gc.cuny.edu/read/untitled-f2acf72c-a469-49d8-be35-67f9ac1e3a60/section/ed3d53dd-d7aa-4369-a41f-1098a121e41b#node-883c4791b5207ae8c432eafa638169a3138043a3. Accessed December 16, 2021.

Gold, Matthew K. 2012. “The Digital Humanities Moment.” In Debates in the Digital Humanities, edited by Matthew K. Gold. University of Minnesota Press. Available at: <https://dhdebates.gc.cuny.edu/read/untitled-88c11800-9446-469b-a3be-3fdb36bfbd1e/section/fcd2121c-0507-441b-8a01-dc35b8baeec6#intro>. Accessed December 16, 2021.

Gold, Matthew K., and Lauren F. Klein. 2019. “A DH That Matters.” In Debates in the Digital Humanities 2019, edited by Matthew K. Gold and Lauren F. Klein. University of Minnesota Press. Available at: <https://dhdebates.gc.cuny.edu/read/untitled-f2acf72c-a469-49d8-be35-67f9ac1e3a6

Draft Project Proposal – Rethinking What We Teach – Elevating Diverse Voices in the Classroom

Project Overview

Searching for Open Educational Resources (OER) is time-consuming and often frustrating for instructors, particularly when trying to represent diverse voices or content in a Language Other Than English (LOTE). This project will build a digital collection of diverse OER, sourced from the library archive collections held across the 25 CUNY campuses. It will focus on materials from marginalized voices — Women, BIPOC, and LGBTQI+. This collection will function as an OER repository where instructors can easily search and find diverse resources for use in their classrooms.

Keeping the diverse voices framework (Women, BIPOC, and LGBTQI+) in mind, our team will build this project using the following work plan:

  1. Identify a CUNY instructor, department, or institute in a subject area of interest to the team.
  2. Work with this person or group to identify a subject, author, event, etc. that fits within the diverse voices framework.
  3. Locate materials in a CUNY library that match the criteria identified and work with a librarian to see if this material can be used as an OER.
  4. If the material is not in a digital format, work with library staff to get the material digitized. If the material is in a digital format, proceed to the next stage.
  5. Create a digital project on Manifold.
  6. Ingest the digital material into the newly created Manifold project collection.
  7. Promote the collection through social media and in conjunction with the instructor, department, or institute with whom you have worked.

Question/Problem

It is often difficult to find OER that reflect diverse voices, especially materials that are not written in English. Many instructors at CUNY would like to diversify the materials that they use in the classroom and save their students money by creating a Zero Textbook Cost (ZTC) course. The creation of this digital OER collection would bring together, in a single repository, diverse materials sourced from the numerous archives housed across the 25 CUNY campuses, such as The Dominican Studies Institute and The Center for Puerto Rican Studies (CENTRO), to name just a few.

Project Audience

This digital OER collection will be built with the CUNY instructor in mind, but as an OER it will be open for anyone in the world to use.

Contribution to DH & Impact

This collection has the potential to change the canon of authors that is regularly taught by instructors at CUNY and around the world. It will elevate diverse and marginalized voices, bringing them into the classroom, where students have the opportunity to interact with materials by authors who better reflect who they are as individuals.

Final Product

A digital OER collection of materials on CUNY’s Manifold digital publishing platform primarily, but not exclusively, created for teaching.

Feasibility Assessment

The digital OER collection will be created and housed on CUNY’s Manifold instance. There is a possibility that we may need to digitize materials and, therefore, it may be necessary to have access to digital scanning equipment. It is unclear what formats we may encounter when we begin to evaluate these archive materials, so even a rudimentary knowledge of, or desire to learn, Google Docs, HTML, CSS, Microsoft Word, and Markdown will be useful. Consultation with CUNY librarians to determine rights and licensing will be essential in determining what can and cannot be published as an OER in this collection. Online research skills are essential. Outreach is very important to this project, and team members should feel comfortable corresponding with different audiences in person and via email.

Modeling Value in the Anthropocene: Contributions to a metacosmics (Project Proposal)

Abstract

Modeling Value in the Anthropocene is an attempt to accomplish two goals, one more general and one more specific. The first, more general goal is to provide an example of using computational tools in the world of philosophy. Doing so could enhance philosophy by drawing on methods that process quantitative information at a scale and speed no human reader could match. I argue that the particular tool of vector semantics can model large corpora of text in such a way that a new perspective can be created that could guide a subsequent close reading. My hope is that this prior modeling can provide the potential for a close reading that leads to the creation of a concept, which is the aim of philosophy.

The particular question that this project will try to answer is something like: what concept should ground individual and collective value in the world beyond the Anthropocene? After using word2vec to model the conceptual networks of the concepts “profit,” “Anthropocene,” and “Neganthropocene” in the bibliography of philosopher Bernard Stiegler, I will perform calculations on these vectors to unveil a new vector, whose corresponding concept will guide my close reading of one of his texts. This close reading will lend itself to the production of an article or potential book chapter. It will be part of a larger project of modeling the concepts of “entropy” and “negentropy” in the history of Western thought, which will result in a book on what Daniel Ross calls metacosmics.

List of Participants

2-3 participants to be responsible for collaborating on the digitizing of the texts, the operationalizing of the texts, the close reading, and the production of an article/book chapter.

Enhancing the Humanities through Innovation

In proposing a reading of philosophical texts that makes use of algorithms for natural language processing, one would have to consider what, if anything, these computational methods of reading and writing bring to doing the work of philosophy. Natural language processing consists of performing mathematical calculations on language. In text analysis specifically, we use digital text as data that we can then model in different ways by making quantitative calculations that a human mind would never be able to do with such large amounts of text on its own through close reading. There are many arguments for and against this kind of practice, and it is likely that many native to the world of philosophy would be especially skeptical of accepting a computational approach. I believe one way of arguing how text analysis could contribute to philosophical practice is through Michael Witmore’s blog post, “Text: A Massively Addressable Object.” In this post, he situates distant reading as simultaneously continuous with the practice of reading since its genesis and discontinuous with past modalities of reading.

Witmore writes that what separates digital text from older forms of text is that “it is massively addressable at different levels of scale” (Witmore). To understand this requires understanding what he means by “scale.” According to Witmore, reading has always been the practice of abstracting a material text at the ideal level at which one is to read it. For example, one can choose to address a text at the level of the word, the sentence, the paragraph, or the book, just to name a few. These are not material characteristics of the text itself, but rather subjective ways of dividing the text up and projecting them into/out of the text. A reader could just as easily choose to underline all the words of a text beginning with the letter ‘B’ and address it at that scale. How text is addressed is a matter of convention, contingent on the technical and normative limits of the given socio-historical context of the act of reading. The fact that we tend to read text as “books” or even “words” is simply a matter of convention. As Witmore writes, “The idea of a word is itself an artifact of manuscript culture, one that could be perpetuated in print through the affordances of moveable type” (Witmore). What makes digital text different, then, is the scale of addressability, or rather the many different scales at which text can now be addressed. Instead of having to read one word at a time or one sentence at a time, we can query and compare many words from many different documents at once. A popular form of this found in the digital humanities is topic modeling. Topic models are generated by running an unsupervised machine learning algorithm on a group of documents and approximating which words tend to be used together in the same document. This allows us to address the texts at the level of “topic,” creating ideal topics we can say that the texts appear to be drawing from (ideal in the sense of non-real; in the realm of the potential or virtual, discoverable in the imagination, as distinguished from the actual).
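To make the mechanics of this level of address concrete, here is a minimal sketch, assuming gensim’s LDA implementation and a hypothetical folder of plain-text documents; it illustrates the technique rather than any part of the proposed work plan.

```python
# Minimal topic-modeling sketch using gensim's LDA implementation.
# The file names are hypothetical placeholders for a corpus of digitized documents.
from gensim import corpora, models
from gensim.utils import simple_preprocess

paths = ["doc1.txt", "doc2.txt", "doc3.txt"]  # hypothetical corpus
docs = [simple_preprocess(open(p, encoding="utf-8").read()) for p in paths]

dictionary = corpora.Dictionary(docs)               # word <-> integer id mapping
corpus = [dictionary.doc2bow(doc) for doc in docs]  # bag-of-words per document

# Approximate five "topics": groups of words that tend to co-occur in documents
lda = models.LdaModel(corpus, num_topics=5, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics(num_words=8):
    print(topic_id, words)
```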

The form of computational address I propose using in a philosophical context is called vector semantics analysis. It is a technique of computational linguistics wherein we run an algorithm that groups together words that bear a semantic similarity to one another. This similarity is represented using a particular kind of vector analysis called word embeddings. Word embeddings assign each word in the text(s) a vector of numerical values based on its distributional relations to every other word in the text. We can then map these words graphically, representing concepts as networks of words used “synonymously” in the text. This is useful for conceptual analysis because words with similar vectors will appear clustered together within the vector space, indicating that they are distributed throughout the texts in a similar way. The linguistic theory operating behind this model is that words deployed in similar contexts have some conceptual resonance: “The latent idea here is that different words will tend to appear in different contexts, and therefore one can guess at a word’s meaning by simply counting the words that appear near it” (Gavin et al.). Bringing it back to the language of Witmore, vector semantics is a way of addressing large amounts of text at the level of semantic similarity. It is an addressing of the text at the level of the word, but at a level where each word exists in relation to every other word, becoming more discrete as we narrow our investigation. We can thus say this method allows us to address text at the level of the concept. This level of address is obviously not new in philosophy. Finding continuity at the level of conceptuality is the modus operandi of philosophy, and we have been used to addressing concepts at the scale of the entire history of philosophy since at least Hegel. What is new here is the way we can address this history and its concepts.
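As a concrete illustration, the sketch below trains a word2vec model and asks which words sit closest, by cosine similarity, to “entropy” in the resulting vector space. It assumes a recent version of the gensim library and a hypothetical plain-text file; the parameter values are placeholders, not the project’s settled pipeline.

```python
# Minimal word-embedding sketch using gensim's word2vec implementation.
# "stiegler_sample.txt" is a hypothetical plain-text file; each line is treated
# as one training "sentence" (context window).
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

with open("stiegler_sample.txt", encoding="utf-8") as f:
    sentences = [simple_preprocess(line) for line in f if line.strip()]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)

# Words whose embeddings have the highest cosine similarity to "entropy",
# i.e. words distributed through the text in the most similar way.
print(model.wv.most_similar("entropy", topn=10))
```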

The vector semantic analysis I am proposing is that of the concepts of entropy and negentropy in the history of Western thought. Bernard Stiegler teaches us that “the relation entropy/negentropy is really the question of life par excellence” (Stiegler, 39). Entropy is a term taken from thermodynamics denoting the tendency of any closed system to tend toward a state of disorder. Physicists of the 19th century came to apply this to the entire universe conceived of as a closed system, effectively reversing the Western conception of the cosmos, which since Socrates had been understood as equilibrium. The concept of negentropy was introduced by Erwin Schrödinger in his lectures published as What Is Life? to give an account of life understood as the temporary struggle against entropy, organic matter being organized matter that wards off the dissipation of energy. The physicist Alfred Lotka takes a step beyond Schrödinger to argue that humans are a particular kind of negentropy, temporarily suspending entropy by means other than just biological organs generated by natural selection. In other words, humans produce negentropy through artificial means of various kinds: tools, writing, machines, etc. According to Stiegler, the implications of these breakthroughs in scientific thought have yet to have been thought through by philosophy. They must be thought through in the context of the Anthropocene, which he claims is an epoch of history marked by extreme accelerations of entropy through industrialization, accelerations that threaten the survival of the human race. The Anthropocene is a proposed geological epoch in which human beings become a geophysical force affecting the biosphere. However, for Stiegler, it is important to recognize that this epoch is firstly a technological, economic, and political epoch. Thinking through the concepts of entropy and negentropy in this context is the work that he called for, and it is the work he attempted until his death in August 2020.

The long-term project I would like to embark on is an investigation of these concepts (in their inclusion as well as in their absence) over a long period of time, covering the histories of physics, biology, philosophy, economics, and information science. Using vector semantics, I wish to find out what other words bear a semantic resemblance to the words “entropy” and “negentropy.” The goal of this project would be a contribution to what Daniel Ross calls “metacosmics,” a destruction/continuation of metaphysics centered on a general theory of entropy/negentropy. This project would be oriented toward understanding how these concepts change over time, as well as seeing what resonance certain thinkers (like Nietzsche) have with these concepts, despite not using their names. This would complexify and intensify our conceptions of entropy and negentropy. The scope of this project would start much smaller, though, beginning with just modeling these concepts in Stiegler’s work alone and branching out to other works and disciplines in future projects. I wish to perform a conceptual analysis of his work that I will describe in the following section, an analysis that would be part of a close reading, a reading that will in turn be a small part of this larger project.

Environmental Scan

Much of the work of distant reading in the digital humanities utilizes methods of topic modeling. Something like topic modeling can be incredibly useful for philosophy, especially the history of philosophy. It can address large amounts of text at a scale that would allow us to understand what topics are characteristic of a particular school of thought (like German idealism) or a particular epoch of history (like the Enlightenment). Malaterre et al. run an unsupervised machine learning algorithm on the complete corpus of the journal Philosophy of Science, mining articles from 1934 to 2015. One result of this was the revelation that the topics consistent throughout the whole period are “metaphysical and ontological questioning” about “space and time, causation, natural kinds, [and] realism” (Malaterre et al., 215). Another was the discovery of how the topics of logic and language rose and fell over time.

Despite topic modeling’s potential fruitfulness, I wish to show why vector semantics is a better method specifically for doing the work of philosophy. There are two reasons for this. The first is that it allows us to model semantic similarity among different words. As Gavin et al. argue,

concepts are not words. The distinction can be glimpsed by considering any collection of synonyms, such as rubbish, trash, waste, junk. Each of these words has a distinct use, and they imply different connotations and meanings, but there remains a principle of synonymy among them, an underlying concept that ties them together (Gavin et al.)

Topic models show us what words tend to be used together. Word embeddings show us which words tend to be used with the same words. Topic modeling could perhaps guide a close reading that explores a topic more deeply, or the role a key word plays within a topic, but it cannot reach the semantic depth we might want in the work of philosophy.

The other reason why I prefer vector semantics for philosophy is that philosophy is not just the modeling of concepts. As Deleuze and Guattari write, “The philosopher is the concept’s friend; he is the potentiality of the concept…philosophy is the discipline that involves creating concepts” (Deleuze et al., 5). The word-embedding operations already described are useful for clarifying and perhaps deepening concepts, and could possibly lend themselves to some conceptual reevaluation. However, there is another operation made possible by word embeddings that holds much more potential for philosophy. Because the words are embedded with numerical values, once the conceptual networks have been modeled in vector space, we can perform calculations on the vectors that produce new vectors, and these new vectors can open a path to the creation of concepts, or what Stiegler calls “categorial invention,” which is the goal of philosophy, and perhaps of all theoretical work. A well-cited example is that of V(Queen) = V(King) + V(Woman) – V(Man). Taking the vector for “king,” adding the vector for “woman” to it, and subtracting the vector for “man,” the model has been shown to output a vector very close to that for “queen.” What this means conceptually is that if we add the qualities of women to the concept of kings, we get some concept which would have all the qualities of women and all the qualities of kings. If we then subtract from this new vector everything associated with men, we get the concept of queen. This is a simple example, but this functionality can prove exceptionally useful for philosophy.

One of the principal things Stiegler calls for is a rethinking of value to escape the Anthropocene, inaugurating what he calls the epoch of the Neganthropocene. One chief problem with capitalism, he claims, is that, under the conditions it initiates, all use value is reduced to exchange value. The usefulness of a thing is reified into how much it costs, or how much money it could make. This reduces everything to the rules of the market. The progression of this dynamic is how things like the law or works of art have been devalued, not to mention the health of the biosphere and the future itself. Thus, the Neganthropocene, which would be the epoch following the Anthropocene (if there is to be one), would have to be generated on the basis of a new valuation. The question, then, is: if the value of everything is no longer to be based on profit, what is to be the new value founding this society? We can contribute to thinking through this question by treating Stiegler’s works with vector semantics. I propose querying a sample equation that looks something like V(profit) – V(Anthropocene) + V(Neganthropocene). This would take the concept of profit, which grounds value in the current stage of capitalism, subtract what is characteristic of the Anthropocene, and add the vector representing what Stiegler writes about the Neganthropocene. This analogic calculation might point us toward a cluster of related words, all having to do with how we should re-ground value beyond the Anthropocene. I will run word2vec, a vector semantic algorithm, on two of Stiegler’s texts: Nanjing Lectures 2016-2019, where he lays out his theories of entropy, negentropy, Anthropocene, and Neganthropocene most systematically, and Technics and Time, Vol. 1: The Fault of Epimetheus, his first text, written before he began to speak of these concepts at all, but where he laid the grounds for his philosophical work to come.
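A minimal sketch of what that query might look like in code follows, again assuming gensim’s word2vec implementation and hypothetical plain-text versions of the two texts; the preprocessing lowercases the vocabulary, so the concept names appear in lowercase, and the parameters are illustrative only.

```python
# Sketch of the proposed analogy-style query V(profit) - V(Anthropocene) + V(Neganthropocene),
# assuming a word2vec model trained on plain-text versions of the two Stiegler texts.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

texts = ["nanjing_lectures.txt", "technics_and_time_1.txt"]  # hypothetical file names
sentences = []
for path in texts:
    with open(path, encoding="utf-8") as f:
        sentences.extend(simple_preprocess(line) for line in f if line.strip())

model = Word2Vec(sentences, vector_size=200, window=8, min_count=3, workers=4)

# Start from "profit", subtract what characterizes the Anthropocene,
# add what characterizes the Neganthropocene, and list the nearest vectors.
for word, similarity in model.wv.most_similar(
    positive=["profit", "neganthropocene"],
    negative=["anthropocene"],
    topn=15,
):
    print(f"{word}\t{similarity:.3f}")
```

Whatever list of candidate words such a query returns would not be a result in itself; as argued below, it would only mark out a path for the close reading to follow.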

It should be made very clear that this type of calculation is not a magic wand that can reveal new concepts on its own. Witmore’s account of distant reading focuses on the scale of the address, but it does not take into full account the shape or contours of the address itself. I would argue that there are two main modes with which one can address text: analytic and synthetic. These neo-Kantian faculties that Stiegler articulates are two forces that make up the dialectic of knowledge production. The full explication of these arguments is beyond the scope of this proposal, but they show that calculating text (or any data) requires the synthetic work of the imagination to think according to standards of reason, and more importantly to dream up new concepts that do not fit into the analytic schema of the understanding. Information or data is the externalization of a prior synthetic act of reason that becomes calculable once it is materialized. This act decomposes the line of reasoning into discrete elements that can thus be quantified and calculated. The act is entropic in and of itself, but it can produce new knowledge, new concepts, if it leads to a surprise that causes one to run it through the filter of reason and create a new idea which reorganizes the analytical understanding as it now stands. In other words, by modeling text, one divides it up into an enormous number of pieces (in this case, vectors) on which one can perform calculations. On their own, these models and calculations are useless. However, an act like querying Stiegler’s texts for the answer to V(profit) – V(Anthropocene) + V(Neganthropocene) could open up a path one could wander down. By wandering down this path, which would include careful thought, reasoning, and close reading, one could experience a surprise in the text. This surprise could cause one to rethink the text being read closely in a new way, and potentially lead to the production of a concept. There is of course no way to guarantee this, but it is only by seeking out that which is incalculable that philosophy can be done. Perhaps vector semantics could be a kind of calculation that leads the way toward thinking about value anew and how a new society can be built upon this new concept of value. This could then guide a close reading of some of Stiegler’s texts that could potentially concretize this new, currently unknown, concept.

Work Plan

Digitizing
The work of making the texts digital will take place over the course of a week. Both texts are available online in PDF format. The week will be spent turning them into plain text files manually by typing them into a plain text editor.

Operationalization
The participant will spend a few weeks learning how to use word2vec, a popular algorithm for performing vector semantics. Once familiar with the tool, they will train the algorithm on the two texts in question and compare the resulting vectors by cosine similarity. These vectors will then be operationalized to determine the new vector-concept. This step will remain open to the possibility of failure and to alternative lines of questioning opened up by playing around with the tool.

Close reading/writing
Utilizing the fruits of the vector analysis, the participant will then perform a close reading of the texts at hand, guided by the vector produced by the algorithm. This will result in an article on the subject of value in the Neganthropocene.

Final Product and Dissemination

As stated, this project is to form a small part of a larger project about entropy and negentropy in the history of Western thought more generally. This particular project will lend itself to a shorter piece of writing specifically about the question of value in the Neganthropocene. It will initially be published online as a blog post. It will not only be provisionally about the conceptual framework needed to reevaluate value, but it will also lay the groundwork for the larger project on metacosmics. Thus, this close reading and writing will also involve forming the questions I would like to pose in future work, as well as identifying the kinds of texts that may need to be addressed.

Works Cited

Deleuze, Gilles, and Félix Guattari. What Is Philosophy? Translated by Hugh Tomlinson and Graham Burchell, Verso, 2015.

Gavin, Michael, et al. “Spaces of Meaning: Conceptual History, Vector Semantics, and Close Reading.” Debates in the Digital Humanities 2019, edited by Matthew K. Gold and Lauren F. Klein, University of Minnesota Press, Minneapolis, MN, 2019.

Malaterre, Christophe, et al. “What Is This Thing Called Philosophy of Science? A Computational Topic-Modeling Perspective, 1934–2015.” HOPOS: The Journal of the International Society for the History of Philosophy of Science, vol. 9, no. 2, 2019, pp. 215–249., https://doi.org/10.1086/704372.

Stiegler, Bernard. The Neganthropocene. Translated by Daniel Ross, 1st ed., Open Humanities Press, 2018.

Witmore, Michael. “Text: A Massively Addressable Object.” Debates in the Digital Humanities, The University of Minnesota Press, Minneapolis, MN, 2012, https://dhdebates.gc.cuny.edu/read/untitled-88c11800-9446-469b-a3be-3fdb36bfbd1e/section/402e7e9a-359b-4b11-8386-a1b48e40425a#p4b3. Accessed 28 Oct. 2021.

Draft Project Proposal – Mapping the surplus food of Community Gardens

  • A brief overview of your project
    • New York City is often known as a “concrete jungle.” While we envision a grey and black landscape, not many pay attention to the actual green foliage that quietly engulfs the city. There are many community gardens that one can quickly step into to find space or organically grown food. However, since these gardens are available to the public, many people often overstep their boundaries and disrupt a delicate ecosystem by taking food not meant for everyone. If only there were a way to effectively inform the local community about what food is available to the public.
  • The question/problem it helps answer
    • The project aims to collect the current food inventory of these gardens. The data would be collected in real time over two months and would also draw on any past inventory that these community gardens may have already collected. The focus area would be a couple of community gardens in Manhattan, and we hope to scale to all five boroughs if possible. The map can showcase real-time inventory of available food and can also point to locations where any surplus food was donated to a local food bank. This would be a community awareness-building project that could lead to a reduction of food theft and an opportunity for local residents to increase interaction with a community space.
  • The project’s intended audience
    • This project will be of use to community garden members and to any local resident in search of locally grown organic food.
  • Its contribution to DH & potential impact
    • Any aspiring data humanist who would like to expand their skills in crowdsourcing and data collection in NYC, as well as anyone who wants to create similar food bank projects using free tools on the internet. Local residents can use this free online tool to find food. The project also builds awareness of local food insecurity, even in the global north.
    • We can compare this to a similar, well-funded project already underway. Food growers in Europe are using crowdsourced community data programs. The organization, called GrowObservatory, collects data to improve farmers’ growing techniques in Europe. Their website is located here: https://growobservatory.org/about/#grow-open-data-challenge-as-part-of-data-pitch–2018-19. One of the programs they use asks the community to input soil readings from soil sensors, and they use Microsoft Power BI to collect the data and create visualizations.
  • Final product (even if tentative)
    • This project aims to create a mapping project using two data collection tools, Google Maps and Tableau Online. Tableau Online is a free analytics platform fully hosted in the cloud; it will be used to collect and organize the raw data. After analyzing the collected data, the project will use Google Maps, a web-based geospatial service that provides detailed information about geographical regions and sites around the world, to visually present the collected data and make it available to the general public (a rough sketch of one possible hand-off between the two tools appears at the end of this outline).
  • An initial feasibility assessment (even if tentative).
    • The data can be collected over the duration of a college semester or a longer period of time. Data collection may be limited to a handful of community gardens, likely no more than five in Manhattan in the time allotted. A website could be set up, and social media posts could be distributed to the general public.
  • What digital tools/methods/team member skillsets will be needed to realize the project? 
    • Project Lead: Someone with project management skills, or someone who wants to take on leading a project
    • Data Entry(s): People with data-entry skills, comfortable with monotonous work
    • Mapping coordinator: Someone who can navigate Google Maps and has geospatial experience
    • Data Collection Specialist(s): People who can reach out, collect, organize and possibly analyze data
    • Copy Editor(s): People who can review, vet, and organize any information found, and who have social media skills.
  • Can you foresee any legal or technical barriers to the project’s viability? 
    • Plenty:
    1. Communicating with community leaders at the gardens. They may not respond, or they may not want to divulge information that can be made public.
    2. The data we collect might not be useful enough to create a sustainable project, and the project might not scale into a bigger project for future use.
    3. Time constraints might be too much. There might not be enough “live” data to be useful. 
    4. The data could be used with malicious intent; it could unfortunately become a policing tool showing where food is available and serve as an indicator of where people gather.
    5. Unforeseen events: weather, Covid restrictions, online tools becoming unavailable.
  • How will you tackle those potential pitfalls if your project is selected?
    1. The team would have to find creative ways to collect data otherwise, via already established information on the web or by contacting people who do similar projects.
    2. The team would have to shift the project’s end goal and target a new, feasible goal in which whatever data has been collected can be used.
    3. Reduce and shift the number of gardens being contacted. Possibly restrict the type of data being collected.
    4. The team would need to have the foresight and have conversations among themselves on how to lead this project in the most ethical way possible. 
    5. The team would need to react accordingly; quick thinking would be needed.
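As mentioned in the final product description above, one possible hand-off between the collected inventory and the map layer is sketched below: a short Python script that writes collected rows to a CSV file that Google My Maps can import as a layer (and that Tableau can also open). The column names, file names, and sample records are hypothetical illustrations, not a settled data model.

```python
# Hypothetical sketch: turn collected garden-inventory records into a CSV
# that can be imported into Google My Maps (or opened in Tableau) as a layer.
import csv

# Sample records as they might be collected from participating gardens;
# real data would come from the data-entry and data-collection team members.
inventory = [
    {"garden": "Sample Garden A", "latitude": 40.7812, "longitude": -73.9665,
     "available_food": "tomatoes, basil", "surplus_donated_to": "local food bank"},
    {"garden": "Sample Garden B", "latitude": 40.8000, "longitude": -73.9500,
     "available_food": "none this week", "surplus_donated_to": ""},
]

with open("garden_inventory.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f, fieldnames=["garden", "latitude", "longitude", "available_food", "surplus_donated_to"]
    )
    writer.writeheader()
    writer.writerows(inventory)

print("Wrote garden_inventory.csv; import it as a layer in Google My Maps.")
```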

Project Proposal for Mainframes: Present-Past

Overview

Mainframes: Present-Past is a web journal collecting scholarship for the humanistic study of mainframe computing. The project hopes to attract a multidisciplinary effort to recover the experience and heritage of computing on mainframe systems in the era before the personal computer (1950s – 1970s). Contributions can vary in focus: archived visual material, economic/social history, general education, business and critical infrastructure studies, interactive emulation of mainframe software, and the cultural poetics of this brand of computing will be welcomed into the project.

Problem addressed

There is a dearth of (digital) humanities scholarship about mainframe computers. Antecedents can be traced in works from various “media archaeologists”: Tung-Hui Hu’s A Prehistory of the Cloud (cultural poetics of cloud computing), Kirschenbaum’s Track Changes: A Literary History of Word Processing (word processing as software), Liu’s The Laws of Cool (knowledge work), Lisa Gitelman’s “Raw Data” Is an Oxymoron (various histories and pre-histories of data), and Friedrich Kittler (general theory), among a few others. But mainframe computing is an implementation detail in larger arguments, often glossed over. So much of our computing inherits key concepts from the features and limitations of the age of Big Iron. Surprisingly, critical infrastructure studies practically skips mainframes as a topic, despite continued reliance on mainframes for batch transaction processing (financial transactions), more recently highlighted as a problem area of critical infrastructure when states struggled to issue checks during the pandemic as a result of the COBOL programmer shortage.

Project audience

Scholars and students interested in imaginative explorations of what it was like interacting with and encountering mainframe interfaces in person, by proxy (timesharing) and as a cultural phenomenon in popular media and the workplace. I believe there’s a space for educational material for non-technical audiences in the broader public depending on contributor interest.

Contribution and impact

The study of mainframe computing is not only a historical exercise in preservation. Because of the foreignness of mainframe computing to contemporary experience, it becomes a prism for exploring larger topics related to information technology in society. One could study gender in computing, as American women in programming during this era outnumbered their descendants, being seen as “more meticulous” than their male counterparts until the formalization of computer science within university engineering departments. Mainframe computers still play an integral role in critical infrastructure: there is a reason why there is an afternoon cut-off for when check deposits become available in checking accounts, and state payment processing still flows through these systems. Issues of the technical knowledge and complexity required to operate mainframes relate to the abstraction of those technical details, touching on workplace casualization. One could imagine an analysis of the effect of line editors on word processing, and of the style of business writing exemplified in office memos. The subject of mainframe computing lends itself to flavors of humanist study too numerous to mention here.

Final product

A web archive/journal about mainframe computing. This site could include writing, collections of visual material and ephemera (manuals, advertising, etc.), and (for the adventurous contributor) interactive emulations of mainframe computing in the browser.

Feasibility assessment

Tool selection

Depending on the contribution and research done, we’ll use either Ed or Wax (thanks, minimal computing working group), both of which are based on Jekyll, the static site generator. This choice has a number of benefits: it reduces the cost of hosting/deployment (no financial cost if hosted on GitHub or GitLab) and maintenance (no backend), and it is eminently secure (again, no backend systems or database). Jekyll is also relatively simple in design and easy to reason about without too much technical overhead. Use of these tools also allows for preservation in the face of changing web technology, as text is rendered from markdown. The design of Ed and Wax also accounts for low-bandwidth scenarios, a boon for accessibility.

Team composition

In my original proposal, I listed the following roles:

  • One to two researchers
  • One information architect
  • One front end developer (HTML, CSS, Javascript)

The combination of the platform on which this project will be built and the subject matter makes team composition flexible, though, changing shape based on contributor interest and skill set. I’m confident we’d produce interesting material regardless of team composition. If people are interested in gaining experience with some foundational developer skills like git/GitHub, terminal usage, or HTML/CSS/Javascript, this is a project where you could likely learn them. But they aren’t required.

Barriers and challenges

I believe the biggest challenges are related to original research and to tracking down archival material on specific subjects, though I’d imagine university archives/websites and institutions like The Computer History Museum may help mitigate the risk of finding nothing worth writing about.