The original data source for the text is a searchable PDF file, publicly and freely available via Open Humanities Press's website. We will also adapt Python scripts for running word embedding models, likewise available online for free. Lastly, we will generate notes to be stored in Jupyter Notebook files. The text and Python data will be regenerable so long as the sources remain public and free. The PDF file will be converted to plain text. The code will initially be stored as .py files. Any visualizations will be image files (format TBD). The finished product (including code, text, and images) will be stored as a Jupyter Notebook (.ipynb) file, which will also be saved as a PDF for sustainability. The tools we are utilizing are: Calibre for file conversion from PDF to .txt; Python and word2vec for text operationalization; GitHub for provisional code storage; and Jupyter Notebook for project creation and storage. GitHub will be used to store, edit, and view code as we find, create, and alter it in the process of making. The final product will also be stored on GitHub. We will back up files on our personal iCloud accounts. File names will consist of the name of the code's author and the version number. The different segments of code will be stored in the same directory on GitHub.
Data collection procedures are documented in a shared workspace, accessible as a collaborative .pages file. We are ensuring good project and data documentation via weekly in-person check-in meetings in addition to several virtual meetings per week. Both participants are responsible for implementing this data management plan. The project will follow the open-access and open-source practices of the Digital Humanities: the finished project, consisting of code and commentary on that code, will be publicly available to all.
Our data is not sensitive. There is no embargo period for our data. The text is licensed under a Creative Commons Attribution 4.0 International License, which allows us to share and adapt the text, so long as we give appropriate credit, provide a link to the license, and indicate that changes were made.
Re-use and re-distribution of data
The sharing of data from this project is not subject to any external requirements. Our intended audience is members of the philosophical community interested in appropriating computational methods for theorizing. We will publish the data on our public website in May 2022. Jupyter will be necessary to access the published data.
Long Term Archiving and Preservation
Data will be stored for 5–10 years unless a more permanent means of storage becomes available in the meantime. Data should be archived for Stiegler scholars and other researchers interested in the Neganthropocene. Jupyter notebook(s) will be stored as PDFs for sustainability. Data will be submitted to CUNY Academic Works for long-term maintenance. A subject-based archive may also be used if an appropriate one is identified.